In these notes, we describe the mathematical background for the frequency-domain eigensolver algorithms in Meep (used by the
solve_eigfreq function), which build on the frequency-domain solver algorithm described in the Meep paper. See also these notes for more background on the notation for Maxwell's equations used below.
Maxwell Equations in the Frequency Domain
The Meep frequency-domain solver (at a frequency $\omega$) is essentially solving the linear system of equations $\hat{A}(\omega)\,\psi = \xi(\omega)$, where:

$$\hat{A}(\omega) = \hat{C} + i\omega\chi(\omega), \qquad \psi = \begin{pmatrix} \mathbf{E} \\ \mathbf{H} \end{pmatrix}, \qquad \xi = \begin{pmatrix} \mathbf{J} \\ \mathbf{K} \end{pmatrix}, \qquad \hat{C} = \begin{pmatrix} 0 & \nabla\times \\ -\nabla\times & 0 \end{pmatrix} .$$

That is, $\psi$ is the six-component field state, $\chi(\omega)$ is the susceptibility at $\omega$, $\xi$ is the six-component current, and $\hat{C}$ is the "curl" operator. (For these notes, "natural" units are used in which $\varepsilon_0 = \mu_0 = c = 1$, and fields have the time dependence $e^{-i\omega t}$.) Furthermore, in most physical circumstances the matrix $\chi$ block-diagonalizes, and is related to the electric permittivity $\varepsilon$ and the magnetic permeability $\mu$:

$$\chi(\omega) = \begin{pmatrix} \varepsilon(\omega) & 0 \\ 0 & \mu(\omega) \end{pmatrix} .$$

Equivalently, we can write things in terms of the $\mathbf{E}$ and $\mathbf{H}$ fields:

$$\nabla\times\mathbf{H} + i\omega\varepsilon\mathbf{E} = \mathbf{J}, \qquad -\nabla\times\mathbf{E} + i\omega\mu\mathbf{H} = \mathbf{K} .$$

In fact, this equation is precisely the linear system of equations that Meep is actually solving internally.
The eigenvalue problem is simply $\hat{A}(\omega)\,\psi = 0$, i.e. to find a (complex resonant) frequency $\omega$ for which $\hat{A}(\omega)$ is singular, and a corresponding eigenvector $\psi$ in the nullspace of $\hat{A}(\omega)$. If we have non-dispersive materials, those where $\chi$ is independent of $\omega$, then this is a linear generalized eigenproblem

$$\hat{C}\psi = -i\omega\chi\psi ,$$

or equivalently a linear eigenproblem

$$\chi^{-1}\hat{C}\psi = -i\omega\psi .$$

For lossless transparent materials, i.e. real $\varepsilon > 0$ and $\mu > 0$, these problems are Hermitian under an inner product weighted by $\varepsilon$ (for $\mathbf{E}$) or $\mu$ (for $\mathbf{H}$), which leads to real $\omega$. More generally, for dispersive ($\omega$-dependent) materials, $\hat{A}(\omega)\psi = 0$ is a "nonlinear eigenvalue problem."
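The lossless claim can be checked on a toy model: any real antisymmetric matrix plays the role of the anti-Hermitian curl operator $\hat{C}$, and a positive diagonal matrix plays the role of a lossless $\chi$. This is an illustrative NumPy sketch under those assumptions, not Meep's actual discretization; all names and numbers are made up:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
C = A - A.T                              # stand-in "curl": real antisymmetric
chi = np.diag(rng.uniform(1.0, 3.0, n))  # lossless material: real positive chi

# Generalized eigenproblem  C psi = lam * chi psi  with  lam = -i*omega.
lam, psi = eig(C, chi)
omega = 1j * lam
# Lossless (real chi, anti-Hermitian C) implies real eigenfrequencies.
assert np.allclose(omega.imag, 0.0, atol=1e-10)
```

All eigenfrequencies come out real (up to roundoff), mirroring the lossless Maxwell case.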
Iterative Eigenvalue Algorithms
There are many algorithms for linear and nonlinear eigenvalue problems, but let us focus on the case where we have a good initial guess $\omega_0$ for the desired eigenvalue $\omega$. That is, suppose we want the eigenvalue closest to $\omega_0$, and that there is a single eigenvalue much closer to $\omega_0$ than any other eigenvalue. (In a time-domain solver like Meep, we get good estimates for many eigenvalues simultaneously by signal-processing analyses of the response to a short pulse input.)
In fact, suppose that $|\omega - \omega_0|$ is so small that we can approximate $\chi(\omega) \approx \chi(\omega_0)$, allowing us to neglect material dispersion when computing $\omega$. In this case, we can use the standard shift-and-invert power method: repeatedly solve

$$\hat{A}(\omega_0)\,\psi_{n+1} = \chi(\omega_0)\,\psi_n$$

starting with some arbitrary $\psi_0$ (e.g. random or a point source). This is, in fact, just a Maxwell solve, where the "current source" is $\xi = \chi(\omega_0)\psi_n$, i.e. the $\varepsilon\mathbf{E}$ and $\mu\mathbf{H}$ fields of the previous solve. The overall scale factor is essentially irrelevant, since we can scale eigenfunctions arbitrarily, and in fact one ordinarily wants to renormalize $\psi_n$ on each power iteration to prevent the iterates from blowing up (or decaying to zero).

Equivalently, we are solving the shifted eigenproblem

$$\hat{A}(\omega_0)\,\psi = \left[\hat{C} + i\omega_0\chi\right]\psi = i(\omega_0 - \omega)\,\chi\psi ,$$

whose eigenvalue is $i(\omega_0 - \omega)$ instead of $\omega$, so that the inverse iteration above converges to the eigenvalue closest to the shift $\omega_0$.
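The iteration can be sketched on a small toy problem (illustrative NumPy code, not Meep's solver; all matrices are made up). Here the toy "curl" is $C = S K S$ with $S = \chi^{1/2}$, which keeps $C$ antisymmetric while making $\chi$ a nontrivial "material"; by construction the exact eigenfrequencies are $\pm 1, \pm 2, \pm 3$, so a guess $\omega_0 = 2.3$ should converge to $\omega = 2$:

```python
import numpy as np
from scipy.linalg import block_diag

# Toy lossless problem with known eigenfrequencies omega = ±1, ±2, ±3.
K = block_diag(*[np.array([[0.0, w], [-w, 0.0]]) for w in (1.0, 2.0, 3.0)])
chi = np.diag([1.3, 2.1, 1.7, 2.9, 1.1, 2.4])
S = np.sqrt(chi)
C = S @ K @ S                      # antisymmetric, since S is diagonal

omega0 = 2.3                       # guess: the closest true eigenvalue is 2
M = C + 1j * omega0 * chi          # A(omega0) = C + i*omega0*chi

rng = np.random.default_rng(0)
psi = rng.standard_normal(6) + 0j
for _ in range(60):
    # One "Maxwell solve" per iteration: A(omega0) psi_new = chi psi_old,
    # then renormalize so the iterates neither blow up nor decay.
    psi = np.linalg.solve(M, chi @ psi)
    psi /= np.linalg.norm(psi)

# Rayleigh quotient for C psi = -i*omega*chi psi:
#   omega ≈ i <psi, C psi> / <psi, chi psi>
omega = 1j * (psi.conj() @ C @ psi) / (psi.conj() @ chi @ psi)
assert abs(omega - 2.0) < 1e-8
```

The convergence ratio per iteration is $|\omega_0 - 2|/|\omega_0 - 3| = 3/7$ here, so 60 iterations reach machine precision.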
Estimating the Eigenvalue
Given an estimated eigenvector $\psi$, the typical way to estimate the corresponding eigenvalue is to compute a Rayleigh quotient

$$\omega \approx i\,\frac{\langle \psi, \chi^{-1}\hat{C}\,\psi\rangle}{\langle \psi, \psi\rangle}$$

using some inner product $\langle\cdot,\cdot\rangle$. For a general non-normal operator $\chi^{-1}\hat{C}$, where we have arbitrary complex eigenvalues, it doesn't matter too much which inner product we choose, e.g. the obvious inner product $\langle u, v\rangle = \int \bar{u}\cdot v$ is fine.
Estimating the Eigenvalue
In the case of lossless media (Hermitian positive-definite $\chi$) with real $\omega$, the accuracy of $\omega$ can be improved by using the inner product $\langle u, v\rangle_\chi = \langle u, \chi v\rangle$, which is a valid inner product because $\chi$ is Hermitian, i.e. $\chi^\dagger = \chi$, which implies that

$$\langle\psi,\psi\rangle_\chi = \int \bar{\mathbf{E}}\cdot\varepsilon\mathbf{E} + \bar{\mathbf{H}}\cdot\mu\mathbf{H} ,$$

which corresponds physically to electromagnetic energy. The Rayleigh quotient then becomes

$$\omega \approx i\,\frac{\langle\psi, \hat{C}\psi\rangle}{\langle\psi, \chi\psi\rangle} .$$

Doing this essentially squares the error (i.e. it doubles the number of digits in the eigenvalue estimate) because eigenvalues of Hermitian operators are extrema of the Rayleigh quotient. Unfortunately, if the medium is not lossless, you can run into problems because $\langle\cdot,\cdot\rangle_\chi$ is then not a true inner product, and one can even have $\langle\psi,\psi\rangle_\chi = 0$ at exceptional points. Since the main utility of computing eigenvalues in Meep is arguably for computing resonance modes of non-Hermitian problems (since most Hermitian cases can be handled more efficiently in MPB), we should probably just stick with the unweighted inner product $\langle u, v\rangle$.
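The accuracy claim can be checked numerically on the same kind of toy problem used above: perturbing an exact eigenvector by $\delta \sim 10^{-4}$ should leave the $\chi$-weighted Rayleigh quotient accurate to $O(\delta^2)$, while the unweighted quotient on the non-normal operator $\chi^{-1}\hat{C}$ errs at $O(\delta)$. An illustrative sketch (made-up matrices, not Meep code):

```python
import numpy as np
from scipy.linalg import block_diag

# Toy lossless problem with eigenfrequencies omega = ±1, ±2, ±3.
K = block_diag(*[np.array([[0.0, w], [-w, 0.0]]) for w in (1.0, 2.0, 3.0)])
chi = np.diag([1.3, 2.1, 1.7, 2.9, 1.1, 2.4])
S = np.sqrt(chi)
C = S @ K @ S

# Exact (unit-norm) eigenvector for omega = 2, i.e. eigenvalue -2i of chi^{-1} C.
T = np.linalg.solve(chi, C)
lam, V = np.linalg.eig(T)
psi = V[:, np.argmin(np.abs(lam + 2j))]

# Perturb it by delta = 1e-4 in a generic complex direction.
rng = np.random.default_rng(0)
d = rng.standard_normal(6) + 1j * rng.standard_normal(6)
psi_pert = psi + 1e-4 * d / np.linalg.norm(d)

# chi-weighted Rayleigh quotient (stationary at eigenvectors): O(delta^2) error.
om_w = 1j * (psi_pert.conj() @ C @ psi_pert) / (psi_pert.conj() @ chi @ psi_pert)
# Unweighted Rayleigh quotient on the non-normal chi^{-1} C: O(delta) error.
om_u = 1j * (psi_pert.conj() @ T @ psi_pert) / (psi_pert.conj() @ psi_pert)

assert abs(om_w - 2.0) < 1e-6             # roughly delta^2
assert abs(om_w - 2.0) < abs(om_u - 2.0)  # weighted beats unweighted
```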
Correcting for Time Discretization
Meep does not compute $\partial\psi/\partial t$ exactly, of course; it uses a finite-difference approximation:

$$\frac{\partial\psi}{\partial t} \approx \frac{\psi(t + \Delta t/2) - \psi(t - \Delta t/2)}{\Delta t} .$$

So, whereas a time-harmonic field $\psi(t) \sim e^{-i\omega t}$ would have $\partial\psi/\partial t = -i\omega\psi$, we instead have

$$\frac{\psi(t+\Delta t/2) - \psi(t-\Delta t/2)}{\Delta t} = \frac{e^{-i\omega\Delta t/2} - e^{+i\omega\Delta t/2}}{\Delta t}\,\psi(t) = -i\tilde{\omega}\,\psi(t), \qquad \tilde{\omega} = \frac{2}{\Delta t}\sin\!\left(\frac{\omega\Delta t}{2}\right) .$$

Note that $\tilde{\omega} = \omega + O(\Delta t^2)$, so that the two agree for $\Delta t \to 0$.
In all of the analyses above, we simply replace $\omega$ with $\tilde{\omega}$ wherever it appears (e.g. for $\omega$, $\omega_0$, and the Rayleigh-quotient estimate), and everything carries through in the same way. At the end of an eigenvalue calculation, we compute $\omega$ from $\tilde{\omega}$ using the formula

$$\omega = \frac{2}{\Delta t}\sin^{-1}\!\left(\frac{\tilde{\omega}\,\Delta t}{2}\right) .$$
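A quick numerical check of the $\tilde{\omega}$ relation and its inverse (the values of $\Delta t$ and $\omega$ here are arbitrary):

```python
import numpy as np

dt = 0.05
omega = 1.3

# Discrete-time frequency seen by the centered finite difference:
# (e^{-i w dt/2} - e^{+i w dt/2}) / dt = -i * (2/dt) sin(w dt/2) = -i * w_tilde
omega_tilde = (2.0 / dt) * np.sin(omega * dt / 2.0)

# Check directly against the centered difference of e^{-i w t} at t = 0:
fd = (np.exp(-1j * omega * dt / 2) - np.exp(1j * omega * dt / 2)) / dt
assert np.isclose(fd, -1j * omega_tilde)

# Undo the discretization at the end of an eigenvalue calculation:
omega_back = (2.0 / dt) * np.arcsin(omega_tilde * dt / 2.0)
assert np.isclose(omega_back, omega)

# The two frequencies agree to O(dt^2): leading error is omega^3 dt^2 / 24.
assert abs(omega_tilde - omega) < omega**3 * dt**2 / 24 * 1.01
```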
There are many ways to potentially improve a numerical eigensolver beyond the simple shift-and-invert power method described above. For example, the most common technique would be to plug the same linear solves into an Arnoldi iteration, e.g. as implemented by a library like ARPACK. An advantage of Arnoldi iteration, beyond accelerated convergence (especially if our shift estimate $\omega_0$ is not so accurate), is that it can compute multiple eigenvalues simultaneously (albeit with increased computational expense).
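For instance, SciPy's ARPACK wrapper scipy.sparse.linalg.eigs provides a shift-invert mode (the sigma argument) for generalized eigenproblems. Applied to the toy matrices from before, it returns the several eigenfrequencies nearest the shift at once (an illustrative sketch, not Meep's solver, which would supply the linear solves matrix-free):

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.sparse.linalg import eigs

# Toy problem with eigenfrequencies omega = ±1, ±2, ±3 (lam = -i*omega).
K = block_diag(*[np.array([[0.0, w], [-w, 0.0]]) for w in (1.0, 2.0, 3.0)])
chi = np.diag([1.3, 2.1, 1.7, 2.9, 1.1, 2.4])
S = np.sqrt(chi)
C = S @ K @ S

# ARPACK shift-invert: find the k eigenvalues of C psi = lam * chi psi
# closest to sigma = -i*omega0 (cast to complex for the complex shift).
omega0 = 2.3
lam, V = eigs(C.astype(complex), k=2, M=chi.astype(complex),
              sigma=-1j * omega0, which='LM')
omegas = 1j * lam

# The two eigenfrequencies nearest omega0 = 2.3 are 2 and 3.
assert np.allclose(np.sort(omegas.real), [2.0, 3.0], atol=1e-8)
assert np.allclose(omegas.imag, 0.0, atol=1e-8)
```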