The following are frequently asked questions.

What is Meep?

Meep is a free and open-source software package for electromagnetics simulation via the finite-difference time-domain (FDTD) method. The name Meep is an acronym for MIT Electromagnetic Equation Propagation.

Who are the developers of Meep?

Meep was originally developed as part of graduate research at MIT. The project is now being maintained by Simpetus and the developer community on GitHub.

Where can I ask questions regarding Meep?

There is a public mailing list for users to discuss issues pertaining to setting up simulations, post-processing output, installation, etc. A useful place to start is the list archives which includes all postings (6000+) since 2006 spanning a variety of topics. Bug reports and new feature requests should be filed as a GitHub issue.

Are professional consulting services available?

Yes. Simpetus, a company started by Meep's developers and maintainers, provides professional consulting services for photonic design and modeling including development of turn-key simulation modules as well as training and technical support for getting up and running with Meep.

How can I contribute to the Meep project?

Pull requests involving bug fixes, new features, and general improvements are welcome and can be made to the master branch on GitHub. This includes tweaks, revisions, and updates to this documentation, generated from markdown, which is also part of the source repository.

Is there a technical reference for Meep?

Yes. The technical details of Meep's inner workings are described in the peer-reviewed publication MEEP: A flexible free-software package for electromagnetic simulations by the FDTD method, Computer Physics Communications, Vol. 181, pp. 687-702, 2010 (pdf). Additional information is provided in the book Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology in Chapters 4 ("Electromagnetic Wave Source Conditions"), 5 ("Rigorous PML Validation and a Corrected Unsplit PML for Anisotropic Dispersive Media"), 6 ("Accurate FDTD Simulation of Discontinuous Materials by Subpixel Smoothing"), and 20 ("MEEP: A Flexible Free FDTD Software Package"). A video presentation and slides as well as a podcast are also available.

Where can I find a list of projects which have used Meep?

For a list of more than 2500 published works which have used Meep, see the Google Scholar citation page as well as the citation pages for the technical reference and the subpixel smoothing reference.

Can I access Meep in the public cloud?

Yes. Meep is available preinstalled on Ubuntu on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) as a free Amazon Machine Image (AMI). To access this AMI, follow these instructions.


Where can I install Meep?

Meep runs on any Unix-like operating system, such as Linux, macOS, and FreeBSD, from notebooks to desktops to supercomputers. Conda packages of the latest released version are available for Linux and macOS. There are also Conda packages of nightly development builds which can be used to experiment with new features. Installing Meep from the source code requires some understanding of Unix, especially to install the various dependencies. Installation shell scripts are available for Ubuntu 16.04 and 18.04 and macOS Sierra.

Can I install Meep on Windows machines?

Yes. For Windows 10, you can install the Ubuntu terminal as an app which is based on the Windows Subsystem for Linux framework and then follow the instructions for obtaining the Conda packages or building from source. Support for visualization is enabled using a browser-based Jupyter notebook which can be installed via the Ubuntu terminal. For Windows 8 and older versions, you can use the free Unix-compatibility environment Cygwin following these instructions.

Are there precompiled binary packages for Ubuntu?

Yes. Ubuntu and Debian packages can be obtained via the package manager APT as described in Download. However, the Meep packages for Ubuntu 16.04 (serial and parallel) and 18.04 (serial and parallel) are for version 1.3 (March 2015) which is out of date. The Meep package for Ubuntu is in the process of being updated and will likely appear in Ubuntu 19.10 as derived from the unstable Debian package. In the meantime, since the Scheme interface is no longer being supported and has been replaced by the Python interface, you can use the Conda packages which contain the official releases as well as nightly builds of the master branch of the source repository.

Guile is installed, but configure complains that it can't find guile

With most Linux distributions as well as Cygwin, packages like Guile are split into two parts: a guile package that just contains the libraries and executables, and a guile-dev or guile-devel package that contains the header files and other things needed to compile programs using Guile. Usually, the former is installed by default but the latter is not. You need to install both, which means that you probably need to install guile-dev. Similarly for any other library packages needed by Meep.


How does the current amplitude relate to the resulting field amplitude?

There is no simple formula relating the input current amplitude (J in Maxwell's equations) to the resulting fields (E) etcetera, even at the same point as the current. The exact same current will produce different fields and radiate a different total power depending on the surrounding materials/geometry and on the frequency. This is a physical consequence of the geometry's effect on the local density of states; it can also be thought of as feedback from reflections on the source. A classic example is an antenna in front of a ground plane, which radiates very different amounts of power depending on the distance between the antenna and the plane (half wavelength vs. quarter wavelength, for example). Alternatively, if you put a current source inside a perfect electric conductor, the resulting field will be zero. Also, as the frequency of the current increases, the amplitude of the resulting field will also increase. This is related to why the sky is blue: scattered power increases with frequency (equivalently, the density of states increases as ω^(d−1) in d dimensions).

For a mathematical description, see Section 4.4 ("Currents and Fields: The Local Density of States") in Chapter 4 ("Electromagnetic Wave Source Conditions") of Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology.

If you are worried about this, then you are probably setting up your calculation in the wrong way. Especially in linear materials, the absolute magnitude of the field is useless; the only meaningful quantities are dimensionless ratios like the fractional transmittance: the transmitted power relative to the transmitted power in some reference calculation. Almost always, you want to perform two calculations, one of which is a reference, and compute the ratio of a result in one calculation to the result in the reference. For nonlinear calculations, see Units and Nonlinearity.

How is the source current defined?

The source current in Meep is defined as a free charge current J in Maxwell's equations. Meep does not simulate the driving force behind this free charge current, nor does the current have to be placed in a conductor. Specifying a current means that somehow you are shaking a charge at that point (by whatever means, Meep doesn't care) and you want to know the resulting fields. In a linear system, multiplying J by 2 results in multiplying the fields by 2.

In the interface, the source currents are labeled Ex or Hy etc. according to what components of the electric/magnetic fields they correspond to.

How do I compute the local density of states (LDOS) in a lossy material?

If you put a point source inside a lossy material (e.g., a Lorentz-Drude metal), then the power expended by a dipole diverges as you increase the resolution. In the limit of infinite resolution, infinite power is absorbed. LDOS is not well defined for points inside of a lossy material.

However, LDOS is perfectly well defined for points outside of a lossy material. For example, you can choose a point outside of a lossy object and calculate the LDOS, and it will converge to a finite value as you increase the resolution. For an example, see Tutorial/Local Density of States.

How do I set the imaginary part of ε?

If you only care about the imaginary part of ε in a narrow bandwidth around some frequency ω, you should set it by using the electric conductivity. If you care about the imaginary part of ε over a broad bandwidth, then for any physical material the imaginary part will be frequency-dependent and you will have to fit the data to a Drude-Lorentz susceptibility model.

Meep doesn't implement a frequency-independent complex ε. Not only is this not physical, but it also leads to both exponentially decaying and exponentially growing solutions in Maxwell's equations from positive- and negative-frequency Fourier components, respectively. Thus, it cannot be simulated in the time domain.
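
As a concrete illustration of the narrow-bandwidth approach, the following sketch converts an assumed complex refractive index n + ik at a single frequency into the equivalent real ε plus electric conductivity. The index, wavelength, and unit-length choices are hypothetical; the conversion assumes Meep's convention that the conductivity enters as ε(1 + iσ_D/ω) with ω = 2πf in Meep units.

```python
import math

# Hypothetical target: complex index n + ik at one frequency of interest.
# Assuming Meep's convention eps_complex = eps*(1 + i*sigma_D/omega), with
# omega = 2*pi*f in Meep units, the conductivity is sigma_D = 2*pi*f*Im(eps)/Re(eps).
n, k = 3.5, 0.1      # assumed complex refractive index
fcen = 1 / 1.55      # assumed frequency (unit length = 1 um, wavelength 1.55 um)

eps_complex = (n + 1j * k) ** 2
eps_re, eps_im = eps_complex.real, eps_complex.imag
sigma_d = 2 * math.pi * fcen * eps_im / eps_re

# The corresponding medium would then be constructed as (not executed here):
#   mp.Medium(epsilon=eps_re, D_conductivity=sigma_d)
print(round(eps_re, 4), round(eps_im, 4), round(sigma_d, 6))
```

Remember that this single-frequency model is only meaningful over a narrow bandwidth around fcen; for broadband loss you must fit a Drude-Lorentz model as described above.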

Why does my simulation diverge if ε < 0?

Maxwell's equations have exponentially growing solutions for a frequency-independent negative ε. For any physical medium with negative ε, there must be dispersion, and you must likewise use dispersive materials in Meep to obtain negative ε at some desired frequency. The requirement of dispersion to obtain negative ε follows from the Kramers–Kronig relations, and also follows from thermodynamic considerations that the energy in the electric field must be positive. For example, see Electrodynamics of Continuous Media by Landau, Pitaevskii, and Lifshitz. At an even more fundamental level, it can be derived from passivity constraints as shown in Physical Review A, Vol. 90, 023847, 2014.

If you solve Maxwell's equations in a homogeneous-ε material at some real wavevector k, you get a dispersion relation ω² = c²k²/ε. If ε is positive, there are two real solutions ω = ±ck/√ε, giving oscillating solutions. If ε is negative, there are two imaginary solutions ω = ±ick/√|ε|, corresponding to exponentially decaying and exponentially growing solutions from any current source. These solutions can always be spatially decomposed into a superposition of real-k values via a spatial Fourier transform.

If you do a simulation of any kind in the time domain (not just FDTD), you pretty much can't avoid exciting both the decaying and the growing solutions. This is not a numerical instability, it is a real solution of the underlying equations for an unphysical material.

See Materials for how to include dispersive materials which can have negative ε and loss.

If you have negative ε and negative μ everywhere, the case of a negative-index material, then the simulation is fine. However at the boundary between negative- and positive-index materials, you will encounter instabilities: because of the way Maxwell's equations are discretized in FDTD, the ε and μ are discretized on different spatial grids, so you will get a half-pixel or so of εμ < 0 at the boundary between negative and positive indices, which will cause the simulation to diverge. But of course, any physical negative-index metamaterial also involves dispersion.

Note also that, as a consequence of the above analysis, ε must go to a positive value in the ω→±∞ limit to get non-diverging solutions of Maxwell's equations. So the ε∞ in your dispersion model must be positive.

Why are there strange peaks in my reflectance/transmittance spectrum when modeling planar or periodic structures?

There are two possible explanations: (1) the simulation run time may be too short and your results have not sufficiently converged, or (2) you may be using a higher-dimensional cell with multiple periods (a supercell), which introduces unwanted additional modes due to band folding. Modeling flat/planar structures typically requires a 1d cell, and periodic structures a single unit cell in 2d/3d. For more details, see Section 4.6 ("Sources in Supercells") in Chapter 4 ("Electromagnetic Wave Source Conditions") of Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology. Note that a 1d cell must be along the z direction, with only the Ex and Hy field components permitted.

How do I model the solar radiation spectrum?

For simulations involving solar radiation, including the air mass, the reflectance/transmittance spectrum is computed as usual. Since typical solar-cell problems are linear, the reflected or transmitted power can then be obtained by simply multiplying the reflectance or transmittance by the solar spectrum.
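
The weighting step can be sketched as follows. The wavelength grid, transmittance values, and spectral-irradiance numbers below are made-up sample data, not real AM1.5 values; in practice you would substitute the tabulated solar spectrum and your computed flux ratios.

```python
# Hypothetical sample data: transmittance from a Meep flux-ratio calculation
# and a (made-up) solar spectral irradiance on the same wavelength grid.
wavelengths = [0.4, 0.5, 0.6, 0.7, 0.8]         # um
transmittance = [0.90, 0.92, 0.95, 0.93, 0.88]  # dimensionless flux ratio
solar = [1.2, 1.9, 1.8, 1.5, 1.1]               # W/m^2/nm (illustrative)
dlam = 100.0                                    # nm per sample

# Transmitted power = sum of T(lam)*S(lam)*dlam; normalize by incident power.
transmitted_power = sum(t * s * dlam for t, s in zip(transmittance, solar))
incident_power = sum(s * dlam for s in solar)
print(round(transmitted_power / incident_power, 4))  # solar-weighted transmittance
```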


Is there a Python interface?

Yes. An official Python interface was released in version 1.4 and replaces the Scheme interface which is no longer being supported. An unofficial Python interface, which predates and is incompatible with the official version, has been developed independently by researchers at the Institute of Physics at the Czech Academy of Sciences and Ghent University, and maintained by Filip Dominec. Unfortunately, this interface has several shortcomings including missing support for geometric objects, lack of high-level abstractions for low-level functionality, and limited documentation. The official interface addresses all these issues.

What are the different ways to define a structure?

There are four ways to define a structure: (1) the GeometricObject (Python) or geometric-object (Scheme) class used to specify a collection of predefined shapes including Prism, Sphere, Cylinder, Cone, Block, and Ellipsoid, (2) material_function (Python) or material-function (Scheme) used to define an arbitrary function: for a given position in the cell, return the ε/μ at that point, (3) import the scalar, real-valued, frequency-independent permittivity from an HDF5 file via the epsilon_input_file (Python) or epsilon-input-file (Scheme) input parameter, or (4) import planar geometries from a GDSII file. Combinations of (1), (2), and (4) are allowed but not (3).

Is there a materials library?

Yes. A materials library is available containing crystalline silicon (c-Si), amorphous silicon (a-Si) including the hydrogenated form, silicon dioxide (SiO2), indium tin oxide (ITO), alumina (Al2O3), gallium arsenide (GaAs), gallium nitride (GaN), aluminum arsenide (AlAs), aluminum nitride (AlN), borosilicate glass (BK7), fused quartz, silicon nitride (Si3N4), germanium (Ge), indium phosphide (InP), lithium niobate (LiNbO3), as well as 11 elemental metals: silver (Ag), gold (Au), copper (Cu), aluminum (Al), beryllium (Be), chromium (Cr), nickel (Ni), palladium (Pd), platinum (Pt), titanium (Ti), and tungsten (W). Additional information is provided in Materials.

How do I import n and k values into Meep?

You can import any arbitrary complex permittivity profile via n and k values into Meep by fitting the wavelength- or frequency-dependent data to a sum of Drude-Lorentz polarizability terms as described in Materials. In general, you have to use nonlinear optimization to do the fit (e.g., to minimize the sum-of-squares errors or whatever error criterion you prefer). Enough Lorentzians should form a complete basis, so you should be able to fit any function given enough Lorentzians. A wavelength-dependent, purely-real permittivity (i.e., with no loss) which can be represented using the Sellmeier equation can be directly transferred to the Lorentz model using a simple substitution of variables. Note that Meep only does subpixel averaging of the nondispersive part of ε (and μ).
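
The Sellmeier-to-Lorentz substitution mentioned above can be sketched directly. Sellmeier gives n²(λ) = 1 + Σ B_i λ²/(λ² − C_i); in Meep frequency units f = 1/λ (unit length = 1 μm), each term is algebraically identical to a lossless Lorentzian with resonance f_i = 1/√C_i and strength σ_i = B_i, i.e. a LorentzianSusceptibility(frequency=f_i, sigma=B_i) with zero damping. The coefficients below are the standard fused-silica Sellmeier values.

```python
import math

# Standard fused-silica Sellmeier coefficients (lam in um, C in um^2).
B = [0.6961663, 0.4079426, 0.8974794]
C = [0.0684043**2, 0.1162414**2, 9.896161**2]

# Substitution of variables: f_i = 1/sqrt(C_i), sigma_i = B_i.
lorentz_terms = [(1 / math.sqrt(c), b) for c, b in zip(C, B)]

def eps_lorentz(f):
    """Permittivity from the Lorentz form at Meep frequency f = 1/lam."""
    return 1 + sum(s * fi**2 / (fi**2 - f**2) for fi, s in lorentz_terms)

# Check the two forms agree at lam = 1.55 um:
lam = 1.55
n2_sellmeier = 1 + sum(b * lam**2 / (lam**2 - c) for b, c in zip(B, C))
print(round(math.sqrt(n2_sellmeier), 5), round(math.sqrt(eps_lorentz(1 / lam)), 5))
```

Since the substitution is exact, no fitting is needed for lossless Sellmeier materials; nonlinear fitting is only required when the measured data include loss.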

Does Meep support gyrotropic materials?

No. Currently, Meep only supports anisotropic real-symmetric permittivity tensors. In the magneto-optic effect, an external magnetic field yields imaginary off-diagonal components of ε (with no absorption), which is not yet supported (issue #60).

Why are the fields blowing up in my simulation?

Instability in the fields is likely due to one of four causes: (1) PML overlapping dispersive materials based on a Drude-Lorentzian susceptibility in the presence of backward-wave modes (fix: replace the PML with an Absorber), (2) the frequency of a Lorentzian susceptibility term is too high relative to the grid discretization (fix: increase the resolution and/or reduce the Courant factor), (3) a material with a wavelength-independent negative real permittivity (fix: fit the permittivity to a broadband Drude-Lorentzian susceptibility), or (4) a grid voxel contains more than one dielectric interface (fix: turn off subpixel averaging).

Does Meep support importing GDSII files?

Yes. The get_GDSII_prisms routine is used to import GDSII files. See Tutorial/GDSII Import for an example. This feature facilitates the simulation of 2d/planar structures which are fabricated using semiconductor foundries. Also, it enables Meep's plug-and-play capability with electronic design automation (EDA) circuit-layout editors (e.g., Cadence Virtuoso Layout, Silvaco Expert, KLayout, etc.). EDA is used for the synthesis and verification of large and complex integrated circuits.

Checking convergence

In any computer simulation like Meep, you should check that your results are converged with respect to any approximation that was made. There is no simple formula that will tell you in advance exactly how much resolution (etc.) is required for a given level of accuracy; the most reliable procedure is to simply double the resolution and verify that the answers you care about don't change to your desired tolerance. Useful things to check (ideally by doubling) in this way are: resolution, run time (for Fourier spectra), PML thickness.

Meep's subpixel smoothing often improves the rate of convergence and makes convergence a smoother function of resolution. However, subpixel smoothing does not occur for dispersive materials or for user-defined material functions ε(x) used instead of the built-in geometric objects.

For flux calculations involving pulsed (i.e., Gaussian) sources, it is important to run the simulation long enough to ensure that all the transient fields have sufficiently decayed away (e.g., due to absorption by the PMLs). Terminating the simulation prematurely will result in the Fourier-transformed fields, which are accumulated during the time stepping (as explained in Introduction), not being fully converged. Convergence of the fields is typically achieved by lowering the decay_by parameter in the stop_when_fields_decayed run function. Alternatively, you can explicitly set the run time to some numeric value that you repeatedly double, instead of using the field decay. Sometimes it is also informative to double the cutoff parameter of sources to increase their smoothness (reducing the amplitude of long-lived high-frequency modes).

Why are my reflectance/transmittance values less than zero and/or greater than one?

There are four possible explanations: (1) the normalization and the scattering runs are not comparable because e.g., the sources or flux monitors are not in the same position within the structure, (2) the run time is not long enough and hence all of the flux is not being collected in either or both runs, (3) the flux is being computed at a frequency which is too far away from the center of the source bandwidth; in such cases the flux values are too small and may be dominated by rounding errors, or (4) the source or flux monitor is positioned too close to the scatterer which modifies the local density of states (LDOS); for example, a source emits more power near a band edge and less power within a bandgap than the same source surrounded by many wavelengths of vacuum.

Note: the Poynting flux is a dimensionful quantity which can be any value (positive or negative).

How do I compute the modes of a non-orthogonal (i.e., triangular) lattice?

Meep does not support non-rectangular unit cells. To deal with a triangular lattice, you have to use a rectangular supercell, which causes the band structure to be folded. However, if you replicate your point source according to the underlying triangular lattice vectors, with the correct phase relationship according to the Bloch wavevector, then the folded bands should only be excited with very low amplitude as reported by Harminv. Also, for every Harminv point you put in, you should analyze the fields from the periodic copies of that point (with the periodicity of the underlying lattice). Then, reject any frequency that is not detected at all of these points with amplitudes related by (approximately) the correct exp(ik⋅x) phase factors.

In principle, the excitation of the folded bands would be exactly zero if you place your sources correctly in the supercell. However, this doesn't happen in FDTD because the finite grid spoils the symmetry slightly. It also means that the detection of folded bands will vary with resolution.

For an example, see Section 4.6 ("Sources in Supercells") in Chapter 4 ("Electromagnetic Wave Source Conditions") of Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology.
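
The source-replication step above can be sketched as follows. The lattice vectors are those of a triangular lattice; the Bloch wavevector and the number of replicas are illustrative, and the 2π factor in the phase assumes Meep's wavevector convention.

```python
import cmath
import math

# Triangular lattice vectors (unit length = lattice constant) and an
# illustrative Bloch wavevector; each replica of a source at the origin,
# translated by R = n1*a1 + n2*a2, gets amplitude exp(i k . R).
a1 = (1.0, 0.0)
a2 = (0.5, math.sqrt(3) / 2)
k_bloch = (0.3, 0.2)  # hypothetical, in units of 2*pi/lattice-constant

def phase(R):
    """exp(i k . R), with the 2*pi factor of the assumed Meep convention."""
    kdotR = k_bloch[0] * R[0] + k_bloch[1] * R[1]
    return cmath.exp(2j * math.pi * kdotR)

replicas = []
for n1 in range(2):
    for n2 in range(2):
        R = (n1 * a1[0] + n2 * a2[0], n1 * a1[1] + n2 * a2[1])
        replicas.append((R, phase(R)))  # use phase(R) as the source amplitude

for R, amp in replicas:
    print(R, round(amp.real, 4), round(amp.imag, 4))
```

The same phase factors are what you would check against when rejecting folded-band frequencies detected by the Harminv monitors at these replicated points.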

Why doesn't turning off subpixel averaging work?

By default, when Meep assigns a dielectric constant ε or μ to each pixel, it uses a carefully designed average of the ε values within that pixel. This subpixel averaging generally improves the accuracy of the simulation — perhaps counter-intuitively, for geometries with discontinuous ε it is more accurate (i.e. closer to the exact Maxwell result for the discontinuous case) to do the simulation with the subpixel-averaged (smoothed) ε, as long as the averaging is done properly. For details, see Section 3 ("Interpolation and the illusion of continuity") of Computer Physics Communications, Vol. 181, pp. 687-702, 2010.

Still, there are times when, for whatever reason, you might not want this feature. For example, if your accuracy is limited by other issues, or if you want to skip the wait at the beginning of the simulation for it to do the averaging. In this case, you can disable the subpixel averaging by setting Simulation.eps_averaging = False (Python) or (set! eps-averaging? false) (Scheme). For more details, see Python User Interface.

Even if you disable the subpixel averaging, however, when you output the dielectric function to a file and visualize it, you may notice that there are some pixels with intermediate ε values right at the boundary between two materials. This is due to a completely different reason. Internally, Meep's simulation is performed on a Yee grid, in which each field component is stored on a slightly different grid, offset from the others by half-pixels, and the ε values are also stored on this Yee grid. For output purposes, however, it is more user-friendly to output all fields etcetera on the same grid at the center of each pixel, so all quantities are interpolated onto this grid for output. Therefore, even though the internal ε values are indeed discontinuous when you disable subpixel averaging, the output file will still contain some "averaged" values at interfaces due to the interpolation from the Yee grid to the center-pixel grid. For the same reason, if k_point is set and the boundaries are Bloch-periodic, the permittivity function of the entire cell obtained via get_epsilon or output_epsilon will show that a little of the cell from one edge "leaks" over to the other edge. This is independent of PML and of the way the structure is defined (i.e., using geometric objects, a material function, etc.). An example is shown in the figure below comparing output_epsilon for two cases, with and without a k_point. The discretization artifacts are highlighted.

Can subpixel averaging be applied to dispersive materials?

No. Meep only does subpixel averaging of the non-dispersive part of ε and μ. The dispersive part is not averaged at all. This means that any sharp interfaces between dispersive materials will dominate the error, and you will probably get only first-order convergence, the same as if you do no subpixel averaging at all. It is possible that the subpixel averaging may still improve the constant factor in the convergence if not the asymptotic convergence rate, if you also have a lot of interfaces between non-dispersive materials or if the dispersion is small (i.e., if the instantaneous ε∞ is close to the full ε over your bandwidth). On the other hand, if the dispersion is large and most of your interfaces are between large-dispersion materials, then subpixel averaging may not help at all and you might as well turn it off (which may improve stability). Generally, the subpixel averaging will not degrade accuracy, though it will affect performance.

What is a good rule of thumb for the grid resolution?

At least 8 pixels per wavelength in the lossless dielectric material with the highest index. Resolving the skin depth of metals, which is typically tens of nanometers at optical frequencies, will require a pixel size of comparable dimensions since subpixel averaging does not apply to dispersive materials.
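
In Meep units the resolution is specified in pixels per unit length, so "8 pixels per wavelength in the medium" translates to a resolution of at least 8n/λ. A quick sketch, with an assumed wavelength and maximum index:

```python
import math

# Rule of thumb: >= 8 pixels per wavelength in the highest-index lossless
# material. The in-medium wavelength is lam/n, so resolution >= 8*n/lam
# in pixels per unit length (unit length = 1 um assumed here).
lam = 1.55   # vacuum wavelength in um (assumed)
n_max = 3.5  # highest refractive index in the cell (assumed)

resolution = math.ceil(8 * n_max / lam)
print(resolution)  # pixels per um
```

Treat this as a starting point only; the convergence check described above (doubling the resolution and comparing) is still required.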

Why does subpixel averaging take so long?

There are at least two possible causes: (1) using a material_function to define a Medium object or (2) using the C++ interface. Unlike the Python and Scheme interfaces, which analytically compute the averaged permittivity for boundary voxels involving at most one GeometricObject (e.g., Sphere, Prism, Block, etc.), the C++ interface computes these averages from the material_function using numerical quadrature if the parameter use_anisotropic_averaging=true is passed to the constructor of set_materials. This procedure involves calling the material_function many times for every voxel in the structure object, which can be slow due to the SWIG callbacks, particularly because the voxel density is repeatedly doubled until a given threshold tolerance (subpixel_tol) or maximum iteration number (subpixel_maxeval) is reached. Because of this discrepancy in the subpixel averaging, the results for the C++ and Python/Scheme interfaces may be slightly different at the same resolution. You can potentially speed up subpixel averaging by increasing subpixel_tol or decreasing subpixel_maxeval. Note that the slow callbacks may still be noticeable during the grid initialization even when subpixel averaging is turned off. Just remember that if you turn off subpixel averaging, you may need to increase the grid resolution to obtain the same accuracy; you will have to determine how much accuracy you want to trade for time. Alternatively, in the C++ interface you can use the meepgeom.hpp routines to define your geometry in terms of blocks, cylinders, etcetera similar to Python and Scheme, with semi-analytical subpixel averaging.

For calculations involving Fourier-transformed fields, why should the source be a pulse rather than a continuous wave?

A continuous-wave source (ContinuousSource) produces fields which are not integrable: their Fourier transform will not converge as the run time of the simulation is increased because the source never terminates. The Fourier-transformed fields are therefore arbitrarily defined by the run time. This windowing does different things to the normalization and scattering runs because the spectra are different in the two cases. In contrast, a pulsed source (GaussianSource) produces fields which are L2-integrable: their Fourier transform is well defined and convergent as long as the run time is sufficiently large and the fields have decayed away.

Note: for a continuous-wave source, the amplitude of the Fourier transform grows linearly with time, and the Poynting flux, which is proportional to the amplitude squared, grows quadratically.
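
This contrast can be demonstrated with a toy discrete Fourier sum, independent of Meep. All parameters below are arbitrary illustrative values: a truncated cosine ("CW") and a Gaussian-enveloped pulse are transformed at the source frequency for two run times.

```python
import cmath
import math

f = 1.0    # frequency of interest
w = 1.0    # Gaussian envelope width
t0 = 5.0   # pulse peak time
dt = 0.05  # "timestep"

def fourier_amp(signal, T):
    """|sum over 0 <= t < T of s(t)*exp(-2*pi*i*f*t)*dt| (discrete FT at f)."""
    N = int(round(T / dt))
    acc = sum(signal(n * dt) * cmath.exp(-2j * math.pi * f * n * dt) for n in range(N))
    return abs(acc * dt)

def cw(t):
    return math.cos(2 * math.pi * f * t)

def pulse(t):
    return math.exp(-((t - t0) ** 2) / (2 * w**2)) * math.cos(2 * math.pi * f * t)

for T in (15.0, 30.0):
    print(T, round(fourier_amp(cw, T), 3), round(fourier_amp(pulse, T), 6))
# The CW amplitude roughly doubles when T doubles; the pulse amplitude is
# unchanged once the pulse has decayed, i.e. its transform has converged.
```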

How do I create a focused beam with a Gaussian envelope?

A focused beam with a Gaussian envelope can be created using the amplitude function (amp_func) of the Source object. Examples are provided for Python and Scheme. Four snapshots of the resulting field profile generated using this script for different values of the beam width (sigma) and rotation angle (tilt_angle) are shown in the following image:

The numerical aperture (NA) of a Gaussian beam of width w (2*sigma from the example script) and vacuum wavelength λ in a medium of index n is n⋅sin(λ/(πnw)).
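
Evaluating the NA formula above for a sample beam (all values illustrative, with the beam width w taken as 2*sigma as in the example script):

```python
import math

n = 1.0        # index of the propagation medium (assumed)
lam = 1.0      # vacuum wavelength in Meep units (assumed)
sigma = 3.0    # beam-width parameter from the example script (assumed)
w = 2 * sigma  # beam width as defined above

# NA = n*sin(lam/(pi*n*w)) per the formula above.
na = n * math.sin(lam / (math.pi * n * w))
print(round(na, 4))
```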

Note: in this example, the beam waist is at the source position (i.e., top center of the cell). If you want the beam waist to be at a position other than the position of the source, you need to adjust the phase of the beam accordingly. If you assume you have a Gaussian beam profile with zero phase at some plane y=y0, then you can work out the beam profile (including phase) at any other plane y=y1 by taking the Fourier transform and looking at the propagation of each planewave component, and then inverse Fourier transforming. In this way, you can work out the desired source profile at any plane y=y1 to get a Gaussian beam waist at y=y0.

How do I compute the effective index of an eigenmode of a lossy waveguide?

To compute the effective index, you will need to first compute the complex ω (the loss in time) for a real β (the propagation constant) and then convert this quantity into a loss in space (complex β at a real ω) by dividing by the group velocity vg. This procedure is described in more detail below.

To obtain the loss in time, you make your computational cell a cross-section of your waveguide (i.e. 2d for a waveguide with constant cross-section), and set Bloch-periodic boundary conditions via the k_point input variable — this specifies your (real) β. You then treat it exactly the same as a resonant-cavity problem: you excite the system with a short pulse source, monitor the field at some point, and then analyze the result with Harminv; all of which is done if you call run_kpoints. This will give you the complex ω at the given β, where the imaginary part is the loss rate in time. Note: the loss in a uniform waveguide, with no absorption or disorder, is zero, even in the discretized system.

That is, you have ω(β_r) = ω_r + iω_i, where the subscripts r and i denote the real and imaginary parts. Now, what you want is the complex β at the real ω, which is given by: β(ω_r) = β_r − iω_i/v_g + O(ω_i²). That is, to first order in the loss, the imaginary part of β (the propagation loss) at the real frequency ω_r is given just by dividing ω_i by the group velocity v_g = dω/dβ, which you can get from the dispersion relation in the absence of loss. This relationship is just a consequence of the first-order Taylor expansion of the dispersion relation ω(β) in the complex plane.

This analysis is only valid if the loss is small, i.e. ω_i ≪ ω_r. This should always be the case in any reasonable waveguide, where the light can travel for many wavelengths before dissipating/escaping. If you have extremely large losses so that it only propagates for a few wavelengths or less, then you would have to treat the problem differently — but in this case, the whole concept of a "waveguide mode" is not clearly defined.
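
The conversion can be sketched numerically. All values below are illustrative Meep-unit numbers standing in for Harminv output and a group velocity from the lossless dispersion relation; with c = 1 in Meep units, the complex effective index is simply β/ω_r.

```python
# Illustrative stand-ins for Harminv output at a fixed real beta (k_point):
omega_r = 0.64     # real part of the Harminv frequency
omega_i = -1.6e-4  # imaginary part (negative = decay in time)
beta_r = 1.0       # the real propagation constant set via k_point
vg = 0.55          # group velocity d(omega)/d(beta) of the lossless mode

# First-order conversion: beta(omega_r) = beta_r - i*omega_i/vg.
beta = complex(beta_r, -omega_i / vg)

# Complex effective index (c = 1 in Meep units): n_eff = beta/omega_r.
n_eff = beta / omega_r
print(round(n_eff.real, 4), n_eff.imag)
```

A positive imaginary part of n_eff here corresponds to spatial decay of the mode, consistent with the temporal decay reported by Harminv.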

How do I compute the group velocity of a mode?

There are two possible approaches for computing the group velocity: (1) compute the band diagram ω(k) using Harminv, fit it to a polynomial, and calculate its derivative using a finite difference, or (2) excite the mode using a narrowband pulse and compute the ratio of the flux to energy density.
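
Approach (1) in miniature: sample ω(k) at nearby Bloch wavevectors and take a centered finite difference. The dispersion relation below is a stand-in (a uniform medium of index 1.5); in practice each ω(k) value would come from a separate Harminv run.

```python
def omega(k):
    """Stand-in band omega(k); replace with Harminv output at each k_point."""
    return k / 1.5  # uniform index-1.5 medium (assumed example)

# Centered finite difference for the group velocity vg = d(omega)/dk.
k0, dk = 0.8, 0.01
vg = (omega(k0 + dk) - omega(k0 - dk)) / (2 * dk)
print(round(vg, 6))
```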

How do I compute the time average of the harmonic fields?

For a linear system, you can use a ContinuousSource with force_complex_fields=True and time-step the fields until all transients have disappeared. Once the fields have reached steady state, the instantaneous intensity |E|²/2 or Poynting flux Re[E*×H]/2 is equivalent to the time average. If you don't use complex fields, then these are just the instantaneous values at a given time, and will oscillate. An alternative to time-stepping is the frequency-domain solver.

How do I set up an oblique planewave source?

A planewave incident at any angle can be generated typically by setting the amplitude function amp_func of a 1d/line source for a 2d cell or a 2d/planar source for a 3d cell, together with Bloch-periodic boundary conditions via k_point. For a 1d example, see Tutorial/Basics. There is also a Scheme version. For 2d, see Tutorial/Mode Decomposition. Additional examples are available for Python and Scheme.

The oblique planewave is incident at the specified angle for only a single frequency component of the source; each frequency component of a pulsed source corresponds to a different angle. This is a fundamental feature of FDTD simulations with Bloch-periodic boundaries and not of Meep per se. Thus, to simulate an incident planewave at multiple angles for a given frequency ω, you will need to do separate simulations with different values of k (k_point), since each pair (k,ω) specifying the Bloch-periodic boundaries and the source frequency produces a planewave at a different angle.

For more details, refer to Section 4.5 ("Efficient Frequency-Angle Coverage") in Chapter 4 ("Electromagnetic Wave Source Conditions") of Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology.
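The relationship between the incidence angle and k_point is simple trigonometry, since |k| = n·f in Meep units (no 2π factor). A minimal sketch; the helper name is ours, not part of the Meep API, and the angle is measured from the x axis:

```python
import math

def planewave_k_point(freq, theta_deg, n_index=1.0):
    """Bloch wavevector (in Meep units, no 2*pi factor) for a planewave of
    frequency freq incident at theta_deg degrees in a medium of index n_index."""
    theta = math.radians(theta_deg)
    kmag = n_index * freq              # |k| = n*f in Meep units
    return (kmag * math.cos(theta), kmag * math.sin(theta), 0.0)

kx, ky, kz = planewave_k_point(freq=0.6, theta_deg=30.0)
print(round(ky, 6))  # 0.3
```

The resulting tuple would be passed as the k_point (e.g., via meep.Vector3) along with a matching amp_func for the source.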

What is Meep's frequency-domain solver and how does it work?

Meep contains a frequency-domain solver that directly computes the steady-state fields produced in a geometry in response to a continuous-wave (CW) source, using an iterative linear solver instead of time-stepping. This works by using the FDTD timestep operation to formulate a linear system whose solution is the frequency-domain (steady-state) response; the solution can often be obtained in many fewer timestep operations than a time-domain run, while reusing the FDTD code almost without modification. For details, see Section 5.3 ("Frequency-domain solver") of Computer Physics Communications, Vol. 181, pp. 687-702, 2010.

This means that all of the features of the time-domain solver (e.g., arbitrary materials, symmetries, subpixel averaging, parallelization, etc.) are also available in the frequency-domain solver. For certain problems, such as cavities (e.g., ring resonators) with long-lived resonant modes, the frequency-domain solver converges much faster than the straightforward approach of simply running a long simulation until the transients have disappeared. Another benefit is that an arbitrary complex refractive index can be specified directly using the electric conductivity, without having to fit the data to a sum of Drude-Lorentz susceptibility terms.

For examples, see Tutorial/Frequency-Domain Solver and Tutorial/Mode Decomposition.

Should I expect linear speedup from the parallel Meep?

For a given computational grid, Meep divides the grid points roughly equally among the processors, and each process is responsible for all computations involving its "own" grid points (computing ε from the materials, timestepping the fields, accumulating Fourier transforms, computing far fields, etcetera). How much speedup this parallelization translates into depends on a number of factors, especially:

  • The ratio of communications to computation, and the speed of your network. During timestepping, each processor needs to communicate neighboring grid points with other processors, and if you have too few grid points per processor (or your network is too slow) then the cost of this communication could overwhelm the computational gains.
  • Load balancing: different portions of the grid may be more expensive than other portions, causing processors in the latter portions to sit idle while a few processors work on the expensive regions. For example, setting up the materials at the beginning is more expensive in regions with lots of objects or interfaces. Timestepping is more expensive in regions with Fourier-transformed flux planes. Computing far fields only uses the processors where the corresponding near fields are located.
  • If you write lots of fields to files, the parallel I/O speed (which depends on your network, filesystem, etc) may dominate.

Unless the gains from parallel computation outweigh the extra communications overhead, the parallel program will actually be slower than the serial one. This means, for example, that even if you have two or more physical processors, you won't benefit from parallelization until the problem is sufficiently large. In general, you will need large simulations to benefit from lots of processors. A rule of thumb is to keep doubling the number of processors until you no longer see much speedup.

Why are simulations involving Fourier-transformed fields slow?

The discrete-time Fourier transform (DTFT) of the fields, which is necessary for computing the Poynting flux, local density of states (LDOS), near-to-far field transformation, etc., is accumulated at every time step for every point in the flux region. The DTFT computation is parallelized, but only in the sense that each processor computes the DTFT fields at points in its own chunk of the grid. If the division of the grid among processors into approximately equal-sized chunks allocates most of the points where the DTFT fields are computed to one processor, the DTFT computation will not parallelize well.

Ideally, the parallelization should take the DTFT computation into account: the grid should be divided such that the flux region, if it is sufficiently expensive, is divided among the processors. The best approach would be some kind of adaptive load-balancing. However, nothing like this is currently implemented. Also, it is not possible in Meep to customize how the cell is divided among the processes.

A simple approach to reduce the cost of the DTFT computation is to reduce the number of frequency points. If you need high frequency resolution in a certain bandwidth, consider adding a second flux region just for that bandwidth, with as many points as you need there, and use a smaller number of frequency points over a broad bandwidth.

Another approach might be to change your structure so that the flux region is closer to the center of the grid, which will increase the likelihood that the parallelization will divide the grid in such a way as to split the flux region between different processors.

Does Meep support shared-memory parallelism?

Meep can run in parallel on a shared-memory machine using MPI. However, it doesn't yet take special advantage of shared memory using multithreading (issue #228).

Why does the amplitude of a dipole point source increase with resolution?

The field from a point source is singular — it blows up as you approach the source. At any finite resolution, this singularity is truncated to a finite value by the discretization but the peak field at the source location increases as you increase the resolution.

How does Meep deal with numerical dispersion?

Numerical dispersion can be analyzed and quantified analytically for a homogeneous medium. For details, see e.g. Chapter 4 ("Numerical Dispersion and Stability") of Computational Electrodynamics: The Finite-Difference Time-Domain Method (3rd edition). However, in practice numerical dispersion is rarely the dominant source of error in FDTD calculations, which almost always involve material inhomogeneities that give rise to much larger errors. Like other errors associated with the finite grid resolution, numerical dispersion decreases with resolution, so you can deal with it by increasing the resolution until convergence is obtained to the desired accuracy. In particular, the errors from numerical dispersion decrease quadratically with resolution (second-order accuracy, in the ordinary center-difference FDTD scheme). On the other hand, the errors introduced by discretization of material interfaces decrease only linearly with resolution, so they are almost always dominant. Meep can partially correct for these errors using subpixel averaging.
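The convergence rates quoted above can be checked numerically by estimating the order p from errors at two resolutions, assuming err ≈ C·resolution^(−p). A minimal sketch with made-up error values; the helper name is ours:

```python
import math

def convergence_order(err_coarse, err_fine, res_coarse, res_fine):
    """Estimate p from err ~ C * resolution**(-p) at two resolutions."""
    return math.log(err_coarse / err_fine) / math.log(res_fine / res_coarse)

# Errors shrinking by 4x when the resolution doubles indicate second order:
print(convergence_order(1e-2, 2.5e-3, 10, 20))  # ≈ 2.0
```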

How do I create a circularly-polarized planewave source in cylindrical coordinates?

A circularly-polarized planewave in cylindrical coordinates corresponds to E = (r̂ + iφ̂)exp(iφ). This can be created using a constant Er (radial) current source with amplitude=1 and a constant Ep (φ) current source with amplitude=0+1i, as well as m=1.

How do I compute S-parameters?

Meep contains a mode-decomposition feature which can be used to compute complex-valued S-parameters. An example is provided for a two-port network based on a silicon directional coupler in Tutorial/GDSII Import. An additional example is available for a waveguide mode converter.

Harminv is unable to find the resonant modes of my structure

There are six possible explanations for why Harminv could not find the resonant modes:

  1. The run time was not long enough, and the decay rate of the mode is so small that the Harminv data was mainly noise.
  2. The Harminv call was not wrapped in after_sources; if Harminv overlaps the period when the sources are turning on and off, it will get confused because the source fields are not exponentially decaying.
  3. The Harminv monitor is near a nodal point of the mode (e.g., in a symmetry plane).
  4. There are field instabilities in which the fields are actually blowing up; this may cause Harminv to return a negative quality factor.
  5. The decay rate of the mode is too fast; Harminv discards any modes with a quality factor less than 50 (and thus is unlikely to find any modes inside a metal).
  6. The PML overlaps the non-radiated/evanescent field and has introduced artificial absorption effects in the local density of states (LDOS).

In order to resolve two closely-spaced modes, in general it is preferable to run with a narrow bandwidth source around the frequency of interest to excite/analyze as few modes as possible and/or increase the run time to improve the frequency resolution.

Note: any real-valued signal consists of both positive and negative frequency components (with complex-conjugate amplitudes) in a Fourier domain decomposition into complex exponentials. Harminv usually is set up to find just one sign of the frequency, but occasionally converges to a negative-frequency component as well; these are just as meaningful as the positive frequencies.

How does k_point define the phase relation between adjacent unit cells?

If you set the k_point to any meep.Vector3, the structure will be periodic in all directions. (There is a lower-level field::set_boundary function that allows you to set individual boundary conditions independently, however.)

A periodic structure does not imply periodic fields. The value of the k_point determines the phase relation between the fields and sources in adjacent periodic unit cells. In general, if you have period (Lx,Ly) and you are looking at the (n,m) unit cell it has a phase of exp(2πi * (kx * Lx * n + ky * Ly * m)). For example, if you set the k_point to meep.Vector3(0,0,0), that means the fields/sources are periodic: the phase is unity from one cell to the next. If you set the k_point to meep.Vector3(1,0,0) it means that there is a phase difference of exp(2πi * Lx) between adjacent cells in the x direction. This is known as a Bloch wave.

Note: in any cell direction where there is a PML, the boundary conditions are mostly irrelevant. For example, if there is a PML in front of a periodic boundary, the periodicity doesn't matter because the field will have decayed almost to zero by the time it "wraps around" to the other side of the cell.

Should I include the 2π factor when defining the frequency or the wavevector?

No. Frequency inputs and outputs in Meep are the ordinary frequency f, not the angular frequency ω=2πf. Similarly, spatial wavevectors k (e.g. for Bloch-periodic boundary conditions) are specified without the 2π factor, so that the spatial dependence is exp(2πikx). For example, if you specify a frequency=0.3 in a source, then the time-dependence of the source is exp(-2πi0.3t), where time t is also in Meep units. Similarly, if you specify k_point = meep.Vector3(0.4,0,0) in the interface, then the phase factor between adjacent unit cells with period L in the x direction is exp(2πi0.4L).
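The convention can be sketched with a two-line helper (the function name is ours, not part of the Meep API): the phase factor between adjacent unit cells of period L is exp(2πi·k·L), with k specified without the 2π factor.

```python
import cmath
import math

def bloch_phase(k, L):
    """Phase factor between adjacent unit cells of period L for a Bloch
    wavevector k given in the Meep convention (no 2*pi factor)."""
    return cmath.exp(2j * math.pi * k * L)

# k_point = (0.4, 0, 0) with period L = 1 in x:
phase = bloch_phase(0.4, 1.0)
print(round(abs(phase), 6))           # 1.0 (a pure phase factor)
print(round(cmath.phase(phase), 6))   # ≈ 2.513274 (= 2*pi*0.4)
```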

Can Meep simulate time-varying structures?

Yes. The most general method is to re-initialize the material at every timestep by calling field::set_materials or set_materials_from_geometry in C++, or simulation.set_materials in Python. However, this is potentially quite slow. One alternative is a function field::phase_in_material that allows you to linearly interpolate between two precomputed structures, gradually transitioning over a given time period; we hope to have a more general version of this functionality in the future (issue #207).

How do I model a moving point charge?

You can use a ContinuousSource with a very large wavelength (i.e., nearly zero frequency), which is analogous to a direct current. You will also need to create a run function that calls change_sources and makes the center property of the point source time-dependent. As an example, the following image demonstrates Cherenkov radiation involving a moving point charge with superluminal phase velocity (see examples/cherenkov-radiation.py).

When outputting the dielectric function to a file, I don't see any dispersive materials

Only the real, frequency-independent part of ε/μ is written to an HDF5 file. As an example, many of the dispersive materials in the materials library which have a broadband, complex, refractive index will appear as ε=1 in the output file. Thus, in order to verify the material geometry during debugging using visualization tools, etc., you may have to artificially adjust the epsilon value.

Does Meep support grids with non-uniform discretization?

No. Meep does not support nonuniform grids (i.e., spatially varying resolution) or non-orthogonal grids. One possible workaround, which does not require changes to the underlying code but is not implemented as a built-in feature, is to use a coordinate transformation to selectively increase the resolution in a given region of the cell. This is possible using transformation optics, which involves a change of materials: an arbitrary coordinate transformation can be mapped to Cartesian coordinates with transformed ε/μ. For more details, see the notes Coordinate Transformation and Invariance in Electromagnetism and Variable Resolution in Meep using this technique.
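The standard transformation-optics result referred to above can be stated compactly: under a coordinate transformation with Jacobian matrix J, Maxwell's equations keep their form provided the materials are replaced as follows (this is the general textbook formula, not a Meep-specific API):

```latex
\varepsilon' = \frac{J \,\varepsilon\, J^{T}}{\det J}, \qquad
\mu' = \frac{J \,\mu\, J^{T}}{\det J}, \qquad
J_{ij} = \frac{\partial x'_{i}}{\partial x_{j}}
```

In general this produces anisotropic ε and μ even when the original materials are isotropic, which is why a change of materials is required.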

Can Meep model electrostatic effects?

In principle, this corresponds to the limit as the frequency goes to zero or the wavelength goes to infinity. However, a time-domain simulation is rather inefficient for electrostatic (or magnetostatic) calculations; this includes lumped circuit models involving resistance, voltage, capacitance, etc. In this regime, you are usually much better off directly solving e.g. Poisson's equation to obtain the fields from a given charge distribution. There are many available Poisson solvers based on finite or boundary element methods. In Meep, probably the best you can do is to use a source with a very low frequency and a gradual turn-on specified by the width parameter of ContinuousSource.

Why doesn't the continuous-wave (CW) source produce an exact single-frequency response?

The ContinuousSource does not produce an exact single-frequency response due to its finite turn-on time which is described by a hyperbolic-tangent function. In the asymptotic limit, the resulting fields are the single-frequency response; it's just that if you Fourier transform the response over the entire simulation you will see a finite bandwidth due to the turn-on.

If the width is 0 (the default) then the source turns on sharply which creates high-frequency transient effects. Otherwise, the source turns on with a shape of (1 + tanh(t/width - slowness))/2. That is, the width parameter controls the width of the turn-on. The slowness parameter controls how far into the exponential tail of the tanh function the source turns on. The default slowness of 3.0 means that the source turns on at (1 + tanh(-3))/2 = 0.00247 of its maximum amplitude. A larger value for slowness means that the source turns on even more gradually at the beginning (i.e., farther in the exponential tail). The effect of varying the two parameters width and slowness independently in the turn-on function is shown below.
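The quoted turn-on expression is easy to verify directly; a minimal sketch (the function name is ours) that reproduces the 0.00247 figure for the default slowness:

```python
import math

def turn_on(t, width, slowness=3.0):
    """ContinuousSource turn-on envelope, (1 + tanh(t/width - slowness))/2,
    per the expression quoted above."""
    return (1 + math.tanh(t / width - slowness)) / 2

print(round(turn_on(0.0, width=5.0), 5))  # 0.00247: initial amplitude at default slowness
```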

How do I access or visualize the structure, fields, or sources in a subregion of the cell?

You can use the routines get_array, get_dft_array, or get_source_slice to obtain the fields/sources and get_array_metadata or get_dft_array_metadata to obtain information for the geometric slice. Visualization in 3d can be done with Mayavi. For an example, see Tutorial/Basics.

To output the data to an HDF5 file, you can use the in_volume or in_point routines as part of your run function. For example, to restrict the output to a line, you could use: meep.in_volume(meep.Volume(center=meep.Vector3(0,0,0), size=meep.Vector3(10,0,0)), meep.output_dpwr) which outputs ε|E|2 along a line of length 10 in the x direction centered at (0,0,0). You can even wrap this statement in to_appended("line.h5", ...) to output the intensity along the line as a function of time to a 2d HDF5 dataset. This would enable you to plot intensity vs. time and space as a 2d color image.

Can Meep be used to investigate lasing phenomena?

Yes. More specifically, Meep can be used to model saturable gain and absorption via multilevel atomic susceptibility. This feature may be used to investigate optically-pumped lasing phenomena such as Raman lasers. For details, see Materials/Saturable Gain and Absorption.

Does Meep support adjoint-based optimization?

Not currently but work is underway to add support for this feature with expected release in early 2019 (issue #600).