FAQ#
The following are frequently asked questions grouped into ten categories: General, Installation, Physics, Sources, Fields, Materials, Structures, Subpixel Averaging, Performance, and Other.
- General
- Installation
- Physics
    - How does the current amplitude relate to the resulting field amplitude?
    - How is the source current defined?
    - How do I compute the local density of states (LDOS) in a lossy material?
    - How do I set the imaginary part of ε?
    - Why does my simulation diverge if the permittivity ε is less than 0?
    - Why are there strange peaks in my reflectance/transmittance spectrum when modeling planar or periodic structures?
    - How do I model the solar radiation spectrum?
    - Are complex fields physical?
    - How do I model incoherent spontaneous/thermal emission?
- Usage: Sources
    - How do I create an oblique planewave source?
    - How do I create a focused beam with a Gaussian envelope?
    - How do I create a circularly polarized planewave source in cylindrical coordinates?
    - How do I model a moving point charge?
    - Why doesn't the continuous-wave (CW) source produce an exact single-frequency response?
    - Why does the amplitude of a point dipole source increase with resolution?
    - Is a narrow-bandwidth Gaussian pulse considered the same as a continuous-wave (CW) source?
    - How do I create a chirped pulse?
- Usage: Fields
    - Why are the fields blowing up in my simulation?
    - How do I compute the steady-state fields?
    - How do I compute S-parameters?
    - Harminv is unable to find the resonant modes of my structure
    - How do I compute the effective index of an eigenmode of a lossy waveguide?
    - How do I compute the group velocity of a mode?
    - How do I compute the time average of the harmonic fields?
    - Why are the fields not being absorbed by the PML?
    - How do I compute the modes of a non-orthogonal (i.e., triangular) lattice?
    - For calculations involving Fourier-transformed fields, why should the source be a pulse rather than a continuous wave?
    - How does k_point define the phase relation between adjacent unit cells?
    - How do I compute the integral of the energy density over a given region?
    - How do I compute the energy density for a dispersive material?
    - How do I output the angular fields in cylindrical coordinates?
    - How does the use of symmetry affect nonlinear media?
- Usage: Materials
- Usage: Structures
- Usage: Subpixel Averaging
- Usage: Performance
    - Checking convergence
    - Should I expect linear speedup from the parallel Meep?
    - Why are simulations involving Fourier-transformed fields slow?
    - Does Meep support shared-memory parallelism?
    - Why does the time-stepping rate fluctuate erratically for jobs running on a shared-memory system?
    - What is an estimate of the memory requirements of a simulation given only its cell size and resolution?
    - Can Meep be compiled to run on graphics processing units (GPUs)?
- Usage: Other
    - Is there a Python interface?
    - What is a good rule of thumb for the grid resolution?
    - What is Meep's frequency-domain solver and how does it work?
    - Why are my reflectance/transmittance values less than zero and/or greater than one?
    - How does Meep deal with numerical dispersion?
    - Should I include the 2π factor when defining the frequency or the wavevector?
    - Does Meep support grids with non-uniform discretization?
    - How do I access the structure, fields, or sources in a subregion/slice of the cell?
    - How do I compute the absorbed power in a local subregion of the cell?
    - What happens if I specify an output volume that extends beyond a cell with periodic boundaries?
    - Is it possible to specify the boundary conditions independently?
    - How do I model a 2d structure involving an out-of-plane wavevector?
    - Can Meep model electrostatic effects?
    - Can Meep model lasing phenomena?
    - Can Meep model Raman scattering?
    - Does Meep support photonics inverse design?
General#
What is Meep?#
Meep is a free and open-source software package for electromagnetics simulation via the finite-difference time-domain (FDTD) method spanning a broad range of applications. The name Meep is an acronym for MIT Electromagnetic Equation Propagation.
Who are the developers of Meep?#
Meep was originally developed as part of graduate research at MIT. The project has been under continuous development for nearly 20 years. It is currently maintained by an active developer community on GitHub.
Where can I ask questions regarding Meep?#
The Discussions page on GitHub can be used to ask questions regarding setting up simulations, analyzing results, installation, etc. Bug reports and new feature requests should be filed as an issue on GitHub. However, do not use issues as a general help desk if you do not understand something (use the Discussions instead).
How can I contribute to the Meep project?#
Pull requests involving bug fixes, new features, and general improvements are welcome and can be made to the master branch on GitHub. This includes tweaks, revisions, and updates to this documentation, generated from markdown, which is also part of the source repository.
Is there a technical reference for Meep?#
Yes. The technical details of Meep's inner workings are described in the peer-reviewed publication MEEP: A flexible free-software package for electromagnetic simulations by the FDTD method, Computer Physics Communications, Vol. 181, pp. 687-702 (2010) (pdf). Additional information is provided in the book Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology in Chapters 4 ("Electromagnetic Wave Source Conditions"), 5 ("Rigorous PML Validation and a Corrected Unsplit PML for Anisotropic Dispersive Media"), 6 ("Accurate FDTD Simulation of Discontinuous Materials by Subpixel Smoothing"), and 20 ("MEEP: A Flexible Free FDTD Software Package"). A video presentation and slides as well as a podcast are also available.
Where can I find a list of projects which have used Meep?#
For a list of more than 2500 published works which have used Meep, see the Google Scholar citation page as well as that for the Meep manuscript and the subpixel smoothing reference. For examples based on technology applications, see this projects page.
Can I access Meep in the public cloud?#
Yes. Using a virtual instance running Ubuntu, you can install the Conda package for Meep.
Installation#
Where can I install Meep?#
Meep runs on any Unix-like operating system, such as Linux, macOS, and FreeBSD, from notebooks to desktops to supercomputers. Conda packages of the latest released version are available for Linux and macOS. Installing Meep from the source code requires some understanding of Unix, especially to install the various dependencies. Installation shell scripts are available for Ubuntu 16.04 and 18.04 and macOS Sierra.
Can I install Meep on Windows machines?#
Yes. For Windows 10, you can install the Ubuntu terminal as an app which is based on the Windows Subsystem for Linux framework and then follow the instructions for obtaining the Conda packages (recommended) or building from source. Support for visualization is enabled using a browser-based Jupyter notebook which can also be installed via the Ubuntu terminal. For Windows 8 and older versions, you can use the free Unix-compatibility environment Cygwin following these instructions.
Are there precompiled binary packages for Ubuntu?#
Yes. Ubuntu and Debian packages can be obtained via the package manager APT as described in Download. However, the Meep packages for Ubuntu 16.04 (serial and parallel) and 18.04 (serial and parallel) are for version 1.3 (September 2017) which is out of date. The Meep package for Ubuntu is in the process of being updated and will likely appear in Ubuntu 19.10 as derived from the unstable Debian package. In the meantime, since the Scheme interface is no longer being supported and has been replaced by the Python interface, you can use the Conda packages.
Guile is installed, but configure complains that it can't find guile#
With most Linux distributions as well as Cygwin, packages like Guile are split into two parts: a `guile` package that just contains the libraries and executables, and a `guile-dev` or `guile-devel` package that contains the header files and other things needed to compile programs using Guile. Usually, the former is installed by default but the latter is not. You need to install both, which means that you probably need to install `guile-dev`. Similarly for any other library packages needed by Meep.
Physics#
How does the current amplitude relate to the resulting field amplitude?#
There is no simple formula relating the input current amplitude (J in Maxwell's equations) to the resulting field amplitude (E) etcetera, even at the same point as the current. The exact same current will produce a different field and radiate a different total power depending upon the surrounding materials/geometry, and depending on the frequency. This is a physical consequence of the geometry's effect on the local density of states (LDOS); it can also be thought of as feedback from reflections on the source. A classic example is an antenna in front of a ground plane, which radiates very different amounts of power depending on the distance between the antenna and the plane (half wavelength vs. quarter wavelength, for example). Alternatively, if you put a current source inside a perfect electric conductor, the resulting field will be zero. Also, as the frequency of the current increases, the amplitude of the resulting field will also increase. (This is related to Rayleigh scattering, which explains why the sky is blue: scattered power increases with frequency; alternatively, the density of states increases as the frequency to the power d−1, i.e. as ω^(d−1), in d dimensions.)
For a leaky resonant mode where the fields are spatially confined and decaying away exponentially with time, the power expended by a dipole source at a given frequency and position is proportional to the ratio of the quality factor (Q) to the modal volume (V). This is known as Purcell enhancement of the LDOS: the same current source in a higher-Q cavity emits more power if the coupling to the mode is the same.
(On the other hand, if you were to put in a dipole source with a fixed voltage, instead of a fixed current, you would get less power out with higher Q. For an antenna, the Purcell enhancement factor is proportional to its radiation resistance R. If you fix the current I, then the power I²R increases with resistance, whereas if you fix the voltage V then the power V²/R decreases with resistance.)
For a mathematical description, see Section 4.4 ("Currents and Fields: The Local Density of States") in Chapter 4 ("Electromagnetic Wave Source Conditions") of Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology.
If you are worried about this, then you are probably setting up your calculation in the wrong way. Especially in linear materials, the absolute magnitude of the field is useless; the only meaningful quantities are dimensionless ratios like the fractional transmittance: the transmitted power relative to the transmitted power in some reference calculation. Almost always, you want to perform two calculations, one of which is a reference, and compute the ratio of a result in one calculation to the result in the reference. For nonlinear calculations, see Units and Nonlinearity.
How is the source current defined?#
The source current in Meep is defined as a free-charge current J in Maxwell's equations. Meep does not simulate the driving force behind this free-charge current, nor does the current have to be placed in a conductor. Specifying a current means that somehow you are shaking a charge at that point (by whatever means, Meep doesn't care) and you want to know the resulting fields. In a linear system, multiplying J by 2 results in multiplying the fields by 2.
Note that the time integral of the free current J is a free-charge polarization P, so if this integral ∫J dt is nonzero when the current has turned off, it corresponds to an electrostatic free-charge distribution that is left in the system, which produces electrostatic electric fields (depending also on the surrounding medium, of course). For example, the Gaussian current source in Meep can have a slightly nonzero integral, and the signature of this is that a slight electrostatic field may be left over after the simulation has run for a long time. This has no effect on Fourier-transformed (DFT) field quantities in Meep at nonzero frequencies, but it can occasionally be an annoyance.
In order to get rid of this residual electrostatic field, you can set the `is_integrated` parameter of the source time-dependence (e.g. `GaussianSource`) to `True` (default is `False`): what this does is to make the source term equal to the polarization P (the time integral of J), which enters Maxwell's equations via the relationship between D and E, and thus ensures that when the source is zero, the free polarization is also zero. This is demonstrated in the figure below showing a plot of the electric energy over the entire cell versus simulation time for a point dipole in 3d vacuum surrounded by PML: for `is_integrated=False`, we see a slight residual electric field at large times, which vanishes for `is_integrated=True`.
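For example, a minimal sketch of this setup (the cell size, frequency, and resolution below are placeholder values):

```py
import meep as mp

# Point dipole in 3d vacuum surrounded by PML; is_integrated=True makes the
# source term the time integral of the current, removing the residual static field.
sim = mp.Simulation(
    cell_size=mp.Vector3(6, 6, 6),
    resolution=10,
    boundary_layers=[mp.PML(1.0)],
    sources=[mp.Source(mp.GaussianSource(frequency=1.0, fwidth=0.2, is_integrated=True),
                       component=mp.Ez,
                       center=mp.Vector3())],
)
sim.run(until_after_sources=50)
```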
In the interface, the source currents are labeled Ex or Hy etc. according to which components of the electric/magnetic fields they correspond to.
How do I compute the local density of states (LDOS) in a lossy material?#
If you put a point source inside a lossy material (e.g., a Lorentz-Drude metal), then the power expended by a dipole diverges as you increase the resolution. In the limit of infinite resolution, infinite power is absorbed. LDOS is not well defined for points inside of a lossy material.
However, LDOS is perfectly well defined for points outside of a lossy material. For example, you can choose a point outside of a lossy object and calculate the LDOS, and it will converge to a finite value as you increase the resolution. For an example, see Tutorial/Local Density of States.
How do I set the imaginary part of ε?#
If you only care about the imaginary part of ε in a narrow bandwidth around some frequency ω, you should set it by using the electric conductivity. If you care about the imaginary part of ε over a broad bandwidth, then for any physical material the imaginary part will be frequency-dependent and you will have to fit the data to a Drude-Lorentz susceptibility model.
Meep doesn't implement a frequency-independent complex ε. Not only is this unphysical, but it also leads to both exponentially decaying and exponentially growing solutions in Maxwell's equations from positive- and negative-frequency Fourier components, respectively. Thus, it cannot be simulated in the time domain.
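As a rough sketch of the narrow-bandwidth approach (the frequency and permittivity values are placeholders), the conductivity can be chosen so that the desired complex ε is reproduced at the frequency of interest, assuming Meep's conductivity convention ε(1 + iσ_D/ω) with ω = 2πf:

```py
import math
import meep as mp

fcen = 1.0                 # frequency of interest in Meep units (placeholder)
eps_re, eps_im = 3.4, 0.1  # desired complex permittivity eps_re + 1j*eps_im at fcen

# choose sigma_D so that eps_re*(1 + 1j*sigma_D/(2*pi*fcen)) = eps_re + 1j*eps_im
lossy_medium = mp.Medium(epsilon=eps_re,
                         D_conductivity=2 * math.pi * fcen * eps_im / eps_re)
```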
Why does my simulation diverge if the permittivity ε is less than 0?#
Maxwell's equations have exponentially growing solutions for a frequency-independent negative ε. For any physical medium with negative ε, there must be dispersion, and you must likewise use dispersive materials in Meep to obtain negative ε at some desired frequency. The requirement of dispersion to obtain negative ε follows from the Kramers–Kronig relations, and also follows from thermodynamic considerations that the energy in the electric field must be positive. For example, see Electrodynamics of Continuous Media by Landau, Pitaevskii, and Lifshitz. At an even more fundamental level, it can be derived from passivity constraints as shown in Physical Review A, Vol. 90, 023847 (2014).
If you solve Maxwell's equations in a homogeneous-ε material at some real wavevector k, you get a dispersion relation ω² = c²|k|²/ε. If ε is positive, there are two real solutions ω = ±c|k|/√ε, giving oscillating solutions. If ε is negative, there are two imaginary solutions ω, corresponding to exponentially decaying and exponentially growing solutions from any current source. These solutions can always be spatially decomposed into a superposition of real-k values via a spatial Fourier transform.
If you do a simulation of any kind in the time domain (not just FDTD), you pretty much can't avoid exciting both the decaying and the growing solutions. This is not a numerical instability; it is a real solution of the underlying equations for an unphysical material.
See Materials for how to include dispersive materials which can have negative ε and loss.
If you have nondispersive negative ε and negative μ everywhere, the case of a negative-index material, then the simulation is fine, but our PML implementation doesn't currently support this situation (unless you edit the source code to flip the sign), and in any case such simulations are trivially equivalent to positive-index simulations under the coordinate inversion (x,y,z) → (−x,−y,−z). However, at the boundary between nondispersive negative- and positive-index materials, you will encounter instabilities: because of the way Maxwell's equations are discretized in FDTD, ε and μ are discretized on different spatial grids, so you will get a half-pixel or so where ε and μ have opposite signs at the boundary between negative and positive indices, which will cause the simulation to diverge. But of course, any physical negative-index metamaterial also involves dispersion.
Note also that, as a consequence of the above analysis, ε must go to a positive value in the ω → ±∞ limit to get non-diverging solutions of Maxwell's equations. So the ε∞ in your dispersion model must be positive.
Why are there strange peaks in my reflectance/transmittance spectrum when modeling planar or periodic structures?#
There are two possible explanations: (1) the simulation run time may be too short or the resolution may be too low and thus your results have not sufficiently converged, or (2) you may be using a higher-dimensional cell with multiple periods (a supercell) which introduces unwanted additional modes due to band folding. One indication that band folding is present is that small changes in the resolution produce large and unexpected changes in the frequency spectra. Modeling flat/planar structures typically requires a 1d cell and periodic structures a single unit cell in 2d/3d. For more details, see Section 4.6 ("Sources in Supercells") in Chapter 4 ("Electromagnetic Wave Source Conditions") of Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology. Note that a 1d cell must be along the z direction with only the Ex and Hy field components permitted.
How do I model the solar radiation spectrum?#
For simulations involving solar radiation, the reflectance/transmittance spectra are computed using the standard procedure involving two separate calculations (i.e., the first to obtain the input power and the second for the scattered power). Since typical solar-cell problems are linear, the reflected/transmitted power can then be obtained by simply multiplying the reflectance/transmittance (a fractional quantity) by the solar spectrum (a dimensionful quantity).
In general, the accuracy of any type of calculation involving a weighted input/output spectrum can be improved, and the size of the simulation reduced, using a coarse, unequally spaced frequency grid which is passed as an array/list to `add_flux`. This involves precomputing the frequency points and weights given the air mass spectrum, as demonstrated in the notebook Solar-weighted Gaussian Quadrature.
Are complex fields physical?#
No. Unlike quantum mechanics, complex fields in classical electromagnetics are not physical; they are simply a convenient mathematical representation of sinusoidal waves using Euler's formula, sometimes called a phasor. In a linear system, one can always take the real part at the end of the computation to obtain a physical result. When there are nonlinearities, the physical interpretation is much more non-obvious.
Note: specifying a complex `amplitude` for the `Source` object does not automatically yield complex fields. Unless the parameter `force_complex_fields=True` is specified, only the real part of the source is used. The complex amplitude is just a phase shift of the real sinusoidal source.
How do I model incoherent spontaneous/thermal emission?#
Semiclassically, spontaneous or thermal emission can be modeled simply as random dipole current sources. One direct way to express this in Meep is to use a Monte Carlo method: take an ensemble average of multiple runs involving a collection of random dipole sources. See Tutorial/Custom Source/Stochastic Dipole Emission in Light Emitting Diodes. For example, to model thermal radiation in linear materials, you can use a custom source function to input white-noise sources with the appropriate noise spectrum included via postprocessing by scaling the output spectrum as explained in these references: far field and near field cases. There is also a noisy Lorentzian material that can be used to model thermal fluctuations even more directly as noise in the materials themselves.
Usage: Sources#
How do I create an oblique planewave source?#
An arbitrary-angle planewave with wavevector k can be generated in two different ways: (1) by setting the amplitude function `amp_func` to exp(ik⋅r) for a (d−1)-dimensional source in a d-dimensional cell (i.e., line source in 2d, planar source in 3d), or (2) via the eigenmode source. These two approaches generate identical planewaves, with the only difference being that the planewave produced by the eigenmode source is unidirectional. In both cases, generating an infinitely extended planewave requires that: (1) the source span the entire length of the cell and (2) the Bloch-periodic boundary condition `k_point` be set to k.
The first approach involving the amplitude function is based on the principle that, just as you can create a directional antenna by a phased array, you can create a directional source by setting the phase of the current appropriately. Alternatively, by specifying the wavevector of the fields in d−1 directions of a d-dimensional cell, the wavevector in the remaining direction is automatically defined by the frequency ω via the dispersion relation for a planewave in a homogeneous medium with index n: ω = c|k|/n. Note that for a pulsed source (unlike a continuous wave), each frequency component produces a planewave at a different angle. Also, the fields do not have to be complex (which would double the storage requirements).
For an example of the first approach in 1d, see Tutorial/Basics/Angular Reflectance of a Planar Interface (Scheme version). For 2d, see Tutorial/Mode Decomposition/Reflectance and Transmittance Spectra for Planewave at Oblique Incidence (Scheme version) as well as examples/pw-source.py (Scheme version).
For an example of the second approach, see Tutorial/Eigenmode Source/Planewaves in Homogeneous Media (Scheme version).
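Below is a rough 2d sketch of the first (amplitude-function) approach, loosely following examples/pw-source.py; the frequency, angle, and cell dimensions are placeholder values:

```py
import cmath
import math
import meep as mp

fcen = 1.0                    # source frequency (placeholder)
n = 1.0                       # index of the homogeneous medium
theta = math.radians(40)      # rotation angle of the planewave about the z axis (placeholder)
k = mp.Vector3(fcen * n).rotate(mp.Vector3(z=1), theta)   # planewave wavevector (Meep units)

def pw_amp(k, x0):
    """Amplitude function exp(i 2π k·(x+x0)) which phases the line source like a planewave."""
    def _amp(x):
        return cmath.exp(1j * 2 * math.pi * k.dot(x + x0))
    return _amp

src_center = mp.Vector3(-5, 0)
sources = [mp.Source(mp.ContinuousSource(frequency=fcen),
                     component=mp.Ez,
                     center=src_center,
                     size=mp.Vector3(0, 10),        # line source spanning the full cell height
                     amp_func=pw_amp(k, src_center))]

sim = mp.Simulation(cell_size=mp.Vector3(14, 10),
                    resolution=10,
                    sources=sources,
                    k_point=k,                       # Bloch periodicity matching the planewave
                    boundary_layers=[mp.PML(1.0, direction=mp.X)])
```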
How do I create a focused beam with a Gaussian envelope?#
A focused beam with a Gaussian envelope can be created using the `GaussianBeamSource` as demonstrated in this script. The following are four snapshots of the resulting field profile generated using a line source (red line) in 2d for different values of the beam waist radius/width (`beam_w0`), propagation direction (`beam_kdir`), and focus (`beam_x0`):
Note: beams in a homogeneous material do not have a fixed width in Maxwell's equations; they always spread out during propagation (i.e., diffraction). The numerical aperture (NA) of a Gaussian beam with waist radius w0 and vacuum wavelength λ within a medium of index n is n⋅sin(λ/(πnw0)).
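A minimal 2d sketch (all numerical values are placeholders) using `GaussianBeamSource`:

```py
import meep as mp

sources = [mp.GaussianBeamSource(src=mp.ContinuousSource(frequency=1.0),
                                 center=mp.Vector3(-4, 0),
                                 size=mp.Vector3(0, 8),          # line source
                                 beam_x0=mp.Vector3(3, 0),       # focus position relative to the source center
                                 beam_kdir=mp.Vector3(1, 0),     # propagation direction
                                 beam_w0=1.0,                    # beam waist radius
                                 beam_E0=mp.Vector3(0, 0, 1))]   # polarization

sim = mp.Simulation(cell_size=mp.Vector3(10, 10),
                    resolution=20,
                    boundary_layers=[mp.PML(1.0)],
                    sources=sources)
sim.run(until=20)
```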
How do I create a circularly polarized planewave source in cylindrical coordinates?#
A circularly polarized planewave propagating in the z direction corresponds to E = (x̂ + iŷ)exp(ikz), which in cylindrical coordinates is (r̂ + iφ̂)exp(iφ)exp(ikz). This can be created using a constant Er (radial) current source with `amplitude=1` and a constant Ep (azimuthal) current source with `amplitude=0+1i` as well as `m=1`.
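A rough sketch in cylindrical coordinates (the cell size, frequency, and source position are placeholders):

```py
import meep as mp

fcen = 1.0
r_cell, z_cell = 5.0, 10.0

# E_r with amplitude 1 and E_phi with amplitude 1j, together with m=1, give circular polarization.
sources = [mp.Source(mp.ContinuousSource(fcen), component=mp.Er,
                     center=mp.Vector3(0.5 * r_cell, 0, -4), size=mp.Vector3(r_cell, 0, 0),
                     amplitude=1),
           mp.Source(mp.ContinuousSource(fcen), component=mp.Ep,
                     center=mp.Vector3(0.5 * r_cell, 0, -4), size=mp.Vector3(r_cell, 0, 0),
                     amplitude=1j)]

sim = mp.Simulation(cell_size=mp.Vector3(r_cell, 0, z_cell),
                    dimensions=mp.CYLINDRICAL,
                    m=1,
                    resolution=20,
                    boundary_layers=[mp.PML(1.0, direction=mp.Z)],
                    sources=sources)
```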
How do I model a moving point charge?#
You can use an instantaneous `ContinuousSource` with a large wavelength (or nearly zero frequency). This is analogous to a direct current. You will also need to create a run function which contains `change_sources` and specify the `center` property of the point source to be time dependent. As an example, the following image demonstrates Cherenkov radiation involving a moving point charge with superluminal phase velocity (see examples/cherenkov-radiation.py).
Why doesn't the continuous-wave (CW) source produce an exact single-frequency response?#
The ContinuousSource does not produce an exact single-frequency response due to its finite turn-on time which is described by a hyperbolic-tangent function. In the asymptotic limit, the resulting fields are the single-frequency response; it's just that if you Fourier transform the response over the entire simulation you will see a finite bandwidth due to the turn-on.
If the `width` is 0 (the default), then the source turns on sharply which creates high-frequency transient effects. Otherwise, the source turns on with a shape of (1 + tanh(t/`width` − `slowness`))/2. That is, the `width` parameter controls the width of the turn-on. The `slowness` parameter controls how far into the exponential tail of the tanh function the source turns on. The default `slowness` of 3.0 means that the source turns on at (1 + tanh(−3))/2 ≈ 0.0025 of its maximum amplitude. A larger value for `slowness` means that the source turns on even more gradually at the beginning (i.e., farther in the exponential tail). The effect of varying the two parameters `width` and `slowness` independently in the turn-on function is shown below.
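For instance, a CW source with a gradual turn-on might look like this (placeholder values):

```py
import meep as mp

# The turn-on envelope is (1 + tanh(t/width - slowness))/2: here the source ramps up
# over roughly 10 time units and starts ~3 widths into the exponential tail.
src_time = mp.ContinuousSource(frequency=0.15, width=10, slowness=3.0)
source = mp.Source(src_time, component=mp.Ez, center=mp.Vector3())
```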
Note: even if you have a continuous wave (CW) source at a frequency ω, the time dependence of the electric field after transients have died away won't necessarily be cos(ωt), because in general there is a phase difference between the current and the resulting fields. In general, for a CW source you will eventually get fields proportional to cos(ωt − φ) for some phase φ which depends on the field component, the source position, and the surrounding geometry.
Why does the amplitude of a point dipole source increase with resolution?#
The field from a point source is singular — it blows up as you approach the source. At any finite grid resolution, this singularity is truncated to a finite value by the grid discretization but the peak field at the source location increases as you increase the resolution.
Is a narrow-bandwidth Gaussian pulse considered the same as a continuous-wave (CW) source?#
No. A narrow-bandwidth Gaussian is still a Gaussian: it goes to zero at both the beginning and end of its time profile, unlike a continuous-wave (CW) source which oscillates indefinitely (but has a finite turn-on). Assuming you have linear materials, you should get the same results whether you use a narrow- or broad-band pulse and look at a single frequency component of the Fourier transform via e.g. `dft_fields`. The broad-band pulse has the advantage that it requires a shorter simulation for the fields to decay away, due to the Fourier uncertainty principle. Note also that an almost zero-bandwidth Gaussian will produce high-frequency spectral components due to its abrupt turn-on and turn-off, which are poorly absorbed by PML.
How do I create a chirped pulse?#
You can use a `CustomSource` to define an arbitrary time profile for the `Source` object. As an example, the following snapshots of the out-of-plane electric field demonstrate a linearly chirped pulse planewave propagating from left to right with higher frequencies (smaller wavelengths) at the front (i.e., a down-chirp). For the simulation script, see examples/chirped_pulse.py.
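A minimal sketch of such a `CustomSource` (the carrier frequency, envelope width, and chirp rate below are placeholder values, not those of the linked example):

```py
import numpy as np
import meep as mp

fcen = 1.0     # carrier frequency (placeholder)
tau = 10.0     # Gaussian envelope width (placeholder)
chirp = -0.5   # chirp rate; a negative value produces a down-chirp (placeholder)

def chirped(t):
    """Gaussian envelope times a carrier whose instantaneous frequency varies linearly in time."""
    return np.exp(1j * 2 * np.pi * fcen * t) * np.exp(-t**2 / (2 * tau**2) + 1j * chirp * t**2)

sources = [mp.Source(src=mp.CustomSource(src_func=chirped, end_time=100),
                     component=mp.Ez,
                     center=mp.Vector3(-4),
                     size=mp.Vector3(0, 8))]
```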
Usage: Fields#
Why are the fields blowing up in my simulation?#
Instability in the fields is likely due to one of five causes: (1) PML overlapping dispersive materials based on a Drude-Lorentzian susceptibility in the presence of backward-wave modes (fix: replace the PML with an Absorber), (2) the frequency of a Lorentzian susceptibility term is too high relative to the grid discretization (fix: increase the `resolution` and/or reduce the `Courant` factor), (3) a material with a wavelength-independent negative real permittivity (fix: fit the permittivity to a broadband Drude-Lorentzian susceptibility), (4) a grid voxel contains more than one dielectric interface (fix: turn off subpixel averaging), or (5) a material with a wavelength-independent refractive index between 0 and 1 (fix: reduce the `Courant` factor; alternatively, fit the permittivity to a broadband Drude-Lorentzian susceptibility).
As another possibility, if an entire boundary consists of a material whose permittivity ε has a negative real part (with positive μ), or which is sufficiently lossy, then realize that you don't need a PML at all: all waves are exponentially decaying in such a material, so you just need to make the material sufficiently thick at the boundary.
Note: when the fields blow up, the CPU slows down due to floating-point exceptions in IEEE 754. Also, Meep automatically checks the fields at the cell origin after every timestep and aborts the simulation if the electric energy density has diverged.
How do I compute the steady-state fields?#
The "steady-state" response is defined as the exp(-iωt) response field (ω=2πf is the angular frequency) from an exp(-iωt) source after all transients have died away. There are three different approaches for computing the steady-state fields: (1) use a continuous-wave (CW) source via ContinuousSource
with smooth turn-on and run for a long time (i.e., ≫ 1/f), (2) use the frequency-domain solver, or (3) use a broad-bandwidth pulse (which has short time duration) via GaussianSource
and compute the Fourier-transform of the fields via add_dft_fields
. Often, (2) and (3) require fewer timesteps to converge than (1). Note that Meep uses real fields by default and if you want complex amplitudes, you must set force_complex_fields=True
.
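A minimal sketch of approach (3), using a Gaussian pulse and the DFT of the fields at a single frequency (geometry-free, with placeholder values):

```py
import meep as mp

fcen = 0.15   # frequency at which the steady-state response is desired (placeholder)
sim = mp.Simulation(cell_size=mp.Vector3(16, 8),
                    resolution=10,
                    boundary_layers=[mp.PML(1.0)],
                    sources=[mp.Source(mp.GaussianSource(fcen, fwidth=0.1),
                                       component=mp.Ez, center=mp.Vector3(-6, 0))])

dft = sim.add_dft_fields([mp.Ez], fcen, 0, 1,
                         center=mp.Vector3(), size=mp.Vector3(16, 8))
sim.run(until_after_sources=mp.stop_when_fields_decayed(50, mp.Ez, mp.Vector3(6, 0), 1e-6))
ez_steady = sim.get_dft_array(dft, mp.Ez, 0)   # complex steady-state Ez at fcen
```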
How do I compute S-parameters?#
Meep contains a mode-decomposition feature which can be used to compute complex-valued S-parameters. An example is provided for a two-port network based on a silicon directional coupler in Tutorial/GDSII Import. Additional examples are available for a waveguide mode converter and subwavelength grating.
Harminv is unable to find the resonant modes of my structure#
There are six possible explanations for why Harminv could not find the resonant modes: (1) the run time was not long enough and the decay rate of the mode is so small that the Harminv data was mainly noise, (2) the Harminv call was not wrapped in `after_sources`; if Harminv overlaps the sources turning on and off it will get confused, because the sources are not exponentially decaying fields, (3) the Harminv monitor is near the mode's nodal point (e.g., in a symmetry plane), (4) there are field instabilities where the fields are actually blowing up; this may result in Harminv returning a negative quality factor, (5) the decay rate of the mode is too fast; Harminv discards any modes which have a quality factor less than 50, where the leaky-mode approximation of the modes as perfectly exponentially decaying (i.e., a Lorentzian lineshape) begins to break down (and thus Harminv won't likely find any modes inside a metal), or (6) the PML overlaps the non-radiated/evanescent field and has introduced artificial absorption effects in the local density of states (LDOS).
Harminv will find modes in perfect-conductor cavities (i.e. with no loss) with a quality factor that is very large and has an arbitrary sign; it has no way to tell that the decay rate is zero, it just knows it is very small.
Harminv becomes less effective as the frequency approaches zero, so you should specify a non-zero frequency range.
In order to resolve two closely spaced modes, in general it is preferable to run with a narrow-bandwidth source around the frequency of interest to excite/analyze as few modes as possible, and/or to increase the run time to improve the frequency resolution. If you want to analyze an arbitrary spectrum, just use the Fourier transform as computed by `dft_fields`.
For a structure with two doubly-degenerate modes (e.g., a dipole-like mode or two counter-propagating modes in a ring resonator), the grid discretization will almost certainly break the degeneracy slightly. In this case, Harminv may find two distinct nearly-degenerate modes.
Note: any real-valued signal consists of both positive and negative frequency components (with complex-conjugate amplitudes) in a Fourier domain decomposition into complex exponentials. Harminv usually is set up to find just one sign of the frequency, but occasionally converges to a negative-frequency component as well; these are just as meaningful as the positive frequencies.
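A minimal sketch of a typical Harminv workflow (the geometry, frequencies, and monitor point are placeholders): wrap the monitor in `after_sources` so that it only sees exponentially decaying fields.

```py
import meep as mp

fcen, df = 0.25, 0.2   # frequency range of interest (placeholder)
sim = mp.Simulation(cell_size=mp.Vector3(12, 12),
                    resolution=10,
                    boundary_layers=[mp.PML(1.0)],
                    geometry=[mp.Cylinder(radius=3, material=mp.Medium(index=3.4))],
                    sources=[mp.Source(mp.GaussianSource(fcen, fwidth=df),
                                       component=mp.Ez, center=mp.Vector3(1, 0.4))])

# monitor point chosen away from symmetry planes to avoid nodal points of the modes
h = mp.Harminv(mp.Ez, mp.Vector3(1, 0.4), fcen, df)
sim.run(mp.after_sources(h), until_after_sources=300)
for m in h.modes:
    print(f"frequency = {m.freq:.4f}, Q = {m.Q:.1f}")
```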
How do I compute the effective index of an eigenmode of a lossy waveguide?#
The effective index is usually defined as n_eff = ck/ω, where k is the propagation constant and ω is the angular frequency.
To compute the (complex) effective index of a lossy waveguide, you will need to first compute the complex ω (the loss in time) for a real k (the propagation constant) and then convert this quantity into a loss in space (a complex k at a real ω) by dividing by the group velocity dω/dk. This procedure is described in more detail below.
To obtain the loss in time, you make your computational cell a cross-section of your waveguide (i.e., 2d for a waveguide with constant cross-section), and set Bloch-periodic boundary conditions via the `k_point` input variable; this specifies your (real) k. You then treat it exactly the same as a resonant-cavity problem: you excite the system with a short pulse source, monitor the field at some point, and then analyze the result with Harminv; all of which is done if you call `run_kpoints`. This will give you the complex ω at the given k, where the imaginary part is the loss rate in time. Note: the loss in a uniform waveguide, with no absorption or disorder, is zero, even in the discretized system.
That is, you have ω(k_r) = ω_r + iω_i, where the subscripts r and i denote real and imaginary parts. Now, what you want to do is to get the complex k at the real frequency ω_r, which is given by k(ω_r) = k_r − iω_i/v_g + O(ω_i²). That is, to first order in the loss, the imaginary part of k (the propagation loss) at the real frequency ω_r is given just by dividing ω_i by the group velocity v_g = dω/dk, which you can get from the dispersion relation in the absence of loss. This relationship is just a consequence of the first-order Taylor expansion of the dispersion relation ω(k) in the complex plane.
This analysis is only valid if the loss is small, i.e. |ω_i| ≪ ω_r. This should always be the case in any reasonable waveguide, where the light can travel for many wavelengths before dissipating/escaping. If you have extremely large losses so that it only propagates for a few wavelengths or less, then you would have to treat the problem differently; but in this case, the whole concept of a "waveguide mode" is not clearly defined.
For a lossless waveguide, ω and k are purely real quantities. This means you do not have to worry about group velocity and imaginary parts. You can also compute ω(k) (more efficiently) using the mode solver MPB.
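As a small worked sketch of the conversion (the numbers below are placeholders, not the output of an actual simulation):

```py
# Convert a complex frequency at fixed real k (from Harminv via run_kpoints) into a
# complex propagation constant, and hence a complex effective index, at the real frequency.
k_real = 1.5             # Bloch wavevector used in the simulation (placeholder)
omega = 0.64 - 1e-4j     # complex frequency from Harminv: omega_r + 1j*omega_i (placeholder)
vg = 0.42                # group velocity d(omega)/dk of the lossless dispersion (placeholder)

k_imag = -omega.imag / vg                      # spatial decay rate, to first order in the loss
n_eff = (k_real + 1j * k_imag) / omega.real    # n_eff = c*k/omega with c = 1 in Meep units
print(f"effective index: {n_eff}")
```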
How do I compute the group velocity of a mode?#
There are two possible approaches for manually computing the group velocity ∂ω/∂k: (1) compute the dispersion relation ω(k) using Harminv, fit it to a polynomial, and calculate its derivative using a finite difference (i.e., Δω/Δk), or (2) excite the mode using a narrowband pulse and compute the ratio of the Poynting flux to the electric-field energy density.
For eigenmodes obtained using mode decomposition, the group velocities are computed automatically along with the mode coefficients.
How do I compute the time average of the harmonic fields?#
For a linear system, you can use a `ContinuousSource` with `force_complex_fields=True` and time-step the fields until all transients have disappeared. Once the fields have reached steady state, the instantaneous intensity or Poynting flux is equivalent to the time average. If you don't use complex fields, then these are just the instantaneous values at a given time, and will oscillate. An alternative to time-stepping is the frequency-domain solver.
Why are the fields not being absorbed by the PML?#
The decay coefficient of the fields within any PML contains a factor of cos(θ), where θ is the incidence angle (θ = 0 is normal incidence). The decay therefore becomes slower as glancing incidence (θ → 90°) is approached. This is true in the continuum limit and can be demonstrated by verifying that the reflections from the PML do not change with resolution. Otherwise, if the reflection decreases with increasing resolution then it may be due to transition reflections (i.e., the impedance mismatch at the PML interface) which can also be reduced by making the PML thicker instead of increasing the resolution. Transition reflections also increase as glancing incidence is approached, because at glancing incidence the phase-velocity mismatch between incident and reflected waves goes to zero as described in Optics Express, Vol. 16, pp. 11376-92 (2008). Glancing-angle fields commonly arise in simulations where one direction is periodic and the other is terminated by a PML (such as in diffraction gratings), in which case you can have spurious solutions that travel parallel to the PML interface; a workaround is to use a thicker non-PML absorber. This is demonstrated in the figure below by the glancing-angle fields remaining in the cell after a source planewave at 45° has long been turned off: the PML (left inset) is unable to absorb these fields unlike the absorber (right inset).
How do I compute the modes of a non-orthogonal (i.e., triangular) lattice?#
Meep does not support non-rectangular unit cells. To model a triangular lattice, you have to use a supercell. This will cause the band structure to be folded. However, if you take your point source and replicate it according to the underlying triangular lattice vectors, with the right phase relationship according to the Bloch wavevector, then it should excite the folded bands only with very low amplitude as reported by Harminv. Also, for every Harminv point you put in, you should analyze the fields from the periodic copies of that point (with the periodicity of the underlying lattice). Then, reject any frequency that is not detected at all points, with an amplitude that is related by something close to the correct phase.
In principle, the excitation of the folded bands would be exactly zero if you place your sources correctly in the supercell. However, this doesn't happen in FDTD because the finite grid spoils the symmetry slightly. It also means that the detection of folded bands will vary with resolution. For an example, see Section 4.6 ("Sources in Supercells") in Chapter 4 ("Electromagnetic Wave Source Conditions") of Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology.
For structures with a lossless (i.e., purely real) permittivity, you can also use MPB to compute the dispersion relation which does support a non-orthogonal lattice.
For calculations involving Fourier-transformed fields, why should the source be a pulse rather than a continuous wave?#
A continuous-wave source (ContinuousSource) produces fields which are not integrable: their Fourier transform will not converge as the run time of the simulation is increased because the source never terminates. The Fourier-transformed fields are therefore arbitrarily defined by the run time. This windowing does different things to the normalization and scattering runs because the spectra are different in the two cases. In contrast, a pulsed source (GaussianSource) produces fields which are L2-integrable: their Fourier transform is well defined and convergent as long as the run time is sufficiently large and the fields have decayed away. Note that the amplitude of the Fourier transform grows linearly with time; the Poynting flux, which is proportional to the amplitude squared, grows quadratically.
When computing the reflectance/transmittance for linear materials, you should get the same results if you put in a narrow- or broad-band Gaussian and look at only one frequency component of the Fourier transform. The latter has the advantage that it requires a shorter simulation for the fields to decay away. Moreover, if you want the scattering properties as a function of both frequency and angle (of an incident planewave), then the short-time pulses have a further advantage: each simulation with a short pulse and fixed `k_point` yields a broad-spectrum result, each frequency of which corresponds to a different angle. Then you repeat the simulation for a range of `k_point`s, and at the end you'll have a 2d dataset of reflectance/transmittance vs. both frequency and angle. For an example, see Tutorial/Basics/Angular Reflectance Spectrum of a Planar Interface.
Note: for broadband calculations involving Fourier-transformed fields or Harminv, the frequency bandwidth of the Gaussian source and the fields monitor should typically be similar. If the source bandwidth is too narrow, then the normalization procedure for something like a transmittance/reflectance spectra (in which you divide by the incident-wave spectrum) will involve dividing by values close to zero, which amplifies computational errors due to floating-point roundoff or other effects.
How does k_point define the phase relation between adjacent unit cells?#
If you set the `k_point` to any `meep.Vector3`, the structure will be periodic in all directions. There is a `set_boundary` routine that allows you to set individual boundary conditions independently, however.
A periodic structure does not imply periodic fields. The value of the `k_point` determines the phase relation between the fields and sources in adjacent periodic unit cells. In general, if you have period (Lx, Ly) and you are looking at the (n, m) unit cell, it has a phase of exp(2πi * (kx * Lx * n + ky * Ly * m)). For example, if you set the `k_point` to `meep.Vector3(0,0,0)`, that means the fields/sources are periodic: the phase is unity from one cell to the next. If you set the `k_point` to `meep.Vector3(1,0,0)`, it means that there is a phase difference of exp(2πi * Lx) between adjacent cells in the x direction. This is known as a Bloch wave.
A non-zero `k_point` introduces a set of "ghost" pixels along one side of each direction of the cell. These additional pixels are used to store the complex Bloch phase multiplied by the field value from the opposite side of the cell boundary. As a result, the size of the cell increases by one pixel in each direction relative to the case of no `k_point`.
Note: in any cell direction where there is a PML, the boundary conditions are mostly irrelevant. For example, if there is a PML in front of a periodic boundary, the periodicity doesn't matter because the field will have decayed almost to zero by the time it "wraps around" to the other side of the cell.
How do I compute the integral of the energy density over a given region?#
For the instantaneous fields, you can use `electric_energy_in_box` to compute the integral of ε|E|²/2 in some region. For the magnetic or total field energy, you can use `magnetic_energy_in_box` or `field_energy_in_box`. When computing the total field energy, you will need to first synchronize the magnetic and electric fields. To compute the integral of the energy density for a single field component, e.g. ε|Ez|²/2, you can use a field function: `integrate_field_function([meep.Dielectric, meep.Ez], lambda eps, ez: 0.5 * eps * abs(ez)**2, where=meep.Volume(...))`.
For the Fourier-transformed fields, you can use `add_energy` to compute the energy density over a region and sum the list of values (at a fixed frequency) returned by `get_electric_energy`/`get_magnetic_energy`/`get_total_energy`, multiplied by the volume of the grid voxel, to obtain the integral.
How do I compute the energy density for a dispersive material?#
The energy density computed by Meep via `add_energy` (Fourier-transformed fields) or `electric_energy_in_box`/`magnetic_energy_in_box`/`field_energy_in_box` (instantaneous fields) is (ε|E|² + μ|H|²)/2. This is not the energy density for a dispersive medium. With dispersion and negligible absorption, the energy density is instead described by a "Brillouin" formula that includes an additional dε/dω term (and a dμ/dω term in magnetic media) as described in Classical Electrodynamics by J.D. Jackson as well as other standard textbooks. More generally, one can define a "dynamical energy density" that captures the energy in the fields and the work done on the polarization currents; as reviewed in Physical Review A, 90, 023847 (2014) (pdf), the dynamical energy density reduces to the Brillouin formula for negligible absorption, and is constructed so as to enforce Poynting's theorem for conservation of energy.
Although Meep does not currently implement a function to compute the dynamical energy density (or the limiting case of the Brillouin formula) explicitly, since this density is expressed in terms of time derivatives it could in principle be implemented by processing the fields during time-stepping. However, if you only want the total energy in some box, you can instead compute it via Poynting's theorem from the total energy flux flowing into that box, and this is equivalent to integrating the dynamical energy density or its special case of the Brillouin formula.
How do I output the angular fields in cylindrical coordinates?#
Meep can only output sections of the rz (φ = 0) plane in cylindrical coordinates. To obtain the angular fields, you will need to do the conversion manually using the fields in the rz plane: the fields at all other φ are related by a factor of exp(imφ) due to the continuous rotational symmetry.
How does the use of symmetry affect nonlinear media?#
In linear media, if you use an odd-symmetry source, the even-symmetry modes cannot be excited. If you then specify odd symmetry in your simulation, nothing changes. The even-symmetry modes are not disregarded, they are simply not relevant. Vice-versa for an even-symmetry source.
In nonlinear media involving e.g. χ⁽²⁾, it is possible for an odd-symmetry source to excite an even-symmetry mode (because the square of an odd function is an even function). In this case, imposing odd symmetry in your simulation will affect the results rather than just improving its performance.
Usage: Materials#
Is there a materials library?#
Yes. A materials library is available containing crystalline silicon (c-Si), amorphous silicon (a-Si) including the hydrogenated form, silicon dioxide (SiO2), indium tin oxide (ITO), alumina (Al2O3), gallium arsenide (GaAs), gallium nitride (GaN), aluminum arsenide (AlAs), aluminum nitride (AlN), borosilicate glass (BK7), fused quartz, silicon nitride (Si3N4), germanium (Ge), indium phosphide (InP), lithium niobate (LiNbO3), as well as 11 elemental metals: silver (Ag), gold (Au), copper (Cu), aluminum (Al), beryllium (Be), chromium (Cr), nickel (Ni), palladium (Pd), platinum (Pt), titanium (Ti), and tungsten (W). Additional information is provided in Materials.
Does Meep support gyrotropic materials?#
Yes. Meep supports gyrotropic media which involve tensor ε (or μ) with imaginary off-diagonal components and no absorption due to an external magnetic field.
When outputting the permittivity function to a file, I don't see any tensors#
Only the harmonic mean of the eigenvalues of the ε/μ tensor is written to an HDF5 file using `output_epsilon`/`output_mu`. The complex ε and μ tensors can be obtained at the frequency `f` as a 3x3 NumPy array via the functions `epsilon(f)` and `mu(f)` of the `Medium` class.
How do I model graphene or other 2d materials with single-atom thickness?#
Typically, graphene and similar "2d" materials are mathematically represented as a delta-function conductivity in Maxwell's equations because their thickness is negligible compared to the wavelength, where the conductivity is furthermore usually anisotropic (producing surface-parallel currents in response to the surface-parallel components of the electric field). In a discretized computer model like Meep, this is approximated by an anisotropic volume conductivity (or other polarizable dispersive material) whose thickness is proportional to 1/`resolution` and whose amplitude is proportional to `resolution`. For example, a one-pixel-thick conductor can be represented by a `Block` with `size=meep.Vector3(x,y,1/resolution)` in a 3d cell, with the value of the conductivity explicitly multiplied by `resolution`.
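A rough sketch of this construction (the resolution and the sheet-conductivity value are placeholders, and a realistic model would typically use a dispersive surface conductivity rather than a constant):

```py
import meep as mp

resolution = 50                 # pixels per unit length (placeholder)
sigma_sheet = 1.0               # hypothetical sheet (2d) conductivity of the layer
thickness = 1 / resolution      # one-pixel-thick volume approximation of the sheet

# anisotropic volume conductivity: surface-parallel (x,y) components only, scaled by resolution
material_2d = mp.Medium(epsilon=1,
                        D_conductivity_diag=mp.Vector3(sigma_sheet * resolution,
                                                       sigma_sheet * resolution,
                                                       0))

layer = mp.Block(center=mp.Vector3(),
                 size=mp.Vector3(mp.inf, mp.inf, thickness),
                 material=material_2d)
```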
How do I model a continuously varying permittivity?#
You can use a material function to model any arbitrary, position-dependent permittivity/permeability function ε(r)/μ(r), including anisotropic, dispersive, and nonlinear media. For an example involving a non-dispersive, anisotropic material, see Tutorial/Mode Decomposition/Diffraction Spectrum of Liquid-Crystal Polarization Gratings. The material-function construct can also be used to specify arbitrary shapes (e.g., curves such as parabolas, sinusoids, etc.) within: (1) the interior boundary of a `GeometricObject` (e.g., `Block`, `Sphere`, `Cylinder`, etc.), (2) the entire cell via the `material_function` parameter of the `Simulation` constructor, or (3) a combination of the two.
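A minimal sketch of a material function applied to the entire cell (the quadratic index profile here is a made-up example):

```py
import meep as mp

def graded_medium(p):
    """Return the material at position p: a hypothetical permittivity decreasing quadratically with |y|."""
    eps = max(12.0 - 0.5 * p.y**2, 1.0)
    return mp.Medium(epsilon=eps)

sim = mp.Simulation(cell_size=mp.Vector3(10, 10),
                    resolution=20,
                    material_function=graded_medium,
                    boundary_layers=[mp.PML(1.0)])
```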
Usage: Structures#
What are the different ways to define a structure?#
There are six ways to define a structure: (1) the `GeometricObject` (Python) or `geometric-object` (Scheme) class used to specify a collection of predefined shapes including `Prism`, `Sphere`, `Cylinder`, `Cone`, `Block`, and `Ellipsoid`, (2) a `material_function` (Python) or `material-function` (Scheme) used to define an arbitrary function: for a given position in the cell, return the ε/μ at that point, (3) import the scalar, real-valued, frequency-independent permittivity from an HDF5 file (which can be created using e.g. h5py) via the `epsilon_input_file` (Python) or `epsilon-input-file` (Scheme) input parameter, (4) import planar geometries from a GDSII file, (5) load the raw ε/μ saved from a previous simulation using `load_structure` (Python) or `meep-structure-load` (Scheme), or (6) a `MaterialGrid` used to specify a pixel grid. Combinations of (1), (2), (4), and (6) are allowed but not (3) or (5).
Does Meep support importing GDSII files?#
Yes. The `get_GDSII_prisms` routine is used to import GDSII files. See Tutorial/GDSII Import for examples. This feature facilitates the simulation of 2d/planar structures which are fabricated using semiconductor foundries. Also, it enables Meep's plug-and-play capability with electronic design automation (EDA) circuit-layout editors (e.g., Cadence Virtuoso Layout, Silvaco Expert, KLayout, etc.). EDA is used for the synthesis and verification of large and complex integrated circuits. A useful tool for creating GDS files of simple geometries (e.g., curved waveguides, ring resonators, directional couplers, etc.) is gdspy.
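A short sketch of importing a layer from a GDSII file (the file name, layer number, and extrusion thickness are placeholders):

```py
import meep as mp

si = mp.Medium(index=3.45)

# material, GDSII file, layer number, zmin, zmax (extruded thickness of the planar geometry)
prisms = mp.get_GDSII_prisms(si, "device.gds", 1, -0.11, 0.11)

sim = mp.Simulation(cell_size=mp.Vector3(20, 10, 2),
                    resolution=25,
                    geometry=prisms,
                    boundary_layers=[mp.PML(1.0)])
```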
Can Meep simulate time-varying structures?#
Yes. The most general method is to re-initialize the material at every timestep by calling `field::set_materials` or `set_materials_from_geometry` in C++, or `simulation.set_materials` in Python. However, this is potentially quite slow. One alternative is the function `field::phase_in_material`, which allows you to linearly interpolate between two precomputed structures, gradually transitioning over a given time period; a more general version of this functionality may be enabled in the future (Issue #207).
Usage: Subpixel Averaging#
Why doesn't turning off subpixel averaging work?#
By default, when Meep assigns a dielectric constant ε or μ to each pixel, it uses a carefully designed average of the values within that pixel. This subpixel averaging generally improves the accuracy of the simulation: perhaps counter-intuitively, for geometries with discontinuous ε it is more accurate (i.e., closer to the exact Maxwell result for the discontinuous case) to do the simulation with the subpixel-averaged (smoothed) ε, as long as the averaging is done properly. For details, see Section 3 ("Interpolation and the illusion of continuity") of Computer Physics Communications, Vol. 181, pp. 687-702 (2010).
Still, there are times when, for whatever reason, you might not want this feature. For example, if your accuracy is limited by other issues, or if you want to skip the wait at the beginning of the simulation for it to do the averaging. In this case, you can disable the subpixel averaging by setting `Simulation.eps_averaging = False` (Python) or `(set! eps-averaging? false)` (Scheme). For more details, see Python Interface. Note: even if subpixel averaging is disabled, the time required for the grid initialization may still be nontrivial (e.g., minutes for a 3d cell with large `resolution`); it still needs to evaluate ε three times for every voxel due to the Yee grid.
Even if you disable the subpixel averaging, however, when you output the dielectric function to a file and visualize it, you may notice that there are some pixels with intermediate ε values, right at the boundary between two materials. This is due to a completely different reason. Internally, Meep's simulation is performed on a Yee grid, in which every field component is stored on a slightly different grid, offset from the others by half-pixels, and the ε values are also stored on this Yee grid. For output purposes, however, it is more user-friendly to output all fields etcetera on the same grid at the center of each pixel, so all quantities are interpolated onto this grid for output. Therefore, even though the internal ε values are indeed discontinuous when you disable subpixel averaging, the output file will still contain some "averaged" values at interfaces due to the interpolation from the Yee grid to the center-pixel grid. For the same reason, if `k_point` is set and the boundaries are Bloch-periodic, the permittivity function of the entire cell obtained via `get_epsilon` or `output_epsilon` will show that a little of the cell from one edge "leaks" over to the other edge: these extra pixels are added to implement the boundary conditions. This is independent of PML and of the way the structure is defined (i.e., using geometric objects or a material function, etc.). An example is shown in the figure below comparing `output_epsilon` for two cases, with and without a `k_point`. The discretization artifacts are highlighted.
Why does subpixel averaging take so long?#
There are at least two possible reasons: (1) using a material function to define a `Medium` object, or (2) using the C++ interface. Unlike the Python and Scheme interfaces, which analytically compute the averaged permittivity for boundary voxels involving at most one `GeometricObject` (e.g., `Sphere`, `Prism`, `Block`, etc.), the C++ interface computes these averages from the material function using numerical quadrature if the parameter `use_anisotropic_averaging=true` is passed to the constructor of `set_materials`. This procedure involves calling the material function many times for every voxel in the structure object, which can be slow due to the SWIG callbacks, particularly because the voxel density is repeatedly doubled until a given threshold tolerance (`subpixel_tol`) or maximum iteration number (`subpixel_maxeval`) is reached. Because of this discrepancy in the subpixel averaging, the results for the C++ and Python/Scheme interfaces may be slightly different at the same resolution. You can potentially speed up subpixel averaging by increasing `subpixel_tol` or decreasing `subpixel_maxeval`. Note: the slow callbacks may still be noticeable during the grid initialization even when subpixel averaging is turned off. Just remember that if you turn off subpixel averaging, you will usually need to increase the grid resolution to obtain the same accuracy. You will have to determine how much accuracy you want to trade for time. Alternatively, in the C++ interface you can use the `meepgeom.hpp` routines to define your geometry in terms of blocks, cylinders, etcetera similar to Python and Scheme, with semi-analytical subpixel averaging.
Why are there artifacts in the permittivity grid when two geometric objects are touching?#
Subpixel averaging affects pixels that contain at most one object interface. If a boundary pixel contains two object interfaces, Meep punts: the analytical calculations for the material filling fraction are too messy to compute and brute-force numerical integration is too slow, so subpixel averaging just uses the ε at the grid point. Sometimes, if a grid point falls exactly on the boundary, there are roundoff effects on which (if any) object the point lies within; you can eliminate some such artifacts by slightly padding the object sizes (e.g., by `1e-8`) or by specifying your geometry in some other way that doesn't involve exactly coincident interfaces.
Can subpixel averaging be applied to a user-defined material function?#
Yes but its performance tends to be slow. Subpixel averaging is performed by default (eps_averaging=True
) for GeometricObject
s (e.g. Cylinder
, Block
, Prism
, etc.) where the material filling fraction and normal vector of boundary pixels, which are used to form the effective permittivity, can be computed analytically using a level-set function. This procedure typically takes a few seconds for a 3d cell. Computing these quantities for a user-defined material function using adaptive numerical integration can be very slow (minutes, hours) and also less accurate than the analytic approach. As a result, simulations involving a discontinuous material_function
may require leaving subpixel averaging disabled (the default for material functions) and increasing the resolution
for accurate results. For an example, see Subpixel Smoothing/Enabling Averaging for Material Function.
Also, fast and accurate subpixel smoothing is available for the MaterialGrid
which is based on an analytic formulation.
Usage: Performance#
Checking convergence#
In any computer simulation like Meep, you should check that your results are converged with respect to any approximation that was made. There is no simple formula that will tell you in advance exactly how much resolution (etc.) is required for a given level of accuracy; the most reliable procedure is to simply double the resolution and verify that the answers you care about don't change to your desired tolerance. Useful things to check (ideally by doubling) in this way are: resolution, run time (for Fourier spectra), PML thickness.
Meep's subpixel smoothing often improves the rate of convergence and makes convergence a smoother function of resolution. However, while subpixel smoothing is applied to the built-in geometric objects (e.g., Sphere, Cylinder, Block, etc.), it does not occur for dispersive materials or user-defined material functions.
For flux calculations involving pulsed (i.e., Gaussian) sources, it is important to run the simulation long enough to ensure that all the transient fields have sufficiently decayed away (e.g., via absorption by the PMLs). If the simulation is terminated prematurely, the Fourier-transformed fields, which are accumulated during the time stepping (as explained in Introduction), will not be fully converged. Another side effect of not running the simulation long enough is that the Fourier spectra are smoothed out, causing sharp peaks to broaden into adjacent frequency bins (sometimes known as spectral leakage) and hence limiting the spectral resolution (also known as the Fourier uncertainty principle); in general, the minimum runtime should be the inverse of the desired frequency resolution. (Conversely, running long after the time-domain fields have decayed away has a negligible effect on the Fourier-transformed fields: the discrete-time Fourier transform is just a sum, and adding exponentially small values to a sum will not significantly change the result.) Convergence of the fields is typically achieved by lowering the decay_by
parameter in the stop_when_fields_decayed
run function. Alternatively, you can explicitly set the run time to some numeric value that you repeatedly double, instead of using the field decay. Sometimes it is also informative to double the cutoff
parameter of sources to increase their smoothness (reducing the amplitude of long-lived high-frequency modes).
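A minimal convergence-check sketch follows (the waveguide geometry, frequencies, and tolerances are hypothetical): run the same calculation at two resolutions and compare the quantity of interest.

```python
import meep as mp

def run_sim(resolution):
    fcen, df = 0.15, 0.1
    sim = mp.Simulation(cell_size=mp.Vector3(16, 8),
                        resolution=resolution,
                        boundary_layers=[mp.PML(1.0)],
                        geometry=[mp.Block(center=mp.Vector3(),
                                           size=mp.Vector3(mp.inf, 1, mp.inf),
                                           material=mp.Medium(epsilon=12))],
                        sources=[mp.Source(mp.GaussianSource(fcen, fwidth=df, cutoff=10),
                                           component=mp.Ez,
                                           center=mp.Vector3(-6, 0))])
    flux = sim.add_flux(fcen, df, 1,
                        mp.FluxRegion(center=mp.Vector3(6, 0), size=mp.Vector3(0, 6)))
    # run until the probe field has decayed to 1e-9 of its peak after the source turns off
    sim.run(until_after_sources=mp.stop_when_fields_decayed(50, mp.Ez, mp.Vector3(6, 0), 1e-9))
    return mp.get_fluxes(flux)[0]

coarse, fine = run_sim(25), run_sim(50)
print(f"relative change upon doubling the resolution: {abs(fine - coarse) / abs(fine):.3e}")
```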
Should I expect linear speedup from the parallel Meep?#
For a given computational grid when split_chunks_evenly=True
(the default), Meep divides the grid points roughly equally among the processors, and each process is responsible for all computations involving its "own" grid points (computing ε from the materials, timestepping the fields, accumulating Fourier transforms, computing far fields, etcetera). How much speedup this parallelization translates into depends on a number of factors, especially:
- The ratio of communications to computation, and the speed of your network. During timestepping, each processor needs to communicate neighboring grid points with other processors, and if you have too few grid points per processor (or your network is too slow) then the cost of this communication could overwhelm the computational gains.
- Load balancing: different portions of the grid may be more expensive than other portions, causing processors in the latter portions to sit idle while a few processors work on the expensive regions. For example, setting up the materials at the beginning is more expensive in regions with lots of objects or interfaces. Timestepping is more expensive in regions with Fourier-transformed flux planes. Computing far fields only uses the processors where the corresponding near fields are located.
- If you write lots of fields to files, the parallel I/O speed (which depends on your network, filesystem, etc) may dominate.
Unless the computational parallelism outweighs the extra communications overhead, the parallel program will actually be slower than the serial one. This means, for example, that even if you really have two or more physical processors you won't be able to benefit from parallelization until the problem is sufficiently large. In general, you will need large simulations to benefit from lots of processors. A rule of thumb is to keep doubling the number of processors until you no longer see much speedup.
For more details on Meep's load balancing features as well as some performance metrics, see arXiv:2003.04287 and Features/Parallel Meep/Runtime Scaling on MPI Clusters.
Why are simulations involving Fourier-transformed fields slow?#
The discrete time Fourier transform (DTFT) of the fields, which is necessary for computing the Poynting flux, local density of states (LDOS), near to far field transformation, etc., is typically accumulated at every time step for every point in the FluxRegion. The DTFT computation is parallelized but only in the sense that each processor computes the DTFT fields at points in its own chunk of the grid. If the division of the grid among processors into approximately equal-sized chunks (which is the default specified by split_chunks_evenly=True
) assigns most of the points where the DTFT fields are computed to a single processor, that part of the computation will not parallelize well.
The DTFT computation can be sped up significantly using decimation of the time-series inputs. By default, this feature is enabled for all monitors (using the parameter decimation_factor=0
). The optimal decimation factor is chosen automatically based on the frequency content of the sources and monitors.
To improve load balancing, the parallelization can be made to take the DTFT computation into account by specifying split_chunks_evenly=False
. This option divides the grid into chunks with nearly-equal cost rather than size such that the region in which the DTFT fields are computed is optimally partitioned among the processors.
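A minimal sketch of both options (the cell, source, and monitor are hypothetical; this assumes a Meep version in which the DFT monitor constructors accept decimation_factor):

```python
import meep as mp

sim = mp.Simulation(cell_size=mp.Vector3(16, 8),
                    resolution=25,
                    boundary_layers=[mp.PML(1.0)],
                    sources=[mp.Source(mp.GaussianSource(0.2, fwidth=0.1),
                                       component=mp.Ez,
                                       center=mp.Vector3(-6, 0))],
                    split_chunks_evenly=False)   # divide chunks by estimated cost, not size

flux = sim.add_flux(0.2, 0.1, 100,
                    mp.FluxRegion(center=mp.Vector3(6, 0), size=mp.Vector3(0, 6)),
                    decimation_factor=0)         # 0 = choose the decimation automatically (default)
```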
Synchronization of the fields (i.e., for add_flux
, add_energy
, add_near2far
, etc.), which is both expensive and unnecessary, is neither performed nor required for second-order accuracy when accumulating the Fourier transforms. In the Fourier summation, you multiply the field F(t) by exp(iωt) and add the product (multiplied by the timestep Δt) to the Fourier amplitude. The value of t is simply different for the E and H fields because they are staggered in time by half a timestep.
Note: a simple approach to reduce the cost of the DTFT computation is to reduce the number of frequency points. If you need high frequency resolution in a certain bandwidth, consider adding a second flux region just for that bandwidth, with as many points as you need there, and use a smaller number of frequency points over a broad bandwidth.
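A short sketch of this two-monitor approach (the frequencies and monitor placement are hypothetical):

```python
import meep as mp

sim = mp.Simulation(cell_size=mp.Vector3(16, 8),
                    resolution=25,
                    boundary_layers=[mp.PML(1.0)],
                    sources=[mp.Source(mp.GaussianSource(0.20, fwidth=0.20),
                                       component=mp.Ez,
                                       center=mp.Vector3(-6, 0))])

region = mp.FluxRegion(center=mp.Vector3(6, 0), size=mp.Vector3(0, 6))
broad = sim.add_flux(0.20, 0.20, 41, region)    # wide bandwidth, few frequency points
narrow = sim.add_flux(0.25, 0.02, 201, region)  # narrow bandwidth, dense frequency sampling
```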
Does Meep support shared-memory parallelism?#
Yes. Meep provides limited support for multithreading via OpenMP on a single, shared-memory, multi-core machine to speed up multi-frequency near-to-far field calculations involving get_farfields
or output_farfields
. Also, you can always run the MPI parallel Meep on a shared-memory machine, and some MPI implementations take special advantage of shared memory communications.
Why does the time-stepping rate fluctuate erratically for jobs running on a shared-memory system?#
Running jobs may experience intermittent slowdowns on shared-memory systems (Issue #882). This may be due to cache contention with other simultaneous jobs, although the cause has yet to be determined. The slowdown can be observed as increasing values of the time-stepping rate (units of "s/step"), which is shown as part of the progress output.
What is an estimate of the memory requirements of a simulation given only its cell size and resolution?#
In general, calculating the computational cost of an arbitrary simulation is challenging given the variety of computational elements involved. A simple lower bound of the memory requirements can be obtained for an empty cell (i.e., without scalar dielectric materials, PMLs, DFT monitors, etc.) with Cartesian dimensions sx×sy×sz and grid resolution R containing only real fields as: 96·sx·sy·sz·R³ bytes. In this case, which involves a single chunk, each grid voxel stores 9 electromagnetic field components (E, D, H in three directions) as well as 3 scalar permittivity values (the diagonal entries of the ε tensor) using double-precision floating point (8 bytes per number), for a total of 12×8 = 96 bytes per voxel. (For the default case of nonmagnetic materials, B and μ are not stored.) As an example, a 10×20×30 cell with resolution 50 pixels/unit would require at least 72 GB of memory. Note: doubling the grid resolution increases the memory requirements by a factor of at least 8 and the number of timesteps by exactly 2 (given that the Courant factor and runtime are fixed). Additional storage (and floating-point operations per time step) is required for e.g. dispersive materials based on Drude-Lorentz susceptibilities and PMLs which involve auxiliary fields, subpixel smoothing which introduces anisotropic ε tensors at interface voxels, DFT fields for each frequency bin and field component necessary to compute the Poynting vector, etc. The actual memory requirement may therefore be significantly larger than this lower bound.
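The worked example above can be reproduced with a few lines of arithmetic (this is only the lower bound described in the text, not an actual measurement):

```python
# 12 double-precision numbers per voxel (9 field components + 3 permittivity entries)
# = 96 bytes per voxel for an empty cell with real fields.
sx, sy, sz = 10, 20, 30      # cell dimensions in Meep units
resolution = 50              # pixels per unit length
lower_bound_bytes = 96 * sx * sy * sz * resolution**3
print(lower_bound_bytes / 1e9)   # -> 72.0 (GB)
```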
Can Meep be compiled to run on graphics processing units (GPUs)?#
No. Currently, Meep does not support GPUs via frameworks such as CUDA, OpenCL, etc. There have been reports of using GPUs to speed up FDTD, so this is a potential future area of development for Meep, but requires a substantial effort to port the core timestepping routines to a GPU architecture.
Usage: Other#
Is there a Python interface?#
Yes. An official Python interface was released in version 1.4 and replaces the Scheme interface which is no longer being supported. An unofficial Python interface, which predates and is incompatible with the official version, has been developed independently by researchers at the Institute of Physics at the Czech Academy of Sciences and Ghent University, and maintained by Filip Dominec. Unfortunately, this interface has several shortcomings including missing support for geometric objects, lack of high-level abstractions for low-level functionality, and limited documentation. The official interface addresses all these issues.
What is a good rule of thumb for the grid resolution?#
At least 8 pixels per wavelength in the lossless dielectric material with the highest index. Resolving the skin depth of metals, which is typically tens of nanometers at optical frequencies, will require a pixel size of comparable dimensions since subpixel averaging does not apply to dispersive materials.
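As a quick worked example of this rule of thumb (the wavelength and index are illustrative):

```python
import math

n_max = 3.5   # highest refractive index among the lossless dielectrics
wvl = 1.55    # vacuum wavelength in Meep units (e.g., microns if a = 1 um)
resolution = math.ceil(8 * n_max / wvl)   # 8 pixels per wavelength inside the material -> 19
```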
What is Meep's frequency-domain solver and how does it work?#
Meep contains a frequency-domain solver that directly computes the steady-state fields produced in a geometry in response to a continuous-wave (CW) source, using an iterative linear solver instead of time-stepping. This works because a single FDTD timestep can be used to define the linear operator of a frequency-domain problem, whose iterative solution often requires many fewer steps than time-stepping while reusing the FDTD code almost without modification. For details, see Section 5.3 ("Frequency-domain solver") of Computer Physics Communications, Vol. 181, pp. 687-702 (2010).
This means that all of the features of the time-domain solver (e.g., arbitrary materials, symmetries, subpixel averaging, parallelization, etc.) are also available in the frequency-domain solver. For certain problems, such as cavities (e.g., ring resonators) with long-lived resonant modes, the frequency-domain solver converges much faster than the straightforward approach of simply running a long simulation until the transients have disappeared. Another benefit is that an arbitrary complex refractive index can be specified directly using the electric conductivity, without having to fit the data to a sum of Drude-Lorentz susceptibility terms.
For examples, see Tutorial/Frequency-Domain Solver and Tutorial/Mode Decomposition.
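A minimal sketch of invoking the frequency-domain solver (the geometry, frequency, and solver parameters are illustrative; see Tutorial/Frequency-Domain Solver for a complete example):

```python
import meep as mp

fcen = 0.15
sim = mp.Simulation(cell_size=mp.Vector3(16, 8),
                    resolution=10,
                    boundary_layers=[mp.PML(1.0)],
                    geometry=[mp.Block(center=mp.Vector3(),
                                       size=mp.Vector3(mp.inf, 1, mp.inf),
                                       material=mp.Medium(epsilon=12))],
                    sources=[mp.Source(mp.ContinuousSource(fcen),
                                       component=mp.Ez,
                                       center=mp.Vector3(-6, 0))],
                    force_complex_fields=True)   # the frequency-domain solver uses complex fields

sim.init_sim()
sim.solve_cw(1e-8, 10000, 10)   # (tolerance, maximum iterations, L parameter of the iterative solver)
ez = sim.get_array(center=mp.Vector3(), size=mp.Vector3(16, 8), component=mp.Ez)
```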
Why are my reflectance/transmittance values less than zero and/or greater than one?#
There are five possible explanations: (1) the normalization and the scattering runs are not comparable because e.g., the sources or monitors are not in the same position within the structure, (2) the run time is not long enough and hence all of the flux is not being collected in either or both runs, (3) the flux is being computed at a frequency which is too far away from the center of the source bandwidth; in such cases the flux values are too small and may be dominated by rounding errors, (4) the source or monitor is positioned too close to the scatterer which modifies the local density of states (LDOS); for example, a source emits more power near a band edge and less power within a bandgap than the same source surrounded by many wavelengths of vacuum, or (5) in the normalization run, the monitor is positioned too close to the source and is capturing unwanted (e.g., radiating) modes.
Note: the Poynting flux is a dimensionful quantity which can be any value (positive or negative).
How does Meep deal with numerical dispersion?#
Numerical dispersion can be analyzed and quantified analytically for a homogeneous medium. For details, see e.g. Chapter 4 ("Numerical Dispersion and Stability") of Computational Electrodynamics: The Finite-Difference Time-Domain Method (3rd edition). In practice, however, numerical dispersion is rarely the dominant source of error in FDTD calculations, which almost always involve material inhomogeneities that give rise to much larger errors. Like other errors associated with the finite grid resolution, numerical dispersion decreases with increasing resolution, so you can deal with it by increasing the resolution until convergence is obtained to the desired accuracy. In particular, the errors from numerical dispersion decrease quadratically with the resolution (the ordinary center-difference FDTD scheme is second-order accurate), whereas the errors introduced by discretizing material interfaces decrease only linearly with the resolution, so the interface errors are almost always dominant. Meep can partially correct for these errors using subpixel averaging.
Should I include the 2π factor when defining the frequency or the wavevector?#
No. Frequency inputs and outputs in Meep are the ordinary frequency f, not the angular frequency ω = 2πf. Similarly, spatial wavevectors k (e.g., for Bloch-periodic boundary conditions) are specified without the 2π factor, so that the spatial dependence is exp(2πik·x). For example, if you specify a frequency=0.3
in a source, then the time-dependence of the source is exp(-2πi·0.3·t), where the time t is also in Meep units. Similarly, if you specify k_point = meep.Vector3(0.4,0,0)
in the interface, then the phase factor between adjacent unit cells with period L in the x direction is exp(2πi·0.4·L).
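A short worked example of these conventions (plain Python, no Meep calls):

```python
import cmath

f = 0.3                                                # ordinary frequency in Meep units
t = 1.0
time_dependence = cmath.exp(-2j * cmath.pi * f * t)    # source time-dependence exp(-2πi f t)

kx, L = 0.4, 1.0                                       # k_point x-component and unit-cell period
bloch_phase = cmath.exp(2j * cmath.pi * kx * L)        # phase between adjacent unit cells
```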
Does Meep support grids with non-uniform discretization?#
No. Meep does not support non-orthogonal grids or spatially varying resolution. One possible approach, which does not require changes to the underlying code (and is not built into Meep itself), is to use a coordinate transformation to selectively increase the resolution in a given region of the cell. This is possible using transformation optics, which involves a change of materials: an arbitrary coordinate transformation can be mapped to Cartesian coordinates with transformed ε/μ. For more details, see the notes Coordinate Transformation and Invariance in Electromagnetism and the source repository Variable Resolution in Meep which implements this technique.
How do I access the structure, fields, or sources in a subregion/slice of the cell?#
You can use the routines get_array
, get_dft_array
, or get_source
to obtain the fields/sources and get_array_metadata
to obtain metadata (grid coordinates and weights) for the geometric slice.
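A minimal sketch of retrieving a slice together with its metadata (the cell and slice dimensions are hypothetical):

```python
import meep as mp

sim = mp.Simulation(cell_size=mp.Vector3(8, 8, 8), resolution=20)
sim.init_sim()   # allocate the structure/fields so the arrays can be read

center, size = mp.Vector3(0, 0, 0), mp.Vector3(8, 8, 0)   # z = 0 plane of the 3d cell
eps = sim.get_array(center=center, size=size, component=mp.Dielectric)
ez = sim.get_array(center=center, size=size, component=mp.Ez)
x, y, z, w = sim.get_array_metadata(center=center, size=size)   # coordinates and weights
```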
Visualization in 3d can be done with Mayavi. For an example, see Tutorial/Basics/Visualizing 3d Structures.
To output the data to an HDF5 file, you can use the in_volume
or in_point
routines as part of your run function. For example, to restrict the output to a line, you could use: meep.in_volume(meep.Volume(center=meep.Vector3(0,0,0), size=meep.Vector3(10,0,0)), meep.output_dpwr)
which outputs along a line of length 10 in the x direction centered at (0,0,0). You can even wrap this statement in to_appended("line.h5", ...)
to output the intensity along the line as a function of time to a 2d HDF5 dataset. This would enable you to plot intensity vs. time and space as a 2d color image.
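Putting this together, here is a minimal sketch of the pattern described above (the cell, source, and run time are hypothetical; to_appended is typically given the dataset name without the .h5 extension):

```python
import meep as mp

sim = mp.Simulation(cell_size=mp.Vector3(12, 12),
                    resolution=20,
                    boundary_layers=[mp.PML(1.0)],
                    sources=[mp.Source(mp.ContinuousSource(frequency=0.15),
                                       component=mp.Ez,
                                       center=mp.Vector3(-5, 0))])

line = mp.Volume(center=mp.Vector3(0, 0, 0), size=mp.Vector3(10, 0, 0))
# append the intensity along the line at regular intervals into a single 2d dataset
sim.run(mp.in_volume(line, mp.to_appended("line", mp.at_every(0.5, mp.output_dpwr))),
        until=200)
```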
How do I compute the absorbed power in a local subregion of the cell?#
To compute the absorbed power anywhere in the cell, you can use Poynting's theorem: place a closed surface of DFT flux monitors surrounding the subregion and specify the weight
parameter of each FluxRegion
accordingly (i.e., ±1) in order to capture all incoming power. For a 2d example, see Tutorial/Near-to-Far Field Spectra/Radiation Pattern of an Antenna. There is also a 3d example for calculating the light-extraction efficiency of an organic light-emitting diode (OLED).
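A minimal 2d sketch of such a closed flux box follows (the absorber, source, and box size are hypothetical); with the weights below, the sum of the four fluxes is the net power flowing into the box, which by Poynting's theorem equals the absorbed power:

```python
import meep as mp

fcen, df, nfreq = 0.2, 0.1, 50
sim = mp.Simulation(cell_size=mp.Vector3(12, 12),
                    resolution=25,
                    boundary_layers=[mp.PML(1.0)],
                    geometry=[mp.Cylinder(radius=1.0,
                                          material=mp.Medium(index=2.0, D_conductivity=0.5))],
                    sources=[mp.Source(mp.GaussianSource(fcen, fwidth=df),
                                       component=mp.Ez,
                                       center=mp.Vector3(-5, 0),
                                       size=mp.Vector3(0, 10))])

s = 3.0   # side length of the box enclosing the absorber
box = sim.add_flux(fcen, df, nfreq,
                   mp.FluxRegion(center=mp.Vector3(-s/2, 0), size=mp.Vector3(0, s), weight=+1),
                   mp.FluxRegion(center=mp.Vector3(+s/2, 0), size=mp.Vector3(0, s), weight=-1),
                   mp.FluxRegion(center=mp.Vector3(0, -s/2), size=mp.Vector3(s, 0), weight=+1),
                   mp.FluxRegion(center=mp.Vector3(0, +s/2), size=mp.Vector3(s, 0), weight=-1))

sim.run(until_after_sources=mp.stop_when_fields_decayed(50, mp.Ez, mp.Vector3(0, 0), 1e-8))
absorbed_power = mp.get_fluxes(box)   # net inflow at each frequency
```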
What happens if I specify an output volume that extends beyond a cell with periodic boundaries?#
Any output or computation function that requires a Volume
, such as in_volume
or the field integration routines, does not restrict the output volume to lie within, or even to intersect, the cell. As long as ensure_periodicity=True
(the default), Meep will extend the data according to the periodic boundary conditions as needed.
Is it possible to specify the boundary conditions independently?#
Yes. You can use the set_boundary
routine to specify different boundary conditions (Metallic
, Magnetic
) on different sides of the cell (High
for positive or Low
for negative directions). The default boundary condition is perfect electric conductor (i.e., Metallic
or zero electric field), unless you specified a k_point
in which case the default is Bloch-periodic. Note that PML is not a boundary condition. It is an artificial absorbing material placed adjacent to the boundaries. The boundary condition is essentially irrelevant to the operation of the PML.
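A minimal sketch of mixing boundary conditions on different sides (the cell size is arbitrary; this assumes the set_boundary(side, direction, condition) calling convention):

```python
import meep as mp

sim = mp.Simulation(cell_size=mp.Vector3(10, 5), resolution=20)
sim.set_boundary(mp.Low, mp.X, mp.Metallic)    # perfect electric conductor on the -x side
sim.set_boundary(mp.High, mp.X, mp.Magnetic)   # perfect magnetic conductor on the +x side
```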
How do I model a 2d structure involving an out-of-plane wavevector?#
To model e.g., fiber waveguides with 2d claddings, diffractive binary gratings with arbitrary incident planewaves in 3d, etc. in Cartesian coordinates, you would create a 2d cell in the xy plane and specify a k_point with a non-zero component in z. The resulting 3d simulation involves all the electromagnetic field components (rather than a 2d simulation, which involves only a subset of the fields determined by the polarization of the current source). Performance can be improved by specifying special_kz=True in the Simulation constructor; this results in a 2d simulation with real rather than complex fields (as long as the x and y components of k_point
are zero).
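A minimal sketch of a 2d cell with an out-of-plane wavevector (the waveguide, frequency, and kz value are illustrative):

```python
import meep as mp

sim = mp.Simulation(cell_size=mp.Vector3(10, 10),
                    resolution=30,
                    boundary_layers=[mp.PML(1.0)],
                    geometry=[mp.Block(center=mp.Vector3(),
                                       size=mp.Vector3(mp.inf, 1, mp.inf),
                                       material=mp.Medium(index=1.45))],
                    sources=[mp.Source(mp.GaussianSource(0.65, fwidth=0.1),
                                       component=mp.Ey,
                                       center=mp.Vector3(0, 0))],
                    k_point=mp.Vector3(0, 0, 0.9),   # only the out-of-plane (z) component is nonzero
                    special_kz=True)                 # keeps the 2d fields real
```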
Can Meep model electrostatic effects?#
In principle, this corresponds to the limit as the frequency goes to zero or the wavelength goes to infinity. However, a time-domain simulation is rather inefficient for electrostatic or magnetostatic calculations; this includes lumped circuit models involving resistance, voltage, capacitance, etc. In this regime, you are usually much better off directly solving e.g. Poisson's equation to obtain the fields from a given charge distribution. There are many available Poisson solvers based on finite or boundary element methods. In Meep, probably the best you can do is to use a source with a very low frequency and a gradual turn-on specified by the width
parameter of ContinuousSource
.
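A small sketch of such a quasi-static drive (the frequency and turn-on width are illustrative; a dedicated Poisson solver is usually the better tool in this regime):

```python
import meep as mp

sources = [mp.Source(mp.ContinuousSource(frequency=1e-3, width=500),   # very low frequency, slow turn-on
                     component=mp.Ez,
                     center=mp.Vector3(0, 0, 0))]
```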
Can Meep model lasing phenomena?#
Yes. More specifically, Meep can be used to model saturable gain and absorption via multilevel atomic susceptibility. This feature may be used to investigate optically-pumped lasing phenomena such as Raman lasers. For details, see Materials/Saturable Gain and Absorption.
Can Meep model Raman scattering?#
Yes. There are two different possible approaches to model stimulated Raman scattering.
The first approach, valid in the weak-scattering (undepleted-pump) approximation, is to do two linear calculations. First, do a linear calculation with your source field to obtain the incident electric field at the location of the Raman material. Then multiply this field by the Raman susceptibility to obtain a polarization (i.e., an induced dipole moment) at the scattered (Stokes or anti-Stokes) frequency. Using this polarization as a source at the new frequency, do a second linear calculation to compute the Raman-scattered field. This is known as the first Born approximation or, alternatively, the "volume-current method".
The second approach is based on a full nonlinear simulation of the Raman process. This involves modeling the populations of the atomic vibrational states corresponding to the Raman bands using saturable gain and absorption.
Does Meep support photonics inverse design?#
Yes. Meep has an adjoint solver which can be used for inverse design and density-based topology optimization in which the degrees of freedom are a grid of voxels. (Of course, you can always use finite differences or similar methods to compute sensitivities or gradients for other parameters, as well as derivative-free optimization methods. However, such methods become increasingly impractical for ≳ 10 parameters.)