Or a sturdy little hatchback with pizzazz?
In this post I discuss some issues around commercial modelling software used by the Photonics community. Many people see it as the answer to all simulation and modelling needs. I advise caution before trusting blindly in the results such software produces. It’s vital to remember that all commercial software relies heavily on underpinning techniques and assumptions, needs careful, problem-specific benchmarking, and must be supplemented, verified and sanity-checked by experiment and simulation interacting with theory. I expand on these individual themes below.
The software used for modelling optical structures essentially solves Maxwell’s equations, implementing numerical and/or semi-analytical techniques to do so (such as the Finite Element Method, FDTD, the Method of Lines, the Transmission Line Method and others). See my post on the OSA blog on this…
Virtually every experimental lab has one or more software packages that suit its needs. Most people do not write their own code for every simulation technique they use, and many users do not have in-depth knowledge of simulation. That doesn’t mean they can’t make the most of their piece of kit!
The virtues of the best of this software are many:
– With snazzy Graphical User Interfaces (GUIs) it’s possible to design your particular optical structure fairly easily (if it’s not too complicated)
– Inbuilt algorithms allow users to change meshes and discretization at will
– They can crunch huge amounts of data and, with their optimized solvers, give you solutions fairly quickly
– Changing parameter values or model settings, or running parameter sweeps, can be very easy and efficient
– Post processing of results is a joy with the ability to plot gorgeous graphs with all manner of bells and whistles.
– They enable the non-expert in modelling (and coding) methods to use simulations effectively.
So, what is the catch?
Like everything that has some excellent features, this software also has limitations. When we do not fully understand those limitations we risk a GIGO situation (Garbage In, Garbage Out): getting results that are not trustworthy and misusing a valuable resource.
Users would benefit by considering the following points:
– All software will have inherent assumptions and error limits relating to the underlying numerical technique it is based on, as well as to the solvers employed. Understanding these limits is essential to making the most of the software, and not everyone does it. Consider these examples based on hypothetical software:
a) If a Finite Difference based software uses the central differencing technique to discretize the x direction, the error is typically of order dx² (dx being the smallest separation between two grid points). So no matter what you do, the accuracy of the solution can’t be better than that. When the numbers show up to the 8th decimal place, you need to check to what place you can actually trust the solution.
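To see this scaling concretely, here is a minimal sketch (plain Python/NumPy, independent of any commercial package) that differentiates a known function with central differences; halving dx roughly quarters the error, confirming the dx² behaviour:

```python
import numpy as np

# Central difference approximation of d/dx sin(x) at x = 1.0.
# The truncation error scales as dx**2: halving dx quarters the error,
# no matter how many decimal places the output displays.
def central_diff(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

x0 = 1.0
exact = np.cos(x0)  # known analytic derivative, so the error is measurable
for dx in (1e-1, 5e-2, 2.5e-2):
    err = abs(central_diff(np.sin, x0, dx) - exact)
    print(f"dx = {dx:.4f}  error = {err:.2e}")
```

The same test, run against a case with a known answer, tells you which decimal place of your solver’s output is actually meaningful.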
b) If the wave equation being solved is not wide-angle (i.e. the Fresnel or paraxial approximation is being used), is it really practical to simulate a device with a branch angle of 30 degrees with that software?
c) If the index contrast that the method can handle is small, is it feasible to model high-index-contrast air-clad Si structures with that software?
– Is it what you think it is? The software allows users to change many settings, but the terminology in the documentation of that software may well be different from that in the technical mathematical/physics literature. For example:
a) Changing the order of shape functions in the Finite Element Method (FEM) is a powerful tool in some software. So when you choose ‘geometry shape order = cubic’ and an order of shape functions of 2, what is really happening to the actual FEM settings in a hypothetical FEM package?
b) Edge or vector elements can give excellent accuracy compared to node-based elements in FEM. Which is suitable for your problem? In the software settings, what sort of elements are you really using?
c) Many pieces of software use curve fitting or other inbuilt numerical differentiation procedures for certain calculations. For example, dispersion requires a second derivative of the effective index with respect to wavelength. Fitting higher order polynomials doesn’t always give the correct dispersion, however. How does your software make the calculation? Is it correct?
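As an illustration, here is a small sketch (plain NumPy, with made-up effective-index data whose true curvature is known, so nothing here comes from any real solver) of how the fit used for the second derivative can be checked; the tiny deterministic ripple stands in for numerical scatter in solver output:

```python
import numpy as np

# Hypothetical effective-index data: n_eff(lam) is an exact quadratic here,
# so the true d2n/dlam2 is known (= 2c) and we can see how each fit behaves.
lam = np.linspace(1.50, 1.60, 21)            # wavelength in micrometres
a, b, c = 2.10, -0.30, 0.80
n_eff = a + b * lam + c * lam**2             # true second derivative: 2c = 1.6

# Small deterministic ripple standing in for numerical scatter in the data
noisy = n_eff + 1e-5 * np.sin(300.0 * lam)

for deg in (2, 9):
    p = np.polyfit(lam, noisy, deg)
    d2 = np.polyval(np.polyder(p, 2), 1.55)  # second derivative at 1.55 um
    print(f"degree {deg}: d2n/dlam2 = {d2:.4f}  (true = {2 * c:.4f})")
```

The point is not these particular numbers but the check itself: if you know the true answer for a benchmark case, you can see immediately whether the fitting procedure (and its polynomial degree) is trustworthy before applying it to real dispersion calculations.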
– Benchmarking, benchmarking, benchmarking! How do you know how far the Perfectly Matched Layer (PML) should be from your structure? If you get a result with a time step of 0.1 fs, is it accurate enough? Those features that appear in the field plot: are they spurious modes or numerical artefacts, or is there something physical that needs investigation?
The list of questions that benchmarking can settle is very long. The procedure is to first model a structure for which you already know the parameter to be measured/calculated. Then adjust the software settings until you know which values give you a reasonably accurate solution. Only then start modelling the new structure for which you don’t know the solution, and look for convergence of results with the parameter settings you determined from the benchmarking.
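That refine-until-stable loop can be sketched as follows. Here solve_neff is a made-up stand-in for whatever your package computes, with a built-in error that decays as 1/N², so the numbers are purely illustrative:

```python
# Sketch of a benchmarking/convergence sweep: refine one setting (here, the
# number of mesh points N) until the computed quantity stops changing within
# a tolerance. solve_neff is a hypothetical stand-in for a real solver call.

def solve_neff(n_points):
    true_neff = 1.4457                      # "known" benchmark value (made up)
    return true_neff + 0.5 / n_points**2    # discretization error ~ 1/N^2

def converge(tol=1e-5):
    n = 50
    prev = solve_neff(n)
    while True:
        n *= 2                              # double the mesh density each pass
        cur = solve_neff(n)
        if abs(cur - prev) < tol:           # change is below tolerance: stop
            return n, cur
        prev = cur

n_final, neff = converge()
print(f"converged at N = {n_final}, n_eff = {neff:.6f}")
```

Once the sweep on the known structure tells you which N (or PML distance, or time step) is adequate, those settings become your starting point for the unknown structure.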
– Experiments and simulation: the meeting point! Modelling is often a supporting activity to actual experiments. In some cases (where the technology is not advanced enough; see my post on the Science of Haute Couture… for an example) simulation is the more feasible option. In the former case, modelling is most effective when the numerical experiment set up in the software corresponds to the actual physical experiment. Some simple considerations that may arise:
a) Physically, it might be possible to keep the lens/detector/source 10 mm away, but in the simulation is it feasible to locate components that far apart? Will the simulation run till God is old? If we reduce the distance, does the physics change in the simulation? Are we still running the same experiment that is taking place on the bench?
b) Let’s say the data needed from the simulation is spectral (wavelength dependence) with a specific dλ, but the software is time domain in nature. So when we use the DFT algorithm in the software to convert to wavelength, how many simulation points in the time window are needed to get the required dλ separation?
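The back-of-envelope relation is that the DFT’s frequency resolution is set by the total time window, df = 1/T, which near a centre wavelength λ₀ translates to dλ ≈ λ₀²·df/c. A quick sketch with illustrative numbers (0.1 nm resolution at 1550 nm, with a 0.1 fs time step; none of these values come from any particular package):

```python
# Spectral resolution of a DFT is set by the total time window: df = 1/T.
# Near a centre wavelength lam0, dlam ~= lam0**2 * df / c, so the required
# window length (and hence number of time steps) follows directly.

c = 299_792_458.0            # speed of light, m/s
lam0 = 1.55e-6               # centre wavelength, m
dlam = 0.1e-9                # desired wavelength resolution, m (0.1 nm)

df = c * dlam / lam0**2      # equivalent frequency resolution, Hz
T = 1.0 / df                 # required total time window, s
dt = 0.1e-15                 # simulation time step (0.1 fs)
n_steps = int(round(T / dt)) # time steps needed to reach that window
print(f"window T = {T * 1e12:.1f} ps  ->  {n_steps:,} steps at 0.1 fs")
```

Roughly 80 ps of simulated time, i.e. close to a million time steps at 0.1 fs: a resolution requirement that looks innocent in the spectral domain can dominate the run time of a time-domain simulation.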
c) Has the solution converged? I can’t stress this enough: when running simulations it’s important to identify whether the results are acceptable or not. For example, when calculating the modal effective index with a particular mesh, how does the value change as the mesh is refined? What is the error bar on any value you quote, so that you can state its accuracy with confidence to some percentage?
– Post processing and visualization. Most software allows for excellent and easy post processing of results. However, when the need is to calculate quantities that are not predefined in the software (field overlap integrals, confinement factors, field gradients, etc.), it may be possible to use ‘scripting’ or coding within the software. A user-written script can manipulate the fields/quantities the software has calculated, but care has to be taken to fully understand what the software has computed and how those quantities can be used. It is equally important to understand what the user can’t change or manipulate: since most packages will not allow access to their actual code, users can’t get down and dirty with the real beating heart of the software/technique.
a) For example, if the solver gives you the E field and you want the Poynting vector, you need to obtain the H field first.
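As a sketch of that post-processing step (plain NumPy, using an illustrative plane wave in free space rather than output from any real solver): the time-averaged Poynting vector for complex phasor fields is S = ½ Re(E × H*), and for a plane wave H can be reconstructed from E via the free-space impedance:

```python
import numpy as np

# Time-averaged Poynting vector for complex phasor fields: S = 0.5 Re(E x H*).
# For a plane wave in free space, H = (1/eta0) * k_hat x E, which is the kind
# of relation you need to reconstruct H when the solver only exports E.
# All values below are illustrative.

eta0 = 376.730313668                      # free-space impedance, ohms

E = np.array([1.0 + 0.0j, 0.0, 0.0])      # x-polarized E field, V/m
k_hat = np.array([0.0, 0.0, 1.0])         # propagation along +z
H = np.cross(k_hat, E) / eta0             # reconstructed H field (y-directed)

S = 0.5 * np.real(np.cross(E, np.conj(H)))  # time-averaged Poynting vector, W/m^2
print(S)                                  # power flows along +z
```

The reconstruction step is exactly where misunderstanding the solver’s output bites: if you assume the exported field is E when it is actually D, or a normalized mode profile, the script runs happily and the numbers are wrong.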
All in all, the variety of software available to the user is immense, and performance as well as cost can vary quite a bit. It would be a sad waste not to fully understand and optimize the use of a £5000 purchase, or to get erroneous results!