Is that a Ferrari?

Or a sturdy little hatchback with pizzazz?

In this post I discuss some issues around the commercial modelling software used by the photonics community. Many people see these packages as the answer to all simulation and modelling needs. I advise caution before trusting blindly in the results they produce. It’s vital to remember that all commercial software relies heavily on underpinning techniques and assumptions, and needs careful, problem-specific benchmarking, with experiments and simulation interacting with theory, to supplement, verify and sanity-check the results it produces. I expand on each of these themes below.

The software used for modelling optical structures essentially solves Maxwell’s equations, implementing numerical and/or semi-analytical techniques to do so (such as the Finite Element Method, FDTD, the Method of Lines, the Transmission Line Method and others). See my post on the OSA blog on this…

Virtually every experimental lab has one or more software packages that suit its needs. Most people do not write their own code for every simulation technique they use, and many users do not have in-depth knowledge of the underlying methods. That doesn’t mean they can’t make the most of their piece of kit!

The virtues of the best of this software are many:

– With snazzy Graphical User Interfaces (GUIs) it’s possible to design your particular optical structure fairly easily (if it’s not too complicated)

– Inbuilt algorithms allow users to change meshes and discretization at will

– They can crunch huge amounts of data and, with their optimized solvers, give you solutions fairly quickly

– Changing parameter values or model settings, or running parameter sweeps, can be very easy and efficient

– Post-processing of results is a joy, with the ability to plot gorgeous graphs with all manner of bells and whistles.

– They enable the non-expert in modelling (and coding) methods to use simulations effectively.

So, what is the catch?

Like everything that has some excellent features, this software also has limitations. When we do not fully understand those limitations we risk a GIGO (Garbage In, Garbage Out) situation: results that are not trustworthy and a valuable resource misused.

Users would benefit by considering the following points:

– All software will have inherent assumptions and error limits relating to the underlying numerical technique it is based on, as well as to the solvers employed. Understanding these limits is essential to making the most of the software, and it is something not everyone does. Consider these examples based on hypothetical software:

a) If Finite Difference based software uses a central differencing scheme to discretize the x direction, the truncation error is typically second order in dx (dx being the smallest separation between two grid points); a minimal sketch after these examples makes this concrete. So no matter what you do, the accuracy of the solution can’t be better than that. When the numbers show up to the 8th decimal place, you need to check to what place you can actually trust the solution.

b) If the wave equation being solved is not wide-angle (i.e. the Fresnel or paraxial approximation is being used), is it really practical to simulate a device with a branch angle of 30 degrees with that software?

c) If the index contrast that the method can handle is small, is it feasible to model high-index-contrast, air-clad silicon structures with that software?
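To make example (a) concrete, here is a minimal Python sketch (independent of any particular commercial package) that differentiates a function with a known derivative using central differencing. The test function and step sizes are just illustrative; the point is that the grid spacing, not the number of printed decimals, sets the accuracy floor.

```python
import numpy as np

def central_diff(f, x, dx):
    """Central-difference estimate of f'(x): (f(x+dx) - f(x-dx)) / (2*dx)."""
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

x0 = 1.0
exact = np.cos(x0)                      # d/dx sin(x) at x0
for dx in (1e-1, 1e-2, 1e-3):
    err = abs(central_diff(np.sin, x0, dx) - exact)
    print(f"dx = {dx:7.1e}   error = {err:.3e}")
# Shrinking dx by 10x shrinks the error by roughly 100x (second order in dx),
# so dx, not the 8 decimal places the software prints, sets the real accuracy.
```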

– Is it what you think it is? The software allows users to change many settings, but the terminology in its documentation may well differ from that in the technical mathematical/physics literature. For example:

a) Changing the order of the shape functions in the Finite Element Method (FEM) is a powerful tool in some software. So when you choose ‘geometry shape order = cubic’ and set the order of the shape functions to 2, what is really happening to the actual FEM settings in a hypothetical FEM package?

b) Edge or vector elements can give excellent accuracy compared to node-based elements in FEM. Which is suitable for your problem? In the software settings, what sort of elements are you really using?

c) Many pieces of software use curve fitting or other inbuilt numerical differentiation procedures for certain calculations. For example, dispersion requires the second derivative of the effective index with respect to wavelength, and fitting higher-order polynomials doesn’t always give the correct dispersion, as the sketch below illustrates. How does your software make the calculation? Is it correct?
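As a rough illustration, the following Python sketch computes the dispersion parameter D = -(λ/c)·d²n_eff/dλ² from a polynomial fit. The effective-index samples, the added “solver noise” and the fit orders are all invented for illustration; the point is only that the result can shift noticeably depending on how the second derivative is taken.

```python
import numpy as np

C_UM_PER_S = 2.99792458e14     # speed of light in um/s

# Hypothetical effective-index samples vs wavelength (all values invented)
lam_um = np.linspace(1.50, 1.60, 11)               # wavelength, um
neff = 2.40 - 0.08 * lam_um + 0.10 * lam_um**2     # smooth made-up n_eff(lambda)
rng = np.random.default_rng(0)
neff_noisy = neff + rng.normal(scale=1e-5, size=neff.shape)  # mimic solver noise

def dispersion_ps_nm_km(lam_um, neff, order):
    """D = -(lambda/c) * d2(n_eff)/d(lambda)^2, returned in ps/(nm km)."""
    x = lam_um - lam_um.mean()                  # centre the fit for conditioning
    p = np.polyfit(x, neff, order)              # polynomial fit of n_eff
    d2 = np.polyval(np.polyder(p, 2), x)        # second derivative, 1/um^2
    return -(lam_um / C_UM_PER_S) * d2 * 1e18   # s/um^2 converted to ps/(nm km)

for order in (2, 4, 6):
    D = dispersion_ps_nm_km(lam_um, neff_noisy, order)
    print(f"fit order {order}: D(1.55 um) = {D[5]:9.1f} ps/(nm km)")
# The tiny 'solver noise' is amplified by the second derivative, and the answer
# shifts with the fit order; worth checking what your package does at this step.
```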

– Benchmarking, benchmarking, benchmarking! How do you know how far the Perfectly Matched Layer (PML) should be from your structure? If you get a result with a time step of 0.1 fs, is it accurate enough? Those features that appear in the field plot: are they spurious modes, numerical artefacts, or something physical that needs investigation?

The list of questions that can be settled by benchmarking is very long. The procedure is to first model a structure for which you already know the parameter to be measured/calculated, then fiddle with the software settings until you know which values give you a reasonably accurate solution. Only then start modelling the new structure for which you don’t know the answer, and look for convergence of the results with the parameter settings determined from the benchmarking.
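As a minimal sketch of that workflow in Python: the snippet below wraps a hypothetical solver call (solve_neff, whose name and signature are invented here, not any package’s real API) in a mesh-refinement loop and accepts the settings once the effective index stops changing within a chosen tolerance.

```python
# `solve_neff(mesh_size)` stands in for a call to whatever mode solver you use;
# its name and signature are hypothetical.
def converge_neff(solve_neff, mesh_sizes, rel_tol=1e-6):
    """Refine the mesh until the effective index changes by less than rel_tol."""
    previous = None
    for h in mesh_sizes:                               # coarsest to finest mesh
        neff = solve_neff(h)
        if previous is not None:
            change = abs(neff - previous) / abs(previous)
            print(f"mesh {h}: n_eff = {neff:.8f}  (relative change {change:.2e})")
            if change < rel_tol:
                return neff, h                         # settings to reuse later
        else:
            print(f"mesh {h}: n_eff = {neff:.8f}")
        previous = neff
    raise RuntimeError("Effective index did not converge over the meshes tried")
```

Run this first against a benchmark structure with a known effective index; the mesh and tolerance that pass there are the ones worth carrying over to the structure you actually care about.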

– Experiments and simulation: the meeting point! Modelling is often a supporting activity to actual experiments; in some cases (where the technology is not advanced enough, see my post on the Science of Haute Coutour… for an example) simulation is the more feasible option. For the former, modelling is used most effectively when the numerical experiment set up in the software corresponds to the actual physical experiment. Some simple considerations that may arise:

a) Physically, it might be possible to keep the lens/detector/source 10 mm away, but in the simulation is it feasible to place components that far apart? Will the simulation run till God is old? If we reduce the distance, does the physics change in the simulation? Are we still running the same experiment that is taking place on the bench?

b) Let’s say the data needed from the simulation is spectral (wavelength dependence) with a specific dλ, but the software is time-domain in nature. So when we use the DFT algorithm in the software to convert to wavelength, how many simulation points in the time window are needed to get the required dλ separation? (See the sketch after this list.)

c) Has the solution converged? I can’t stress this enough: when running simulations it’s important to identify whether the results are acceptable or not. For example, when calculating the modal effective index with a particular mesh, how does this value change as the mesh is refined? What is the error on any value you quote, so that you can state its accuracy to within some percentage?
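For example (b) above, a back-of-the-envelope Python sketch like the following relates the required dλ to the DFT frequency resolution, and hence to the length of the time window and the number of time steps. The wavelength, resolution and time-step values are illustrative only, not taken from any real run.

```python
C = 2.99792458e8            # speed of light, m/s

lam = 1.55e-6               # centre wavelength, m
d_lam = 0.1e-9              # required wavelength resolution, m (0.1 nm)
dt = 0.1e-15                # simulation time step, s (0.1 fs)

d_f = C * d_lam / lam**2    # frequency spacing corresponding to d_lam near lam
T_window = 1.0 / d_f        # DFT resolution is roughly 1 / (time-window length)
n_steps = T_window / dt     # time samples needed at this time step

print(f"frequency resolution needed : {d_f:.3e} Hz")
print(f"time window needed          : {T_window * 1e12:.1f} ps")
print(f"time steps at dt = 0.1 fs   : {n_steps:.2e}")
```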

– Post-processing and visualization. Most software allows excellent and easy post-processing of results. However, when you need to calculate quantities that are not predefined in the software (field overlap integrals, confinement factors, field gradients, etc.), it may be possible to use ‘scripting’ or coding within the software. A user-written script can manipulate the fields/quantities calculated by the software, but care has to be taken to fully understand what the software has computed and how those quantities can be used. It is equally important to understand what the user can’t change or manipulate: since most software will not allow access to the actual code, users can’t get down and dirty with the real beating heart of the software/technique.

a) For example, if the software solver gives you the E field and you want the Poynting vector, you need to obtain the H field first.
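A hedged sketch of that post-processing step in Python: the time-averaged Poynting vector is S = ½ Re(E × H*). If only E is exported, the plane-wave relation used below to estimate H is an approximation valid in a homogeneous medium, not in a general structured device (where you would need H from the solver itself or from ∇×E).

```python
import numpy as np

ETA0 = 376.730313668          # impedance of free space, ohms

def poynting_time_avg(E, H):
    """Time-averaged Poynting vector S = 0.5 * Re(E x H*) for complex phasor fields.
    E, H: arrays of shape (..., 3)."""
    return 0.5 * np.real(np.cross(E, np.conj(H)))

def plane_wave_H_from_E(E, n=1.0, k_hat=(0.0, 0.0, 1.0)):
    """Crude stand-in: H from E assuming a plane wave along k_hat in index n."""
    k_hat = np.asarray(k_hat, dtype=float)
    return (n / ETA0) * np.cross(k_hat, E)

# Illustrative x-polarised plane wave of unit amplitude travelling along z:
E = np.array([1.0 + 0.0j, 0.0, 0.0])
H = plane_wave_H_from_E(E)
print(poynting_time_avg(E, H))   # ~[0, 0, 1.33e-3] W/m^2: power flows along z
```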

All in all, the variety of software available to the user is immense, and performance as well as cost can vary quite a bit. It would be a sad waste not to fully understand and optimize the use of a £5000 purchase, or to get erroneous results!

To the Capital and the Capitol!

In this post I want to try and capture all the incredible stuff that was part of or related to the OSA Leadership Winter Conference.

In summary the stuff I loved:

–          the conference

–          the people

–          the plenaries

–          DC!!!

The conference:

Held once a year in February in Washington DC (which one could say is not only the capital of the United States but also of OSA, whose HQ is based in the city), this conference is an essential part of the nuts-and-bolts machinery of OSA. As members we don’t often think of how all the OSA activities get organized, planned and delivered. Many of the activities or services we see directly, but there is a ton of stuff that goes on behind the scenes to make all that happen. And all of it takes some doing.

To manage its many publications, meetings, awards, student chapters, local sections, outreach, member services, website, public engagement, and much more, OSA has several committees/councils, each of which has oversight of a specific service. Members of these committees are volunteers and OSA employees. Through various meetings (including this one) each committee sets goals and how to achieve them, reviews progress and outcomes, and debates new challenges and opportunities facing OSA. A big chunk of the work is done by the OSA personnel (who are fantastic! I should know now, since I had the chance to meet some of them face to face).

When I was invited to collect my award as the OSA Young Professional for the year and attend the conference, I was a bit worried. Given its non-technical nature, I was skeptical about how much I’d enjoy it.

Was I wrong!

The sessions I attended on the Member Education Services (MES) Council, the Publications Council and the Public Policy Engagement Council were absolutely fascinating. To see the workings of the committees that produce the journals I read and submit papers to was cool. Hearing the debates on how to improve opticsinfobase for users, how the performance of OSA journals compares with others’, and the capabilities of the enhanced HTML files made me think, for the first time, not about the research itself but about what goes into making that research accessible in the way OSA does. I missed out on the sessions of the International Council, which has oversight of how OSA functions as a global body.

The running theme throughout the conference for me: it was all about learning how OSA does what it does.

This experience brought home that every scientific body, to differing degrees, influences policy and trends in science and technology. Which technical areas it prioritises influences members (their research) and journals (what gets published). How it delivers its services, and the mechanisms it uses, determine the degree of access for different user groups. Its engagement with the public, policy makers and industry can give it direct influence. For large global bodies the challenge lies in meeting the needs of a diverse membership spread across the world. The sheer logistics of managing all this are immensely demanding.

But it’s never just the business, is it?

I met so many fantastic people: volunteers from South Africa, Chile, Peru, Korea, and Japan… I think some of my most enjoyable conversations were with members of the Library committee (professional librarians who are not OSA members or optics researchers). I met a librarian from MIT, and we had fun talking about open-access publication and the future of the gold route (you may have read my post: Open access: needs more work!). I met a statistician who works on membership data for APS/AIP, and we spoke about how important modelling is for science (yes, especially photonics).

I briefly spoke to Donna Strickland, the president of OSA. It was incredibly inspiring to see a female scientist reach the top in the profession and be recognized.  If she can do it, I guess many of us can too!

The surprise package though was the plenary talks.

I had expected serious, technical plenaries on some hot area of optics. We got serious and exciting talks, for sure. We had Marc Kaufman, a senior journalist and reporter for the Washington Post, talk about the search for extraterrestrial life. The images he projected from the Curiosity mission, and even of the work scientists are doing to find life on Earth in extreme conditions (that might mimic some places in space, on other planets, etc.), were fascinating. My childhood ambition was to be an astronaut, so this talk really hit the spot for me. Perhaps more so because I was not expecting something like it.

And yes, DC itself.

The capital is a beautiful city, well connected by metro, giving visitors (who can’t drive) access to its many attractions. Over the weekend that I stayed in DC after the conference, I chose to do a ‘highlights of the East Building’ tour at the National Gallery of Art and a walking tour of the Mall (DC by Foot). Both were free and, for someone short on time, an excellent way to sample some of the good things DC has to offer. With the gallery tour I got to see some of the best pieces of art in the gallery and learn about them. On the walking tour I learned about American history: seeing some of the key memorials that mark the birth of a nation, the actions of men such as Washington, Lincoln, Jefferson, King and others, and the results of those actions was a great experience.

I hope to return to the city. I hope to return to the conference! And I hope that in some way I can apply what I learned in the city to my life, in the words of Dr. Martin Luther King Jr.: ‘Out of the mountain of despair, a stone of hope.’

 

Just another manic…

Monday, week, month, year…?

The way I feel right now, it seems that life has been manic for a very long time. I have postponed my doctor’s appointment twice due to lack of time.  The diary always seems to be full and each day I diligently work on many things, yet the ‘things to do’ list is unending. I feel mentally tired!

Now, I am not writing this simply to moan. I guess many of you feel this way too. So I wonder: how do people manage to relax, switch off the mind when going to bed, and regain mental and physical energy?

Life feels like a treadmill that one can’t get off. It’s not all bad: lots of exciting work, adrenaline, enjoyment, new relationships, challenges… Yet all these things, along with the tough stuff, can tend to take over.

For me, exercise is a way to refresh the mind and unwind: when my body feels better my mind tends to follow. Meeting friends helps, and I get a lot of energy from good conversation and laughter with people I like. One of my favourite ways to unwind is travel, and I’ll post soon about an upcoming trip!

Stuff that I haven’t tried but have heard of includes

Gardening (it’s the good bacteria in the soil that apparently does the trick)

– Acupuncture/acupressure (though I’m real scared of needles!)

– Avoiding junk food and over-processed stuff like chocolates (though I don’t know if I could survive without choccies…)

What do you do that works to decompress?