Publishing a paper with simulation results

Having handled a great many papers presenting (or containing some) simulation results, as a reviewer, an editor (of IEEE Photonics Journal and JEOS RP) and conference chair of OWTNM 2015, I thought I'd point out a few (hopefully) useful things I've picked up.

Some pointers are general and apply to any paper; others are more specific to simulation-related papers.

General:

– when suggesting potential reviewers for your manuscript, make sure the email addresses of these people are entered correctly! I cannot tell you how frustrating it is as an editor to find that this information is inaccurate and preferred reviewers cannot be contacted.

– do not suggest reviewers from your own research group or institution, to avoid a potential conflict of interest

– make sure your own inbox doesn’t bounce back emails from the editor!

Simulation related:

– for papers that present new algorithms and methods, it is imperative to convince reviewers that the method works. The best way to do so is to show test results for the method on problems for which analytical or well-known solutions exist (the first sketch after this list shows a minimal example of this kind of check). The more standardised and widely accepted the test problem, the more credibility it lends your method. Benchmarking against well-cited published results can also be quite helpful. Do this before applying the method to a new structure and presenting those results.

– show tolerance and stability behaviour: since the method will have a number of parameters, it is a good idea to show how the error/accuracy varies as a function of these parameters (the second sketch after this list gives a simple example). How stable the method remains as the parameters change is a good indicator of its robustness.

– if the method has limitations, it is often worth discussing them, as this helps define the range of applicability of the method.
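To make the first point concrete, here is a minimal sketch (in Python, with every value chosen purely for illustration) of benchmarking a numerical solver against a problem with a known analytical answer: a second-order finite-difference discretisation of -d²/dx² on a string fixed at both ends, whose exact eigenvalues are (nπ/L)². This is not any particular published method, just the general pattern of comparing numerical output to an exact result.

```python
import numpy as np

# Minimal sketch (all values assumed): benchmark a second-order finite-difference
# solver for -d^2/dx^2 on a string fixed at both ends against its analytic
# eigenvalues (n*pi/L)^2, a standard test problem with a known solution.
length = 1.0             # domain length L
N = 200                  # number of interior grid points
h = length / (N + 1)     # grid spacing

# Tridiagonal discretisation of -d^2/dx^2 with Dirichlet boundaries
main = np.full(N, 2.0) / h**2
off = np.full(N - 1, -1.0) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

numerical = np.sort(np.linalg.eigvalsh(A))[:5]
exact = (np.arange(1, 6) * np.pi / length) ** 2

for n, (num, ex) in enumerate(zip(numerical, exact), start=1):
    print(f"mode {n}: numerical {num:10.4f}  exact {ex:10.4f}  "
          f"rel. error {abs(num - ex) / ex:.2e}")
```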
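And for the second point, a rough sketch (test problem and numbers again assumed) of reporting error as a function of a method parameter: the error of a central-difference derivative of sin(x) at x = 1 versus the step size h. The error first shrinks as h², then grows again once round-off dominates, which is exactly the kind of accuracy-versus-parameter curve worth showing.

```python
import numpy as np

# Rough sketch (test problem assumed): error of a central-difference derivative
# of sin(x) at x = 1 as a function of the step size h. The exact derivative is
# cos(1). The error first shrinks as h**2, then grows again once round-off
# dominates, so there is a "sweet spot" in h worth reporting.
x0 = 1.0
exact = np.cos(x0)

for h in [10.0 ** (-k) for k in range(1, 13)]:
    approx = (np.sin(x0 + h) - np.sin(x0 - h)) / (2.0 * h)
    rel_err = abs(approx - exact) / abs(exact)
    print(f"h = {h:.0e}   relative error = {rel_err:.2e}")
```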

All of this gives reviewers and editors useful information when deciding whether to accept a manuscript: how technically sound it is, how useful it will be to readers, how widely it will be read, and how well it is presented.

Benchmarking: test drive your code/software

Hello all!

It has been a long time since I blogged. I have been extremely busy with the organisation of OWTNM 2015; paper submission has just closed. The good news is that this year OWTNM will feature a training session with Lumerical (free to attendees) and also a free Women in Optics workshop (with lunch). The technical papers and sessions will be up on the site soon. Do attend if you can!

But now on to a blog post! Why benchmark, and what is it?

Would you buy a car without a thorough test drive?

If the answer is no, then read on…

First, the why: benchmarking helps you make the best use of your simulation software/code.
It allows you to:
– generate results that are reliable and repeatable, giving referees (and you) confidence in them. When you include some benchmark results in a paper, for example, reviewers are more likely to trust your findings. Other authors who want to use the same approach (and perhaps cite your work) can then repeat your work and obtain the same results, so they know they are on the right track with the technique/structure. This lets your results be used and cited more widely.

– learn the sweet spots and limits of the software/code: you would not expect a car to fly like an aeroplane, or a bicycle to be ridden as fast as a bullet train. Similarly, the software/code will perform best over a certain range of input parameters. It is worth knowing, first of all, which problems the software can handle at all (time-domain problems, for example, need a method from the FDTD/FETD/time-domain BPM class; a simple mode solver will not do). Then, having identified the nature of the problem and an appropriate method/software for it, comes the choice of parameter values.

The idea is to identify how the error in the results depends on the parameter values. How large a time step, or propagation step, can you take while keeping the error acceptable? What index contrast (in the structure) can the software handle accurately? How many grid/mesh points are needed to represent the structure with sufficient accuracy while remaining fast enough?
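As a toy illustration of the time-step question (everything here is assumed, and far simpler than a real FDTD or BPM run), consider explicit Euler applied to dy/dt = -λy, whose exact solution is exp(-λt). Sweeping the time step shows the error growing with Δt, and the scheme going unstable once Δt exceeds 2/λ:

```python
import numpy as np

# Toy illustration (all numbers assumed): explicit Euler for dy/dt = -lam*y with
# y(0) = 1, whose exact solution is exp(-lam*t). The error grows with the time
# step dt, and the scheme goes unstable once dt exceeds 2/lam (= 0.2 here).
lam = 10.0
t_end = 1.0
exact = np.exp(-lam * t_end)

for dt in [0.001, 0.01, 0.05, 0.1, 0.2, 0.25, 0.5]:
    n_steps = int(round(t_end / dt))
    y = 1.0
    for _ in range(n_steps):
        y += dt * (-lam * y)
    print(f"dt = {dt:5.3f}   y(t_end) = {y: .4e}   exact = {exact:.4e}   "
          f"error = {abs(y - exact):.2e}")
```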

Benchmarking helps you learn:
Can you trust the results you get (for a new, not yet well understood structure)? Are the parameter values too large, too small, or just right? What sort of error should you expect?

These are the sorts of questions worth answering before you embark on simulating a new structure. Which brings me to what benchmarking is and how to do it…

Benchmarking, in my opinion, is simply reproducing the results of a known problem with your software/code, in order to test its accuracy.

How should you benchmark:
– As a rule of thumb, pick a problem/structure for which the results are well known, ideally one with an analytical solution. (You will of course have made sure that the problem you are trying to solve and the solver/software you are using are right for each other!)
– Then try to reproduce the well-known results using the published parameter values. Do these match well?
– If the first two steps make you happy, this third one is worth doing. Fix all input parameters except one, vary that parameter, and plot and tabulate the results (and hence the error) as a function of it. Do this for all the key parameters (a sketch of this step follows below).
This step is critical for identifying the parameter values you can use and still get results you can trust.
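Here is a minimal sketch of this third step. The "solver" below is just a stand-in midpoint-rule integrator, and every name and number is assumed; swap in your own solver and benchmark problem. The point is the pattern: hold all parameters fixed except one, sweep it, and tabulate the error against the known reference value.

```python
import numpy as np

def run_simulation(n_points, domain_length=np.pi):
    # Stand-in "solver": midpoint-rule estimate of the integral of sin(x)
    # over [0, domain_length]; the analytic answer for domain_length = pi is 2.
    h = domain_length / n_points
    x = (np.arange(n_points) + 0.5) * h
    return float(np.sum(np.sin(x)) * h)

reference = 2.0                            # known analytical result
fixed = {"domain_length": np.pi}           # every other parameter held constant

print(f"{'n_points':>10} {'result':>12} {'rel. error':>12}")
for n_points in [10, 20, 50, 100, 200, 500]:   # the one parameter being swept
    result = run_simulation(n_points, **fixed)
    rel_err = abs(result - reference) / reference
    print(f"{n_points:>10} {result:>12.6f} {rel_err:>12.2e}")
```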

Remember that with fabrication, tolerances are important. Experimental colleagues, reviewers, grant funders and manufacturers all want to know what happens to the performance if a parameter value changes by, say, 5%.
You need to know the answer to that, and you also need to know how much error is in the simulation itself. If the simulation error is 5%, then the effect of the tolerance has to be larger than 5% for the results to be useful!
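A toy check of that last point (the performance function and all the numbers are invented purely for illustration): compare the performance change caused by a 5% parameter shift with the estimated simulation error. If the change is buried in the error, the simulation needs refining before the tolerance study tells you anything.

```python
def performance(width_um):
    # Invented stand-in figure of merit; not a real device model.
    return 1.0 / (1.0 + (width_um - 2.0) ** 2)

sim_rel_error = 0.05               # assumed relative error of the simulation
w_nominal = 1.8                    # assumed nominal width in micrometres

p0 = performance(w_nominal)
p1 = performance(1.05 * w_nominal) # width increased by 5%
rel_change = abs(p1 - p0) / p0

print(f"relative change from a 5% width shift: {rel_change:.1%}")
print("resolvable above the simulation error" if rel_change > sim_rel_error
      else "buried in the simulation error: refine the simulation first")
```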

Benchmarking can seem like a laborious and boring task. But it is very much worth it. Do it properly once and then you can really use the software to the max!