Having dealt with tons of papers presenting (or at least containing some) simulation results, as a reviewer, as an editor (of IEEE Photonics Journal and JEOS RP) and as conference chair of OWTNM 2015, I thought I’d point out a few (hopefully) useful things that I’ve picked up.
Some pointers are general and applicable to any paper; others are more specific to simulation-related papers.
– when suggesting potential reviewers for your manuscript, make sure their email addresses are entered correctly! I cannot tell you how frustrating it is, as an editor, to find that this information is inaccurate and the preferred reviewers cannot be contacted.
– do not suggest reviewers from your own research group or institution, to avoid a potential conflict of interest.
– make sure your own inbox doesn’t bounce back emails from the editor!
– for papers that present new algorithms or methods, it is imperative to convince reviewers that the method works. The best way to do so is to show test results for the method against problems for which analytical or well-known solutions exist. The more standardised and widely accepted the test problem, the more credibility it lends your method. Benchmarking against results from other well-cited work can also be quite helpful. Do this before applying the method to a new structure and presenting those results.
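As a toy illustration of this kind of validation (not tied to any particular photonics solver, and using the composite trapezoidal rule purely as a stand-in for "your method"), one can test a numerical routine against a problem with a known closed-form answer and report the error:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Test problem with a known analytical answer: the integral of
# sin(x) over [0, pi] is exactly 2.
exact = 2.0
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
print(f"approx = {approx:.8f}, error = {abs(approx - exact):.2e}")
```

The same pattern applies to a mode solver or beam-propagation code: pick a structure with an analytical solution (or a standard benchmark geometry), compute the quantity of interest, and quote the deviation from the known value.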
– show tolerance and stability behaviour: since the method will have a number of parameters, it is a good idea to show how the error/accuracy varies as a function of those parameters. The stability of the method with respect to its parameters is a good indicator of its robustness.
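A minimal sketch of such a convergence study, again using the trapezoidal rule on a problem with a known answer as a hypothetical stand-in for the method under review: halve the discretisation parameter repeatedly and watch the error. For a second-order method, each halving should cut the error by roughly a factor of four.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

exact = 2.0  # integral of sin(x) over [0, pi]
prev_err = None
for n in (10, 20, 40, 80, 160):
    err = abs(trapezoid(math.sin, 0.0, math.pi, n) - exact)
    # ratio ~ 4 between successive rows confirms O(h^2) convergence
    ratio = prev_err / err if prev_err else float("nan")
    print(f"n = {n:4d}  error = {err:.3e}  ratio = {ratio:.2f}")
    prev_err = err
```

A table or log-log plot of exactly this kind, with the discretisation step (or basis size, or tolerance) on one axis and the error on the other, is what reviewers look for when judging the stability and robustness claims.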
– if the method has limitations, it is often worth discussing them: doing so helps define the method's domain of applicability.
These all constitute useful information for reviewers and editors when deciding whether to accept a manuscript: how technically sound it is, how useful it will be to readers, how widely it will be read, and how well presented it is.