Today, while reading up on solar cells, I came across something that really struck a chord with me:
In the September 2014 issue of Nature Photonics, Zimmermann et al. had a commentary piece titled “Erroneous efficiency reports harm organic solar cell research” on page 669.
The authors argued that the mischaracterization of solar cell power conversion efficiencies, and the inconsistent data being published in scientific journals in the field, is particularly harmful for the area. The race to get the best results into the journals with the highest impact factors has, in part, led to people being less careful about incorrect measurements and poor reporting.
The danger, as such articles proliferate, is that the reported data becomes unreliable and one no longer knows which data or papers to trust. The progress of the field as a whole is hampered.
Having data and results that can be trusted, reproduced, and verified is a must for scientific research. In some cases, the methods to be used for characterization are clearly laid out, and researchers can follow these and/or conduct standardized tests and measurements to demonstrate the veracity of their results. This instills confidence in readers about the work and should positively impact its citation too.
Obviously, such issues are not confined to one field alone. In numerical modeling, as I have said in previous posts, benchmarking a new technique against existing test cases or analytical solutions is a must!
Still, the sheer number of papers reporting results that overestimated performance was quite a shock!
From now on, I am going to be even more rigorous about my own results, as well as those in the papers I review and edit!