K. Lejaeghere, V. Van Speybroeck
Center for Molecular Modeling (CMM) and Department of Materials Science and Engineering (DMSE), Ghent University, Belgium.
Bringing DFT codes back to the testbench: what did we learn?
Density-functional theory (DFT) codes are used ever more often for materials engineering applications. As a matter of good engineering practice, materials engineers need to know the error bar on DFT predictions. One thing they take for granted is that two independently written DFT codes make 'identical' predictions in identical situations.
As soon as one looks a little closer at this assumption, however, many questions pop up. If this is really straightforward, why are there no papers in the literature that document it? It is easy (?) to define 'identical situations', but how does one define 'identical predictions'? Which kinds of disagreement are acceptable, and which are not? And so on. The embarrassing truth is that there are no good quantitative data comparing the relative precision of the many independent DFT codes. This hinders the broader adoption of DFT in the engineering community.
In this contribution, we will report on an ongoing community-wide effort in which about 45 code developers and expert users accepted the challenge of running the same benchmark set with different DFT codes, pseudopotential sets, or PAW projectors. The goal is to quantitatively assess the differences between the more than 30 data sets obtained in this way. Are there 'good' and 'bad' codes/methods? Have codes become 'better' over time or not? Does it make a difference which code you choose to use?
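The abstract does not specify how the inter-code differences are quantified. A minimal sketch of one plausible approach, assuming (hypothetically) that each code produces an energy-volume curve for the same crystal and that discrepancy is measured as an RMS energy difference over a shared volume range:

```python
import numpy as np

def delta_like(volumes, e_code_a, e_code_b):
    """RMS energy difference (per atom) between two codes' E(V) curves
    on a shared volume grid -- an illustrative stand-in (assumption) for
    the actual comparison metric used in the benchmark."""
    # Shift each curve to its own minimum so that only the *shape* of the
    # equation of state is compared, not an arbitrary energy zero.
    a = np.asarray(e_code_a) - np.min(e_code_a)
    b = np.asarray(e_code_b) - np.min(e_code_b)
    diff = a - b
    # Average the squared difference over the volume interval (trapezoidal
    # rule), then take the square root to get an RMS value.
    v = np.asarray(volumes)
    return np.sqrt(np.trapz(diff**2, v) / (v[-1] - v[0]))

# Toy data: two slightly different quadratic E(V) curves standing in for
# the output of two hypothetical codes.
v = np.linspace(15.0, 20.0, 21)       # volumes, e.g. in A^3/atom
e_a = 0.050 * (v - 17.5) ** 2         # 'code A' energies, eV/atom
e_b = 0.052 * (v - 17.4) ** 2         # 'code B' energies, eV/atom

print(delta_like(v, e_a, e_a))        # identical curves: exactly zero
print(delta_like(v, e_b, e_a))        # small but nonzero discrepancy
```

A single number like this makes the comparison across dozens of data sets tractable: two codes agree well when the value is small relative to the physically relevant energy scale.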
We will present the overall conclusions from this exercise and discuss ways to make further progress.