The promising results of deep learning in image recognition suggest a huge potential for microscopic analyses in materials science. One major challenge for its adoption in the study of materials is the limited number of images available to train models on. Herein, we present a methodology to create accurate image recognition models with small datasets. By explicitly taking the magnification into account and by introducing appropriate transformations, we incorporate as many insights from materials science into the model as possible. This allows for highly data-efficient training of complex deep learning models. Our results indicate that a model trained with the presented methodology is able to outperform human experts.
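The abstract does not spell out what "taking the magnification into account" or the "appropriate transformations" look like in practice. As an illustration only, one common reading is to resample every micrograph to a shared physical pixel size and to augment with symmetry operations (rotations and flips) that leave the microstructure statistics unchanged. The function names, the grayscale NumPy image format, and the target pixel size below are assumptions for this sketch, not the paper's actual pipeline:

```python
import numpy as np

def resample_to_common_scale(img, um_per_px, target_um_per_px=0.5):
    """Nearest-neighbour resampling so all images share one physical pixel size.

    um_per_px: physical size of one pixel in the input micrograph (micrometres).
    target_um_per_px: hypothetical common scale every image is mapped onto.
    """
    scale = um_per_px / target_um_per_px
    h, w = img.shape
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    rows = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]

def d4_augment(img, rng):
    """Random element of the D4 symmetry group: a 90-degree rotation
    multiple plus an optional vertical flip. Valid for microstructures
    whose statistics are invariant under these operations."""
    img = np.rot90(img, k=int(rng.integers(4)))
    if rng.integers(2):
        img = np.flipud(img)
    return img

# Usage sketch: normalise scale first, then draw random augmentations.
rng = np.random.default_rng(0)
micrograph = rng.random((100, 100))          # placeholder image
common = resample_to_common_scale(micrograph, um_per_px=1.0)
augmented = d4_augment(common, rng)
```

Because each augmented view is a physically plausible micrograph of the same material, the effective training-set size grows without collecting new images, which is one way such a model can be trained data-efficiently.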
The microstructure of a material, typically characterized through a set of microscopy images of two-dimensional cross-sections, is a valuable source of information about the material and its properties. Every pixel of the image is a degree of freedom, causing the dimensionality of the information space to be extremely high. This makes it difficult to recognize and extract all relevant information from the images. Human experts circumvent this by manually creating a lower-dimensional representation of the microstructure; however, the question of how a microstructure image is best represented remains open. From the field of deep learning, we present triplet networks as a method to build highly compact representations of the microstructure, condensing the relevant information into a much smaller number of dimensions. We demonstrate that these representations can be created even with a limited number of example images, and that they are able to distinguish between visually very similar microstructures. We discuss the interpretability and generalization of the representations. With compact microstructure representations, it becomes easier to establish the processing–structure–property links that are key to rational materials design.
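The core of a triplet network is the triplet loss: an embedding network maps each image to a low-dimensional vector, and the loss pulls an anchor toward a positive example (same microstructure class) while pushing it away from a negative one (different class) by at least a margin. The sketch below shows only this loss on precomputed embedding vectors; the embedding network itself, the margin value, and the toy vectors are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors.

    Zero when the negative is already farther from the anchor than the
    positive by at least `margin`; positive otherwise, driving training.
    """
    d_pos = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)

# Usage sketch with hypothetical 2-D embeddings of three micrographs:
a = np.array([0.0, 0.0])   # anchor
p = np.array([0.1, 0.0])   # same microstructure class
n = np.array([2.0, 0.0])   # different microstructure class
loss = triplet_loss(a, p, n)   # small: triplet is already well separated
```

Training on many such triplets is what condenses the relevant microstructural information into a compact vector, since only distances between embeddings enter the loss.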