Abstract
The analysis of thin sections for lithology identification is a staple technique in geology. Although recent advances in deep learning have driven the development of thin section recognition models built on a variety of deep neural networks, a substantial gap remains in identifying ultra-fine-grained thin section types. Vision Transformer models, which outperform convolutional neural networks (CNNs) in fine-grained classification tasks, remain underexploited, especially when sample sets are small and highly similar. To address this, we incorporated a dynamic sparse attention mechanism and tailored the structure of the Swin Transformer network. We first applied a region-to-region (R2R) approach to preserve key regions at the coarse-grained level, which reduced the global information loss caused by the original model's local window mechanism and improved training efficiency on scarce samples. This was then fused with deep convolution, and a token-to-token (T2T) attention mechanism was introduced to extract local features from these regions, enabling fine-grained classification. In comparison experiments, our approach surpassed several state-of-the-art models in accuracy, precision, recall, and F1-score. Our method also generalized well in experiments on datasets beyond the original one. Despite this progress, several open issues warrant further exploration. In particular, the adaptability of the method to different rock types, and to their distribution under varying sample sizes, deserves deeper investigation. This line of inquiry is expected to yield more capable tools for future geological studies, thereby broadening the scope and impact of our research.
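The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical sketch of how a bi-level dynamic sparse attention block of this kind could be assembled in PyTorch: a coarse region-to-region (R2R) step routes each query region to its top-k most relevant regions, a fine token-to-token (T2T) step then attends only over tokens gathered from those regions, and the result is fused with a depth-wise convolution branch for local detail. All names and hyperparameters (DynamicSparseAttention, region_size, top_k, the choice of depth-wise convolution) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a bi-level dynamic sparse attention block (R2R routing + T2T attention),
# assuming a BiFormer-style design; names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class DynamicSparseAttention(nn.Module):
    def __init__(self, dim, num_heads=4, region_size=7, top_k=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.region_size = region_size
        self.top_k = top_k
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Depth-wise convolution branch for local (fine-grained) enhancement.
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x):
        # x: (B, H, W, C) feature map; H and W must be divisible by region_size.
        B, H, W, C = x.shape
        r = self.region_size
        nH, nW = H // r, W // r
        nR, tR = nH * nW, r * r  # number of regions, tokens per region

        # Project to q, k, v and partition the map into non-overlapping regions.
        qkv = self.qkv(x).reshape(B, nH, r, nW, r, 3 * C)
        qkv = qkv.permute(0, 1, 3, 2, 4, 5).reshape(B, nR, tR, 3 * C)
        q, k, v = qkv.chunk(3, dim=-1)  # each (B, nR, tR, C)

        # --- R2R routing: coarse, region-level affinity keeps only top-k regions ---
        q_r = q.mean(dim=2)  # (B, nR, C) region-level queries
        k_r = k.mean(dim=2)  # (B, nR, C) region-level keys
        affinity = q_r @ k_r.transpose(-2, -1)                 # (B, nR, nR)
        topk_idx = affinity.topk(self.top_k, dim=-1).indices   # (B, nR, top_k)

        # Gather keys/values from the routed regions for every query region.
        idx = topk_idx[..., None, None].expand(-1, -1, -1, tR, C)
        k_g = torch.gather(k.unsqueeze(1).expand(-1, nR, -1, -1, -1), 2, idx)
        v_g = torch.gather(v.unsqueeze(1).expand(-1, nR, -1, -1, -1), 2, idx)
        k_g = k_g.reshape(B, nR, self.top_k * tR, C)
        v_g = v_g.reshape(B, nR, self.top_k * tR, C)

        # --- T2T attention: fine, token-level attention within routed regions ---
        def split_heads(t):  # (B, nR, N, C) -> (B, nR, heads, N, head_dim)
            return t.reshape(B, nR, t.shape[2], self.num_heads, self.head_dim).transpose(2, 3)

        qh, kh, vh = split_heads(q), split_heads(k_g), split_heads(v_g)
        attn = (qh @ kh.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        out = (attn @ vh).transpose(2, 3).reshape(B, nR, tR, C)

        # Undo the region partition back to a (B, H, W, C) feature map.
        out = out.reshape(B, nH, nW, r, r, C).permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

        # Fuse the sparse-attention output with the depth-wise convolution branch.
        local = self.dwconv(x.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        return self.proj(out + local)


if __name__ == "__main__":
    block = DynamicSparseAttention(dim=96, num_heads=4, region_size=7, top_k=4)
    feat = torch.randn(2, 28, 28, 96)  # e.g. a Swin-style stage feature map
    print(block(feat).shape)           # torch.Size([2, 28, 28, 96])
```

The intent of the two-level design is that the R2R step prunes irrelevant regions cheaply at coarse granularity, so the quadratic T2T attention is paid only over the few regions that matter, which is what makes such a block attractive when samples are scarce and classes are highly similar.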