Advanced Deep Learning Architectures for Automated Silicon Wafer Defect Detection with Synthetic Data Augmentation

Kakarla Deepti*
Department of Electronics and Communication Engineering, Vasavi College of Engineering, Hyderabad, India.
Periodicity: October - December 2025

Abstract

The semiconductor industry requires defect-free silicon wafers for integrated circuit fabrication, as even minor flaws can reduce yield and cause significant financial losses. Traditional inspection methods, such as rule-based image processing and manual checks, are time-consuming, error-prone, and inflexible. This study proposes a deep learning framework for automatic wafer defect classification that combines advanced CNN models with generative data augmentation to address class imbalance and improve accuracy. The WM-811K dataset of 811,457 wafer maps was reorganized into four classes: Redundant, Crystal, Mechanical, and Defect-Free. Three baseline models (WDD-Net, MobileNet-V2, and VGG-16) were evaluated first, with VGG-16 reaching 80% accuracy. Further experiments using deeper models (VGG-19, GoogleNet) and StyleGAN-based augmentation improved performance, especially for rare defect types. GoogleNet achieved a good balance of accuracy and efficiency, while MobileNet-V2 gave the highest accuracy (92.42%) and recall (92.41%). VGG-19 also showed strong generalization (F1-score: 90.41%), demonstrating that deep CNNs combined with GAN-based augmentation are effective for wafer defect detection.
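The four-class reorganization of WM-811K described above amounts to a label-remapping step over the dataset's original fine-grained failure types. The grouping below is purely illustrative: the abstract does not specify how the original labels were assigned to the Redundant, Crystal, Mechanical, and Defect-Free categories, so the mapping shown is an assumption, not the study's actual scheme.

```python
# Illustrative remapping of WM-811K fine-grained failure types into the
# four coarse classes used in the study. NOTE: this particular grouping
# is hypothetical; the paper's actual label-to-class assignment is not
# given in the abstract.
CLASS_MAP = {
    "Center": "Crystal",
    "Donut": "Crystal",
    "Edge-Loc": "Mechanical",
    "Edge-Ring": "Mechanical",
    "Loc": "Mechanical",
    "Scratch": "Mechanical",
    "Random": "Redundant",
    "Near-full": "Redundant",
    "none": "Defect-Free",
}

def remap(labels):
    """Map a list of fine-grained WM-811K labels to the four coarse classes."""
    return [CLASS_MAP[label] for label in labels]
```

In practice this remapping would be applied to the label column of the wafer-map dataset before training, so that each CNN sees only the four target classes; the GAN-based augmentation then oversamples whichever coarse classes remain underrepresented.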

Keywords

Silicon Wafer Defect Detection, Convolutional Neural Networks, GoogleNet, VGG-19, StyleGAN, Semiconductor Manufacturing.

How to Cite this Article?

Deepti, K. (2025). Advanced Deep Learning Architectures for Automated Silicon Wafer Defect Detection with Synthetic Data Augmentation. i-manager’s Journal on Electronics Engineering, 16(1), 31-41.
