Comparative Analysis of Image Interference Resistance of MobileNetV2 and EfficientNet-Lite0 Model Architectures in Classifying Fungal Types
Keywords:
Data Augmentation, Convolutional Neural Network (CNN), EfficientNet-Lite0, Model Robustness, Fungal Classification, Visual Disturbances

Abstract
Convolutional Neural Network (CNN) models for fungal classification often suffer drastic performance degradation when deployed in the real world. This is caused by the gap between "clean" (ideal) training data and field-acquired data, which frequently exhibits visual disturbances such as uneven lighting (brightness), blur, and variations in shooting angle (rotation). This research conducts a comparative analysis to evaluate the robustness of two popular lightweight architectures, MobileNetV2 and EfficientNet-Lite0, against these three types of interference. Both models were trained for 50 epochs and evaluated on test data that was systematically perturbed with brightness adjustment (25% to 150%), Gaussian blur (sigma 1 to 5), and fixed rotation (20° to 300°). The highest accuracy, 73%, was achieved by EfficientNet-Lite0 on clean test data without augmentation. EfficientNet-Lite0 is significantly more robust to lighting variations and rotation: it maintains stable accuracy in low-light conditions and across object orientations, whereas MobileNetV2 suffers a substantial drop in performance. However, both architectures are highly susceptible to Gaussian blur; increasing blur intensity causes severe performance degradation, indicating a strong dependence on sharp textural features. This research concludes that EfficientNet-Lite0 is the more robust and reliable architecture for fungal classification applications under unpredictable field conditions.
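The abstract does not state which tooling was used to generate the perturbed test sets. The sketch below is a minimal illustration, assuming Pillow (`ImageEnhance`, `ImageFilter`), of how the three perturbation sweeps could be produced; the file name and the specific sweep steps are hypothetical, with ranges taken from the abstract (brightness 25%-150%, Gaussian blur sigma 1-5, rotation 20°-300°).

```python
# Minimal sketch (assumed implementation, not the authors' code) of the three
# test-time perturbations described in the abstract, using Pillow.
from PIL import Image, ImageEnhance, ImageFilter


def perturb_brightness(img: Image.Image, factor: float) -> Image.Image:
    """Scale brightness; factor=0.25 corresponds to 25%, factor=1.5 to 150%."""
    return ImageEnhance.Brightness(img).enhance(factor)


def perturb_blur(img: Image.Image, sigma: float) -> Image.Image:
    """Apply Gaussian blur; Pillow's `radius` argument is used as the sigma value."""
    return img.filter(ImageFilter.GaussianBlur(radius=sigma))


def perturb_rotation(img: Image.Image, angle_deg: float) -> Image.Image:
    """Rotate the image by a fixed angle in degrees (counter-clockwise)."""
    return img.rotate(angle_deg, expand=False)


if __name__ == "__main__":
    img = Image.open("fungus_sample.jpg")        # hypothetical test image
    for factor in (0.25, 0.5, 1.0, 1.5):         # brightness sweep (25%-150%)
        perturb_brightness(img, factor)
    for sigma in (1, 2, 3, 4, 5):                # Gaussian blur sweep (sigma 1-5)
        perturb_blur(img, sigma)
    for angle in (20, 90, 180, 270, 300):        # rotation sweep (hypothetical steps, 20°-300°)
        perturb_rotation(img, angle)
```

Each perturbation is applied to the test set only, so the models face inputs they never saw during training; this is one common way to probe robustness to acquisition-time disturbances.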
License
Copyright (c) 2026 Prosiding Seminar Nasional Amikom Surakarta

This work is licensed under a Creative Commons Attribution 4.0 International License.
