Comparative Performance Analysis of SSD-VGG16, SSD-ResNet18, and SSD-MobileNetV2 Architectures for Rice Brown Spot Disease Detection

Authors

  • Muammar Khadafi, Universitas Darussalam Gontor
  • Oddy Virgantara Putra, Universitas Darussalam Gontor

Keywords:

Object Detection, Deep Learning, Brownspot, SSD (Single Shot Detector), Accuracy-Efficiency Trade-Off, MobileNetV2, VGG16, ResNet18

Abstract

Early detection of rice leaf diseases, particularly brown spot, is crucial to prevent yield loss, yet manual inspection is often slow and subjective. This study compares three backbones within the Single Shot Detector (SSD) framework (VGG16, ResNet18, and MobileNetV2) for brown spot detection. We use 1,000 annotated images with bounding boxes, apply preprocessing (300×300 resizing), and train SSD300 in PyTorch on a Kaggle GPU (SGD; learning rate 0.002; momentum 0.9; weight decay 5e-4; batch size 64; 200 epochs). Each model is evaluated five times, reporting mean ± standard deviation for mAP and IoU, along with efficiency metrics (model size and FPS). The results highlight a clear accuracy-efficiency trade-off: SSD-VGG16 is the most accurate (mAP 0.825±0.015; IoU 0.548±0.017) but also the largest and slowest (90.71 MB; 11.53 FPS); SSD-ResNet18 is a balanced middle ground (mAP 0.796±0.015; IoU 0.513±0.024; 55.90 MB; 21.08 FPS); SSD-MobileNetV2 is the most efficient and the only truly real-time option (43.73 MB; 29.42 FPS) with a modest accuracy drop (mAP 0.707±0.018; IoU 0.468±0.012). In short, there is no one-size-fits-all model: VGG16 suits accuracy-first offline analysis, MobileNetV2 fits on-device real-time deployment, and ResNet18 offers a well-balanced compromise.
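The abstract reports IoU as one of the localization metrics. As a minimal sketch of how that metric is computed for a pair of axis-aligned bounding boxes (the exact evaluation code used by the authors is not given here; box format `(x1, y1, x2, y2)` is an assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0
```

A predicted box with IoU at or above a chosen threshold (commonly 0.5) against a ground-truth box counts as a true positive when computing mAP.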

Published

2026-01-30