Comparative Analysis of VGG16 vs. MobileNet Performance for Fish Identification
Abstract
This research presents a comparative evaluation of two neural network architectures for fish identification using supervised learning. VGG16 and MobileNet, both prominent deep learning architectures, were evaluated with respect to speed, accuracy, and efficiency of resource utilization. To assess the classification performance of both architectures, we employed a dataset encompassing diverse fish categories. The findings indicate that the VGG16 model achieved superior accuracy in fish classification, albeit at the cost of increased computation time and resource consumption. MobileNet, by contrast, exhibited greater speed and efficiency at a marginal cost in accuracy. These results can inform the selection of deep learning models for fish recognition scenarios, depending on whether the task prioritizes accuracy or efficiency. They also offer insights for the development of Artificial Intelligence (AI)-based applications in fisheries resource management and environmental monitoring, domains that require precise and efficient fish recognition. In the final comparison, both VGG16 and MobileNet attained an accuracy of 0.99.
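The accuracy figures quoted in the abstract (0.99 for both models) correspond to standard top-1 classification accuracy. A minimal sketch of how such a score is computed from model predictions follows; the label lists are hypothetical stand-ins for the study's actual fish-category data, not results from the paper.

```python
# Minimal sketch of the accuracy comparison described in the abstract.
# The label lists in __main__ are hypothetical examples, not the
# study's real predictions.

def accuracy(predicted, actual):
    """Fraction of samples whose predicted class matches the true class."""
    if len(predicted) != len(actual):
        raise ValueError("prediction and label lists must be the same length")
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

if __name__ == "__main__":
    true_labels     = ["tuna", "salmon", "carp", "tuna", "carp"]
    vgg16_preds     = ["tuna", "salmon", "carp", "tuna", "carp"]    # hypothetical
    mobilenet_preds = ["tuna", "salmon", "carp", "salmon", "carp"]  # hypothetical

    print(f"VGG16 accuracy:     {accuracy(vgg16_preds, true_labels):.2f}")
    print(f"MobileNet accuracy: {accuracy(mobilenet_preds, true_labels):.2f}")
```

In practice the same metric would be computed over the full test split of the fish dataset for each architecture, alongside timing and memory measurements to capture the speed/efficiency trade-off the paper reports.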
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright and Licensing Agreement
Authors who publish with this journal agree to the following terms:
1. Copyright Retention and Open Access License
- Authors retain full copyright of their work
- Authors grant the journal right of first publication under the Creative Commons Attribution 4.0 International License (CC BY 4.0)
- This license allows unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited
2. Rights Granted Under CC BY 4.0
Under this license, readers are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material for any purpose, including commercial use
- No additional restrictions — the licensor cannot revoke these freedoms as long as license terms are followed
3. Attribution Requirements
All uses must include:
- Proper citation of the original work
- Link to the Creative Commons license
- Indication if changes were made to the original work
- No suggestion that the licensor endorses the user or their use
4. Additional Distribution Rights
Authors may:
- Deposit the published version in institutional repositories
- Share through academic social networks
- Include in books, monographs, or other publications
- Post on personal or institutional websites
Requirement: All additional distributions must maintain the CC BY 4.0 license and proper attribution.
5. Self-Archiving and Pre-Print Sharing
Authors are encouraged to:
- Share pre-prints and post-prints online
- Deposit in subject-specific repositories (e.g., arXiv, bioRxiv)
- Engage in scholarly communication throughout the publication process
6. Open Access Commitment
This journal provides immediate open access to all content, supporting the global exchange of knowledge without financial, legal, or technical barriers.