A Comparison of Deep Neural Network for Hevea Clone Identification

Thiraphat Romruensukharom
Sarayut Nonsiri

Abstract

Hevea brasiliensis Muell. Arg., the rubber tree, is a highly heterozygous perennial plant that is usually grown from seed (seedlings). Such trees have the disadvantage of lacking genetic uniformity. A clone, by contrast, is propagated by bud grafting from a single tree, so its trees possess an identical genetic constitution and exhibit uniformity. The leaf shape of seedlings is highly variable, whereas the leaf shape within a clone varies only slightly, although a clone can also resemble other clones. Variation in leaf shape is therefore the critical characteristic for distinguishing them. The commonly cultivated clone RRIM 600 was chosen for the experiments, and a dataset of RRIM 600 clones and seedlings was used to train the models. The objective of this research was to compare the performance of deep neural networks for H. brasiliensis clone identification, including VGG16, ResNet50, InceptionV3, MobileNet, Xception, DenseNet201, NASNetLarge, MobileNetV2, EfficientNetB7, RegNetX064, RegNetY064, ResNetRS50 and ConvNeXtBase. Appropriate hyperparameters were found through k-fold cross validation, and the models were trained using the transfer learning technique with FEA. Various augmentation techniques were applied to improve performance. The results revealed that retraining the model on low-resolution images, using ConvNeXtBase as the feature extractor with S1, achieved the highest accuracy of 97.82% on a quarter of the dataset (E3) and outperformed the other models in classification performance across all thresholds. This research suggests the potential of developing a Hevea clone identification application as a tool to overcome the shortage of experienced Hevea clone inspectors.
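
For illustration only, the following minimal sketch (not the authors' code) shows how a feature-extraction style of transfer learning with a frozen ConvNeXtBase backbone and basic data augmentation can be set up in TensorFlow/Keras (version 2.10 or later, where ConvNeXtBase is available); the image size, augmentation settings, two-class setup, optimizer, and the train_ds/val_ds dataset names are illustrative assumptions rather than the study's exact configuration.

from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = (224, 224)   # assumed input resolution, not the study's setting
NUM_CLASSES = 2         # e.g. RRIM 600 clone vs. seedling

# Simple on-the-fly augmentation (active only during training)
augment = keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Frozen ImageNet-pretrained backbone used purely as a feature extractor
base = keras.applications.ConvNeXtBase(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = base(x, training=False)   # Keras ConvNeXt models handle their own input scaling
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # hypothetical tf.data pipelines

Any of the other backbones compared in the abstract (e.g. MobileNetV2 or EfficientNetB7) could be swapped in through keras.applications, and k-fold cross validation would wrap such a training run over several train/validation splits.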

Article Details

How to Cite
Romruensukharom, T., & Nonsiri, S. (2025). A Comparison of Deep Neural Network for Hevea Clone Identification. CURRENT APPLIED SCIENCE AND TECHNOLOGY, e0264760. https://doi.org/10.55003/cast.2025.264760
Section
Original Research Articles

References

Anjomshoae, S. T., Rahim, M. S. M., & Javanmardi, A. (2015). Hevea leaf boundary identification based on morphological transformation and edge detection. Pattern Recognition and Image Analysis, 25(2), 291-294. https://doi.org/10.1134/S1054661815020029

Anjomshoae, S. T., & Rahim, M. S. M. (2018). Feature extraction of overlapping Hevea leaves: a comparative study. Information Processing in Agriculture, 5(2), 234-245. https://doi.org/10.1016/j.inpa.2018.02.001

Arias, M., & van Dijk, P. J. (2019). What is natural rubber and why are we searching for new sources? Frontiers for Young Minds, 7(100), 1-9. https://doi.org/10.3389/frym.2019.00100

Balaga, O. N. R., & Patayon, U. B. (2024). Effectiveness of background segmentation algorithm and deep learning technique for detecting anthracnose (leaf blight) and golovinomyces cichoracearum (powdery mildew) in rubber plant. Procedia Computer Science, 234, 294-301. https://doi.org/10.1016/j.procs.2024.03.013

Bello, I., Fedus, W., Du, X., Cubuk, E. D., Srinivas, A., Lin, T.-Y., Shlens, J., & Zoph, B. (2021). Revisiting ResNets: improved training and scaling strategies. In Proceedings of the 35th conference on neural information processing systems (pp. 22614-22627). Neural Information Processing Systems Foundation.

Bengio, Y., LeCun, Y., & Hinton, G. (2021). Deep learning for AI. Communications of the ACM, 64(7), 58-65. https://doi.org/10.1145/3448250

Bianco, S., Cadene, R., Celona, L., & Napoletano, P. (2018). Benchmark analysis of representative deep neural network architectures. IEEE Access, 6, 64270-64277. https://doi.org/10.1109/ACCESS.2018.2877890

Buquet, J., Zhang, J., Roulet, P., Thibault, S., & Lalonde, J.-F. (2021). Evaluating the impact of wide-angle lens distortion on learning-based depth estimation. In Proceedings of IEEE/CVF conference on computer vision and pattern recognition workshops (pp. 3688-3696). IEEE. https://doi.org/10.1109/CVPRW53098.2021.00409

Chang, C.-Y., & Lai, C.-C. (2024). Potato leaf disease detection based on a lightweight deep learning model. Machine Learning and Knowledge Extraction, 6(4), 2321-2335. https://doi.org/10.3390/make6040114

Chollet, F. (2015). Keras applications. https://keras.io/api/applications

Chollet, F. (2017). Xception: deep learning with depthwise separable convolutions. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 1800-1807). IEEE. https://doi.org/10.1109/CVPR.2017.195

Cordonnier, J.-B., Loukas, A., & Jaggi, M. (2020). On the relationship between self-attention and convolutional layers. In Proceedings of the 8th international conference on learning representations (pp. 1-18). ICLR.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: a large-scale hierarchical image database. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 248-255). IEEE. https://doi.org/10.1109/CVPR.2009.5206848

Ding, B., Qian, H., & Zhou, J. (2018). Activation functions and their characteristics in deep neural networks. In Proceedings of Chinese control and decision conference (pp. 1836-1841). IEEE. https://doi.org/10.1109/CCDC.2018.8407425

Dodge, S., & Karam, L. (2016). Understanding how image quality affects deep neural networks. In Proceedings of the 8th international conference on quality of multimedia experience (pp. 1-6). IEEE. https://doi.org/10.1109/QoMEX.2016.7498955

Dogo, E. M., Afolabi, O. J., & Twala, B. (2022). On the relative impact of optimizers on convolutional neural networks with varying depth and width for image classification. Applied Sciences, 12(23), Article 11976. https://doi.org/10.3390/app122311976

Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2021). An image is worth 16x16 words: transformers for image recognition at scale. In Proceedings of international conference on learning representations (pp. 1-21). ICLR.

Dumoulin, V., & Visin, F. (2018). A guide to convolution arithmetic for deep learning. https://arxiv.org/pdf/1603.07285

Fonseka, D., & Chrysoulas, C. (2020). Data augmentation to improve the performance of a convolutional neural network on image classification. In Proceedings of international conference on decision aid sciences and application (pp. 515-518). IEEE. https://doi.org/10.1109/DASA51403.2020.9317249

Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193-202. https://doi.org/10.1007/BF00344251

Hassan, D. P., Fajardo, A. C., & Medina, R. P. (2022). Categorization of rubber tree seedling based in leaves using neural network. Journal of Engineering Science and Technology, 2022(special issue 4), 89-96.

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 770-778). IEEE. https://doi.org/10.1109/CVPR.2016.90

Hendrycks, D., Lee, K., & Mazeika, M. (2019). Using pre-training can improve model robustness and uncertainty. In Proceedings of the 36th international conference on machine learning (pp. 2712-2721). PMLR. https://proceedings.mlr.press/v97/hendrycks19a.html

Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., Le, Q. V., & Adam, H. (2019). Searching for MobileNetV3. In Proceedings of IEEE/CVF international conference on computer vision (pp. 1314-1324). IEEE. https://doi.org/10.1109/ICCV.2019.00140

Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. https://arxiv.org/pdf/1704.04861

Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 2261-2269). IEEE. https://doi.org/10.1109/CVPR.2017.243

Huang, Z.-K., He, C.-Q., Wang, Z.-N., Xi, Wang, H., & Hou, L.-Y. (2019). Cinnamomum camphora classification based on leaf image using transfer learning. In Proceedings of the 4th advanced information technology, electronic and automation control conference (pp. 1426-1429). IEEE. https://doi.org/10.1109/IAEAC47372.2019.8997791

Ioffe, S., & Szegedy, C. (2015). Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd international conference on machine learning (pp. 448-456). PMLR. https://proceedings.mlr.press/v37/ioffe15.html

Javid, A. M., Das, S., Skoglund, M., & Chatterjee, S. (2021). A ReLU dense layer to improve the performance of neural networks. In Proceedings of IEEE international conference on acoustics, speech and signal processing (pp. 2810-2814). IEEE. https://doi.org/10.1109/ICASSP39728.2021.9414269

Jepkoech, J., Mugo, D. M., Kenduiywo, B. K., & Too, E. C. (2021). The effect of adaptive learning rate on the accuracy of neural networks. International Journal of Advanced Computer Science and Applications, 12(8), 736-751. https://doi.org/10.14569/IJACSA.2021.0120885

Kaewboonna, N., Chanakot, B., Lertkrai, J., & Lertkrai, P. (2023). Thai rubber leaf disease classification using deep learning techniques. In Proceedings of the 6th artificial intelligence and cloud computing conference (pp. 84-91). Association for Computing Machinery. https://doi.org/10.1145/3639592.3639605

Kanda, P. S., Xia, K., & Sanusi, O. H. (2021). A deep learning-based recognition technique for plant leaf classification. IEEE Access, 9, 162590-162613. https://doi.org/10.1109/ACCESS.2021.3131726

Kelleher, J. D. (2019). Deep learning. The MIT Press.

Kingma, D. P., & Ba, J. L. (2017). ADAM: a method for stochastic optimization. https://arxiv.org/pdf/1412.6980

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Proceedings of the 26th conference on neural information processing systems (pp. 1-9). Neural Information Processing Systems Foundation.

LeCun, Y., & Bengio, Y. (1995). Convolutional networks for images, speech and time series. In M. A. Arbib (Ed.). The handbook of brain theory and neural networks (pp. 255-258). The MIT Press.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444. https://doi.org/10.1038/nature14539

Lei, C., Hu, B., Wang, D., Zhang, S., & Chen, Z. (2019). A preliminary study on data augmentation of deep learning for image classification. In Proceedings of the 11th Asia-Pacific symposium on internetware (pp. 1-6). Association for Computing Machinery. https://doi.org/10.1145/3361242.3361259

Li, G., Zhang, R., Qi, D., & Ni, H. (2024). Plant-leaf recognition based on sample standardization and transfer learning. Applied Sciences, 14(18), Article 8122. https://doi.org/10.3390/app14188122

Lin, M., Chen, Q., & Yan, S. (2014). Network in network. https://arxiv.org/pdf/1312.4400

Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., & Guo, B. (2021). Swin transformer: hierarchical vision transformer using shifted windows. In Proceedings of IEEE/CVF international conference on computer vision (pp. 9992-10002). IEEE. https://doi.org/10.1109/ICCV48922.2021.00986

Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A ConvNet for the 2020s. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 11966-11976). IEEE. https://doi.org/10.1109/CVPR52688.2022.01167

Liyanage, K. K. (2021). Clone identification. In V. H. L. Rodrigo & P. Seneviratne (Eds.), Handbook of rubber Vol. 1: agronomy (pp. 253-269). Rubber Research Institute of Sri Lanka.

Ngugi, H. N., Akinyelu, A. A., & Ezugwu, A. E. (2024). Machine learning and deep learning for crop disease diagnosis: performance analysis and review. Agronomy, 14(12), Article 3001. https://doi.org/10.3390/agronomy14123001

Nibret, E. A., Mequanenit, A. M., Ayalew, A. M., Kusrini, K., & Martínez-Béjar, R. (2025). Sesame plant disease classification using deep convolution neural networks. Applied Sciences, 15(4), Article 2124. https://doi.org/10.3390/app15042124

Office of Agricultural Economics. (2024a). Agricultural statistics of Thailand 2024. https://oae.go.th/uploads/files/2025/04/30/fd747711b82231d4.pdf

Office of Agricultural Economics. (2024b). Information of agricultural economics 2023. https://oae.go.th/uploads/files/2025/05/06/13e8089a3f69ea96.pdf

O’Shea, K., & Nash, R. (2015). An introduction to convolutional neural networks. https://arxiv.org/pdf/1511.08458

Pasaribu, S. A., Basyuni, M., Purba, E., & Hasanah, Y. (2022). Leaf characterizations of IRR 400 series, BPM 24, and RRIC 100 rubber (Hevea brasiliensis Muell. Arg.) clone using leafgram method. International Journal on Advanced Science Engineering Information Technology, 12(5), 1721-1727. https://doi.org/10.18517/ijaseit.12.5.15512

Pongsomsong, P., & Ratanaworabhan, P. (2021). Automatic rubber tree classification. In Proceedings of the 18th international conference on electrical engineering/electronics, computer, telecommunications and information technology (pp. 167-170). IEEE. https://doi.org/10.1109/ECTI-CON51831.2021.9454800

Poungchompu, S., & Chantanop, S. (2015). Factor affecting technical efficiency of smallholder rubber farming in northeast Thailand. American Journal of Agricultural and Biological Sciences, 10(2), 83-90. https://doi.org/10.3844/ajabssp.2015.83.90

Pratomo, B., Lisnawita, Nisa, T. C., & Basyuni, M. (2021). Short communication: digital identification approach to characterize Hevea brasiliensis leaves. Biodiversitas, 22(2), 1006-1013. https://doi.org/10.13057/biodiv/d220257

Radosavovic, I., Kosaraju, R. P., Girshick, R., He, K., & Dollar, P. (2020). Designing network design spaces. In Proceedings of IEEE/CVF conference on computer vision and pattern recognition (pp. 10425-10433). IEEE. https://doi.org/10.1109/CVPR42600.2020.01044

Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386-408. https://doi.org/10.1037/h0042519

Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). MobileNetV2: inverted residuals and linear bottlenecks. In Proceedings of IEEE/CVF conference on computer vision and pattern recognition (pp. 4510-4520). IEEE. https://doi.org/10.1109/CVPR.2018.00474

Saraswathyamma, C. K., Licy, J., & Marattukalem, G. J. (2000). Planting materials. In P. J. George & C. Kuruvilla Jacob (Eds.), Natural rubber: agromanagement and crop processing (pp. 59-74). Anaswara Printing and Publishing Company.

Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. https://arxiv.org/pdf/1409.1556

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1), 1929-1958. https://dl.acm.org/doi/10.5555/2627435.2670313

Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. (2017). Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the 31st AAAI conference on artificial intelligence (pp. 4278-4284). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v31i1.11231

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 1-9). IEEE. https://doi.org/10.1109/CVPR.2015.7298594

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 2818-2826). IEEE. https://doi.org/10.1109/CVPR.2016.308

Tan, M., & Le, Q. V. (2019). EfficientNet: rethinking model scaling for convolutional neural networks. In Proceedings of the 36th international conference on machine learning (pp. 6105-6114). PMLR.

Thurachon, W., & Sumethawatthanaphong, W. (2014). Classification system para rubber varieties using Naïve Bayes. In Proceedings of the 10th national conference on computing and information technology (pp. 20-25). King Mongkut’s University of Technology North Bangkok.

Tiwari, S. (2020). A comparative study of deep learning models with handcraft features and non-handcraft features for automatic plant species identification. International Journal of Agricultural and Environmental Information Systems, 11(2), 44-57. https://doi.org/10.4018/IJAEIS.2020040104

Xiao, K., Engstrom, L., Ilyas, A., & Madry, A. (2020). Noise or signal: the role of image backgrounds in object recognition. https://openreview.net/pdf?id=gl3D-xY7wLq

Xie, S., Girshick, R., Dollar, P., Tu, Z., & He, K. (2017). Aggregated residual transformations for deep neural networks. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 5987-5995). IEEE. https://doi.org/10.1109/CVPR.2017.634

Yadav, S., & Shukla, S. (2016). Analysis of k-fold cross validation over hold-out validation on colossal datasets for quality classification. In Proceedings of the IEEE 6th international conference on advanced computing (pp. 78-83). IEEE. https://doi.org/10.1109/IACC.2016.25

Yaiprasert, C. (2021). Artificial intelligence for para rubber identification combining five machine learning methods. Karbala International Journal of Modern Science, 7(3), 257-267. https://doi.org/10.33640/2405-609X.3154

Yin, X., Chen, W., Wu, X., & Yue, H. (2017). Fine-tuning and visualization of convolutional neural networks. In Proceedings of the 12th IEEE conference on industrial electronics and applications (pp. 1310-1315). IEEE. https://doi.org/10.1109/ICIEA.2017.8283041

Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Proceedings of the 27th international conference on neural information processing systems (pp. 3320-3328). Neural Information Processing Systems Foundation.

Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In Proceedings of the 13th European conference on computer vision (pp. 818-833). Springer. https://doi.org/10.1007/978-3-319-10590-1_53

Zeng, T., Li, C., Zhang, B., Wang, R., Fu, W., Wang, J., & Zhang, X. (2022). Rubber leaf disease recognition based on improved deep convolutional neural networks with a cross-scale attention mechanism. Frontiers in Plant Science, 13, Article 829479. https://doi.org/10.3389/fpls.2022.829479

Zheng, S., Song, Y., Leung, T., & Goodfellow, I. (2016). Improving the robustness of deep neural networks via stability training. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 4480-4488). IEEE. https://doi.org/10.1109/CVPR.2016.485

Zhou, Y., Song, S., & Cheung, N.-M. (2017). On classification of distorted images with deep convolutional neural networks. In Proceedings of IEEE international conference on acoustics, speech and signal processing (pp. 1213-1217). IEEE. https://doi.org/10.1109/ICASSP.2017.7952349

Zoph, B., Vasudevan, V., Shlens, J., & Le, Q. V. (2018). Learning transferable architectures for scalable image recognition. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 8697-8710). IEEE. https://doi.org/10.1109/CVPR.2018.00907