Ensemble multiple CNNs methods with partial training set for vehicle image classification


Narong Boonsirisumpun
Olarik Surinta


Convolutional neural networks (CNNs) are now the state-of-the-art method for many types of image recognition, including the challenging problem of vehicle image classification. However, relying on a single CNN model is limiting, since each architecture has its own weaknesses. This problem can be addressed with ensemble methods: combining the outputs of multiple CNNs increases the accuracy of the final prediction, but training several networks is very time-consuming. This paper introduced a new method that ensembles multiple CNNs, each trained on only a partial training set. The method combined the accuracy gains of the ensemble technique with partial training sets that shorten the training process. It reduced training time by more than 60% while maintaining a high accuracy of 96.01%, comparable to the full ensemble technique. These properties made it a strong competitor to single CNN models.
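The abstract does not spell out the combination rule or how the partial training sets are drawn. As a rough illustration of the two ideas only, the sketch below combines model outputs by soft voting (averaging class probabilities) and samples a random fraction of the training indices for each ensemble member; the function names and the voting scheme are assumptions, not the authors' exact method.

```python
import random

def soft_vote(prob_list):
    """Average class probabilities from several models and pick the argmax
    per image (soft voting, one common ensemble combination rule)."""
    n_images = len(prob_list[0])
    n_classes = len(prob_list[0][0])
    preds = []
    for i in range(n_images):
        avg = [sum(model[i][c] for model in prob_list) / len(prob_list)
               for c in range(n_classes)]
        preds.append(avg.index(max(avg)))
    return preds

def partial_subset(n_samples, fraction, seed=0):
    """Randomly pick a fraction of the training indices for one ensemble
    member, so each CNN trains on a smaller (faster) partial set."""
    rng = random.Random(seed)
    return rng.sample(range(n_samples), int(n_samples * fraction))

# Toy example: three "models" output class probabilities for four images.
p1 = [[0.9, 0.1], [0.3, 0.7], [0.7, 0.3], [0.2, 0.8]]
p2 = [[0.8, 0.2], [0.6, 0.4], [0.6, 0.4], [0.3, 0.7]]
p3 = [[0.7, 0.3], [0.5, 0.5], [0.4, 0.6], [0.1, 0.9]]
print(soft_vote([p1, p2, p3]))         # -> [0, 1, 0, 1]
print(len(partial_subset(1000, 0.4)))  # -> 400
```

Soft voting is only one option; a hard majority vote over each model's predicted labels, as in the weighted-voting literature the paper cites, would be an equally plausible combination rule.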




How to Cite
Boonsirisumpun, N., & Surinta, O. (2022). Ensemble multiple CNNs methods with partial training set for vehicle image classification. Science, Engineering and Health Studies, 16, 22020001. https://doi.org/10.14456/sehs.2022.12
Section: Physical sciences
