The motifs at the center of Sukhothai ceramics are essential elements for determining the ceramics' age. Because each kiln produced its patterns with different techniques, a given motif appears only at a particular kiln; archaeologists can therefore determine the kiln site at which a ceramic was produced by examining its motif. Motif identification, however, requires an experienced expert to trace the pattern at the center of a ceramic, so identifying such archaeological evidence is difficult even for general archaeologists. The aim of this research was to study the use of deep convolutional neural networks for classifying seven motif patterns on the center of Sukhothai ceramics (i.e. Chrysanthemum bouquet, Classic scroll, Conch shell, Fish pattern, Flower head pattern, Printed Chrysanthemum head, and Tibetan Buddhist vajra). We collected a new dataset of 557 ceramic images from two kiln sites; each ceramic's motif was labeled by Thai ceramic experts. This collection of motifs on the center of Sukhothai ceramics was named the CMC Sukhothai Ceramic Dataset. The efficiency of motif identification was evaluated by comparing five pretrained convolutional neural network models: DenseNet121, InceptionV3, VGG16, GoogLeNet, and AlexNet. The models that performed well on our dataset were then selected and trained by fine-tuning. Results showed that VGG16 combined with our classification layers achieved the best performance, reaching 86.54% accuracy on the test dataset after 500 training epochs.
Keywords: ancient ceramics identification; machine learning; deep convolutional neural network; ancient Thailand ceramics recognition; ancient ceramics analysis; image classification
*Corresponding author: Tel.: (+66) 809513938