Autonomous Mobile Robot Using a Vision System and an ESP8266 NodeMCU Board
Abstract
This paper proposes an indoor autonomous mobile robot system. A web camera mounted overhead captures the robot and its environment for mapping, and transfers the images to a computer over a USB interface. Image processing determines the current position of the mobile robot, which serves as the input to the path planner. The microcontroller combines the image-processing results with the planned path to control the direction of the mobile robot. Experimental results show that the vision system interacts correctly with the microcontroller, and the robot moves automatically from the starting point to the goal.
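The pipeline described in the abstract (locate the robot in the overhead image, plan a path to the goal, and translate that path into direction commands for the microcontroller) can be sketched as follows. This is a minimal illustration in pure Python, not the authors' implementation: the grayscale "frame" standing in for the webcam image, the bright-marker threshold, the breadth-first-search planner, and the command names are all assumptions made for the sketch.

```python
from collections import deque

def find_robot(frame, threshold=200):
    """Estimate the robot's grid position as the centroid of bright pixels.
    `frame` is a 2D list of grayscale intensities (a stand-in for the
    overhead webcam image; a bright marker on the robot is assumed)."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # marker not visible in this frame
    return (sum(xs) // len(xs), sum(ys) // len(ys))

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns the list of (x, y) cells from start to goal, or None if blocked."""
    w, h = len(grid[0]), len(grid)
    queue = deque([start])
    parent = {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 \
                    and (nx, ny) not in parent:
                parent[(nx, ny)] = cur
                queue.append((nx, ny))
    return None

def path_to_commands(path):
    """Translate consecutive cells into direction commands that the computer
    could send to the microcontroller (hypothetical command names)."""
    moves = {(1, 0): "RIGHT", (-1, 0): "LEFT", (0, 1): "DOWN", (0, -1): "UP"}
    return [moves[(bx - ax, by - ay)]
            for (ax, ay), (bx, by) in zip(path, path[1:])]
```

In practice the centroid step would be replaced by a real image-processing routine (e.g. color segmentation with OpenCV) and the commands would be transmitted to the ESP8266 over Wi-Fi or serial, but the control flow (detect, plan, command) is the same.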
Keywords: vision system; autonomous vehicle; mobile robot; microcontroller; image processing
*Corresponding author: Tel.: (086) 4259561
E-mail: napassadol.s@mail.rmutk.ac.th
Copyright Transfer Statement
The copyright of this article is transferred to Current Applied Science and Technology journal with effect if and when the article is accepted for publication. The copyright transfer covers the exclusive right to reproduce and distribute the article, including reprints, translations, photographic reproductions, electronic form (offline, online) or any other reproductions of similar nature.
The author warrants that this contribution is original and that he/she has full power to make this grant. The author signs for and accepts responsibility for releasing this material on behalf of any and all co-authors.