JCTAM OPEN ACCESS

Journal of Computer Technology and Applied Mathematics

ISSN:3007-4126 (print) | ISSN:3007-4134 (online) | Publication Frequency: Bimonthly

OPEN ACCESS | Research Article | 1 January 2025

Integrating Machine Learning for Optimal Path Planning

* Corresponding Author: Shiru Xiao, E-mail: siruxiao0991@gmail.com

Publication

Accepted: unknown; Published: 1 January 2025

Journal of Computer Technology and Applied Mathematics, 2025, 2(1), 21-26.

Abstract

In AI-based path planning, the learner is not told which actions to take, as is common in most forms of machine learning. Instead, the learner must discover through trial and error which actions yield the most reward. In the most interesting and challenging cases, actions affect not only the immediate reward but also the next state and, through it, all subsequent rewards. Trial-and-error search and delayed reward are the two most important distinguishing features of reinforcement learning (RL), which is defined not by characterizing learning methods but by characterizing a learning problem.
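The two features the abstract highlights can be seen in a minimal tabular Q-learning sketch. This is a hypothetical toy example, not the paper's method: a one-dimensional corridor in which only the final cell yields a reward, so value must propagate backward through many unrewarded steps (delayed reward), while an epsilon-greedy rule supplies the trial-and-error search. All names and hyperparameters here (`N_STATES`, `ALPHA`, `GAMMA`, `EPS`) are illustrative assumptions.

```python
import random

# Toy corridor: cells 0..4, start at 0; only reaching cell 4 gives reward 1.
N_STATES = 5
ACTIONS = [-1, +1]            # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(state, action):
    """Environment dynamics: reward arrives only at the goal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Trial and error: explore with probability EPS, else exploit.
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            # Delayed reward: the goal's reward is propagated backward
            # through the discounted bootstrap target.
            target = r if done else r + GAMMA * max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = nxt
    return q

if __name__ == "__main__":
    q = train()
    # After training, "move right" should dominate in every non-goal cell.
    policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
    print(policy)
```

Although the goal reward is several steps away from the start, the bootstrapped update spreads its discounted value backward until the greedy policy in every cell points toward the goal.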

Keywords

Machine Learning; Robotic Vision; Path Planning

Metadata

Pages: 21-26

References: 22

Disciplines: Artificial Intelligence

Subjects: Machine Learning

Cite This Article

APA Style

Xiao, S. (2025). Integrating machine learning for optimal path planning. Journal of Computer Technology and Applied Mathematics, 2(1), 21-26. https://doi.org/10.70393/6a6374616d.323534

Acknowledgments

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

FUNDING

Not applicable.

INSTITUTIONAL REVIEW BOARD STATEMENT

Not applicable.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

INFORMED CONSENT STATEMENT

Not applicable.

CONFLICT OF INTEREST

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

AUTHOR CONTRIBUTIONS

Not applicable.

References

1. Naveed, K. B., Qiao, Z., & Dolan, J. M. (2021, September). Trajectory planning for autonomous vehicles using hierarchical reinforcement learning. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC) (pp. 601-606). IEEE.

2. Kosuru, V. S. R., & Venkitaraman, A. K. (2022). Developing a deep Q-learning and neural network framework for trajectory planning. European Journal of Engineering and Technology Research, 7(6), 148-157.

3. Che, C., & Tian, J. (2024). Maximum flow and minimum cost flow theory to solve the evacuation planning. Advances in Engineering Innovation, 12, 60-64.

4. Che, C., & Tian, J. (2024). Understanding the interrelation between temperature and meteorological factors: A case study of Szeged using machine learning techniques. Journal of Computer Technology and Applied Mathematics, 1(4), 47-52.

5. Wulfmeier, M., Rao, D., Wang, D. Z., Ondruska, P., & Posner, I. (2017). Large-scale cost function learning for path planning using deep inverse reinforcement learning. The International Journal of Robotics Research, 36(10), 1073-1087.

6. Che, C., & Tian, J. (2024). Analyzing patterns in Airbnb listing prices and their classification in London through geospatial distribution analysis. Advances in Engineering Innovation, 12, 53-59.

7. Lyridis, D. V. (2021). An improved ant colony optimization algorithm for unmanned surface vehicle local path planning with multi-modality constraints. Ocean Engineering, 241, 109890.

8. Che, C., & Tian, J. (2024). Game theory: Concepts, applications, and insights from operations research. Journal of Computer Technology and Applied Mathematics, 1(4), 53-59.

9. Paxton, C., Raman, V., Hager, G. D., & Kobilarov, M. (2017, September). Combining neural networks and tree search for task and motion planning in challenging environments. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 6059-6066). IEEE.

10. Che, C., & Tian, J. (2024). Methods comparison for neural network-based structural damage recognition and classification. Advances in Operation Research and Production Management, 3, 20-26.

11. Tian, J., & Che, C. (2024). Automated machine learning: A survey of tools and techniques. Journal of Industrial Engineering and Applied Science, 2(6), 71-76.

12. Che, C., & Tian, J. (2024). Leveraging AI in traffic engineering to enhance bicycle mobility in urban areas. Journal of Industrial Engineering and Applied Science, 2(6), 10-15.

13. Zhou, X., Wu, P., Zhang, H., Guo, W., & Liu, Y. (2019). Learn to navigate: Cooperative path planning for unmanned surface vehicles using deep reinforcement learning. IEEE Access, 7, 165262-165278.

14. Cheng, X. (2024). Investigations into the evolution of generative AI. Journal of Computer Technology and Applied Mathematics, 1(4), 117-122.

15. Cheng, X., & Che, C. (2024). Optimizing urban road networks for resilience using genetic algorithms. Academic Journal of Sociology and Management, 2(6), 1-7.

16. Kim, B., & Pineau, J. (2016). Socially adaptive path planning in human environments using inverse reinforcement learning. International Journal of Social Robotics, 8, 51-66.

17. Cheng, X. (2024). Machine learning-driven fraud detection: Management, compliance, and integration. Academic Journal of Sociology and Management, 2(6), 8-13.

18. Cheng, X., & Che, C. (2024). Interpretable machine learning: Explainability in algorithm design. Journal of Industrial Engineering and Applied Science, 2(6), 65-70.

19. Ait Saadi, A., Soukane, A., Meraihi, Y., Benmessaoud Gabis, A., Mirjalili, S., & Ramdane-Cherif, A. (2022). UAV path planning using optimization approaches: A survey. Archives of Computational Methods in Engineering, 29(6), 4233-4284.

20. Cheng, X. (2024). A comprehensive study of feature selection techniques in machine learning models.

21. Low, E. S., Ong, P., & Cheah, K. C. (2019). Solving the optimal path planning of a mobile robot using improved Q-learning. Robotics and Autonomous Systems, 115, 143-161.

22. Zolfpour-Arokhlo, M., Selamat, A., Hashim, S. Z. M., & Afkhami, H. (2014). Modeling of route planning system based on Q value-based dynamic programming with multi-agent reinforcement learning algorithms. Engineering Applications of Artificial Intelligence, 29, 163-177.

PUBLISHER'S NOTE

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Copyright © 2025 The Author(s). Published by Southern United Academy of Sciences.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.