Cognitive prediction of obstacle's movement for reinforcement learning pedestrian interacting model

Thanh Trung Trinh, Masaomi Kimura

Research output: Contribution to journal › Article › peer-review


Recent studies in pedestrian simulation have been able to construct highly realistic navigation behaviour in many circumstances. However, when replicating close interactions between pedestrians, the replicated behaviour is often unnatural and lacks human likeness. One possible reason is that current models often ignore the cognitive factors in the human thinking process. Another is that many models approach the problem by optimising certain objectives, whereas in real life humans do not always make the most optimised decisions, particularly when interacting with other people. To improve navigation behaviour in such circumstances, we propose a pedestrian interacting model using reinforcement learning. Additionally, we incorporate a novel cognitive prediction model inspired by the predictive system of human cognition, which helps the pedestrian agent in our model learn to interact and to predict movement in a manner similar to humans. In our experimental results, compared with other models, the path taken by our model's agent is not the most optimised in aspects such as path length, time taken and collisions. However, our model demonstrates more natural and human-like navigation behaviour, particularly in complex interaction settings.

Original language: English
Pages (from-to): 127-147
Number of pages: 21
Journal: Journal of Intelligent Systems
Issue number: 1
Publication status: Published - 2022 Jan 1


Keywords

  • agent
  • cognitive prediction
  • navigation
  • pedestrian
  • reinforcement learning

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Artificial Intelligence


