Behavior learning based on a policy gradient method: Separation of environmental dynamics and state-values in policies

Ishihara Seiji, Igarashi Harukazu

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Policy gradient methods are useful approaches to reinforcement learning. By applying such a method to behavior learning, each decision problem at a different time step can be treated as a problem of minimizing an objective function. In this paper, we define an objective function that consists of two types of parameters, which represent state-values and environmental dynamics. In order to separate the learning of the state-values from that of the environmental dynamics, we also give a separate learning rule for each type of parameter. Furthermore, we show that the same set of state-values can be reused under different environmental dynamics.
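The separation described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual policy-gradient formulation: for simplicity, the state-values here are learned with a plain TD update and the environmental dynamics are estimated from Laplace-smoothed transition counts, while the policy scores each action by the expected next-state value under the learned dynamics model. The chain environment, the `flipped` dynamics, and the way the two parameter types are combined in the policy are all illustrative assumptions. The point it demonstrates is the one the abstract makes: the same state-values can be reused under different dynamics by re-estimating only the dynamics parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N, GOAL = 5, 4                 # chain of states 0..4, reward at state 4
ACTIONS = (-1, +1)             # move left / move right

def step(s, a, flipped=False):
    """Environment dynamics; 'flipped' reverses the effect of every action."""
    return min(max(s + (-a if flipped else a), 0), GOAL)

def softmax(x, temp=0.3):
    z = np.exp((x - x.max()) / temp)
    return z / z.sum()

def policy_probs(s, V, P):
    # The policy combines the two parameter types: it scores each action
    # by the expected value of the next state under the learned dynamics
    # model P. This particular combination is an assumption of the sketch.
    q = np.array([P[s, ai] @ V for ai in range(len(ACTIONS))])
    return softmax(q)

def learn(V, counts, flipped=False, update_v=True, episodes=300):
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            P = counts / counts.sum(axis=2, keepdims=True)
            ai = rng.choice(len(ACTIONS), p=policy_probs(s, V, P))
            s2 = step(s, ACTIONS[ai], flipped)
            counts[s, ai, s2] += 1            # dynamics learning rule
            if update_v:                      # state-value learning rule (TD)
                r = 1.0 if s2 == GOAL else 0.0
                V[s] += 0.1 * (r + 0.9 * V[s2] - V[s])
            s = s2
    return V, counts / counts.sum(axis=2, keepdims=True)

# Learn both parameter types in the original environment.
V = np.zeros(N)
counts = np.ones((N, len(ACTIONS), N))        # Laplace-smoothed counts
V, P = learn(V, counts, flipped=False)

# Reuse the same state-values under reversed dynamics: freeze V and
# re-estimate only the dynamics model.
counts2 = np.ones((N, len(ACTIONS), N))
_, P2 = learn(V, counts2, flipped=True, update_v=False)

print(policy_probs(0, V, P))    # policy prefers the action that moves right
print(policy_probs(0, V, P2))   # under flipped dynamics, prefers the other action
```

With the original dynamics the policy at state 0 favors action `+1`; after the dynamics are flipped and only the transition model is re-estimated, the unchanged state-values yield a policy that favors `-1`, which now moves toward the goal.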

Original language: English
Pages (from-to): 1737-1746+15
Journal: IEEJ Transactions on Electronics, Information and Systems
Volume: 129
Issue number: 9
DOIs
Publication status: Published - 2009 Jan 1

Keywords

  • Policy gradient method
  • Pursuit problem
  • Reinforcement learning
  • State transition probabilities

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
