Reinforcement Learning Reduced H∞ Output Tracking Control of Nonlinear Two-Time-Scale Industrial Systems

Xiaomin Liu, Gonghe Li, Linna Zhou, Chunyu Yang, Xinkai Chen

Research output: Article › peer-review

5 Citations (Scopus)

Abstract

In this article, an H∞ output tracking control method based on reinforcement learning (RL) and reduced-order control techniques is presented for nonlinear two-time-scale industrial systems with external disturbances and unknown dynamics. First, the original H∞ output tracking problem is transformed into a reduced-order problem for an augmented error system. Based on the zero-sum game formulation, the Nash equilibrium solution is given and the tracking Hamilton-Jacobi-Isaacs (HJI) equation is established. Then, to handle the unmeasurable states of the virtual reduced-order system, full-order system state data are collected to reconstruct the reduced-order system states, and a model-free RL algorithm is proposed to solve the tracking HJI equation. Next, the algorithm is implemented under the actor-critic-disturbance framework. It is proved that the control policy obtained from the reconstructed state data makes the augmented error system asymptotically stable and satisfies the L2-gain condition. Finally, the effectiveness of the proposed method is illustrated by a permanent-magnet synchronous motor experiment.
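To make the zero-sum-game formulation behind the abstract concrete: in the linear special case, the tracking HJI equation reduces to a game algebraic Riccati equation, and the actor-critic-disturbance iteration has a well-known model-based counterpart (Kleinman-type policy iteration for two players). The sketch below illustrates that simplified, model-based variant only; it is not the paper's model-free algorithm, and all matrices are hypothetical examples chosen for the demonstration.

```python
# Model-based policy iteration for the linear-quadratic zero-sum game
# underlying H-infinity control. Control player u = -K x, disturbance
# player w = L x; the critic step solves a Lyapunov equation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical plant: dx/dt = A x + B u + D w  (A is Hurwitz here)
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [0.5]])
Q = np.eye(2)          # state penalty
R = np.array([[1.0]])  # control penalty
gamma = 5.0            # prescribed L2-gain (attenuation) level

K = np.zeros((1, 2))   # initial stabilizing control gain (A is stable)
L = np.zeros((1, 2))   # initial disturbance-policy gain

for _ in range(50):
    Ac = A - B @ K + D @ L  # closed loop under both players
    # Critic (policy evaluation):
    #   Ac' P + P Ac + Q + K' R K - gamma^2 L' L = 0
    rhs = -(Q + K.T @ R @ K - gamma**2 * L.T @ L)
    P = solve_continuous_lyapunov(Ac.T, rhs)
    # Actor and disturbance-policy improvement
    K = np.linalg.solve(R, B.T @ P)
    L = (D.T @ P) / gamma**2

# At convergence P solves the game algebraic Riccati equation:
#   A'P + PA + Q - P B R^{-1} B' P + gamma^{-2} P D D' P = 0
resid = (A.T @ P + P @ A + Q
         - P @ B @ np.linalg.solve(R, B.T @ P)
         + P @ D @ D.T @ P / gamma**2)
print(np.max(np.abs(resid)))
```

The paper's contribution is to reach the same kind of solution without the model: the Lyapunov/Riccati step is replaced by RL updates driven by reconstructed reduced-order state data, with the disturbance treated as the second player in the game.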

Original language: English
Pages (from-to): 2465-2476
Number of pages: 12
Journal: IEEE Transactions on Industrial Informatics
Volume: 20
Issue number: 2
DOI
Publication status: Published - 1 Feb 2024

ASJC Scopus subject areas

  • Information Systems
  • Electrical and Electronic Engineering
  • Control and Systems Engineering
  • Computer Science Applications
