Reinforcement Learning Reduced H∞ Output Tracking Control of Nonlinear Two-Time-Scale Industrial Systems

Xiaomin Liu, Gonghe Li, Linna Zhou, Chunyu Yang, Xinkai Chen

Research output: Contribution to journal › Article › peer-review


In this article, an H∞ output tracking control method based on reinforcement learning (RL) and reduced control techniques is presented for nonlinear two-time-scale industrial systems with external disturbances and unknown dynamics. First, the original H∞ output tracking problem is transformed into a reduced problem for an augmented error system. Based on the zero-sum game formulation, the Nash equilibrium solution is given and the tracking Hamilton-Jacobi-Isaacs (HJI) equation is established. Then, to handle the unmeasurable states of the virtual reduced system, full-order system state data are collected to reconstruct the reduced system states, and a model-free RL algorithm is proposed to solve the tracking HJI equation. Next, the algorithm implementation is given under the actor-critic-disturbance framework. It is proved that the control policy obtained from the reconstructed state data renders the augmented error system asymptotically stable and satisfies the L2-gain condition. Finally, the effectiveness of the proposed method is illustrated by a permanent-magnet synchronous motor experiment.
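The zero-sum game behind the tracking HJI equation can be illustrated on a toy problem. The sketch below is NOT the paper's algorithm: it uses an assumed scalar linear system dx/dt = a·x + b·u + d·w with known coefficients, where the HJI equation collapses to a game algebraic Riccati equation, and runs a simple policy iteration in which the critic parameter `p` (value V(x) = p·x²), the control policy, and the worst-case disturbance policy are updated in turn, mirroring the actor-critic-disturbance structure.

```python
# Toy scalar H-infinity zero-sum game (assumed example, not the paper's system):
#   dx/dt = a*x + b*u + d*w,  cost integrand q*x^2 + r*u^2 - gamma2*w^2,
# where gamma2 is the squared L2-gain bound. All coefficients are illustrative.
a, b, d = -1.0, 1.0, 0.5
q, r, gamma2 = 1.0, 1.0, 4.0

p = 0.0                          # critic parameter: V(x) = p * x^2
for _ in range(50):
    ku = b * p / r               # actor (control) gain:   u = -ku * x
    kw = d * p / gamma2          # disturbance gain:       w =  kw * x
    ac = a - b * ku + d * kw     # closed-loop drift under both policies
    # Policy evaluation: solve 2*ac*p + q + r*ku**2 - gamma2*kw**2 = 0 for p
    p = -(q + r * ku**2 - gamma2 * kw**2) / (2 * ac)

# p should now satisfy the game algebraic Riccati equation:
#   2*a*p - (b^2/r - d^2/gamma2) * p^2 + q = 0
residual = 2 * a * p - (b**2 / r - d**2 / gamma2) * p**2 + q
```

This alternation is the scalar analogue of policy iteration for the zero-sum game; the paper's contribution is doing the corresponding updates model-free, from reconstructed reduced-system state data, for the nonlinear two-time-scale case.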

Original language: English
Pages (from-to): 2465-2476
Number of pages: 12
Journal: IEEE Transactions on Industrial Informatics
Issue number: 2
Publication status: Published - 2024 Feb 1


Keywords

  • H∞ output tracking control
  • reduced control
  • reinforcement learning (RL)
  • state reconstruction
  • two-time-scale (TTS) industrial systems

ASJC Scopus subject areas

  • Information Systems
  • Electrical and Electronic Engineering
  • Control and Systems Engineering
  • Computer Science Applications


