TY - JOUR
T1 - Reinforcement Learning Reduced H∞ Output Tracking Control of Nonlinear Two-Time-Scale Industrial Systems
AU - Liu, Xiaomin
AU - Li, Gonghe
AU - Zhou, Linna
AU - Yang, Chunyu
AU - Chen, Xinkai
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2024/2/1
Y1 - 2024/2/1
N2 - In this article, based upon reinforcement learning (RL) and reduced control techniques, an H∞ output tracking control method is presented for nonlinear two-time-scale industrial systems with external disturbances and unknown dynamics. First, the original H∞ output tracking problem is transformed into a reduced problem of the augmented error system. Based on the zero-sum game idea, the Nash equilibrium solution is given and the tracking Hamilton-Jacobi-Isaacs (HJI) equation is established. Then, to handle the issue of unmeasurable states of the virtual reduced system, full-order system state data are collected to reconstruct the reduced system states, and a model-free RL algorithm is proposed to solve the tracking HJI equation. Next, the algorithm implementation is given under the actor-critic-disturbance framework. It is proved that the control policy obtained from the reconstructed state data makes the augmented error system asymptotically stable and satisfies the L2-gain condition. Finally, the effectiveness of the proposed method is illustrated by a permanent-magnet synchronous motor experiment.
AB - In this article, based upon reinforcement learning (RL) and reduced control techniques, an H∞ output tracking control method is presented for nonlinear two-time-scale industrial systems with external disturbances and unknown dynamics. First, the original H∞ output tracking problem is transformed into a reduced problem of the augmented error system. Based on the zero-sum game idea, the Nash equilibrium solution is given and the tracking Hamilton-Jacobi-Isaacs (HJI) equation is established. Then, to handle the issue of unmeasurable states of the virtual reduced system, full-order system state data are collected to reconstruct the reduced system states, and a model-free RL algorithm is proposed to solve the tracking HJI equation. Next, the algorithm implementation is given under the actor-critic-disturbance framework. It is proved that the control policy obtained from the reconstructed state data makes the augmented error system asymptotically stable and satisfies the L2-gain condition. Finally, the effectiveness of the proposed method is illustrated by a permanent-magnet synchronous motor experiment.
KW - H∞ output tracking control
KW - reduced control
KW - reinforcement learning (RL)
KW - state reconstruction
KW - two-time-scale (TTS) industrial systems
UR - http://www.scopus.com/inward/record.url?scp=85165292177&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85165292177&partnerID=8YFLogxK
U2 - 10.1109/TII.2023.3292970
DO - 10.1109/TII.2023.3292970
M3 - Article
AN - SCOPUS:85165292177
SN - 1551-3203
VL - 20
SP - 2465
EP - 2476
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
IS - 2
ER -