Abstract
In this article, an H∞ output tracking control method based on reinforcement learning (RL) and reduced control techniques is presented for nonlinear two-time-scale industrial systems with external disturbances and unknown dynamics. First, the original H∞ output tracking problem is transformed into a reduced problem for the augmented error system. Based on the zero-sum game formulation, the Nash equilibrium solution is given and the tracking Hamilton-Jacobi-Isaacs (HJI) equation is established. Then, to handle the unmeasurable states of the virtual reduced system, full-order system state data are collected to reconstruct the reduced system states, and a model-free RL algorithm is proposed to solve the tracking HJI equation. Next, the algorithm is implemented under the actor-critic-disturbance framework. It is proved that the control policy obtained from the reconstructed state data renders the augmented error system asymptotically stable and satisfies the L2-gain condition. Finally, the effectiveness of the proposed method is illustrated by a permanent-magnet synchronous motor experiment.
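The sketch below is not the article's model-free, data-driven algorithm; it is a minimal model-based illustration of the zero-sum game policy iteration that underlies the tracking HJI equation, restricted to the linear-quadratic special case where the HJI equation reduces to a game algebraic Riccati equation. All matrices (A, B, D, Q, R) and the attenuation level `gamma` are hypothetical placeholders chosen only so the example runs; the paper instead reconstructs the reduced-system states from full-order data and learns the solution without a model.

```python
# Illustrative sketch (assumed linear-quadratic case, not the paper's model-free method):
# alternate policy evaluation and policy improvement for the control player u = -K e
# and the worst-case disturbance player w = L e of the zero-sum game.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical augmented-error dynamics  e_dot = A e + B u + D w
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
D = np.array([[0.0],
              [0.5]])
Q = np.eye(2)           # tracking-error weight
R = np.array([[1.0]])   # control weight
gamma = 5.0             # prescribed L2-gain (disturbance attenuation) level

K = np.zeros((1, 2))    # initial stabilizing control gain (A is Hurwitz, so K = 0 works)
L = np.zeros((1, 2))    # initial disturbance-policy gain

for i in range(50):
    # Policy evaluation: solve the Lyapunov equation
    #   (A - B K + D L)^T P + P (A - B K + D L) + Q + K^T R K - gamma^2 L^T L = 0
    Ac = A - B @ K + D @ L
    M = Q + K.T @ R @ K - gamma**2 * (L.T @ L)
    P = solve_continuous_lyapunov(Ac.T, -M)

    # Policy improvement for both players (control and worst-case disturbance)
    K_new = np.linalg.solve(R, B.T @ P)
    L_new = (D.T @ P) / gamma**2

    converged = np.linalg.norm(K_new - K) + np.linalg.norm(L_new - L) < 1e-9
    K, L = K_new, L_new
    if converged:
        break

print("P =\n", P)
print("control gain K =", K)
print("worst-case disturbance gain L =", L)
```

For the iteration to remain well posed, `gamma` must exceed the minimal achievable attenuation level so that each Lyapunov solution stays positive definite; in the article's setting the Lyapunov/HJI solve is replaced by RL updates driven by data collected from the full-order system.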
| Original language | English |
|---|---|
| Pages (from-to) | 2465-2476 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Industrial Informatics |
| Volume | 20 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2024 Feb 1 |
Keywords
- H∞ output tracking control
- reduced control
- reinforcement learning (RL)
- state reconstruction
- two-time-scale (TTS) industrial systems
ASJC Scopus subject areas
- Information Systems
- Electrical and Electronic Engineering
- Control and Systems Engineering
- Computer Science Applications