Model Free Reinforcement Learning based Control of Permanent Magnet Synchronous Motor Drive
Vikas, Pankaj Yadav, Bharat Singh, Rajesh Kumar
Abstract
Permanent Magnet Synchronous Motor (PMSM) drives play a vital role in many applications; however, controlling a PMSM is a complex task because multiple nonlinear motor parameters directly affect its speed and current control loops. Traditional control algorithms such as vector control are severely degraded by these parameter variations. This work presents an improved control topology based on hybrid deep reinforcement learning that is more robust to changes in motor parameters and loading conditions. The presented algorithm does not require an explicit plant model for tuning its parameters. Two control topologies, based on the deep deterministic policy gradient (DDPG) algorithm and the deep Q network (DQN), are proposed for controlling the PMSM. Additionally, an objective function based on the weighted sum of the d-axis and q-axis current tracking errors is proposed for learning the control topology parameters. Numerous experimental investigations of the proposed current control of the drive have been carried out to demonstrate its effectiveness. The results show that DDPG is more reliable and achieves higher d-axis and q-axis current tracking accuracy than the deep Q-learning algorithm.
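The objective based on the weighted sum of d-axis and q-axis current tracking errors can be sketched as a per-step reward for the RL agent. This is a minimal illustrative sketch, not the paper's actual formulation: the weights `w_d` and `w_q`, the use of absolute errors, and the negation (so maximizing reward minimizes tracking error) are all assumptions.

```python
def current_tracking_reward(i_d, i_q, i_d_ref, i_q_ref, w_d=0.5, w_q=0.5):
    """Negative weighted sum of d-axis and q-axis current tracking errors.

    Hypothetical reward for one control step: the agent (e.g. a DDPG actor
    producing voltage commands) maximizes this, which drives both current
    errors toward zero. Weights w_d, w_q trade off the two axes.
    """
    e_d = abs(i_d_ref - i_d)  # d-axis current tracking error
    e_q = abs(i_q_ref - i_q)  # q-axis current tracking error
    return -(w_d * e_d + w_q * e_q)

# Perfect tracking gives the maximum reward (zero); any error is penalized.
r_perfect = current_tracking_reward(i_d=0.0, i_q=5.0, i_d_ref=0.0, i_q_ref=5.0)
r_offset = current_tracking_reward(i_d=0.2, i_q=4.5, i_d_ref=0.0, i_q_ref=5.0)
```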
