Lecture Announcement: Professor Frank L. Lewis
Title: Data-driven Control and Optimization for Industrial Processes: Reinforcement Learning and Supervisory Control
Speaker: Dr. Frank L. Lewis
Tenured Professor, The University of Texas at Arlington
Fellow of IEEE, Fellow of IFAC, Fellow of the U.K. Institute of Measurement & Control,
Registered Professional Engineer in the State of Texas, Chartered Engineer, U.K.
Time: September 4, 2013, 14:30-15:30
Venue: Conference Room 3211, School of Electrical Engineering
Speaker Biography:
Dr. Lewis obtained his Bachelor's and Master's degrees in Electrical Engineering at Rice University in 1971. He then spent six years in the U.S. Navy, serving as Executive Officer and Acting Commanding Officer aboard USS Salinan (ATF-161). In 1977 he received the Master of Science in Aeronautical Engineering from the University of West Florida. In 1981 he obtained the Ph.D. degree at The Georgia Institute of Technology, where he was employed as a professor from 1981 to 1990. He is a Professor of Electrical Engineering at The University of Texas at Arlington, where he was awarded the Moncrief-O'Donnell Endowed Chair in 1990 at the UTA Research Institute.
He is a Fellow of the IEEE, Fellow of IFAC, Fellow of the U.K. Institute of Measurement & Control, Member of the New York Academy of Sciences, Registered Professional Engineer in the State of Texas, and Chartered Engineer, U.K. Engineering Council. He has received various awards, including the Fulbright Research Award, NSF Research Initiation Grant, ASEE Terman Award, International Neural Network Society Gabor Award (2009), U.K. Institute of Measurement & Control Honeywell Field Engineering Medal (2009), and IEEE Computational Intelligence Society Neural Networks Pioneer Award (2012).
He is Editor of the Taylor & Francis Book Series on Automation & Control Engineering and served as Editor for the flagship journal Automatica. He has served or serves on many editorial boards, including IEEE Transactions on Neural Networks, International Journal of Control, Neural Computing and Applications, Optimal Control Applications & Methods, and Int. J. Intelligent Control Systems. He is the author of 6 U.S. patents, 273 journal papers, 52 chapters and encyclopedia articles, 375 refereed conference papers, and 15 books, including Optimal Control, Optimal Estimation, Applied Optimal Control and Estimation, Aircraft Control and Simulation, Control of Robot Manipulators, and Neural Network Control.
His current research interests include intelligent control, distributed cooperative control on graphs, nonlinear systems, reinforcement learning, manufacturing process control, condition-based maintenance, and neurobiological systems.
Abstract:
Modern industrial processes are complex, and new imperatives in sustainable manufacturing and energy-efficient systems require improved decision and control methods. More emphasis is being placed on the optimal design of automatic decision and control systems, with criteria such as minimum fuel, minimum energy, minimum time, and minimum pollutant concentration. Operational control loops are responsible for stable plant operation, and they must ensure tracking of set points provided by higher-level supervisory loops that incorporate optimization-based design criteria.
Optimal feedback control design has been responsible for much of the successful performance of engineered systems in aerospace, manufacturing, industrial processes, vehicles, ships, robotics, and elsewhere since the 1960s. Optimal feedback controllers are designed offline by solving optimal design equations such as the algebraic Riccati equation. It is difficult to perform optimal design for nonlinear process systems, since it relies on solving complicated Hamilton-Jacobi-Bellman equations. Moreover, optimal design generally requires that the full system dynamics be known, which is seldom the case in manufacturing systems: identifying a dynamic model is complicated, expensive, and often inaccurate, and practical manufacturing systems may have no tractable closed-form model.
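As a concrete illustration of the offline design route mentioned above, the sketch below computes a linear-quadratic (LQR) state-feedback gain by solving the algebraic Riccati equation. The double-integrator plant, the weighting matrices, and the use of SciPy's solver are illustrative assumptions, not part of the talk:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant (assumed for illustration):
#   x_dot = A x + B u,  cost J = integral of (x'Qx + u'Ru) dt
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Offline design: solve the algebraic Riccati equation
#   A'P + PA - P B R^{-1} B' P + Q = 0,  then  K = R^{-1} B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - BK is Hurwitz (all eigenvalues in the
# open left half-plane), so u = -Kx stabilizes the plant optimally.
eigs = np.linalg.eigvals(A - B @ K)
print(K, eigs.real)
```

Note that this entire design happens before the controller ever runs, and it requires exact knowledge of A and B, which is precisely the limitation the data-driven methods below address.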
Nevertheless, the availability of large amounts of measured data in today’s industry has the potential to allow good process controller design with optimization of performance if data are properly and efficiently used.
This talk will present methods for online controller design and optimization using data measured from industrial processes. We will present several methods for efficient online tuning of process controllers based on real-time data measurements for unmodeled or partially modeled processes. Techniques from Reinforcement Learning are used to design a novel class of adaptive control algorithms that converge to optimal control solutions by learning online in real time. These are based on actor-critic Reinforcement Learning mechanisms that occur in the human brain and in sociological ecosystems. Reinforcement Learning provides methods for learning optimal, energy-efficient control solutions online, using data measured along the process trajectories, for unmodeled systems with unknown dynamics.
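To make the data-driven idea concrete, here is a minimal sketch of Q-learning-style policy iteration for a linear-quadratic problem, in the spirit of the actor-critic methods described above (though not the speaker's specific algorithms). The critic fits a quadratic Q-function from measured transitions (x, u, x') alone; the plant matrices are used only to generate the data and are never seen by the learner. The plant model, initial policy, and basis parametrization are all illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Hypothetical discrete-time plant, used ONLY as a data generator;
# the learner never accesses A or B directly.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
n, m = 2, 1

def phi(z):
    """Quadratic basis for a symmetric 3x3 Q-function matrix H."""
    z1, z2, z3 = z
    return np.array([z1*z1, 2*z1*z2, 2*z1*z3, z2*z2, 2*z2*z3, z3*z3])

K = np.array([[1.0, 1.0]])  # assumed initial stabilizing policy
for _ in range(10):
    # Critic (policy evaluation): fit Q_K(x,u) = z'Hz, z = [x; u],
    # from the Bellman equation using measured transitions only.
    Phi, cost = [], []
    for _ in range(200):
        x = rng.standard_normal(n)
        u = rng.standard_normal(m)     # exploratory input
        x_next = A @ x + B @ u         # "measurement" from the plant
        u_next = -K @ x_next           # on-policy action at x_next
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, u_next])
        Phi.append(phi(z) - phi(z_next))
        cost.append(x @ Q @ x + u @ R @ u)
    w, *_ = np.linalg.lstsq(np.array(Phi), np.array(cost), rcond=None)
    H = np.array([[w[0], w[1], w[2]],
                  [w[1], w[3], w[4]],
                  [w[2], w[4], w[5]]])
    # Actor (policy improvement): minimize z'Hz over u.
    Hux, Huu = H[n:, :n], H[n:, n:]
    K = np.linalg.solve(Huu, Hux)

# Compare with the offline Riccati solution (for verification only).
P = solve_discrete_are(A, B, Q, R)
K_opt = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(np.max(np.abs(K - K_opt)))  # gap to the model-based optimum
```

The design choice worth noting is that the critic's least-squares fit replaces the Riccati equation: each iteration improves the policy using trajectory data alone, which is what allows optimal tuning for processes with unknown dynamics.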