Neural networks have had a huge impact on how engineers design controllers for robots, enabling more adaptive and efficient machines. Yet these brain-like machine learning systems are a double-edged sword: the same complexity that makes them powerful also makes it difficult to guarantee that a robot driven by a neural network will complete its task safely.
The traditional way to verify safety and stability is with techniques called Lyapunov functions. If you can find a Lyapunov function whose value consistently decreases along the system's trajectories, then you know that the dangerous or unstable situations associated with higher values will never be reached. For robots controlled by neural networks, however, previous approaches to verifying Lyapunov conditions did not scale well to complex machines.
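In the standard textbook formulation (not specific to this paper), a Lyapunov function $V$ for a closed-loop system $\dot{x} = f(x)$ with equilibrium $x^*$ must vanish at the equilibrium, be positive everywhere else, and decrease along every trajectory:

$$
V(x^*) = 0, \qquad V(x) > 0 \;\; \text{for } x \neq x^*, \qquad \dot{V}(x) = \nabla V(x)^{\top} f(x) < 0 \;\; \text{for } x \neq x^*.
$$

Because $V$ can only decrease over time, every sublevel set $\{x : V(x) \le c\}$ is invariant: a trajectory that starts at a low value of $V$ can never climb into the higher-value region where the unsafe or unstable states live.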
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and other labs have developed new techniques for rigorously certifying Lyapunov calculations in more sophisticated systems. Their algorithm efficiently searches for and verifies a Lyapunov function, providing a stability guarantee for the system. This approach could potentially enable safer deployment of robots and autonomous vehicles, including aircraft and spacecraft.
To outperform previous algorithms, the researchers found a cost-effective shortcut in the training and verification process. They generated cheaper counterexamples (for example, adversarial sensor readings that could have destabilized the controller) and then optimized the robotic system to account for them. Understanding these edge cases helped the machines learn to handle challenging circumstances, enabling them to operate safely in a wider range of conditions than before. The team then developed a new verification formulation that enables the use of a scalable neural network verifier, α,β-CROWN, to provide rigorous worst-case guarantees beyond the counterexamples.
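As a rough illustration of the counterexample-guided idea, the sketch below trains a small candidate Lyapunov network on a toy discrete-time system by repeatedly sampling states, keeping those where the decrease condition is violated most badly, and penalizing the violations. The dynamics, network, and sampling scheme are hypothetical stand-ins, not the authors' code, and only the decrease condition is shown for brevity.

```python
# Hypothetical sketch: counterexample-guided training of a neural Lyapunov
# candidate V for a known closed-loop system x_{t+1} = f(x_t).
import torch

def f(x):
    # Toy stable linear dynamics standing in for a neural-network-controlled robot.
    A = torch.tensor([[0.9, 0.2], [0.0, 0.8]])
    return x @ A.T

V = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                        torch.nn.Linear(16, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
lo, hi = -1.0, 1.0  # region of interest

def violations(x):
    # Lyapunov decrease condition: V(f(x)) - V(x) should be negative (with margin).
    return torch.relu(V(f(x)) - V(x) + 1e-3).squeeze(-1)

dataset = (hi - lo) * torch.rand(1024, 2) + lo
for step in range(2000):
    # Cheap counterexample search: sample many states and keep the worst
    # violators (a stand-in for the paper's inexpensive counterexample generation).
    with torch.no_grad():
        cand = (hi - lo) * torch.rand(4096, 2) + lo
        worst = cand[violations(cand).topk(128).indices]
    dataset = torch.cat([dataset, worst])[-4096:]

    loss = violations(dataset).mean()  # penalize decrease-condition violations
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sampling can only expose the violations it happens to find, which is why the final step of such a pipeline hands the trained networks to a complete verifier such as α,β-CROWN to certify the Lyapunov conditions over the entire region of interest, not just at sampled counterexamples.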
“We have seen impressive empirical performance in AI-controlled machines like humanoids and robotic dogs, but these AI controllers lack the formal guarantees that are crucial for safety-critical systems,” says Lujie Yang, an MIT doctoral student in electrical engineering and computer science (EECS) and affiliated with CSAIL, who is co-lead author of a new paper on the project alongside Toyota Research Institute researcher Hongkai Dai SM ’12, PhD ’16. “Our work bridges the gap between this level of performance of neural network controllers and the safety guarantees needed to deploy more complex neural network controllers in the real world,” Yang notes.
For a numerical demonstration, the team simulated how a quadcopter drone equipped with lidar sensors would stabilize in a two-dimensional environment. Their algorithm successfully guided the drone to a stable hover position using only the limited environmental information provided by the lidar sensors. In two other experiments, their approach enabled the stable operation of two simulated robotic systems across a wider range of conditions: an inverted pendulum and a trajectory-following vehicle. Though modest, these experiments are more complex than anything the neural network verification community had handled before, particularly because they include sensor models.
“Unlike common machine learning problems, rigorously using neural networks as Lyapunov functions requires solving difficult global optimization problems, and scalability is therefore the main bottleneck,” says Sicun Gao, associate professor of computer science and engineering at the University of California, San Diego, who was not involved in this work. “The current work makes an important contribution by developing algorithmic approaches that are much better suited to the particular use of neural networks as Lyapunov functions in control problems, and significantly improves the scalability and quality of solutions compared to existing approaches. This work opens up exciting opportunities for the further development of optimization algorithms for neural Lyapunov methods and the rigorous use of deep learning in control and robotics in general.”
Yang and colleagues' approach to stability has a wide range of potential applications where guaranteeing safety is crucial. It could help ensure a smoother ride for autonomous vehicles such as aircraft and spacecraft. Likewise, a drone delivering items or mapping different terrains could benefit from such safety guarantees.
The techniques developed here are quite general rather than specific to robotics; the same approach could eventually help other applications, such as biomedicine and industrial processing.
While this technique improves on previous work in terms of scalability, the researchers are exploring how it can perform better on still larger systems. They would also like to account for data beyond lidar readings, such as images and point clouds.
The team also wants to provide the same stability guarantees for systems operating in uncertain and disruptive environments. For example, if a drone encounters a strong gust of wind, Yang and colleagues want to ensure that it will still fly stably and carry out its intended task.
They also plan to apply their method to optimization problems, where the goal would be to minimize the time and distance a robot needs to complete a task while remaining stable, and to extend the technique to humanoids and other real-world machines that must stay stable while making contact with their environment.
Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering at MIT, vice president for robotics research at TRI, and a CSAIL fellow, is a senior author of the research. The paper's other co-authors are Zhouxing Shi, a doctoral student at the University of California, Los Angeles; Cho-Jui Hsieh, an associate professor at UCLA; and Huan Zhang, an assistant professor at the University of Illinois Urbana-Champaign. The work was supported, in part, by Amazon, the National Science Foundation, the Office of Naval Research, and Schmidt Sciences' AI2050 program. The paper will be presented at the 2024 International Conference on Machine Learning.