Friday 20 April 2018, Room Amfitrion I
Hotel Makedonia Palace, Thessaloniki, Greece
Dr. Simon Burton (Chief Expert Safety and Reliability, BOSCH)
Making the Case for Safety of Machine Learning applied to Automated Driving
Abstract: Machine learning technologies such as neural networks show great potential for enabling automated driving functions in an open-world context. However, these technologies can only be released for series production if they can be demonstrated to be sufficiently safe. As a result, convincing arguments need to be made for the safety of automated driving systems based on such technologies. This talk examines the various forms in which machine learning can be applied to automated driving and the resulting functional safety challenges. A systems engineering approach is proposed to derive a precise definition of the performance requirements on the function to be implemented, which then serves as the basis of the safety case. A systematic approach to structuring the safety case is introduced, and a number of open research questions are presented.
Dr. Mauricio Castillo-Effen (Senior Researcher, Emerging Technologies & Advanced Technology Laboratories, Lockheed Martin)
Challenges in applying ML-enabled systems in avionics
Abstract: Machine Learning (ML) holds the promise of revolutionizing the way we design and develop machines by allowing them to learn from data instead of relying on laborious and error-prone coding by experts. As air transport, in civilian as well as military domains, moves towards introducing more autonomy, ML techniques seem to fit these needs like a glove. As illustrated by a few well-known present-day applications, they could allow autonomous aircraft to deal with more complex situations with less human intervention. However, current avionics development processes and methodologies, aimed at design safety and certification, are incompatible with the techniques, frameworks, and tools currently available for ML. This is understandable considering that most of them have been developed for applications that are neither mission- nor safety-critical. Similarly, all processes conducive to operational safety are challenged by the notion of systems that are updated and learn in the field. The talk provides details on specific challenges as well as on promising practical directions that could help bridge the worlds of safety-critical avionics and machine learning. Finally, the talk offers remarks aimed at fostering collaboration between practitioners of both communities towards solving these vexing challenges.
Dr. Alhussein Fawzi (Scientist, DeepMind)
Fooling deep image classifiers
Abstract: Image classification systems have recently witnessed huge accuracy gains on several complex benchmarks. Besides being accurate on such data sets, an important requirement for deploying classifiers in real-world environments is their robustness to external (possibly adversarial) perturbations. In this talk, I will first highlight the vulnerability of state-of-the-art classifiers to simple perturbation regimes, such as adversarial and universal perturbations. Second, I will show fundamental limits on the robustness of classifiers to perturbations, and provide evidence for the difficulty of designing robust classifiers in high-dimensional settings. I will conclude with key open problems in this emerging field.
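To make the adversarial perturbation regime concrete, the sketch below implements the fast gradient sign method (FGSM), one of the simplest ways of fooling a classifier. The talk does not prescribe this particular attack; the toy PyTorch model, stand-in input, and epsilon budget are illustrative assumptions.

    # Fast gradient sign method (FGSM): perturb the input in the direction
    # that increases the classification loss, within an L-infinity budget.
    # The toy model, input, label, and epsilon are illustrative assumptions.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in input image
    y = torch.tensor([3])                              # stand-in true label

    loss_fn(model(x), y).backward()                    # gradient of loss w.r.t. x

    epsilon = 0.03                                     # perturbation budget
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # adversarial example

Although the perturbation is imperceptibly small for a well-chosen epsilon, attacks of this form routinely change the predicted class of state-of-the-art networks, which is precisely the vulnerability the talk examines.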
Dr. Xiaowei Huang (Lecturer, Univ. Liverpool)
Verification and testing of deep neural networks
Abstract: Deep neural networks (DNNs) have achieved significant breakthroughs in the past few years and are now being deployed in many applications. However, in safety-critical domains, where human lives are at stake, and in security-critical applications, where failures often carry significant financial implications, concerns have been raised about the reliability of this technique. In this talk, I will discuss a few techniques we have developed for the verification and testing of DNNs. The property we consider is a robustness property of DNNs, defined as the invariance of the classification decision over a small neighbourhood around a given input, subject to adversarial perturbations. For verification, we have developed a layer-by-layer refinement approach and a global optimisation approach, both of which come with provable guarantees. For testing, we have developed a Monte Carlo tree based approach to mutation testing and a set of test criteria inspired by the modified condition/decision coverage (MC/DC) criterion used in software testing.
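As a rough illustration of the robustness property considered in the talk (not the verification techniques themselves, which give provable guarantees), the sketch below searches a small L-infinity neighbourhood of an input by random sampling. Sampling can falsify local robustness but never prove it; the model, radius, and sample count are assumptions for illustration.

    # Empirical falsification of local robustness: sample points in a small
    # L-infinity ball around the input and report any change of predicted
    # class. Unlike the talk's verification techniques, finding no
    # counterexample here provides no guarantee of robustness.
    import torch
    import torch.nn as nn

    def empirically_robust(model, x, radius=0.01, n_samples=1000):
        with torch.no_grad():
            label = model(x).argmax(dim=1)
            for _ in range(n_samples):
                delta = torch.empty_like(x).uniform_(-radius, radius)
                if model((x + delta).clamp(0, 1)).argmax(dim=1) != label:
                    return False  # counterexample found in the neighbourhood
        return True

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    print(empirically_robust(model, torch.rand(1, 1, 28, 28)))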
Dr. John Rushby (Program Director, FMSD, SRI International)
Assurance For Increasingly Autonomous Safety Critical Systems
Abstract: Increasing Autonomy (IA) is US aviation parlance for systems that employ machine learning (ML) and advanced/general AI for flight assistance short of full autonomy; it is similar to driver assistance in cars. I will describe the challenge of IA in systems that need to achieve extremely high levels of safety assurance. Some of the issues do not concern technologies such as ML directly, but rather the changes they cause or enable in system architecture and human-computer interaction. I will then discuss assurance for ML and similar systems themselves. I conclude that direct assurance is problematic and that the best solution may be a new kind of system-level or "pervasive" monitoring, and I will consider the feasibility of constructing such pervasive monitors with high assurance.
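In the spirit of the monitoring idea (though far simpler than the system-level, pervasive monitors the talk envisions), the sketch below shows a runtime monitor that checks every command from an ML component against an independently specified safety envelope and substitutes a safe default on violation. The command structure, envelope bounds, and fallback are invented placeholders, not anything from the talk.

    # Minimal runtime-monitor sketch: commands from the ML component pass
    # through only if they satisfy an independently derived safety envelope;
    # otherwise a conservative fallback is issued. All bounds and the
    # fallback command are assumed for illustration.
    from dataclasses import dataclass

    @dataclass
    class Command:
        steering: float  # radians
        speed: float     # metres per second

    def within_envelope(cmd: Command) -> bool:
        return abs(cmd.steering) <= 0.5 and 0.0 <= cmd.speed <= 30.0

    def monitor(cmd: Command) -> Command:
        return cmd if within_envelope(cmd) else Command(steering=0.0, speed=0.0)

    print(monitor(Command(steering=0.8, speed=12.0)))  # falls back to safe stop

The attraction of this architecture is that the assurance burden shifts from the opaque ML component to the much simpler monitor; whether such monitors can themselves be built with high assurance is the feasibility question the talk takes up.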
Panel discussion: Formal methods for ML-enabled autonomous systems