Machine Learning appears to have made impressive progress on many tasks, from image classification to autonomous vehicle control. ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. This is not necessarily a good thing. Adopting ML in a haphazard fashion invokes systematic risk. Understanding and categorizing the security engineering risks that ML introduces at the design level is critical. This talk focuses on the results of an architectural risk analysis of ML systems.
Target Audience: Architects, Technical Leads, Developers, and Security Engineers of ML systems; Risk Managers, Software Security Professionals, ML Practitioners, and everyone who is confronted by ML
Level: Advanced
Extended Abstract
Machine Learning appears to have made impressive progress on many tasks, including image classification, machine translation, autonomous vehicle control, and playing complex games such as chess, Go, and Atari video games. This has led to much breathless popular press coverage of Artificial Intelligence, and has elevated deep learning to an almost magical status in the eyes of the public. ML, especially of the deep learning sort, is not magic, however.

ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systematic risk invoked by adopting ML in a haphazard fashion. Our research at the Berryville Institute of Machine Learning (BIML) is focused on understanding and categorizing security engineering risks introduced by ML at the design level.

Though the idea of addressing security risk in ML is not a new one, most previous work has focused either on particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML. This talk focuses on the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. A list of the top five (of 78 known) ML security risks will be presented.
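To make the idea of a design-level risk register concrete, here is a minimal sketch in Python. The structure, field names, and example risks below are illustrative assumptions for this description only; they are not the actual taxonomy, scoring scheme, or top-five list from the BIML analysis.

```python
# Illustrative sketch of a design-level risk register for an ML system.
# Component names, risk names, and severity values are hypothetical
# examples, not taken from the BIML report.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    component: str       # ML system component the risk attaches to
    attack_surface: str  # where an attacker could exercise the risk
    severity: int        # 1 (low) .. 5 (high), assigned by the analyst

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def top(self, n: int = 5) -> list[Risk]:
        # Rank risks by severity, highest first, as a basis for
        # a "top N" list like the one presented in the talk.
        return sorted(self.risks, key=lambda r: r.severity, reverse=True)[:n]

register = RiskRegister()
register.add(Risk("training data poisoning", "raw data", "data supply chain", 5))
register.add(Risk("adversarial input manipulation", "inference API", "model inputs", 4))
register.add(Risk("training data leakage via model output", "trained model", "model outputs", 4))

for risk in register.top(3):
    print(f"[severity {risk.severity}] {risk.name} ({risk.component})")
```

The point of such a register is that risks are attached to components of the system design rather than to observed attacks, which is what distinguishes an architectural risk analysis from the dynamic, attack-centric work mentioned above.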