AI Security Likelihood Analysis

MINT has provided guidance and support for a new project led by Harriet Farlow of Mileva Security Labs, which has been awarded USD 50,000 in funding by the Foresight Institute.

Organisations across government, healthcare, finance, and the university sector are increasingly reliant on Artificial Intelligence systems to power their analysis. What is less widely appreciated is that AI is surprisingly easy to attack. Using adversarial machine learning techniques, criminals or other threat actors can disrupt these models, deceive them, or extract information from them. Such attacks could allow someone to manipulate and profit from stock analysis, leak private data for sale on the dark web, or cause a Tesla to fail to recognise a stop sign and drive straight through it.

AI remains a largely unregulated technology that offers much promise but also much danger. Over 90% of company leaders surveyed say their business uses some form of Artificial Intelligence, yet only 14% of those companies are aware of, or implement, AI security. AI security refers to the technical and governance practices that protect AI systems from adversaries.
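To make this concrete, here is a minimal sketch of one of the simplest evasion attacks, the fast gradient sign method (FGSM), written in Python with PyTorch. The toy classifier, random input, and epsilon value are illustrative assumptions for demonstration, not details of the project described here.

```python
# A minimal sketch of an evasion attack using the fast gradient sign method
# (FGSM). Everything here is illustrative: the toy classifier, the random
# input, and the epsilon value are assumptions, not artefacts of the project.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier; a real attack would target a deployed model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Nudge each pixel in the direction that increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in valid range

# A random tensor stands in for a real image (e.g. a photo of a stop sign).
x = torch.rand(1, 3, 32, 32)
y = model(x).argmax(dim=1)  # the model's original prediction
x_adv = fgsm_attack(x, y)
print("prediction changed:", bool(model(x_adv).argmax(dim=1) != y))
```

The perturbation is small enough to be imperceptible to a human, yet it is chosen precisely to push the model's prediction away from the correct label, which is what makes attacks of this kind hard to detect in practice.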

This project fills a crucial gap in AI security research by quantifying the likelihood of AI incidents. Likelihood is a key parameter in risk assessment best practice in related security fields, including cyber security and national security. In AI security there is a growing need to model risk; however, while there is substantial research into the severity of AI incidents, there is very little into their likelihood. Drawing on established risk assessment methodologies from cyber security, the study aims to construct a robust framework for evaluating and mitigating AI security risks, shedding light on this elusive dimension of risk associated with AI vulnerabilities.
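As a toy illustration of why likelihood matters, the sketch below follows the common cyber security convention of scoring risk as likelihood multiplied by severity; the categories and 1-5 scales are assumed for demonstration only.

```python
# A toy illustration of the role likelihood plays in risk scoring, following
# the common cyber security convention risk = likelihood x severity. The
# categories and 1-5 scales below are assumptions for illustration only.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3,
              "likely": 4, "almost certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3,
            "major": 4, "critical": 5}

def risk_score(likelihood: str, severity: str) -> int:
    """Combine both dimensions into a single 1-25 risk score."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

# Severity alone cannot rank these two hypothetical AI incidents; the
# likelihood estimate, the gap this project targets, is what separates them.
print(risk_score("rare", "critical"))    # -> 5
print(risk_score("likely", "moderate"))  # -> 12
```

Without a defensible likelihood estimate, the first factor in that product is unknown, and risk rankings of AI incidents cannot be computed at all; that is the gap the project aims to close.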

Key outputs will include an open-source database of AI incidents enriched with likelihood analysis, and a series of articles and presentations. As AI continues to permeate every aspect of our society, the evolution of adversarial machine learning from a niche academic field to a critical component of global cyber security underscores the significance of this research.
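By way of illustration only, a record in such a database might pair each incident with both severity and likelihood fields; the schema below is a hypothetical sketch, not the project's actual design.

```python
# A purely illustrative record shape for an AI incident database enriched
# with likelihood analysis; the field names and scales are assumptions,
# not the project's actual schema.
from dataclasses import dataclass

@dataclass
class AIIncident:
    incident_id: str
    description: str
    attack_type: str   # e.g. "evasion", "poisoning", "model extraction"
    severity: int      # 1 (negligible) to 5 (critical)
    likelihood: float  # estimated probability of a comparable incident, 0-1
```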

The project team comprises project leader (and MINT Affiliate) Harriet Farlow and researcher Tania Sadhani, with support from advisors Professor Seth Lazar of MINT and Dr Tim Lynar of UNSW Canberra’s Innovation Lab for Cyber Security and Machine Learning.