AI Usage for Vulnerability Assessment
Graph of Effort - Quantifying Risk of AI Usage for Vulnerability Assessment.
This project addresses the emerging security challenges posed by offensive AI - the use of AI-based software to attack non-AI assets. As AI capabilities such as high automation and complex pattern recognition become widely available, the potential for their malicious use against vulnerable systems increases significantly.
While current research explores offensive AI utilization and classification, and threat modeling methods are being developed for AI-based assets, critical gaps remain. Organizations need effective ways to quantify the factors contributing to AI threats and to assess the risk of AI-based attacks across all their assets. This challenge is particularly acute in cloud environments with their sophisticated infrastructure and complex access control landscapes.
To address these challenges, this project developed the Graph of Effort, an intuitive, flexible, and effective threat modeling method. The Graph of Effort quantifies the effort required by an adversary to use offensive AI for vulnerability exploitation, enabling security analysts to rank vulnerabilities and prioritize proactive countermeasures.
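The exact way the Graph of Effort encodes and aggregates effort is not spelled out here, so the following Python sketch only illustrates the general idea under assumed semantics: each vulnerability is paired with a small directed graph of adversary actions whose edge weights represent effort, and vulnerabilities are ranked by the minimal total effort needed to reach exploitation. The node names, effort values, and shortest-path aggregation are illustrative assumptions, not the project's actual method.

```python
# Illustrative sketch only: the real Graph of Effort may define nodes, edge
# weights, and aggregation differently. Here, each vulnerability gets a small
# directed graph whose edges carry hypothetical effort values, and the
# vulnerability's score is the minimum total effort an adversary needs to
# reach the "exploit" node. Lower effort means higher remediation priority.
import heapq


def min_effort(graph, start, goal):
    """Dijkstra over effort-weighted edges; returns the minimal total effort."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        effort, node = heapq.heappop(queue)
        if node == goal:
            return effort
        if effort > dist.get(node, float("inf")):
            continue
        for neighbor, edge_effort in graph.get(node, []):
            candidate = effort + edge_effort
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return float("inf")  # goal unreachable: no known AI-assisted path


# Hypothetical per-vulnerability effort graphs (node names and values invented).
vulnerabilities = {
    "CVE-A (exposed cloud API)": {
        "recon": [("craft_prompt", 1.0), ("train_model", 4.0)],
        "craft_prompt": [("exploit", 2.0)],
        "train_model": [("exploit", 1.0)],
    },
    "CVE-B (hardened internal service)": {
        "recon": [("bypass_access_control", 5.0)],
        "bypass_access_control": [("exploit", 3.0)],
    },
}

# Rank vulnerabilities: the one cheapest for an AI-assisted adversary comes first.
ranking = sorted(
    (min_effort(graph, "recon", "exploit"), name)
    for name, graph in vulnerabilities.items()
)
for effort, name in ranking:
    print(f"effort={effort:.1f}  {name}")
```

Ranking by minimal required effort, as in this sketch, is what lets analysts prioritize countermeasures for the vulnerabilities that are cheapest for an AI-assisted adversary to exploit.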
The project delivers a functional threat model that supports vulnerability assessment: it enables organizations to systematically analyze how offensive AI could be leveraged against their assets and helps security teams make informed decisions about defense strategies. Future work will focus on empirically validating the design choices underlying the threat model.