2.6.4 AI security and hacking

I think the most worrisome aspect of AI systems in the short term is that we will give them too much autonomy without being fully aware of their limitations and vulnerabilities. We tend to anthropomorphize AI systems: we impute human qualities to them and end up overestimating the extent to which these systems can actually be fully trusted.
Melanie Mitchell • Artificial Intelligence: A Guide for Thinking Humans

Asset Vulnerability: Identifying vulnerabilities within these assets is the next step. Vulnerabilities can be technical (e.g., unpatched software) or human-related (e.g., suboptimal configuration). Individual vulnerabilities will also have different outcomes and widely varying likelihoods of real-world exploitation. Does successful exploitation o…
Rik Ferguson • The Cybersecurity Resilience Quotient: Measuring Security Effectiveness
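A minimal sketch of the idea that individual vulnerabilities carry different outcomes and exploitation likelihoods. The asset names, scores, and the simple likelihood-times-impact weighting are illustrative assumptions, not Ferguson's own method:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    asset: str           # the asset the weakness belongs to
    kind: str            # "technical" or "human-related"
    likelihood: float    # estimated probability of real-world exploitation, 0..1
    impact: float        # estimated severity of the outcome if exploited, 0..10

def risk_score(v: Vulnerability) -> float:
    """Simple likelihood-times-impact weighting; real frameworks weight differently."""
    return v.likelihood * v.impact

# Hypothetical assets and estimates, purely for illustration.
vulns = [
    Vulnerability("public model API", "technical", likelihood=0.6, impact=7.0),
    Vulnerability("deployment platform", "technical", likelihood=0.2, impact=9.0),
    Vulnerability("operator configuration", "human-related", likelihood=0.4, impact=5.0),
]

# Rank vulnerabilities so the most pressing ones surface first.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(f"{v.asset:<25} {v.kind:<14} risk={risk_score(v):.1f}")
```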
Fundamentally, the machine learning methodology used in modern AI systems is susceptible both to attacks through the public APIs that expose the model and to attacks against the platforms on which it is deployed. This report focuses on the former and considers the latter to fall within the scope of traditional cybersecurity taxonomies.
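To make that first attack surface concrete, here is a minimal sketch of probing a model purely through its public prediction API, without ever touching the platform it runs on. The endpoint URL, payload shape, "label" field, and query budget are hypothetical assumptions, not taken from the report:

```python
import requests

# Hypothetical public prediction endpoint; a real API's shape will differ.
API_URL = "https://example.com/v1/predict"
QUERY_BUDGET = 100  # the attacker needs only ordinary API access, not platform access

def query_model(features: list[float]) -> dict:
    """Send one input to the exposed model and return its prediction."""
    resp = requests.post(API_URL, json={"features": features}, timeout=10)
    resp.raise_for_status()
    return resp.json()

def probe_decision_boundary(base: list[float], index: int, step: float = 0.05):
    """Perturb one feature until the predicted label flips.

    Query-only probing of this kind underlies black-box evasion and
    model-extraction attacks: everything happens through the public API.
    """
    original = query_model(base)["label"]
    candidate = list(base)
    for _ in range(QUERY_BUDGET):
        candidate[index] += step
        if query_model(candidate)["label"] != original:
            return candidate  # an input that crosses the model's decision boundary
    return None

if __name__ == "__main__":
    adversarial = probe_decision_boundary([0.1, 0.4, 0.9], index=0)
    print("boundary-crossing input:", adversarial)
```

The point of the sketch is that nothing here requires compromising servers or stealing credentials; the attack surface is the model's own input-output behaviour, which is exactly what traditional cybersecurity taxonomies tend not to cover.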