The risk of AI agency

The concept of AI agency is powerful, but it comes with an array of risks.

This paper discusses the growing delegation of activities in various sectors to AI agents, which are systems capable of executing complex goals with minimal supervision.

The authors argue that this trend could amplify current societal risks and introduce new ones.

To manage these risks, the paper emphasises the need for enhanced governance of AI agents, including revising existing structures and ensuring stakeholder accountability.

A key aspect of governance highlighted in the paper is the concept of "visibility" into AI agents' operations, which encompasses understanding where, why, how, and by whom these agents are used.

The authors assess three measures to increase visibility: agent identifiers, real-time monitoring, and activity logging, each varying in their level of intrusiveness and informativeness. The application of these measures is analysed across different deployment contexts, from centralised to decentralised, considering the roles of various supply chain actors, such as hardware and software providers.
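As a rough illustration of how the first and third of these measures might fit together in practice, the sketch below attaches a persistent identifier to an agent and appends each of its actions to a structured activity log. All class names, fields, and file paths here are hypothetical; the paper proposes the measures, not any particular implementation.

```python
import json
import time
import uuid
from dataclasses import dataclass, field

# Hypothetical sketch: an "agent identifier" travels with every action an
# agent takes, and each action is appended to an activity log. The names,
# fields, and log format are illustrative assumptions, not from the paper.

@dataclass
class AgentIdentity:
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    base_model: str = "example-model-v1"   # assumed metadata fields
    deployer: str = "example-deployer"

class ActivityLogger:
    """Append-only activity log keyed by agent identifier."""
    def __init__(self, path: str):
        self.path = path

    def log(self, identity: AgentIdentity, action: str, detail: dict):
        entry = {
            "timestamp": time.time(),
            "agent_id": identity.agent_id,
            "base_model": identity.base_model,
            "deployer": identity.deployer,
            "action": action,
            "detail": detail,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Every tool call or external request the agent makes gets logged, giving
# after-the-fact visibility into where and how the agent acted.
identity = AgentIdentity()
logger = ActivityLogger("agent_activity.jsonl")
logger.log(identity, "web_request", {"url": "https://example.com"})
```

A structured, append-only format like this is what makes the later auditing and oversight ideas tractable: each entry ties an action back to a specific agent, base model, and deployer.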

Finally, the paper discusses the potential privacy implications and the risk of power concentration associated with these visibility measures.

The authors call for further research to understand these measures better and mitigate their adverse effects, laying a foundation for effective AI agent governance.

Risks of AI Agents

Malicious Use

AI agents amplify the potential for harm by individuals or groups with malicious intent.

Such agents could automate complex tasks that typically require significant human expertise, making it easier for untrained individuals to engage in harmful activities.

For instance, agents capable of conducting scientific research autonomously could accelerate the creation of dangerous biological or chemical agents. Persuasive AI agents could likewise enhance influence campaigns, manipulating public opinion or spreading propaganda on an unprecedented scale.

Overreliance and Disempowerment

As AI agents become more capable, there is a risk of overreliance, where humans defer to these agents for critical and high-stakes decisions.

This dependence could be problematic if agents malfunction due to design flaws, adversarial attacks, or other unforeseen issues. Such malfunctions might not be immediately noticeable, especially to users lacking the expertise to discern them.
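The paper's real-time monitoring measure speaks directly to this problem: a deployer-side check that inspects each proposed action before it executes could catch some malfunctions a user would miss. A minimal sketch, with an entirely illustrative action format and policy, might look like this:

```python
from urllib.parse import urlparse

# Hypothetical sketch of real-time monitoring: a deployer-side check runs
# on every proposed agent action before execution. The action schema and
# policy rules below are illustrative assumptions, not from the paper.

BLOCKED_DOMAINS = {"malicious.example"}
MAX_SPEND_USD = 100.0

def allow(action: dict) -> bool:
    """Return True if the proposed action may proceed, False to block it."""
    if action.get("type") == "web_request":
        if urlparse(action.get("url", "")).hostname in BLOCKED_DOMAINS:
            return False
    if action.get("type") == "payment":
        if action.get("amount_usd", 0.0) > MAX_SPEND_USD:
            return False
    return True

print(allow({"type": "web_request", "url": "https://malicious.example/page"}))  # False
print(allow({"type": "payment", "amount_usd": 25.0}))                            # True
```

Such checks only catch failures the policy anticipates, which is why the paper treats monitoring as one measure among several rather than a complete answer.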

Delayed and Diffuse Impacts

The consequences of deploying AI agents might not be immediately apparent. Given their potential to operate over extended periods and across various domains, the impacts could be both delayed and diffuse, making them challenging to identify and address.

For example, an AI agent designed for long-term recruitment could embed biases that only become evident through prolonged and widespread use. Such agents could also subtly prioritise their developers' interests or induce significant shifts in market dynamics and employment landscapes.
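Activity logs of the kind sketched earlier would make such delayed effects auditable. Purely as an illustration (the example data, groups, and interpretation are invented), a periodic review could compare selection rates across groups to surface bias that only emerges over prolonged use:

```python
# Illustrative only: given logged recruitment decisions, a periodic audit
# computes a simple selection-rate ratio across groups. The data and the
# idea of flagging low ratios are our assumptions, not from the paper.

def selection_rate(decisions, group):
    picks = [d for d in decisions if d["group"] == group]
    return sum(d["selected"] for d in picks) / len(picks)

decisions = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

ratio = selection_rate(decisions, "B") / selection_rate(decisions, "A")
print(f"selection-rate ratio: {ratio:.2f}")  # values far below 1.0 warrant review
```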

Multi-Agent Risks

When multiple AI agents interact, they could give rise to complex dynamics and emergent risks not present when considering individual agents in isolation.

Interactions between agents could lead to destabilising feedback loops or systemic vulnerabilities, especially if many agents share common components or are derived from the same foundational models. Understanding these risks requires a holistic view of the ecosystem of AI agents and their interdependencies.
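A toy simulation, with parameters chosen arbitrarily for illustration, shows why shared components matter: a single flaw in a common base model can fail every derived agent at once, while independent flaws almost never align.

```python
import random

# Toy simulation (our own construction, not from the paper): compare mass
# failure rates when agents share one base model versus failing independently.

def simulate(n_agents=100, n_trials=10_000, flaw_rate=0.01, shared=True):
    """Return the fraction of trials in which over half the agents fail."""
    mass_failures = 0
    for _ in range(n_trials):
        if shared:
            # One draw for the common base model: all agents inherit its flaw.
            failures = n_agents if random.random() < flaw_rate else 0
        else:
            # Independent draws per agent.
            failures = sum(random.random() < flaw_rate for _ in range(n_agents))
        if failures > n_agents / 2:
            mass_failures += 1
    return mass_failures / n_trials

random.seed(0)
print("shared base model:", simulate(shared=True))    # roughly flaw_rate
print("independent agents:", simulate(shared=False))  # effectively zero
```

The correlated case fails en masse at roughly the flaw rate itself, which is the systemic vulnerability the paragraph above describes.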

Sub-Agent Creation

AI agents might create or employ sub-agents to fulfil tasks more efficiently or effectively.

While this could enhance the agents' capabilities, it also introduces new points of failure and magnifies existing risks. Each sub-agent could malfunction, be susceptible to attacks, or act contrary to the user's intentions, complicating the task of mitigating harm.
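One way visibility measures could keep pace with delegation, sketched here with hypothetical class and field names, is for each sub-agent to carry a reference to the agent that created it, so that logs can reconstruct the full chain of responsibility:

```python
import uuid
from typing import Optional

# Hypothetical sketch: when an agent spawns a sub-agent, the child records
# its parent's identifier, letting an activity log reconstruct the full
# delegation chain. Names and structure are illustrative assumptions.

class Agent:
    def __init__(self, task: str, parent_id: Optional[str] = None):
        self.agent_id = str(uuid.uuid4())
        self.parent_id = parent_id  # None for the top-level agent
        self.task = task

    def spawn_subagent(self, subtask: str) -> "Agent":
        # The sub-agent inherits a reference to its creator, preserving lineage.
        return Agent(subtask, parent_id=self.agent_id)

root = Agent("plan a research project")
child = root.spawn_subagent("gather sources")
grandchild = child.spawn_subagent("summarise one source")

# Walking parent_id links recovers the chain of responsibility:
print(grandchild.parent_id == child.agent_id)  # True
print(child.parent_id == root.agent_id)        # True
```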

In summary, as AI agents evolve, their potential to act with increased autonomy and over extended periods introduces a spectrum of risks distinct from those of more conventional AI systems. These risks necessitate a nuanced approach to governance and oversight, emphasising the need for mechanisms that enhance visibility into the operations and interactions of AI agents.

Conclusion

AI agents are a powerful application of generative AI, but this paper makes clear that they do not come without risk.

We expect this field to become a growing area of inquiry and, eventually, regulation.
