What is agency?
Philosophical Origins of the Agent Concept
Copyright Continuum Labs - 2023
The paper investigates the origins and philosophical underpinnings of the concept of an "agent", and how that concept has been integrated into the field of Artificial Intelligence (AI).
It explores how the idea of agency has evolved from philosophical discussion into a central element of AI research, focusing on the potential of artificial entities to exhibit agency and on the implications of large language models (LLMs) for advancing AI agents towards more sophisticated and autonomous functionality.
Agency in Philosophy vs. AI: In philosophy, agency encompasses entities with desires, beliefs, intentions, and the ability to act autonomously, not limited to humans but extending to other physical and virtual entities. This contrasts with the field of AI, where an agent is redefined as a computational entity that perceives its environment and acts upon it.
AI Agents as Computational Entities: Unlike philosophical agents, AI agents are computational, focusing on observable behaviours rather than metaphysical attributes like consciousness. AI researchers often sidestep debates on consciousness and mind, instead describing AI agents through attributes like autonomy, reactivity, pro-activeness, and social ability.
This paper presents a survey on the evolution and current state of artificial intelligence (AI) agents, with a particular focus on the role of large language models (LLMs) in advancing these agents towards achieving Artificial General Intelligence (AGI).
It traces the philosophical roots of AI agents back to thinkers like Aristotle and Hume, and maps the journey of these concepts into the domain of computer science, where they have been adapted to enable computers to autonomously perform actions based on users' interests.
The concept of AI agents has been influenced by historical perspectives, notably Denis Diderot's 18th-century idea that a parrot capable of responding to every question could be considered intelligent, and Alan Turing's mid-20th-century proposal of the Turing Test to determine whether machines can display human-like intelligent behaviour.
This paper identifies AI agents as artificial entities that perceive their surroundings, make decisions, and take actions, building on the foundation laid by philosophy and further developed by Turing's work.
AI agents are highlighted as the cornerstone of AI systems, described as entities with autonomy, reactivity, pro-activeness, and social ability.
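The perceive-decide-act cycle that underlies this definition of an agent can be illustrated with a minimal sketch. The class and method names below are hypothetical, chosen only to make the four attributes concrete; they are not taken from the paper.

```python
# A minimal, hypothetical sketch of the perceive-decide-act agent loop.
# Names are illustrative, not from the survey.

class SimpleAgent:
    def __init__(self):
        self.memory = []  # past percepts inform later decisions (autonomy)

    def perceive(self, environment):
        # Reactivity: observe the current state of the environment.
        percept = environment["state"]
        self.memory.append(percept)
        return percept

    def decide(self, percept):
        # Pro-activeness: choose an action in pursuit of a goal,
        # rather than a fixed reflex. (Here, trivially derived.)
        return "act_on_" + str(percept)

    def act(self, action, environment):
        # The action changes the environment the agent will perceive next.
        environment["state"] = action
        return environment


agent = SimpleAgent()
env = {"state": "initial"}
for _ in range(3):
    percept = agent.perceive(env)
    action = agent.decide(percept)
    env = agent.act(action, env)
```

Social ability, the fourth attribute, would enter once several such agents share one environment and each other's actions become part of what they perceive.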
The paper emphasises the transition of the agent concept from philosophy to computer science, aiming to create computer systems that can understand human interests and act autonomously. The exploration and technical advancement of agents are identified as key areas within AI research, crucial for the pursuit of AGI.
Since the mid-20th century, advances in AI agents have focused on enhancing specific capabilities or mastering particular tasks, such as playing Go or chess.
However, the paper notes that achieving broad adaptability across various scenarios remains a challenge, with previous efforts largely concentrating on algorithm design and training strategies rather than on developing the model’s inherent general abilities.
The development of LLMs is presented as a turning point for furthering the development of AI agents, with LLMs demonstrating powerful capabilities in knowledge acquisition, instruction comprehension, generalisation, planning, reasoning, and effective natural language interactions.
The paper proposes expanding LLMs into agents equipped with expanded perception and action spaces to reach higher levels of the World Scope (WS), a framework depicting the research progress from NLP to general AI.
A general conceptual framework for LLM-based agents is introduced, consisting of three key parts: brain, perception, and action. The brain, primarily composed of an LLM, is the core for storing memories, processing information, and making decisions. The perception module is intended to expand the agent’s perceptual space beyond text to include multiple sensory modalities. The action module is designed to expand the agent’s capability to take actions in the environment.
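The brain-perception-action framing can be sketched in code. The sketch below is a hypothetical illustration of the framework's structure under stated assumptions: the `llm` function is a stand-in for any text-in, text-out model call, and the class and tool names are invented for this example, not defined by the paper.

```python
# Hypothetical sketch of the brain-perception-action framework for an
# LLM-based agent. The llm() call is a placeholder, not a real API.

def llm(prompt: str) -> str:
    # Stand-in "brain": a real system would query an LLM here.
    return f"decision based on: {prompt}"


class LLMAgent:
    def __init__(self, tools):
        self.memory = []    # brain: stores past percepts
        self.tools = tools  # action: available ways to affect the world

    def perceive(self, observation) -> str:
        # Perception: convert a (possibly multimodal) observation into
        # text the brain can process. This sketch handles text only.
        return str(observation)

    def think(self, percept: str) -> str:
        # Brain: combine stored memories with the new percept, then decide.
        context = " | ".join(self.memory + [percept])
        decision = llm(context)
        self.memory.append(percept)
        return decision

    def act(self, decision: str):
        # Action: route the decision to a tool (a single one, here).
        return self.tools["respond"](decision)


agent = LLMAgent(tools={"respond": lambda d: f"agent says: {d}"})
out = agent.act(agent.think(agent.perceive("user greeting")))
```

Expanding the perception module would mean adding encoders for images or audio ahead of `think`; expanding the action module would mean registering more tools, such as code execution or robot control.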
The paper explores the practical applications of LLM-based agents, including single-agent and multi-agent systems, and discusses the potential for human-agent collaboration.
It envisions an "Agent Society" where agents exhibit human-like behaviour and personality within diverse social environments, potentially achieving a harmonious society where humans and AI agents coexist.
The paper concludes with a discussion on mutual benefits and inspirations between LLM and agent research, existing evaluation efforts, potential risks of LLM-based agents, challenges of scaling up agent counts, and several open problems in the field.
These include debates over LLM-based agents as a path to AGI, transitions from virtual to physical environments, collective intelligence in AI agents, and the concept of "Agent as a Service."
In summary, this paper provides a deep dive into the history, development, and future prospects of AI agents, emphasising the transformative potential of LLM-based agents in advancing towards AGI and fostering a world where humans and intelligent agents coexist harmoniously.
Debate on AI and Agency: The paper discusses the contentious issue of whether AI systems can possess agency in a philosophical sense, given their lack of consciousness and intentionality as traditionally understood. It highlights the argument that attributing psychological states like intention to AI agents may amount to anthropomorphising them and lack scientific rigour.
Language Models as Potential Agents: Despite these concerns, advancements in language models suggest a burgeoning possibility for artificial intentional agents. Language models function as conditional probability models but lack the incorporation of social and perceptual context that characterises human speech based on mental states. However, some researchers argue that, in a narrow sense, language models can infer representations of beliefs, desires, and intentions, making them capable of generating human-like utterances.
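The claim that language models are conditional probability models can be made concrete with a toy bigram model: the probability of the next word is estimated purely from counts of what followed the current word in a corpus. This is a deliberately tiny illustration of the conditional-probability view, not a description of how modern LLMs are trained; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy bigram language model: estimates P(next word | current word)
# from raw co-occurrence counts. Illustrative only; modern LLMs are
# neural networks conditioned on far longer contexts.

corpus = "the agent acts and the agent perceives and the world changes".split()

counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def next_word_probs(word):
    following = counts[word]
    total = sum(following.values())
    return {w: c / total for w, c in following.items()}

# After "the", the corpus shows "agent" twice and "world" once,
# so the model assigns them probabilities 2/3 and 1/3.
probs = next_word_probs("the")
```

The gap the paper points to is visible even here: the model conditions only on surface text, with no representation of the beliefs, desires, or perceptual context that shape human word choice.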