Can (And Should) AI Be Considered an Agent?

Gabriel Lima, Computer science undergraduate student, KAIST, South Korea

ABSTRACT

In this short essay, I share my thoughts on the relationship between artificial intelligence (AI) and various definitions of agency. Can AI be considered an agent? More specifically, does AI fulfill the requirements set forth in various definitions of agency? Depending on the perspective and definition taken by the reader, the agency of AI can appear controversial, unimaginable, or an unquestionable truth. A question that is often neglected, however, is whether AI should be given any agency. Even though we often derive normative statements of value (e.g., should, ought to) from descriptive statements of fact (e.g., can, is), the distinction between the two is important, and many philosophers have argued that this inference is not necessarily valid or advisable. I conclude by raising the open question of whether AI should be an agent in our society, independently of whether it fulfills the agency requirements set by various definitions. Instead of focusing on the abilities of an AI, what if we first ask whether it would be beneficial to treat an AI as an agent in society?


Introduction

  

Agency has never been clearly defined across, or even within, disciplines. Although it is often related to autonomy, responsibility, or causality, no single definition settles every detail of the complicated question of who (or what) counts as an agent.


In this short essay, I share my thoughts on the relationship between artificial intelligence (AI) and agency. Can AI be considered an agent? More specifically, does AI fulfill the requirements set out in various definitions of agency? Depending on the perspective taken by the reader, the agency of AI can appear controversial, unimaginable, or an unquestionable truth. A question that is often neglected, however, is whether AI should be given agency. Even though we often derive normative statements of value (e.g., should, ought to) from descriptive statements of fact (e.g., can, is), the distinction between the two is important, and many philosophers have argued that this inference is not necessarily valid or advisable. Finally, I conclude by raising the open question of whether AI should be an agent in our society, independently of whether it fulfills the agency requirements set by various definitions.


As noted above, agency is not clearly defined, so examining whether AI could qualify as an agent under every proposed notion of agency is infeasible. In the following short subsections, I consider some common sociological, legal, philosophical, and technological definitions of agency and share my thoughts on whether AI could be considered an agent under each of them.


An Agent Is a Goal-Oriented Entity

Does an AI have a goal? From a computer science perspective, this is often how we create and train AIs. In reinforcement learning, for instance, we teach AIs by rewarding them depending on whether they have achieved a set goal. The goals of an AI are not intrinsic but extrinsic; the programmer sets them according to his or her needs. This does not, however, disqualify AI as a candidate for agency. Under the view that agency consists in goal-oriented behavior, AI could be seen as an agent.
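To make this concrete, here is a minimal, hypothetical sketch (not from the essay) of how a programmer encodes such an extrinsic goal as a reward signal in a reinforcement learning setting; the goal state and reward values are assumptions chosen purely for illustration.

```python
# Hypothetical example: the designer, not the AI, decides what counts as success.
GOAL_STATE = (3, 3)  # goal position in a toy grid world, chosen by the programmer

def reward(state) -> float:
    """Return 1.0 only when the designer-chosen goal state is reached."""
    return 1.0 if state == GOAL_STATE else 0.0
```

The point of the sketch is simply that the reward function, and hence the goal, is written by a human; nothing in the system generates its own ends.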


An Agent Can Act and Modify Its Behavior Depending on the Environment

This definition is often used in computer science when dealing with reinforcement learning, a method used to train AIs. In this setting, we explicitly define the AI as an agent that interacts with an environment, selecting actions according to a policy. Given that the AI is defined as an agent from its conception, it is easy to imagine it as an agent after its deployment.
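As a rough illustration of this framing, the following sketch shows the standard agent-environment interaction loop; the grid-world dynamics and the random policy are hypothetical placeholders, not a description of any particular system.

```python
import random

ACTIONS = ["up", "down", "left", "right"]

def policy(state):
    # A trivial random policy; a trained agent would instead choose actions
    # based on what it has learned from past rewards.
    return random.choice(ACTIONS)

def environment_step(state, action):
    # Placeholder dynamics for a 4x4 grid world that rewards reaching a corner.
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    x, y = state
    next_state = (min(max(x + dx, 0), 3), min(max(y + dy, 0), 3))
    return next_state, (1.0 if next_state == (3, 3) else 0.0)

state = (0, 0)
for _ in range(20):                              # one short episode
    action = policy(state)                       # the "agent" acts...
    state, r = environment_step(state, action)   # ...and the environment responds
```

In this formalism the AI is an agent by definition: it is simply whatever observes states, takes actions, and receives rewards.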


An Agent Has an Effect on the World and Drives Social Change

Following this more sociological perspective, an agent must make a difference in society to qualify for agency. In the current "AI Summer," AI is affecting society in ways many did not expect, or did expect but unfortunately neglected. AI has been disruptive across diverse sectors of society. Job markets adapting to the introduction of these electronic entities, and recommendation algorithms controlling what information certain parts of society can access, are just two of the many novel consequences AI is imposing on society. Considering this impact, it is not hard to see AI as an agent.


An Agent Can Engage With or Resist Colonial Power

Even though sci-fi scenarios give us the idea that AI can resist the power of its creators, this possibility remains remote. AI cannot resist and turn against its creators, owing both to its lack of ability and to the high level of control creators still retain over their creations. AIs are far from engaging with, let alone inverting, the power pyramid, where they sit at the very bottom. More importantly, how could they even set that as a goal when an AI is not currently able to have intrinsic goals? Under this conception, AI cannot be an agent, since it does not engage in any action addressing its creators or its hierarchical position.


An Agent Is an Entity That Acts on Behalf of a Principal

We often build AIs to complete a certain task for humans. These systems act on behalf of a principal, which can be their programmers, manufacturers, or users. The principal sets the AI's goals, and the system works towards achieving them. By this conception of agency, an AI is clearly an agent. Some authors even argue that AI could be a "perfect agent," since it has no intentions or goals of its own that could deviate from its principal's.


An issue raised by many legal scholars about AI agency is the usual requirement of a contract to establish a principal-agent relationship. Since AI has not (yet) been granted any kind of legal personhood, it cannot be a party to a legal contract. Consequently, while an AI could be seen as an agent acting under a principal in economic terms, it cannot qualify as one legally.


An Agent Can Bear Responsibility for Its Actions

Can an AI be responsible for its actions? How would responsibility even be assigned to an entity that cannot be held accountable? If an AI causes damage, how can it be punished? These issues are raised by many legal scholars when dealing with the assignment of liability for an act with legal consequences committed by an AI. At present, liability usually falls on the manufacturer or user of an AI, so the AI system itself cannot be seen as an agent in this sense.


But Should AI Be Considered an Agent?


As I have argued above, depending on how one defines agency, the idea of AI being an agent can seem either reasonable or completely absurd. Given that it is a possibility, should we consider AI an agent? Even though we often derive whether an entity deserves any consideration from its ontology and capabilities, should we apply the same reasoning to AI? Would doing so benefit our society, our legal systems, or even humanity as a whole? Should we even ask that question?


With the fast development of AI, we keep dwelling on what each system can and cannot do, and we thereby neglect the question of whether this is the right consideration to focus on. What if, instead of focusing on what an AI can do, we center the discussion on whether these entities should be treated as agents, no matter how complex, intelligent, or autonomous they might be? Although the abilities and inabilities of current AI systems are important to the discussion of AI's position in society, they might be better left as a follow-up to the more immediate inquiry: given the lack of agreement on the definition of agency, and regardless of the abilities of these newly developed entities, is it socially beneficial, or even possible, to consider AIs as agents?