Human agency in the age of AI

Abeba Birhane


The question of agency necessarily provokes the question of what it means to be a person and, in particular, what it means to be a person in the age of ubiquitous artificial intelligence (AI) systems. We are embodied beings that inherently exist in a web of relations within political, historical, cultural, and social norms. Increasingly, seemingly invisible AI systems permeate most spheres of life, mediating and structuring our communications, interactions, relations, and ways of being. Since we do not exist in a social, political, historical, and AI-mediated vacuum, it is imperative to ground agency as inherently inseparable from the person as construed in such contingent constituent factors. Depending on the context and the space we occupy in the social world, all these dynamic and contingent factors serve as enabling constraints for our capacity to act. Our capacity to act within these contextual factors varies in degree depending on the space we occupy at a certain time, in a certain socioeconomic context; the more privileged we are, the fewer the potential constraints, and the greater our degrees of agency.


The individual is never a fully autonomous entity: rather, they come into being and maintain that sense of existence through dynamic, intersubjective, and reciprocal relations with others[1]. Our biology, current social and cultural norms, historical and contextual contingencies, as well as our physical and technological environment, constitute who we are and our degrees of agency within a given time and context. Increasingly, AI systems are becoming an integral part of our environment – be it the search engines that we interact with, our social media activities, the facial recognition systems that we come in contact with, or the algorithmic systems that sift through our job applications – further adding enabling, or limiting, constraints. (Enabling constraints here might include having a common Western male name, or other demographic traits, that a job-application algorithm chooses to include rather than exclude. These are still constraints, but in certain instances they increase opportunity rather than decrease it.)

We are embodied beings that necessarily exist in a web of relations with others, within certain social and cultural norms as well as through emerging technologies. This means our sense of being, as well as our capacity to act, are inextricably intertwined and continually changing as we move between various spheres, taking on various roles. The various factors that constitute (and sustain) who we are influence the varying degrees of agency we are afforded. As we go about our daily lives, we move between various social and cultural conventions, physical environments that enable (or disable) certain behaviors and actions as opposed to others, and technological tools that shape, reinforce, and nudge behavior and actions in certain directions (and not others). As a PhD student, my role, responsibility, and capacity to act in my academic environment, for example, differ from my role, responsibility, and capacity for action at a social gathering within the immigrant community. Furthermore, my interaction with others through Twitter differs from both of these contexts, and is partially determined by the ways the medium affords possible actions and interactions. Our sense of agency, then, is fluid, dynamic, and continually negotiated within these various physical, mental, psychological, technological, and cultural spaces. Discussion of agency, consequently, cannot emerge in a social, technological, and contextual vacuum. Nor is agency something we can view as stable or pin on individual persons, given the complex, contingent, and changing factors that constitute and sustain personhood.

At the same time, agency cannot be an abstract term that we attempt to define and analyze in a general, one-size-fits-all manner; it needs to be grounded in people. People, due to their embeddedness in context, culture, history, and socio-economic status, are afforded varying degrees of enabling constraints. Agency, therefore, is not an all-or-nothing phenomenon but something that varies in degree depending on individual factors, circumstances, and situations. Individuals at the top of the socio-economic hierarchy, for example, face relatively fewer disabling constraints, resulting in a higher degree of agency, and the reverse holds for those at the lower end of society. Depending on their socio-economic and educational background, for instance, a person may be labelled “eccentric” vs. “insane”, a “lone wolf” vs. a “radicalized extremist”, a “freedom fighter” vs. a “terrorist”.

Agency, AI, and ethical considerations

Living in a world of ubiquitous networked communication, a world where AI technologies are interwoven into the social, political, and economic sphere means living in a world where who we are, and subsequently our degree of agency, is partially influenced by automated AI systems.

The concept of AI often provokes the idea of (future and imaginary) sentient artificial beings, or of autonomous machines such as self-driving cars or robots. These preconceptions often assume (implicitly or otherwise) that AI systems are entities that exist independently of humans, in a machine vs. human dichotomy. This view, which dominates academic and public discourse surrounding AI, is a deeply misconceived, narrow, and one-dimensional conception of AI. What AI refers to in the present context is instead grounded in current systems and tools that operate in most spheres of life. These are seemingly invisible tools and systems that mediate communication and interaction with others and with other technological infrastructures, altering the social fabric. These AI systems make life effortless, as they disappear into the background to the extent that we forget their very existence. They have become so inextricably integrated with our daily lives that life without them seems unimaginable. As Weiser[2] has argued, these are the most profound and powerful technologies: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”

These systems sort, classify, analyze, and predict our behaviors and actions. Our computers, credit card transactions, phones, and the cameras and sensors that proliferate in public and private spaces are recording and codifying our “habits”, “behaviors”, and “experiences”. This ubiquitous, interlinked technological milieu continually maps out the where, when, what, and how of our behaviors and actions, yielding superficial patterns from which “who we are” is inferred[3]. Whether we are engaging in political debate on Facebook, connecting to “free” wi-fi, using Google Maps to get from point A to B, searching for sensitive health information on Google, ordering groceries, posting selfies on Instagram, or out in the park for a jog, our actions and behaviors produce a mass flow of data that yields pattern-based, actionable indices about “who we are”. These superficial extrapolations, in turn, feed models that predict how we might behave in various scenarios: whether we might be a “suitable” candidate for a job, are likely to commit crimes, or are risks that should be denied loans or mortgages. Questions of morality (often misconceived as technical questions in need of a technical fix) are increasingly handed over to engineers and commercial industries developing and deploying AI systems, as those systems are entrusted with sorting, detecting patterns in, and predicting behaviors and actions. These predictive systems give us options and opportunities to act, or they limit what we see and the possible actions we can take. And as O’Neil[4] reminds us, individuals do not pass through these processes to the same degree, nor do they suffer the consequences equally: “The privileged are processed by people, the masses by machines.”

These systems not only predict behavior based on observed similar patterns; they also alter the social fabric and reconfigure the nature of reality in the process. Through “personalized” ads and recommender systems, for example, the range and number of options put in front of us vary depending on the AI’s decision about “who we are”, which reflects the place we occupy in the social hierarchy. The constraints that afford us little or great room to act in the world are closely related to our socio-economic status and, increasingly, to who our data says we are. Unsurprisingly, the more privileged we are, the more we are afforded the capacity to overrule algorithmic identification and personalization (or not be subjected to them at all), maximizing our degrees of agency.

Since agency is inextricably linked to subjecthood, which is necessarily political, moral, social, and increasingly digital, the impact of power structures is inescapable. These power relations, and the capacity to minimize the potential constraints AI imposes on agency, are starkly clear when we look at the lifestyle choices that powerful agents in Silicon Valley, who make and deploy technology, are afforded. For example, while screen-aided education is pushed towards mainstream schools, the rich are reluctant to adopt such practices[5]. Agency, the capacity to act in a given technological environment and context, varies in degree from person to person. Silicon Valley tech developers, those with power and an awareness of technology’s constraining force, are reluctant to let it infiltrate their children’s surroundings. Some go so far as to ban their nannies from using screens[6].

Agency is not an all-or-nothing phenomenon that we either do or do not have. Rather, agency is inextricably linked to our social, political, and historical contexts, which are increasingly influenced by technological forces. These forces grant people varying degrees of agency. In an increasingly AI-powered society our capacity to act is limited or expanded based on our privilege; agency is increasingly becoming a commodity that only the privileged can afford.


[1] Birhane, A. (2017). Descartes Was Wrong: ‘A Person Is a Person through Other Persons’. Aeon.

[2] Weiser, M. (1995, June). The computer for the 21st century. In Human-computer interaction (pp. 933-940). Morgan Kaufmann Publishers Inc.

[3] Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

[4] O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.

[5] Bowles, N. (2018).

[6] Bowles, N. (2018).