A Human Among AI Agents

Our world is changing before our eyes, and it is changing faster than ever. AI systems are now integrated into our homes, workplaces, and even our personal lives. They help doctors analyze X-rays and help us optimize our portfolios to avoid financial losses. Yet they also decide who gets a loan, influence judges' opinions, and even compete with artists, filmmakers, and authors in their creative work.

What concerns me, though, is the rapid pace of this advancement. It leaves little room to fully understand these technologies or assess their consequences. On top of that, there is significant hype surrounding them: everyone wants to talk about them, adding noise to the signal. This, in turn, spreads misleading information and makes it hard to get to the crux of the matter, especially at first.


In the middle of this rapid transformation, I often find myself wondering:

“What is my role as a human in a world increasingly run by machines?”

This post is my attempt to explore that question, and I'll answer it in four parts:

Being responsible
Seeking deep understanding
Challenging and exploiting AI
Doing what's meaningful for you


Being responsible

The most obvious role for humans in an AI-dominated world is responsibility. No matter how advanced AI technologies become, humans will always be accountable for the actions of these technologies and their consequences. Even with AGI (Artificial General Intelligence)—a level of AI that may surpass human cognitive abilities—the ultimate responsibility will rest with humans.

This accountability exists at both macro and micro levels.

On a macro level, companies that train and develop AI systems bear ethical and technical responsibility for their products. Companies like OpenAI, Google, Amazon, and others must justify their decisions regarding data collection, model training, and deployment practices.

For example, controversies like the biased outcomes of COMPAS, a criminal risk assessment tool, demonstrate what happens when accountability lapses. A lack of transparency and oversight led to unfair outcomes for marginalized communities, underscoring the need for companies to implement rigorous checks and balances.

The stakes become even higher when AI is applied in critical contexts like warfare. Recent developments in autonomous weapons systems, where AI is entrusted with making life-and-death decisions—such as identifying and targeting individuals without any human oversight—are deeply concerning. This scenario is not only dystopian; it raises profound ethical and humanitarian questions. Allowing AI to operate entirely without human intervention in such contexts is a chilling reminder of why human control and accountability must be maintained in the deployment of advanced technologies. For more on this topic, check out Hannah Fry's book, Hello World.

On a micro level, end users will always be held accountable for the results generated by AI. For example, a structural engineer using AI to design a bridge cannot simply blame the AI for a flawed design or a collapsing structure.

Consider the Boeing 737 MAX disaster: although systems like the MCAS software played a critical role in the crashes, human oversight, regulatory failures, and insufficient testing were ultimately to blame.

I know there are some rising voices nowadays trying to convince us that everything will eventually be run by AI, making humans redundant. However, I honestly believe this idea is akin to the notion of a perpetual motion machine—a dream that defies reality. Even if companies rely heavily on AI in the future, humans will always be required to interpret, validate, and bear the responsibility for these systems’ decisions.

[Image: a perpetual motion machine]

This responsibility cannot be arbitrary. It must lie with professionals who fully understand both the problem domain and how the AI functions. For instance, in software engineering, a developer remains accountable for the code, regardless of whether it was written manually or generated by AI.


Seeking Deep Understanding

I believe humans constantly seek a deeper understanding of the things surrounding them, and AI is no exception. This applies both to the professional side of things (e.g., “how to build a bridge” or “how black holes work”) and the technical side (e.g., “how LLMs work”).


That deep understanding allows us to assess an AI system's outcomes and determine whether the results are good, bad, or in need of improvement. It also makes interacting with AI tools more efficient and helps us craft precise prompts that achieve the best possible results.

A clear understanding of AI demystifies the technology, reducing fear while increasing caution. Consider the public's reaction to tools like ChatGPT or DALL·E: they were initially met with awe, but deeper exploration revealed both their strengths (like generating coherent responses or creating artistic outputs) and their flaws (like hallucinations and biased outputs).


Challenging and Exploiting AI

One of the core tasks we should focus on as humans among AI agents is to constantly test the limits of AI systems, challenge them, and exploit their capabilities to innovate responsibly.

I am sure that no matter how responsible AI creators are, bad actors will always exist and their misuse of AI can have devastating consequences. Deepfake technology, for instance, has been weaponized for misinformation campaigns and even cybercrime. Our responsibility is to critically assess AI applications and raise the alarm when something doesn’t align with ethical principles.

Moreover, all systems—including AI—are subject to manipulation. This requires us to remain vigilant and set clear, firm boundaries that prevent AI from embedding biases, violating privacy, or infringing copyright.

At the same time, AI offers huge potential. Digital artists, for example, are using AI tools to push creative boundaries, combining traditional techniques with AI-generated elements to produce stunning works of art.


Do what’s meaningful for you

As more and more tasks become automated and AI-dominated, we gain back precious time, time that was once spent on things we dislike. This newfound time allows us to reflect, get creative, and embrace our humanity.

Even if AI can perform the same tasks we do, the process of doing meaningful work carries inherent value. A handcrafted product or a carefully written line of code often holds more meaning than a machine-generated equivalent. This is why handmade goods, organic produce, and sustainable practices remain in demand despite mass production and technological efficiency.

AI challenges us to refine, expand, or even redefine our skillsets. When we rise to this challenge, the fulfilment that comes from overcoming obstacles and achieving success is unmatched.


The bottom line

AI is transformative – no doubt about that! Yet it cannot replace our humanity. Let's focus on what makes us human. By holding ourselves accountable, striving for deep understanding, and challenging the systems around us, we can ensure that AI serves us rather than overwhelms us.

So, sit down, enjoy your herbal tea, and reflect on what AI can never replace in us. Never give away your creativity—it’s your most valuable asset in a world increasingly run by machines.
