Unchecked Power: The Ethical Crisis Posed by Agentic AI
- Ryan Yin
- Jun 13
- 3 min read
Updated: Aug 9
In the pharmaceutical industry, developing a new drug is a lengthy and tightly regulated process. A medication undergoes years of rigorous testing, clinical trials, and regulatory approvals before it can reach patients. Risks including side effects, long-term impacts, and ethical concerns are all considered at great length. Despite this, scandals such as Thalidomide have still occurred.
AI development, by contrast, is driven by market forces, with companies racing to release revolutionary models. Ethics teams within tech companies often lack real power, serving as advisory groups rather than decision-makers. Those who push back against this rapid development are often driven out or dismissed, as was Dr. Timnit Gebru, a leading artificial intelligence ethics researcher, who was fired in late 2020 after sending an internal email that accused Google of “silencing marginalised voices.” Meanwhile, government regulation lags far behind innovation, leaving AI development governed by a corporate drive for profit rather than public safety.
Agentic AI refers to artificial intelligence capable of perceiving, reasoning, and acting without direct human oversight. These systems can learn from their environment, set goals, and make decisions. Their uses extend from personal assistants that book flights and organise meetings to autonomous military drones capable of lethal action.
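To make the definition above concrete, the perceive-reason-act cycle can be sketched as a simple loop. This is a toy illustration only: the “environment”, the goal, and the trivial decision rule are all stand-ins invented for this sketch, not the behaviour of any real agent framework.

```python
# A minimal, hypothetical perceive-reason-act loop. The environment is a
# toy dictionary and the "reasoning" is a one-line rule; real agentic
# systems replace these with learned models and planners.

def run_agent(environment, goal, max_steps=10):
    """Repeatedly observe the environment, choose an action that moves
    toward the goal, and apply it, with no human in the loop."""
    state = environment["state"]   # perceive the current state
    history = []
    for _ in range(max_steps):
        if state == goal:          # goal reached: stop acting
            break
        action = 1 if state < goal else -1   # reason: pick a direction
        state += action                      # act on the environment
        history.append(action)
    environment["state"] = state
    return history

env = {"state": 0}
steps = run_agent(env, goal=3)   # the agent acts until state == 3
```

The ethical point of the article follows directly from this shape: once the loop is running, every decision inside it is taken without a human checkpoint.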
The immediately obvious issue concerns responsibility and accountability. If an agentic AI makes a harmful decision, like misusing sensitive information such as bank details, who is responsible: the developer, the user, or the AI itself? If the AI is responsible, how do we punish it, given that it is incapable of feeling pain, lacks a conscience, and has no relationships or responsibilities? This lack of accountability could create a legal vacuum, allowing corporations or governments to evade responsibility, so that when a scandal does occur, those behind it walk free.
AI is constantly accused of bias, but it is not until agentic AI that systems can act on these biases autonomously, exacerbating the issue. AI models are trained on historical data, which often contains embedded human prejudices. Moreover, only 22% of people working in AI are women, according to a WEF survey, and fewer than 25% identify as minorities, according to McKinsey. This underrepresentation suggests a lack of voices fighting against these biases. When AI systems make decisions, whether in hiring, lending, policing, or healthcare, they risk amplifying discrimination. This is not merely theoretical: hiring algorithms have discriminated against women and minorities, reflecting the biases present in historical hiring data, while predictive policing AI has disproportionately targeted minority communities, leading to over-policing and wrongful arrests. If AI systems do work independently of human intervention, as intended with agentic AI, these biases may become self-reinforcing, creating a world where discriminatory decisions are made at scale, with no obvious way to challenge them.
As AI grows more autonomous, it also becomes less predictable. Currently, AI models run within predefined limits. The development of more sophisticated, self-governed AI raises alarm bells over whether humans will be able to control them. This gives rise to hugely challenging ethical concerns. Could an AI refuse human intervention if it determined that human commands contradict its own objectives? In areas like finance, healthcare, and national security, will AI begin to make decisions beyond human comprehension? As AI becomes more capable of independent reasoning, there is a risk that it could reach a point where humans cannot easily intervene or correct mistakes. Such a scenario would have catastrophic consequences.
Agentic AI’s unique ability to complete entire tasks without human oversight is so useful that it may soon be capable of replacing white-collar jobs such as accounting. Take, for example, two AI agents: Agent A (the client) aims to minimise audit costs, ensure prompt delivery, and follow regulatory standards, while Agent B (the accounting firm) seeks to maximise its service fees and maintain high-quality standards. The negotiation proceeds through exchanges until each agent predicts the other’s preferences and adjusts its offers accordingly. The agents converge on a mutually acceptable agreement and finalise the contract. A procedure that would usually take weeks could thus be completed within a minute. This poses huge ethical challenges: mass unemployment, and the risk of an upheaval like the Industrial Revolution, but with white-collar rather than blue-collar workers replaced by machines. Like the Industrial Revolution, this could widen income inequality, as only senior management would retain skilled work. Even if a global benefits system were created, psychological and social concerns would persist, as employment is often a form of social engagement and personal fulfilment.
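The negotiation described above can be sketched as an alternating-concession loop. Everything here is an assumption made for illustration: the starting figures, the fixed concession steps, and the midpoint settlement rule are invented, and real negotiating agents would model each other’s preferences far more richly.

```python
# A minimal, hypothetical sketch of the client/firm fee negotiation.
# Starting offers, step sizes, and the settlement rule are illustrative
# assumptions, not any real product's behaviour.

def negotiate(client_offer, firm_ask, client_step, firm_step, max_rounds=100):
    """Run a simple alternating-concession negotiation.

    Each round, the client raises its offer and the firm lowers its ask,
    until the offer meets or exceeds the ask (a deal) or rounds run out.
    Returns (agreed_fee, rounds_used), with agreed_fee None if no deal.
    """
    for round_no in range(1, max_rounds + 1):
        if client_offer >= firm_ask:
            # Offers have crossed: settle at the midpoint.
            return (client_offer + firm_ask) / 2, round_no
        client_offer += client_step  # client concedes upward
        firm_ask -= firm_step        # firm concedes downward
    return None, max_rounds

# Example run with made-up figures: the agents meet at 10,000 in 5 rounds.
fee, rounds = negotiate(client_offer=8_000, firm_ask=12_000,
                        client_step=500, firm_step=500)
```

Even this toy version shows why the timescale collapses from weeks to under a minute: each “round” is a function call, not an email exchange.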
As a result, governments across the world now face a critical challenge: how to regulate Agentic AI to minimise harm without falling behind other countries in the race for technological and economic dominance.