Should AI Be Regulated Like Utilities? Exploring the Future of AI Governance
19 May 2025
Let's be real: AI powers your smartphone, curates your newsfeed, filters your emails, and increasingly drives decisions in finance, healthcare, policing, and beyond. It's the nervous system of modern society, invisible but indispensable.
So here’s the question we’re now forced to confront:
Should artificial intelligence be regulated like a public utility?
Some say yes. Like water or electricity, AI is becoming so foundational that its development and use must be tightly controlled. Others argue that regulation would smother innovation just as we’re approaching breakthroughs in medicine, education, and climate modeling.
Both camps are right and both are wrong. The truth, as always, is more nuanced.
AI Is Powerful, But Not All Intelligence Is Equal
AI is everywhere. It’s shaping how we work, live, and decide. But here’s what we’re not talking about enough: not all AI is built the same.
One system helps doctors detect cancer.
Another writes poems.
Another decides who qualifies for a home loan.
These AIs don't just do different jobs; they carry very different risks. Some can save lives. Others can cause real harm, discriminate, or reinforce bias.
Lumping them all into one category is dangerous. It’s like using the same rulebook for a kitchen appliance and a missile system.
And yet... we need rules. Badly. And right now, we barely have any.
That needs to change. Now.
Unregulated AI isn't just risky; it's reckless. And the problems are piling up faster than most legislators can keep pace with them.
Bias and Discrimination
AI learns from its training data. If that data contains bias, so will the AI.
We have already seen real-world harm:
- Hiring algorithms that favor men over women
- Facial recognition systems that misidentify people of color
- Health tools that deprioritize patients based on flawed assumptions
Deepfakes and Misinformation
We now have the technology to produce AI-generated fake audio and video that look and sound real.
Fake political speeches? Fabricated news reports?
It is becoming increasingly difficult to tell what's true, and that is a real problem.
Job Displacement
AI-generated content is already reshaping the work of coders, writers, and designers.
Some jobs are evolving. Others are vanishing.
If we don't act quickly, we might experience severe economic consequences.
Autonomous Weapons
AI is going to war.
Drones that decide to kill or spare on their own?
That's not a movie script; it's real-world experimentation.
Without regulation, this could result in worldwide chaos.
Why Regulating AI Like a Utility Sounds Good
Supporters of utility-style regulation argue that AI has crossed the threshold of critical infrastructure. Like electricity, the internet, or clean water, AI now underpins essential systems.
What does utility-style regulation offer?
- Accountability: Forces developers to meet standards for safety, transparency, and fairness.
- Accessibility: Prevents monopolies from locking AI behind paywalls or proprietary platforms.
- Oversight: Provides ethical and legal checks on powerful, potentially destructive technology.
The European Union is leading the charge. With its AI Act, the EU aims to classify AI systems based on risk, restrict harmful applications, and impose transparency requirements. It’s bold. And it’s needed.
But...
Why Overregulation Could Break AI’s Momentum
Here’s the counterpoint: Regulate AI too early, too broadly, or too clumsily and you risk stifling progress.
Let’s remember:
- AI is accelerating breakthroughs in drug discovery.
- It’s helping predict natural disasters faster and more accurately.
- It’s revolutionizing personalized education and accessibility.
Blanket regulations could slow this progress by:
- Creating compliance costs only big tech can afford.
- Discouraging startups from taking risks.
- Locking countries out of AI development due to bureaucratic red tape.
There’s also the danger of regulating the wrong things. Imagine forcing Tesla to follow the same AI standards as a dating app. That’s not smart governance; it's tech illiteracy masquerading as oversight.
The answer isn’t a binary yes-or-no. It’s nuanced.
What we need is risk-based regulation, similar to how the medical field handles drug approval:
- Low-risk AI (e.g., recommendation engines) → Light oversight
- Medium-risk AI (e.g., loan scoring models) → Audits and transparency
- High-risk AI (e.g., surveillance, weapons, healthcare diagnostics) → Strict regulation and human-in-the-loop mandates
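The tiered scheme above can be sketched as a simple lookup. This is a hypothetical illustration only: the tier names, example systems, and oversight descriptions below are assumptions drawn from this article's list, not the EU AI Act's actual legal categories.

```python
# Hypothetical sketch of risk-based AI oversight as a lookup table.
# Tiers and examples mirror the article's list, not any real statute.

RISK_TIERS = {
    "low": {
        "examples": ["recommendation engine", "spam filter"],
        "oversight": "light oversight",
    },
    "medium": {
        "examples": ["loan scoring model", "resume screener"],
        "oversight": "audits and transparency reports",
    },
    "high": {
        "examples": ["surveillance", "autonomous weapons", "healthcare diagnostics"],
        "oversight": "strict regulation with a human in the loop",
    },
}

def oversight_for(system: str) -> str:
    """Return the oversight level for a known example system."""
    for tier, info in RISK_TIERS.items():
        if system in info["examples"]:
            return f"{tier} risk: {info['oversight']}"
    return "unclassified: requires case-by-case review"

print(oversight_for("loan scoring model"))
# → medium risk: audits and transparency reports
```

The point of the table structure: regulation keys off the *application*, not the underlying model, which is exactly why one rulebook for all AI fails.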
This is broadly the approach the EU's AI Act takes, and while imperfect, it's a start.
AI isn’t a monolith. It's a spectrum. And regulation must match that reality.
Who Should Be Responsible for AI Oversight?
Here’s where things get messy: Who makes the rules? Governments? Big tech? Academics?
The answer must be: all of the above.
- Governments provide legal authority and protection of the public interest.
- Companies have the resources and engineering talent to shape real-world solutions.
- Academia and civil society bring in ethical, social, and philosophical perspectives often ignored in boardrooms.
We need public-private partnerships and multi-stakeholder councils that evolve alongside the technology. If that sounds ambitious, that’s because it is. But anything less risks either chaos or stagnation.
Move fast, but don’t break the world.
Yes, I believe in moving fast and breaking things when you're disrupting entrenched systems. But when you're building the infrastructure of intelligence itself? You'd better know where the brakes are.
We need AI to scale. But we also need to bake in accountability from day one. That means:
- Open-source transparency for high-impact models.
- Bias auditing as standard practice.
- Kill switches and explainability protocols for safety.
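To make "bias auditing as standard practice" concrete, here is one of the simplest possible checks: a demographic parity gap, which compares the rate of favorable outcomes (say, loan approvals) across two groups. This is a minimal sketch with made-up decision data; real audits use multiple fairness metrics and real outcomes.

```python
# Minimal bias-audit sketch: demographic parity difference.
# All decision data below is invented for illustration.

def approval_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical model decisions: 1 = approved, 0 = denied.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75.0% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

gap = parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # → 0.375

# A common (and debated) rule of thumb flags large gaps for review.
if gap > 0.2:
    print("Flag for review: approval rates differ substantially across groups.")
```

A single metric like this never proves fairness, but routinely running such checks is the kind of low-cost accountability that could be mandated without crushing startups.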
In short, we need regulatory agility: rules that evolve with the tech rather than lag five years behind it.
Final Thoughts
AI is too powerful to be left to itself and too varied to be regulated like a water company.
So, should we regulate AI exactly like a utility? Not quite.
But should we develop bespoke regulatory regimes that protect humanity while letting the technology thrive?
Absolutely.
The future of AI will not be determined in a vacuum. It will be determined by the decisions we make today about ethics, responsibility, and courage.
And in that future, the wisest path isn't full speed or full stop. It's creating a smarter road.
WRITTEN BY

Dezeal Khedia
Dezeal is a marketing mastermind who knows how to make brands stand out in a crowded world. From developing strategies to executing campaigns, Dezeal helps businesses get noticed and grow.