Five Objections to AI I Keep Hearing (And What I Actually Think)
The three concerns I think are overstated, and the two I think are real
I had a conversation with two friends over the weekend. One works in journalism, the other in design. Both smart, thoughtful people. Neither is anti-technology. But when AI came up, the objections poured out. Electricity. Creativity. Thinking. Centralized power. War.
I’ve heard versions of these objections many times now. From friends, from other founders, from strangers on the internet. Some of them I think are overstated. A couple of them keep me up at night.
Here’s where I land on each one.
1. Electricity and the Environment
This is the most common objection I hear, and the one I think is most overstated relative to the actual data.
Yes, data centers use a lot of electricity. And yes, that number is growing fast. But “a lot” needs context. Data centers currently account for about 1-1.5% of global electricity consumption, projected to reach roughly 3% by 2030. Transport, by comparison, accounts for about 30% of global energy demand, and industry consumes roughly another 40% of global electricity. The sectors that are genuinely hard to decarbonize dwarf AI’s energy footprint by more than an order of magnitude.
That doesn’t mean the concern is irrelevant. Energy use is growing, and the trajectory matters. But I think the medium-term calculus is fairly clear: AI is already being deployed to optimize energy grids, accelerate materials science for better batteries, improve agricultural efficiency, and model climate systems with higher fidelity than we’ve ever had. These aren’t theoretical applications. They’re happening now. The IEA estimates that widespread adoption of AI in end-use sectors could deliver 1,400 megatons of CO2 reductions by 2035, three to four times projected data center emissions over the same period.
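If you want to sanity-check that arithmetic, it fits in a few lines. Here’s a minimal sketch using the rounded estimates cited above (ballpark figures, not precise measurements):

```python
# Back-of-envelope check on the energy figures cited in this section.
# All inputs are the rounded estimates quoted above, not precise data.

data_center_share = 0.015   # data centers: ~1.5% of global electricity (upper bound)
transport_share = 0.30      # transport: ~30% of global energy demand

# How much larger is transport's footprint than data centers'?
print(f"Transport vs. data centers: ~{transport_share / data_center_share:.0f}x")
# -> ~20x, i.e. more than an order of magnitude

# The IEA's 1,400 Mt CO2 savings figure is "3-4x projected data center
# emissions", which implies data center emissions of roughly:
avoided_mt = 1_400
print(f"Implied data center emissions: ~{avoided_mt / 4:.0f}-{avoided_mt / 3:.0f} Mt CO2")
# -> roughly 350-467 Mt CO2 over the same period
```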
The net energy impact of AI over a 10-year horizon is almost certainly positive: the energy AI helps save will very likely exceed the energy it consumes. The World Economic Forum published a framework for “net-positive AI energy” specifically arguing this point, with data across 130+ real-world use cases. The short-term costs are real but manageable, and they’re the kind of problem that engineers and policymakers know how to solve. We’ve done this before. Every major computing revolution came with an energy conversation. The internet was supposed to consume unsustainable amounts of power. It did consume more power. And then efficiency gains, renewable energy deployment, and better hardware brought the cost curve down.
I’d bet the same pattern plays out here.
2. Creativity Is Dead
This one comes up a lot in my conversations with designers and writers. The fear is that AI-generated content is going to flood every channel with mediocre remixes until nothing feels original anymore.
I get the concern. But I think it misreads history.
Every new creative medium has triggered this exact reaction. Photography was supposed to kill painting. Film was supposed to kill theater. Synthesizers were supposed to kill music. Auto-tune was supposed to kill singing. Photoshop was supposed to kill photography. In every single case, the medium got absorbed, the best creators adapted, and the bar for what counted as compelling work shifted.
That’s already happening with AI. I’m seeing creative people lean in and produce work that wouldn’t have been possible six months ago. New visual styles. New ways of combining mediums. Entirely new creative workflows where the human is directing, curating, and shaping output in ways that feel genuinely novel.
As for the concern that social media is becoming “fake,” I’d push back gently: most of social media was already heavily processed before AI entered the picture. Photoshopped images, filtered videos, ghost-written captions. We already lived in a world of manufactured content. AI gives us more tools, but the pull toward authenticity and real storytelling was already underway. The creators who connect with audiences have always been the ones who feel real. That hasn’t changed.
The real shift isn’t that creativity is dying. It’s that the barrier to producing decent creative output is dropping to near zero. Which means the thing that differentiates great work from mediocre work is taste, perspective, and editorial judgment. Those are deeply human qualities. If anything, they become more valuable, not less.
3. Deep Thinking and Rote Work
A friend asked me: “If AI does all the work, what happens to our ability to think?”
It’s a fair question. There’s real cognitive science behind the idea that struggling through difficult work builds mental capacity. If you never do the hard thing yourself, you might lose the ability to do it at all.
But I think this frames it wrong.
Steve Jobs called the computer “a bicycle for the mind,” an analogy inspired by a Scientific American study on the locomotion efficiency of different species. Humans ranked unimpressively on their own, but a human on a bicycle was more efficient than any animal on the planet. The computer removed some of the manual drudgery of calculation and organization, and it freed people to think about harder problems. The calculator did the same thing a generation earlier. Nobody argues that calculators made us worse at math in the ways that matter. They made arithmetic less central and freed us to focus on higher-order mathematical thinking.
AI is a motorcycle for the mind. It’s faster, it’s more powerful, and it covers more ground. And the question it raises is the same one every previous tool raised: where are you going?
If you use AI to avoid thinking entirely, that’s a problem. If you use it as a thought partner, a sparring partner that challenges your assumptions and helps you iterate faster on ideas, it sharpens your thinking. I experience this every day. My best thinking happens in conversation with AI, not because it thinks for me, but because it forces me to articulate what I actually believe and then stress-tests it.
The real risk isn’t that AI replaces thinking. It’s that people choose not to think. But that’s a human discipline problem, not a technology problem. The people who ground themselves in clear principles, who know what questions they’re trying to answer, and who use AI to go deeper rather than to avoid depth will be dramatically more capable than they were before.
I’ll acknowledge that the pace of change makes people nervous. That’s legitimate. We’re adapting to a new tool faster than we’ve adapted to any tool before. But the adaptation pattern is the same: figure out what to delegate, figure out what to own, and lean into the parts that require your judgment.
4. Centralization of Power
This is where my concerns start to get real.
The economics of AI are concentrating an extraordinary amount of capability in a very small number of companies. Training frontier models costs hundreds of millions of dollars, with costs growing at roughly 2.4x per year since 2016 and projected to exceed $1 billion by 2027. The compute infrastructure requires billions more. The talent pool is small and getting smaller relative to demand. The result is that a handful of companies, mostly in one country, are building the systems that will increasingly mediate how knowledge work, creative work, and even governance get done.
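To make that cost trajectory concrete, here’s a rough compounding sketch. The 2.4x annual growth rate is Epoch AI’s estimate; the ~$100M baseline for a 2024 frontier run is an illustrative assumption of mine, in the ballpark of published estimates:

```python
# Rough projection of frontier-model training costs under Epoch AI's
# ~2.4x/year growth estimate. The 2024 baseline is an illustrative
# assumption, not a reported figure.

baseline_2024 = 100e6   # hypothetical ~$100M frontier training run in 2024
growth = 2.4            # Epoch AI: costs growing ~2.4x per year

for year in range(2024, 2028):
    cost = baseline_2024 * growth ** (year - 2024)
    print(f"{year}: ~${cost / 1e9:.2f}B")
# 2027: ~$1.38B -- consistent with "exceed $1 billion by 2027"
```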
That’s concerning on its own. Mix in geopolitics, and it gets more concerning.
McKinsey estimates that generative AI could add $2.6 to $4.4 trillion annually to the global economy, with roughly 75% of that value concentrating in just four functional areas. The IMF projects that about 40% of jobs globally, and 60% in advanced economies, will be affected by AI. That value is going to flow disproportionately to the companies that own the models and the infrastructure. If you’re a small country without sovereign AI capability, you’re essentially renting your cognitive infrastructure from a foreign corporation. The leverage dynamics there are uncomfortable.
This is where I see real hope in open source. Open-weight models like Llama, Mistral, DeepSeek, and others are creating an alternative path where capability isn’t locked behind a single provider. Countries and organizations can run models on their own infrastructure, fine-tune them for their own needs, and maintain some degree of sovereignty over their AI stack.
But open source alone isn’t enough. This requires active policy work. Data sovereignty frameworks. Investment in domestic compute infrastructure. Serious thinking about how the economic gains from AI get distributed rather than concentrated. These aren’t problems that the market will solve on its own. The default trajectory is more concentration, not less.
I’ve written about this in the context of Canada’s AI strategy, and I think every country needs to be having this conversation. The window for shaping how power gets distributed is now. Once the infrastructure is built and the dependencies are locked in, it becomes much harder to unwind.
5. AI in War
This is the one that genuinely scares me.
We already saw what drones did to the calculus of armed conflict. Research published by the University of Michigan Press documents how combat drones reduce the political costs of military action by eliminating casualties on the attacker’s side and increasing perceived precision. Politicians who would have faced massive domestic opposition for deploying troops can deploy drones with minimal public scrutiny. As one study put it: “to the degree that substituting machines for humans lowers the costs for fighting, conflict will become more frequent, but less definitive.”
Now extend that logic to autonomous systems. If the entire apparatus of conflict can be largely automated, what does the threshold for going to war look like? If there’s no body bag count, no grieving families on the evening news, no political cost at all, what restraint remains?
We’ve already seen early versions of this play out. Israel’s Lavender and Gospel AI systems have been used to generate thousands of targeting recommendations in Gaza, with human reviewers reportedly spending as little as 20 seconds per target before approving strikes. The Pentagon’s Replicator program is developing AI-powered weapon swarms designed for highly autonomous operations. The line between “AI-assisted” and “AI-directed” military action is getting blurry, and the pace of deployment is outrunning the pace of governance.
I don’t have a clean answer here. I think this is genuinely one of the hardest problems of our generation. The technology exists. Multiple nations are developing it. And the incentive structures push toward deployment, not restraint.
What I’ll say is this: the conversation about AI in warfare needs to be happening with the same urgency as the conversation about nuclear proliferation. In December 2024, 166 states in the UN General Assembly voted to adopt a resolution on lethal autonomous weapons systems. That’s a start. But resolutions without enforcement mechanisms are paper shields. The potential for AI to lower the barrier to conflict, to enable asymmetric warfare at unprecedented scale, and to remove the human judgment that has historically served as a check on escalation is real and immediate. This isn’t a hypothetical risk for 2035. It’s a 2026 problem.
Where I Net Out
Three of these objections (electricity, creativity, and thinking) are, I believe, transition costs. Real, but manageable. The pattern of every previous technological revolution suggests we’ll adapt, and the benefits will substantially outweigh the disruption.
Two of them (centralized power and AI in warfare) are, I believe, genuinely existential. Not because the technology itself is inherently dangerous, but because the human systems for governing it are not keeping pace.
I’m optimistic about AI. I’m building with it every day. But optimism without honesty about the risks is just salesmanship. The people dismissing every concern are as wrong as the people who think we should stop building entirely.
The honest position is somewhere in the middle: lean in, build thoughtfully, and fight like hell on the governance questions that actually matter.
References
IEA - Data Centres and Data Transmission Networks - Data centers account for ~1-1.5% of global electricity, projected to reach ~3% by 2030.
IEA - Energy Efficiency 2025: Transport - Transport accounts for ~30% of global energy demand.
IEA - AI and Climate Change - AI could deliver 1,400 Mt CO2 reductions by 2035, 3-4x projected data center emissions.
World Economic Forum - Net-Positive AI Energy by 2030 - Framework arguing AI-enabled savings can exceed AI’s energy consumption.
The Marginalian - Steve Jobs: Bicycle for the Mind (1990) - Original PBS/NOVA interview where Jobs explains the analogy.
Epoch AI - How Much Does It Cost to Train Frontier AI Models? - Training costs growing 2.4x/year, projected to exceed $1B by 2027.
McKinsey - The Economic Potential of Generative AI - $2.6-4.4T annual economic potential, 75% concentrated in four areas.
IMF - AI Will Transform the Global Economy - 40% of global jobs affected, 60% in advanced economies.
Walsh & Schulzke - Drones and Support for the Use of Force - Research on how drones reduce political costs of military action.
Gartzke - Blood and Robots: Remotely Piloted Vehicles and the Politics of Violence - “Conflict will become more frequent, but less definitive.”
+972 Magazine - Lavender: The AI Machine Directing Israel’s Bombing Spree in Gaza - Investigation into Israel’s AI targeting systems.
The Guardian - Israel Used AI to Identify 37,000 Hamas Targets - Human reviewers spent ~20 seconds per AI-generated target.
Bajak et al. - AI-Powered Autonomous Weapons Risk Geopolitical Instability - Pentagon’s Replicator program and autonomous weapon swarms.
Stop Killer Robots / UN GA - 166 States Vote to Adopt LAWS Resolution - December 2024 UN vote on lethal autonomous weapons regulation.