The Efficiency Trap: What We Lose When AI Optimizes Everything

Standing in the tension between AI-powered efficiency and retained humanity

I've spent 28 years in the trenches of tech—engineer, engineering manager, program manager, product leader, entrepreneur, consultant. I've lived through the booms, the busts, and the disruptions. But lately, I'm finding it harder to stay the "tech-optimist" that has been part of my identity all these years.

Through Mosaic Mesh AI, I help people use this technology. I'm an "Applied AI" guy. But if I'm honest, I'm carrying tension that doesn't make it into my marketing copy.

Every day, I feel the squeeze from two extremes. On one side, there's a visceral, almost grieving rejection of everything AI, embodied by those who see the death of human dignity in every automated paragraph or AI-generated image. On the other side, there's a cold, hyper-efficient push that mocks any hesitation as weakness—the voice that says stop being soft and just help the winners win.

It's exhausting. And it's personal.

I have two teenage daughters. They're looking at a future horizon that can easily feel more like a threat than a promise. The world they're inheriting doesn't feel like the one I came of age in—the post-Cold War moment of relative clarity and certainty. I don't have a data-driven argument to tell them everything will be fine.

Recently, I was talking with another tech leader who has college-age kids. We discussed: with AI coding agents writing entire applications now, how will younger people learn the hard lessons of architecture, design, and deep thinking that we cut our teeth on? How will our kids find their place in a world optimizing away the very friction that taught us how to think?

We're told that AI is "great" because it removes friction. But friction is where life and growth and understanding happen.

Think about a middle-school basketball game. It's messy. The players make basic mistakes. The calls can be bad. It's wildly inefficient compared to a pro-level, carefully managed basketball entertainment product. But we care about the middle-school game because the stakes are human. These kids are our family. We care because they're figuring out who they are through trial and error, and we get to see their growth happen in near-real-time. I coached my daughters' basketball teams from ages 8 through 15. This season is the first time I'm just a parent in the stands. And I notice the difference between watching their games and watching pro-level games. The professional games can quickly start to feel fake and overly managed. The middle-school game is raw and messy and real. And I find that I really like raw/messy/real.

A second metaphor: I was a sailor as a kid. I learned valuable lessons from having to be in tune with the wind, waves, water, weather, and "feel" of the boat. I've also driven a lot of power boats in all sorts of "crazy" conditions. In my work, I see the "power boats"—the high-speed, engine-driven AI tools that get you to the destination directly and quickly. They're impressive. They feel almost inevitable. But as a sailor, I also know that when I turn on the engine, I stop feeling the water, wind, and waves in the same way (unless I'm driving a power boat in extreme weather). I stop needing to be fully present.

The "Applied AI" work I do isn't just about cheering for the machines. It's also about finding ways to preserve the "sailing" and the "middle-school basketball games" in a world that often seems to want only power boats and polished pro games. I want to find ways to keep human stakes in systems designed to optimize them away.

When a company tells me their AI integration failed, I don't just want to fix the data pipeline or API calls. I want to ask: What were you actually trying to accomplish? What matters about how you do this work? Where does human judgment need to stay in the loop? What will success do to your employees and customers and business in the long run?

When I teach people to use AI tools, my goal isn't simply to deliver efficiency. I want to teach them to understand what they're actually trying to do, what matters about how they do it, and where the machines can help without replacing the messy, human parts that make it meaningful.

I'm working from a hypothesis that AI adoption struggles are rarely pure technology problems. They're often identity problems. When organizations and people haven't figured out what role technology should play in their work—what belongs to humans, and what could or should be automated—they struggle. The ones that succeed seem to understand their own tradecraft well enough to know where the friction is essential and where it's waste.

I can't promise my daughters a world unchanged by AI. That ship has sailed—or, depending on the metaphor, the engine is already running. But I can teach them (and others) how to keep human stakes at the center as we navigate AI-powered change. How to use these tools without letting them use us. How to stay present even when the engines are running.

Every day can bring up complex emotions in me as I build a business in the middle of this tension. But right now, this is the most honest place for me to stand. In the middle, admitting that I'm worried, admitting that I don't have all the answers, and refusing to pretend that "efficiency" is always the same thing as "progress."

The "tension" isn't a bug. It's the proof I'm still here. And it's worth the effort.
