You're Not Behind on AI. You're Just Asking the Wrong Question
- Kyle Tyacke, Director of Technology

The best conference sessions don't give you answers. They give you a better question to carry home.
At HumanX 2026, Stefan Weitz, Founder of HumanX, opened the main stage keynote with a historical analogy about electricity and factories that reframed how I think about AI adoption entirely. It wasn't aimed specifically at developer programs or developer marketing. It was aimed at every leader in every industry navigating the current moment. But the implications for how we think about developer programs in an AI-first world are significant and worth spending some time with.
The central question Weitz posed: "What would you build if you were starting from scratch?"

What Was the Most Useful Idea From HumanX 2026?
In the 1890s, factories adopted electricity. They did the rational, sensible thing: they replaced their steam engines with electric motors and kept building factories exactly as they always had. A single large motor at the center, mechanical shafts and belts running power to every workstation. The layout, the processes, the organizational logic: all of it stayed the same. Electricity was simply a better energy source for the same design.
Productivity gains were minimal. For decades.
It was not until the 1920s that manufacturers started asking a fundamentally different question. Not "how do we electrify this factory" but "if electricity had always existed, how would we have designed this factory from the start?" The answer looked completely different. Individual motors at each workstation. Flexible layouts. Workflows that electricity made possible, rather than workflows that electricity simply powered.
That question unlocked the productivity revolution that had been theoretically available for thirty years.
"What you can control is the questions you ask." — Stefan Weitz
Weitz argues that most organizations are in their 1890s moment with AI. They are adopting the new technology rationally and responsibly, adding it to existing structures. But those structures were designed under a different set of constraints, and AI fundamentally changes those constraints, so the structures themselves deserve to be questioned.
Why Does the Electricity Story Matter Right Now?
Weitz set the analogy against a backdrop that made its urgency hard to ignore. He opened with a slide cataloging the current macro environment: US tariffs at their highest since 1909, geopolitical conflict intensifying, nobody certain what org charts will look like in 18 months, and AI investment potentially the only thing keeping some economies out of recession.
His framing for all of it: "You are moving at a speed you didn't choose, through conditions you can't see."

The point was not to be alarming. It was to explain why the electricity question is more urgent now than it might appear. The teams waiting for conditions to stabilize before rethinking their approach may be waiting longer than they expect. The productivity gap between organizations that rethink from first principles and those that keep electrifying steam engines is widening in real time.
Weitz showed a quote from a CTO at a mid-sized logistics company that captured what this realization often feels like from the inside.

AI implementations have a way of revealing organizational assumptions that existed long before AI arrived. That is uncomfortable. It is also, for teams willing to look, one of the more useful things AI can do.
What Does This Open Up for Developer Programs?
This is where I think the analogy gets genuinely exciting for developer marketing and DevRel leaders.
Most developer programs were designed around the constraints that existed when they were built: content production gated by human writing capacity, community management structured around synchronous engagement, and developer experience dependent on developers actively seeking out documentation. Those constraints shaped the programs we have. AI changes many of them fundamentally.
The opportunity is not to optimize those programs for incremental performance gains. It is to ask the 1925 question: if AI had always existed when we designed this program, what would it look like?
There are no universal answers here. The electricity question is not a framework with steps. It is a mode of thinking that has to be deliberately entered, because the pull of execution is constant and legitimate: content to produce, communities to manage, programs to run. Creating space to sit with the question is itself the first move, and the teams that make it deliberately will be the ones who gain a competitive advantage.
Conclusion
The electricity analogy is useful not because it tells you what to build, but because it tells you what kind of question to ask. Most organizations are in the 1895 moment: adopting AI responsibly, adding it to existing structures, measuring the efficiency gains. That is not wrong. It is just not the whole opportunity.
The 1925 question ("what would we build if we were starting from scratch?") is available to any developer program leader willing to step outside execution mode long enough to ask it. The teams that ask it now will have the advantage of doing so on their own terms.
Curious what that conversation looks like in practice? Connect with the Catchy team below. This is the kind of structural thinking our strategy practice is built around.
Frequently Asked Questions
What is the "electrifying the steam engine" problem in AI adoption?
It describes the most common pattern of AI adoption: adding AI tools to existing structures without questioning whether those structures are still the right ones. Just as early factories replaced steam engines with electric motors but kept the same layouts, many organizations are using AI to accelerate existing processes rather than rethinking what those processes should look like when AI is the starting assumption.
How does the electricity analogy apply to developer programs specifically?
Developer programs built before AI existed were shaped by the constraints of that era: human writing capacity, synchronous community management, documentation that developers had to actively find. AI changes those constraints fundamentally. The analogy suggests the more productive question is not how to add AI to the existing program, but what the program would look like if it had been designed from scratch with AI available from day one.
What is the first step toward asking the right question?
Creating space for it. Most developer program teams operate in execution mode, and the 1925 question requires stepping outside of that. It helps to have an outside perspective, someone who is not embedded in the existing structure and its assumptions. The goal at first is not a new plan. It is clarity on which assumptions are worth revisiting.
Why is this question more urgent now than it was a year ago?
The pace of AI adoption has accelerated significantly. The agentic AI market reached $8.5 billion in 18 months from zero, and 64% of enterprises are actively deploying agents rather than just piloting them. The productivity gap between teams that rethink from first principles and those that keep optimizing existing structures is widening faster than most forecasts anticipated.