We're living through a fundamental inversion in product development. For decades, the constraint was execution—how quickly could we build, how many ideas could we test, how fast could we iterate? Today, AI has commoditized much of that execution. And in doing so, it's revealed a more challenging constraint: judgment.
I co-founded Sightglass at a moment when low-code prototyping felt revolutionary. The ability to skip conceptual mockups and jump straight into working software was exhilarating. Today, we’re experiencing something far more profound. We can generate concepts, run market tests, and refactor entire features in the time it used to take to schedule a design review.
The question is no longer “can we build this?” It’s “should we?”
And that question is harder to answer than most teams realize.
The Abundance Problem
The irony of AI-accelerated development is that the more you can test, the less you should test.
Think about traditional A/B testing. You’d carefully design two variants, maybe three if you had the resources. You’d run them with statistical rigor, analyze the results, and make a decision. The scarcity of options forced discipline.
Now? You can test the whole alphabet. Twenty-six variants, each generated in minutes, each plausible enough to warrant consideration. And some teams are doing exactly that—overwhelming themselves with options, exhausting their beta participants, and drowning in marginal differences.
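The statistics back this up. Even before anyone gets tired, running many variants at a conventional per-test significance level almost guarantees a spurious "winner." A minimal sketch of the family-wise error rate (the 5% alpha and the variant counts here are illustrative assumptions, not figures from any specific experiment):

```python
def family_wise_error_rate(num_variants: int, alpha: float = 0.05) -> float:
    """Probability that at least one of `num_variants` independent
    tests produces a false positive at significance level `alpha`."""
    return 1 - (1 - alpha) ** num_variants

# With two or three carefully chosen variants, the risk stays modest.
# With the whole alphabet, a false positive becomes the likely outcome.
for k in (2, 3, 13, 26):
    rate = family_wise_error_rate(k)
    print(f"{k:>2} variants -> {rate:.0%} chance of a spurious 'winner'")
```

At twenty-six variants the chance of at least one spurious result is roughly three in four, which is one concrete reason discernment has to precede generation.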
The worst offenders default to building whatever the highest-paid executive wants, burying the decision in data that’s too abundant to be meaningful. They’re using AI to abdicate judgment rather than inform it.
The best teams do something different. They exercise discernment. They test half the alphabet instead of the whole thing. They recognize that participant exhaustion, brand exposure risk, and analysis paralysis are real constraints, even when technical execution isn’t.
Product strategy has become the new bottleneck in software development. And that’s actually a good thing.
The Reemergence of the Subject Matter Expert
There’s a narrative circulating that AI will replace specialized knowledge with general-purpose intelligence. In practice, I’m seeing the opposite.
Subject matter experts are having a renaissance.
Here’s why: AI can provide technical breadth on demand. A developer doesn’t need to master every framework because they can leverage AI for domains outside their core expertise. But AI fundamentally struggles with context—particularly the kind of context that comes from spending 20 years in payroll processing, or educational content design, or healthcare compliance.
That accumulated wisdom—the “je ne sais quoi” of understanding all the ways a system can fail, all the edge cases regulations create, all the unspoken expectations users bring—can’t be effectively compressed into a context window. And even if it could, someone needs to know what context to provide and when.
We’re seeing this play out in how teams are structured. Subject matter experts used to front-load their contributions during project kickoff, dump their knowledge in a requirements document, and then largely step back. Now they’re needed throughout the entire development cycle.
When you’re moving fast with AI support, the feedback loop tightens dramatically. You’re not waiting two weeks between sprints to course-correct. You’re making micro-adjustments continuously. And that requires domain expertise to flow continuously into the development process, not just at the beginning.
The implication for talent strategy is profound: succession planning isn’t just about filling roles. It’s about preserving institutional knowledge that has become exponentially more valuable. As AI handles more tactical execution, losing experienced team members isn’t a staffing problem—it’s an existential risk.
Smaller Teams, Greater Diversity
When AI can provide breadth on demand, you don’t need five people who all know React. You need T-shaped practitioners who bring unique perspectives and complementary expertise. You need someone who understands the healthcare regulatory environment. Someone else with deep experience in consumer behavior. Another person who’s built scalable data pipelines.
In our studio, we’ve stopped hiring for a specific role and started hiring for team composition. We look at the puzzle and ask: what piece is missing? What perspective are we lacking? What type of judgment do we need?
This is harder than traditional hiring. You can’t just post a job description for “Senior Product Designer” and slot someone in. You need to understand the team’s existing strengths, identify gaps, and find someone who complements rather than duplicates.
But the results are worth it. A small team with genuine diversity of expertise and experience, augmented by AI, can accomplish what used to require much larger groups. We’ve delivered enterprise-scale projects with teams of four and a half people.
The “half person” matters, by the way. We’re not talking about part-time work. We’re talking about fractional expertise—bringing in specialized knowledge exactly when and how it’s needed, rather than maintaining full-time resources that sit idle between moments of critical contribution.
The Apprenticeship Crisis
The emerging problem is this: how do we train the next generation when AI handles so much of the foundational work?
I came up through an apprenticeship model. I learned why web standards are the way they are by implementing them manually, making mistakes, and having experienced practitioners explain the reasoning. I understood interaction patterns by building them from scratch, not by selecting them from a component library.
Today’s juniors are skipping that foundation. They ask AI to generate a form, and it does—perfectly formatted, accessibility-compliant, following all conventions. But they don’t learn why the navigation goes in the top left, why buttons need proper contrast ratios, why certain patterns became standards.
Of course, I don’t care about manual implementation for its own sake. What I care about is cultivating the understanding that leads to innovation. I want juniors to build the next web standard, not just use the current ones. I want them to invent something better than the keyboard and mouse, which should have been replaced decades ago.
But if you don’t understand the current patterns—why they exist, what problems they solve, where they fall short—how can you design what comes next? We need to be intentional about creating learning pathways that preserve the understanding required for innovation, even as we automate execution.
The teams making the biggest mistakes right now are the ones cutting junior positions entirely. They’re optimizing for today’s productivity and sacrificing tomorrow’s innovation capacity. The smarter play is to redesign how we mentor and upskill, creating apprenticeship models that work in an AI-augmented environment.
The Strategic Imperative
If AI commoditizes execution and makes human judgment the scarce resource, four imperatives follow:
- Product strategy matters more, not less. Invest in leadership that can exercise discernment about what gets built.
- Domain expertise compounds in value. Build succession plans that preserve institutional knowledge.
- Team composition requires intentionality. Optimize for diversity of experience rather than depth of identical skills.
- Apprenticeship models need reinvention. Create pathways that preserve the understanding required for innovation.
We’ve built our studio model around these realities. We assemble small, senior teams with diverse, complementary capabilities. We embed with clients to ensure domain context flows continuously into product development. We use AI to accelerate execution while preserving the strategic judgment that determines whether we’re building the right thing.
But we’re very aware that we’re in the early stages of this transformation. The patterns are emerging, not settled. What worked last year might be obsolete next quarter.
The organizations that thrive won’t be those with the biggest teams or the most AI tools. They’ll be those who combine the right human expertise with thoughtful automation. They’ll be those who recognize speed is a tactic, not a strategy. And they’ll be those who understand judgment doesn’t scale with headcount—it scales with wisdom.
We’re not replacing people with AI. We’re enabling smaller, more strategic teams to deliver innovation at scale. And that changes everything.