Ruthlessly Helpful

Stephen Ritchie's offerings of ruthlessly helpful software engineering practices.


The Oregon Trail to Agentic AI: Where Are We on the Journey?

John McBride recently wrote “Gas Town is a glimpse into the future”, and that article got me thinking about where we actually are on the road to AI-assisted software engineering. Not where enthusiasts claim we are. Not where skeptics fear we are. Where we actually are.

Gas Town is Steve Yegge’s experimental framework for agentic AI development. Instead of Sprints, Product Owners, and Scrum Masters, you’ve got towns, Mayors, rigs, crew members, hooks, and polecats. There’s only one human role: “a god-like entity called the Observer (you) who hands the mayor mandates.” The system runs continuous agent sessions, it’s highly experimental, and—at the moment—only works for people who can afford dozens of Claude Max accounts at $200/month each.

It’s also described as addictive.

I’m not recommending you install Gas Town. I’m not installing it myself. But watching what explorers like Yegge are doing tells us something important about where this technology actually is, and how long it might take before the rest of us can use it productively.


The Oregon Trail Metaphor


I often use the journey from St. Louis to Oregon as a metaphor for change:

  1. Lewis and Clark Expedition (1804-1806): A small group of explorers on a 3,700-mile journey into unknown geography, climate, plants, and animals.
  2. The Oregon Trail (1842-1890s): Carried an estimated 300,000 pioneers in prairie schooners on a brutal journey of death, pain, and disease.
  3. The Transcontinental Railroad (1869-present): A long but relatively safe journey for adventurers and fortune seekers.
  4. Charles Lindbergh (1927): Landed the Spirit of St. Louis at Portland’s Swan Island Airport on September 14, demonstrating that air travel would eventually transform the journey.
  5. U.S. Route 26 (1926-1950s): Established a highway connection. Driving became a safe, reliable way to relocate from St. Louis to Oregon.
  6. Southwest Airlines (present day): A one-way flight from St. Louis to Portland has one stop, takes about 7 hours, and costs about $170. Statistically, it’s the safest way to travel.

The journey that once required years of preparation and carried a significant chance of death is now a routine afternoon trip. But it didn’t happen overnight. Each stage required different capabilities, different costs, and different levels of acceptable risk.

From what I can tell, Gas Town is firmly in the Lewis and Clark phase of discovery.


What the Explorers Are Finding

Steve Yegge isn’t building a product. He’s running an expedition. In interviews, he says he’s pushing limits and expects that someone will eventually put this together in a way that people can reason about easily. With that new tooling, process, and practices, there will be new roles, events, and artifacts.

This is exactly what explorers do: they chart unknown territory so that later travelers can understand the terrain.

The terrain Yegge is mapping includes questions like:

  • Orchestration complexity: How do you coordinate multiple AI agents working on different aspects of a codebase?
  • Cost economics: At what point does the productivity gain from agentic AI offset the compute costs?
  • Human oversight models: How much supervision do these systems need, and what kind?
  • Failure modes: How do agentic systems fail, and how do you detect and recover from those failures?
  • Quality assurance: How do you verify that autonomous agents are producing maintainable, reliable code?

These are the questions that must be answered before we can build the Oregon Trail, let alone the railroad.


The Ruthlessly Helpful Assessment

Let me apply the framework from my book to evaluate where agentic AI development actually stands today:

Practicable: Can Teams Actually Use This?

Verdict: No, not yet—for most teams

Gas Town requires:

  • Dozens of concurrent AI sessions ($200/month each)
  • Tolerance for experimental, addictive tooling
  • Significant expertise to interpret and correct agent behavior
  • A willingness to be an explorer, not a traveler

This isn’t a criticism. Lewis and Clark’s expedition wasn’t practicable for Oregon farmers either. That was never the point. The expedition’s purpose was discovery, not travel.

For typical development teams, the current state of agentic AI is:

  • Too expensive for routine use
  • Too unpredictable for production work
  • Too complex for teams without dedicated expertise
  • Too immature for sustainable practices

Generally Accepted: Is This Widely Used?

Verdict: No. It’s an explorer community, not general adoption

There’s no industry data on agentic AI adoption because there’s nothing widespread enough to measure. The community consists of:

  • AI researchers and enthusiasts pushing boundaries
  • Well-funded experiments at large technology companies
  • Individual explorers like Yegge with the resources and appetite for risk

This is normal for the Lewis and Clark phase. The practice isn’t generally accepted because it isn’t general yet.

Valuable: Does This Provide Clear Benefits?

Verdict: The potential is enormous, but current value is unclear

The promise of agentic AI is compelling:

  • Dramatically reduced development time for routine work
  • Ability to tackle larger projects with smaller teams
  • Automation of tedious maintenance and refactoring tasks

But the current reality is:

  • A high cost-to-benefit ratio for most use cases
  • Significant time spent supervising and correcting agent behavior
  • Quality concerns about AI-generated code at scale
  • Unknown long-term maintenance implications

We don’t yet know whether the value proposition will hold up as the technology matures, or whether the costs will decrease faster than the capabilities improve.

Archetypal: Are There Clear Examples to Follow?

Verdict: Not for production use; it’s only for experimentation

Gas Town itself is an example, but it’s explicitly experimental. Yegge describes it as “addictive” and acknowledges it’s not ready for general use. There are no archetypal patterns yet for:

  • Integrating agentic AI into typical development workflows
  • Managing the costs at sustainable levels
  • Ensuring quality and maintainability of agent-produced code
  • Training and onboarding teams to work with these systems

Overall Assessment: 0/4 criteria met for production adoption

This doesn’t mean agentic AI won’t eventually meet all four criteria. It means we’re watching an expedition, not a product launch.
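The four-criteria checklist lends itself to a trivial scorecard. The sketch below restates the verdicts from the assessment above; the helper function is my own illustration of the framework, not code from the book.

```python
# Toy scorecard for the four "ruthlessly helpful" criteria.
# The verdicts restate the assessment in the text; the code is illustrative.

CRITERIA = {
    "practicable":        False,  # too costly and complex for most teams today
    "generally accepted": False,  # an explorer community, not wide adoption
    "valuable":           False,  # potential is large, current value unclear
    "archetypal":         False,  # no production patterns to copy yet
}

def ready_for_production(criteria: dict[str, bool]) -> bool:
    """A practice is adoption-ready only when all four criteria hold."""
    return all(criteria.values())

score = sum(CRITERIA.values())
print(f"{score}/4 criteria met; adopt now: {ready_for_production(CRITERIA)}")
# → 0/4 criteria met; adopt now: False
```

The point of the all-four rule is that a single unmet criterion is enough to wait: a valuable but impracticable practice is still an expedition, not a product.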


The Uncomfortable Questions

An article in Today in Tabs, “All Gas Town, No Brakes Town,” asks exactly the right questions:

“Will the AI ever gain a high level conceptual understanding of how to structure software to be reliable and maintainable, when it isn’t currently capable of a high-level conceptual understanding of how to run a vending machine? Will today’s junior developers ever gain that understanding themselves if they spend their careers instructing the AI rather than writing code? Will there even be junior developers if all the senior devs are handing off that work to polecats or whatever?”

These aren’t questions that explorers need to answer. Lewis and Clark didn’t need to figure out how to run a railroad or schedule commercial airline flights. But these are the questions that matter for the Oregon Trail and beyond.

The Skill Development Question

If junior developers spend their early careers instructing AI rather than writing code, will they develop the deep understanding needed to:

  • Debug complex systems?
  • Make architectural decisions?
  • Identify subtle quality problems?
  • Know when the AI is wrong?

We don’t know the answer. The pioneers who eventually walked the Oregon Trail developed practical skills that railroad passengers never needed—and those skills became obsolete. Perhaps the same will happen with traditional coding skills. Perhaps it won’t.

The Conceptual Understanding Question

Current AI systems are impressive at pattern matching and code generation, but they lack the conceptual understanding that experienced developers bring. They can produce code that looks right but isn’t—because they don’t understand what “right” means in context.

Will that change? Maybe. The gap between GPT-3 and GPT-4 was surprisingly large. Future advances might bridge the conceptual gap. Or the gap might prove fundamental.

The Economics Question

Right now, agentic AI is expensive. Dozens of sessions at $200/month each add up quickly. For the economics to work broadly, at least one of these must happen:

  • AI compute costs must drop dramatically
  • AI productivity gains must exceed the high costs
  • New, more efficient architectures must emerge

All three are plausible. None is guaranteed.
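To make the cost question concrete, here is a back-of-the-envelope break-even calculation. Every number in it (session count, subscription price, loaded hourly rate) is a hypothetical placeholder, not data from Gas Town or any real team.

```python
# Back-of-the-envelope break-even for agentic AI spend.
# All figures below are hypothetical placeholders.

def breakeven_hours_saved(sessions: int, cost_per_session: float,
                          loaded_hourly_rate: float) -> float:
    """Engineer-hours per month the agents must save to pay for themselves."""
    monthly_spend = sessions * cost_per_session
    return monthly_spend / loaded_hourly_rate

# Example: 24 sessions at $200/month, engineers at a loaded $120/hour.
hours = breakeven_hours_saved(sessions=24, cost_per_session=200.0,
                              loaded_hourly_rate=120.0)
print(f"Agents must save {hours:.0f} engineer-hours/month to break even")
# 24 × $200 = $4,800/month; at $120/hour that's 40 hours — a full work-week
# of genuine savings before the experiment earns a dollar.
```

Even a crude model like this shows why the question is open: the break-even moves linearly with compute prices, so a large drop in cost per session changes the answer completely.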


What Happens Next?

The history of the Oregon Trail suggests some patterns:

The Trail Phase (1-5 years?)

Before there’s a railroad, there will be trails. Expect:

  • Frameworks and tools that make agentic AI more accessible
  • Best practices emerging from early adopters
  • Significant failures and hard lessons
  • Gradually decreasing costs and increasing capabilities
  • Enterprise experiments with mixed results

This phase will be characterized by high effort, high risk, and high reward for those who get it right. Most organizations should observe carefully but not invest heavily.

The Railroad Phase (3-10 years?)

Eventually, the paths will be well-worn enough to build reliable infrastructure. Expect:

  • Platform products that abstract away complexity
  • Standardized practices for human-AI collaboration
  • Predictable cost and quality models
  • Mainstream adoption by large enterprises
  • Significant workforce transformation

This is when most organizations should adopt—when the practices become practicable, generally accepted, valuable, and archetypal.

The Airline Phase (5-15 years?)

At some point, the journey becomes routine. Expect:

  • AI-assisted development as a standard capability
  • Developers who never knew a world without AI agents
  • New challenges we can’t currently anticipate
  • Integration so deep it’s invisible

But remember: the people flying Southwest today don’t need to know anything about Lewis and Clark. The knowledge and practices evolved until the complexity was hidden.


The Strategic Question for Today

If you’re a development leader, the question isn’t “should we adopt agentic AI?” The question is: “What phase of the journey are we prepared for?”

If You Want to Be an Explorer

  • You have significant resources for experimentation
  • You can absorb high costs and high failure rates
  • You have deep expertise in AI and software development
  • You’re driven by curiosity and competitive advantage
  • You understand you’re charting territory, not traveling safely

If You Want to Walk the Trail (Not Yet, But Soon)

  • Watch what the explorers learn
  • Build foundational AI/ML expertise in your teams
  • Experiment with simpler AI-assisted development tools
  • Develop your own evaluation frameworks for emerging practices
  • Stay connected to the explorer community for signals

If You Want to Take the Train (Wait)

  • Focus on proven practices that meet the ruthlessly helpful criteria
  • Build strong fundamentals in testing, CI/CD, and code quality
  • Invest in developer experience and platform engineering
  • Prepare your organization for eventual transformation
  • Don’t feel pressured by fear of missing out

Commentary: When I wrote “Pro .NET Best Practices” in 2011, I didn’t dare imagine the AI developments we’re seeing today. But the framework I developed (practicable, generally accepted, valuable, and archetypal) applies directly to evaluating emerging technologies. The question is never “is this exciting?” It’s always “is this right for our team, right now?” For most teams, agentic AI isn’t right yet. That’s not a criticism of the technology. It’s a recognition of where we are on the trail.


What We Can Learn from the Explorers

Even if you’re not ready to join the expedition, the explorers are generating valuable knowledge:

Emerging Patterns

  • Hierarchical orchestration: Complex projects may need multiple layers of AI coordination
  • Human-in-the-loop checkpoints: Autonomous doesn’t mean unsupervised
  • Cost-aware architectures: Designing systems that minimize expensive AI calls
  • Quality gates: Automated checks that catch AI errors before they propagate
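As one illustration of what a quality gate might look like, here is a minimal sketch that accepts agent-produced changes only when every automated check passes. The commands shown are stand-ins of my own, not published Gas Town tooling; a real gate would invoke your actual test suite and linter.

```python
# Minimal quality-gate sketch: agent-produced changes are accepted only
# if every automated check passes. The commands below are stand-ins that
# run anywhere; a real gate would invoke your test suite and linter,
# e.g. ["pytest", "-q"] and ["ruff", "check", "."].
import subprocess
import sys

def run_check(name: str, cmd: list[str]) -> bool:
    """Run one check; exit code 0 means it passed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    passed = result.returncode == 0
    print(f"[{name}] {'pass' if passed else 'FAIL'}")
    return passed

def quality_gate(checks: dict[str, list[str]]) -> bool:
    """Run every check (no short-circuit, so all failures are reported)."""
    results = [run_check(name, cmd) for name, cmd in checks.items()]
    return all(results)

ok = quality_gate({
    "tests": [sys.executable, "-c", "pass"],  # stand-in for the test suite
    "lint": [sys.executable, "-c", "pass"],   # stand-in for the linter
})
print("merge allowed" if ok else "changes rejected")
```

The design point is the list comprehension in `quality_gate`: every check runs even after one fails, so an agent (or its supervisor) sees the full set of problems in one pass instead of fixing them one rejection at a time.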

Warning Signs

  • Over-reliance on AI output: The AI can be confidently wrong
  • Skill atrophy: Teams losing capabilities they might still need
  • Cost overruns: AI expenses that exceed productivity gains
  • Quality degradation: Subtle problems accumulating in AI-generated code

Open Questions

  • What’s the right ratio of human to AI contribution?
  • How do you maintain code that was generated by AI?
  • What new skills do developers need?
  • How do organizations handle the economic transition?

Conclusion: Patience and Preparation

Gas Town is genuinely fascinating. Steve Yegge and explorers like him are doing important work that will eventually benefit all of us. They’re finding the paths, documenting the obstacles, and demonstrating what’s possible.

But the Oregon Trail killed a lot of pioneers. The people who prospered were the ones who waited for the railroad (or, today, fly Southwest).

For most development teams, the ruthlessly helpful approach to agentic AI is:

  1. Watch the explorers: Learn from their discoveries without bearing their risks
  2. Build foundational skills: AI-assisted development, prompt engineering, evaluation frameworks
  3. Improve your fundamentals: Strong testing, CI/CD, documentation, and code quality practices will serve you in any future
  4. Wait for the trail to be worn: Let others pay the pioneer tax
  5. Prepare to move quickly: When the technology matures, be ready to adopt

The journey from St. Louis to Oregon is now a $170 flight. Someday, AI-assisted software development will be that routine. But we’re not there yet.

We’re watching Lewis and Clark from a distance, taking notes, and preparing for the world they’re discovering.



One response to “The Oregon Trail to Agentic AI”

  1. Arix Fïen, January 28, 2026 at 10:49 am

    I appreciate how grounded this is. Framing agentic AI as an exploration phase rather than a ready-made destination feels honest and useful. The Oregon Trail metaphor works because it resists both hype and fear. Watching the explorers, learning from their failures, and not mistaking experimentation for maturity seems like the most rational posture right now.
