Dec 30, 2025

The End of GTM Engineering: Why It Emerged, Why It’s Already Shrinking, and What Comes Next

This theory examines how GTM Engineering emerged to bridge the gap between AI-powered GTM platforms and the operational reality of Rev Ops teams.

4,600 words

Executive Summary

GTM Engineering emerged in 2023 to solve a specific coordination problem: AI-powered no-code platforms made sophisticated GTM automation possible faster than revenue organizations could adapt. The role filled the gap between what tools like Clay, n8n, and Zapier enable - and what most Rev Ops teams have the capability or bandwidth to execute.

Our core conclusion is clear: GTM Engineering is operationally valuable but structurally transient. It excels at last-mile execution, but it does not build durable foundations. The role exists in a narrow window where tools are powerful, organizations are behind, and speed creates advantage. As platforms simplify and teams upskill, that window closes.

Core Insights

Operational Leverage. Shallow Architecture.

GTM Engineers orchestrate vendor platforms effectively, but they are configuring on top of existing infrastructure - not engineering it. Despite the title, the vast majority of these roles do not require coding; the primary toolkit is Clay, Zapier, n8n, and CRMs - as opposed to Python, APIs, or data architecture. This limits long-term leverage.

Compression Cycle is Underway.

Historically, new role categories take 5–7 years to commoditize. AI compresses that cycle by 3–5×. What required a GTM Engineer in 2024 will require a prompt by 2027. The low barrier to entry to become a GTM Engineer was an advantage in the early sprint but will quickly become the reason this role disappears as a standalone function.

Platform Dependency Masquerading as Efficiency.

Job descriptions explicitly requiring “Clay expertise” reveal over-indexing on vendor-specific knowledge rather than transferable capability. Companies are building critical GTM workflows on proprietary platforms they don’t control, recreating the same lock-in dynamics that have plagued enterprise software for decades - and running counter to one of AI’s greatest promises: untethering customers from individual vendors.

Strategic Implication

The companies that win long-term will not be those with the best GTM Engineers. They will be the ones that embrace aspects of the GTM Engineering methodology - the willingness to run experiments, a focus on rapid prototyping, and a general Product mindset grounded in shipping and iterating solutions - then adopt this very approach with architectural rigor and technical complexity across solutions that can scale.

The real strategic question is not “Should we hire GTM Engineers?” It is “How do we adopt a formal methodology that captures the efficiency gained from speed and the effectiveness of compressed feedback cycles, while minimizing platform dependency and organizational debt?”

This distinction separates temporary leverage from durable advantage.

The Landscape

Three years ago, the term GTM Engineer barely existed. Today, roughly 400 roles are posted each quarter, salary premiums run ~20% above traditional operations roles, and the category’s most visible champion (Clay) raised $100M at a $3.1B valuation largely on the momentum of this movement.

The ecosystem formed quickly. Seven bootcamps now teach GTM Engineering skills. Over 2,500 students have graduated from Clay’s cohort program. Agencies billing themselves as “GTM Engineering as a Service” are proliferating across LinkedIn.

But beneath the growth metrics sits a definitional crisis.

Roughly half of job postings read like rebranded, lower-level Rev Ops roles. The other half describe lead gen automation specialists. Ask practitioners what GTM Engineering actually is and the answers diverge: Rev Ops with stronger technical chops, growth engineering for the AI era, or simply someone who’s very good at Clay.

The only consistent thread: they build automated revenue workflows using AI-powered platforms. Beyond that, consensus collapses.

Current State: Tool-Enabled Arbitrage

The underlying mechanics are simple. AI and no-code platforms have collapsed time-to-automation from quarters to hours. Tasks that required manual research teams or engineering resources two years ago - data enrichment, lead scoring, personalization, sequencing, CRM updates - now run through visual workflow builders.

Clay aggregates 150+ data providers. LLMs generate personalized copy at scale. Zapier, Make, and n8n stitch systems together without code.

This created an execution gap.

Rev Ops teams trained on CRM administration suddenly had access to capabilities they couldn’t reliably implement, either due to knowledge or bandwidth gaps. Sales Leaders wanted automated outbound engines without knowing how to build them. Strategy and execution drifted apart. Someone needed to bridge commercial intent and technical assembly without being a true full-stack GTM Systems Engineer.

Enter the GTM Engineer: part commercial thinker, part builder. As Clay frames it, “marketers who are bad engineers” or “engineers who care more about moving revenue than writing perfect code.”

The value proposition was speed. One GTM Engineer could replace 5–7 SDRs by automating prospecting, research, and outreach. According to Clay, early adopters reported 31–42% CAC reduction, faster deal cycles, and higher-quality pipeline.

For a period, this was real arbitrage.

Companies like Cursor, Notion, Vanta, Webflow, Canva, and Intercom embedded GTM Engineers to build systems competitors didn’t yet know how to copy. They operationalized data signals before playbooks existed and turned tooling advantages into revenue leverage.

But in software, advantages rarely stay advantages.

Key Players and Market Dynamics

Clay is the gravitational center. They didn’t just popularize the term; they built the ecosystem. Their platform became the default toolkit. Their marketing positioned GTM Engineering as the first “AI-native profession.” Their valuation validates demand but also exposes dependency risk. If Clay stalls or is disrupted, what happens to a role defined primarily by Clay proficiency?

The broader vendor layer includes enrichment providers (ZoomInfo, 6sense, Apollo), automation platforms (Zapier, n8n, Make), and CRMs (Salesforce, HubSpot, Attio). Together, they form the infrastructure GTM Engineers orchestrate - but “orchestration specialist” lacks the narrative appeal of “engineer.”

A services layer is already emerging. Agencies sell “GTM Engineering as a Service,” packaging workflows, playbooks, and templates for companies that can’t justify full-time hires. Historically, this is a leading indicator: when specialized skills become productized services, commoditization follows.

Notably, many of the most successful companies aren’t hiring “GTM Engineers” at all. They’re distributing technical capability across existing teams. They separate the questions: What should we automate? Who owns commercial judgment? Who builds the system? The answers often point to different people.

The approach being taken by more sophisticated orgs gets to the heart of the problem. No-code solutions deployed by a GTM Engineer might work for a Seed-stage startup with low complexity, but Clay’s own definition of the role it created includes the phrase “bad engineers, who care more about revenue than writing perfect code” - and herein lies a ticking time bomb.

Historical Context

GTM Engineering is not a novel phenomenon. It follows a familiar pattern: new tools create a capability gap, a hybrid role emerges to close it, and the role eventually compresses as tools mature and skills diffuse. Understanding this pattern clarifies not just where GTM Engineering came from but where it is likely headed.

Evolution: From Manual to Automated Revenue Operations

Revenue Operations has evolved through three distinct eras, each defined by its primary constraint.

The Manual Era (Pre-2010): Headcount as the Constraint

Sales Operations relied on spreadsheets, cold calls, and manual data entry. Marketing automation was rudimentary. Systems were fragmented, data quality was poor, and scaling meant hiring more people. Productivity was measured in activity volume - dials, emails, meetings - and insights were garnered from historical data only.

The Platform Era (2010–2020): Technical Capability as the Constraint

CRMs and marketing automation platforms matured. Rev Ops emerged to unify sales, marketing, and customer success around shared data and processes. The challenge shifted from labor to configuration, with the ultimate prize being a 360-degree view of the customer journey.

Rev Ops professionals became translators - mapping business requirements onto increasingly complex stacks of point solutions - with the goal of not only increasing productivity via automation but also increasing efficiency through better intelligence.

The AI-Enabled Era (2020–Present): Execution Speed as the Constraint

LLMs made personalization cheap. Data enrichment became abundant. No-code platforms eliminated the need for engineering resources to build standard sales automation workflows. What once took months could be prototyped in hours. The constraint was no longer “Can we build this?” but “How fast can we operationalize it?”

GTM Engineering emerged precisely at this inflection point. The tools existed. The opportunity was obvious. But most revenue organizations lacked operators who could move fast while tying systems directly to commercial outcomes. Traditional Rev Ops was too slow. Engineering was too detached. The gap produced the role and, in theory, solved some of those short-term challenges.

Precedents: Three Compression Cycles

1. Growth Hacker (2010–2015): When Tactics Outpaced Judgment Originally coined to describe a disciplined, data-driven experimenter, “growth hacker” quickly became a catch-all title. Early practitioners built durable advantages - referral loops, viral mechanics, product-led acquisition. As the role scaled, quality collapsed. The term became associated with spammy tactics and shallow optimization. Platforms closed loopholes. Users adapted. By 2015, serious practitioners abandoned the label. The valuable work didn’t disappear but was absorbed and operationalized with real rigor into product, marketing, and analytics roles.

What Endured: experimentation, measurement, cross-functional execution
What Failed: tactic-driven thinking and the belief one person could own everything

2. Full-Stack Developer (2012–2018): The Generalist Myth The promise was speed and coherence: one engineer owning the entire system. The reality was tradeoffs. Frontend and backend engineering require different mental models. Most “full-stack” developers leaned heavily toward one side. The role worked in simple environments or as a forcing function for better system awareness but broke down at scale. Over time, specialization reasserted itself. The title persisted, but the premium vanished.

What Endured: cross-domain understanding
What Failed: the assumption breadth could replace depth

3. Rev Ops (2018–Present): Strategy Diluted by Breadth Rev Ops solved a real problem: low visibility and poor strategic insights caused partially by fragmented GTM Systems. At its best, it became a strategic function aligning data, incentives, and execution across teams. As adoption grew, the title diluted as “Rev Ops” roles absorbed a heavy administrative burden - CRM maintenance, reporting, workflow upkeep, user support - while strategic ownership remained with a small senior cohort. Today, the term spans everything from Head of Revenue Operations to Revenue Architect to Salesforce Admin.

What Endured: unified systems and strategic coordination
What Failed: role discipline as the category scaled

The Critical Difference: Platform Dependency

Previous roles were anchored in transferable, platform-agnostic skills - system design, intelligent experimentation, statistical reasoning, persuasive communication. Tools changed but capabilities persisted.

GTM Engineering is different. Its center of gravity is platform proficiency, and its workflows are vendor-specific. When platforms change, there is no core competency outside of the tools themselves that allows knowledge to transfer cleanly.

Rev Ops professionals adapted when tools evolved because they understood business logic and were operators or strategists at heart.

This matters, particularly in the AI era when new product innovation occurs in quarters rather than years. The role’s shelf life is tied less to durable capabilities and more to how long any one vendor maintains an edge.

Deep Analysis

Core Mechanisms Driving Change

Three forces produced GTM Engineering - and those same forces are now compressing it.

AI collapsed labor into execution speed.

LLMs didn’t simply enhance existing workflows; they replaced entire categories of work. Previously, just the process of integrating ZoomInfo for data enrichment at scale required a Salesforce Admin to design the data flow and implement the solution - and much of the actual work still fell on Sales Reps to trigger actions when needed.

Now, this entire setup is possible in minutes and can run autonomously with continuous, pre-configured, real-time updates. There is no longer any asking “Do we have the capability to do this, and how long will it take?” Instead, you switch on routine sales automation with the push of a button.

No-code removed engineering as a bottleneck.

Platforms like Clay and n8n abstracted away software development. Operators without technical backgrounds could build sophisticated workflow automations. The paradox: execution became accessible, which feels good as new solutions ship daily but carries very real consequences over time - if any idea can be built, it often gets built. This is one of the most overlooked risks in GTM Systems infrastructure and a primary cause of friction between Ops teams that want to move fast and Eng teams that want to build things the right way.

Platform consolidation reduced orchestration value.

Enrichment, automation, and CRM vendors are absorbing each other’s capabilities. Salesforce embedded generative AI into its CRM, enabling native predictive insights and AI-assisted workflows inside Salesforce. HubSpot’s CRM now includes AI-driven record enrichment, workflow automation, and Breeze Copilot agents that generate tasks, draft messages, and more. Zapier continues to evolve with AI-powered workflow builders and Copilot assistance that automatically constructs integrations from plain-language prompts.

As these platforms integrate vertically, the standalone value of orchestration specialists erodes. The compression mechanism is straightforward: platforms ship features faster than the role can evolve. What required a GTM Engineer in Q1 can often be completed through built-in automation or AI toggles by Q4.

Theoretical Advantages

There are several talking points often highlighted by proponents of the GTM Engineering movement - but the logic is somewhat misguided and misses a few critical considerations.

Speed without engineering debt.

GTM Engineers ship in days what traditional teams deliver in quarters. They bypass backlogs, deployment cycles, and code reviews. For rapid GTM experimentation, this velocity produces real - albeit temporary - advantages. In our opinion, this perceived advantage is actually one of the greatest threats in the era of GTM Engineering adoption. When it comes to Product Development, speed is an incredible advantage, but only with proper guardrails in place. There’s a reason why the vast majority of Salesforce instances are a wasteland - a thorn in the side of the Sales Reps using the tool and of the Sales Leaders trying to extract insights from it.

From the outset, Salesforce was a champion of the ‘Citizen Developer’ movement - you could configure the CRM, build workflows, and more without needing to be a Developer. The direct result: a tendency to overbuild, throwing new features into production without a long-term product roadmap in place. This inevitably creates technical debt, dependencies you didn’t know existed, and a mess of integrations that directly hinder future build efforts.

Commercial judgment embedded in execution.

Unlike traditional Engineers, GTM Engineers optimize directly against revenue metrics. They understand CAC, conversion, and pipeline velocity. In theory, this collapses the strategy–execution gap and ensures workflows drive measurable outcomes. You will hear GTM Engineering champions make this claim as a primary selling point. And it makes sense: bringing commercial expertise into systems development is an effective way to build the right things.

But this concept is not new. Revenue Operations teams already house sophisticated GTM Systems headcount within their orgs. These groups have a mandate that directly overlaps with that of a GTM Engineer; however, GTM Systems Specialists bring formal technical backgrounds across a wide range of products and often have dedicated Architects or Product Managers to ensure guardrails are in place and the underlying architecture is sound.

Structural Disadvantages

There is no denying that execution speed is an inherent advantage of GTM Engineering; however, companies need to adopt new strategies while balancing both short-term needs and long-term objectives. In our opinion, this is one critical area where the promise of GTM Engineering falls short.

A hard ceiling on platform capabilities.

As we’ve outlined, the overwhelming majority of GTM Engineering roles focus on a no-code or low-code technology stack. Platforms that offer a valuable feature set, no doubt. But as AI advances, we will continue to see the commoditization of no-code features. Harnessing the full potential of Agentic AI requires much deeper technical knowledge - use of APIs, reverse ETL, and data warehousing to name a few.

Brittleness compounds quickly.

The complete lack of architectural rigor poses a significant threat to companies, even those primarily deploying no-code features across the org. Without proper documentation, version control, API rate limiting, etc., maintenance will absorb an increasing amount of time as workflows begin degrading.

There is also a very real user fatigue element - the new shiny object is great at the start but as features continue to ship endlessly without proper change management, the whiplash can erode productivity.

Vendor lock-in becomes strategic risk.

As workflows proliferate, switching costs explode. Pricing changes, feature removals, or vendor failure force compliance or rebuilds. Short-term efficiency converts into long-term dependency. It’s the enterprise software trap in a new form.

Economic Implications

The core economic question is simple: are you building durable capability or renting platform proficiency?

Organizations extracting sustained value treat GTM Systems as a foundational pillar for their entire technology infrastructure.

  • Sound design and architectural decision making.

  • Long-term roadmap planning that bridges near-term priorities with long-term best practices.

  • Clear documentation that leaves a blueprint for everything that has been done and creates efficiency - not bottlenecks - for future build.

GTM Engineering solves an execution speed problem that has long been a headache for business leaders. The frustration is understandable - but it also exists for a reason.

Designing and building sustainable systems infrastructure - a GTM engine that works for 25 Reps and scales without breaking for 250 Reps - is an engineering challenge riddled with complexity and technical nuance.

Yes, it takes time to plan and build. And while there are a myriad of process improvements these teams could be making, the answer is almost certainly not “just give the keys to someone who can drive fast, even if they don’t know where they’re going”.

Implications

What This Means for GTM Engineering

GTM Engineering as it exists today - a role primarily defined by proficiency in Clay, Zapier, and no-code orchestration - is not a sustainable category. It emerged to solve a real problem: the execution gap created when AI-powered tools outpaced organizational capability.

But the very forces that created the role are now compressing it.

The future does not belong to GTM Engineers as a standalone function. It belongs to unified GTM Systems teams with the operational rigor and technical sophistication to architect scalable revenue infrastructure. What we're witnessing is not the birth of a new discipline but rather the temporary scaffolding required while organizations transition from manual operations to intelligent, automated systems.

The Market Will Bifurcate. Only One Path Survives.

The current GTM Engineering movement conflates two fundamentally different skill sets under a single label, and this conflation obscures a critical distinction:

Path 1: The No-Code Orchestrator These practitioners excel at Clay workflows, Zapier chains, and rapid prototyping. Their primary toolkit is vendor platforms; their primary skill is speed. They deliver immediate value by automating what was previously manual, but they are building on top of platforms they don't control, using tools that will inevitably simplify, and creating workflows that the next generation of AI will render obsolete.

These roles compress rapidly. As platforms like Salesforce, HubSpot, and emerging CRMs embed native AI capabilities - automated enrichment, intelligent routing, predictive scoring - the orchestration layer loses its differentiation. What required a GTM Engineer to configure in Q1 becomes a toggle switch by Q4. The low barrier to entry that made the role accessible becomes the reason it disappears.

Path 2: The Systems Architect These are technical operators who combine strategic thinking with deep engineering capabilities: SQL, Python, API integration, data architecture, and system design. They don't just configure tools - they design infrastructure. They build data pipelines, create decision frameworks, implement governance, and architect solutions that scale from 25 Reps to 250 without breaking.

As Spencer Tahil articulates, the distinction is fundamental: "Most implementers think about the technical mechanics: 'How do I configure the workflow to distribute leads?' A systems architect asks: 'What business outcome are we trying to create, and what does the system need to support that sustainably?'"

Only Path 2 survives. Organizations that win long-term won't have "GTM Engineers" scattered across Rev Ops - they'll have centralized GTM Systems teams structured like sophisticated Product orgs: Product Managers defining roadmaps, Architects designing solutions, Engineers building infrastructure, and Administrators managing all low-code build, while simultaneously serving as a liaison to the business.

Why GTM Systems, Not GTM Engineering

The distinction matters because it reflects fundamentally different mandates:

GTM Engineering solves for speed. Build it now, ship it fast, iterate quickly. This works in the sprint but creates enormous technical debt at scale. Workflows proliferate without documentation. Dependencies form without anyone tracking them. The faster you move without architectural discipline, the more expensive the cleanup becomes.

GTM Systems solves for leverage. Build once, scale infinitely, maintain rigorously. This requires upfront investment - proper design, documentation, version control, testing - but produces compounding returns. The difference is the same as shipping code to production without tests versus building with CI/CD pipelines. One moves faster initially; the other scales sustainably.

The evidence is already clear: companies like Amplitude, Docker, Intercom, and Canva have already adopted this model. They've structured GTM Systems as centralized technology teams reporting directly to senior leadership. This signals organizational commitment: GTM infrastructure is as critical as the product itself.

AI Doesn't Compress Systems Architecture. It Demands It

There's a seductive narrative that AI will eliminate the need for technical expertise in GTM. After all, if no-code tools made engineering accessible, won't AI make everything accessible?

This fundamentally misunderstands what AI enables.

AI doesn't eliminate the need for architecture; it exponentially increases the importance of it. Yes, AI can write basic workflows and automate routine tasks. But AI amplifies bad architecture as readily as it amplifies good architecture. Give an AI access to a poorly designed CRM with inconsistent data and fragmented workflows, and it will dutifully propagate those problems at scale.

The companies already winning with AI - those building what industry observers call "AI-native GTM operating systems" - aren't eliminating technical roles. They're evolving them:

Data foundations become critical. AI requires clean, unified, accessible data. Building and maintaining that foundation demands technical expertise, not no-code proficiency.

Integration complexity increases. As AI Agents orchestrate actions across systems, the robustness of those integrations determines reliability. APIs, error handling, rate limiting - these aren't abstracted away by AI; they become more important, especially as we begin to see a move away from the legacy “System of Record” architecture and toward “System of Intelligence”, where data warehouses may become the central node in place of the CRM.

Governance requirements intensify. AI operating autonomously requires guardrails, monitoring, and fail-safes. This is systems architecture work, not workflow configuration.
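As an illustration only, a first guardrail can be as simple as a fail-closed policy check that every agent-proposed action must pass before it touches a live system. The action names and limits below are invented for the sketch; real governance would also add logging, monitoring, and human escalation paths.

```python
# Hypothetical policy layer for an autonomous GTM agent.
# Anything not explicitly allowed is rejected (fail closed).
ALLOWED_ACTIONS = {"update_lead_score", "enqueue_email", "create_task"}
MAX_RECORDS_PER_RUN = 100  # cap the blast radius of any single run

def approve(action: str, record_count: int) -> bool:
    """Return True only for allowlisted actions within the record budget."""
    if action not in ALLOWED_ACTIONS:
        return False
    if record_count > MAX_RECORDS_PER_RUN:
        return False
    return True

routine = approve("enqueue_email", 50)           # in-budget, allowlisted
destructive = approve("delete_records", 5)       # never allowlisted
oversized = approve("update_lead_score", 500)    # over the record budget
```

The design choice worth noting is the default: the agent earns permissions one action at a time, rather than losing them one incident at a time.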

As one GTM Systems Leader at Black Forest Labs describes the role: "This isn't your typical Rev Ops role, and it's not pure engineering either. It's the technical architecture of our business itself: the systems that turn API calls into revenue, transform messy multi-jurisdictional data into clarity, and automate what currently requires 10 people to do manually."

AI makes one GTM Systems Engineer capable of 10× the impact - but only with proper architecture. Without it, AI becomes a liability, not an asset.

The Playbook Paradox: Commoditization Requires Customization

Here's the tension currently playing out: GTM playbooks are rapidly commoditizing while competitive advantage increasingly depends on execution specificity.

Platforms like Unify, Common Room, and Amplemarket are productizing what were once bespoke workflows. Signal-based selling, account enrichment, automated outreach - these are becoming packaged features, not proprietary capabilities.

But the conclusion isn't that GTM roles disappear. It's that the value shifts from having the playbook to customizing it for your context:

  • What signals matter for your ICP?

  • How do those signals integrate with your systems?

  • What actions should your organization take when signals fire?

  • How do you measure effectiveness in your GTM motion?

This customization requires deep understanding of:

  1. Business context: Who are we selling to and why do they buy?

  2. Technical systems: How does our infrastructure work and what are its constraints?

  3. Data architecture: What information exists, where does it live, and how do we access it reliably?

No-code operators can deploy the standard playbook. Systems Architects customize it for competitive advantage. The former becomes a commodity; the latter becomes more valuable as playbooks proliferate.

Organizational Complexity Doesn't Simplify. It Scales

Even if individual tools simplify, the overall system complexity increases as organizations grow. This is an iron law of scaling:

  • More products require more pricing models

  • More markets require localization and compliance considerations

  • More team members require coordination and alignment mechanisms

  • More data sources require integration and governance

At 10 people, anyone can manage GTM Systems with spreadsheets and Zapier. At 100 people, you need structured processes and proper tooling. At 1,000 people, you need architecture.

GTM Systems teams exist to manage this complexity, not by simplifying it away (impossible) but by building infrastructure that handles complexity systematically:

  • Modular design ensures changes to one system don't break others

  • Clear interfaces allow teams to coordinate without constant communication

  • Centralized governance prevents fragmented decision-making

  • Documentation enables knowledge transfer and reduces key-person risk

This work doesn't disappear as tools improve. If anything, better tools enable more complex GTM motions, which require more sophisticated systems architecture to manage.

The Implications for Organizations

If you're hiring today, recognize the distinction between tactical executors and strategic architects. The former might call themselves GTM Engineers and demonstrate impressive Clay proficiency. The latter understand system design, can write code when necessary, think in data models, and architect for scale. Hire for the long term, not the short-term execution gap.

If you're building a team, structure for sustainability. Don't scatter "GTM Engineers" across Rev Ops as individual contributors. Build a centralized GTM Systems team with clear ownership, technical leadership, and a mandate to architect infrastructure, not just ship workflows. Model it like a product engineering team: PMs for strategy, Architects for design, Engineers for implementation, Analysts for insights.

The Verdict: Methodology Over Title

The GTM Engineering movement got one thing exactly right: the methodology. The willingness to experiment rapidly, the focus on compressed feedback cycles, the product mindset grounded in shipping and iterating. These principles represent genuine innovation in how GTM teams operate.

Where it went wrong was conflating methodology with skillset. You don't need a GTM Engineer to adopt rapid experimentation and data-driven iteration. You need organizational commitment to those principles, paired with the technical rigor to implement them sustainably.

The companies that win will be those that:

  1. Adopt the GTM Engineering methodology: rapid experimentation, compressed cycles, product thinking

  2. Reject the GTM Engineering structure: distributed no-code operators without architectural oversight

  3. Invest in GTM Systems teams: centralized, technically sophisticated organizations that build durable infrastructure

GTM Engineering as a role category will fold into GTM Systems. The best practitioners will evolve into Systems Architects. The rest will be compressed by the same AI tools that created the opportunity in the first place.

This isn't the end of technical GTM roles. It's the continuation of their maturation into true Engineering disciplines, with all the rigor, structure, and long-term thinking that implies. The window for arbitrage is closing. The era of sustainable competitive advantage through systems architecture is opening.

The only question is whether organizations will recognize the distinction before they've accumulated too much technical debt to recover efficiently.

Working Theories.

In an era where certainty expires faster than playbooks can adapt, we provide a clear point of view, tested in reality, and built to evolve.