Agentic Systems and the Displacement of Enterprise Software
Abstract
The enterprise software industry generates approximately $600 billion in annual revenue, the vast majority structured as per-seat subscriptions that assume a proportional relationship between organizational headcount and software consumption. This paper examines how autonomous AI agents are decoupling software value from human headcount, analyzes vertical-specific exposure across the SaaS landscape, and argues that the emerging "service-as-software" paradigm represents a structural threat to incumbent vendors rather than an incremental improvement in their existing products. The implications extend beyond pricing model disruption to a fundamental reordering of how organizations purchase, consume, and derive value from enterprise technology.
The Per-Seat Model Under Structural Pressure
Enterprise SaaS pricing is fundamentally a labor tax: organizations pay per employee who uses the software, with revenue scaling linearly with headcount. Salesforce charges $300 per user per month for its Enterprise CRM tier. ServiceNow charges comparable rates for IT service management seats. Atlassian, Workday, and SAP SuccessFactors all employ variations of the same model. The economic logic assumes that each human operator generates proportional value from the software, making per-seat pricing a reasonable proxy for value delivered.
When AI agents can perform the work of multiple human operators, this proportionality breaks. Consider a legal document review workflow. A team of 50 paralegals using Relativity's eDiscovery platform at $180 per seat per month generates $108,000 in annual seat revenue. An AI agent performing the same document review, using the same underlying data through Relativity's API, reduces the human team to 5 supervising attorneys. The seat revenue collapses by 90% while the volume of work processed may actually increase. This is not a theoretical projection; Clio, the legal practice management platform, reported in its 2025 Legal Trends Report that law firms deploying AI assistants reduced their per-matter software seat requirements by an average of 60%.
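The seat-revenue arithmetic in the eDiscovery example can be made explicit. The figures ($180 per seat per month, 50 seats falling to 5) come from the text above; the helper function is purely illustrative, not a vendor pricing model.

```python
# Per-seat revenue collapse in the document review example above.
# Figures are from the text; the function is illustrative only.

def annual_seat_revenue(seats: int, price_per_seat_month: float) -> float:
    """Annual recurring revenue from per-seat licensing."""
    return seats * price_per_seat_month * 12

before = annual_seat_revenue(50, 180)  # 50 paralegals
after = annual_seat_revenue(5, 180)    # 5 supervising attorneys

print(f"before:  ${before:,.0f}/yr")           # $108,000/yr
print(f"after:   ${after:,.0f}/yr")            # $10,800/yr
print(f"decline: {1 - after / before:.0%}")    # 90%
```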
The first-order effect is margin compression for SaaS vendors whose revenue is tied to seat counts. The second-order effect is more consequential: a restructuring of how organizations purchase software. When the primary consumer of an application's functionality is an AI agent operating through an API rather than a human navigating a graphical interface, the value proposition shifts from "user experience" to "data access and API reliability." Vendors whose competitive moats are built on interface design, user training, and switching costs find those moats meaningless when the user is an API-calling agent. Vendors whose moats are built on proprietary data, workflow logic, and integration depth retain defensibility, but must reprice around outcome delivery rather than seat occupancy.
Vertical Exposure Analysis
The impact of agentic displacement is not uniform across the SaaS landscape. A useful analytical framework classifies enterprise software by automation surface area: the share of a product's value delivery that consists of procedural execution rather than human judgment. Verticals with high procedural content and well-structured data face the most acute disruption.
Customer Relationship Management (CRM) represents the single largest SaaS category by revenue, dominated by Salesforce with approximately $35 billion in annual revenue. A typical CRM workflow involves data entry (logging calls, emails, meeting notes), pipeline management (updating deal stages, forecasting), and reporting (generating dashboards, identifying trends). An estimated 70% of CRM activity by time spent is data hygiene and record-keeping, tasks where AI agents demonstrably outperform human operators. HubSpot's 2025 State of Sales report found that sales representatives spend only 28% of their time actually selling; the remainder is CRM administration, internal communication, and meeting preparation. AI agents can eliminate the non-selling time almost entirely, reducing the number of CRM seats needed while improving data quality. Salesforce's response, embedding Einstein AI throughout its platform, attempts to add agent capabilities within the existing per-seat model. But the fundamental tension remains: if AI makes each user 5x more productive, the customer needs 5x fewer seats.
Legal technology faces perhaps the most concentrated disruption. Thomson Reuters' Westlaw and LexisNexis together dominate legal research with a combined market share exceeding 80%. Their pricing models charge per-attorney access, with enterprise contracts often exceeding $100,000 annually for mid-size firms. AI agents with access to case law databases, regulatory filings, and contract corpora can perform document review, precedent search, and risk analysis without a human operator navigating the platform's interface. Harvey, the AI legal assistant backed by Sequoia Capital, demonstrated in 2025 trials that it could complete contract review tasks in minutes that previously required junior associates working for hours. The platform's value shifts from the interface to the underlying data, and data access is increasingly commoditized as government legal databases, court filing systems, and regulatory repositories adopt open data standards.
IT Service Management (ITSM) follows a similar pattern. ServiceNow, the dominant ITSM platform at approximately $10 billion in annual revenue, prices per "fulfiller" seat. A typical enterprise IT support organization might have 200 fulfillers handling tier-1 and tier-2 tickets. AI agents can resolve an estimated 40-60% of tier-1 tickets autonomously, as demonstrated by ServiceNow's own Virtual Agent product and competitors like Moveworks. Each resolved ticket eliminates a human interaction with the platform, reducing the required seat count. ServiceNow has aggressively pivoted toward "platform" positioning, betting that organizations will pay for the workflow automation substrate even as the number of human operators declines. This is a rational response, but it requires a pricing model transition that will compress revenue per customer.
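The seat impact of autonomous ticket resolution can be estimated from the deflection rates cited above. The tier-1 share of total volume (70% here) and the assumption that fulfiller headcount scales with residual ticket volume are illustrative assumptions, not ServiceNow data.

```python
# Back-of-envelope ITSM seat impact under the 40-60% tier-1
# autonomous-resolution rates cited above. Tier-1 share of total
# ticket volume (70%) is an illustrative assumption.

def remaining_fulfillers(fulfillers: int, tier1_share: float,
                         deflection_rate: float) -> float:
    """Fulfillers still needed when a share of tier-1 volume is deflected,
    assuming headcount scales with residual ticket volume."""
    deflected_fraction = tier1_share * deflection_rate
    return fulfillers * (1 - deflected_fraction)

# 200 fulfillers, 70% of volume assumed tier-1:
for rate in (0.40, 0.60):
    print(f"{rate:.0%} deflection -> "
          f"{round(remaining_fulfillers(200, 0.70, rate))} fulfillers")
```

Even under conservative assumptions, a fifth to two-fifths of the fulfiller seats become unnecessary, which is the revenue compression the paragraph above describes.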
The Service-as-Software Paradigm
The emerging model inverts the traditional relationship between software and labor. In the conventional paradigm, software is a productivity tool: humans use software to accomplish tasks more efficiently. In the agentic paradigm, software becomes an infrastructure substrate: AI agents consume software APIs to deliver services directly to end customers or internal stakeholders. The economic unit shifts from "user accessing a tool" to "outcome delivered by an agent."
This inversion has been termed "service-as-software" by venture capital firms including Andreessen Horowitz and Bessemer Venture Partners, both of which published investment theses in 2025 arguing that the next generation of enterprise technology companies will sell outcomes rather than seats. The distinction is not merely semantic. A company that sells "accounts payable processing" as an outcome-priced service, powered by an AI agent consuming accounting software APIs, competes directly with the accounting software vendor while operating on a fundamentally different economic model. The outcome-based provider's cost structure is dominated by inference compute rather than human labor, enabling radical price compression.
The pricing implications are significant. McKinsey's 2026 analysis of enterprise AI adoption found that organizations moving from per-seat SaaS to outcome-based AI services reported average cost reductions of 35-55% for equivalent workflow throughput. However, the total value extracted from the workflow often increased because AI agents could process higher volumes, operate continuously, and maintain consistent quality. The net effect is that customers pay less per unit of output while receiving more total output, a dynamic that benefits buyers but compresses vendor revenue per customer.
Early examples of service-as-software are already operating at scale. Klarna reported in February 2025 that its AI customer service agent, built on OpenAI's technology, was handling two-thirds of all customer service interactions, performing the equivalent work of 700 full-time agents. The company reduced its customer service headcount accordingly and reported improved resolution times and customer satisfaction scores. This is not a hypothetical; it is an operational reality that other companies are racing to replicate.
Security Considerations for Agentic Enterprise Systems
The rapid deployment of AI agents with broad access to enterprise systems introduces novel attack surfaces that traditional application security frameworks do not address. The OWASP Foundation published its first "Top 10 for LLM Applications" in 2023 and updated it in 2025, identifying prompt injection, insecure output handling, and excessive agency as the most critical vulnerability classes. These risks are amplified in enterprise contexts where agents have access to financial systems, customer data, and operational infrastructure.
Prompt injection remains the most concerning vector. An attacker who can influence the context provided to an enterprise AI agent, whether through a malicious email, a poisoned document in a shared drive, or a crafted customer support message, can potentially redirect the agent's actions. In 2025, researchers at Anthropic and Google DeepMind independently demonstrated that sophisticated indirect prompt injection attacks could cause agents to exfiltrate data, modify records, or escalate privileges, all while producing output that appeared normal to human supervisors. The defense-in-depth approach, combining input sanitization, output validation, privilege minimization, and human-in-the-loop approval for high-risk actions, is necessary but adds latency and complexity that partially offsets the productivity gains of agentic deployment.
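The human-in-the-loop component of the defense-in-depth approach described above can be sketched as a policy gate on agent tool calls: every proposed action is checked against a risk classification before execution. The tool names, risk tiers, and approval callback here are hypothetical; a production system would wrap this core with input sanitization and output validation.

```python
# Minimal sketch of human-in-the-loop gating for agent tool calls.
# Tool names and risk tiers are hypothetical examples, not any
# vendor's framework. Fail-closed: high-risk actions require
# explicit approval to proceed.

HIGH_RISK_TOOLS = {"modify_record", "send_external_email", "export_data"}

def execute_tool_call(tool: str, args: dict, approve_fn) -> dict:
    """Run a tool call, requiring human approval for high-risk tools."""
    if tool in HIGH_RISK_TOOLS:
        if not approve_fn(tool, args):
            return {"status": "blocked", "tool": tool}
    # Low-risk tools (and approved high-risk ones) execute directly.
    return {"status": "executed", "tool": tool, "args": args}

# A supervisor callback that denies everything (fail-closed default):
result = execute_tool_call("export_data", {"table": "customers"},
                           approve_fn=lambda tool, args: False)
print(result["status"])  # blocked
```

The latency cost the paragraph mentions is visible in the design: every high-risk action blocks on a human decision, which is exactly the trade-off between safety and throughput.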
Authorization scope presents a systemic challenge. A human operator's access to enterprise systems is governed by role-based access control (RBAC) policies that reflect their job function. An AI agent acting on behalf of multiple users, or performing cross-functional workflows, does not map cleanly onto existing RBAC models. Organizations deploying agentic systems must develop agent identity frameworks that define what each agent can access, under what conditions, and with what level of human oversight. The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, provides a starting taxonomy but does not yet address the specific challenges of autonomous agents operating within enterprise authorization systems.
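One way the agent identity frameworks described above could be expressed is as an explicit grant of (resource, action) scopes attached to each agent and checked on every access. The schema below is a hypothetical sketch, not the NIST AI RMF or any vendor's model; real deployments would add conditions and oversight levels per scope.

```python
# Hypothetical sketch of an agent identity with explicit scopes,
# checked on every access. Not a standard; resource and action
# names are illustrative.

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    # Granted permissions as (resource, action) pairs.
    scopes: set = field(default_factory=set)

    def can(self, resource: str, action: str) -> bool:
        """Default-deny: only explicitly granted pairs are allowed."""
        return (resource, action) in self.scopes

ap_agent = AgentIdentity(
    name="ap-processing-agent",
    scopes={("invoices", "read"), ("invoices", "approve")},
)

print(ap_agent.can("invoices", "read"))   # True
print(ap_agent.can("payroll", "read"))    # False
```

Unlike a human's RBAC role, the grant is scoped to the agent's workflow rather than a job function, which is the mismatch the paragraph identifies.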
Infrastructure Implications and Inference Economics
Agentic enterprise workloads are inference-intensive in ways that differ qualitatively from traditional SaaS backends. Each agent interaction involves multiple reasoning steps, tool calls, context retrievals from RAG systems, and often multi-turn deliberation where the agent plans, executes, evaluates, and iterates. A single customer service resolution that takes a human 8 minutes might require 15 to 30 inference calls as the agent retrieves customer history, reasons about the issue, queries knowledge bases, drafts a response, and validates compliance with policy. Multiply this by thousands of concurrent interactions and the aggregate inference demand exceeds traditional batch processing by orders of magnitude.
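Scaling the per-resolution estimate above to fleet level makes the aggregate demand concrete. The concurrency figure and per-session resolution rate below are illustrative assumptions; the 15-30 calls per resolution comes from the text.

```python
# Aggregate inference demand from concurrent agent sessions.
# Concurrency and resolution rate are illustrative assumptions;
# calls-per-resolution range (15-30) is from the text.

def calls_per_hour(concurrent_sessions: int,
                   resolutions_per_session_hour: float,
                   calls_per_resolution: int) -> float:
    return concurrent_sessions * resolutions_per_session_hour * calls_per_resolution

# 5,000 concurrent sessions, each completing ~7.5 resolutions/hour
# (one 8-minute resolution at a time), at 20 calls per resolution:
print(f"{calls_per_hour(5000, 7.5, 20):,.0f} inference calls/hour")
```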
The infrastructure economics of agentic enterprise workloads favor dedicated, purpose-built inference platforms over general-purpose cloud compute. Latency requirements are stringent: an agent that takes 30 seconds to respond to a customer inquiry delivers a worse experience than a human who takes 30 seconds, because the human is perceived as thinking while the agent is perceived as broken. Achieving sub-second response times for complex multi-step agent workflows requires GPU infrastructure optimized for low-latency inference with large context windows, high-bandwidth memory for rapid KV-cache access, and continuous batching to maximize throughput across concurrent sessions.
Organizations deploying agentic systems at enterprise scale face a build-versus-buy decision on inference infrastructure. Cloud inference APIs from Anthropic, OpenAI, and Google offer simplicity but introduce latency variability, rate limits, and per-token costs that scale linearly with usage. Private inference infrastructure requires significant capital investment but delivers deterministic latency, predictable costs that amortize over time, and the ability to optimize hardware configurations for specific workload profiles. For organizations where AI agents are becoming the primary interface to enterprise systems, the inference layer is no longer a discretionary technology expense; it is core operational infrastructure on par with networking and storage.
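The build-versus-buy decision above reduces to comparing linear per-token API cost against amortized fixed cost. All numbers below (per-million-token API price, token volume, hardware capex and opex) are placeholder assumptions, not vendor list prices; the structure of the comparison is the point.

```python
# Break-even sketch for the build-versus-buy decision above.
# All dollar figures are placeholder assumptions, not list prices.

def monthly_api_cost(tokens_per_month: float,
                     usd_per_million_tokens: float) -> float:
    """Cloud API cost scales linearly with token volume."""
    return tokens_per_month / 1e6 * usd_per_million_tokens

def monthly_private_cost(capex_usd: float, amortization_months: int,
                         opex_per_month: float) -> float:
    """Private infrastructure: amortized capital plus fixed operating cost."""
    return capex_usd / amortization_months + opex_per_month

api = monthly_api_cost(20e9, 10.0)                     # 20B tokens at $10/M
private = monthly_private_cost(2_000_000, 36, 40_000)  # $2M over 3y + opex
print(f"API: ${api:,.0f}/mo, private: ${private:,.0f}/mo")
```

Under these assumptions the private deployment is cheaper at this volume, but the crossover point moves with token volume, which is why the decision hinges on whether agents are a marginal workload or, as the paragraph argues, core operational infrastructure.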