From SDLC to Intent-to-Verified Behavior
Rethinking the Software Lifecycle in the Age of Generative Systems
1. Introduction
For decades, the software development lifecycle (SDLC) has provided the dominant grammar for enterprise software production. That grammar was useful because it made software legible to managers, auditors, regulators, and engineering teams. It treated software as an artifact that could be scoped, built, tested, released, and maintained through recognizable stages. Agile, DevOps, and continuous delivery modified the cadence of those stages, but they largely preserved the underlying assumption that the main thing being produced was code.
That assumption deserves re-examination. Generative AI does not merely accelerate implementation; it changes the economic center of gravity of software work. When a plausible first version of code can be generated quickly, implementation no longer monopolizes cost, time, or attention. The bottleneck moves upstream to the quality of the intent being expressed and the knowledge available to support generation, and downstream to the rigor of verification required to trust the result.
This essay advances a simple thesis: the most important unit in modern software production is no longer the code release, but the verified behavior. Code remains indispensable: it is inspectable, optimizable, and often safety-critical. But it is increasingly a derivative representation of a deeper set of inputs: business intent, domain knowledge, policy constraints, data semantics, and runtime evidence. Once that shift is recognized, the lifecycle itself must be redefined.
2. Why the Classical Framing Is Under Strain
The classical lifecycle model emerged in an era when software implementation was expensive, specialized, and opaque. Human beings translated requirements into design, design into code, and code into executable systems. Governance naturally formed around that sequence. Requirements documents were controlled because they fed design. Designs were reviewed because they fed implementation. Code was reviewed because it became the executable artifact.
Agile methods improved responsiveness by shortening cycles and embedding feedback earlier. DevOps and continuous delivery reduced the friction between build and run. Platform engineering industrialized the delivery substrate. Yet even these advances preserved code as the central durable asset. The release remained the core unit of change, and most governance still attached to repositories, pull requests, branches, artifacts, and deployment pipelines.
Generative systems do not eliminate this machinery, but they destabilize its assumptions. If many different code artifacts can satisfy the same specification, then the business value does not primarily reside in any single code listing. It resides in the behavior achieved, the constraints respected, the interfaces honored, and the evidence available to justify trust in the result. This matters especially in enterprises, where correctness, compliance, and accountability are often more important than authorship.
3. Redefining Software
A narrower definition of software treats it as a set of instructions executed by a computing system. That definition remains technically true, but it is no longer sufficient as an organizing concept. In AI-mediated production, source code is increasingly one representation among several. The more durable concern is the governed behavior a system produces over data and interfaces.
A more useful working definition is this: software is executable, governed behavior operating over data, interfaces, and policies. This definition preserves code, but it refuses to make code the sole source of truth. It also better reflects how modern systems actually fail. Many material failures are not syntax failures or algorithmic failures. They are failures of domain interpretation, data semantics, policy enforcement, edge-case handling, or accountability.
This redefinition has practical consequences.
- Ownership shifts from "who owns the repository" to "who is accountable for a given business behavior."
- Change management shifts from code diffs alone to changes in intent, policy, data contracts, and knowledge assets.
- Assurance shifts from post-implementation testing to continuous evidence that behavior remains aligned with expectations.
- Architecture shifts from organizing only services and deployments to organizing the informational substrate that generation and verification depend on.
These implications are developed more fully in the next article. For now, the key point is that if software is better understood as governed behavior, then the lifecycle must be about aligning and re-aligning behavior, not simply moving code through stages.
4. Redefining the Lifecycle
Under the classical SDLC, the lifecycle is the structured progression by which a code artifact is conceived, built, validated, released, and maintained. Under an AI-native framing, the lifecycle is better understood as the continuous transformation of intent, knowledge, policy, and data into verified runtime behavior.
That definition changes the unit of analysis. A release is no longer the natural center of gravity. The new center is the behavior surface that the system exposes in production: what it does, under what constraints, for which users, using what data, with what evidence of correctness and legitimacy. Some changes to that behavior surface may still be released in batches. Others may be regenerated in smaller increments as specifications, rules, or operational conditions evolve.
This redefinition also changes what counts as productive work. In a code-centric worldview, the main work is implementation and the main supporting activities are requirements and testing. In an intent-to-verified-behavior worldview, implementation becomes partially machine-assisted and sometimes machine-generated, while the greatest leverage moves to intent precision, knowledge formalization, policy modeling, verification design, telemetry interpretation, and controlled regeneration.
5. The IVB Lifecycle
The Intent-to-Verified-Behavior lifecycle is a conceptual model for this shift. It consists of six continuous capabilities rather than six sequential gates.
5.1 Intent Capture
Intent capture turns business aims into precise, reviewable specifications. It asks not only what outcome is desired, but also what constraints, acceptance criteria, exclusions, and measurable success conditions define that outcome. A vague prompt may generate code, but it rarely generates trustworthy behavior.
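To make the contrast with a vague prompt concrete, intent can be expressed as a structured, reviewable object. The sketch below is illustrative only; the field names (outcome, constraints, exclusions, acceptance_criteria) are assumptions for this example, not part of any prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    """A reviewable specification of intent (illustrative field names)."""
    outcome: str                                                   # desired business result
    constraints: list = field(default_factory=list)                # what must never happen
    exclusions: list = field(default_factory=list)                 # what is out of scope
    acceptance_criteria: list = field(default_factory=list)        # measurable success conditions

    def is_reviewable(self) -> bool:
        # A spec with no constraints or acceptance criteria is closer to
        # a vague prompt than to a specification of trustworthy behavior.
        return bool(self.constraints) and bool(self.acceptance_criteria)

spec = IntentSpec(
    outcome="Refund eligible orders within 14 days",
    constraints=["Never refund more than the original charge"],
    acceptance_criteria=["95% of eligible refunds processed within 24h"],
)
print(spec.is_reviewable())  # True: constraints and criteria are explicit
```

The value of the structure is not the code itself but that each field invites a review question: is the constraint list complete, are the exclusions agreed, are the criteria measurable?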
5.2 Knowledge Alignment
Knowledge alignment assembles the domain facts, decision rules, canonical definitions, and data semantics needed to generate correct behavior. In many organizations, this is where the real difficulty now lies. Critical business knowledge is often fragmented across policies, legacy systems, tribal memory, spreadsheets, and informal conventions.
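One minimal form that formalized knowledge can take is a single authoritative registry of business definitions, in place of rules scattered across spreadsheets, legacy code, and tribal memory. The terms and thresholds below are invented for illustration.

```python
# A minimal registry of canonical business definitions (illustrative terms and thresholds).
CANONICAL_DEFINITIONS = {
    "active_customer": lambda c: c["days_since_last_order"] <= 90,
    "eligible_for_refund": lambda o: o["days_since_purchase"] <= 14 and not o["final_sale"],
}

def evaluate(term: str, record: dict) -> bool:
    """Resolve a business term against its single canonical definition."""
    if term not in CANONICAL_DEFINITIONS:
        raise KeyError(f"No canonical definition for {term!r}")
    return CANONICAL_DEFINITIONS[term](record)

print(evaluate("eligible_for_refund", {"days_since_purchase": 10, "final_sale": False}))  # True
```

The design point is that generation and verification both consult the same definition, so a correction made here propagates to every behavior that depends on the term.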
5.3 Solution Synthesis
Solution synthesis produces candidate artifacts from the available inputs. Those artifacts may include code, configurations, workflows, prompts, interface definitions, policies, and operational runbooks. Human expertise remains essential here, but the human role shifts from exclusive authorship toward framing, selecting, constraining, and integrating candidate solutions.
5.4 Verification and Assurance
Verification becomes a first-class activity rather than a downstream checkpoint. Generated behavior must be tested not only for functionality, but for groundedness, policy compliance, edge-case handling, failure behavior, and traceability. The critical question is no longer "does the code run?" but "is there sufficient evidence to trust the behavior?"
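The difference between "does the code run?" and "is there evidence to trust the behavior?" can be sketched as an assurance check that returns named evidence rather than a pass/fail bit. The candidate function and the specific checks are hypothetical stand-ins for a generated artifact and its verification plan.

```python
def refund_amount(order_total: float, requested: float) -> float:
    """A candidate implementation, standing in for synthesized code."""
    return min(requested, order_total)

def verify_candidate() -> dict:
    """Assurance checks framed around behavior, not just execution."""
    evidence = {}
    # Functional check: the happy path works.
    evidence["functional"] = refund_amount(100.0, 30.0) == 30.0
    # Policy check: never refund more than the original charge.
    evidence["policy_cap"] = refund_amount(100.0, 250.0) == 100.0
    # Edge case: a zero-value order must produce a zero refund.
    evidence["edge_zero"] = refund_amount(0.0, 10.0) == 0.0
    return evidence

print(verify_candidate())  # each named check contributes traceable evidence
```

Returning a dictionary of named checks, rather than a single boolean, is what makes the result auditable: the evidence can be stored alongside the artifact it justifies.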
5.5 Deployment and Enforcement
Deployment in the IVB model includes the runtime enforcement of the policies under which a behavior was approved. This means that guardrails, access controls, scope limits, and observability hooks are not adjuncts to production; they are part of the production artifact.
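A small sketch of what "policy as part of the production artifact" can mean in practice: a guardrail that re-checks the approved scope at call time, rather than only at review time. The actor names and scope strings are invented for the example.

```python
import functools

# Illustrative policy: which scopes each actor was approved for.
APPROVED_SCOPES = {"refund_agent": {"read_orders", "issue_refund"}}

def enforce_scope(actor: str, required: str):
    """Runtime guardrail: the policy under which a behavior was approved
    is enforced on every invocation, not only during review."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if required not in APPROVED_SCOPES.get(actor, set()):
                raise PermissionError(f"{actor} lacks scope {required!r}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@enforce_scope("refund_agent", "issue_refund")
def issue_refund(order_id: str, amount: float) -> str:
    return f"refunded {amount} on {order_id}"

print(issue_refund("A-123", 25.0))  # allowed: scope was approved for this actor
```

Because the guardrail wraps the behavior itself, removing the policy would require changing the deployed artifact, which is exactly the coupling the IVB framing asks for.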
5.6 Telemetry and Regeneration
Operational telemetry completes the loop. It reveals whether behavior remains aligned with intent under real conditions. In an AI-native lifecycle, telemetry does not merely inform bug fixes. It can trigger re-specification, knowledge correction, re-verification, or controlled re-synthesis.
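The mapping from operational evidence to lifecycle response can be made explicit as a triage function. The signal names and thresholds below are assumptions chosen to illustrate the loop, not recommended values.

```python
def triage(telemetry: dict) -> str:
    """Map operational evidence to a lifecycle response (illustrative thresholds)."""
    if telemetry["policy_violations"] > 0:
        return "re-verify"          # behavior broke an approved constraint
    if telemetry["grounding_failures"] > 0.05:
        return "correct-knowledge"  # outputs drift from canonical definitions
    if telemetry["intent_mismatch_reports"] > 10:
        return "re-specify"         # users report the wrong behavior, not a bug
    return "steady-state"

print(triage({"policy_violations": 0,
              "grounding_failures": 0.08,
              "intent_mismatch_reports": 2}))  # "correct-knowledge"
```

The ordering encodes a priority: policy breaches outrank knowledge drift, which outranks mis-specified intent, so telemetry feeds the specific upstream capability that needs correcting.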
The value of this model is not that it abolishes existing engineering practice. Its value is that it places the emphasis where AI-native systems increasingly require it: on alignment, governance, and evidence rather than on code production alone.
6. New Bottlenecks and New Roles
If implementation cost falls, bottlenecks migrate. The first major bottleneck is knowledge formalization. Enterprises rarely fail because they cannot type code fast enough; they fail because they cannot state their own rules consistently. The second bottleneck is verification design. A generated artifact may look plausible and even pass shallow tests while still embodying the wrong assumptions. The third bottleneck is accountability: someone must still be answerable for what the system does.
These shifts change role definitions rather than eliminating roles.
- Developers spend less time producing boilerplate and more time shaping solution boundaries, reviewing generated artifacts, and reasoning about correctness.
- QA engineers become more central because assurance design becomes harder, not easier.
- Product managers and analysts become more responsible for precision in specifications and acceptance criteria.
- Data and knowledge specialists become critical because poor semantics produce unreliable behavior at scale.
- Architects become stewards of the structures through which intent, knowledge, and policy are made operational.
This role evolution is the bridge to the second article in the trilogy. If the lifecycle changes in this way, architecture must change with it.
7. Risks and Limits
A behavior-centered lifecycle is not a license for automation enthusiasm. Several risks become more acute when generation is cheap.
First, plausible outputs can conceal incorrect reasoning. Second, generated tests can replicate the blind spots of generated implementations. Third, traceability can become performative rather than real if systems attach references that look authoritative but do not truly ground the result. Fourth, rapid regeneration can produce unstable systems unless tightly bounded by approval thresholds and rollback mechanisms.
The deeper risk is managerial misreading. Organizations may treat generative tooling as a reason to reduce investment in domain modeling, testing, architecture, or governance. The opposite is more likely to be true. The cheaper generation becomes, the more valuable disciplined upstream inputs and independent downstream assurance become.
8. Conclusion
The software lifecycle is not disappearing, but its organizing assumptions are changing. In an era of generative systems, software is less usefully understood as a fixed code artifact and more usefully understood as governed runtime behavior derived from intent, knowledge, policy, and data. Once that shift is accepted, the lifecycle must be reframed around continuous alignment and verification rather than around implementation phases alone.
This first article has established the trilogy's central thesis: enterprise software is moving from code-centric production toward behavior-centric production. The second article expands that thesis from lifecycle to architecture, arguing that knowledge, policy, and intent must be designed as first-class architectural concerns. The third article then applies both arguments to an external interaction model in which customer agents and enterprise service agents negotiate over data, scope, and purpose.