If AI is writing the majority of the code. . . .

I was recently asked the following question:
“If AI is writing the majority of the code, how much of the agile process goes away? What parts need to change? For example, we won’t need story point sizing anymore.”

It took me aback a little, not because I hadn’t been thinking about it, but because it was so up-front about whether I, as an Agile leader, had any value. I responded immediately, stream-of-consciousness style, but that response set off a late-night “storm” of ideas.

My initial response:

Much of the answer depends on where the AI is picking up the code that it needs to write. Additionally, we would have to know the degree to which we trust the code being written by AI. As an organization moves deeper into AI writing the majority of the code, there will still be a crew of people overseeing, reviewing, and re-running the AI. That team will still need to determine what the system is capable of.

The roles of Scrum Master and Product Owner don’t go away; they shift. Going back to “where is the demand for code coming from?”: the PO becomes more of an overseer of the agent, which he or she would likely run to ensure that the system is producing what is right for the humans who will interface with the product. The PO governs the “what” of the system and will need to ensure that what is being funneled to the team is what is actually needed.

The Scrum Master’s responsibilities stay on the “how,” but shift away from things like estimation and double down on ensuring the team is using a consistent set of tools (if there is any further interest in improving, and in determining which changes have helped the team improve in quality, usability, or speed).

Finally, if you still have humans involved, they are still going to need to retrospect on how they improve and on what needs to be trained out of the AI. They’ll still need to work together to ensure that what they are building will integrate for the end user in a usable format.

I say all that with a certain amount of certainty because I’m thinking back to my days when we introduced robotics to the hose assembly production lines at Mark IV Auto. The teams shifted from manual labor to a smaller team of developers, robotics engineers, and maintenance folks. They were still a team, and they met daily to address what new part numbers they would see coming through, what changes they might need to make, any glitches they saw the robots beginning to repeat, and so on.

Now, when the time comes that there are no humans in development at all, then yes, Agile will no longer be necessary. Many, many frameworks and roles will no longer be necessary.

Later that evening, I decided to add to my response:

To continue with my robotic arm story (and I gotta add, that big orange arm was terrifyingly powerful and fast and scared the bejesus out of everyone): when the Easley, SC plant introduced it, the work didn’t disappear (yes, the workforce shrank, but not by as many people as everyone expected). It moved from manual assembly to system supervision, programming, and quality control.

I really feel like software development with AI is following the same pattern. Here’s the thing about that robot arm: when it was working well, it was magnificent. When it got glitchy or some simple line of code was skipped, it could produce MASSIVE numbers of the wrong part number, and when you’re working with stainless steel tubing in a market with 1% margins, that was a disaster. I think that, if not well structured and “fed,” AI has the potential to dramatically increase the cost of poorly planned, poorly clarified product thinking.

  • Bad requirements –> huge volumes of wrong code (sure, it feels cheap, but untangling it?)
  • Weak acceptance criteria –> unstable systems, adding mistakes to mistakes
  • Poor architecture –> massive AI-generated complexity

Agile, as I see it, becomes less about coordinating coding work and more about coordinating thinking, validation, and structured experimentation.

Later that night I tried to sleep, but . . . thoughts, manifestos, lists, and the fact that I hadn’t actually answered “What parts need to change?”:
  • Estimation changes–but planning doesn’t disappear
    • Story point estimation is replaced by Feature complexity and integration risk . . estimates? ratings?
    • Validation effort
    • Experiment cycles
    • Instead of “How long will it take the development team to build this?” we have to ask “How hard will it be to get this working in the system?”
  • Backlog Refinement becomes more important
    • (mentioned in my previous post) AI can generate huge volumes of wrong code very quickly
    • shift towards clearer product intent
    • even clearer acceptance criteria (AI doesn’t always know what to ask)
    • Really consider how you will prompt for (and test for) edge cases
    • Set clear system boundaries
    • clarify data constraints
    • get really clear about UX expectations
    • Introduce a clear Jobs To Be Done structure
  • Lock your Definition of Done DOWN!
    • Test automation
    • Integration testing
    • Security validation
    • Performance validation
    • Usability review
    • know that the new bottleneck is validation pipelines and test coverage (yes, you can have AI set them up for you and even automate them, but you then have to test and monitor those, so yeah, potential bottleneck; there’s a rough sketch of this as an executable acceptance test after these lists)
  • Daily coordination still matters (think about the support team for the big dangerous robot arm); the team still has to sync on
    • Your AI prompt strategies
    • System integration
    • Model behavior anomalies
    • unexpected regressions
    • changes in requirements
    • standup is less about “what I wrote yesterday and what do I expect to finish today” and more “What did we learn about the system yesterday, and what will we have it work on today?”
  • Retrospectives become even more valuable (the questions just shift)
    • Which prompts worked well?
    • Where did we over-trust?
    • Which tools produced better results?
    • Where did the AI hallucinate?
    • Do we have enough information for any of our latest experiment’s measurables?
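To make the “lock your Definition of Done down” point a little more concrete, here is a minimal sketch of acceptance criteria captured as executable tests. Everything in it is invented for illustration (the apply_discount function, the customer tiers, the rules); the point is simply that the edge cases and data constraints live in a pipeline that runs against every AI-generated change, rather than in someone’s head.

```python
# Hypothetical sketch: acceptance criteria as automated tests that gate
# AI-generated code. The function, tiers, and rules below are invented
# purely to illustrate the shape of the idea.
import pytest


def apply_discount(price: float, customer_tier: str) -> float:
    """Stand-in for the AI-generated implementation under review."""
    tiers = {"standard": 0.00, "silver": 0.05, "gold": 0.10}
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (1 - tiers.get(customer_tier, 0.00)), 2)


# Happy-path criteria: the behavior the Product Owner actually asked for.
@pytest.mark.parametrize("price, tier, expected", [
    (100.00, "standard", 100.00),
    (100.00, "silver", 95.00),
    (100.00, "gold", 90.00),
])
def test_known_tiers(price, tier, expected):
    assert apply_discount(price, tier) == expected


# Edge cases the AI was never going to ask about on its own.
def test_unknown_tier_gets_no_discount():
    assert apply_discount(100.00, "platinum") == 100.00


def test_negative_price_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(-1.00, "gold")
```

The specific tooling doesn’t matter; what matters is that the humans on the team own checks like these, wire them into the same pipeline that runs integration, security, and performance validation, and treat a red result as “not done,” no matter how quickly the code was produced.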
And that led me to think about what AI doesn’t eliminate.
  • Product ownership–what problem are we solving, what does good look like, what tradeoffs matter. AI cannot replace product judgment.
  • Cross-functional collaboration–design, architecture, security, integration, customer understanding
  • The healthy practice of incremental delivery–AI can crank out large features quickly, but
    • they have to integrate
    • customers can only accept so much change so fast (imagine if your phone updated with new features every night. C’mon.)
    • rollback complexity increases exponentially

If I had to summarize it, I would say that greater and greater use of AI will reduce the need to estimate coding effort, but it will increase the need for stronger and clearer product definition (and customer understanding), robust testing architecture, and steadily maintained system integration. These are all themes of the Agile principles: care for your customer, care about your product, care about your system, and, yes, care about your team, who works with you to take care of all of those.
