Hear me out: I think Artificial Intelligence Ethics should be added to the Agile Coaching Growth wheel. AI is a toolset in its own right, and I propose that it’s a solid tool from whose output any good Coach and facilitator could draw tremendous value.
Warning: this is a bit of a rambling post, because I want to get the idea out there and hear back from all of you on it. Please respond in the comments section.
So here’s how I got here. I was lucky enough to join an Agile Coaches breakfast the other day, organized by Art Pittman as part of the Agile Leadership Network, one of the two major Agile meetup groups in my area. The host was Kerri Patterson, and she had a great prompt for us to chat about: Artificial Intelligence–the areas of division between bots and people, the accelerator points of the advent of AI, and the trends and timelines of AI.
As one might expect, we were all over the place on the topic. Some of us expressed fear, some expressed delight, some saw it as a great tool, while others saw ways it might corrupt the way we think. I was part of the “Trends and Timelines” discussion, and we dove very quickly into the ethical challenges that AI presents, and even the industry that will more than likely grow up around that idea. Already, I can hear the wheels cranking at all of the different official bodies gearing up to offer certification in AI Ethics. Just a quick search will reveal the large number of universities and independent agencies already offering various certifications in AI. It’s reminiscent of the wild west show many of us remember from the early days of Agile certifications.
Through the course of the discussion, the notion emerged that AI in the Agile space stands a good chance of being a powerful brainstorming tool for just about any topic. Even Product Owners might use it to take a swipe at a high-level backlog for building something new. (One of the attendees had tried this with decent success.)
But like any tool available to Agile teams, it needs structure. It strikes me that this is exactly the kind of line of questioning a good Agile Coach should be pursuing. It’s not a far stretch; well-structured questions to ChatGPT achieve the best results when posed in the form of a user story and acceptance criteria:
- As a company wanting to be on the cutting edge of industry X
- We need a new application that takes advantage of social media
- So that users in industry X will have something they have never had before
Adding acceptance criteria after the first pass generates more and more detail until the requestor has a pretty solid structure for product mapping–in minutes. Try it yourselves. While the results range from uncanny to eerily exciting, where our coaching on the ethics of AI comes in might be as simple as asking our clients what they think they should do with the information.
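If you do want to try it yourself programmatically rather than in the chat window, here is a minimal sketch of the idea. The helper name, story fields, and acceptance criteria below are my own illustrative placeholders, not anything from the breakfast discussion; the point is simply that the prompt is assembled in user-story shape before it ever reaches the model:

```python
# Sketch: build a user-story-shaped prompt for a chat model.
# All names and criteria here are illustrative placeholders.

def user_story_prompt(role, need, benefit, acceptance_criteria=None):
    """Assemble an 'As a / we need / so that' prompt, optionally
    followed by acceptance criteria for a second, refining pass."""
    lines = [
        f"As a {role},",
        f"we need {need},",
        f"so that {benefit}.",
    ]
    if acceptance_criteria:
        lines.append("Acceptance criteria:")
        lines.extend(f"- {c}" for c in acceptance_criteria)
    lines.append("Draft a high-level product backlog for this.")
    return "\n".join(lines)

# First pass: just the story.
prompt = user_story_prompt(
    "company wanting to be on the cutting edge of industry X",
    "a new application that takes advantage of social media",
    "users in industry X will have something they have never had before",
)

# Second pass: feed back acceptance criteria to generate more detail.
refined = user_story_prompt(
    "company wanting to be on the cutting edge of industry X",
    "a new application that takes advantage of social media",
    "users in industry X will have something they have never had before",
    acceptance_criteria=[
        "Supports at least two social media platforms",
        "Usable by customers with no prior training",
    ],
)
print(refined)
```

The resulting string can be pasted into ChatGPT or sent through whatever API client you use; the structure, not the transport, is what does the work.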
Other questions could include, “Is that an ethical approach to developing a new app?” It’s not really an original thought. The question was your idea, but the answer . . . not so much. Can you look your VP in the eye and say, “This is what I’ve come up with”?
Then there’s the question of the soundness of the response. There are numerous methods to check a body of text to determine whether it is AI generated. How do they work? Mainly by essentially doing a reverse search on the phrases used–in other words, looking for original thoughts. But more importantly, we have to remind people to consider whether AI has all of the most up-to-date information. The folks at OpenAI have been pretty thorough about reminding users that ChatGPT doesn’t have the latest information and can’t be predictive based on recent trends. On top of that, they have added some built-in ethical reminders (e.g., try asking ChatGPT for the numbers that have historically appeared most often in Powerball).
All of that makes it sound like I don’t think AI is very helpful. On the contrary. Like everything in Agile, the real value comes in the conversations it generates. I maintain that AI, if used right, could actually bring us closer together as people. It could get us talking about the markets we serve. It could prompt us to think more deeply about our customers. It could get us all talking more about the systems questions we should be considering rather than just linear “if, then” approaches.
Further, as Coaches, we have some opportunities to demonstrate to the teams with which we are working how AI responses to ideas can be a great starting point for discussions. Having an objective “3rd party voice” against which the team might argue makes for a great “common enemy.” Having that 3rd party not be anyone at the company might give the team more openness.
I tried this recently while trying to distill a large number of ideas down to a few salient points. I fed the ideas into a ChatGPT-powered virtual whiteboard and shared the results with the group. Ultimately, we rejected several of its responses and modified even the one we accepted. It made for a great conversation, and, even though we rejected most of the AI responses, they gave us tighter focus much more quickly than if we had tried to do the “distillation” part ourselves.
I believe the primary ways AI could be misused in the Agile space have more to do with stealing ideas or claiming designs as one’s own than with the misuse of data. I believe that because many of the spaces in which Agile is applied tend to involve knowledge work centered on creativity and design. That seems like a great place for an Agile Coach to be the voice of reason.
I rambled on a bit there, and clearly there is a great deal more to this conversation. The reaction to AI strikes me much the way the internet struck so many of us back in the day. It seemed wild and potentially dangerous. What were once only library catalogs and scholarly research were quickly joined by adult content and bloggers pontificating opinions as fact. We grew into it and found ways to make it of unimaginable value and a means to share incredible amounts of knowledge and creativity. I think the same is in store for us with our use of AI. Sure, it’s going to be abused, and we will all need to speak up, but maybe it will help us think more clearly about who we are and the good of which we are really capable. I’ll leave you with this: I went ahead and asked ChatGPT the following:
“What are some ways that an Agile Coach might guide a team to using AI responsibly?”
It gave me a 13-point response going into the primary ethical pillars already being addressed in think tanks and universities, but finally concluded with the following:
“Remember that responsible AI usage is an ongoing journey, and it requires a team effort to ensure that the benefits of AI are maximized while potential risks are minimized. An Agile Coach can play a pivotal role in integrating responsible AI practices into the Agile development process.”
I gotta say,