Agents, Agents Everywhere...
As the popularity of agentic AI workflows continues to explode, I see a trap on the horizon
… And not a drop of teamwork
Agents!
I won’t belabor the point: Agentic AI workflows are experiencing a huge surge in popularity. This is not news. LLMs tend to perform better when assigned individual roles: manager, designer, producer, reviewer, marketer (or whatever the equivalent titles are in a given industry or vertical).
And then certain people convinced themselves that even that wasn’t enough: agents should have the capability to manage their own sub-teams of agents. Dollar signs appeared in the eyes of the shareholders of three particular companies in specific, and of anyone LLM-adjacent in general.
Not the battle you think
I’m not going to take this subject by its horns and fight the battle you might expect me to fight here. Yes, this approach requires a different skillset from the one a software developer needed even a year ago, let alone two. It increases the distance (particularly if implemented poorly) between the codebase and any human being’s familiarity with that codebase. I’m not going to die on that hill; it’s already a graveyard.
What is Teamwork?
Let’s zoom out a bit and think about teamwork. What happens in a functional team that is truly greater than the sum of its parts? How do one-off ideas, or light-bulb thoughts, or scratchpad mockups become flawlessly executed products? Hint: It’s got nothing to do with making more sub-teams within the team (“Let’s add more middle managers!” — Nobody, Ever). Once you have the bare necessities, it’s not about adding more diverse roles to the team. What’s the special sauce?
Adversity
I’ll say it again: Adversity. Good teams don’t fight, but they do challenge one another, and ideas and solutions improve with each challenge they survive, either on the strength of their existing properties or because a team member has suggested an improvement.
Do copies of copies of copies of the same LLM challenge one another? If one encounters a problem it was trained poorly or incorrectly for, will a copy catch that error and suggest a solution?
Of course not.
Make your AIs Work as a Team
I have a lot more thoughts on this subject that I hope to share in the coming months, but this short entry serves as a protracted introduction. Make a team out of your AIs. Add some productive adversity.
It’s what we do.
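As a concrete (if simplified) sketch of what "productive adversity" could mean, consider routing each proposal through a critic backed by a different model, and only accepting a draft once it survives a challenge. The function names and stub "models" below are hypothetical stand-ins, not any real agent framework's API; in practice `proposer` and `critic` would wrap calls to two genuinely distinct LLMs.

```python
# A minimal sketch of productive adversity: a draft is accepted only after
# a critic (ideally a *different* model, not a copy) stops objecting to it.
# All names here are illustrative; the "models" are plain functions so the
# loop is runnable as-is.

from typing import Callable, Optional

def adversarial_round(
    task: str,
    proposer: Callable[[str], str],
    critic: Callable[[str, str], Optional[str]],
    reviser: Callable[[str, str, str], str],
    max_rounds: int = 3,
) -> str:
    """Propose, challenge, and revise until the critic has no objection."""
    draft = proposer(task)
    for _ in range(max_rounds):
        objection = critic(task, draft)  # None means the draft survived
        if objection is None:
            break
        draft = reviser(task, draft, objection)
    return draft

# Stub stand-ins for two different models:
def propose(task: str) -> str:
    return f"plan v1 for {task}"

def critique(task: str, draft: str) -> Optional[str]:
    return "missing error handling" if "v1" in draft else None

def revise(task: str, draft: str, objection: str) -> str:
    return draft.replace("v1", "v2") + f" (addressed: {objection})"

result = adversarial_round("deploy pipeline", propose, critique, revise)
print(result)
```

The point of the structure, rather than the stubs, is that the critic is a separate participant with its own failure modes: a team member, not another copy nodding along.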