Jan 8, 2025
6 Min
Neeraj Bhargava
When it comes to this generation of AI, front-ended by generative capabilities, it is time to drop stereotypes like "AI can only do routine work," "AI can only automate, not create," or "AI needs humans for quality control and judgment." Notwithstanding claims that AGI will shortly exceed human intelligence, the reality is that AI will soon be able to work digitally on practically everything humans do, including making mistakes, carrying biases, and misjudging situations the way humans do. The effectiveness of AI Agents is a matter of how you train them and how you design your control processes, much as we design systems for humans to work in. This is consistent with the Atlassian report shown in Figure 1 below:
Survey of ~5,000 knowledge workers from Australia, US, India, Germany, and France.
Source: Atlassian - AI Collaboration Report, Nov 2024
The key now is to define and continuously refine how AI Agents and humans work together. Our recommendation is to build the Human-AI Agent partnership around both the capabilities and the deficiencies of AI Agents. As articulated in the first two parts of this series, the key and potentially insurmountable challenges for AI Agents working autonomously are comprehensive access to all relevant data and the level of trust one can build in their responses to a situation. If one breaks this down into actual work, one can see AI Agents taking a major part of the workload when it comes to:
Coding: the first few versions of any new product can be generated through simple commands
Basic Research: whether it is business, legal, or scientific, the knowledge base behind AI models will offer a compelling response and framework to any problem; it may be incomplete and even somewhat incorrect, but it will be a great starting point for humans to build further
Basic Analysis: anything standardized - reporting, dicing and slicing of facts, building first-level spreadsheets and presentations - can be done by AI Agents. They can go further and run regressions, assess multi-modal content, and surface new possibilities that become ripe for more informed and sophisticated human assessment
Contextual Conversations: much has already been written about this; basic calls and responses can be handled almost entirely by AI Agents.
These four activities form a major chunk of knowledge work and pose an unambiguous challenge to the large-scale employment this kind of work currently provides. So what will humans do? Different work, and a lot of it:
Data Prep & Sanitization: note our previous views on how a lot of the contextual data is either poorly documented or undocumented, plus there will always be new data
Risk Management: assess and manage tech, legal, and business risks
Exceptions Management: respond to transactions and interactions that do not fit the standard pattern
AI Agent Design: we are a long way away from AI Agents designing themselves
Business Consulting: defining the differentiation, meaning, and purpose of businesses, reconfiguring operations, and redesigning the new AI-enabled organizations - consultants as change agents will thrive and have the time of their lives
Authenticators and Quality Managers: while AI tools will also manage the quality of AI Agents' work, humans cannot abdicate their responsibilities as owners of an outcome - insurance companies will agree with that
Like many other partnerships, the Human-AI Agent partnership design question invites the answer, "it depends on…". How you provide the right data and manage the appropriate level of trust in AI becomes the fulcrum for defining who does what and how the capabilities blend.