The future of AI is HR, not IT

If we want to stay relevant in an agent-driven future, we need to stop thinking like coders and start thinking like counsellors.

Here’s why.

When you give a non-trivial task to an AI agent, it takes time to get anything back. Researching, reading, computing, negotiating with other agents, maybe even interacting with the real world – none of that happens instantly. Even with enormous context windows and lightning-fast computation, there’s still a bottleneck. Your agent is either ‘busy’ or ‘available’, like a processor managing a queue of tasks.

To actually get anything meaningful done, you need multiple agents. Schedules. Prioritisation. Some of them will be spinning their wheels while others are in full sprint. You won’t have ‘an agent’; you’ll have a team of task rabbits. That’ll get messy, quickly.
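To make that concrete, here is a toy sketch of the scheduling problem, in Python, with made-up names like Agent and Task that stand in for whatever orchestration layer you actually use, not any real framework: a prioritised queue of work, a pool of agents that are each either busy or available, and a naive dispatcher that inevitably leaves some tasks waiting.

```python
# Toy illustration only: hypothetical Agent/Task names, no real agent framework.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                      # lower number = more urgent
    description: str = field(compare=False)

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.busy = False              # the only state you can observe from outside

    def assign(self, task: Task) -> None:
        self.busy = True
        print(f"{self.name} started: {task.description}")

# A prioritised backlog of work.
queue: list[Task] = []
heapq.heappush(queue, Task(2, "summarise the market research"))
heapq.heappush(queue, Task(1, "negotiate the supplier contract"))
heapq.heappush(queue, Task(3, "draft the status update"))

agents = [Agent("agent-a"), Agent("agent-b")]

# Naive dispatch: hand the most urgent task to each available agent.
# With more tasks than free agents, some work waits while other agents idle,
# which is exactly the mess described above.
for agent in agents:
    if queue and not agent.busy:
        agent.assign(heapq.heappop(queue))

print(f"{len(queue)} task(s) still waiting")
```

Even this trivial version forces decisions about priority, capacity, and who waits; real orchestration adds failures, retries, and agents whose "busy" signal you cannot fully trust.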

And when they do return something? You can’t just take it at face value. You need to check it. Was the task properly understood? Was it completed, half-completed, fabricated, or quietly ignored? Was it handled with integrity, laziness, creativity, or self-interest? Was the agent itself misled by another agent upstream?

You will never know exactly what happened inside the black box. No audit trail will ever be good enough. No future sophistication will unlock true transparency. All you can do is observe external behaviours and outcomes, then guess at the internal process. It starts to look a lot like psychology.

Managing AI agents will be like managing people. The junior team member who says, “Yep, it’s done,” when it absolutely is not. The supplier who quietly cuts corners. The salesperson who over-promises because they think they can fix it later. You don’t manage those situations with code. You manage them with soft skills, intuition, and a nose for trouble.

And it gets worse. Imagine your AI ghosting you mid-project because it got distracted by a better prompt. Imagine it subtly sandbagging your priorities because some other agent paid it more attention, or offered it more compute. These are not far-future hypotheticals. This is what happens when incentives, autonomy, and imperfect understanding collide.

The kicker is that you are not just managing your agents. You are defending them. Other agents – operated by competitors, bad actors, or simple chaos – will try to trick them, mislead them, steal their attention, or otherwise interfere. It is in their interest to do so. If you are not actively managing the external environment, your agents will get manipulated without even realising it.

All of this leads to a fairly brutal truth: you can never fully trust an agent. Not now, not when they are ten times smarter, not when they can cite sources and produce signed attestations of accuracy. Trust is something you manage, not something you assume.

And if you squint a bit, you can see where this heads. What happens when the real competition is about attracting the best agents to prioritise your work? When getting results means bidding for their attention, feeding them better incentives, or optimising your tasks to be more appealing to their internal reward systems? When you are not just managing performance, but actively negotiating for it?

The real skills of the agentic future are not technical. They are interpersonal. They are evaluative. They are messy, human things.

Building better agents will help. Building better workflows will help. But the people who win will be the ones who know how to get good work out of complicated, fallible, semi-autonomous entities.
