Dissecting Copilot Agents
Welcome everyone! This post is part of my wider series on how to implement low-code not just from a technology and process perspective, but also from a people enablement and adoption perspective, to truly maximise value from the platform. This post gives an overview of the structured approach I’ve defined and how I’m breaking down my blog posts:
Holistic Low-Code Enablement - Blog structure and navigation — EmPOWER Your World
We’ve heard of Copilot… A LOT! over the last year or so, but there’s another flavour of it that’s arrived… The Agent… This may conjure up images of James Bond… or Agent Smith from The Matrix… In reality they’re kind of a combination of the two, but a bit friendlier than Agent Smith! They’re here to help us be EVEN MORE effective by bringing specific information to a specific scenario to achieve a specific goal, all with natural language.
Charles Lamanna, Microsoft CVP of Business & Industry Copilot, published a blog post on Sept 16th unveiling Copilot agents as a way to ‘Supercharge your business’! That post explains agents like this…
“Broadly, an agent uses AI to automate and execute business processes, working with you and on your behalf. Agents can build capacity for every individual, team, and organization—from sales to marketing to customer service, and more—enabling you to scale impact like never before.
Agents come in all shapes and sizes. They help you retrieve information from grounding data and reason over it to summarize or answer questions. More capable agents take actions when asked and the most advanced agents are autonomous, operating independently to create and perform plans, orchestrate other agents, and learn when to escalate to an employee for help.”
I’ve had a chance to experiment with these a little as part of Early Adoption… Let’s take a look at what agents are and how they work, based on the information that’s been shared so far through Microsoft blog posts, ‘Learn’ articles, and videos explaining them as an extensibility option on top of Microsoft 365 Copilot.
This picture summarises how I understand agents to work at a high level, based on how I’ve interpreted what I’ve read.
Terms (light blue labels) used to describe the key elements of an agent in the ‘Copilot agents fundamentals’ image in this Learn article
Inputs (orange labels) - inputs to the process, which could be either user inputs or event triggers from internal or external systems
Outputs (green text and arrows) showing how the agent produces its goal
Process steps (black text and arrows) for how these items connect together
And some general description in dark blue
As I understand it, we define a set of instructions in natural language (like the prompts we’ve been learning to use across other AI tools) that describe the context around what we want to do, the rules we want the agent to follow, and what we’re trying to achieve.
These are supplemented with knowledge sources to ground our agent, so it can take our triggers and knowledge, combine them, run through our instructions, and dynamically produce a set of modular actions that take place to get us to our goal. The order of these actions is whatever is most appropriate to produce our goal. They don’t have to run sequentially or immediately, and so can work over longer cycle-time processes.
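To make that flow concrete, here’s a minimal, purely hypothetical Python sketch of how an agent runtime might combine a trigger, knowledge, and instructions into an ordered set of modular actions. None of these class or function names come from Microsoft’s actual implementation — it just illustrates the “dynamically choose and order actions to reach a goal” idea, with a simple rule standing in for the LLM’s reasoning.

```python
# Hypothetical sketch of an agent run - NOT Microsoft's actual API.
# Idea illustrated: trigger + knowledge + instructions -> ordered actions.
from dataclasses import dataclass, field

@dataclass
class Trigger:
    source: str    # e.g. "user" or "manufacturing-line"
    payload: dict

@dataclass
class Agent:
    goal: str
    instructions: list          # natural-language rules
    knowledge: dict             # grounding data, e.g. tolerances
    actions: dict = field(default_factory=dict)  # name -> callable

    def run(self, trigger: Trigger) -> list:
        """Pick which modular actions to run, and in what order.
        A real agent would let an LLM reason over the instructions;
        here we fake that with a hard-coded tolerance check."""
        plan = []
        value = trigger.payload.get("measurement")
        limit = self.knowledge.get("upper_tolerance")
        if value is not None and limit is not None and value > limit:
            plan = ["alert_operator", "log_deviation"]
        for name in plan:
            self.actions[name](trigger)
        return plan

# Example usage with stub actions that just record what happened
log = []
agent = Agent(
    goal="keep production within tolerance",
    instructions=["alert an operator if a measurement exceeds tolerance"],
    knowledge={"upper_tolerance": 10.0},
    actions={
        "alert_operator": lambda t: log.append("alerted operator"),
        "log_deviation": lambda t: log.append("logged deviation"),
    },
)
plan = agent.run(Trigger(source="manufacturing-line",
                         payload={"measurement": 12.5}))
print(plan)  # ['alert_operator', 'log_deviation']
```

The key design point the sketch tries to capture: the actions are modular and the plan is produced at run time from the trigger and knowledge, rather than being a fixed sequential workflow.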
How could they be used?
In those example links there are some examples of what agents could be used for, e.g. a ‘Customer Support agent’ - “The agent has identified new support issues and triaged to other agents.” If we compare this to the picture, it could work something like this…
Goal - to maintain production within tolerance and specification at the highest throughput possible.
Inputs - We have telemetry being collected from a system e.g. a manufacturing line.
Knowledge - Documents describing the specifications and tolerances for certain measurements for certain products, and perhaps another document with the Nelson statistical rules that would trigger intervention (e.g. rule 3, where 6 consecutive data points are increasing / decreasing).
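As a concrete example of the kind of rule an agent could apply from its knowledge, Nelson rule 3 flags a trend when six (or more) consecutive points are steadily increasing or decreasing. A small sketch of how that check could be expressed — the function name and default run length are my own choices, not from any product:

```python
def nelson_rule_3(points, run_length=6):
    """Return True if `run_length` consecutive points are strictly
    increasing or strictly decreasing (Nelson rule 3 trend check)."""
    rising = falling = 1  # current run lengths, counted in points
    for prev, cur in zip(points, points[1:]):
        rising = rising + 1 if cur > prev else 1
        falling = falling + 1 if cur < prev else 1
        if rising >= run_length or falling >= run_length:
            return True
    return False

print(nelson_rule_3([1, 2, 3, 4, 5, 6]))      # True  - 6 rising points
print(nelson_rule_3([1, 2, 3, 4, 3, 4, 5]))   # False - longest run is only 4
```

In the agent scenario, a check like this would run over the incoming telemetry stream, and a True result would be one of the conditions that triggers the alerting instructions below.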
Instructions - monitor key performance indicators to ensure good quality product is manufactured; display a message to alert a manufacturing operator to trend issues; intervene through closed loop control (if available) to adjust machine settings if trending out of specification; monitor and log any outputs of any materials outside of specification; alert a supervisor if operating outside of specification; stop the line if outside of tolerance for x items / minutes etc.
Outputs - Depending on what data is produced and the scenario that’s occurring, the agent would identify which actions are needed and in which order to achieve the specified goal. It might fire a request to another agent to raise a support ticket, it might make automated adjustments, it might send communications or alerts… It might be all of those, or something else. It could also monitor for closure of that ticket at a later date… perhaps taking the investigation and resolution text from that ticket and suggesting an update to its knowledge so it can provide an even better diagnosis and resolution proposal in the future.
Of course, depending on the process, the agent, the risk impact of the activity, our confidence, etc. we may design our agent to have a ‘human in the loop’ to make the final decision of what is done, and how, but we may choose to have automatic actions too.
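A ‘human in the loop’ design could be as simple as gating actions by their risk level: low-risk actions run automatically, everything else waits for a person to decide. The sketch below is illustrative only — the risk levels, queue, and function names are assumptions of mine, not part of Copilot agents:

```python
# Illustrative human-in-the-loop gate - risk levels and queues are
# my own assumptions, not part of any Copilot agents API.
AUTO_APPROVE = {"low"}   # risk levels the agent may act on by itself

approval_queue = []      # actions waiting for a human decision
executed = []            # actions the agent carried out automatically

def dispatch(action: str, risk: str) -> str:
    """Execute low-risk actions automatically; queue the rest
    for a human to approve or reject before anything happens."""
    if risk in AUTO_APPROVE:
        executed.append(action)
        return "executed"
    approval_queue.append(action)
    return "pending approval"

print(dispatch("log deviation", risk="low"))    # executed
print(dispatch("stop the line", risk="high"))   # pending approval
print(approval_queue)                           # ['stop the line']
```

The design choice here is that the gate sits between planning and execution: the agent can still propose any action, but the irreversible or high-impact ones only ever reach a human’s inbox, which matches the confidence/risk trade-off described above.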
This dynamic evaluation of what’s being asked, grounding with structured and unstructured information, and dynamic creation of a workflow to achieve a goal is REALLY exciting and offers huge opportunity!
I’m really excited to see how this evolves as more information is shared, it becomes visible in previews, and it translates into real-world use cases! The future is evolving at an ever-increasing rate!!! Where might we be in 1… 5… 10… 20 years??!?!?
Come and join the discussion around this topic on LinkedIn and if you find this, or any of my other posts useful, share my blog with your network :)