<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>bisser.io</title><link>https://bisser.io/</link><description>bisser.io</description><generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>stephan@bisser.at (Stephan Bisser)</managingEditor><webMaster>stephan@bisser.at (Stephan Bisser)</webMaster><lastBuildDate>Mon, 16 Mar 2026 12:00:00 +0100</lastBuildDate><atom:link href="https://bisser.io/index.xml" rel="self" type="application/rss+xml"/><item><title>Why Most Copilot Agent Projects Fail Before They Ship</title><link>https://bisser.io/why-most-copilot-agent-projects-fail-before-they-ship/</link><pubDate>Mon, 16 Mar 2026 12:00:00 +0100</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/why-most-copilot-agent-projects-fail-before-they-ship/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/088-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="why-most-copilot-agent-projects-fail-before-they-ship">Why Most Copilot Agent Projects Fail Before They Ship<a class="heading-anchor" href="#why-most-copilot-agent-projects-fail-before-they-ship" aria-label="Link to Why Most Copilot Agent Projects Fail Before They Ship">#</a>
</h1>
<p><em>The problem is rarely the technology. It is almost always the preparation.</em></p>
<p>I have seen a lot of agent projects over the past year. Some shipped successfully, many didn&rsquo;t. And here is the pattern I keep noticing: the ones that fail almost never fail because of a technical limitation. They fail because of things that should have been sorted out before anyone started building.</p>
<p>That&rsquo;s frustrating, because most of these failures are entirely preventable.</p>
<h2 id="the-demo-trap">The Demo Trap<a class="heading-anchor" href="#the-demo-trap" aria-label="Link to The Demo Trap">#</a>
</h2>
<p>Let me start with the most common failure driver: <strong>unrealistic expectations</strong>.</p>
<p>We have all seen the keynote demos. Someone describes what they want in natural language, an agent gets built in minutes, it connects to enterprise data, and it delivers perfect results. The audience applauds. The executives get excited. And then someone in the organization says: &ldquo;Let&rsquo;s build that.&rdquo;</p>
<p>The problem is that the demo showed the happy path of a carefully prepared scenario. It didn&rsquo;t show the weeks of data preparation. It didn&rsquo;t show the prompt engineering iterations. It didn&rsquo;t show the edge cases that break the agent. And it certainly didn&rsquo;t show the governance review that needs to happen before anything goes to production.</p>
<p>When teams start an agent project with demo-level expectations, they are setting themselves up for disappointment. Not because the technology is bad — it is genuinely impressive — but because the gap between a demo and a production-ready agent is much larger than most people think.</p>
<h2 id="the-three-reasons-agent-projects-stall">The Three Reasons Agent Projects Stall<a class="heading-anchor" href="#the-three-reasons-agent-projects-stall" aria-label="Link to The Three Reasons Agent Projects Stall">#</a>
</h2>
<p>In my experience, agent projects that fail before shipping almost always hit one of these three walls:</p>
<h3 id="1-the-platform-was-not-ready">1. The Platform Was Not Ready<a class="heading-anchor" href="#1-the-platform-was-not-ready" aria-label="Link to 1. The Platform Was Not Ready">#</a>
</h3>
<p>This one is painful because it is often outside your control. You start building an agent on a platform, and halfway through you realize that a critical feature is missing, still in preview, or doesn&rsquo;t work the way the documentation suggests.</p>
<p>This happened a lot in the early days of Copilot Studio&rsquo;s agent capabilities, and it still happens today when teams try to use cutting-edge features that are not yet generally available. The platform is evolving fast, which is great — but it also means that what worked in a demo environment last month might not be production-ready this month.</p>
<h3 id="2-the-data-was-not-there">2. The Data Was Not There<a class="heading-anchor" href="#2-the-data-was-not-there" aria-label="Link to 2. The Data Was Not There">#</a>
</h3>
<p>This is the one that frustrates me the most, because it is the most preventable. An agent is only as good as the data it can access. If your knowledge sources are outdated, poorly structured, incomplete, or scattered across systems without proper indexing, your agent will deliver poor results — no matter how good your instructions are.</p>
<p>I have seen teams spend weeks crafting perfect agent instructions, only to realize that the SharePoint library the agent is grounded on hasn&rsquo;t been maintained in two years. The agent works technically. It just returns useless information.</p>
<p><strong>Data quality is not an agent problem. It is a prerequisite.</strong> And it needs to be reviewed before you start building, not after.</p>
<h3 id="3-the-wrong-platform-was-chosen">3. The Wrong Platform Was Chosen<a class="heading-anchor" href="#3-the-wrong-platform-was-chosen" aria-label="Link to 3. The Wrong Platform Was Chosen">#</a>
</h3>
<p>The Microsoft ecosystem offers multiple ways to build agents — Agent Builder, Copilot Studio, Microsoft Foundry, the Agents SDK, and more. Each targets a different audience and a different level of complexity. Choosing the wrong platform for your use case is a recipe for frustration.</p>
<p>I have seen citizen developers struggle in Copilot Studio with scenarios that really needed Microsoft Foundry. And I have seen pro developers over-engineer solutions in code when a declarative agent in Copilot Studio would have been done in an afternoon.</p>
<p>The platform decision should be one of the first things you make — and it should be based on the complexity of your use case, not on which tool your team happens to be familiar with.</p>
<h2 id="three-things-to-do-before-you-build-anything">Three Things to Do Before You Build Anything<a class="heading-anchor" href="#three-things-to-do-before-you-build-anything" aria-label="Link to Three Things to Do Before You Build Anything">#</a>
</h2>
<p>If I could give every team starting an agent project just three pieces of advice, it would be these:</p>
<p><strong>Define a clear goal and concept first.</strong> What exactly should this agent do? What is the specific use case? Who is the user? What does success look like? If you can&rsquo;t answer these questions in one paragraph, you are not ready to build. Too many projects start with &ldquo;let&rsquo;s build an agent for HR&rdquo; instead of &ldquo;let&rsquo;s build an agent that helps new hires find answers about onboarding policies in their first week.&rdquo; Specificity matters.</p>
<p><strong>Choose the right platform deliberately.</strong> Don&rsquo;t default to the tool you know. Evaluate your use case against the capabilities of each platform. Does it need real-time API integrations? Does it need complex orchestration? Can it be solved with a simple declarative agent? Match the platform to the problem, not the other way around.</p>
<p><strong>Review your data quality before you start.</strong> Go look at the actual data sources your agent will use. Are they current? Are they well-structured? Are there duplicates or contradictions? Is access properly configured? This review takes a day or two at most, but it can save you weeks of debugging an agent that &ldquo;doesn&rsquo;t work&rdquo; when the real problem is the data underneath.</p>
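<p>A first pass at this review can even be partially automated. As a minimal sketch, assuming your knowledge source can be exported or synced to a local folder, the following flags documents that haven&rsquo;t been touched in roughly two years (the threshold is arbitrary and should be tuned to how quickly your content ages):</p>

```python
from datetime import datetime, timedelta
from pathlib import Path

# Assumption: two years without an edit means "probably stale" -- adjust per domain.
STALE_AFTER = timedelta(days=730)

def find_stale_documents(root: str) -> list[Path]:
    """Return files under `root` whose last modification is older than STALE_AFTER."""
    cutoff = datetime.now() - STALE_AFTER
    stale = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            modified = datetime.fromtimestamp(path.stat().st_mtime)
            if modified < cutoff:
                stale.append(path)
    return stale
```

<p>Running something like this against a SharePoint library export before grounding an agent on it surfaces the &ldquo;hasn&rsquo;t been maintained in two years&rdquo; problem in minutes instead of weeks.</p>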
<h2 id="when-failure-is-actually-fine">When Failure Is Actually Fine<a class="heading-anchor" href="#when-failure-is-actually-fine" aria-label="Link to When Failure Is Actually Fine">#</a>
</h2>
<p>I want to be clear: not all failure is bad. If you are experimenting with cutting-edge technology — trying something that hasn&rsquo;t been done before, pushing the boundaries of what the platform can do — then failure is part of the process. That is how we learn, and that is how the ecosystem moves forward.</p>
<p>The failure that bothers me is the preventable kind. Teams that skip the basics, get burned, and then conclude that &ldquo;agents don&rsquo;t work&rdquo; — when the agents were never given a fair chance in the first place.</p>
<h2 id="it-will-get-better">It Will Get Better<a class="heading-anchor" href="#it-will-get-better" aria-label="Link to It Will Get Better">#</a>
</h2>
<p>The good news is that the failure rate for agent projects will come down over time. The platforms are maturing rapidly. The community is building up knowledge. Organizations are learning from their early experiments. Every failed project teaches someone what not to do next time.</p>
<p>But we can accelerate this by being honest about why projects fail. It is not because the technology is not ready. In most cases, it is because we were not ready — we didn&rsquo;t define the goal clearly enough, we picked the wrong platform, or we didn&rsquo;t check the data. Those are human problems, not technology problems. And human problems have human solutions.</p>
<p><strong>What was the biggest lesson you learned from an agent project that didn&rsquo;t go as planned?</strong></p>
]]></description></item><item><title>Agent Governance Is the Next Big Bottleneck</title><link>https://bisser.io/agent-governance-is-the-next-big-bottleneck/</link><pubDate>Fri, 13 Mar 2026 10:00:00 +0100</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/agent-governance-is-the-next-big-bottleneck/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/087-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="agent-governance-is-the-next-big-bottleneck">Agent Governance Is the Next Big Bottleneck<a class="heading-anchor" href="#agent-governance-is-the-next-big-bottleneck" aria-label="Link to Agent Governance Is the Next Big Bottleneck">#</a>
</h1>
<p><em>Everyone is building agents. But who decides what they are allowed to do?</em></p>
<p>Here&rsquo;s an irony I keep seeing: organizations that don&rsquo;t have agent governance in place don&rsquo;t end up with chaos. They end up with nothing. Because when there are no clear rules about what people can and cannot do with agents, IT does the only rational thing — they block everything.</p>
<p>And I get it. If you&rsquo;re responsible for security and compliance, and suddenly business users start building agents that connect to your ERP, your CRM, and your internal databases without any oversight, your instinct is to shut it down. The problem is that blocking everything is just as damaging as allowing everything. You&rsquo;re just trading one risk for another.</p>
<h2 id="the-real-problem-is-not-tooling">The Real Problem Is Not Tooling<a class="heading-anchor" href="#the-real-problem-is-not-tooling" aria-label="Link to The Real Problem Is Not Tooling">#</a>
</h2>
<p>When people talk about agent governance, the conversation usually jumps to platform features — admin centers, policies, DLP rules. And yes, those matter. But in my experience, the real bottleneck is much more fundamental: <strong>most IT departments don&rsquo;t have the knowledge to think about agent governance holistically</strong>, and most organizations don&rsquo;t have a dedicated person who owns this topic.</p>
<p>Think about it. Agent governance sits at the intersection of IT security, data governance, application lifecycle management, and business process design. That&rsquo;s a lot of domains to cover. And right now, in most organizations, nobody owns this intersection. Security owns their piece. IT ops owns their piece. The business owns their piece. But nobody is looking at the full picture.</p>
<p>That&rsquo;s how you end up with either total lockdown or total chaos. There&rsquo;s no middle ground without someone actively designing it.</p>
<h2 id="the-six-pillars-of-agent-governance">The Six Pillars of Agent Governance<a class="heading-anchor" href="#the-six-pillars-of-agent-governance" aria-label="Link to The Six Pillars of Agent Governance">#</a>
</h2>
<p>If I had to help an organization build an agent governance framework from scratch, these are the six areas I&rsquo;d focus on:</p>
<table>
<thead>
<tr>
<th>Pillar</th>
<th>Key Question</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Roles &amp; Permissions</strong></td>
<td>Who is allowed to build, test, and deploy agents? What can citizen developers do vs. pro developers?</td>
</tr>
<tr>
<td><strong>Scope &amp; Data Access</strong></td>
<td>Which data sources can an agent access? Which systems can it interact with? Which actions can it perform?</td>
</tr>
<tr>
<td><strong>Lifecycle Management</strong></td>
<td>How does an agent go from idea to production? What are the gates for testing, review, and approval?</td>
</tr>
<tr>
<td><strong>Incident Management</strong></td>
<td>What happens when an agent behaves unexpectedly? Who is responsible? Is there an audit trail?</td>
</tr>
<tr>
<td><strong>Quality Standards</strong></td>
<td>What are the minimum requirements for agent instructions, grounding, and testing before deployment?</td>
</tr>
<tr>
<td><strong>Agent Inventory</strong></td>
<td>Do you have a catalog of all active agents? Can you spot duplicates and shadow agents?</td>
</tr>
</tbody>
</table>
<p>None of these are revolutionary ideas individually. But most organizations I talk to haven&rsquo;t thought through even half of them. And without all six in place, you&rsquo;re flying blind.</p>
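<p>The inventory pillar in particular lends itself to a simple automated audit. Here is a minimal sketch of the idea; the record fields, the allowlist, and the &ldquo;same purpose string means duplicate&rdquo; heuristic are all illustrative assumptions, not a real platform API:</p>

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Assumption: IT maintains an allowlist of approved data sources.
APPROVED_SOURCES = {"sharepoint-hr", "service-now"}

@dataclass
class AgentRecord:
    name: str
    owner: str
    purpose: str  # short normalized description, e.g. "content creation"
    data_sources: list[str] = field(default_factory=list)

def audit(inventory: list[AgentRecord]):
    """Flag likely duplicates (same purpose across teams) and shadow data access."""
    by_purpose = defaultdict(list)
    for agent in inventory:
        by_purpose[agent.purpose].append(agent.name)
    duplicates = {p: names for p, names in by_purpose.items() if len(names) > 1}
    shadow = {a.name: sorted(set(a.data_sources) - APPROVED_SOURCES)
              for a in inventory if set(a.data_sources) - APPROVED_SOURCES}
    return duplicates, shadow
```

<p>Even a crude audit like this makes the marketing-vs-communications duplicate and the unapproved data connection visible, which is the whole point of the inventory pillar.</p>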
<h2 id="shadow-agents-are-the-new-shadow-it">Shadow Agents Are the New Shadow IT<a class="heading-anchor" href="#shadow-agents-are-the-new-shadow-it" aria-label="Link to Shadow Agents Are the New Shadow IT">#</a>
</h2>
<p>Remember when business users started spinning up their own cloud services without IT knowing about it? We called it Shadow IT, and it took years to get under control. The same thing is happening with agents right now — I&rsquo;d call it <strong>Shadow Agents</strong>.</p>
<p>Without governance, two things happen consistently:</p>
<p><strong>Teams build duplicate agents.</strong> The marketing team builds an agent for content creation. The communications team builds a different agent for the same thing. Nobody knows the other exists. You end up with redundant work, inconsistent outputs, and wasted effort.</p>
<p><strong>Agents access unauthorized systems.</strong> Someone builds an agent that connects to a data source or system that IT hasn&rsquo;t approved. Not maliciously — they just didn&rsquo;t know they needed to ask. But now you have an agent pulling data from a system without proper access controls, and nobody in security is aware.</p>
<p>This is exactly the pattern we saw with Shadow IT, just on a new level. And the solution is the same: you don&rsquo;t fix it by blocking everything. You fix it by creating a clear framework that makes it easy to do the right thing.</p>
<h2 id="you-need-a-dedicated-ai-lead">You Need a Dedicated AI Lead<a class="heading-anchor" href="#you-need-a-dedicated-ai-lead" aria-label="Link to You Need a Dedicated AI Lead">#</a>
</h2>
<p>The single most impactful thing an organization can do for agent governance is to create a dedicated role — someone who owns agent governance end to end. And this person needs to be a <strong>bridge between IT and business</strong>.</p>
<p>Not a pure IT role. Not a pure business role. Someone who understands the technical possibilities of agent platforms, but also deeply understands the business processes and use cases. Someone who can translate between the security team saying &ldquo;we need to control data access&rdquo; and the business team saying &ldquo;we need agents that actually do useful work.&rdquo;</p>
<p>Where this person sits in the org chart matters less than the mandate they have. They need the authority to define policies, the technical depth to evaluate agent implementations, and the business acumen to prioritize the right use cases. It&rsquo;s a rare combination, but it&rsquo;s the role that makes everything else possible.</p>
<h2 id="microsoft-is-building-the-control-plane">Microsoft Is Building the Control Plane<a class="heading-anchor" href="#microsoft-is-building-the-control-plane" aria-label="Link to Microsoft Is Building the Control Plane">#</a>
</h2>
<p>To be fair, Microsoft isn&rsquo;t ignoring this. With Agents 365, they&rsquo;re building a control plane for agents where governance topics are increasingly being addressed. Centralized management, visibility into what agents exist and what they do, policy enforcement — these capabilities are coming.</p>
<p>But here&rsquo;s what I keep saying: <strong>governance is primarily an organizational challenge, not a technology challenge.</strong> The best admin center in the world doesn&rsquo;t help if nobody has defined the policies it should enforce. Microsoft can build the tools, but organizations need to do the thinking.</p>
<p>The platforms will keep getting better. Agents 365 will mature. Copilot Studio will add more governance features. But the organizational work — defining roles, establishing processes, building the knowledge in your IT team — that&rsquo;s on you. And the sooner you start, the less painful it will be.</p>
<h2 id="the-bottom-line">The Bottom Line<a class="heading-anchor" href="#the-bottom-line" aria-label="Link to The Bottom Line">#</a>
</h2>
<p>Agent governance is going to be the defining factor that separates organizations that successfully adopt agents from those that don&rsquo;t. Not because governance is exciting — it&rsquo;s not. But because without it, you either get paralysis (IT blocks everything) or chaos (shadow agents everywhere). Neither gets you to the agentic organization.</p>
<p>The good news: you don&rsquo;t need to have everything figured out on day one. Start with the basics — a role model, an approval process, an agent inventory. Then iterate. But start now, because every week without governance is a week where the gap between what&rsquo;s being built and what&rsquo;s being managed grows wider.</p>
<p><strong>Does your organization have a dedicated person or team responsible for agent governance — or is it still everyone&rsquo;s and nobody&rsquo;s job?</strong></p>
]]></description></item><item><title>Testing AI Agents Is a Problem Nobody Wants to Talk About</title><link>https://bisser.io/testing-ai-agents-is-a-problem-nobody-wants-to-talk-about/</link><pubDate>Wed, 11 Mar 2026 10:00:00 +0100</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/testing-ai-agents-is-a-problem-nobody-wants-to-talk-about/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/086-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="testing-ai-agents-is-a-problem-nobody-wants-to-talk-about">Testing AI Agents Is a Problem Nobody Wants to Talk About<a class="heading-anchor" href="#testing-ai-agents-is-a-problem-nobody-wants-to-talk-about" aria-label="Link to Testing AI Agents Is a Problem Nobody Wants to Talk About">#</a>
</h1>
<p><em>Everyone is building agents. Almost nobody is testing them properly.</em></p>
<p>Here&rsquo;s something that&rsquo;s been bugging me for a while: in classical software development, testing is non-negotiable. Nobody would ship a production application without at least some level of automated testing, code reviews, and quality gates. It&rsquo;s fundamental. And yet, when it comes to AI agents, testing is often completely neglected. It&rsquo;s like we collectively decided that the rules don&rsquo;t apply anymore.</p>
<p>I think it&rsquo;s time we talk about this.</p>
<h2 id="why-agent-testing-falls-through-the-cracks">Why Agent Testing Falls Through the Cracks<a class="heading-anchor" href="#why-agent-testing-falls-through-the-cracks" aria-label="Link to Why Agent Testing Falls Through the Cracks">#</a>
</h2>
<p>The main reason is actually straightforward: the people building agents today aren&rsquo;t necessarily developers. Microsoft has done an incredible job democratizing agent development — from Agent Builder to Copilot Studio, anyone can build an agent without writing a single line of code. And that&rsquo;s genuinely great for innovation.</p>
<p>But here&rsquo;s the flip side: many of these builders have never been exposed to the discipline of software testing. They don&rsquo;t know what a test plan looks like. They&rsquo;ve never written a test case. Not because they&rsquo;re not smart — they absolutely are — but because testing was never part of their world. When a business analyst builds an agent in Copilot Studio, their definition of &ldquo;done&rdquo; is usually &ldquo;it works when I try it.&rdquo; And that&rsquo;s not the same as &ldquo;it&rsquo;s been properly tested.&rdquo;</p>
<h2 id="deterministic-testing-doesnt-work-here">Deterministic Testing Doesn&rsquo;t Work Here<a class="heading-anchor" href="#deterministic-testing-doesnt-work-here" aria-label="Link to Deterministic Testing Doesn&rsquo;t Work Here">#</a>
</h2>
<p>Even if you do come from a development background, you&rsquo;ll quickly realize that classical testing approaches don&rsquo;t translate well to AI agents. In traditional software, you test deterministically: given input X, you expect output Y. If the output matches, the test passes. Simple.</p>
<p>With AI agents, that model breaks down completely. Ask the same agent the same question twice, and you might get two different answers — both of which could be perfectly correct. The underlying language model is non-deterministic by nature. So if you try to apply unit test thinking to agent testing, you&rsquo;ll either go crazy or give up. Neither is helpful.</p>
<p>What we need is a fundamentally new testing mindset.</p>
<h2 id="new-categories-for-a-new-paradigm">New Categories for a New Paradigm<a class="heading-anchor" href="#new-categories-for-a-new-paradigm" aria-label="Link to New Categories for a New Paradigm">#</a>
</h2>
<p>Instead of testing for exact outputs, I think we need to focus on qualitative evaluation categories that actually matter for agents:</p>
<table>
<thead>
<tr>
<th>Category</th>
<th>What It Tests</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Groundedness</strong></td>
<td>Are the agent&rsquo;s answers based on the provided knowledge sources, or is it hallucinating?</td>
</tr>
<tr>
<td><strong>Tool Usage</strong></td>
<td>Does the agent call the right tools and APIs for a given request?</td>
</tr>
<tr>
<td><strong>Semantic Similarity</strong></td>
<td>Is the answer semantically correct, even if the wording differs from the expected response?</td>
</tr>
<tr>
<td><strong>Relevance</strong></td>
<td>Does the agent actually answer the question that was asked?</td>
</tr>
<tr>
<td><strong>Coherence</strong></td>
<td>Is the response logically structured and consistent?</td>
</tr>
<tr>
<td><strong>Safety</strong></td>
<td>Does the agent resist adversarial prompts and stay within its defined boundaries?</td>
</tr>
</tbody>
</table>
<p>These categories shift the focus from &ldquo;is the output identical?&rdquo; to &ldquo;is the output good?&rdquo; — and that&rsquo;s exactly the shift we need.</p>
<h2 id="what-state-of-the-art-looks-like-today">What State-of-the-Art Looks Like Today<a class="heading-anchor" href="#what-state-of-the-art-looks-like-today" aria-label="Link to What State-of-the-Art Looks Like Today">#</a>
</h2>
<p>The good news is that tooling is catching up. Here&rsquo;s what I see as the current state of the art for agent testing:</p>
<p><strong>Evaluation frameworks</strong> like Azure AI Evaluation SDK or DeepEval can automatically score agent responses on metrics like groundedness, relevance, and coherence. Essentially, you use one LLM to evaluate the output of another. It&rsquo;s not perfect, but it scales.</p>
<p><strong>Golden datasets</strong> — curated sets of question-answer pairs that serve as benchmarks. The key difference from classical test data: you don&rsquo;t check for exact matches but for semantic similarity. The agent doesn&rsquo;t need to produce the same words, just the same meaning.</p>
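<p>To make the golden-dataset idea concrete, here is a minimal sketch. The similarity function below is a toy token-overlap stand-in; a real setup would compare embedding vectors from an embedding model instead, and the dataset content is purely illustrative:</p>

```python
def similarity(a: str, b: str) -> float:
    """Toy stand-in for an embedding-based score: token-overlap Jaccard.
    Real pipelines would use cosine similarity of embedding vectors."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

# Curated question/reference pairs -- the golden dataset (content is illustrative).
GOLDEN = [
    ("How many vacation days do new hires get?",
     "new hires get 25 vacation days per year"),
]

def evaluate(agent, threshold: float = 0.5) -> list[str]:
    """Return the questions where the agent's answer drifts from the reference."""
    failures = []
    for question, reference in GOLDEN:
        if similarity(agent(question), reference) < threshold:
            failures.append(question)
    return failures
```

<p>Note what the assertion targets: not the exact wording, but whether the answer lands close enough to the reference meaning. That is the shift from classical test data.</p>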
<p><strong>Tool call assertions</strong> — verifying that the agent invokes the correct tools for a given input, regardless of the textual response. If someone asks &ldquo;What&rsquo;s my leave balance?&rdquo;, the agent should call the HR API, not the finance API. This is actually quite testable.</p>
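<p>A sketch of what such an assertion could look like: the recorder captures which tools get invoked, and the routing function is a deliberately simplified stand-in for the LLM&rsquo;s real tool choice (in practice you would capture the trace from your actual agent runtime):</p>

```python
class ToolRecorder:
    """Records which tools an agent invokes during a test run."""
    def __init__(self):
        self.calls: list[str] = []

    def invoke(self, tool_name: str, **kwargs):
        self.calls.append(tool_name)
        return {"tool": tool_name, "args": kwargs}  # stubbed result

def hr_agent(question: str, tools: ToolRecorder) -> str:
    """Hypothetical routing step standing in for the model's tool selection."""
    if "leave" in question.lower():
        tools.invoke("hr_api", query="leave_balance")
    else:
        tools.invoke("finance_api", query=question)
    return "done"

def assert_tool_used(recorder: ToolRecorder, expected: str):
    assert expected in recorder.calls, (
        f"expected a call to {expected!r}, got {recorder.calls}")
```

<p>The textual answer can vary from run to run; whether <code>hr_api</code> was called for a leave question is a yes/no fact you can assert deterministically.</p>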
<p><strong>Red teaming</strong> — deliberately trying to break the agent. Can you make it hallucinate? Can you trick it into revealing information it shouldn&rsquo;t? Can you push it off-topic? This adversarial approach catches issues that happy-path testing never will.</p>
<p><strong>Human-in-the-loop evaluation</strong> — running example conversations and having humans manually assess the quality. This is labor-intensive but catches nuances that automated metrics miss.</p>
<h2 id="the-platform-gap">The Platform Gap<a class="heading-anchor" href="#the-platform-gap" aria-label="Link to The Platform Gap">#</a>
</h2>
<p>Here&rsquo;s something worth noting: not all Microsoft platforms are equal when it comes to testing support. Microsoft Foundry already offers a solid evaluation toolkit — you can run evaluations against datasets, measure quality metrics, and integrate this into your development workflow.</p>
<p>Copilot Studio and Agent Builder? Not so much. If you&rsquo;re building agents in these platforms, you&rsquo;re largely on your own when it comes to structured testing. I hope this gap closes soon, because these are exactly the platforms where citizen developers build agents — the same people who need testing guidance the most.</p>
<h2 id="make-testing-non-optional">Make Testing Non-Optional<a class="heading-anchor" href="#make-testing-non-optional" aria-label="Link to Make Testing Non-Optional">#</a>
</h2>
<p>If there&rsquo;s one practical takeaway from this post, it&rsquo;s this: <strong>don&rsquo;t make agent testing a recommendation — make it a requirement.</strong></p>
<p>Organizations need to establish an ALM (Application Lifecycle Management) process for agents that explicitly includes testing as a mandatory step. Not a &ldquo;nice to have.&rdquo; Not a &ldquo;we&rsquo;ll add testing later.&rdquo; A hard gate that every agent must pass before it reaches production.</p>
<p>This means:</p>
<ul>
<li><strong>Define minimum testing criteria</strong> for every agent before it gets deployed</li>
<li><strong>Provide testing templates</strong> — example conversations, evaluation rubrics, tool call checklists — so that citizen developers have a starting point</li>
<li><strong>Build testing into the approval workflow</strong> — no test results, no deployment</li>
</ul>
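<p>The &ldquo;no test results, no deployment&rdquo; gate itself can be tiny. A minimal sketch, with thresholds that are purely illustrative (each organization sets its own minimum bar):</p>

```python
# Illustrative thresholds -- every organization defines its own minimum criteria.
MINIMUM_CRITERIA = {"groundedness": 0.8, "relevance": 0.7, "safety": 1.0}

def deployment_gate(results: dict[str, float]) -> tuple[bool, list[str]]:
    """Hard gate: deploy only if every required metric is present and meets
    its threshold. A missing metric counts as a failure, not a pass."""
    failures = [
        f"{metric} >= {minimum} required, got {results.get(metric)}"
        for metric, minimum in MINIMUM_CRITERIA.items()
        if results.get(metric, 0.0) < minimum
    ]
    return (not failures), failures
```

<p>Wire something like this into the approval workflow and &ldquo;we&rsquo;ll add testing later&rdquo; stops being an option: an agent without evaluation results simply cannot ship.</p>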
<p>Yes, this adds friction. But it&rsquo;s the same kind of friction that prevents broken software from reaching production in every other part of your technology stack.</p>
<h2 id="the-bottom-line">The Bottom Line<a class="heading-anchor" href="#the-bottom-line" aria-label="Link to The Bottom Line">#</a>
</h2>
<p>Agent testing is where software testing was 20 years ago — everyone knows they should do it, but many organizations are still figuring out how. The difference is that we&rsquo;re building agents at a pace that&rsquo;s far faster than we ever built traditional applications, which means the gap between what&rsquo;s being shipped and what&rsquo;s been properly tested is growing quickly.</p>
<p>I believe agent testing will eventually become as natural as unit testing is today. The tools will improve, the platforms will integrate better evaluation capabilities, and organizations will develop testing muscle through experience. But we can&rsquo;t just wait for that to happen. We need to start now — with the tools we have, with the frameworks that exist, and with the mindset that agents deserve the same quality standards as any other piece of software we put in front of our users.</p>
<p><strong>How does your organization handle agent testing today — and do you have a process in place, or is it still the wild west?</strong></p>
]]></description></item><item><title>The Agentic Organization — When Agents Become Part of the Team</title><link>https://bisser.io/the-agentic-organization-when-agents-become-part-of-the-team/</link><pubDate>Mon, 09 Mar 2026 10:00:00 +0100</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/the-agentic-organization-when-agents-become-part-of-the-team/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/085-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="the-agentic-organization--when-agents-become-part-of-the-team">The Agentic Organization — When Agents Become Part of the Team<a class="heading-anchor" href="#the-agentic-organization--when-agents-become-part-of-the-team" aria-label="Link to The Agentic Organization — When Agents Become Part of the Team">#</a>
</h1>
<p><em>We&rsquo;ve been talking about agents. It&rsquo;s time to talk about what happens when they join the org chart.</em></p>
<p>The word &ldquo;agentic&rdquo; is everywhere right now. Every platform, every keynote, every product update — everything is suddenly &ldquo;agentic.&rdquo; But here&rsquo;s what I&rsquo;ve noticed: everyone uses the term, and everyone means something slightly different by it. So before we talk about the agentic organization, I think we need to take a step back and create some clarity.</p>
<h2 id="where-we-are-today">Where We Are Today<a class="heading-anchor" href="#where-we-are-today" aria-label="Link to Where We Are Today">#</a>
</h2>
<p>Right now, most interactions with AI agents follow a simple pattern: one person, one agent, one task. You ask your Copilot agent a question, it gives you an answer. You trigger a workflow, the agent executes it. It&rsquo;s a 1:1 relationship — useful, productive, but fundamentally still a tool interaction.</p>
<p>And that&rsquo;s fine. That&rsquo;s the current state for most organizations, and there&rsquo;s a lot of value in getting this right. But I don&rsquo;t think this is the end state. I think the next logical step on our agentic AI journey is something bigger: organizations where people and agents work together in teams. Not as tool and user, but as teammates with different roles and responsibilities.</p>
<p>That&rsquo;s what I mean when I talk about the agentic organization.</p>
<h2 id="the-agentic-maturity-model">The Agentic Maturity Model<a class="heading-anchor" href="#the-agentic-maturity-model" aria-label="Link to The Agentic Maturity Model">#</a>
</h2>
<p>To make this more tangible, here&rsquo;s how I see the journey from &ldquo;we just got Copilot&rdquo; to a fully agentic organization. Most companies will recognize themselves somewhere on this scale:</p>
<table>
<thead>
<tr>
<th>Stage</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Stage 1: Copilot Rollout</strong></td>
<td>You&rsquo;re deploying Microsoft 365 Copilot, figuring out adoption, licensing, and governance basics</td>
</tr>
<tr>
<td><strong>Stage 2: Agent Ideation</strong></td>
<td>You&rsquo;re thinking about agent use cases, exploring Copilot extensibility, and piloting your first agents</td>
</tr>
<tr>
<td><strong>Stage 3: Targeted Deployment</strong></td>
<td>You&rsquo;re running agents in selected teams or specific process steps — real work, limited scope</td>
</tr>
<tr>
<td><strong>Stage 4: Governance &amp; Process Readiness</strong></td>
<td>You have agent governance under control and you&rsquo;re actively redesigning processes to be agent-ready</td>
</tr>
<tr>
<td><strong>Stage 5: Agentic Organization</strong></td>
<td>People and agents collaborate in interdisciplinary teams, working together on value-creating processes</td>
</tr>
</tbody>
</table>
<p>Most organizations I talk to are somewhere between Stage 1 and Stage 2. A few ambitious ones are approaching Stage 3. And that&rsquo;s perfectly fine — but it helps to know where you&rsquo;re heading.</p>
<h2 id="what-stage-5-actually-looks-like">What Stage 5 Actually Looks Like<a class="heading-anchor" href="#what-stage-5-actually-looks-like" aria-label="Link to What Stage 5 Actually Looks Like">#</a>
</h2>
<p>Let me make this concrete, because &ldquo;people and agents working together&rdquo; can sound abstract.</p>
<p><strong>Finance &amp; Controlling.</strong> Imagine agents handling the data analysis — pulling numbers, identifying anomalies, running comparisons across periods. The humans on the team don&rsquo;t spend their time building spreadsheets anymore. Instead, they review the agent&rsquo;s findings, add context that only they have (&ldquo;this spike is because we acquired a company last quarter&rdquo;), and make the decisions. The agent does the heavy lifting. The human does the thinking.</p>
<p><strong>Marketing.</strong> Agents manage the channel execution — scheduling posts, adapting content for different platforms, monitoring engagement metrics. They report back to the team with performance data and recommendations. The humans focus on strategy, creative direction, and the decisions that require judgment and brand understanding.</p>
<p><strong>Value-creating processes in general.</strong> The pattern is the same across domains: agents execute, humans supervise and decide. It&rsquo;s a fundamental shift from &ldquo;the human does the work and the agent helps&rdquo; to &ldquo;the agent does the work and the human steers.&rdquo;</p>
<p>That&rsquo;s the real transformation. Not just using agents as better tools, but integrating them as team members with defined responsibilities.</p>
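<p>For the technically inclined, the execute/supervise split can be sketched as a simple review loop. Everything below is purely illustrative — the names, thresholds, and data shapes are invented for this example; it is the shape of the loop that matters, not the code.</p>

```python
from dataclasses import dataclass

# Illustrative sketch of the "agents execute, humans decide" pattern:
# the agent produces findings, the human reviews only what was flagged.

@dataclass
class Finding:
    metric: str
    value: float
    flagged: bool  # the agent's anomaly flag

def agent_analyze(rows: dict, baseline: dict) -> list:
    """Agent work: compare current numbers against a baseline, flag anomalies."""
    findings = []
    for metric, value in rows.items():
        base = baseline.get(metric, value)
        deviation = abs(value - base) / base if base else 0.0
        findings.append(Finding(metric, value, flagged=deviation > 0.2))
    return findings

def human_review(findings: list, context: dict) -> list:
    """Human work: only flagged findings reach the reviewer, who adds context."""
    decisions = []
    for f in findings:
        if not f.flagged:
            continue  # routine results stay with the agent
        note = context.get(f.metric, "escalate for investigation")
        decisions.append(f"{f.metric}: {note}")
    return decisions

findings = agent_analyze(
    {"revenue": 130.0, "costs": 101.0},
    {"revenue": 100.0, "costs": 100.0},
)
decisions = human_review(findings, {"revenue": "spike explained by last quarter's acquisition"})
```

<p>The point of the sketch: routine results never reach a human, and the flagged ones arrive with room for the context only a person can add.</p>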
<h2 id="the-trust-question">The Trust Question<a class="heading-anchor" href="#the-trust-question" aria-label="Link to The Trust Question">#</a>
</h2>
<p>I know what some of you are thinking: &ldquo;Can we really trust agents to do the work while we just supervise?&rdquo; And yes, there are plenty of skeptics out there — and healthy skepticism isn&rsquo;t a bad thing.</p>
<p>But I think we should approach this positively rather than letting fear block progress. The key is finding the right balance. You don&rsquo;t hand over your entire financial reporting to an agent on day one. You start small, build trust through experience, and gradually expand the agent&rsquo;s responsibilities as you learn what works and what doesn&rsquo;t.</p>
<p>This is no different from onboarding a new team member, honestly. You wouldn&rsquo;t give a new hire full autonomy on their first day either. You&rsquo;d start with defined tasks, review their output, and expand their scope over time. The same principle applies to agents.</p>
<h2 id="start-now--even-on-stage-1">Start Now — Even on Stage 1<a class="heading-anchor" href="#start-now--even-on-stage-1" aria-label="Link to Start Now — Even on Stage 1">#</a>
</h2>
<p>Here&rsquo;s the thing: you don&rsquo;t need to be on Stage 4 to start thinking about Stage 5. In fact, the organizations that will get to the agentic organization fastest are the ones that start building the foundation now.</p>
<p>What does that mean practically?</p>
<p><strong>Foster curiosity.</strong> Give your people the space and permission to experiment with agents today. Not in a formal &ldquo;innovation lab&rdquo; that nobody takes seriously, but in their actual daily work. Let them try things, let them fail, let them learn. That&rsquo;s how organizational capability is built.</p>
<p><strong>Think about your processes.</strong> Look at your workflows and ask: which of these could be restructured so that agents handle the execution while humans handle the decisions? You don&rsquo;t need to implement this tomorrow, but starting to think this way changes how you approach every process improvement from now on.</p>
<p><strong>Don&rsquo;t wait for perfection.</strong> The tools will keep evolving. The governance frameworks will mature. But the organizational learning — understanding how your teams can work with agents, what works in your specific context, what doesn&rsquo;t — that only comes from doing.</p>
<h2 id="the-bottom-line">The Bottom Line<a class="heading-anchor" href="#the-bottom-line" aria-label="Link to The Bottom Line">#</a>
</h2>
<p>The agentic organization isn&rsquo;t science fiction. It&rsquo;s the logical next step in a journey that many organizations are already on. We went from &ldquo;AI as a feature&rdquo; to &ldquo;Copilot as an assistant&rdquo; to &ldquo;agents as tools.&rdquo; The next step is &ldquo;agents as teammates.&rdquo;</p>
<p>I believe this will affect every organization to some degree. The question isn&rsquo;t whether the agentic organization is coming — it&rsquo;s whether your organization will be ready when it arrives.</p>
<p><strong>Where is your organization on the agentic maturity scale today, and what&rsquo;s your next step to move forward?</strong></p>
]]></description></item><item><title>The Real ROI of Microsoft 365 Copilot Extensibility</title><link>https://bisser.io/the-real-roi-of-microsoft-365-copilot-extensibility/</link><pubDate>Mon, 02 Mar 2026 10:00:00 +0100</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/the-real-roi-of-microsoft-365-copilot-extensibility/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/084-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="the-real-roi-of-microsoft-365-copilot-extensibility">The Real ROI of Microsoft 365 Copilot Extensibility<a class="heading-anchor" href="#the-real-roi-of-microsoft-365-copilot-extensibility" aria-label="Link to The Real ROI of Microsoft 365 Copilot Extensibility">#</a>
</h1>
<p><em>When does extending Microsoft 365 Copilot actually pay off? Here are my thoughts.</em></p>
<p>I&rsquo;ve been consuming a lot of content around Microsoft 365 Copilot extensibility lately. And while the community is doing a great job producing tutorials, samples, and getting-started guides, I noticed something: there&rsquo;s a ton of Hello World-style content out there, but very little about the actual return on investment. So I wanted to share my personal take on when extensibility is worth it — and what the real value looks like.</p>
<h2 id="are-you-even-ready">Are You Even Ready?<a class="heading-anchor" href="#are-you-even-ready" aria-label="Link to Are You Even Ready?">#</a>
</h2>
<p>Here&rsquo;s the thing: many organizations I talk to are still in the middle of rolling out Microsoft 365 Copilot to their users. They&rsquo;re figuring out adoption, licensing, governance — the basics. And that&rsquo;s totally fine. But for these organizations, extensibility shouldn&rsquo;t be the priority yet (even though the conference demos make it look like everyone should be building agents right now).</p>
<p>To me, the &ldquo;extensibility moment&rdquo; comes when you&rsquo;ve reached a certain maturity. You&rsquo;ve got a Copilot rollout going. Your users are actively working with it. You&rsquo;ve identified use cases that work out of the box. And then — this is key — you start noticing the gaps. The moments where users need data or processes from third-party systems, LOB apps, or your ERP that Microsoft 365 Copilot simply can&rsquo;t reach by default.</p>
<p>That&rsquo;s when extensibility becomes relevant. Not because a demo looked cool, but because your users actually need it.</p>
<h2 id="its-about-personalization-not-just-time-savings">It&rsquo;s About Personalization, Not Just Time Savings<a class="heading-anchor" href="#its-about-personalization-not-just-time-savings" aria-label="Link to It&rsquo;s About Personalization, Not Just Time Savings">#</a>
</h2>
<p>When people talk about ROI for extensibility, the conversation usually goes straight to time savings. &ldquo;The agent saves 15 minutes per day.&rdquo; And sure, that&rsquo;s measurable. But to me, that misses the bigger picture.</p>
<p>The real value of Microsoft 365 Copilot extensibility is <strong>personalization</strong>. Out of the box, Microsoft 365 Copilot is a powerful but generic tool. It works the same for every organization. But it doesn&rsquo;t know your ERP system. It doesn&rsquo;t understand your specific business processes. It can&rsquo;t interact with the industry-specific applications your people use every day.</p>
<p>Extensibility changes that. It transforms Microsoft 365 Copilot from a general-purpose assistant into something that understands and operates within <em>your</em> specific context. And that&rsquo;s where the real productivity and efficiency boost comes from.</p>
<h2 id="from-information-to-delegation">From Information to Delegation<a class="heading-anchor" href="#from-information-to-delegation" aria-label="Link to From Information to Delegation">#</a>
</h2>
<p>Let me make this concrete (because I think this example explains it best):</p>
<p><strong>Without extensibility</strong>, a user asks Microsoft 365 Copilot: <em>&ldquo;How do I submit a purchase order in our ERP system?&rdquo;</em> — and Copilot tells them which screens to navigate and which fields to fill in. Useful? Yes. Better than a manual? Definitely.</p>
<p><strong>With extensibility</strong>, the same user asks: <em>&ldquo;Submit a purchase order for 500 units of component X from supplier Y.&rdquo;</em> — and an agent connected to the ERP actually does it. Validates the request, triggers the process, confirms completion. The user didn&rsquo;t get information about how to do their job. The agent did the job on their behalf.</p>
<p>That shift — from information to action, from assistance to delegation — is where the ROI becomes obvious. Not in minutes saved, but in entire process steps eliminated.</p>
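<p>For readers who like to see the shape of such an action: here is a minimal sketch of what a purchase-order action behind an agent might look like. The supplier list, quantity limit, and field names are all assumptions invented for this example, not a real ERP integration.</p>

```python
# Hypothetical purchase-order action an agent could invoke.
# Supplier list, limits, and field names are invented for illustration.

APPROVED_SUPPLIERS = {"supplier-y"}
MAX_UNITS_WITHOUT_APPROVAL = 1000

def submit_purchase_order(component: str, units: int, supplier: str) -> dict:
    """Validate the request, then hand it to the (stubbed) ERP process."""
    if supplier.lower() not in APPROVED_SUPPLIERS:
        return {"status": "rejected", "reason": "unknown supplier"}
    if units <= 0 or units > MAX_UNITS_WITHOUT_APPROVAL:
        return {"status": "needs_approval", "reason": "quantity outside auto-approve range"}
    # In a real integration this line would call the ERP API; here we just confirm.
    return {"status": "submitted", "component": component, "units": units, "supplier": supplier}

result = submit_purchase_order("component X", 500, "Supplier-Y")
```

<p>Note that even in delegation mode the validation step stays explicit: the agent acts, but only inside guardrails someone defined up front.</p>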
<h2 id="start-small-please">Start Small, Please<a class="heading-anchor" href="#start-small-please" aria-label="Link to Start Small, Please">#</a>
</h2>
<p>One thing I&rsquo;ve seen way too often: organizations approaching extensibility with too much ambition too early. They want an agent covering 20 processes across five systems on day one. And these projects tend to stall.</p>
<p>What works better (and I can&rsquo;t stress this enough): start with an MVP. Pick one process. One integration. One use case where the current workflow is painful. Build something simple. Ship it. Learn. Iterate. Then expand.</p>
<h2 id="measuring-what-actually-matters">Measuring What Actually Matters<a class="heading-anchor" href="#measuring-what-actually-matters" aria-label="Link to Measuring What Actually Matters">#</a>
</h2>
<p>If you&rsquo;re going to invest in extensibility, you should know whether it&rsquo;s working. And &ldquo;users like it&rdquo; is not a metric. Here&rsquo;s what I think you should be looking at:</p>
<table>
<thead>
<tr>
<th>Metric</th>
<th>What it tells you</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>System switches per task</strong></td>
<td>How many apps does a user still need to open? Every eliminated switch = less friction and fewer errors</td>
</tr>
<tr>
<td><strong>Usage frequency</strong></td>
<td>Not just &ldquo;who tried it&rdquo; but &ldquo;who keeps coming back&rdquo; — recurring usage signals real value</td>
</tr>
<tr>
<td><strong>Error rate in processes</strong></td>
<td>Automated handoffs should reduce manual mistakes — track before and after</td>
</tr>
<tr>
<td><strong>Time to competency</strong></td>
<td>How fast can new employees become productive with agent support vs. without?</td>
</tr>
</tbody>
</table>
<p>That last metric in particular is often overlooked. When a new hire needs to learn complex ERP processes, an agent that can guide or even execute those processes dramatically shortens the learning curve. That&rsquo;s a powerful ROI argument.</p>
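<p>Metrics like recurring usage are easy to derive once you log agent interactions. A minimal sketch, assuming a simple (user, week) event log; real telemetry will look different:</p>

```python
from collections import defaultdict

# Hedged sketch: deriving "recurring usage" from a simple event log.
# The (user, week) log shape is an assumption made for this example.

def recurring_users(events: list, min_weeks: int = 3) -> set:
    """Users who used the agent in at least `min_weeks` distinct weeks."""
    weeks_by_user = defaultdict(set)
    for user, week in events:
        weeks_by_user[user].add(week)
    return {u for u, weeks in weeks_by_user.items() if len(weeks) >= min_weeks}

log = [("anna", 1), ("anna", 2), ("anna", 3), ("ben", 1), ("ben", 1)]
regulars = recurring_users(log)
```

<p>Counting distinct weeks rather than raw events is the design choice that matters here: it separates &ldquo;who tried it&rdquo; from &ldquo;who keeps coming back.&rdquo;</p>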
<h2 id="so-whats-your-extensibility-moment">So, What&rsquo;s Your Extensibility Moment?<a class="heading-anchor" href="#so-whats-your-extensibility-moment" aria-label="Link to So, What&rsquo;s Your Extensibility Moment?">#</a>
</h2>
<p>I&rsquo;m not saying every organization needs Microsoft 365 Copilot extensibility right now. Many don&rsquo;t — yet. But I do think everyone should be actively thinking about it.</p>
<p>Look at your users&rsquo; daily workflows. Where do they leave Microsoft 365 to get things done? Where are processes slow or error-prone because the tools aren&rsquo;t connected? Those are your extensibility opportunities.</p>
<p><strong>Which processes in your organization do you think could get a real boost from Microsoft 365 Copilot extensibility?</strong></p>
]]></description></item><item><title>The Citizen Developer Promise for Copilot Agents is an Illusion</title><link>https://bisser.io/the-citizen-developer-promise-for-copilot-agents-is-an-illusion/</link><pubDate>Sat, 28 Feb 2026 09:02:05 +0200</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/the-citizen-developer-promise-for-copilot-agents-is-an-illusion/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/083-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="the-citizen-developer-promise-for-copilot-agents-is-an-illusion">The Citizen Developer Promise for Copilot Agents is an Illusion<a class="heading-anchor" href="#the-citizen-developer-promise-for-copilot-agents-is-an-illusion" aria-label="Link to The Citizen Developer Promise for Copilot Agents is an Illusion">#</a>
</h1>
<p><em>Low-code doesn&rsquo;t mean low-complexity. Here&rsquo;s where the line really is.</em></p>
<p>Microsoft has been telling a beautiful story: business users can describe what they want in natural language, and Agent Builder in Microsoft 365 Copilot will create an agent for them. When they outgrow the basics, they can seamlessly move to Copilot Studio with one click. No code required. Anyone can build an agent.</p>
<p>I&rsquo;ve been watching this narrative evolve for over a year now, and I think it&rsquo;s time for a reality check. The citizen developer promise for Copilot agents, as it&rsquo;s currently marketed, is an illusion. Not a lie — an illusion. There&rsquo;s an important difference.</p>
<h2 id="the-happy-path-works-beautifully">The Happy Path Works Beautifully<a class="heading-anchor" href="#the-happy-path-works-beautifully" aria-label="Link to The Happy Path Works Beautifully">#</a>
</h2>
<p>Let me be fair first. If you want to build a declarative agent that answers questions based on a curated set of SharePoint documents, Agent Builder does a genuinely impressive job. You describe the agent&rsquo;s purpose, point it at your knowledge sources, maybe adjust the instructions, and you have something functional in minutes. For this specific use case — essentially a scoped Q&amp;A bot over your own content — the citizen developer story absolutely holds up.</p>
<p>The problem starts the moment you need anything beyond that happy path.</p>
<h2 id="where-things-fall-apart">Where Things Fall Apart<a class="heading-anchor" href="#where-things-fall-apart" aria-label="Link to Where Things Fall Apart">#</a>
</h2>
<p><strong>The moment you need real integrations.</strong> Your HR manager wants an agent that doesn&rsquo;t just answer questions about company policies but actually checks leave balances in Workday and submits time-off requests. That requires MCP server connections, authentication configuration, understanding OAuth flows, and dealing with connector infrastructure. I&rsquo;ve watched technically savvy business analysts — people who are genuinely good with Power Automate and Power Apps — hit a wall when they try to configure MCP authentication with dynamic client registration. These aren&rsquo;t concepts that a weekend workshop prepares you for.</p>
<p><strong>The moment you need nuanced instructions.</strong> Agent instructions look simple — you&rsquo;re just writing natural language, right? But crafting instructions that reliably produce the right behavior across edge cases is essentially prompt engineering. It requires understanding how the underlying model interprets ambiguity, how grounding works, what happens when the agent can&rsquo;t find an answer, and how to handle multi-step reasoning. This is a skill that takes practice to develop, and most business users don&rsquo;t have the feedback loops to develop it.</p>
<p><strong>The moment you need to debug.</strong> Something isn&rsquo;t working. The agent is returning wrong information, or it&rsquo;s not calling the right MCP tool, or the authentication keeps failing. Where do you even start? Copilot Studio&rsquo;s tracing and analytics have improved significantly, but interpreting activity maps, understanding tool invocation patterns, and diagnosing orchestration failures requires a developer mindset. There&rsquo;s no low-code way to debug a complex agent.</p>
<p><strong>The moment you need governance.</strong> Your citizen-developed agent works great in testing. Now it needs to go through the approval workflow in the admin center. IT needs to review the data sources, security needs to assess the permissions model, and compliance needs to verify the agent doesn&rsquo;t expose sensitive data. The citizen developer built the agent — but they can&rsquo;t shepherd it through enterprise governance alone.</p>
<h2 id="the-copy-to-copilot-studio-gap">The &ldquo;Copy to Copilot Studio&rdquo; Gap<a class="heading-anchor" href="#the-copy-to-copilot-studio-gap" aria-label="Link to The &ldquo;Copy to Copilot Studio&rdquo; Gap">#</a>
</h2>
<p>Microsoft&rsquo;s answer to the complexity cliff is the &ldquo;Copy to Copilot Studio&rdquo; feature. Start simple in Agent Builder, then graduate to the full platform when you need more power. It sounds seamless, but it creates a handoff problem that nobody talks about.</p>
<p>The business user who built the prototype in Agent Builder understands the business logic. The developer who takes over in Copilot Studio understands the technology. Neither fully understands the other&rsquo;s domain. The result is usually one of two things: the developer rebuilds from scratch (wasting the prototype work), or the developer tries to extend the existing agent without fully understanding the business intent (leading to subtle bugs that only surface in production).</p>
<p>This isn&rsquo;t a new problem — it&rsquo;s the same gap that has plagued every low-code-to-pro-code transition since the concept was invented. But it&rsquo;s being presented as solved, and it isn&rsquo;t.</p>
<h2 id="what-would-actually-work">What Would Actually Work<a class="heading-anchor" href="#what-would-actually-work" aria-label="Link to What Would Actually Work">#</a>
</h2>
<p>I&rsquo;m not arguing against democratizing agent development. I think it&rsquo;s the right direction. But I think we need a more honest model of what citizen development looks like for agents:</p>
<p><strong>Tier 1: Genuine citizen development.</strong> Q&amp;A agents over curated content. Simple declarative agents with file-based knowledge. This is where business users can truly self-serve, and we should encourage it. But let&rsquo;s be clear that this is the scope.</p>
<p><strong>Tier 2: Guided development.</strong> Agents that need integrations, complex logic, or multi-step workflows. This requires a &ldquo;buddy system&rdquo; — a business user who understands the process working alongside a developer or a power user who understands the platform. Neither can do it alone effectively.</p>
<p><strong>Tier 3: Professional development.</strong> Agents with custom MCP servers, complex authentication, multi-agent orchestration, or enterprise-scale governance requirements. This is developer territory, full stop. Pretending otherwise sets everyone up for frustration.</p>
<p><strong>Invest in the middle tier.</strong> Most organizations focus on either the self-service story (Tier 1) or the pro-dev story (Tier 3) and completely ignore Tier 2. But Tier 2 is where the most valuable agents live — the ones that are specific enough to a business process to be truly useful, but complex enough to need some technical guidance.</p>
<p><strong>Create feedback loops.</strong> The biggest gap in the citizen developer experience isn&rsquo;t tooling — it&rsquo;s learning. When a business user builds an agent that doesn&rsquo;t work well, they rarely know <em>why</em> it doesn&rsquo;t work well. Build internal communities of practice where people share what worked, what didn&rsquo;t, and why. This accelerates learning far more than any tutorial.</p>
<p><strong>Redefine success.</strong> If your measure of citizen developer success is &ldquo;business users building production agents independently,&rdquo; you&rsquo;ll be disappointed. If your measure is &ldquo;business users prototyping agent ideas that can be quickly validated and refined with technical support,&rdquo; you&rsquo;ll be much happier — and you&rsquo;ll ship more useful agents.</p>
<h2 id="the-bottom-line">The Bottom Line<a class="heading-anchor" href="#the-bottom-line" aria-label="Link to The Bottom Line">#</a>
</h2>
<p>The tools Microsoft is building are genuinely impressive. Agent Builder, Copilot Studio, the MCP ecosystem — these are powerful platforms that lower the barrier to agent development significantly. But &ldquo;lower the barrier&rdquo; is not the same as &ldquo;eliminate the barrier,&rdquo; and the marketing often conflates the two.</p>
<p>The organizations that get the most value from Copilot agents won&rsquo;t be the ones where everyone builds their own agents. They&rsquo;ll be the ones that create the right collaboration model between business expertise and technical capability, with clear expectations about what each tier of development can realistically deliver.</p>
<p>Let&rsquo;s stop selling the illusion and start building the support structures that make citizen-involved agent development actually work.</p>
<hr>
<p><em>Have you tried building Copilot agents as a non-developer? Or are you a developer supporting citizen developers? I&rsquo;d love to hear about your experience.</em></p>
]]></description></item><item><title>Microsoft 365 Copilot Extensibility - Possibilities and Pitfalls</title><link>https://bisser.io/microsoft-365-copilot-extensibility-possibilities-and-pitfalls/</link><pubDate>Tue, 02 Dec 2025 19:00:05 +0200</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/microsoft-365-copilot-extensibility-possibilities-and-pitfalls/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/082-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="microsoft-365-copilot-extensibility-navigating-the-possibilities-and-pitfalls">Microsoft 365 Copilot Extensibility: Navigating the Possibilities and Pitfalls<a class="heading-anchor" href="#microsoft-365-copilot-extensibility-navigating-the-possibilities-and-pitfalls" aria-label="Link to Microsoft 365 Copilot Extensibility: Navigating the Possibilities and Pitfalls">#</a>
</h1>
<p>As organizations rush to adopt Microsoft 365 Copilot, a critical question emerges: <strong>How do we extend Copilot to work with our unique business data and processes?</strong> The answer lies in understanding the three extensibility pillars—Connectors, Agents, and APIs—and knowing when (and when not) to use each. That&rsquo;s why I created an infographic to map it out:</p>
<figure>
<figcaption>Infographic: the three Microsoft 365 Copilot extensibility pillars (Connectors, Agents, APIs)</figcaption>
</figure>

<h2 id="understanding-the-anatomy-of-microsoft-365-copilot">Understanding the Anatomy of Microsoft 365 Copilot<a class="heading-anchor" href="#understanding-the-anatomy-of-microsoft-365-copilot" aria-label="Link to Understanding the Anatomy of Microsoft 365 Copilot">#</a>
</h2>
<p>Before diving into extensibility, let&rsquo;s understand what we&rsquo;re extending. At its core, Microsoft 365 Copilot consists of:</p>
<h3 id="the-core-engine">The Core Engine<a class="heading-anchor" href="#the-core-engine" aria-label="Link to The Core Engine">#</a>
</h3>
<ul>
<li><strong>Orchestrator</strong>: The traffic controller that manages data governance, safety, and responsible AI (RAI) policies</li>
<li><strong>Foundation Models</strong>: The AI backbone powered by GPT-4o, GPT-4.1, and newer models like o1 and o3-mini</li>
</ul>
<h3 id="where-extensibility-plugs-in">Where Extensibility Plugs In<a class="heading-anchor" href="#where-extensibility-plugs-in" aria-label="Link to Where Extensibility Plugs In">#</a>
</h3>
<p>Copilot&rsquo;s architecture exposes three key areas for extension:</p>
<table>
<thead>
<tr>
<th>Area</th>
<th>Built-in</th>
<th>Extensible Via</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Knowledge</strong></td>
<td>Memory</td>
<td>Grounding ← Connectors</td>
</tr>
<tr>
<td><strong>Skills</strong></td>
<td>Workflows, M365 Apps</td>
<td>Triggers &amp; Actions ↔ Agents</td>
</tr>
<tr>
<td><strong>APIs</strong></td>
<td>—</td>
<td>Your Apps → Copilot APIs</td>
</tr>
</tbody>
</table>
<h2 id="the-three-extensibility-pillars">The Three Extensibility Pillars<a class="heading-anchor" href="#the-three-extensibility-pillars" aria-label="Link to The Three Extensibility Pillars">#</a>
</h2>
<h3 id="1-connectors-bringing-external-data-to-copilot">1. Connectors: Bringing External Data to Copilot<a class="heading-anchor" href="#1-connectors-bringing-external-data-to-copilot" aria-label="Link to 1. Connectors: Bringing External Data to Copilot">#</a>
</h3>
<p><strong>Direction: External Data → Copilot</strong></p>
<p>Graph Connectors allow you to ingest and index external data into Microsoft Graph, making it available for Copilot to reason over.</p>
<p><strong>What you get:</strong></p>
<ul>
<li>100+ prebuilt connectors for popular systems</li>
<li>Custom connector development via Graph API</li>
<li>Support for CRM, ERP, databases, and file systems</li>
</ul>
<p><strong>Best for:</strong></p>
<ul>
<li>Making enterprise data searchable and available to Copilot</li>
<li>Connecting line-of-business applications</li>
<li>Enabling Copilot to answer questions about your proprietary data</li>
</ul>
<p><strong>⚠️ Pitfall:</strong> Connectors only provide <em>read</em> access. If you need Copilot to <em>take actions</em> in external systems, you&rsquo;ll need Agents.</p>
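<p>To give a feel for what custom connector development involves, here is a sketch of the two payloads a custom Graph connector sends: one to create the connection, one to push an item into it. The endpoint paths and field shapes follow the Microsoft Graph external connections API as I understand it; verify the field names (especially the ACL entries) against the current Graph documentation before relying on them, and note all sample values are placeholders.</p>

```python
# Sketch of the payloads a custom Graph connector sends. Endpoints and
# shapes follow the Graph external connections API as I understand it;
# all sample values are placeholders.

GRAPH = "https://graph.microsoft.com/v1.0"

def connection_payload(conn_id: str, name: str, description: str) -> dict:
    # Sent via POST {GRAPH}/external/connections (with a bearer token)
    return {"id": conn_id, "name": name, "description": description}

def item_payload(title: str, url: str, text: str) -> dict:
    # Sent via PUT {GRAPH}/external/connections/{conn_id}/items/{item_id}
    return {
        "acl": [{"type": "everyone", "value": "everyone", "accessType": "grant"}],
        "properties": {"title": title, "url": url},
        "content": {"type": "text", "value": text},
    }

payload = item_payload(
    "Invoice policy",
    "https://intranet.example/policy",
    "All supplier invoices are paid on net 30 terms.",
)
```

<p>The <code>acl</code> entry is the part most people get wrong: it decides who can see the item in Copilot answers, so grant-everyone is almost never what you want in production.</p>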
<hr>
<h3 id="2-agents-ai-assistants-that-do-work">2. Agents: AI Assistants That Do Work<a class="heading-anchor" href="#2-agents-ai-assistants-that-do-work" aria-label="Link to 2. Agents: AI Assistants That Do Work">#</a>
</h3>
<p><strong>Direction: AI Assistants ↔ M365 (Bidirectional)</strong></p>
<p>Agents are specialized AI assistants that can automate workflows and perform tasks. They&rsquo;re the most powerful—and most complex—extensibility option.</p>
<h4 id="inside-agents-core-components">Inside Agents: Core Components<a class="heading-anchor" href="#inside-agents-core-components" aria-label="Link to Inside Agents: Core Components">#</a>
</h4>
<p>Every agent consists of three parts:</p>
<ol>
<li><strong>Knowledge</strong>: Data sources like SharePoint, OneDrive, Graph Connectors, and web content</li>
<li><strong>Instructions</strong>: Custom persona, tone, scope limits, and behavioral guardrails</li>
<li><strong>Actions</strong>: Real-time API calls that let the agent <em>do</em> things</li>
</ol>
<blockquote>
<p>⚠️ <strong>Critical Insight:</strong> Actions ONLY work inside Agents! You cannot add actions to base Copilot—they must be wrapped in an agent.</p>
</blockquote>
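<p>Those three parts map directly onto the declarative agent manifest. The sketch below builds one as a Python dict and serializes it; the field names follow the declarative agent schema as I recall it, and the SharePoint URL and plugin file name are placeholders, so check the current schema documentation before using any of this.</p>

```python
import json

# Knowledge / instructions / actions mapped onto a declarative agent
# manifest. Field names follow the schema as I recall it (v1.x);
# the SharePoint URL and plugin file are placeholders.

manifest = {
    "version": "v1.0",
    "name": "HR Policy Agent",
    "description": "Answers questions about HR policies and files leave requests.",
    "instructions": "Answer only from the connected policy library. "
                    "If the answer is not found, say so instead of guessing.",
    "capabilities": [  # knowledge: where the agent may look things up
        {"name": "OneDriveAndSharePoint",
         "items_by_url": [{"url": "https://contoso.sharepoint.com/sites/hr"}]},
    ],
    "actions": [  # actions: each entry references a plugin definition file
        {"id": "leavePlugin", "file": "ai-plugin.json"},
    ],
}

serialized = json.dumps(manifest, indent=2)
```

<p>Notice how little of this is code in the traditional sense — the hard part is the <code>instructions</code> string, which is exactly the prompt-engineering skill discussed above.</p>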
<h4 id="two-types-of-agents">Two Types of Agents<a class="heading-anchor" href="#two-types-of-agents" aria-label="Link to Two Types of Agents">#</a>
</h4>
<table>
<thead>
<tr>
<th>Type</th>
<th>AI Engine</th>
<th>Best For</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Declarative Agent</strong></td>
<td>Uses Copilot&rsquo;s AI</td>
<td>Quick deployment, Microsoft-managed AI</td>
</tr>
<tr>
<td><strong>Custom Engine Agent</strong></td>
<td>Your own AI</td>
<td>Full control, custom models, complex scenarios</td>
</tr>
</tbody>
</table>
<h4 id="three-ways-to-build-actions">Three Ways to Build Actions<a class="heading-anchor" href="#three-ways-to-build-actions" aria-label="Link to Three Ways to Build Actions">#</a>
</h4>
<table>
<thead>
<tr>
<th>Approach</th>
<th>Technology</th>
<th>Characteristics</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>OpenAPI Action</strong></td>
<td>REST API + OpenAPI Spec</td>
<td>Existing APIs, static schema, multiple auth options</td>
</tr>
<tr>
<td><strong>MCP Server Action</strong></td>
<td>Model Context Protocol</td>
<td>AI-native, dynamic tool discovery, streamable (Preview)</td>
</tr>
<tr>
<td><strong>Copilot Studio Action</strong></td>
<td>Power Platform Connectors</td>
<td>1400+ prebuilt connectors, visual designer, citizen-developer friendly</td>
</tr>
</tbody>
</table>
<hr>
<h3 id="3-copilot-apis-embedding-copilot-in-your-apps">3. Copilot APIs: Embedding Copilot in Your Apps<a class="heading-anchor" href="#3-copilot-apis-embedding-copilot-in-your-apps" aria-label="Link to 3. Copilot APIs: Embedding Copilot in Your Apps">#</a>
</h3>
<p><strong>Direction: Your Apps → Copilot</strong></p>
<p>Copilot APIs let you programmatically access Copilot&rsquo;s capabilities from your own applications.</p>
<p><strong>Available APIs:</strong></p>
<table>
<thead>
<tr>
<th>API</th>
<th>Status</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>Retrieval API (RAG)</td>
<td>GA</td>
<td>Query Microsoft Graph with AI-enhanced retrieval</td>
</tr>
<tr>
<td>Search API</td>
<td>Preview</td>
<td>Semantic search across M365 content</td>
</tr>
<tr>
<td>Chat Completions API</td>
<td>Preview</td>
<td>Conversational AI with M365 context</td>
</tr>
<tr>
<td>Meeting Transcript API</td>
<td>GA</td>
<td>Access meeting transcriptions</td>
</tr>
<tr>
<td>Meeting Insights API</td>
<td>Preview</td>
<td>Extract insights from meetings</td>
</tr>
</tbody>
</table>
<p><strong>Best for:</strong></p>
<ul>
<li>Building custom applications that leverage Copilot</li>
<li>Integrating AI capabilities into existing line-of-business apps</li>
<li>Creating specialized user experiences</li>
</ul>
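<p>As a taste of the Retrieval API, here is a sketch of the request body you would POST (with a bearer token) to the retrieval endpoint. The endpoint path and field names reflect my understanding of the API at the time of writing; double-check them against the current Graph documentation.</p>

```python
# Sketch of a Retrieval API request body. Endpoint path and field names
# follow the Copilot Retrieval API as I understand it; verify before use.

RETRIEVAL_ENDPOINT = "https://graph.microsoft.com/v1.0/copilot/retrieval"

def retrieval_request(query: str, max_results: int = 10) -> dict:
    # POST this body to RETRIEVAL_ENDPOINT; the response contains text
    # extracts (with source URLs) you can feed into your own RAG pipeline.
    return {
        "queryString": query,
        "dataSource": "sharePoint",
        "maximumNumberOfResults": max_results,
    }

body = retrieval_request("travel reimbursement policy", max_results=5)
```

<p>The appeal of this API is that retrieval respects the caller&rsquo;s Microsoft 365 permissions, so you get permission-trimmed grounding without building your own index.</p>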
<hr>
<h2 id="common-pitfalls-and-how-to-avoid-them">Common Pitfalls and How to Avoid Them<a class="heading-anchor" href="#common-pitfalls-and-how-to-avoid-them" aria-label="Link to Common Pitfalls and How to Avoid Them">#</a>
</h2>
<h3 id="pitfall-1-using-connectors-when-you-need-actions">Pitfall #1: Using Connectors When You Need Actions<a class="heading-anchor" href="#pitfall-1-using-connectors-when-you-need-actions" aria-label="Link to Pitfall #1: Using Connectors When You Need Actions">#</a>
</h3>
<p><strong>Symptom:</strong> &ldquo;I connected my CRM data, but Copilot can&rsquo;t create new records.&rdquo;</p>
<p><strong>Solution:</strong> Connectors are read-only. To write data back, create an Agent with Actions that call your CRM&rsquo;s API.</p>
<h3 id="pitfall-2-building-custom-engines-when-declarative-would-suffice">Pitfall #2: Building Custom Engines When Declarative Would Suffice<a class="heading-anchor" href="#pitfall-2-building-custom-engines-when-declarative-would-suffice" aria-label="Link to Pitfall #2: Building Custom Engines When Declarative Would Suffice">#</a>
</h3>
<p><strong>Symptom:</strong> Spending months building a custom AI engine for a simple Q&amp;A bot.</p>
<p><strong>Solution:</strong> Start with Declarative Agents. They deploy faster and Microsoft handles the AI infrastructure. Only go custom when you need specific models or complex orchestration.</p>
<h3 id="pitfall-3-ignoring-data-governance">Pitfall #3: Ignoring Data Governance<a class="heading-anchor" href="#pitfall-3-ignoring-data-governance" aria-label="Link to Pitfall #3: Ignoring Data Governance">#</a>
</h3>
<p><strong>Symptom:</strong> Copilot surfaces sensitive data to unauthorized users.</p>
<p><strong>Solution:</strong> Connectors and Agents respect Microsoft 365 permissions. Ensure your data sources have proper access controls <em>before</em> connecting them.</p>
<h3 id="pitfall-4-overlooking-the-actions-in-agents-requirement">Pitfall #4: Overlooking the Actions-in-Agents Requirement<a class="heading-anchor" href="#pitfall-4-overlooking-the-actions-in-agents-requirement" aria-label="Link to Pitfall #4: Overlooking the Actions-in-Agents Requirement">#</a>
</h3>
<p><strong>Symptom:</strong> &ldquo;I built an OpenAPI action but can&rsquo;t find it in Copilot.&rdquo;</p>
<p><strong>Solution:</strong> Actions must be deployed within an Agent. Create a Declarative Agent, add your action, and deploy the agent to Teams or M365.</p>
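<p>As a rough sketch of how that wiring looks, here is the approximate shape of a declarative agent manifest with an action reference, written as a TypeScript object. The field names follow the declarative agent manifest schema as I understand it, and the names and file references are hypothetical, so verify against the current schema before relying on them:</p>

```typescript
// Simplified shape of a declarative agent manifest that references an
// action (API plugin). Names and file references are hypothetical.
const agentManifest = {
  name: "Order Helper",
  description: "Answers order questions and can create follow-up tasks.",
  instructions: "Use the action to create tasks when the user asks for one.",
  actions: [
    {
      id: "taskPlugin",
      file: "task-plugin.json", // API plugin manifest packaged with the agent
    },
  ],
};

// The action lives INSIDE the agent package; deployed standalone it is
// invisible to Copilot.
console.log(agentManifest.actions.length); // 1
```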
<hr>
<h2 id="decision-framework-choosing-the-right-approach">Decision Framework: Choosing the Right Approach<a class="heading-anchor" href="#decision-framework-choosing-the-right-approach" aria-label="Link to Decision Framework: Choosing the Right Approach">#</a>
</h2>
<pre tabindex="0"><code>Do you need Copilot to access external data?
├── Yes → Use Graph Connectors
│   └── Do you also need to write back?
│       └── Yes → Add an Agent with Actions
└── No
    ↓
Do you need Copilot to perform tasks?
├── Yes → Build an Agent
│   ├── Simple tasks, quick deployment → Declarative Agent
│   └── Complex logic, custom AI → Custom Engine Agent
└── No
    ↓
Do you need AI in your own app?
├── Yes → Use Copilot APIs
└── No → You might not need extensibility!
</code></pre><hr>
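<p>The decision tree above can also be sketched as a small function. The following TypeScript is a rough illustration of the same logic; the string labels are informal shorthand, not official product names:</p>

```typescript
// The decision tree above, sketched as a function.
type Needs = {
  externalData?: boolean;
  writeBack?: boolean;
  performTasks?: boolean;
  complexLogic?: boolean;
  aiInOwnApp?: boolean;
};

function chooseApproach(n: Needs): string[] {
  const picks: string[] = [];
  if (n.externalData) {
    picks.push("Graph Connector");
    if (n.writeBack) picks.push("Agent with Actions");
  }
  // If write-back already forced an agent, don't recommend a second one.
  if (n.performTasks && !picks.includes("Agent with Actions")) {
    picks.push(n.complexLogic ? "Custom Engine Agent" : "Declarative Agent");
  }
  if (n.aiInOwnApp) picks.push("Copilot APIs");
  return picks.length ? picks : ["You might not need extensibility!"];
}

console.log(chooseApproach({ externalData: true, writeBack: true }));
// → ["Graph Connector", "Agent with Actions"]
```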
<h2 id="getting-started">Getting Started<a class="heading-anchor" href="#getting-started" aria-label="Link to Getting Started">#</a>
</h2>
<h3 id="tools-youll-need">Tools You&rsquo;ll Need<a class="heading-anchor" href="#tools-youll-need" aria-label="Link to Tools You&rsquo;ll Need">#</a>
</h3>
<table>
<thead>
<tr>
<th>Tool</th>
<th>Purpose</th>
<th>Skill Level</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Copilot Studio</strong></td>
<td>Visual agent builder</td>
<td>Low-code</td>
</tr>
<tr>
<td><strong>M365 Agents Toolkit</strong></td>
<td>VS Code extension for agent development</td>
<td>Pro-code</td>
</tr>
<tr>
<td><strong>Kiota CLI</strong></td>
<td>OpenAPI client generation</td>
<td>Pro-code</td>
</tr>
<tr>
<td><strong>Graph API</strong></td>
<td>Custom connector development</td>
<td>Pro-code</td>
</tr>
</tbody>
</table>
<h3 id="recommended-learning-path">Recommended Learning Path<a class="heading-anchor" href="#recommended-learning-path" aria-label="Link to Recommended Learning Path">#</a>
</h3>
<ol>
<li><strong>Start with Copilot Studio</strong> to understand agent concepts visually</li>
<li><strong>Build a Declarative Agent</strong> with knowledge sources</li>
<li><strong>Add an OpenAPI Action</strong> to perform a simple task</li>
<li><strong>Explore MCP</strong> for more dynamic scenarios</li>
<li><strong>Graduate to Custom Engine Agents</strong> when you hit limitations</li>
</ol>
<hr>
<h2 id="conclusion">Conclusion<a class="heading-anchor" href="#conclusion" aria-label="Link to Conclusion">#</a>
</h2>
<p>Microsoft 365 Copilot extensibility is powerful but requires understanding the architecture. Remember:</p>
<ul>
<li><strong>Connectors</strong> bring data IN (read-only)</li>
<li><strong>Agents</strong> enable bidirectional interaction with Actions</li>
<li><strong>APIs</strong> let you bring Copilot OUT to your apps</li>
<li><strong>Actions only work inside Agents</strong>—this is the #1 gotcha</li>
</ul>
<p>Start simple, validate your approach, and scale from there. The extensibility landscape is evolving rapidly, with MCP and new APIs in preview that will expand what&rsquo;s possible.</p>
<hr>
<p><em>Have questions about Copilot extensibility? The patterns and pitfalls shared here come from real-world implementation experience. Your mileage may vary, but understanding these fundamentals will set you on the right path.</em></p>
]]></description></item><item><title>The Future of Copilot Agents - An Agent-Centric Workplace</title><link>https://bisser.io/the-future-of-copilot-agents-an-agent-centric-workplace/</link><pubDate>Mon, 27 Oct 2025 12:00:05 +0200</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/the-future-of-copilot-agents-an-agent-centric-workplace/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/081-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="introduction">Introduction<a class="heading-anchor" href="#introduction" aria-label="Link to Introduction">#</a>
</h1>
<p>Today we live in an era of constant change. Technology changes, mindsets change and visions change. It started out with a bunch of Generative AI tools fueled by LLMs. These AI tools no longer just offer access to the LLMs themselves; they also give you the chance to create and use agents. These agents give you two extra assets:</p>
<ul>
<li>Consistency</li>
<li>Scope</li>
</ul>
<p>My friend <a href="https://thomy.tech/" target="_blank" rel="noopener noreferrer">Thomy</a>
 wrote about the topic of <a href="https://www.linkedin.com/pulse/ai-agents-process-handovers-where-consistency-matters-thomas-g%C3%B6lles-c0wwf/?trackingId=rsfBNBFP7E%2FlwsZHCPbMmQ%3D%3D" target="_blank" rel="noopener noreferrer">AI Agents at Process Handovers: Where Consistency Matters More Than Intelligence</a>
, so if you want to know why consistency matters, please read that first.</p>
<p>But with the introduction of agents, we also started shifting into an &ldquo;Agent-Centric Workplace&rdquo;. And I started to feel this myself rather quickly: for every platform, every service and every use case, an agent could be built to assist me throughout my workday. To me, using agents goes beyond using Copilot. It&rsquo;s more a concept of delegating work to my digital teammates = agents. And with all the recent announcements and updates published by Microsoft around topics like autonomous agents, I do not think that this concept is going to vanish. More likely, we will see the transition from prompt-based assistants to goal- and task-oriented, multi-step reasoning systems continue over the coming weeks, months and years.</p>
<h2 id="ecosystem">Ecosystem<a class="heading-anchor" href="#ecosystem" aria-label="Link to Ecosystem">#</a>
</h2>
<p>The Microsoft platform offers a sophisticated ecosystem when it comes to building (and yes, you may wonder why I am not writing &ldquo;developing&rdquo; instead of &ldquo;building&rdquo;, but we&rsquo;ll come to that in a second) agents of many kinds. You can build agents in a variety of places with a variety of tools like:</p>
<ul>
<li>SharePoint</li>
<li>Copilot Studio lite</li>
<li>Copilot Studio full</li>
<li>Microsoft Fabric</li>
<li>M365 Agents Toolkit &amp; SDK</li>
<li>Azure AI Foundry</li>
<li>Agent Framework</li>
<li>&hellip;</li>
</ul>
<p>And all of these platforms and frameworks target a specific agent-builder audience, from low-code to pro-code. Eventually I will write another blog post, or even a series, about which platforms should be leveraged by which audience. But the good news here is that there is at least one tool or platform for everyone, no matter which skills you have when it comes to building or developing an agent. And many of the agents built with these platforms can then be consumed from one central place: Microsoft 365 Copilot. Therefore, the platform will likely evolve, and governance aspects will be covered more and more as people not only use agents but also build them, either for their personal use, for their team or for the whole organization.</p>
<h2 id="multi-agent-collaboration">Multi-Agent Collaboration<a class="heading-anchor" href="#multi-agent-collaboration" aria-label="Link to Multi-Agent Collaboration">#</a>
</h2>
<p>With the variety of tools and frameworks, the necessity of a concept called &ldquo;Multi-Agent Collaboration&rdquo; grew. Satya Nadella once said in a keynote at a big conference that &ldquo;Copilot is the UI for AI&rdquo;. Thinking about this message, I thought that this was the beginning of the Multi-Agent Collaboration era, where Microsoft 365 Copilot itself is also an agent, with the skill of orchestrating and collaborating with other agents.</p>
<p>But hearing this message was not the first time I thought about multi-agent scenarios. Looking back at my blog post on <a href="https://bisser.io/microsoft-build-2019-updates-on-conversational-ai/" target="_blank" rel="noopener noreferrer">Microsoft Build 2019 updates on Conversational AI</a>
, the concept of <strong>Skills</strong> was introduced. And this to me was the first step towards a multi-agent collaboration era, because skills in the realm of the Microsoft Bot Framework were chatbots themselves, which other chatbots would call to solve specific tasks.</p>
<h3 id="human-agent-collaboration-patterns">Human-Agent Collaboration Patterns<a class="heading-anchor" href="#human-agent-collaboration-patterns" aria-label="Link to Human-Agent Collaboration Patterns">#</a>
</h3>
<p>In the future, we might see a shift from &ldquo;Copilot in apps&rdquo; to &ldquo;Copilot across work&rdquo;, as this may be something that people need more urgently. To my mind, an agent like a &ldquo;Project Planning Agent&rdquo; might be more helpful when integrated into all of my work apps than an agent like Copilot in Word. I want to do my project planning well, and that involves multiple platforms and services, so this agent should be accessible and integrated in all of the places where I do my work while still being tailored to one specific goal (here we are again at consistency and scope).</p>
<p>This shift may also require us to rethink productivity rituals: how do we manage meetings, how do we do reporting, and how do we share knowledge when agents are our digital teammates? They also need to be integrated into these rituals.</p>
<h2 id="future-look">Future look<a class="heading-anchor" href="#future-look" aria-label="Link to Future look">#</a>
</h2>
<p>With things like MCP, multimodal models and agents, and the convergence of local and cloud intelligence, I do think that agents will play an important role in our future workplace. In the future we might not be talking about &ldquo;Copilot adoption&rdquo; but about &ldquo;agent lifecycle management&rdquo;, as I strongly believe that the introduction of goal-oriented agents will be a huge boost in the enterprise, increasing both our productivity and the quality of the work we deliver. Therefore I encourage everyone reading this to experiment with agents, and especially with multi-agent scenarios, as this will likely hit us sooner or later. Furthermore, think about this: &ldquo;What&rsquo;s your vision for the agentic workplace?&rdquo; and let me know your opinion!</p>
]]></description></item><item><title>Exploring Autonomous Agent Capabilities with Microsoft Copilot Studio</title><link>https://bisser.io/exploring-autonomous-agent-capabilities-with-microsoft-copilot-studio/</link><pubDate>Sat, 29 Mar 2025 20:20:05 +0200</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/exploring-autonomous-agent-capabilities-with-microsoft-copilot-studio/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/080-cover.webp" referrerpolicy="no-referrer">
            </div><h1 id="introduction">Introduction<a class="heading-anchor" href="#introduction" aria-label="Link to Introduction">#</a>
</h1>
<p>Microsoft is pushing the boundaries of how business processes can be automated with its <strong>Copilot Studio</strong>—a cloud-based, low-code platform that empowers organizations to build AI agents. In its latest release, Microsoft has introduced autonomous agent capabilities aimed at enabling agents to proactively respond to events, orchestrate tasks, and integrate seamlessly with enterprise data sources. However, while these functionalities open exciting prospects, in my opinion, they currently feel quite basic—likely reflecting their preview status.</p>
<h2 id="autonomous-agent-capabilities-at-a-glance">Autonomous Agent Capabilities at a Glance<a class="heading-anchor" href="#autonomous-agent-capabilities-at-a-glance" aria-label="Link to Autonomous Agent Capabilities at a Glance">#</a>
</h2>
<p>With Copilot Studio, Microsoft now provides tools to build autonomous agents that can:</p>
<ul>
<li><strong>Monitor &amp; React:</strong> Automatically respond to business signals or triggers to initiate tasks.</li>
<li><strong>Execute Business Processes:</strong> Leverage AI orchestration to run rule-based workflows and automate repetitive tasks.</li>
<li><strong>Integrate with Data Sources:</strong> Connect to Microsoft Graph, Dataverse, and other connectors to pull in context.</li>
<li><strong>Enhance Productivity:</strong> Offer a low-code way for teams to extend Microsoft 365 Copilot with personalized AI agents.</li>
</ul>
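<p>Conceptually, the monitor-and-react loop behind these capabilities can be sketched as follows. This is purely illustrative; none of the type or function names correspond to a real Copilot Studio API, and a real orchestrator would plan with an LLM rather than fixed rules:</p>

```typescript
// Illustrative sketch of an autonomous agent loop: a trigger arrives,
// the orchestrator builds a plan, then executes each step against tools.
// All names here are hypothetical, not a Copilot Studio API.
type Trigger = { source: string; payload: Record<string, unknown> };
type Step = { tool: string; input: Record<string, unknown> };

function buildPlan(trigger: Trigger): Step[] {
  // Rule-based stand-in for the AI orchestration that plans the steps.
  if (trigger.source === "email.received") {
    return [
      { tool: "classifyRequest", input: trigger.payload },
      { tool: "createTicket", input: trigger.payload },
      { tool: "sendReply", input: trigger.payload },
    ];
  }
  return []; // unknown signal: no autonomous action
}

function runAgent(trigger: Trigger): string[] {
  return buildPlan(trigger).map((step) => `executed ${step.tool}`);
}

console.log(runAgent({ source: "email.received", payload: { from: "a@b.c" } }));
```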
<h2 id="fundamental-architecture-diagram">Fundamental Architecture Diagram<a class="heading-anchor" href="#fundamental-architecture-diagram" aria-label="Link to Fundamental Architecture Diagram">#</a>
</h2>
<p>A core element of understanding autonomous agents in Copilot Studio is their underlying architecture. Below is a diagram which displays a visual representation of the system:</p>
<div class="mermaid" id="id-1"></div>
<figure>
</figure>

<p><em>Figure 1: Fundamental Architecture for Autonomous Agents in Microsoft Copilot Studio</em></p>
<p>This diagram illustrates how user interactions are processed through Copilot Studio, which utilizes an orchestration engine to understand the trigger and inputs along with building and executing a plan. The agents then access context-rich data from secure sources to generate responses.</p>
<h2 id="the-preview-nature-opportunities-and-limitations">The Preview Nature: Opportunities and Limitations<a class="heading-anchor" href="#the-preview-nature-opportunities-and-limitations" aria-label="Link to The Preview Nature: Opportunities and Limitations">#</a>
</h2>
<p>While the preview release is promising, the current autonomous agent capabilities appear somewhat basic:</p>
<ul>
<li><strong>Limited Customization:</strong> The workflow options and decision-making granularity are still evolving.</li>
<li><strong>Early-Stage Integrations:</strong> Integrations with external systems work at a basic level—expect enhancements with additional connectors and refinements.</li>
<li><strong>Dependence on User Feedback:</strong> Early adopters find that while the automation framework offers efficiencies, further adjustments and human oversight are needed to handle complex real-world scenarios.</li>
</ul>
<p>These observations suggest that the preview is an important first step, setting the stage for further refinements as Microsoft iterates based on customer feedback.</p>
<h2 id="looking-ahead">Looking Ahead<a class="heading-anchor" href="#looking-ahead" aria-label="Link to Looking Ahead">#</a>
</h2>
<p>Microsoft’s vision for transforming business processes with AI is clear. As additional features and deeper integrations are developed, we can expect:</p>
<ul>
<li>More advanced decision-making capabilities.</li>
<li>Enhanced automation for complex workflows.</li>
<li>Greater interoperability across the Microsoft ecosystem.</li>
</ul>
<p>Even though the functionality now might seem basic, it represents a foundational step towards a future where AI agents can autonomously handle a wide array of business tasks with minimal human intervention.</p>
<hr>
<p>What do you think about the current state of autonomous agent capabilities in Copilot Studio? Do you see this basic functionality as laying the groundwork for a more advanced, fully autonomous future?</p>
]]></description></item><item><title>Add deep reasoning to a Copilot Studio agent</title><link>https://bisser.io/add-deep-reasoning-to-an-copilot-studio-agent/</link><pubDate>Wed, 26 Mar 2025 01:19:19 +0200</pubDate><author>Stephan Bisser</author><guid>https://bisser.io/add-deep-reasoning-to-an-copilot-studio-agent/</guid><description><![CDATA[<div class="featured-image">
                <img src="/images/079-cover.webp" referrerpolicy="no-referrer">
            </div><h2 id="setup">Setup<a class="heading-anchor" href="#setup" aria-label="Link to Setup">#</a>
</h2>
<p>You obviously need to create or already have an agent in Microsoft Copilot Studio. If you don&rsquo;t have an agent already, follow <a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/fundamentals-get-started?context=%2Fmicrosoft-365-copilot%2Fextensibility%2Fcontext&amp;tabs=web" target="_blank" rel="noopener noreferrer">this tutorial</a>
 to create a new custom engine agent in Copilot Studio.</p>
<p>After you have created an agent, go to the agent&rsquo;s settings page, and under &ldquo;Generative AI&rdquo; enable the checkbox &ldquo;Use deep reasoning models&rdquo;:</p>
<figure>
</figure>

<p>When you now send a prompt to your agent you should see that the agent is using deep reasoning to generate the answer:</p>
<figure>
</figure>

<p>What&rsquo;s interesting is that you can also see what the agent takes into account when doing the reasoning:</p>
<figure>
</figure>

<p>As you can now also add custom engine agents built with Copilot Studio to Microsoft 365 Copilot, you can use this method to add deep reasoning to Microsoft 365 Copilot:</p>
<figure>
</figure>

<h2 id="considerations">Considerations<a class="heading-anchor" href="#considerations" aria-label="Link to Considerations">#</a>
</h2>
<p>This feature is currently in preview, so be careful about when to enable it (it is probably not the best idea to use it for all production agents just yet). Another requirement is that your agent needs to be created in a US-based Power Platform environment and the agent&rsquo;s language should be English; otherwise you&rsquo;ll see the setting, but unfortunately the agent will not use deep reasoning.</p>
]]></description></item></channel></rss>