Exam AB-210: Dynamics 365 Sales AI Consultant Associate

Indeed, this is the third exam-related post in just over a week – it’s a busy (new) certification release season at the moment!

This time it’s the new AB-210 exam, focusing on Dynamics 365 Sales and AI (of course!). It’s nice to see that there’s a dedicated Dynamics 365 Sales exam back now – most of us will remember the MB-210 exam that was around for a number of years, but which was retired at the end of November 2024. In its place, a new exam (the MB-280) was released, which rolled Dynamics 365 Sales together with Dynamics 365 Customer Insights.

I never fully understood the official reason for this, given that the roles are in reality quite different, and I did comment at the time (MB-280: Microsoft Dynamics 365 Customer Experience Analyst) that I wondered how well it would stand the test of time.

AI and sales capabilities seem generally to go well together – Microsoft has publicly demoed the Sales Agent at large conferences multiple times, showing how it can help qualify leads and handle engagements with customers. To be honest, I quite like this in general, though for implementations I do keep a (slightly skeptical) eye on it, to ensure it’s working in the right way.

The official description of the proposed exam candidate is:

As a candidate for this Microsoft Certification, you design and configure AI-enhanced sales solutions by using Dynamics 365 Sales, Copilot in Dynamics 365 Sales, and agent capabilities to help sellers work more efficiently throughout the lead-to-cash process. You translate business requirements into practical seller workflows enhanced with conversational intelligence, predictive insights, guided automation, and secure data access.

In this role, you work closely with sales, operations, and IT stakeholders to help ensure that solutions align with revenue goals and process optimization.

You perform the following design and implementation tasks:

  • Configure Dynamics 365 Sales core features.
  • Deploy, manage, and monitor agents in Sales.
  • Implement collaboration features.
  • Tailor AI-powered intelligence features.

It is highly recommended that candidates complete training in intermediate-level Microsoft Power Platform configuration before taking this certification exam. Additionally, you must have functional knowledge of:

  • Building Power Automate cloud flows.
  • Interpreting an organization’s sales processes and seller experience.
  • Building and extending model-driven apps.

The overall information for the exam can be found at Microsoft Certified: Dynamics 365 Sales AI Consultant Associate, and there is an official Learning Path available for it.

I do like that the exam content overview calls out that Power Platform knowledge & configuration is highly recommended. Obviously Dynamics 365 is built on top of Power Platform, and having this knowledge (ie the ability to customise & extend with Power Platform capabilities) is key to well-thought-through implementations.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be for someone else!) The exam is also in beta at the moment, which means that things can obviously change by the time it comes out of beta.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Setup & Data
    • Environment creation & provisioning
    • Document management options & requirements
    • Enabling AI capabilities (Copilot, Sales Agent etc)
    • Configuring & customising forms
    • Configuring & customising views
  • Outbound calling
    • Configuration
    • Security requirements
  • AI Capabilities
    • Getting access to AI capabilities for users (deployment, security etc)
    • What the different AI agents & modes are, when to use them, and the behaviour of each
    • What blueprints are, how to use them, how to modify them
    • How AI agents handle communication retries
    • Creating custom agents
    • Analysing AI agent behaviour (runs, outcomes, metrics etc), monitoring information
    • Using AI to summarise records & ask for information
    • Ways to handle AI usage billing (what options are available, where to do this, how to do this)
  • Leads & Opportunities
    • Setting up & configuring predictive lead scoring models, requirements for implementing this
    • Understanding the lead-to-opportunity conversion process, and continuing through to a final sale
    • Understanding sales goals, configuring sales goals/metrics/KPIs, configuring rollup queries for aggregation
    • Assignment of leads to users – how this works, and how to configure it
  • Products
    • The different ways to handle products (eg units, bundles, price lists, product families)
    • When each one should be used, and requirements for them
    • How to use the different components to configure specific scenarios
    • Relating products together
  • Pricing
    • Different ways to approach pricing products (eg singly, as a bundle, etc)
    • Handling multiple territories
    • Handling multiple currencies
    • Configuring price lists
    • Handling expired price lists & system behaviour
    • Handling discounting
  • Mobile app
    • Setup & configuration
    • Data synchronisation
    • Security setup & requirements
    • Push notifications
  • Power Automate
    • Understanding when to use different trigger types (automated/manual/schedule)
    • Usage for scenarios requiring approvals
  • Business process flows
    • What they are, and what they should be used for
    • How to configure, moving between stages, understanding how they work

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Exam AB-620: Design and build integrated AI agent solutions in Copilot Studio

We seem to be on a roll here over the last month or so with new exams being released (& it’s not over yet!). With all of the emphasis on AI & agents, I decided to go and take the new Copilot Studio exam to see what it would be like.

Given that I have a reasonably good familiarity with Copilot Studio (as I use it for projects, and actually get hands-on with it quite a bit of the time), I felt that I’d be in a good place to handle it without any revision. Obviously this could have been a bold move, and it’s up to everyone to make their own decisions about how much to revise (or not revise)!

Copilot Studio has moved on from when it first came onto the scene (and, for those who remember, it used to be called Power Virtual Agents, or PVA). Nowadays it supports writing code within it, and it can also serve as the front end for other Microsoft AI capabilities, such as Microsoft Foundry models.

This is also the first time that it’s been featured for its own exam – previously it got rolled into other exams (such as the PL-100, PL-200, etc), where it was just one of the components being covered (and covered in a lightweight manner, at that). With the focus from Microsoft now heavily on it though, it’s now taken a step forward into the spotlight by itself.

The official description of the proposed exam candidate is:

As a candidate for this Microsoft Certification, you’re a professional developer or advanced builder who builds, extends, and integrates custom agents for enterprise-grade solutions. You typically work as an IT application developer, consultant, or independent software vendor (ISV) partner focused on creating scalable AI solutions for organizations or customers.

For this exam, you should be familiar with Power Fx, Microsoft Dataverse, Microsoft Power Platform environments and components, Microsoft 365 Copilot, Microsoft Foundry, and adaptive cards.

You need intermediate knowledge of generative AI concepts, including models, orchestration, retrieval-augmented generation (RAG), Model Context Protocol (MCP), Agent2Agent (A2A) protocol, and more. You should also have experience with prompt engineering and with REST APIs and integration patterns. Additionally, you need experience configuring agents with basic knowledge sources, instructions, tools, and topics in Microsoft Copilot Studio.

As a developer who works in Copilot Studio, you:

  • Integrate agents with Microsoft Foundry.
  • Integrate agents with Model Context Protocol (MCP) servers.
  • Integrate agents with custom connectors.
  • Integrate agents with APIs.
  • Integrate agents with Microsoft Fabric.
  • Automate tasks with computer use.
  • Integrate agents with connectors.

You create:

  • Multi-agent solutions.
  • Agents with enterprise knowledge sources (such as ServiceNow, SAP, and others).
  • Advanced agent topics and tools.
  • Computer-using agents.
  • Agents that perform advanced actions via APIs.

You collaborate with Microsoft 365 administrators, Microsoft Power Platform administrators, Microsoft Copilot administrators, Copilot Studio agent builders, Copilot Studio administrators, Foundry administrators, agentic AI business solutions architects, and Copilot Studio architects.

The overall information for the exam can be found at Microsoft Certified: AI Agent Builder Associate, and there is an official Learning Path available for it.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be for someone else!) The exam is also in beta at the moment, which means that things can obviously change by the time it comes out of beta.

I’ll freely admit that there was a LOT more focus on MCP capabilities than I had expected there to be, but I guess that again this is natural, given how Microsoft is moving at the moment.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Copilot Studio
    • Component/node types. What they are, how/when to use them
    • Using topic variables
    • Timeouts
    • Concurrency
    • Sensitive data & using the ‘secret’ variable type – what this does and why to use it
    • Generative answers – how they work, limitations, what to know, how to configure & ground them
    • Computer Use
    • Connecting with Microsoft Graph
    • Connecting to other agents – how to do this, how to configure, what to use
  • Connector types
    • Standard connectors (ie the connectors provided out of the box). When to use them, limitations
    • Custom connectors – what these are, why you’d use them
  • Security
    • Authentication types (API key, OAuth 2.0)
    • Query delegation
    • DLP policies
  • MCP servers
    • What they are
    • Connecting to them
    • Security with MCP servers
    • Authentication types
    • Usage of AI with MCP servers
  • Azure AI Search
    • Connecting to knowledge index
    • Configurations
    • Security
  • Solution Types
    • Default vs Unmanaged vs Managed
    • Environment variables
    • Creating solutions
  • Application Lifecycle Management (ALM)
    • What this is, and why it’s needed
    • What approaches can be used, why to use them
    • What’s needed to set up ALM
  • Monitoring & Troubleshooting
    • Reporting on deployed agents
    • Evaluating usage of deployed agents
    • Identifying issues & errors
    • Stopping runs

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Exam AI-901: Microsoft Azure AI Fundamentals

With a massive amount of focus on AI across the Microsoft platform, I decided to sit the new AI-901 exam, which is the new Azure AI fundamentals exam. I’m far from being an Azure architect, but will freely admit a decent amount of familiarity with a lot of Azure components, especially the AI stuff. Having previously passed the AI-900 a while back, I was expecting the exam to be up to date with technical developments, but wasn’t FULLY prepared for what it was actually like…

Now obviously all Microsoft AI capabilities, regardless of where they’re surfaced, actually sit (somewhere) within Azure. After all, Azure is the Microsoft cloud platform itself (well, until someone decides to rename it, of course).

My expectation going into the exam (with admittedly very minimal preparation) was that it would cover the basics of AI within Azure, similar to the way that the AI-900 exam did. Whilst this was somewhat the case, it didn’t necessarily stay within the bounds of my expectations.

The official description of the proposed exam candidate is:

This certification is intended for individuals who want to start working with AI solutions built on Azure. It is suitable for learners from technical backgrounds, including aspiring junior developers who are starting to incorporate AI capabilities into applications. As a candidate for this certification, you should have familiarity with the self-paced or instructor-led learning material.

This certification assesses your ability to show the conceptual knowledge and practical understanding needed to work with AI solutions on Azure, including:

  • Understanding core cloud concepts, such as services and resource deployments
  • Using Microsoft Foundry to deploy models and implement single-agent solutions
  • Recognizing how client applications are put together and how AI models and services are consumed within those solutions
  • Understanding Python code examples that call AI models and services

This certification is intended to validate skills commonly used when performing tasks such as:

  • Adding AI workloads, including language, vision, and generative AI, to software or IT solutions
  • Exploring and using AI features in applications as a junior or entry level developer

The overall information for the exam can be found at Microsoft Certified: Microsoft Azure AI Fundamentals, and there is an official Learning Path available for it.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be for someone else!) The exam is also in beta at the moment, which means that things can obviously change by the time it comes out of beta.

My main shock was the number of questions on Python code, including needing to select the right code syntax to use. Whilst I do understand that Microsoft is aiming to make Fundamentals-level exams/certifications more ‘technical’, I do feel that this is much more technical than the audience should be experiencing. I’ve also fed this back to Microsoft.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Analysis
    • Analyser types (audio, document, image, video). What each type is, how to configure them, and when to use them
    • Defining schemas for data extraction
    • How to extract content for analysis
  • Python
    • Using the Python SDK
    • Python code syntax and commands (see the illustrative sketch after this list)
  • Microsoft Foundry/Foundry Models
    • How AI models actually work when using/interfacing with them. Behaviour, access to content, prediction etc
    • LLM evaluations – comparing costs and capabilities
    • Creating, configuring, deploying, updating
    • Model temperature, inference
    • Minimising model bias, ensuring fairness
    • Connecting to a deployed model
    • Message structures for Foundry projects
    • Agent Evaluators – what they are, how to use them
    • Using Azure Content Understanding
  • Usage for models
    • Using Azure functions
    • Encoding images – data types
    • Voice Live (audio to text)
    • Azure speech SDK, and classes to use
  • Prompts:
    • Agent prompts. What are they, how are they used, why you should use them
    • System prompts. What are they, how are they used, why you should use them
  • Microsoft Responsible AI Principles – what they are, and examples of them
  • Why it’s still important for humans to be involved in processes
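
To give a flavour of the kind of Python involved, here’s a minimal sketch of my own (not an exam question), calling the Azure AI Language service via the azure-ai-textanalytics SDK. The endpoint, key, and sample text are all placeholders:

```python
# Minimal illustrative sketch: sentiment analysis with the Azure AI Language
# Python SDK (azure-ai-textanalytics). Endpoint & key values are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyse the sentiment of a batch of documents
result = client.analyze_sentiment(["The exam was tougher than I expected!"])

for doc in result:
    if not doc.is_error:
        print(doc.sentiment, doc.confidence_scores)
```

The exam questions were in this sort of territory – reading a snippet like the above, and picking the right class, method, or parameter to complete it.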

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Power Platform ALM Changes

As a starter for 10, if you haven’t yet looked into ALM for Power Platform, you should most definitely be doing so! ALM is, of course, Application Lifecycle Management. This is how, in a nutshell, we move solutions between environments.

In the good old days, this was done manually of course (CRM 4.0, I’m looking at you!). Today, though it is of course still possible to export/import solutions manually, it’s not the Microsoft Best Practice method. Doing it manually also means that it’s unlikely you’ll have appropriate source control for your solutions, which, let’s face it, isn’t the best.

Want to look at a previous solution version? Hmm – do you still have it saved on your machine or not?

So we should generally know why we’d want to use ALM. But which tooling do we actually use for it? Going back to the on-premises days, there was TFS (or Team Foundation Server, to give it its full name). This was a full source control repository, allowing developers to check in/check out code, build solutions, deploy them, etc.

With the move to ‘cloud-based systems’, the TFS replacement is Azure DevOps (or ADO, as it’s usually referred to). ADO works in essentially the same way as TFS did (there are some differences, but they’re not really relevant here), but does so through the cloud.

When it comes to Power Platform solutions, ADO uses the ‘Power Platform Build Tools’ capabilities to hook into Dataverse & pick up solutions. The toolset essentially gives ADO the ability to connect to a Power Platform environment, build/export solutions, deploy solutions, etc.

More information on the toolset can be found at Microsoft Power Platform Build Tools for Azure DevOps – Power Platform | Microsoft Docs
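
To give a flavour of what this looks like, here’s a minimal sketch of a build pipeline fragment in YAML that exports a solution using the Build Tools. Treat it as illustrative only – the service connection and solution names are placeholders, and you should check the task/input names against the docs linked above:

```yaml
# Illustrative only: export a solution with the Power Platform Build Tools.
# The trailing @0 pins each task's major version (more on versions below).
steps:
- task: PowerPlatformToolInstaller@0

- task: PowerPlatformExportSolution@0
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: 'MyServiceConnection'   # placeholder service connection
    SolutionName: 'MySolution'                # placeholder solution name
    SolutionOutputFile: '$(Build.ArtifactStagingDirectory)/MySolution.zip'
```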

Now, there are some limitations to the Power Platform Build Tools. In fact, I’d be so bold as to say that they’re currently not in a fully mature state. It’s not possible to do everything that you can do manually (well, not with the inbuilt capabilities – there are some ‘hacks’ around that can extend them). At the moment, it’s essentially a 1.0 product.

Well, Microsoft is announcing that they’re now releasing 2.0 of the Power Platform Build Tools this week!

In fact, this is so new that, at the time of writing, there’s no Microsoft Docs page available for it! So what does version 2.0 bring, and why is Microsoft releasing a new version?

Microsoft has actually had this in planning for a while. There’s a lot going on with GitHub, as we well know, and Microsoft wants to drive forward the consistency of the experience for users. At the moment, ADO and GitHub work in somewhat different ways, and the aim is to bring them to parity.

The main change in the new version is that instead of the tasks being PowerShell based (as they currently are), they will now be based on the Power Platform CLI. So Microsoft is changing the underlying mechanism from PowerShell to the CLI. Some of us will, of course, already be familiar with the way that the CLI works, and it’s really nice to see that its capabilities will now be part of ADO.

Now, don’t start worrying that your current ADO pipelines (v0) will suddenly stop working. Microsoft is not doing anything with v0 at this point in time (though they may potentially deprecate it in the future). So all of your existing ADO pipelines using the Power Platform Build Tools will continue to work, but no new features will be released for them.

In terms of switching to using v2, it’s really quite simple – in the classic pipeline editor, you’ll need to change the version in the ‘Task version’ dropdown on each task.

If you are currently using YAML (as so many wonderful developers do) to author pipelines, you’ll need to bump the task version in the YAML code itself.
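
As a rough sketch (using the same illustrative task name as above), the change is simply the major version suffix on each task:

```yaml
# Before – the existing (v0) task:
- task: PowerPlatformExportSolution@0

# After – the new CLI-based (v2) task:
- task: PowerPlatformExportSolution@2
```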

It’s very important to note that it’s not possible to mix and match task versions. If you do this, the ADO pipeline will fail, so please don’t try this!

I’m really excited about this, and to see that the CLI capabilities are being brought into play for ADO. I’ll admit that I’m wondering what else will be released (in the fullness of time), as I’m sure that this is just the start of some great new stuff!

One of the things that I’m REALLY hoping for is the ability to use ADO pipelines to migrate Power Apps Portals (or Power Pages), as currently this is only possible using the Power Platform CLI or the Configuration Migration Tool. It would be amazing to be able to do this with ADO pipelines as well!

Solution deployments: Automated vs Manual

Over the holiday period, I’ve been playing around with solution deployments. OK – don’t judge me too much…I also took the necessary time off to relax & get time off work!

But with some spare time in the evenings, I decided to look a bit deeper into the world of DevOps (more specifically, Azure DevOps), and how it works. I’ll admit that I did have some ulterior motives around it (for a project that I’m working on), but it was good to be able to get some time to do this.

So why am I writing this post? Well, there’s a variety of great material out there already around DevOps, such as https://benediktbergmann.eu/ by Benedikt Bergmann, who’s really great at this. I chat to him from time to time about DevOps, to understand it better.

However, I ran into some quite interesting behaviour (and I STILL have no idea why it’s the case – more on this later), and thought that I would document it.

Right – let’s start off with manual deployments. As we know, manual deployments are done through the user interface. A user (with the necessary permissions) would do the following:

  1. Go into the DEV environment, and export the solution (regardless of whether this is managed or unmanaged)
  2. Go into the target environment, and import the solution

Pretty simple, right?

Now, from a DevOps point of view, the process is similar, though not quite the same. Let’s see how it works:

  1. Run a Build pipeline, which will export the solution from the DEV environment, and put it into the repository
  2. Run a Release pipeline, which will get the solution from the repository, and deploy it to the necessary environment/s (a sketch of the import step follows below)
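
As a rough sketch, the import step in the release pipeline looks something like the below in YAML. This is illustrative only – the task and input names are as I understand the Power Platform Build Tools, and the service connection and file path are placeholders:

```yaml
# Illustrative only: import the solution that the build pipeline produced.
- task: PowerPlatformImportSolution@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: 'UAT-ServiceConnection'   # placeholder service connection
    SolutionInputFile: '$(Pipeline.Workspace)/drop/MySolution_managed.zip'
```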

All of that runs (usually) quite smoothly, which is great.

Now, let’s talk for a minute about managed solutions. I’m not going to get into the (heated) discussion around managed vs unmanaged solutions. There’s enough that’s been written, said, and debated around the topic to date, and I’m sure it will continue. Obviously we all know that the Microsoft Best Practice approach is to use managed solutions in all non-DEV environments.

Anyway – why am I bringing this up? Well, there’s one key difference in behaviour when deploying a managed solution vs an unmanaged solution (for a newer solution version), and this is to do with removing functionality from the solution in the DEV environment:

  • When deploying an unmanaged solution, it’s possible to remove items from the solution in the DEV environment, but when deploying to other environments, those items will still remain there, even though they’re not present in the solution. Unmanaged solution deployments are additive only, and will not remove any components
  • When deploying a managed solution, any items removed from the solution in the DEV environment will also be removed from the other environments when the solution is deployed there. Managed deployments are both additive & subtractive (ie if a component isn’t present in the solution, it will be removed when the solution is deployed)

Now most of us know this already, which is great. It’s a very useful way to handle matters, and can assist with handling a variety of scenarios.

So, let’s go back to my first question – why am I writing this post? Well… it’s because of the different behaviour in manual vs automated deployments, which I discovered. Let’s look at this.

When deploying manually, we get a choice of options for how to import the solution: Upgrade (the default), Stage for Upgrade, or Update.

The default behaviour is to UPGRADE the solution. This applies the solution with both additive & subtractive behaviour. This is what we’re generally used to, and essentially the behaviour that we’d expect with a managed solution.

Now, when running a release pipeline from Azure DevOps, we’d expect this to work in the same way. After all, systems should be built to all work in the same way, right?

Well, no, that’s not actually what happens. See, when an Azure DevOps release pipeline runs, the default behaviour is NOT to import the solution (we’re talking managed solutions here) as an upgrade. Instead (by default), it imports it as an UPDATE!!!

This is what was really confusing me. I had removed functionality in DEV, ran the build pipeline, then ran the release pipeline. However the functionality (which I had removed from DEV) was still present in UAT! It took me a while to find out what was actually happening underneath…

So how can we handle this? Well, apart from suggesting to Microsoft that they should (perhaps) make everything work in the SAME way, there’s a way to handle it within the release pipeline. For this, it’s necessary to do two things:

Firstly, on the ‘Import Solution’ task, we need to set it to import as a holding solution.

Secondly, we then need to use the ‘Apply Solution Upgrade’ task in the release pipeline.

What this will do is then upgrade the existing solution in the target environment with the holding solution that’s just been deployed.
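
Putting those two changes together, the relevant release-pipeline fragment looks something like this – again a minimal sketch, with the task and input names as I understand the Build Tools, and placeholder values throughout:

```yaml
# Illustrative only: import as a holding solution, then apply the upgrade.
- task: PowerPlatformImportSolution@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: 'UAT-ServiceConnection'   # placeholder service connection
    SolutionInputFile: '$(Pipeline.Workspace)/drop/MySolution_managed.zip'
    HoldingSolution: true   # import as a holding solution

- task: PowerPlatformApplySolutionUpgrade@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: 'UAT-ServiceConnection'   # placeholder service connection
    SolutionName: 'MySolution'   # upgrades the base solution using the holding one
```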

Note: You will need to increment the solution version to a higher number in order for this to work properly. I’m going to write more about this another time, but it is important to know!

So in my view, this is a bit annoying, and perhaps Microsoft will change the default behaviour within DevOps at some point. But for the moment, these extra steps are necessary.

Has this (or something similar) tripped you up in the past? How did you figure it out? Drop a comment below – I’d love to hear!

AI-900: Microsoft Azure AI Fundamentals

One of my recent decisions has been to explore the Azure space. There are several reasons behind this. CDS, as we (hopefully!) know, sits on top of Azure, and it’s useful to know the broader digital estate available on the platform.

I’ve also been looking into some of the Cognitive Services functions that are available within Power Platform. These all live in Azure, and are surfaced into Power Apps etc. It’s therefore good to know what can be done outside of the ‘Power Platform bubble’, and the options there.

Incidentally, a year ago I even built a canvas app that allowed you to take a picture of a motorbike tyre. Using AI Builder functionality, it then analysed if the tyre tread was legal or not! That was a really cool proof of concept.

So a good place to start, I thought, would be with the AI-900. This covers the fundamentals of the AI offerings that are in Azure. I had forgotten, though, that with fundamentals exams there’s only 60 minutes available! Seeing the timer ticking down from that gave me a little surprise, though I managed to get through it (& pass!) in good time.

The official description of the exam is:

Candidates for this exam should have foundational knowledge of machine learning (ML) and artificial intelligence (AI) concepts and related Microsoft Azure services.

This exam is an opportunity to demonstrate knowledge of common ML and AI workloads and how to implement them on Azure.

This exam is intended for candidates with both technical and non-technical backgrounds. Data science and software engineering experience are not required; however, some general programming knowledge or experience would be beneficial.

The official page for the exam is at https://docs.microsoft.com/en-us/learn/certifications/exams/ai-900, where it gives quite a good overview of things. Go take a look at it, and also take a look at the associated learning paths.

Once again, I sat the exam through the proctored option (ie from home). Honestly, I think that my experience this time was probably the best so far. I went through the usual system checks for signing in. The proctor came along, and within 30 seconds they had released the exam!

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be for someone else!) I’ve tried to group things together as best as possible across the different subject areas.

  • Image recognition types
    • What each one is, what it’s used for
    • When to use for a specific scenario
  • Facial recognition
    • Different types available
    • What each one is, what it’s used for, when to use for a specific scenario
    • Limitations & issues that can occur when using it
  • Text:
    • Different recognition types
    • What each one is, what it’s used for, when to use for a specific scenario
    • Analytics. How this works, how to set up & use
    • Translation. Different options available, how they work, when to use for a specific scenario
    • Sentiment analysis. How it works, limitations, what’s needed to train a model
  • QnA Maker
    • What this does, how to set it up, how to train it
    • Generating material with it
    • Use with chatbots
  • Machine Learning
    • What this actually is, and what it does
    • How it works
    • Different types that are available, how they work, how to train a model
    • Classification options
  • Machine Learning Designer
    • How to use & set up
    • Different types of data/options used within it
    • Training & evaluation models. The steps needed for this, how to set it up correctly
    • Types of modules available
    • Validation sets
  • Chatbots
    • What they are
    • How/where they can be used
    • Limitations
    • Integration with other systems
  • Charts
    • Different charts that are available for use
    • Reading them correctly
    • Model types shown on them
    • Metrics!
  • Microsoft AI Principles
    • The different principles that are published
    • What each one means/refers to

Overall, it was quite good. The Microsoft AI Principles were new to me, and I had to guess at those (I went to look them up afterwards!). Other than that, some bits I breezed through, other parts I took careful stock of.

This is definitely an area that I’m going to continue exploring, and will be writing up further exams that I take in it. I’m curious what your experience of it has been – please drop a comment below to let me know!