Exam AB-210: Dynamics 365 Sales AI Consultant Associate

Indeed, this is the third exam-related post in just over a week – it's a busy (new) certification release season at the moment!

This time it’s the new AB-210 exam, focusing on Dynamics 365 Sales and AI (of course!). It’s nice to see a dedicated Dynamics 365 Sales exam back – most of us will remember the MB-210 exam that was around for a number of years, but which was retired at the end of November 2024. It was replaced at the time by a new exam (the MB-280), which rolled Dynamics 365 Sales together with Dynamics 365 Customer Insights.

I never fully understood the official reason for this, given that the two roles are quite different in reality, and I did comment at the time (MB-280: Microsoft Dynamics 365 Customer Experience Analyst) that I wondered how well it would stand the test of time.

AI and sales capabilities seem generally to go well together – Microsoft has publicly demoed the Sales Agent at large conferences multiple times, showing how it can help qualify leads and handle engagements with customers. To be honest, I quite like this in general, though for implementations I do keep my (slightly skeptical) eye on it, to ensure it’s working in the right way.

The official description of the proposed exam candidate is:

As a candidate for this Microsoft Certification, you design and configure AI-enhanced sales solutions by using Dynamics 365 Sales, Copilot in Dynamics 365 Sales, and agent capabilities to help sellers work more efficiently throughout the lead-to-cash process. You translate business requirements into practical seller workflows enhanced with conversational intelligence, predictive insights, guided automation, and secure data access.

In this role, you work closely with sales, operations, and IT stakeholders to help ensure that solutions align with revenue goals and process optimization.

You perform the following design and implementation tasks:

  • Configure Dynamics 365 Sales core features.
  • Deploy, manage, and monitor agents in Sales.
  • Implement collaboration features.
  • Tailor AI-powered intelligence features.

It is highly recommended that candidates complete training in intermediate-level Microsoft Power Platform configuration before taking this certification exam. Additionally, you must have functional knowledge of:

  • Building Power Automate cloud flows.
  • Interpreting an organization’s sales processes and seller experience.
  • Building and extending model-driven apps.

The overall information for the exam can be found at Microsoft Certified: Dynamics 365 Sales AI Consultant Associate, and there is an official Learning Path available for it.

I do like that the exam content overview calls out that Power Platform knowledge & configuration is highly recommended. Obviously Dynamics 365 is built on top of Power Platform, and having this knowledge (ie the ability to customise & extend with Power Platform capabilities) is key to well-thought-through implementations.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions – this is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) The exam is also in beta at the moment, which means that things can obviously change when it comes out of beta.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Setup & Data
    • Environment creation & provisioning
    • Document management options & requirements
    • Enabling AI capabilities (Copilot, Sales Agent etc)
    • Configuring & customising forms
    • Configuring & customising views
  • Outbound calling
    • Configuration
    • Security requirements
  • AI Capabilities
    • Getting access to AI capabilities for users (deployment, security etc)
    • What the different AI agents & modes are, when to use them, and the behaviour of each
    • What blueprints are, how to use them, how to modify them
    • How AI agents handle communication retries
    • Creating custom agents
    • Analysing AI agent behaviour (runs, outcomes, metrics etc), monitoring information
    • Using AI to summarise records & ask for information
    • Ways to handle AI usage billing (what options are available, where to do this, how to do this)
  • Leads & Opportunities
    • Setting up & configuring predictive lead scoring models, requirements for implementing this
    • Understanding lead to opportunity conversion process, and continuing through to a final sale
    • Understanding sales goals, configuring sales goals/metrics/KPIs, configuring rollup queries for aggregation
    • Assignment behaviour for leads to users, how this works, configuration for this
  • Products
    • The different ways to handle products (eg units, bundles, price lists, product families)
    • When each one should be used, and requirements for them
    • How to use the different components to configure specific scenarios
    • Relating products together
  • Pricing
    • Different ways to approach pricing products (eg singly, as a bundle, etc)
    • Handling multiple territories
    • Handling multiple currencies
    • Configuring price lists
    • Handling expired price lists & system behaviour
    • Handling discounting
  • Mobile app
    • Setup & configuration
    • Data synchronisation
    • Security setup & requirements
    • Push notifications
  • Power Automate
    • Understanding when to use different trigger types (automated/manual/schedule)
    • Usage for scenarios requiring approvals
  • Business process flows
    • What they are, and what they should be used for
    • How to configure, moving between stages, understanding how they work

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Exam AB-620: Design and build integrated AI agent solutions in Copilot Studio

We seem to be on a roll here over the last month or so with new exams being released (& it’s not over yet!). With all of the emphasis on AI & agents, I decided to go take the new Copilot Studio exam to see what it would be like.

Given that I have a decent working familiarity with Copilot Studio (as I use it for projects, and actually do get hands-on with it quite a bit of the time), I felt that I’d be in a good place to handle it without any revision. Obviously this could have been a bold move, and it’s up to everyone to make their own decisions about how much to revise (or not revise)!

Copilot Studio has moved on from when it first came onto the scene (and for those who remember, it used to be called Power Virtual Agents, or PVA). Nowadays it supports code within it, and it can also serve as the front end for other Microsoft AI capabilities, such as Microsoft Foundry models.

This is also the first time that it’s been featured in its own exam – previously it got rolled into other exams (such as the PL-100, PL-200, etc), where it was just one of the components being covered (and covered in a lightweight manner, at that). With Microsoft’s focus now heavily on it, though, it’s taken a step forward into the spotlight by itself.

The official description of the proposed exam candidate is:

As a candidate for this Microsoft Certification, you’re a professional developer or advanced builder who builds, extends, and integrates custom agents for enterprise-grade solutions. You typically work as an IT application developer, consultant, or independent software vendor (ISV) partner focused on creating scalable AI solutions for organizations or customers.

For this exam, you should be familiar with Power Fx, Microsoft Dataverse, Microsoft Power Platform environments and components, Microsoft 365 Copilot, Microsoft Foundry, and adaptive cards.

You need intermediate knowledge of generative AI concepts, including models, orchestration, retrieval-augmented generation (RAG), Model Context Protocol (MCP), Agent2Agent (A2A) protocol, and more. You should also have experience with prompt engineering and with REST APIs and integration patterns. Additionally, you need experience configuring agents with basic knowledge sources, instructions, tools, and topics in Microsoft Copilot Studio.

As a developer who works in Copilot Studio, you:

  • Integrate agents with Microsoft Foundry.
  • Integrate agents with Model Context Protocol (MCP) servers.
  • Integrate agents with custom connectors.
  • Integrate agents with APIs.
  • Integrate agents with Microsoft Fabric.
  • Automate tasks with computer use.
  • Integrate agents with connectors.

You create:

  • Multi-agent solutions.
  • Agents with enterprise knowledge sources (such as ServiceNow, SAP, and others).
  • Advanced agent topics and tools.
  • Computer-using agents.
  • Agents that perform advanced actions via APIs.

You collaborate with Microsoft 365 administrators, Microsoft Power Platform administrators, Microsoft Copilot administrators, Copilot Studio agent builders, Copilot Studio administrators, Foundry administrators, agentic AI business solutions architects, and Copilot Studio architects.

The overall information for the exam can be found at Microsoft Certified: AI Agent Builder Associate, and there is an official Learning Path available for it.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions – this is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) The exam is also in beta at the moment, which means that things can obviously change when it comes out of beta.

I’ll freely admit that there was a LOT more focus on MCP capabilities than I had expected there to be, but I guess that again this is natural, given how Microsoft is moving at the moment.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Copilot Studio
    • Component/node types. What they are, how/when to use them
    • Using topic variables
    • Timeouts
    • Concurrency
    • Sensitive data & using type ‘secret’ – what this does and why to use it
    • Generative answers – how they work, limitations, what to know, how to configure & ground them
    • Computer Use
    • Connecting with Microsoft Graph
    • Connecting to other agents – how to do this, how to configure, what to use
  • Connector types
    • Standard connectors (ie the Microsoft-provided, out-of-the-box connectors). When to use them, limitations
    • Custom connectors – what these are, why you’d use them
  • Security
    • Authentication types (API, OAuth 2)
    • Query delegation
    • DLP policies
  • MCP servers
    • What they are
    • Connecting to them
    • Security with MCP servers
    • Authentication types
    • Usage of AI with MCP servers
  • Azure AI Search
    • Connecting to knowledge index
    • Configurations
    • Security
  • Solution Types
    • Default vs Unmanaged vs Managed
    • Environment variables
    • Creating solution
  • Application Lifecycle Management (ALM)
    • What this is, and why it’s needed
    • What approaches can be used, why to use them
    • What’s needed to set up ALM
  • Monitoring & Troubleshooting
    • Reporting on deployed agents
    • Evaluating usage of deployed agents
    • Identifying issues & errors
    • Stopping runs

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Exam AI-901: Microsoft Azure AI Fundamentals

With a massive amount of focus on AI across the Microsoft platform, I decided to sit the new AI-901 exam, which is the new Azure AI fundamentals exam. I’m far from being an Azure architect, but will freely admit to a decent amount of familiarity with a lot of Azure components, especially the AI stuff. Having previously passed the AI-900 a while back, I was expecting the exam to be up to date with technical developments, but wasn’t FULLY prepared for what it was actually like…

Now obviously all Microsoft AI capabilities, regardless of where they’re surfaced, actually sit (somewhere) within Azure. After all, Azure is the Microsoft cloud platform itself (well, until someone decides to rename it, of course).

My expectation going into the exam (with admittedly very minimal preparation) was that it would cover the basics of AI within Azure, similar to the way that the AI-900 exam did. Whilst this was somewhat the case, it didn’t necessarily stay within the bounds of my expectations.

The official description of the proposed exam candidate is:

This certification is intended for individuals who want to start working with AI solutions built on Azure. It is suitable for learners from technical backgrounds, including aspiring junior developers who are starting to incorporate AI capabilities into applications. As a candidate for this certification, you should have familiarity with the self-paced or instructor-led learning material.

This certification assesses your ability to show the conceptual knowledge and practical understanding needed to work with AI solutions on Azure, including:

  • Understanding core cloud concepts, such as services and resource deployments
  • Using Microsoft Foundry to deploy models and implement single-agent solutions
  • Recognizing how client applications are put together and how AI models and services are consumed within those solutions
  • Understanding Python code examples that call AI models and services

This certification is intended to validate skills commonly used when performing tasks such as:

  • Adding AI workloads, including language, vision, and generative AI, to software or IT solutions
  • Exploring and using AI features in applications as a junior or entry level developer

The overall information for the exam can be found at Microsoft Certified: Microsoft Azure AI Fundamentals, and there is an official Learning Path available for it.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions – this is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) The exam is also in beta at the moment, which means that things can obviously change when it comes out of beta.

My main shock was the number of questions on Python code, including needing to select the right code syntax to use. Whilst I do understand that Microsoft is aiming to make Fundamentals-level exams/certifications more ‘technical’, I do feel that this is much more technical than the audience should be experiencing. I’ve also fed this back to Microsoft.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Analysis
    • Analyser types (audio, document, image, video). What each type is, how to configure them, and when to use them
    • Defining schemas for data extraction
    • How to extract content for analysis
  • Python
    • Using the Python SDK
    • Python code syntax and commands
  • Microsoft Foundry/Foundry Models
    • How AI models actually work when using/interfacing with them. Behaviour, access to content, prediction etc
    • LLM evaluations – comparing costs and capabilities
    • Creating, configuring, deploying, updating
    • Model temperature, inference
    • Minimising model bias, ensuring fairness
    • Connecting to a deployed model
    • Message structures for Foundry projects
    • Agent Evaluators – what they are, how to use them
    • Using Azure Content Understanding
  • Usage for models
    • Using Azure functions
    • Encoding images – data types
    • Voice Live (audio to text)
    • Azure speech SDK, and classes to use
  • Prompts
    • Agent prompts. What are they, how are they used, why you should use them
    • System prompts. What are they, how are they used, why you should use them
  • Microsoft Responsible AI Principles – what they are, with examples of them
  • Why it’s still important to have humans involved in processes

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Exam AB-731: AI Transformation Leader

What better way to start 2026 than to talk about a Microsoft certification, especially one for a totally NEW type of user!

Following on from the other AB exams I’ve been writing about my experiences with (see Exam AB-730: AI Business Professional, Exam AB-100: Agentic AI Business Solutions Architect and Exam AB-900: Microsoft 365 Copilot and Agent Administration Fundamentals), this article will cover the AB-731 exam.

This exam focuses on Microsoft AI capabilities from a business leader perspective – to the best of my knowledge, the first time that Microsoft has ever created an exam aimed at this audience. Taking it required a complete mindset shift for me, especially when seeing the questions – it’s not about understanding in-depth technical capabilities, but more about the breadth of technology options (spanning Azure, Microsoft 365 Copilot, Copilot Studio & other tools), and what they bring/enable from a BUSINESS perspective.

The official description of the proposed exam candidate is:

As a candidate for this Microsoft Certification, you should understand how to recognize opportunities for AI transformation, identify the right AI tools and resources, plan for AI adoption, optimize business processes, and drive innovation by using Microsoft 365 Copilot and Azure AI services.

This Certification is designed for business decision-makers at all levels who are responsible for guiding transformation and innovation within their teams or organizations. In this role, you’re expected to demonstrate AI fluency, strategic vision, and the ability to lead AI adoption across teams and functions but are not expected to write any code.

As a candidate for this Certification, you should be able to evaluate AI opportunities, champion responsible AI practices, and align AI investments with business goals. You need experience leading adoption or change management in a business context. You must also be familiar with Microsoft 365 services, Azure AI services, and general AI capabilities.

The overall information for the exam can be found at Microsoft Certified: AI Transformation Leader, and there is an official Learning Path available for it.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions – this is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) The exam is also in beta at the moment, which means that things can obviously change when it comes out of beta.

Overall, the exam approach was quite different for me – though I do talk with organisations frequently around general AI matters, I’ve never taken an exam written in this way before. However, I do feel that it’s very helpful to have this in place, to ensure that business leaders can demonstrate that they actually do know what they’re talking about 😉

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Azure Components & Capabilities
    • AI Vision – what it can be used for, benefits of using it, capabilities that it has
    • AI Language – what it can be used for, benefits of using it, capabilities that it has
    • AI Document Intelligence – what it can be used for, benefits of using it, capabilities that it has
    • Machine Learning – what it can be used for, benefits of using it, capabilities that it has
    • AI Foundry – what it can be used for, benefits of using it, capabilities that it has
    • AI Search – what it can be used for, benefits of using it, capabilities that it has
  • Microsoft 365 Copilot Chat
    • What license is needed
    • What data does it have access to
    • What security controls are in place
  • Microsoft 365 Copilot
    • What is it, what can it be used for
    • What can it do
    • How does it connect to data
    • What are the connectors for it (standard & custom)
    • Benefits of using it (vs 3rd party AI tooling)
    • Different agents (eg Analyst & Researcher) within it – what they do, how to access and use them
  • Microsoft Copilot Studio
    • What is it, what can be used for
    • What can it do
    • What license is needed
    • What data can it access
  • Microsoft Security Copilot
    • What is it, what can be used for
    • What can it do
    • Benefits that it provides
  • Security & Governance
    • Content filtering controls within Copilot
    • Policies
    • Handling requirements to prevent inappropriate language & responses
    • Responsible AI principles
    • Governance ownership, responsibility & requirements
  • Generative AI
    • AI model hallucinations
    • Grounding in data
    • Improving response quality
    • Prompt engineering
    • Pre-trained models vs fine-tuned models
    • Reasoning models vs non-reasoning models
    • Understanding usage costs (including different pricing models)
    • What is RAG, and how can it be used for business scenarios
    • Adoption throughout organisations – personas to involve in adoption team

    I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

    Exam AB-900: Microsoft 365 Copilot and Agent Administration Fundamentals

    Following on from the other AB exams I’ve been writing about my experiences with (see Exam AB-730: AI Business Professional & Exam AB-100: Agentic AI Business Solutions Architect), this article will cover the AB-900 exam.

    This exam focuses on the Microsoft 365 Copilot capabilities from a user & administration perspective, and doesn’t cover/include anything from Copilot Studio.

    Now, though it’s a Fundamentals exam, to be honest it’s the HARDEST fundamentals exam that I’ve ever taken!

    The approach is around being able to demonstrate an understanding of how to use Microsoft 365 Copilot, with a lot of focus on how to control & administer it.

    The official description of the proposed exam candidate is:

    As a candidate for this Microsoft Certification, you should be familiar with Microsoft 365, including core services, security, identity and access, data protection, and governance, along with Microsoft 365 Copilot and agents.

    Additionally, you should be familiar with the admin centers used to access Microsoft 365 workloads, such as Exchange Online, SharePoint in Microsoft 365, Microsoft Teams, Microsoft Entra, and Microsoft Purview. You need to have experience with AI-driven productivity tools and modern IT management practices.

    You must be able to identify the roles of the core features and objects available in Microsoft 365, such as users, groups, teams, sites, and libraries. Plus, you should understand the core security features of Microsoft 365, such as authentication methods, conditional access policies, and single sign-on (SSO).

    The overall information for the exam can be found at Microsoft 365 Certified: Copilot and Agent Administration Fundamentals, and there is an official Learning Path available for it.

    As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions – this is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) The exam is also in beta at the moment, which means that things can obviously change when it comes out of beta.

    One thing to keep in mind about this exam – though I do mention Microsoft Purview in the list of items below, I haven’t gone into it extensively. However, there were a LOT of questions that touched on Purview (& other governance capabilities as well) – you REALLY need to know & understand these capabilities to be able to take & pass the exam. Just guessing the answers is not going to help at all!

    Overall, the exam seemed to me to be pretty decent, though with indeed a heavy focus on security & governance (as I’ve mentioned above). I don’t see this as a bad thing though, as it can help to show that administrators really do know what they’re talking about.

    I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

    • Agent types
      • Native Microsoft 365 Copilot agent
      • Native Microsoft 365 Copilot advanced agents (eg Researcher & Analyst). What they are, how to access, what to use them for
      • Custom Microsoft 365 Copilot agent
      • SharePoint agent
    • Creating/using Agents
      • Using natural language to create agents
      • How to handle/perform multi-step reasoning
      • Use of notebooks
      • Custom instructions
      • Scheduling prompts
      • Querying data types
        • Structured
        • Unstructured
    • Governance & security
      • Blocking access to different types of searches & collateral
      • Blocking access to specific agents
      • Tools to use for blocking
      • How to share agents with other users
      • Assigning licenses to users
      • Data retention policies
      • Data labelling policies
      • Use of Microsoft Purview, covering capabilities, tools, auditing, how to use, etc
      • Use of DLP
      • Data source permissions
      • Conditional access policies
      • Microsoft Defender – what it is, capabilities it has, how to use it, etc.
      • Types of authentication
    • Reporting
      • Licensing & usage
      • Adoption & interactions
    • Payment options & capabilities
      • Credit usage – internal vs external users
      • Pay As You Go Billing, and scenarios you can use it for

    I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

    Ignite ’24 – Power Platform Governance Announcements

    Being at Microsoft Ignite ’24 in Chicago is an amazing experience. Even MORE amazing are the announcements that the Power Platform Governance team has come out with. I’ve been fortunate enough to have been given early access to some of the features, and they’re really awesome. Below, I’ve summarised what I believe to be the top picks to look at.

    Power Platform Admin Centre

    We’ve all been used to the PPAC experience that’s been around for a number of years. It’s been useful, but limited in various areas. Well, there’s not just been a facelift, but an entirely NEW PPAC experience for us. Here are some screenshots:

    There’s a massive amount of stuff to look through (& play with) – my overall impression is that this will definitely help move forward with security, governance & everything that’s needed. More importantly, especially with the focus & mentions of Copilot & Copilot Studio, there’s a section reserved for that, which is going to be critical for IT admins:

    The new PPAC experience is also taking over the role that was previously played by the Power Platform CoE Starter Toolkit, with functionality (slowly) being shifted into the main PPAC experience. One piece that’s already off to a great start is the Inventory capability:

    Behind the scenes, this is data being captured at the tenant level, which is stored in Dataverse (no, we don’t YET have access to the data natively, though I’m told it’s on the roadmap to be able to query it). This performs extremely well, though there are still a few little bugs being worked out 🙂

    But more importantly, this also covers Copilot Studio components – to date there has not really been anything around to report on this properly…but now there is!

    Managed Environments

    We all know the conversation around Managed Environments, and sometimes needing to persuade organisations that premium licensing will actually give them ROI. Well, with the new features that have been announced this week, this just got a WHOLE lot easier! Let’s take a look at some of these items.

    Environment Rules

    Initially, when Managed Environments launched, there were just a few rules that could be applied. We were told that more were coming…and indeed they are! There are still more that the team is working on, but the number of rules has already increased massively:

    Some of my favourites here are the abilities to manage Copilot – how these are handled is going to be SO important (especially with all of the emphasis on it coming out of Ignite). Being able to set/enforce authentication options, sharing options & various other settings is going to be KEY to proper Copilot governance.

    It also now gives options for backup retention policies. I’ve written previously about how to ‘hack’ longer backups for environments (Environment types, capabilities & backups) – we’re now able to set longer backups for pure Power Platform environments without needing to enable Dynamics 365 applications within them (though of course you may still want to do this if you can see yourself using Dynamics 365 in the environment in the future – it’s still not possible to upgrade the environment type at a later point).

    However, there’s also something else new around environments. Previously, when looking at an environment in the main list of environments within PPAC, it wasn’t easy to see if it belonged to a Managed Environment group or not. Now it is – and what’s more, you’re not able to tweak any settings on the general environment page that are being managed at the Environment Group level!

    DLP Capabilities

    One of the main challenges to date with DLP has been the inability to block certain connectors (eg the Microsoft standard connectors). With Managed Environments, the team has now enabled organisations to block ANY connector that they wish to! If you’re not running Managed Environments, the existing limitations will still apply – you do need to be using Managed Environments for this! This will also be made available through the Power Platform API & Admin SDK tools in the coming weeks.

    Preferred Group

    Whilst we’ve had environment routing around for a while now (being able to auto-route new makers to a specific environment, which could be within a Managed Environment group), we haven’t had the ability to handle new environments being created & auto-populated into an environment group.

    Well, this is now changing. We’re now going to have the ability to auto-set policies, so that when a new environment is created, it can automatically be added to a Managed Environment group. Obviously, with this happening, the rules & policies applied at the group level will automatically be applied to the new environment as well! This will be a decent relief to Power Platform administrators – to date we’ve been able to set up things like DLP policies to auto-apply to new environments, but managing them otherwise needed to be done manually…well, no more!

    Security Personas

    Until now, security & governance within Power Platform have taken a ‘one size fits all’ approach. Different types of people would access PPAC etc, but there wasn’t really a way to differentiate between the different personas. This is now changing:

    In summary, incredible steps forward, and I know that there’s a LOT more in the works that should be coming in the next weeks & months. I’m really excited about all of this, and using the capabilities to continue enabling & empowering organisations from a security & governance point of view.

    Developer environments – new capabilities to create for users

    Developer environments are awesome. There – I’ve said it for the record. Formerly known as the ‘Community Plan’, developer environments are there for users to be able to play with things, get up to speed, test out new functionality, etc. They’re free to use – even with premium capabilities & connectors, users do not need premium licensing in place (caveat – if it’s enabled as a Managed Environment, it will require premium licensing).

    Originally, users were only able to create a single developer environment. However, earlier this year Microsoft lifted this restriction – users are now able to create up to THREE developer environments for their own usage (which makes it even easier for users to get used to ALM capabilities, and try them out for themselves).

    Now, the ability for users to create developer environments is controlled at the tenant level, and it’s either On or Off. It requires a global tenant admin to modify this setting, but it’s not possible to say ‘User Group A will not be able to create developer environments for themselves, but User Group B will be able to’.
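
    As a rough illustration of how this tenant-level switch can be scripted, here’s a minimal PowerShell sketch using the Microsoft.PowerApps.Administration.PowerShell module. The Get-TenantSettings / Set-TenantSettings pattern is documented, but the exact property name for the developer environment toggle below is an assumption on my part – inspect the object your tenant returns before relying on it:

```powershell
# Minimal sketch - read the tenant settings, flip the developer environment
# creation toggle, and write the settings back.
# Requires: Install-Module Microsoft.PowerApps.Administration.PowerShell

Add-PowerAppsAccount   # sign in with a tenant admin account

$settings = Get-TenantSettings

# ASSUMED property name - inspect $settings.powerPlatform.governance for the
# actual 'who can create developer environments' setting in your tenant
$settings.powerPlatform.governance.disableDeveloperEnvironmentCreationByNonAdminUsers = $true

Set-TenantSettings -RequestBody $settings
```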

    Organisations have differing viewpoints on whether they should allow their users the ability to create developer environments or not. I know this well, as usually I’m part of conversations with them when they’re debating this.

    One of the main challenges when organisations don’t allow users to create their own developer environments has been that, historically, it’s not been possible for someone else to create the environment on their behalf. If we think of ‘traditional IT’, if we’re not able to do something due to locked-down permissions, we can usually ask ‘IT’ to do it for us, and grant us access. This has not been the case with developer environments though – well, not until recently.

    Something that I do from time to time is chat with the Microsoft Product Engineering groups, to provide feedback and (try to!) help iterate products forward. One of the conversations I had in the summer was with the team responsible for developer environments. I was able to share experiences & conversations that I had been having with large-scale enterprise organisations, and (very politely!) asked if they could look to open up the ability to do something around this.

    Around a month ago or so, the first iteration of this dropped – in the Power Platform Admin Centre interface, it was now possible to specify the user for whom an environment was to be created!

    This was an amazing start to things, and definitely would start unblocking Power Platform IT teams to enable their users, in circumstances where their organisations had decided to turn off the ability for users to create their own developer environments.

    However, this still required the need to do it manually. Unless looking into an RPA process (which, let’s face it, would be clunky & undesirable), it meant that someone with appropriate privileges would need to go & actually create the environment, and associate it to the user.

    However, this has now taken another MASSIVE step forward – I’m delighted to announce that this capability has been implemented in the Power Platform CLI, and is live RIGHT NOW (you’ll need to upgrade to the latest version – it’s present in 1.28.3 onwards).

    So, with this in place, it’s now possible to use PowerShell commands to create developer environments on behalf of users, and assign them to those users. Organisations usually already have PowerShell scripts to handle new joiners, and will therefore be able to integrate this capability into those, to automatically set up developer environments for users. Alternatively, existing users could look to raise internal requests, and have them automated through the use of PowerShell (along with appropriate approval processes, of course!).
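
    To make that concrete, here’s a minimal sketch of what such an onboarding snippet could look like, calling the Power Platform CLI from PowerShell. The profile name, environment name and user are purely illustrative – check ‘pac admin create –help’ on your CLI version (1.28.3+) for the exact parameters:

```powershell
# Minimal sketch - create a developer environment on behalf of a new joiner.
# Assumes the Power Platform CLI (pac) 1.28.3 or later is installed.

$newJoiner = "jane.doe@contoso.com"   # illustrative UPN of the new user

# Authenticate with an account holding the Power Platform admin role
pac auth create --name "AdminProfile"

# Create the developer environment and assign it to the user
pac admin create `
    --name "Developer - Jane Doe" `
    --type "Developer" `
    --user $newJoiner
```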

    So this is really nice to see. However, I think it can still go one step further (at least!), and I’m trying to use my network of connections to raise this with the right people.

    See, we have the Power Platform for Admins connector within Power Platform already. One of the actions available in this is the ability to create Power Platform environments:

    However, if we look at the action (& the advanced settings within this action), there’s no ability to set this:

    Interestingly enough, the API version listed by default is actually several years old. By doing some digging around, I can see that there are multiple later API versions, so I’m not sure why it’s using an older one by default:

    What would be really amazing is to have these capabilities surfaced directly within Power Platform, using this connector. Then we could look to have everything handled directly within Power Platform. Given that the CoE toolkit already includes an Environment Request feature, I would see this as building on top & enabling it even further. Obviously organisations wouldn’t need the CoE toolkit itself, as they could look to build out something custom to handle this.

    What are your thoughts on this – how do you see these features enabling your organisation? If your organisation HAS locked down the ability for users to provision developer environments, are you able to share some insights as to why? I’d love to hear more – drop a comment below!

    New Power Platform Security Role Editor

    We’ve all been there. Security-role-wise, that is. It’s the point in any project where we start looking at configuring user security. To do this, we’ve used the Security Role section in the Settings area (once it’s actually loaded, of course):

    Ah, the joys of this – dating back to CRM 3.0 (to my recollection – though it possibly might be 4.0). All of these lovely little circles, which fill up more & more as we click on them, whilst trying to work out what each one actually does:

    And that’s not to mention the ‘Missing Entities’ tab (did anyone ever figure out what this was supposed to be used for?), or the ‘Custom Entities’ tab, which seemed like a catch-all place. Plus the fact that non-table permissions (eg Export to Excel) were placed on random tabs, meaning we needed to hunt through each tab to find the appropriate item.

    Many of us have spent hours in here (and then further hours once we started troubleshooting user issues that were down to security role misconfiguration). The absolute ‘JOYS‘ of the header title row not being scrollable (though it was possible to hover over each permission, and it would tell you what it was). The power of clicking on the line item, and seeing ALL of the little circles fill up – if you haven’t ever done it, you’ll not have experienced the bliss that this could bring!

    But all things come to an end (or, as the Wheel of Time series says: ‘The Wheel of Time turns, and Ages come and pass, leaving memories that become legend‘), and now we have a NEW security role experience.

    First of all, the UI has changed. It’s cleaner, responsive, gives more information to users upfront…and the headings SCROLL!!!

    We’re able to show just the tables that have permissions assigned to them (rather than wading through dozens or hundreds of entries that have no relevance), or show everything:

    Oh, and those random non-table privileges that we had to try to find beforehand – these are now grouped very nicely. This is SO much easier to manage!

    We can also take permissions that have been set on a specific table, and then copy them to another table (it prompts us to select – and we can select MULTIPLE tables to copy to!):

    But best of all is the way that we can now set permissions for items. There are several different ways of doing this.

    Firstly, Microsoft has now provided us with the ability to select standard pre-defined options. Using these will set permissions across all categories for the item appropriately:

    This is really neat, and is likely to save quite a bit of time overall. However, if we need to tweak security permissions to custom settings, we can do this as well. Instead of clicking on circles, we now have lovely dropdowns to use:

    In short, I’m absolutely loving this. The interface is quick to load, intuitive, and works well without fuss.

    Given how much time I’ve spent over the years wrestling with security roles, I think this is going to be a definite timesaver for so many people (though we’ll still need to troubleshoot the interesting error messages that testing throws up at times, and work out how/what we need to tweak for security access to work).

    There are still some tweaks that I think Microsoft could make to get this experience even better. My top three suggestions would include:

    • The ability to select multiple lines, and then set a permission across all of them (sort of like bulk editing)
    • Being able to have this area solution-aware. When we have various different projects going on, it would be great to have the ability to filter the permissions grid by a solution. This would be a timesaver, rather than having to wade through items that aren’t relevant
    • Export to Excel. Having a report generated to save digitally or print off is amazing for documentation purposes. There are 3rd party tools (thank you XrmToolBox!), but it would be great to have this built in here

    Overall, I’m really quite happy and impressed with it (even if it’s definitely taken Microsoft long enough to pay attention to this and get it out), and I hope that it’ll continue to improve!

    What have your bugbears with the legacy security editor been over the years, and how are you liking the new experience? Drop a comment below – I’d love to hear!

    The story of MFA & the Centre of Excellence

    I’ve been rolling out the Microsoft Centre of Excellence solution for several years now at customers. It’s a great place to start getting a handle on what exactly is going on within a Power Platform tenant, though there’s obviously so much more that takes place within a Centre of Excellence team.

    The solution gathers telemetry around environments, Power Apps, Power Automate flows etc, through the use of the Power Platform for Admins connectors (see Power Platform for Admins – Connectors | Microsoft Learn for further information on these).

    Now obviously we need a user account to run these, and this usually has been through the use of a ‘pseudo service account’, as using a service principal has been tricky, to say the least. So we would get customers to set up an appropriate account with licensing & permissions in place, and use this to own & run the Power Automate flows that bring in the information to the CoE solution.

    It is important to note that usage of these connectors does require a pretty high level of permissions – in fact, we usually suggest applying the Power Platform Admin security role (within the Microsoft 365 Admin Centre) to the user account. All good so far.

    The tricky part has, to date, been around security. Organisations usually require (for good reasons) multi-factor authentication (aka MFA) to be in place. Now this is fine for users logging in & accessing systems. However, it proves to be somewhat trickier for automations.

    See, when a user logs in & authenticates through MFA, a token is stored to allow them to access systems. Automations can also use this. However, the token will expire at some point (based on how each organisation has implemented MFA access/controls). When the token expires, the automations will stop running, and fail silently. There’s no prompt that the token has expired, and the only way of knowing is to take a look at the Power Automate flow run history. This can be interesting though, as signing in (with the pseudo service account) will prompt for MFA authentication, and then everything will start running again!
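
    One (admittedly imperfect) mitigation for the silent failures is to script a periodic check for failed runs, rather than relying on someone noticing. Here’s a hedged sketch of that idea using the Power Apps PowerShell cmdlets – Get-AdminFlow and Get-FlowRun are real cmdlets, but treat the exact property names and behaviour as assumptions to verify against your own output:

```powershell
# Sketch - look for failed runs of the CoE sync flows, so that an expired
# MFA token doesn't go unnoticed for weeks.
# Requires the Microsoft.PowerApps.Administration.PowerShell and
# Microsoft.PowerApps.PowerShell modules.

Add-PowerAppsAccount

$coeEnvironment = "Default-00000000-0000-0000-0000-000000000000"  # illustrative

# All flows in the CoE environment whose name marks them as CoE sync flows
$coeFlows = Get-AdminFlow -EnvironmentName $coeEnvironment |
    Where-Object { $_.DisplayName -like "Admin | Sync Template*" }

foreach ($flow in $coeFlows) {
    # ASSUMED: Get-FlowRun returns run records with a Status property
    $failed = Get-FlowRun -FlowName $flow.FlowName |
        Where-Object { $_.Status -eq "Failed" }
    if ($failed) {
        Write-Warning "$($flow.DisplayName) has $($failed.Count) failed run(s) - check whether the service account token has expired"
    }
}
```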

    So this has usually resulted in conversations with the client to politely point out that implementing MFA on the service account will mean that, at some point, the Power Automate flows are going to start failing. Discussions with security teams take place, mitigation using tools such as Azure Sentinel is implemented, and things move ahead (cautiously). It’s been, to date, the most annoying pain point in the technical implementation (that I can think of, at least, in my experience).

    Now you’d think that a change in this would be shouted from the rooftops, people talking about it, social media blowing up, etc. Well, I was starting an implementation recently for a customer, and was talking to them around this, as I’d usually do. Imagine my surprise when Todd, one of the Microsoft technical people attached to the client, asked why we weren’t recommending MFA.

    Taking a look at the online documentation, I noticed that something had slipped in. Finally there was the ability to use MFA!

    Trawling back through the GitHub history (after all, I wanted to find out EXACTLY when this had slipped in), I discovered that it was a few months old. I was still very surprised that there hadn’t been more publicity around this (though it was definitely a good incentive to write about it, and a great blog post to start off 2023 with!).

    So moving forward, we’re now able to use MFA for the CoE user account. This is definitely going to put a lot of minds at rest (especially those who are in security and/or governance). The specifics around the MFA implementation can be found at Conditional access and multi-factor authentication in Flow – Power Automate | Microsoft Learn – but it’s important to note that specific MFA policies will need to be set up & implemented for this account.

    So, now the job will be to retrofit this to all organisations that already have the CoE toolkit in place. Thankfully this shouldn’t be too difficult to do, and it will most definitely enhance the security controls around it!

    Have you implemented any mitigation in the past to handle non-MFA? I’m curious if you have – please drop a comment below!

    New Platform DLP Capabilities

    DLP (or Data Loss Prevention) is a very important capability in the Power Platform. Being able to bring together multiple data sources, both within the Microsoft technology stack as well as from other providers, gives users amazing capabilities.

    However, with such great capabilities comes great responsibility. Of course, we trust users to make proper judgements as to how different data sources can be used together. But certain industries require proper auditing around this, and so being able to specify DLP policies is extremely important to any governance team.

    Being able to set how data connectors can be used together (or, conversely, not used together) across both Power Apps as well as Power Automate flows is imperative in any modern organisation.

    To date, Power Platform DLP capabilities have allowed us to categorise connectors (whether Microsoft-provided or custom) into three categories. These categories specify how the connectors are able to function – they’re able to work with other connectors that are in the same category group, but cannot work with connectors that are in a different category group.

    So for example, it’s been possible to allow a user to create a Power App or a Power Automate flow that interacts with data from Dataverse, but cannot interact with Twitter (in the same app or flow).

    With this approach, it’s possible to create multiple DLP policies, and ‘layer’ them as needed (much like baking a 7-layer cake!) to give the functionality required per environment (or also at the tenant level).
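
    For anyone wanting to script the ‘classic’ classification side of this, here’s a minimal PowerShell sketch using the Microsoft.PowerApps.Administration.PowerShell module – the policy name, environment and connector are illustrative, and it’s worth double-checking the exact parameters against the current cmdlet documentation:

```powershell
# Sketch - create an environment-scoped DLP policy and move a connector
# into its Business data group.
# Requires Microsoft.PowerApps.Administration.PowerShell.

Add-PowerAppsAccount

# Illustrative names - substitute your own environment and policy names
$policy = New-AdminDlpPolicy -DisplayName "Finance DLP" `
    -EnvironmentName "Default-00000000-0000-0000-0000-000000000000"

# Dataverse can now only be combined with other 'Business' connectors in
# the same app or flow; everything else stays in 'Non-Business'
Add-ConnectorToBusinessDataGroup -PolicyName $policy.PolicyName `
    -ConnectorName "shared_commondataserviceforapps"
```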

    Now this has been great, but what has been missing is the ability to be more granular in this approach. What if we need to allow reading data from Twitter, but block pushing data out to it?

    Well, Microsoft has now iterated on the DLP functionality available! It’s important to note that this is per connector, and will depend on the capabilities of the connector. What we’re now able to do is control the specific actions contained within a connector, and either allow or block their use.

    Let’s take the Twitter connector as an example:

    We’re able to see all of the actions that the connector is capable of (the scroll bar on the side is a nice touch for connectors that have too many actions to fit on a single screen!). We’re then able to toggle each one to either allow or disallow it.

    What’s also really nice are the options for new connector capabilities.

    This follows in the footsteps of handling connectors overall – we’re able to specify which grouping they should come under (ie Business, Non-Business, or Blocked). As new connectors are released by Microsoft, we don’t need to worry that users will automatically get access to them.

    So too with new actions being released for existing connectors (that we’ve already classified). We’re able to set whether we want them to be automatically allowed, or automatically blocked. This means that we don’t need to worry that a new connector action will suddenly be available for users to use that they perhaps should not be using.

    From my perspective, I think that any organisation that’s blocking one or more action capabilities for a connector will want this to be blocked by default, just to ensure that everything remains secure until they confirm whether the action should be allowed or not.
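
    For those managing this at scale, the action rules can also be set programmatically. The sketch below is based on my reading of the DLP connector configuration cmdlet pattern, and the tenant ID, policy name and action IDs are all illustrative – verify the cmdlet names and object shape against the current documentation before using it:

```powershell
# Sketch - allow reading tweets but block posting, and block any newly
# released Twitter actions by default until they've been reviewed.
# Requires Microsoft.PowerApps.Administration.PowerShell.

Add-PowerAppsAccount

$tenantId   = "00000000-0000-0000-0000-000000000000"   # illustrative
$policyName = "11111111-1111-1111-1111-111111111111"   # GUID of an existing DLP policy

$connectorConfigurations = [pscustomobject]@{
    connectorActionConfigurations = @(
        [pscustomobject]@{
            connectorId = "/providers/Microsoft.PowerApps/apis/shared_twitter"
            actionRules = @(
                [pscustomobject]@{ actionId = "SearchTweets"; behavior = "Allow" }
                [pscustomobject]@{ actionId = "PostTweet";   behavior = "Block" }
            )
            # New actions on this connector are blocked by default
            defaultConnectorActionRuleBehavior = "Block"
        }
    )
}

New-PowerAppDlpPolicyConnectorConfigurations -TenantId $tenantId `
    -PolicyName $policyName `
    -NewDlpPolicyConnectorConfigurations $connectorConfigurations
```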

    So I’m really pleased about this. The question did cross my mind as to whether it would be nice to be able to specify this on a per-environment basis when creating a tenant-level policy, but I guess that this would be handled by creating multiple policies. The only issue I could see around this would be the number of policies that might need to be handled, and ensuring that they’re named properly!

    Have you ever wanted these capabilities? How have you managed until now, and how do you think you’ll roll this out going forward? Drop a comment below – I’d love to hear!