Exam AB-210: Dynamics 365 Sales AI Consultant Associate

Indeed, this is the third exam-related post in just over a week – it's a busy (new) certification release season at the moment!

This time it’s the new AB-210 exam, focusing on Dynamics 365 Sales and AI (of course!). It’s nice to see that there’s a dedicated Dynamics 365 Sales exam back now – most of us will remember the MB-210 exam that was around for a number of years, but which was retired at the end of November 2024. What happened was that a new exam at the time (the MB-280) was released, which rolled together Dynamics 365 Sales with Dynamics 365 Customer Insights.

I never fully understood the official reasoning for this, given that the two roles are in reality quite different, and I did comment at the time (MB-280: Microsoft Dynamics 365 Customer Experience Analyst) that I wondered how well it would stand the test of time.

AI and sales capabilities generally seem to go well together – Microsoft has publicly demoed the Sales Agent multiple times at large conferences, showing how it can help qualify leads and handle engagements with customers. To be honest, I quite like this in general, though for implementations I do keep a (slightly skeptical) eye on it, to ensure it's working in the right way.

The official description of the proposed exam candidate is:

As a candidate for this Microsoft Certification, you design and configure AI-enhanced sales solutions by using Dynamics 365 Sales, Copilot in Dynamics 365 Sales, and agent capabilities to help sellers work more efficiently throughout the lead-to-cash process. You translate business requirements into practical seller workflows enhanced with conversational intelligence, predictive insights, guided automation, and secure data access.

In this role, you work closely with sales, operations, and IT stakeholders to help ensure that solutions align with revenue goals and process optimization.

You perform the following design and implementation tasks:

  • Configure Dynamics 365 Sales core features.
  • Deploy, manage, and monitor agents in Sales.
  • Implement collaboration features.
  • Tailor AI-powered intelligence features.

It is highly recommended that candidates complete training in intermediate-level Microsoft Power Platform configuration before taking this certification exam. Additionally, you must have functional knowledge of:

  • Building Power Automate cloud flows.
  • Interpreting an organization’s sales processes and seller experience.
  • Building and extending model-driven apps.

The overall information for the exam can be found at Microsoft Certified: Dynamics 365 Sales AI Consultant Associate, and there is an official Learning Path available for it.

I do like that the exam content overview calls out that Power Platform knowledge & configuration is highly recommended. Obviously Dynamics 365 is built on top of Power Platform, and having this knowledge (i.e. the ability to customise & extend with Power Platform capabilities) is key to well-thought-through implementations.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change for when it comes out of beta.

I’ve tried to group things as best together as I feel (in my recollection), to make it easier to revise.

  • Setup & Data
    • Environment creation & provisioning
    • Document management options & requirements
    • Enabling AI capabilities (Copilot, Sales Agent etc)
    • Configuring & customising forms
    • Configuring & customising views
  • Outbound calling
    • Configuration
    • Security requirements
  • AI Capabilities
    • Getting access to AI capabilities for users (deployment, security etc)
    • What the different AI agents & modes are, when to use them, and the behaviour of each
    • What blueprints are, how to use them, how to modify them
    • How AI agents handle communication retries
    • Creating custom agents
    • Analysing AI agent behaviour (runs, outcomes, metrics etc), monitoring information
    • Using AI to summarise records & ask for information
    • Ways to handle AI usage billing (what options are available, where to do this, how to do this)
  • Leads & Opportunities
    • Setting up & configuring predictive lead scoring models, requirements for implementing this
    • Understanding lead to opportunity conversion process, and continuing through to a final sale
    • Understanding sales goals, configuring sales goals/metrics/KPIs, and configuring rollup queries for aggregation
    • Assignment behaviour for leads to users, how this works, configuration for this
  • Products
    • The different ways to handle products (eg units, bundles, price lists, product families)
    • When each one should be used, and requirements for them
    • How to use the different components to configure specific scenarios
    • Relating products together
  • Pricing
    • Different ways to approach pricing products (eg singly, as a bundle, etc)
    • Handling multiple territories
    • Handling multiple currencies
    • Configuring price lists
    • Handling expired price lists & system behaviour
    • Handling discounting
  • Mobile app
    • Setup & configuration
    • Data synchronisation
    • Security setup & requirements
    • Push notifications
  • Power Automate
    • Understanding when to use different trigger types (automated/manual/schedule)
    • Usage for scenarios requiring approvals
  • Business process flows
    • What they are, and what they should be used for
    • How to configure, moving between stages, understanding how they work

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Exam AB-620: Design and build integrated AI agent solutions in Copilot Studio

We seem to be on a roll here over the last month or so with new exams being released (& it's not over yet!). With all of the emphasis on AI & agents, I decided to go and take the new Copilot Studio exam to see what it would be like.

Given that I have more than a passing familiarity with Copilot Studio (I use it for projects, and actually get hands-on with it quite a bit of the time), I felt that I'd be in a good place to handle it without any revision. Obviously this could have been a bold move, and it's up to everyone to make their own decisions about how much to revise (or not)!

Copilot Studio has moved on from when it first came onto the scene (and for those who remember, it used to be called Power Virtual Agents, or PVA). Nowadays it not only supports code within it, but it can also serve as the front end for other Microsoft AI capabilities, such as Microsoft Foundry models.

This is also the first time that it's been featured in its own exam – previously it got rolled into other exams (such as the PL-100, PL-200, etc), where it was just one of the components being covered (and covered in a lightweight manner, at that). With Microsoft's focus now so heavily on it, though, it's taken a step forward into the spotlight by itself.

The official description of the proposed exam candidate is:

As a candidate for this Microsoft Certification, you’re a professional developer or advanced builder who builds, extends, and integrates custom agents for enterprise-grade solutions. You typically work as an IT application developer, consultant, or independent software vendor (ISV) partner focused on creating scalable AI solutions for organizations or customers.

For this exam, you should be familiar with Power Fx, Microsoft Dataverse, Microsoft Power Platform environments and components, Microsoft 365 Copilot, Microsoft Foundry, and adaptive cards.

You need intermediate knowledge of generative AI concepts, including models, orchestration, retrieval-augmented generation (RAG), Model Context Protocol (MCP), Agent2Agent (A2A) protocol, and more. You should also have experience with prompt engineering and with REST APIs and integration patterns. Additionally, you need experience configuring agents with basic knowledge sources, instructions, tools, and topics in Microsoft Copilot Studio.

As a developer who works in Copilot Studio, you:

  • Integrate agents with Microsoft Foundry.
  • Integrate agents with Model Context Protocol (MCP) servers.
  • Integrate agents with custom connectors.
  • Integrate agents with APIs.
  • Integrate agents with Microsoft Fabric.
  • Automate tasks with computer use.
  • Integrate agents with connectors.

You create:

  • Multi-agent solutions.
  • Agents with enterprise knowledge sources (such as ServiceNow, SAP, and others).
  • Advanced agent topics and tools.
  • Computer-using agents.
  • Agents that perform advanced actions via APIs.

You collaborate with Microsoft 365 administrators, Microsoft Power Platform administrators, Microsoft Copilot administrators, Copilot Studio agent builders, Copilot Studio administrators, Foundry administrators, agentic AI business solutions architects, and Copilot Studio architects.

The overall information for the exam can be found at Microsoft Certified: AI Agent Builder Associate, and there is an official Learning Path available for it.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change for when it comes out of beta.

I’ll freely admit that there was a LOT more focus on MCP capabilities than I had expected there to be, but I guess that again this is natural, given how Microsoft is moving at the moment.

I’ve tried to group things as best together as I feel (in my recollection), to make it easier to revise.

  • Copilot Studio
    • Component/node types. What they are, how/when to use them
    • Using topic variables
    • Timeouts
    • Concurrency
    • Sensitive data & Using type ‘secret’ – what this does and why to use
    • Generative answers – how they work, limitations, what to know, how to configure & ground them
    • Computer Use
    • Connecting with Microsoft Graph
    • Connecting to other agents – how to do this, how to configure, what to use
  • Connector types
    • Standard connectors (ie connectors provided by Copilot Studio). When to use them, limitations
    • Custom connectors – what these are, why you'd use them (see the sketch after this list)
  • Security
    • Authentication types (API key, OAuth 2.0)
    • Query delegation
    • DLP policies
  • MCP servers
    • What they are
    • Connecting to them
    • Security with MCP servers
    • Authentication types
    • Usage of AI with MCP servers
  • Azure AI Search
    • Connecting to knowledge index
    • Configurations
    • Security
  • Solution Types
    • Default vs Unmanaged vs Managed
    • Environment variables
    • Creating solution
  • Application Lifecycle Management (ALM)
    • What this is, and why it’s needed
    • What approaches can be used, why to use them
    • What’s needed to set up ALM
  • Monitoring & Troubleshooting
    • Reporting on deployed agents
    • Evaluating usage of deployed agents
    • Identifying issues & errors
    • Stopping runs
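
One way to think about the connector & API items above: whether it's a standard connector, a custom connector, or a tool calling an API directly, under the covers the agent is making an authenticated REST call on someone's behalf. As a very rough sketch of the kind of call a custom connector wraps (the endpoint, token and payload here are entirely made up for illustration – in Copilot Studio the connector infrastructure handles the OAuth 2.0 / API key exchange for you):

```python
import requests  # pip install requests

# Hypothetical API and token, purely for illustration.
API_BASE = "https://api.example.com"
ACCESS_TOKEN = "<token acquired via OAuth 2.0>"

# Call the (made-up) endpoint with a bearer token and print the JSON response.
response = requests.get(
    f"{API_BASE}/orders/12345",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```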

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Exam AI-901: Microsoft Azure AI Fundamentals

With a massive amount of focus on AI across the Microsoft platform, I decided to sit the new AI-901 exam, which is the new Azure AI fundamentals exam. I'm far from being an Azure architect, but I'll freely admit to a decent amount of familiarity with a lot of Azure components, especially the AI ones. Having previously passed the AI-900 a while back, I was expecting the exam to be up to date with technical developments, but wasn't FULLY prepared for what it was actually like…

Now, obviously all Microsoft AI capabilities, regardless of where they're surfaced, actually sit (somewhere) within Azure. After all, Azure is the Microsoft cloud platform itself (well, until someone decides to rename it, of course).

My expectation going into the exam (with admittedly very minimal preparation) was that it would cover the basics of AI within Azure, similar to the way that the AI-900 exam did. Whilst this was somewhat the case, it didn't necessarily stay within the bounds of my expectations.

The official description of the proposed exam candidate is:

This certification is intended for individuals who want to start working with AI solutions built on Azure. It is suitable for learners from technical backgrounds, including aspiring junior developers who are starting to incorporate AI capabilities into applications. As a candidate for this certification, you should have familiarity with the self-paced or instructor-led learning material.

This certification assesses your ability to show the conceptual knowledge and practical understanding needed to work with AI solutions on Azure, including:

  • Understanding core cloud concepts, such as services and resource deployments
  • Using Microsoft Foundry to deploy models and implement single-agent solutions
  • Recognizing how client applications are put together and how AI models and services are consumed within those solutions
  • Understanding Python code examples that call AI models and services

This certification is intended to validate skills commonly used when performing tasks such as:

  • Adding AI workloads, including language, vision, and generative AI, to software or IT solutions
  • Exploring and using AI features in applications as a junior or entry level developer

The overall information for the exam can be found at Microsoft Certified: Microsoft Azure AI Fundamentals, and there is an official Learning Path available for it.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change for when it comes out of beta.

My main shock was the number of questions on Python code, including needing to select the right code syntax to use. Whilst I do understand that Microsoft is aiming to make Fundamentals-level exams/certifications more 'technical', I do feel that this is much more technical than the audience should be experiencing. I've also fed this back to Microsoft.
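
To give a flavour of the level involved (this is a sketch of my own, not an actual exam question), the sort of Python you're expected to be able to read is along the lines of calling a deployed model through the OpenAI/Azure SDK, and recognising things like the system message and temperature. The endpoint, key, API version and deployment name below are placeholders:

```python
# pip install openai
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<your-key>",
    api_version="2024-06-01",                     # placeholder API version
    azure_endpoint="https://<your-resource>.openai.azure.com",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",               # the deployment name, not the base model name
    temperature=0.2,                              # lower temperature = more deterministic output
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # system prompt
        {"role": "user", "content": "Summarise what Azure AI Search does."},
    ],
)
print(response.choices[0].message.content)
```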

I’ve tried to group things as best together as I feel (in my recollection), to make it easier to revise.

  • Analysis
    • Analyser types (audio, document, image, video). What each type is, how to configure them, and when to use them
    • Defining schemas for data extraction
    • How to extract content for analysis
  • Python
    • Using the Python SDK
    • Python code syntax and commands
  • Microsoft Foundry/Foundry Models
    • How AI models actually work when using/interfacing with them. Behaviour, access to content, prediction etc
    • LLM evaluations – comparing costs and capabilities
    • Creating, configuring, deploying, updating
    • Model temperature, inference
    • Minimising model bias, ensuring fairness
    • Connecting to a deployed model
    • Message structures for Foundry projects
    • Agent Evaluators – what they are, how to use them
    • Using Azure Content Understanding
  • Usage for models
    • Using Azure functions
    • Encoding images – data types
    • Voice Live (audio to text)
    • Azure Speech SDK, and the classes to use (see the sketch after this list)
  • Prompts:
    • Agent prompts. What are they, how are they used, why you should use them
    • System prompts. What are they, how are they used, why you should use them
  • Microsoft Responsible AI Principles – what they are, and examples of them
  • Why it's still important for humans to be involved in processes
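
For the Azure Speech SDK item above, the classes to recognise are along these lines – a minimal sketch of my own, assuming the azure-cognitiveservices-speech package (the key and region are placeholders):

```python
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_recognition_language = "en-GB"

# Recognise a single utterance from the default microphone and print the transcript.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)
```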

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Exam AB-731: AI Transformation Leader

What better way to start 2026 than to talk about a Microsoft certification, especially one for a totally NEW type of user!

Following on from the other AB exams I've been writing about my experiences with (see Exam AB-730: AI Business Professional, Exam AB-100: Agentic AI Business Solutions Architect and Exam AB-900: Microsoft 365 Copilot and Agent Administration Fundamentals), this article will cover the AB-731 exam.

This exam focuses on Microsoft AI capabilities from a business leader perspective, and to the best of my knowledge it's the first time that Microsoft has ever created an exam aimed at a 'Business Leader' audience. Taking it was a complete mindset shift for me, especially when seeing the questions – it's not about understanding in-depth technical capabilities, but more around the breadth of technology options (spanning Azure, Microsoft 365 Copilot, Copilot Studio & other tools), and what they bring/enable from a BUSINESS perspective.

The official description of the proposed exam candidate is:

As a candidate for this Microsoft Certification, you should understand how to recognize opportunities for AI transformation, identify the right AI tools and resources, plan for AI adoption, optimize business processes, and drive innovation by using Microsoft 365 Copilot and Azure AI services.

This Certification is designed for business decision-makers at all levels who are responsible for guiding transformation and innovation within their teams or organizations. In this role, you’re expected to demonstrate AI fluency, strategic vision, and the ability to lead AI adoption across teams and functions but are not expected to write any code.

As a candidate for this Certification, you should be able to evaluate AI opportunities, champion responsible AI practices, and align AI investments with business goals. You need experience leading adoption or change management in a business context. You must also be familiar with Microsoft 365 services, Azure AI services, and general AI capabilities.

The overall information for the exam can be found at Microsoft Certified: AI Transformation Leader, and there is an official Learning Path available for it.

As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change for when it comes out of beta.

Overall, the exam approach was quite different to me – though I do talk with organisations frequently around general AI matters, I’ve never taken an example written in this way beforehand. However, I do feel that it’s very helpful to have this in place, to ensure that business leaders can demonstrate that they actually do know what they’re talking about 😉

I’ve tried to group things as best together as I feel (in my recollection), to make it easier to revise.

  • Azure Components & Capabilities
    • AI Vision – what it can be used for, benefits of using it, capabilities that it has
    • AI Language – what it can be used for, benefits of using it, capabilities that it has
    • AI Document Intelligence – what it can be used for, benefits of using it, capabilities that it has
    • Machine Learning – what it can be used for, benefits of using it, capabilities that it has
    • AI Foundry – what it can be used for, benefits of using it, capabilities that it has
    • AI Search – what it can be used for, benefits of using it, capabilities that it has
  • Microsoft 365 Copilot Chat
    • What license is needed
    • What data does it have access to
    • What security controls are in place
  • Microsoft 365 Copilot
    • What is it, what can it be used for
    • What can it do
    • How does it connect to data
    • What are the connectors for it (standard & custom)
    • Benefits of using it (vs 3rd party AI tooling)
    • Different agents (eg Analyst & Researcher) within it – what they do, how to access and use them
  • Microsoft Copilot Studio
    • What is it, what can be used for
    • What can it do
    • What license is needed
    • What data can it access
  • Microsoft Security Copilot
    • What is it, what can be used for
    • What can it do
    • Benefits that it provides
  • Security & Governance
    • Content filtering controls within Copilot
    • Policies
    • Handling requirements to prevent inappropriate language & responses
    • Responsible AI principles
    • Governance ownership, responsibility & requirements
  • Generative AI
    • AI model hallucinations
    • Grounding in data
    • Improving response quality
    • Prompt engineering
    • Pre-trained models vs fine-tuned models
    • Reasoning models vs non-reasoning models
    • Understanding usage costs (including different pricing models)
    • What RAG is, and how it can be used for business scenarios (see the sketch after this list)
    • Adoption throughout organisations – personas to involve in adoption team

    I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

    Exam AB-900: Microsoft 365 Copilot and Agent Administration Fundamentals

    Following on from the other AB exams I've been writing about my experiences with (see Exam AB-730: AI Business Professional & Exam AB-100: Agentic AI Business Solutions Architect), this article will cover the AB-900 exam.

    This exam focuses on the Microsoft 365 Copilot capabilities from a user & administration perspective, and doesn't cover/include anything from Copilot Studio.

    Now, though it’s a Fundamentals exam, to be honest it’s the HARDEST fundamentals exam that I’ve ever taken!

    The approach is around being able to demonstrate an understanding of how to use Microsoft 365 Copilot, as well as a lot of focus on how to control & administer it.

    The official description of the proposed exam candidate is:

    As a candidate for this Microsoft Certification, you should be familiar with Microsoft 365, including core services, security, identity and access, data protection, and governance, along with Microsoft 365 Copilot and agents.

    Additionally, you should be familiar with the admin centers used to access Microsoft 365 workloads, such as Exchange Online, SharePoint in Microsoft 365, Microsoft Teams, Microsoft Entra, and Microsoft Purview. You need to have experience with AI-driven productivity tools and modern IT management practices.

    You must be able to identify the roles of the core features and objects available in Microsoft 365, such as users, groups, teams, sites, and libraries. Plus, you should understand the core security features of Microsoft 365, such as authentication methods, conditional access policies, and single sign-on (SSO).

    The overall information for the exam can be found at Microsoft 365 Certified: Copilot and Agent Administration Fundamentals, and there is an official Learning Path available for it.

    As I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change for when it comes out of beta.

    One thing to keep in mind about this exam – though I do mention Microsoft Purview in the list of items below, I haven’t gone into it extensively. However, there were a LOT of questions that touched on Purview (& other governance stuff as well) – you REALLY need to be knowing & understanding these capabilities to be able to take & pass the exam. Just guessing the answers is not going to help at all!

    Overall, the exam seemed to me to be pretty decent, though with indeed a heavy focus on security & governance (as I’ve mentioned above). I don’t see this as a bad thing though, as it can help to show that administrators really do know what they’re talking about.

    I’ve tried to group things as best together as I feel (in my recollection), to make it easier to revise.

    • Agent types
      • Native Microsoft 365 Copilot agent
      • Native Microsoft 365 Copilot advanced agents (eg Researcher & Analyst). What they are, how to access, what to use them for
      • Custom Microsoft 365 Copilot agent
      • SharePoint agent
    • Creating/using Agents
      • Using natural language to create agents
      • How to handle/perform multi-step reasoning
      • Use of notebooks
      • Custom instructions
      • Scheduling prompts
      • Querying data types
        • Structured
        • Unstructured
    • Governance & security
      • Blocking access to different types of searches & collateral
      • Blocking access to specific agents
      • Tools to use for blocking
      • How to share agents with other users
      • Assigning licenses to users
      • Data retention policies
      • Data labelling policies
      • Use of Microsoft Purview, covering capabilities, tools, auditing, how to use, etc
      • Use of DLP
      • Data source permissions
      • Conditional access policies
      • Microsoft Defender – what it is, capabilities it has, how to use it, etc.
      • Types of authentication
    • Reporting
      • Licensing & usage
      • Adoption & interactions
    • Payment options & capabilities
      • Credit usage – internal vs external users
      • Pay As You Go Billing, and scenarios you can use it for

    I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

    Ignite ’24 – Power Platform Governance Announcements

    Being at Microsoft Ignite '24 in Chicago is an amazing experience. Even MORE amazing are the announcements that the Power Platform Governance team has come out with. I've been fortunate enough to have been given early access to some of the features, and they're really awesome. Below, I've summarised what I believe to be the top picks to look at.

    Power Platform Admin Centre

    We’ve all been used to the PPAC experience that’s been around for a number of years. It’s been useful, but limited in various functions. Well, there’s not just been a facelift, but an entirely NEW PPAC experience for us. Here are some screenshots:

    There’s a massive amount of stuff to look through (& play with) – my overall impressions are that this will definitely help move forward with security, governance & everything that’s needed. More importantly, especially with the focus & mentions of Copilot & Copilot Studio, there’s a section reserved for that, which is going to be critical for IT admins:

    The new PPAC experience is also taking over the role that was previously played by the Power Platform CoE Starter Toolkit. Functionality is (slowly) being shifted into the main PPAC experience. One of these that’s already a great start is the Inventory capability:

    Behind the scenes, this is data being captured at the tenant level, which is stored in Dataverse (no, we don't YET have native access to the data, though I'm told the ability to query it is on the roadmap). The performance of this is extremely good, though there are still a few little bugs that are being worked out 🙂

    But more importantly, this also covers Copilot Studio components – to date there has not really been anything around to report on this properly…but now there is!

    Managed Environments

    We all know the conversation around Managed Environments, and the need sometimes to persuade organisations that premium licensing will actually deliver ROI for them. Well, with the new features that have been announced this week, this just got a WHOLE lot easier! Let's take a look at some of these items.

    Environment Rules

    Initially, when Managed Environments launched, there were just a few rules that could be applied. We were told that more were coming….and indeed they are! There are still more to come that the team is working on, but the number of rules has already increased massively:

    Some of my favourites here relate to the ability to manage Copilot – how these are handled is going to be SO important (especially with all of the emphasis on it coming out of Ignite). Being able to set/enforce authentication options, sharing options & various other settings is going to be KEY to proper Copilot governance.

    It also now gives options for backup retention policies. I've written previously about how to 'hack' longer backups for environments (Environment types, capabilities & backups) – we're now able to set longer backups for pure Power Platform environments without needing to enable Dynamics 365 applications within them (though of course you may still want to do this if you can see yourself using Dynamics 365 in the environment in the future – it's still not possible to upgrade the environment type at a later point).

    However there’s also something else new around environments. Previously if just looking at an environment from the main list of environments within PPAC, it wasn’t easy to see if it belonged to a Managed Environment group or not. Now it is – more so, you’re not able to tweak any settings on the general environment page that are being managed at the Environment Group level!

    DLP Capabilities

    One of the main challenges to date with DLP has been around the inability to block certain connectors (eg the Microsoft standard connectors). With Managed Environments, the team has now enabled organisations to be able to block ANY connectors that they wish to! If you’re not running Managed Environments, the existing limitations will still apply – you do need to be using Managed Environments for this! This will also be made available through the Power Platform API & Admin SDK tools in the coming weeks.

    Preferred Group

    Whilst we’ve had environment routing around now for a while (being able to auto-route new makers to a specific environments, which could be within a Managed Environment group), we haven’t had the ability to handle new environments being created & auto populated into an environment group.

    Well, this is now changing. We're now going to have the ability to auto-set policies, so that when a new environment is created, it can automatically be added to a Managed Environment group. Obviously, with this happening, the rules & policies applied at the group level will automatically be applied to the new environment as well! This will be a welcome relief to Power Platform administrators – to date we've been able to set up things like DLP policies to auto-apply to new environments, but managing them otherwise needed to be done manually…well, no more!

    Security Personas

    Until now, security & governance within Power Platform have taken a 'one size fits all' approach. Different types of people would access PPAC etc, but there wasn't really a way to differentiate between the different personas. This is now changing:

    In summary, these are incredible steps forward, and I know that there's a LOT more in the works that should be coming in the next weeks & months. I'm really excited about all of this, and about using these capabilities to continue enabling & empowering organisations from a security & governance point of view.

    Environment Grouping

    One of the main ‘complaints’ that Power Platform administrators have is around how policies are applied to environments. Within Azure, it’s possible to set up security policies and apply them in bulk, or group together components under a single set of policies. However when it comes to Power Platform, this has not been possible – each environment has needed to be configured on its own.

    I’m not talking here about DLP policies, as these are set up and then relevant environments selected/deselected as needed. I’m talking about things like setting Canvas App sharing limits, welcoming new makers, and other items.

    Well, Microsoft has now made this possible to do – though the current first iteration (now in Public Preview) only has a few options within it, I’m quite certain that many more items will be coming down the line to fall under the new Environment Grouping feature.

    At the moment, there are 6 options available for Power Platform administrators to set and configure. Note that you do need to have either the Global Tenant Administrator or Power Platform Administrator M365 security role to be able to access and carry this out.

    To be clear, Environment Grouping is a feature of Managed Environments. I’m not going to go into the debate about whether you should or shouldn’t adopt Managed Environments (at least not here – I may be speaking about it publicly later on this year), but you do need to have these in order to use this functionality. More specifically, you will ONLY be able to add environments that are set as ‘Managed’ to Environment Groups (though they don’t have to have Dataverse in play):

    So, what exactly is the purpose of Environment Grouping? Well, it’s to minimise the amount of time that Power Platform administrators need to spend in setting up & applying policies.

    Think of the users within your organisation. You're going to have different personas, such as developers, testers, end users, etc.

    You’re also likely (especially in larger organisation) to have different business units & functions requiring different items. For example, you may lock down access to social media, but Marketing and Recruitment may indeed need access to social media to be able to carry out their jobs.

    With these personas in mind, you can then start to look at building out different rule groupings, which will apply to all environments that are included under the Environment Group. It's somewhat similar to the way in which DLP policies work – you create a DLP policy, and every environment that comes under it then gets that policy's settings.

    There are many ways to manage pockets of environments within your tenant using environment groups. For example, global organisations can create an environment group for all environments in each geographic region to ensure compliance with legal and regulatory requirements. You can also organise environment groups by department or other criteria.

    One of the other features around Environment Groups is the ability to use Environment Routing. I've talked about this previously when the feature was first released (Developer Environment Routing!) – Environment Groups now take this to the next level, by being able to automatically set the Environment Group that new developer environments will fall under (so policies will be applied automatically). It's important to note here that all developer environments created through this WILL be set as 'Managed'.

    More information on the new capabilities can of course be found on Microsoft Learn, at https://learn.microsoft.com/en-us/power-platform/admin/environment-groups.

    I think that this is a great new feature to have in place for Power Platform administrators, and I look forward to seeing new functionality rolled out within it to enable organisations in a better way. Being able to cut down on administration/governance time whilst being more effective is, in my view, a win-win for ALL of us, and I can't wait to see how it will develop over time.

    So, my question to you is how would YOU look to use such functionality? What features might you like to appear within Environment Grouping to enable you and your organisation? Drop a comment below – I’d love to hear!

    Developer environments – new capabilities to create for users

    Developer environments are awesome. There – I’ve said it for the record. Formerly known as the ‘Community Plan’, developer environments are there for users to be able to play with things, get up to speed, test out new functionality, etc. They’re free to use – even with premium capabilities & connectors, users do not need premium licensing in place (caveat – if it’s enabled as a Managed Environment, it will require premium licensing).

    Originally, users were only able to create a single developer environment. However, earlier this year Microsoft lifted this restriction – users are now able to create up to THREE developer environments for their own usage (which makes it even easier for users to get used to ALM capabilities, and try them out for themselves).

    Now, the ability for users to create developer environments is controlled at the tenant level, and it’s either On or Off. It requires a global tenant admin to modify this setting, but it’s not possible to say ‘User Group A will not be able to create developer environments for themselves, but User Group B will be able to’.

    Organisations have differing viewpoints on whether they should allow their users the ability to create developer environments or not. I know this well, as usually I’m part of conversations with them when they’re debating this.

    One of the main challenges when organisations don't allow users to create their own developer environments has historically been that it's not been possible for someone else to create the environment on their behalf. If we think of 'traditional IT', if we're not able to do something due to locked-down permissions, we can usually ask 'IT' to do it for us, and grant us access. This has not been the case with developer environments though – well, not until recently.

    Something that I do from time to time is chat with the Microsoft Product Engineering groups, to provide feedback to (try to!) help iterate products forward and make them better. One of the conversations I had in the summer was with the team responsible for developer environments. I was able to share experiences & conversations that I had been having with large-scale enterprise organisations, and (very politely!) asked if they could look to open up the ability to do something around this.

    Around a month ago or so, the first iteration of this dropped – in the Power Platform Admin Centre interface, it was now possible to specify the user for whom an environment was to be created!

    This was an amazing start, and would definitely begin to unblock Power Platform IT teams in enabling their users, in circumstances where their organisations had decided to turn off the ability for users to create their own developer environments.

    However, this still needed to be done manually. Unless looking into an RPA process (which, let's face it, would be clunky & undesirable), it meant that someone with appropriate privileges would need to go & actually create the environment, and associate it with the user.

    However, this has now taken another MASSIVE step forward – I’m delighted to announce that this capability has been implemented in the Power Platform CLI, and is live RIGHT NOW (you’ll need to upgrade to the latest version – it’s present in 1.28.3 onwards).

    So, with this in place, it's now possible to use PowerShell commands to create developer environments on behalf of users, and assign them to those users. Organisations usually already have PowerShell scripts to handle new joiners, and will therefore be able to integrate this capability into them, to automatically set up developer environments for users. Alternatively, existing users could look to raise internal requests, and have them automated through the use of PowerShell (along with appropriate approval processes, of course!).
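
As a rough sketch of how this could slot into an existing joiner/automation script (this is my own illustration, not an official sample – the exact parameter names, in particular the user-assignment flag, should be checked against `pac admin create --help` for your CLI version):

```python
# Illustrative wrapper around the Power Platform CLI (pac) to create a developer
# environment for a named user as part of a new-joiner process.
import subprocess

def create_dev_environment_for(user_upn: str) -> None:
    subprocess.run(
        [
            "pac", "admin", "create",
            "--name", f"Dev - {user_upn}",
            "--type", "Developer",
            "--user", user_upn,   # assumed flag for assigning the environment to the user
        ],
        check=True,
    )

create_dev_environment_for("new.joiner@contoso.com")
```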

    So this is really nice to see. However, I think it can still go one step further (at least!), and I'm trying to use my network of connections to raise this with the right people.

    See, we have the Power Platform for Admins connector within Power Platform already. One of the functions available in this is to be able to create Power Platform environments:

    However, if we look at the action (& the advanced settings within this action), there’s no ability to set this:

    Interestingly enough, the API version listed by default is actually several years old. By doing some digging around, I can see that there are multiple later API versions, so I’m not sure why it’s using an older one by default:

    What would be really amazing is to have these capabilities surfaced directly within Power Platform, using this connector. Then we could look to have everything handled directly within Power Platform. Given that the CoE toolkit already includes an Environment Request feature, I would see this as building on top & enabling it even further. Obviously organisations wouldn’t need the CoE toolkit itself, as they could look to build out something custom to handle this.

    What are your thoughts on this – how do you see these features enabling your organisation? If your organisation HAS locked down the ability for users to provision developer environments, are you able to share some insights as to why? I'd love to hear more – drop a comment below!

    Developer Environment Routing!

    Recently I talked about the wider vision for helping users get access to the right environments (Default Environment – How to handle? » The CRM Ninja). As part of this, I discussed the Microsoft vision of having environment routing in place, to move users automatically to specific environments.

    At the point of writing, there wasn't anything that I could publicly talk about. However, overnight Microsoft has released functionality around this – what I see as the first step in this direction. The documentation for this is at https://powerapps.microsoft.com/en-us/blog/default-environment-routing-public-preview/

    The functionality released enables users who are new to Power Platform to automatically have a developer environment created for them, rather than landing in the Default environment within their tenant. Many organisations struggle with users creating content in the Default environment, when it's not really (at least not in my opinion) the right place to do this.

    Now, when we say ‘new users’, this doesn’t actually mean users newly created in M365 (or Entra ID/AAD). What this means is ‘users who have not accessed anything within Power Platform before’. In the back end, there’s a counter on each user record that keeps track of this, which this functionality is using to determine if users have accessed Power Platform beforehand or not.

    What is important to note on this as well is that the Default environment DOES NOT need to be set to Managed for this to work. Microsoft documentation doesn’t make this clear at the moment, but hopefully it’ll be updated soon to clarify this.

    Two settings do need to be toggled on within the Power Platform Admin Centre for this to work:

    Once these have been set & saved, let’s take a look at how things actually happen. I’ve created a new user for testing purposes:

    When signing in, it then briefly shows the general interface that we’re used to for a few seconds:

    But, then we get this exciting NEW screen!

    And then after a minute or so, we get placed nicely in the new environment:

    Looking at the Power Platform Admin Centre, we can see the new environment that’s been created:

    To be candid, during my testing things didn't always work – I saw some inconsistent behaviour, and (on one occasion) the interface just hung. I'm going to put this down to it being newly released & the product team working through potential issues (remember of course – this is in PREVIEW), and am hoping that they're resolved very soon.

    Also, it’s important to note that the developer environments created through this are MANAGED. Users will be able to create collateral in them, but to run apps etc will need premium licensing in place.

    Moving forward, it would be great to have some information displayed to users if something hasn't worked, as well as (configurable) notifications to admins so that they're aware as well. Examples of this could include where an organisation has maxed out the number of (free) developer licenses available (yes, I know this sounds strange, but there's a default limit of 9,999 developer licenses per org).

    But I think it’s a great first step forward, and hopefully there will be many different ways that this product will be developed forward. My initial thoughts would include:

    • Creating developer environments for existing Power Platform users who don’t have a personal developer environment
    • Routing existing Power Platform users who have their own Developer environment to it
    • Being able to route to other places as well, including being able to specify which users/groups of users should be routed

    It’s an exciting place to be in, and I look forward to seeing more of it!

    What are your thoughts around this? Does your organisation allow users to have personal developer environments, or does it lock this down?

    Default Environment – How to handle?

    As we’re all aware, the default (Power Platform) environment in any Azure tenant is a very ‘interesting’ thing to have. It’s there by default when an Azure tenant is created, all users within the Azure tenant automatically have access to it, we’re not able to restrict users from being in it, etc etc.

    Though it’s able to be backed up, it’s not able to be restored over itself, there’s no SLA/support available on it….the list goes on & on…!

    Many of us have come up against issues caused by people using the default environment without knowing about the challenges involved, which usually results in pulling out our hair, banging our heads against the wall, and other similarly productive approaches.

    However, it is the first place that users who are new to Power Platform end up, and instinctively they'll start building applications, automations etc within it (though usually without using solutions as a container for their development). So to date, there's not really been anything that could be done around this, apart from monitoring users & chasing them after the fact.

    Now, we’re all about enabling our users in the right way, helping educate & support them. Telling them a big NO doesn’t help, and can even be an initial blocker to having people start playing around & building technological solutions.

    So how can we go about enabling our users, whilst also having the appropriate level of governance over the top? Well, there are several steps that I think we can take, which will help us with this. Now, not all of these are yet in place, though they have been talked about publicly. So let's go and take a look at them.

    1. The first step, in my mind, is to start off with enabling the default environment as a managed environment (yes, this can ACTUALLY be done!). Managed environments have many different properties associated with them, but the one of most interest (for this at least) is the requirement to have a premium license in place.

    All users within an organisation should by default have an M365 license SKU against them (usually this would be an E3 or E5). Users with these can immediately use the seeded Power Platform capabilities within them to create Power Platform collateral (using standard connector capabilities). However, with the default environment being managed, they will NOT be able to access it!

    Note: For the moment, I’m leaving out users who have premium Power Platform licenses – this is deliberate

    2. Environment routing. Environment routing capabilities have recently been announced. These will enable users to be automatically routed to an appropriate environment, based on various conditions that can be set. With this, we could create appropriate business unit 'sandboxes', and route users to these. The user experience would be that when logging in, they would then automatically go to the right environment, rather than trying to work out which environment they should actually go to. This will save on confusion, and be a good user experience (in my opinion).
    3. Just-In-Time (JIT) Environment Creation. One of the items mentioned by Charles Lamanna at the European Power Platform Conference 2023 in Dublin is a new capability that's coming soon (I hope!). From the sound of it, this will give the ability to automatically create a new environment for users who do not already have one.

    This sounds really cool. With the recent advent of developer environments (& the ability for all users to have multiple of these), this could work REALLY well with the environment routing capability mentioned above. When a user logs in for the first time, it could look to see if they have a developer environment – if yes, then route them to it. But if they don't, it could automatically spin up & create a new developer environment, and route them to it.

    Now there are some caveats with this approach, leaving aside that some of the functionality isn’t GA yet.

    It would mean that organisations would need to be alright with changing the default environment to become a managed environment. Obviously, risk assessments would need to be carried out with this, and non-premium solutions migrated elsewhere.

    It’s also important to call out that organisations which have a CDS 1.0 implementation (ie before Power Platform became GA etc) will only have the ability to upgrade default to managed. They are not able to downgrade back to an unmanaged default environment, given limitations of the original CDS implementation (I’ve heard some truly HORRIFIC stories around this, so be careful!)

    The above, however, is just the start of things. There are many other concepts to keep in mind, such as Landing Zones, Policies, etc. I’m going to be looking to cover these in upcoming posts, so keep an eye out for them!