Troubleshooting the ‘Follow’ functionality

On a recent client project, we’ve come up against an interesting situation. Some of the users have the ‘Follow’ functionality available to them, but others don’t seem to have it. This, of course, is quite confusing, so I thought it would be good to write about it, for others who may come up against this.

But first, let’s take a step back. After all, before this had happened I had never heard of the ‘follow’ functionality within the system, and I’m quite sure that many others haven’t either! So what exactly is this all about?

What is ‘Follow’?

We’ve all been there – we have some customers who are ‘priority customers’, and we want to know/see everything that’s happening around them. Obviously we can go into their specific record/s, and see what’s going on. For example, seeing new cases added for these customers, other activities, etc. But what if we don’t want to have to manually open the records each time, or set up specific views in the system for them?

Well, this is where the Follow functionality comes in. It’s possible to track activities (in ‘real-time’) for records that a user follows. Microsoft has given us the ability to set this (or unset this) on a per-record basis, so that users can set their own preferences within the system. When a user follows a specific record, the details for that record then show up in the user’s activity feed. This can then be used further, such as displaying it within a dashboard, for example.

Follow functionality through views
Follow functionality on a specific record

It’s also possible to automatically follow records based on specific criteria.
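Under the hood, a follow is simply a row in the PostFollow table, with the followed record in its ‘Regarding’ lookup. Below is a minimal sketch of creating one through the Dataverse Web API – the org URL, token, and record ID are placeholders, and the exact navigation property name for the polymorphic lookup is my assumption, so do verify it in your own environment:

```python
import requests

# Placeholders - swap in your own environment URL, OAuth token & record ID
ORG_URL = "https://yourorg.crm.dynamics.com"
TOKEN = "<bearer token>"
ACCOUNT_ID = "00000000-0000-0000-0000-000000000000"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Content-Type": "application/json",
}

# Following a record = creating a postfollow row. The navigation property
# name below (regardingobjectid_account) is an assumption for following an
# Account; other tables will have their own navigation property names.
payload = {"regardingobjectid_account@odata.bind": f"/accounts({ACCOUNT_ID})"}

resp = requests.post(
    f"{ORG_URL}/api/data/v9.2/postfollows", headers=headers, json=payload
)
resp.raise_for_status()
print("Followed record:", resp.headers.get("OData-EntityId"))
```

Unfollowing, as far as I can tell, is simply the deletion of that same row.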

How to set up Follow functionality

For records to have the follow functionality available, the Activity Feed needs to be enabled for their table. The default system tables such as Accounts, Contacts & Leads already have this enabled, so records in them can be followed without any additional configuration.

To enable records in other tables (such as custom tables that you may have created) to be followed, we need to carry out the following steps:

  1. Go to the Advanced Settings menu, and open Activity Feeds Configuration

  2. Find the table that we want to configure this for (if it’s not showing up, click the ‘Refresh’ button on the menu)

Here we can see that the Channel table isn’t enabled at this point

  3. Click the ‘Activate’ button on the menu bar

  4. Confirm the pop-up screen

And voila – you’re done! Users will now be able to go into the table/s, and follow (or unfollow) records there.
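If you’d rather verify this programmatically, the configuration itself is stored in the Post Configuration (msdyn_postconfig) table, which can be queried through the Web API. Here’s a minimal sketch – the org URL and token are placeholders, and the column names are my assumptions from inspecting the table, so check them in your own environment:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder
TOKEN = "<bearer token>"                      # placeholder
headers = {"Authorization": f"Bearer {TOKEN}", "OData-Version": "4.0"}

# One msdyn_postconfig row exists per table configured for activity feeds.
# The msdyn_entityname & statecode column names are assumptions - verify them.
table = "new_channel"  # logical name of the table to check
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/msdyn_postconfigs",
    headers=headers,
    params={
        "$select": "msdyn_entityname,statecode",
        "$filter": f"msdyn_entityname eq '{table}'",
    },
)
resp.raise_for_status()
rows = resp.json()["value"]
if not rows:
    print(f"No activity feed configuration found for {table}")
else:
    # statecode 0 = active (enabled), 1 = inactive
    print(f"{table} activity feed statecode: {rows[0]['statecode']}")
```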

Troubleshooting

So we now understand what the follow functionality is, and how to enable it. But what happens when users can’t actually see it within the system, to be able to use it?

Well, there are several things that we can check to solve the issue:

  • Have activity feeds been configured for the table? If they’ve not been configured, then they’ll need to be set up (which is why I’ve included the steps above showing how to do this!)
  • Are security roles set up correctly?

The second one turned out to be the issue on this project. As originally mentioned, it was quite confusing that certain users did see the follow functionality, but other users didn’t.

The first place to check is the ‘follow’ privileges on each security role:

As you can see above, we had given organisation-level access on the security role (& actually across all security roles), though the users were still having issues. So the next step is to check a different security privilege within the security role. This is the ‘Post Configuration’ setting, which is found under the Custom Entities section (why it’s under Custom, I have NO idea):

Without this enabled, users with the security role will NOT be able to see/use the follow functionality within the system!
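If you want to confirm whether a given user actually holds that privilege, the Web API’s RetrieveUserPrivileges function can be used. A sketch is below – note that the privilege name prvReadmsdyn_PostConfig is my assumption, following the usual prv{Action}{SchemaName} naming pattern, so confirm the exact name in your environment first:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder
TOKEN = "<bearer token>"                      # placeholder
USER_ID = "00000000-0000-0000-0000-000000000000"  # systemuser ID to check
headers = {"Authorization": f"Bearer {TOKEN}", "OData-Version": "4.0"}

# Look up the privilege ID by name (the name itself is an assumption)
priv = requests.get(
    f"{ORG_URL}/api/data/v9.2/privileges",
    headers=headers,
    params={
        "$select": "privilegeid,name",
        "$filter": "name eq 'prvReadmsdyn_PostConfig'",
    },
)
priv.raise_for_status()
matches = priv.json()["value"]

# RetrieveUserPrivileges returns every privilege the user holds via roles/teams
user_privs = requests.get(
    f"{ORG_URL}/api/data/v9.2/systemusers({USER_ID})"
    "/Microsoft.Dynamics.CRM.RetrieveUserPrivileges()",
    headers=headers,
)
user_privs.raise_for_status()
held = {p["PrivilegeId"] for p in user_privs.json()["RolePrivileges"]}

if matches and matches[0]["privilegeid"] in held:
    print("User holds the Post Configuration read privilege")
else:
    print("User is MISSING the Post Configuration read privilege")
```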

Hopefully this should then sort out all issues, and users will be able to use the functionality as required.

Have you ever had issues with this feature? Have you found a different solution to fix it? Drop a comment below – I’d love to hear!

Environments & ‘Admin Mode’

With some recent events happening (both professional & personal), I’ve taken a slight step back from putting out posts on here. Thankfully things seem to be settling down, so I’m getting (back) into the swing of things!

I thought that it would be good to talk about a subject that I fell ‘foul’ of recently. This is around environments, and more specifically, the ‘admin mode’ that can be applied to them.

So what exactly is this ‘admin mode’? Well, the aim of it is to restrict access to certain users, namely System Administrators & System Customisers. Why would we want to do this? There are several scenarios that come to mind:

  • Performing a system upgrade (such as enabling new features)
  • Changing environment type (eg Production to Sandbox, or vice-versa)
  • Restoring an environment

Essentially, any time we have operational work that we want to carry out. This way whatever we’re doing won’t affect users, and anything that the users are doing won’t affect things either (symbiotic relationship there!).

So as an example, if we’re doing a major release which changes functionality within a system, we wouldn’t want users in the system carrying out their usual work, as this could cause data issues if records are saved during the actual release. We of course SHOULD be communicating to users that a release is going to take place, and that they shouldn’t be in the system at the time, but ‘admin mode’ is how we can truly enforce it.

Something to bear in mind as well is that if you’re going ahead & restoring an environment to a previous state (whether that’s an automatic save point, or a manual one), it will automatically put the environment into ‘admin mode’ once the restore has been completed. This is very important to keep in mind!

There are three settings around administration mode:

  1. ‘Administration Mode’. This sets whether admin mode is on or off!
  2. ‘Background Operations’. This sets whether background processes, such as workflows, Power Automate flows, and Exchange synchronisation, are enabled (allowed to happen) or disabled (stopped from happening).
  3. ‘Custom Message’. This allows you to set a custom message that users (who are not a system administrator/system customiser) will see when they attempt to access the environment.
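For anyone wanting to check these settings programmatically, here’s a very hedged sketch. It calls the Business Application Platform endpoint that the Power Platform admin centre itself uses, which isn’t a documented, supported contract – the URL, api-version, and property names below are assumptions from observed traffic, so treat it purely as a starting point:

```python
import requests

# Assumptions throughout: the BAP endpoint, api-version & property names
# are taken from observed admin centre traffic, not a documented contract.
TOKEN = "<bearer token for the BAP service>"  # placeholder
ENV_ID = "<environment ID>"                   # placeholder

resp = requests.get(
    "https://api.bap.microsoft.com/providers/Microsoft.BusinessAppPlatform"
    f"/scopes/admin/environments/{ENV_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"api-version": "2021-04-01"},  # assumption
)
resp.raise_for_status()
states = resp.json().get("properties", {}).get("states", {})

# After a restore, expect the runtime state to be 'AdminMode' rather than
# 'Enabled' - again, the exact values here are assumptions to verify.
print("Runtime state:", states.get("runtime", {}).get("id"))
```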

So this is the scenario that tripped me up a few weeks back:

  • I was needing to restore an environment to an earlier save point (to be clear, this was NOT a production environment)
  • I went ahead with the restore, and it completed successfully
  • As I was doing this at night, one of my children woke up, and I had to deal with them
  • I came back to things, saw that the restore had completed, and then went ahead with the release that I was needing to do

All seemed to go well. However, when users were testing (which admittedly was a few days later), they reported that some functionality wasn’t working. This was strange, as it had been working before the release (& the release that I did hadn’t actually touched it!).

It turned out to be Power Automate flows that just didn’t seem to be running. OK – I started to look into them, but couldn’t figure out why they hadn’t run.

Creating a test Power Automate flow didn’t seem to work either – despite running it to test it, the trigger never activated! I was quite puzzled by this, and couldn’t (initially) work out the reason.

Then I thought to check the environment settings! Lo & behold, the environment was STILL in administration mode, and the Background Operations option was disabled! Aha – I’d found the source!

Flipping this out of administration mode thankfully then allowed all Power Automate flows to work/run, and users confirmed that functionality was indeed running as expected. As you can imagine, I was quite relieved!


Something that I hadn’t realised previously is that if you manually put an environment into administration mode, it doesn’t automatically disable background processes. However, if you restore an environment, it DOES disable background processes by default. So if you’re wanting to try out automation items within a restored environment that’s still in administration mode, you’re going to need to flip the Background Operations toggle to allow them to work!

One further thing to note (which I’ve already been asked about by some people, so I thought I would mention it here). I’ve mentioned above that users were in the system, but reporting that things weren’t working. Now given that the environment was in administration mode, people have asked how users could be in it! The answer is that these users actually had the system customiser role applied to them, which is why they could get in! If they hadn’t had the role, then perhaps I might have realised things a little sooner (ie that the environment was in administration mode).

So a (good) little lesson learned, and I’ll definitely take it forwards. Has this, or anything else like it, ever tripped you up? Drop a comment below – I’d love to hear!

Canvas Apps & Power Automates

So it’s been a busy few weeks here, which is why I haven’t really been putting up any articles. March/April is always a busy time for our family with stuff going on, and this year I decided not to push myself to get articles out, as otherwise I’d be running very low on sleep!

That being said, I’ve still had some great ideas about things that I’d like to share, and have been keeping a series of short notes for me to pick up. Today’s topic is one of them, which I think has been a major pain to anyone involved in canvas app development!

So, the back story to this is that we’re able to use Power Automate flows together with canvas apps. What I mean by this is that we’re able to directly trigger them from within the canvas app, rather than needing to do something like edit or create a record, and then have the Power Automate flow trigger from the record creation or modification.

There’s a specific Power Apps trigger that’s available within Power Automate exactly for this purpose:

When clicked, it gives us the trigger line in the steps as follows:

So within the canvas app, we would bind a button (or another control) so that when it’s selected, it goes away & triggers the Power Automate flow. Great – so many different things that we can get to happen! One of the benefits of doing things like this is that we can then pass information from the Power Automate flow back to the canvas app directly:

This can then mean that the user can know, within the canvas app itself, that the Power Automate flow has run, and use data (or other things) that have come out of it.

OK – all good so far.

The main issue to date has been with deploying canvas apps together with Power Automate flows. See, as per best practice, we would create a solution, place the canvas app, flows, and anything else that’s necessary for it to work within it, and then deploy the solution to our target environment/s. And that’s where things just…didn’t go quite right.

Obviously within the development environment, the canvas app would be hooked up to the flows, and everything would work. Clicking the button would cause the flow to run, etc. User authentication would be in place (along with licenses of course!), and it was just fine.

But when deploying a solution containing canvas apps and associated flows between environments (regardless of whether it’s deployed manually, or automated using a tool such as Azure DevOps), the connections to the flows would be broken. Ie, the canvas app would run, but the flows wouldn’t trigger. Looking at the connections in the canvas app within Studio would show something like the following:

All of the connections to Power Automate flows would show as ‘Not connected’. It’s not even possible to click the ellipsis next to them and re-connect them – the only option available is to remove them from the canvas app!

So in order to get things working again, we’d need to do the following steps:

  • Open up the canvas app
  • Remove all connections to Power Automate flows
  • Add a temporary button, set it to be a Power Automate trigger
  • Click through all of the Power Automate flows that need to be connected (waiting for each one to connect before going to the next one)
  • Remove the temporary button
  • Save and publish the solution

This, in a nutshell, has been a (major) headache. For example, I’ve been working with a solution that has over 30 Power Automate flows that can be triggered from the canvas app (lots of different functionality!). Each deployment has needed the above process to be carried out, which has usually added on at least an hour to the deployment process!

Now, this hasn’t been something that’s been unknown. In fact, the official Microsoft documentation noted the following:

So this is something that Microsoft has been well aware of, but it’s been a pain point that we’ve had to work with.

However, this has now ALL changed, which I (and MANY others) are really pleased about!

Microsoft rolled out an update last month that means that canvas app connections to Power Automate flows will NOT break when they’re deployed across environments! This is such a massive time-saver that I’m now trying to work out what to do with all of my free time! Only kidding…more project work will commence!

So what we can now do is take our solution, deploy it across the different environment/s that we need to get it out to (whether manually, or automated using tools such as Azure DevOps), publish the solution, and then everything works! Amazing!!

One small caveat though – to ensure that this works, you will need to go into the app, and re-publish it on the latest Power Apps version. This should of course be done in a development environment, and then it can be exported and deployed as required.

Microsoft have also updated their documentation at https://docs.microsoft.com/en-us/powerapps/maker/data-platform/solutions-overview to remove the limitation text shown above. It’s a good place to keep an eye on changes that occur over time too.

This is definitely a welcome piece of development, and I know that we’ve been eagerly waiting for this for a while, and now it’s here!

PL-600: Microsoft Power Platform Solution Architect

Well, it’s FINALLY here. And by finally, I guess I’m saying that I’ve been waiting for this for a while? The PL-600 exam is the new ‘Holy Grail’ for Dynamics 365/Power Platform people, being the Solution Architect (3 star) exam. Ten minutes after it went live, I booked to take it, and four hours after it went live I sat it! (I would have taken it sooner, but had to have supper first, get the kids to bed, etc…)

The first solution architect exam that Microsoft released in this space was the MB-600 (see my exam experience write-up on it at MB-600 Solution Architect Exam). However, with the recent shift towards certifications for the wider Power Platform, it was inevitable that this exam would change as well.

Interestingly enough, the MB-600 now counts towards some of the Microsoft Partner qualifications. I’d expect that when it retires (currently planned for June 2021), the PL-600 will take the place of it in the required certifications to have.

So, how to discuss it? Well, the obvious first start is to link to the official Microsoft page for it, which is at https://docs.microsoft.com/en-us/learn/certifications/power-platform-solution-architect-expert/. According to the specification for it:

Microsoft Power Platform solution architects lead successful implementations and focus on how solutions address the broader business and technical needs of organizations.
A solution architect has functional and technical knowledge of the Power Platform, Dynamics 365 customer engagement apps, related Microsoft cloud solutions, and other third-party technologies. A solution architect applies knowledge and experience throughout an engagement. The solution architect performs proactive and preventative work to increase the value of the customer’s investment and promote organizational health. This role requires the ability to identify opportunities to solve business problems.
Solution architects have experience across functional and technical disciplines of the Power Platform. Solution architects should be able to facilitate design decisions across development, configuration, integration, infrastructure, security, availability, storage, and change management. This role balances a project’s business needs while meeting functional and non-functional requirements.

So it’s not really changed that much from the MB-600, though obviously there’s now an expectation for solutions to bring in other parts of the Power Platform, as well as dipping into Azure offerings. Pretty much par for the course, in my experience, with how recent projects that I’ve been on have been implemented.

At the time of writing, there are no official Microsoft Learning paths available to use to study. I do expect this to change in the near future, and will update this article when they’re out. However the objectives/sub-objectives are available to view from the main exam page, and I’d highly recommend going ahead & taking a good look at these.

Passing the exam (along with having either the PL-200 Microsoft Power Platform Functional Consultant or PL-400: Microsoft Power Platform Developer Exam qualifications as well) will result in a lovely (new) shiny badge. Oh, we do so love those three stars on it!

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

Overall, I had 47 questions, which is around the usual amount that I’ve experienced in my exams over the last year or so. What was slightly unusual was that instead of two case studies, I got three of them! (Note that your own experience may well vary from mine.)

Some of the naming conventions weren’t updated to the latest methods, which I would have expected them to be. I still had a few references to ‘entities’ and ‘fields’ come up, though for the most part ‘tables’ and ‘columns’ were used. I guess it’s a matter of time until everything is up to speed with it.

  • Environments
    • Region locations, handling scenarios with multiple countries
    • Analytics
    • Data migrations
  • Requirement Gathering
    • Functional
    • Non-functional
  • Data structure
    • Tables
      • Types of tables
        • Standard vs custom functionality
        • Virtual tables. What these are, when they would be used, limitations to them
        • Activity types
      • Table relationships & behaviours
      • Types of columns, what each one is suited for
      • Business rules. What they are, how they can be used
      • Business process flows. What they are, how they can be used
  • App types (differences between them, scenarios each one is best suited for)
    • Model
    • Canvas
    • Portal
  • Model-driven apps
    • Form controls (standard vs custom)
    • Form layout (standard functionality vs custom functionality)
    • Formatting inputs
    • Restricting inputs
  • Automation
    • Power Automate flows. What they are, how they can be used, restrictions with them
    • Azure Logic Apps. What they are, how they can be used, restrictions with them
    • Power Virtual Agents
  • Communication channels
    • Self service abilities through Power Virtual Agent chatbots. How this works, when you’d use them, limitations that exist
    • Live agent abilities through Omnichannel. How this is implemented, how customers can connect to a live agent (directly, as well as through chatbots)
    • Teams. When this can be used, how other platform abilities can be used through it
  • Integration
    • Integration tools
    • Power Platform systems
    • Azure systems
    • Third party systems
    • Reporting across data held in different systems
    • Dynamics 365 API
  • Reporting
    • Power BI. What it is, how it’s used, how it’s configured, limitations with it, how to share information with other users
    • Interactive Dashboards. What these are, how these are set up and used, limitations to them
  • Troubleshooting
    • Canvas app issues
    • Model driven app issues
    • Data migration
  • Security
    • Data Protection. What is it, where it’s set up, how it’s used across different requirements in the platform
    • Types of users (interactive/non-interactive)
    • Azure Active Directory, and the role/s it can play, different types of AAD authentication
    • Power Platform security roles
    • Power Platform security teams, types
    • Portal security
    • Restricting who can view forms
    • Field level security
    • Hierarchy abilities
    • Auditing abilities and controls

Wow. It’s a lot of stuff. Not that I’m surprised by that, as essentially it’s the sort of thing that I was expecting (being familiar with the MB-600). I think that on a ‘day to day’ basis, I cover most of these items already, so didn’t have to do a massive amount of revision for items that I wasn’t familiar with.

From my experience in taking it, I’d say that around 30% of the questions seemed to be focused on Dynamics 365, with 70% being focused on Power Platform capabilities. It’s about what I thought it would be when the exam was first announced. Obviously some people are more Dynamics 365 focused, and others are more Power Platform focused, but the aim of the exam (& qualification) is to really understand the breadth of the offerings available.

I can’t tell you if I’ve passed it or not…YET! Results aren’t going to be out for several months, based on previous experience with beta exams, but I’ve got a good feeling about this.

So, if you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Solution Dependencies & Management

Solutions are marvellous things. They enable us to be able to package up lots of components, and deploy them to different environments all together as one single package.

However, there have been changes over time as to how solutions are used. I’m not (for the most part) going to go into the Managed vs Unmanaged debate, which I leave to people who are more in the know….

Microsoft Dynamics 365 apps are installed using solutions. Third party apps provided by Independent Software Vendors (ISVs) also use solutions.

In Power Apps, solutions are leveraged to transport apps and components from one environment to another or to apply a set of customisations to existing apps. A solution can contain one or more apps as well as other components such as entities, option sets, etc. You can get a solution from AppSource or from an independent software vendor (ISV).

Custom development should also take place within a solution, to allow it to be deployed appropriately.

But it’s important to take a closer look at how solutions work overall, as we can be involved on multiple projects within the same environment. Not only that, some solutions may require other solutions to be present first, in order to actually work! A great example of this is Master Data Management (or MDM), which is where companies have a ‘backbone’ of data, which other parts of the system then hang off.

To understand this concept better, let’s take a quick look at solution layering.

Solution Layering

Layering occurs on the import of solutions and describes the dependency chain of components from the root solution introducing it, through each solution that extends or changes the components’ behaviours. Layers are created through an extension of an existing component (taking a dependency on it) or creation of a new component or version of a solution.

Managed and unmanaged solutions exist at different levels within a Microsoft Dataverse environment. In Dataverse, there are two distinct layer levels:

  • Unmanaged layer. All imported unmanaged solutions and unmanaged customizations exist at this layer. The unmanaged layer is a single layer.
  • Managed layers. All imported managed solutions and the system solution exist at this level. When multiple managed solutions are installed, the last one installed is above the managed solution installed previously. This means that the second solution installed can customize the one installed before it. When two managed solutions have conflicting definitions, the runtime behaviour is either “Last one wins” or a merge logic is implemented. If you uninstall a managed solution, the managed solution below it takes effect. If you uninstall all managed solutions, the default behaviour defined within the system solution is applied. At the base of the managed layers level is the system layer. The system layer contains the tables and components that are required for the platform to function.

The following diagram introduces how managed and unmanaged solutions interact with the system solution to control application behavior.

  • The system solution represents the solution components defined within Dynamics 365 or the Power Platform. Without any managed solutions or customisations, the system solution defines the default application behaviour. Many of the components in the system solution are customisable and can be used in managed solutions or unmanaged customisations.
  • Managed solutions are installed on top of the system solution and can modify any customisable solution components or add more solution components. Managed solutions can also be layered on top of other managed solutions. As long as a managed solution enables customization of its solution components, other managed solutions can be installed on top of it and modify any customisable solution components that it provides.
  • Unmanaged customisations. All customisable solution components provided by the system solution or any managed solutions can be customised in the unmanaged customisations layer.
  • Unmanaged solutions are groups of unmanaged customisations. Any unmanaged customised solution component can be associated with any number of unmanaged solutions. These can be edited & modified, regardless of the environment they’ve been deployed to.
  • The ultimate behaviour of an instance of Dynamics 365 or Power Platform application is the culmination of the system solution, any managed solutions, and any unmanaged customisations.

The official stance of Microsoft, according to its Application Lifecycle Management (ALM) documentation, is that unmanaged solutions are used for development, and that managed solutions are released downstream to further environments. For bespoke solutions, however, this may not fit, and an appropriate balance must be found.

Data ‘Backbone’ & Solution Dependencies

Given the way that companies are adopting Power Platform (and Dynamics 365, of course!), it’s highly likely that we will build out system structures that will form the backbone for multiple applications on an ongoing basis. With this in mind, it’s appropriate to put proper planning in place, to avoid any issues that could occur in the future, through appropriate system designs.

Solution Dependencies

When creating system structures within an environment, using unmanaged solutions, connecting two (or more) tables together will create dependencies on each other. In simple terms, if we connect Table A to Table B, there’s a reciprocal relationship created back from Table B to Table A:

This happens even if Table A is in Solution 1, and Table B is in Solution 2. If they’re in the same environment (& both solutions are unmanaged), it will create the two-way dependency.

This will cause issues if trying to deploy each solution individually, and the deployment will fail on import, as the system will require all items to be available in the solution.
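One way to see these reciprocal dependencies for yourself is the Web API’s RetrieveDependentComponents function, which lists everything that depends on a given component. A quick sketch is below – the org URL, token, and table name are placeholders, and the response shape may vary between versions, so inspect it rather than relying on my guess:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder
TOKEN = "<bearer token>"                      # placeholder
headers = {"Authorization": f"Bearer {TOKEN}", "OData-Version": "4.0"}

# Get the MetadataId of 'Table B' (new_tableb is a placeholder name)
meta = requests.get(
    f"{ORG_URL}/api/data/v9.2/EntityDefinitions(LogicalName='new_tableb')",
    headers=headers,
    params={"$select": "MetadataId"},
)
meta.raise_for_status()
object_id = meta.json()["MetadataId"]

# ComponentType 1 = Entity (table). The function returns the dependency
# records for everything that depends on this component.
deps = requests.get(
    f"{ORG_URL}/api/data/v9.2/RetrieveDependentComponents"
    "(ObjectId=@oid,ComponentType=@ct)",
    headers=headers,
    params={"@oid": object_id, "@ct": "1"},
)
deps.raise_for_status()
print(deps.json())  # inspect the returned dependencies
```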

Workable scenario

The way in which to handle the issue of solution dependencies is to ensure that the ‘master backbone’ of system design is created in the main development environment, and then to use that in secondary development environments as the core of additional solutions:

This is in line with recent emerging Microsoft best practice guidance around solution management (which is likely to be moving towards having a single environment per developer, rather than multiple developers working in the same environment).

The steps for doing this are as follows:

  1. Main ‘core solution’ exists (as unmanaged) within the main development environment
  2. When a project requires this to build upon:
    1. Secondary development environment is created
    2. ‘Core solution’ is exported as managed from the main development environment, & imported into the secondary development environment
    3. Project work is carried out within the secondary development environment
    4. Once the project solution is complete (or when appropriate for deployment), it can be exported from the secondary development environment
      1. If deploying directly from the secondary development environment to downstream environments, it should be exported as managed
    5. The solution should be exported as unmanaged, and imported back into the main development environment. This will not cause dependencies to be created with the ‘core solution’ in it

Note: The main ‘core solution’ should consist of the items that are needed for core system work. If additional items are needed for multiple projects to work off (eg Account Manager field), this would need to be added to the core solution, rather than the individual project solution/s, as otherwise there could be further issues downstream.
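The managed and unmanaged exports in the steps above can also be scripted, using the documented ExportSolution action of the Dataverse Web API. A minimal sketch, with the solution names, org URL, and token all being placeholders:

```python
import base64
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder
TOKEN = "<bearer token>"                      # placeholder
headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

def export_solution(name: str, managed: bool, path: str) -> None:
    """Export a solution via the ExportSolution action and save the zip."""
    resp = requests.post(
        f"{ORG_URL}/api/data/v9.2/ExportSolution",
        headers=headers,
        json={"SolutionName": name, "Managed": managed},
    )
    resp.raise_for_status()
    # The zip file comes back base64-encoded in ExportSolutionFile
    with open(path, "wb") as f:
        f.write(base64.b64decode(resp.json()["ExportSolutionFile"]))

# 'Core solution' goes out managed, for the secondary dev environments;
# the project solution comes back unmanaged into the main dev environment.
export_solution("CoreSolution", True, "CoreSolution_managed.zip")
export_solution("ProjectSolution", False, "ProjectSolution_unmanaged.zip")
```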

If the project is completed, but requires further work to be carried out later on (or development support), then the following should be done:

  1. Secondary development environment is created
  2. ‘Core solution’ exported from the main development environment as a managed solution, and imported into the secondary development environment
  3. Project solution exported as unmanaged from the main development environment, and imported into the secondary development environment
  4. Work and/or support can be carried out within the secondary development environment, and released appropriately

I’m expecting further information around this to be released by Microsoft in due course (I’m a little surprised there’s not more out there at the moment, to be honest!). It’s vital that we ensure that we’re working with solutions in the right way, to stop any issues occurring later on down the line.

Have you ever had a problem around this? Drop a comment below – I’d love to hear your experiences!

Omnichannel – Wave 1 2021

Today is a day that I’ve been looking forward to over the last few days. Leaving aside anything else that may be happening, it’s the day when the 2021 Wave 1 Release Notes come out! These cover the new functionality & features that will be released during the first half of 2021 for both Dynamics 365 & Power Platform.

There’s an amazing amount of functionality, but what I want to focus on specifically are the capabilities coming down the line around Omnichannel for Customer Service.

As I’ve done before, I’m going to include the dates that are applicable (at this point in time) for each item.

Enhancements to existing capabilities

Embedded analytics for Chat and Digital Messaging

GA – April 2021


Traditional dashboards have limited interactive capabilities and provide a narrow view into the overall organization. Omnichannel’s embedded analytics for chat and digital messaging allows service managers to identify problem areas and opportunities to improve from historical data, along with rich slice and dice capabilities powered by Power BI.

With this release, the embedded analytics for chat and digital messaging allows service managers to understand how agents and queues are performing. The analytics provide trends based on problem areas and opportunities allowing the service managers to analyze the corrective measures they can take, provide appropriate guidance to agents, and improve the customer support experience. Key Insights cards provide a glimpse into the notable trends on core metrics and topics that are important for a supervisor to further investigate the analytics.

Enhanced supervisor experiences for operational monitoring of Chat and Digital Messaging

GA – April 2021


Supervisors need key metrics and channel-specific performance measures to take operational decisions to meet and exceed service-level goals.

As contact centres deploy multiple channels to provide an omnichannel experience in customer service, supervisors can view and track relevant metrics for operational efficiency in the following ways:

  • Equip team leads to monitor channel-specific performance metrics to handle agents who are dedicated to a single channel
  • Enable senior team leads and service delivery managers to monitor all-up metrics across all channels
  • Capability to quickly switch between the views

Historical topic clustering for all channels

GA – April 2021


Topics are automatically generated using AI to organize similar issues into groups. By aggregating metrics from issues grouped into the same topic, organizations get a full view of KPIs and metric impact for each topic. For example, organisations can view the average handling time, sentiment, and CSAT for a specific topic, and whether the topic is a key driver for any of those metrics.

Modern Administration Experience for Omnichannel Chat and Digital Messaging

GA – April 2021


With the modern administration experience, administrators can easily start the first chat conversation with only a few clicks and see the immediate value of chat conversation powered by Omnichannel for Customer Service. The modern administration experience is intuitive to follow and allows administrators to quickly understand and perform the configuration steps.

Introducing the first run experience to help administrators automatically set up the chat channel and start the first chat conversation. Also, introducing the modern administration experience to guide administrators to set up the end-to-end configurations in Omnichannel for Customer Service.
The key highlights of this feature include:

  • First run experience of chat channel
  • Streamlined and simplified administration user experience of work stream, queue, and global setting configurations for digital messaging channels

Omnichannel Voice Channel

At Ignite in September 2020, Microsoft announced the new Voice channel for Dynamics 365 Customer Service. The aim of the solution is to provide simpler administration & management experiences within the platform itself, rather than needing traditional cloud component integration complexities.

With the release of this, voice, SMS, and digital messaging channels, and a PVA-powered intelligent interactive voice response (IVR), real-time voice intelligence, and insights across all channels, speech-based self-service, and intelligent skills-based routing are all brought together in a single package.

This feature is currently in invite-only private preview, with general availability planned as part of the April 2021 wave.

Call intelligence

GA – August 2021


The transcript of a call and an in-depth analysis of a particular call recording can help an organization better understand how the engagement with the customer progressed and present opportunities for agent training.

Through historical analytics, supervisors will be able to drill into a particular call to view more details. Each call will include voice-specific metrics such as talk-to-listen ratio, talking speed and more. Supervisors can also see the detailed sentiment throughout the call, shown alongside the transcript for further analysis. This view helps supervisors better understand how the call went and identify the areas to improve.

This capability leverages the call transcription and sentiment analysis to produce the following metrics:

  • Talking speed
  • Switches per hour
  • Pause before speaking
  • Longest customer monologue

Call Recording

GA – August 2021


Customer service agents typically need to review phone calls with customers. Call recording allows agents to record phone calls between agents and customers. This helps the organization to revisit the interaction to better understand the customer’s issues in his or her own words and increase the possibility of resolving the customer’s problems or questions. Call recordings are also helpful for training scenarios where an organization can share examples of great customer interactions among the team.

Call Transcription and Realtime Sentiment Analysis

GA – August 2021


Customer service agents often need to take notes while helping customers during a phone call. Call transcription converts a phone conversation into written words reducing the amount of notes an agent will need to take and helping with accessibility. Furthermore, sentiment analysis examines the conversation and identifies the general sentiment or “mood” of the customer like if they are slightly angry or very disappointed. Call transcription and sentiment analysis are both used by the system to proactively analyse cases and provide agents with suggestions to resolve the issue.

Call transcription converts a phone conversation into written words and stores them as plain text in real time as the call is in progress. Sentiment analysis, built on award-winning AI, tags a sentiment on top of a conversation, and is constantly updated as the conversation evolves.
Both call transcription and sentiment analysis are included out-of-the-box with no additional setup or configuration.

Consult and transfer

GA – August 2021


Omnichannel for Customer Service offers customer service agents the ability to easily consult with and transfer calls to other customer service representatives and helps agents have a greater chance to resolve customer issues.

While on a call with a customer, an agent can put the customer on hold and consult with another agent or manager on an issue that requires specific expertise. Agents can also transfer the call to a specific customer service agent, which is also referred to as a warm transfer. In other scenarios, the agent can transfer the call to a queue, from where it is routed to the best available agent based on rules configured by your business.

Direct outbound calling

GA – August 2021


The ability of agents to contact customers via voice calling remains one of the most important customer interaction methods in Customer Service. Direct outbound calling enables agents to contact customers using our native fully integrated voice channel based on Azure Communication Services, where voice is just another channel for agents and supervisors.

Agents can contact customers using voice calling. Direct outbound calls can be initiated via click-to-call directly from phone number fields in the following:

  • Cases
  • Customer profiles
  • Call back activities
  • Ongoing chat conversations
  • Via a phone dialler

Outbound calls are displayed as conversations in conversation history contextually per case/customer and timelines. Supervisors can monitor outbound calls just like any other customer interaction.

This feature includes the following key highlights:

  • Fully integrated outbound voice channel without third party voice integration
  • Sample outbound voice channel configured automatically on voice channel provisioning.
  • Easy channel administration within the Omnichannel admin experience.
  • Outbound voice conversations are just another conversation type in Omnichannel.
  • Supervisors can monitor outbound calls from within the ongoing conversations dashboard like any other agent/customer interaction.

Embedded analytics for voice channel

GA – August 2021


Traditional dashboards have limited interactive capabilities and provide a narrow view into the overall organization. With historical data, embedded analytics for voice channel empowers service managers to identify problem areas and opportunities to improve and provides rich slice and dice capabilities powered by Power BI.

Customer service managers or supervisors are responsible for managing the agents who work to resolve customer queries every day through phone channel. With this release, the embedded analytics provide trends over a period to understand how agents and queues are performing, so that service managers can take corrective measures, provide appropriate guidance to agents, and improve the customer support experience. Key Insights cards provide an “at a glance” view into notable trends on core metrics and topics that are important for a supervisor to investigate further in the comprehensive reports. Agent-focused views display core metrics to better understand the primary areas an agent worked in and identify opportunities for coaching.

With these views, supervisors can:

  • Monitor operational metrics, such as inbound calls, calls handled, abandon rate, average talk time, and average speed to answer calls, across channels, queues, agents, and topics.
  • Monitor support quality through sentiment analysis across channels, queues, agents, and topics.

Intelligent voice via PVA and Azure Bot Framework

GA – August 2021


With speech-enabled Power Virtual Agents, businesses can empower business users to build and update intelligent voice bots that use built-in natural language processing capabilities to engage conversationally with customers and provide personalized self-service always. Bots can be built once and deployed across messaging and voice channels for maximum efficiency and consistency. For more advanced scenarios, businesses can integrate bots built with the Microsoft Bot Framework to work on the voice channel.

With this feature, businesses have a familiar bot authoring experience for all customer service bots, across messaging and voice. Customers will enjoy flexible, free-form service experiences, instead of inflexible menu trees. Bots can easily hand off the call to human agents, with the conversation history and context gathered by the bot. This allows Omnichannel for Customer Service to route the customer from the bot to the best available live agent, providing a seamless, contextual hand-off.
The key highlights of this feature include:

  • Enable Power Virtual Agents and Azure Bot Framework bots to provide intelligent voice bots on the voice channel
  • Support for built-in dual-tone multi-frequency (DTMF) as a secondary means to interact with the bot
  • Transfer calls to human agents with full transcript and context
  • Use bots for post-call surveys

Modern Administration Experience for Omnichannel Voice (Number Management)

GA – August 2021

Typically, customer service organizations must manually integrate standalone telephony and customer relationship management (CRM) solutions, which results in fragmented experiences and error-prone data integration. Administrators need to manage resources and phone numbers in the telephony provider’s app and manually bring over this information to the CRM solution. Very often, this setup process requires collaboration between business and IT administrators, adding delay to an already lengthy process. With the availability of Azure Communication Services, Omnichannel for Customer Service now offers native voice channel. This all-in-one solution empowers business administrators to independently deploy a telephony resource and acquire phone numbers in a few steps, offering a fast and consistent experience.

Until now, administrators created resources and managed phone numbers in a separate telephony application and then manually deployed the numbers in the CRM solution. The long-fragmented process is inconsistent and requires continuous maintenance to keep both applications in sync.
With the native voice channel, business administrators can deploy the telephony resource and acquire phone numbers without leaving the Omnichannel Administration app.

The key highlights of this feature include:

  • Telephony resource deployment using connection string or sign into Azure account.
  • Acquiring phone numbers of various types and plans.
  • Releasing phone numbers.

Modern Administration for Omnichannel SMS via ACS (Number Management)

GA – August 2021

This feature mirrors the voice number management capability described above, applied to SMS. Until now, administrators created resources and managed phone numbers in a separate telephony application and then manually deployed the numbers in the CRM solution. With the availability of Azure Communication Services, business administrators can instead deploy the telephony resource and acquire, manage, and release SMS-capable phone numbers without leaving the Omnichannel Administration app.

Supervisor monitoring and barge

GA – August 2021


Service managers are responsible for the overall quality of customer service and often need to observe customer service representatives while they are on the phone with customers. Omnichannel for Customer Service allows supervisors to listen in on phone conversations and join a conversation, if needed. This helps supervisors increase the likelihood of resolving customer issues, enforce proper business practices, and identify training opportunities.

When supervisors log into the application, they are provided a list of phone calls that are in progress. From the list, they can choose to join a call with the option to join anonymously as a hidden participant. If they want to intervene, they can join the call, referred to as “barging”, which then becomes a group call.

Topic Clustering for Voice

GA – August 2021

As with the historical topic clustering capability described earlier, topics are automatically generated using AI to organise similar issues into groups, giving organisations a full view of KPIs and metric impact for each topic (for example, average handling time, sentiment, and CSAT).

Topics, which represent semantically similar support issues, help organizations better identify and respond to issues their customers are facing. Correlating these topics along with core historical analytics makes it quick and easy for a supervisor to see common issues by volume, CSAT impact and new cases, helping to identify where they should invest their time.
In this release, the same capability will now be applied to the voice channel, generating topics from the transcript. This will help organizations better understand issues that customers face and their impact on core business metrics across the spectrum of engagement.

I’m really quite excited to see how the new Voice channel will be received, as I think it’s a great feature addition to the overall tools available. It will be interesting to see how clients may choose to use it over their existing voice channel setup.

I’ll be looking deeper into the different functionalities, and will share them here. If there’s anything you think would be helpful to focus on, drop a comment & let me know!

Managed Solutions, & replacing a field

Well to start with, I’m sure that I’m going to get pulled up by some people for my use of the word ‘field’ in the title. After all, officially it’s now a ‘column’! But I (still) can’t let go of calling them as I’ve done so for over a decade, so field it is.

Now to the actual topic of this blog post, which is centred around Managed Solutions. Leaving aside the whole debate about whether we should be using managed or unmanaged solutions (& when/where to do each), there is one definitive benefit of using a managed solution.

See, unmanaged solutions are additive in nature. Work is done in the development environment, then deployed. Further work is done (additional items added, etc), and deployed, and they then appear in the downstream environments. However, if you delete an item in the development environment, it’s not removed when the solution is deployed downstream.

Managed solutions, on the other hand, are both additive & detractive. As with unmanaged solutions, items added in the development environment are also added downstream when deployed. However, if an item is removed from the solution in the development environment, it will also be removed when the solution is deployed downstream. It’s one of the useful ways to ensure that you don’t end up with random unused items just lying around in Production (which have a habit then of popping up in the Advanced Find window, for example). So it’s really quite handy for a lot of reasons to go down this route.

Well, I found myself going down this route recently, but with slightly unexpected results, I’ll freely admit…

The scenario was that we had deployed a managed solution to the UAT (test) environment on a client project. Then the client changed their mind (shock & horror!!) as to a specific item, and we needed to change it from a text item to a lookup item. Obviously (as per best practice, of course) this would need to be done in the development environment, and then released downstream. Given that this is a managed solution, I’d expect this to work, without any issues. Well, it didn’t…

The change in the development environment (deleted the old item, ‘re-created’ it as a lookup with the same system name) was done, we exported it as managed, and then went to import it in the UAT environment. It took the solution file, thought about it for a while (it’s somewhat of a large solution), & then errored:

Exception type: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault] Message: Attribute mdm_field is a String, but a Lookup type was specified.

Now I was somewhat confused by this message occurring. It’s not the first time I’ve seen it over the years, but in my previous experience I’ve seen it when handling unmanaged solutions. It’s when you delete an item in the development environment, re-create it as a different item type (with the same underlying system name), and then deploy it as unmanaged. The solution import in the second environment fails due to the difference in type (as it sees the same name). This, of course, is to be expected.

But here we’ve been using managed solutions for deployment, and as mentioned above, they’re detractive as well. The expected behaviour (at least from my side of things) would be that the system would note that the item type has changed, remove the old item, & import the new item. In my mind, that’s logical, but apparently not?

See, even managed solutions have their limitations, of which this is one. Having checked with several other people who I reached out to around this, I’ve discovered that it can’t work in the way that I was expecting it to. Instead, a specific process has to be followed:

  1. In the development environment, remove the item, & export the solution as managed
  2. In the downstream environment(s), deploy this (interim) managed solution. This will remove the item from the environments
  3. In the development environment, re-create the item with the different system type. Then export it as managed
  4. In the downstream environments, deploy this solution. This will then add the item (with the new system type) into the environment.

This means that development & deployment teams (if separate ones) need to co-ordinate around this, to ensure it’s done in the right way. It could also be developed/exported in succession, and then imported in succession as well (either manually, or through an Azure DevOps Pipeline, for example).
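As a pre-flight check before the import, it can also be worth comparing the column’s type in the target environment against what the new solution expects, via the metadata Web API. A sketch is below – the table name is a placeholder, as the error message above doesn’t say which table mdm_field belongs to:

```python
import requests

ORG_URL = "https://target.crm.dynamics.com"  # target environment (placeholder)
TOKEN = "<bearer token>"                     # placeholder
headers = {"Authorization": f"Bearer {TOKEN}", "OData-Version": "4.0"}

def attribute_type(table: str, column: str) -> str:
    """Read a column's AttributeType from the metadata Web API."""
    resp = requests.get(
        f"{ORG_URL}/api/data/v9.2/EntityDefinitions(LogicalName='{table}')"
        f"/Attributes(LogicalName='{column}')",
        headers=headers,
        params={"$select": "AttributeType"},
    )
    resp.raise_for_status()
    return resp.json()["AttributeType"]

# 'account' is a placeholder table - use the table that holds the column.
# If this prints 'String' while the incoming solution defines a Lookup,
# the two-step (remove, then re-create) deployment above is needed.
print("Target currently has:", attribute_type("account", "mdm_field"))
```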

This worked wonderfully for us, and to be honest, I was quite relieved after several hours of frustration with things. Even better, it was a Friday, so meant that the week could end well!

Have you ever come across this, and been frustrated as well? Have you got a similar story with something else that happened to you around solutions? Drop a comment below – I’d love to hear!

Customising Case Resolutions

Well, the title is a bit of a mouthful, I’ll admit. Hopefully though this brings some good information, and can help people out.

Cases are wonderful things, and can be used for tracking client interactions, compliments/complaints, and so many other things. Cases also give us the ability to resolve them, and to record information around the resolution.

Now, the standard way of doing this provides the following screen:

There’s the ability to set the Resolution Type (being a dropdown, aka Choice, field), & to put in free text for the Resolution itself (allowing us to track information around it). There are also time fields, which can be used for working out the time spent, as well as any time that’s going to be chargeable.

Now when going in to modify these, we’d think to open up the Case Resolution table. However, this isn’t actually the right place to do it. Instead, we need to update the Case table itself, as the Case Resolution items come from the Case Status field!

Somewhat annoyingly, it’s not possible to do this through the new ‘Maker’ interface:

In order to actually handle this, we need to switch across to the Classic editor to set this up. This could be because it’s actually a situation of having both parent & child entries. What I mean by this is that there’s the actual status (being Active, Resolved or Cancelled), and then a reason under each one. Hopefully at some point it’ll be updated into the new UI, so that we can do it from there.

We’ll need to change the Status item to ‘Resolved’, & can then add in the options that we want:

After adding them, we need to save & publish, and then they’ll show up for us, and are able to be selected:
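If you’d like to verify the new options programmatically, they live on the Case (incident) table’s statuscode column, and can be read from the metadata Web API by casting the column to StatusAttributeMetadata and expanding its option set. A minimal sketch (org URL and token are placeholders):

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder
TOKEN = "<bearer token>"                      # placeholder
headers = {"Authorization": f"Bearer {TOKEN}", "OData-Version": "4.0"}

# Cast the statuscode attribute to StatusAttributeMetadata so that the
# OptionSet (with each option's linked state) can be expanded.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/EntityDefinitions(LogicalName='incident')"
    "/Attributes(LogicalName='statuscode')"
    "/Microsoft.Dynamics.CRM.StatusAttributeMetadata",
    headers=headers,
    params={"$select": "LogicalName", "$expand": "OptionSet"},
)
resp.raise_for_status()

for option in resp.json()["OptionSet"]["Options"]:
    label = option["Label"]["UserLocalizedLabel"]["Label"]
    # For Cases, State 1 = Resolved - these are the resolution values
    print(f"{option['Value']}: {label} (state {option.get('State')})")
```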

So that’s great – we’re able to customise it. But what if we’re wanting to customise the actual ‘Resolve Case’ form itself? Not everyone wants to show Time/Billable Time on it (quite a few of our clients ask us to remove it), and perhaps they want to add additional custom fields.

So from the usual perspective of doing this, we’d open up the Case Resolution table, create new fields as required, and modify the existing form (we’re not able to create any other forms for this specific table). After all, this is how we’d do it for any table in the system (whether a standard one, or a custom one). This is going to be the Main form, rather than the QuickCreate one:

We save & publish it, and then would open up a Case record, click ‘Resolve Case’, and expect to see it. However, that doesn’t happen, which has been most puzzling to me!

It turns out that there are two things that need to be done in order to see our ‘custom’ form (though it’s not really custom, as it’s a modification of the default form, but whatever).

  1. We need to modify security permissions for users, which is a critical requirement. An example of this is shown below:
Security Role: Customer Service Representative

2. We need to enable customisable dialogues. Yes, it’s a setting that needs to be updated in order for users to see the custom layout of the form. If we don’t do this, they’re shown the default form, even though we’ve modified it! It seems a little strange that the system has this concept of a ‘shadow’ form, but I guess that’s how it is.

To do this, we need to go into the Service Management settings area. I usually launch this through the Customer Service Hub app, though it’s available through several of the other standard apps as well:

Once there, we need to click into the Service Configuration menu item, and then change the ‘Resolve Case Dialogue’ option as shown below:

Remember to click the ‘Save’ button to save this.

Finally we can go back to our Case record, click ‘Resolve Case’, and look what appears!

So in summary, it’s definitely possible to modify the way that Case resolutions work in the system. It does take a little bit of fiddling around with settings in different areas, which can be confusing if we’re not used to it, but it can give a great result in the end.

Have you ever come across this, and wondered how to do it? Have you developed Case Resolutions any further? Drop a comment below – I’d love to hear!

PL-200 Microsoft Power Platform Functional Consultant

Well, the last week has been quite busy, on many fronts! One of those is having a few new exams come out in Beta. I’ve already taken the PL-400 (see PL-400: Microsoft Power Platform Developer Exam for my review of it). Last Friday, the new PL-200 exam was released as well, so I scheduled it in for as soon as I could sit it.

Now, the PL-200 is scheduled to replace the MB-200 exam at the end of this year (2020), assuming it comes out of beta by then, of course. I remember sitting my MB-200, though I didn’t write up about it at the time. Compared to some of the other exams I’ve taken, it was hefty. I’ll freely admit that I didn’t pass it on my first go – it took me three tries to gain it! People will be required to take the PL-200 as a pre-requisite for attaining the Microsoft Certified: Power Platform Functional Consultant Associate badge.

So I’d been expecting this new PL-200 to be quite similar, but with more of a Power Platform focus. It’s still heavy on Dynamics 365, and I wasn’t expecting that part to change. The existing MB-2xx series is also staying in place (for the moment, anyhow).

According to the official description for the exam:

Candidates for this exam perform discovery, capture requirements, engage subject matter experts and stakeholders, translate requirements, and configure Power Platform solutions and apps. They create application enhancements, custom user experiences, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates implement the design provided by and in collaboration with a solution architect and the standards, branding, and artifacts established by User Experience Designers. They design integrations to provide seamless integration with third party applications and services.

Candidates actively collaborate with quality assurance team members to ensure that solutions meet functional and non-functional requirements. They identify, generate, and deliver artifacts for packaging and deployment to DevOps engineers, and provide operations and maintenance training to Power Platform administrators.

The official Microsoft Learn page for the exam is at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-200, and I’d highly recommend that people go check it out. I didn’t use it that much myself, as I felt I was on reasonably solid ground with my existing knowledge. The material there covers most of the exam, but (at least in my sitting) there were some sneaky extras that I was NOT really expecting. Hopefully I managed to get them (mostly) accurate!

Once again, I sat the exam through the proctored option (ie from home). The experience went without issues for once – sign in was fine, no issues with my headset during check-in, exam loaded & worked without problems at all.

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

  • Environments
    • Different types of environments, what each one is used for, how to set/switch them between the different types
    • How to handle security/restrict access as necessary
  • Field types. All of the available field types, what are the benefits of each, and when each type should be used
  • Data storage types. Differences between Office documents (eg Excel), CDS, SQL Server, Azure SQL. When to use each one best
  • Charts. How they’re set up, how they can be shared with other users.
  • System views. What these are, who can access them, how to set them up
  • Entity forms. The different types of forms available, how to set them up, limitations of each. When each one should be used for a given scenario
  • Model apps. Site map. What this is, how it’s used. Implementing/customising it, the different controls available & what each one does
  • Entity editable grids
    • What these are, how they can be used, how to enable & set them up
    • Limitations that they have within the system
  • Entity/record ownership. The different types of ownership available, benefits of each, when each should be used for a given scenario
  • Data management
    • Data importing from different sources, different methods to import data
    • What is data mapping for import, and how it’s used
  • Duplicate detection. What it is, what it does, how it works. How to implement & configure it
  • Microsoft Word templates. How they can interact with Dynamics 365, how to set them up/adjust them, what they can be used for
  • Canvas Apps
    • Expression/function types, what they are, how they’re used
    • Handling data (eg collections)
    • Offline usage & data storage
    • Controls that can be used, navigating around, loading/saving data.
  • Power Virtual Agent/Chatbots.
    • Setting them up, deploying them onto websites, deploying them into Teams
    • Configuring topics, routing, handling unknown questions
    • Bot model data, including being able to access across multiple chatbots
    • Reporting on their usage, & how customer engagements have been processed
  • Power App portals
    • Registering users, registration code process
    • Validating/confirming user accounts
    • Forms security, displaying/hiding forms & data
  • AI capabilities. AI models available. Pre-built models vs custom training, capabilities (eg text scanning), and when to use each one.
  • Omnichannel
    • What it is, when it’s used
    • How to implement & deploy it, and how to configure customers being sent through to it
  • Automation
    • Workflows, Power Automate, Business Process Flows
    • What each one is, benefits/use cases for each one, when to use each for specific scenarios
  • Power Automate
    • What are triggers, & how do they work
    • What are actions, and how do they work
    • What are connectors, and how do they work
    • Prebuilt vs custom connectors, capabilities, and when to use each one
    • How to set up each type & configure them
    • Instant vs Scheduled vs Triggered
    • Security – how to enable/disable their use by users
  • Business Process Flows
    • What they are, how they’re used, limitations that they have
    • How to handle security for them
  • Business rules
    • What they are, how they’re used, how to set up/configure
    • How to use them in different parts of the system (eg forms, apps, etc)
    • Actions vs Conditions vs Recommendations
  • UI Flows (RPA)
    • What these are, how they are used
    • Requirements in order to use them
    • Desktop vs Cloud
    • Implementation, customisation, configuration & deployment
    • Limitations of them
    • Data extraction from runs
  • Security & Compliance
    • Security roles, security teams, security groups
    • What each one is, how it’s used
    • System auditing, what it is, how it’s used, how to implement & configure
    • How to access & run user audit log reports
  • Power BI. Setting up & sharing dashboards, setting up & configuring alerts, security options/roles & how they work with data
  • Dynamics 365 integrations. What other systems can integrate directly with Dynamics 365, & any limitations that they may have

The main surprise for me was around the UI flows, and the various questions I had on them. I’ve not played around with them (yet!), but they look really cool!

If you’re going to take this, I’d love to hear how your experience of it went. Drop a comment below for me to see!

PL-400: Microsoft Power Platform Developer Exam

I’ve been continuing with taking new exams as they come out. Having recently taken the MB-400 exam (see MB-400 Power Apps & Dynamics 365 Developer Exam), I was slightly surprised to see the announcement that it was going to be replaced!

Admittedly, I was also surprised (in a good way) that I passed the MB-400, not being a developer! It’s been quite amusing to tell people that I’m a certified Microsoft Dynamics Developer. It definitely puts a certain look on their faces, which always cracks me up.

Then again, the general approach seems to be to move all of the ‘traditional’ Dynamics 365 exams to the new Power Platform (PL) format. This obviously includes re-doing the exams to be more Power Platform centric, covering the different parts of the platform rather than just the ‘first party apps’. It’s going to be interesting to see how this landscape extends & matures over time.

The learning path came out in the summer, and is located at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-400. It’s actually quite good. There’s quite a lot that overlaps with the MB-400 exam material, as well as the information that’s recently been covered by Julian Sharp & Joe Griffin.

The official description of the exam is:

Candidates for this exam design, develop, secure, and troubleshoot Power Platform solutions. Candidates implement components of a solution, including application enhancements, custom user experience, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates must have strong applied knowledge of Power Platform services, including in-depth understanding of capabilities, boundaries, and constraints. Candidates should have a basic understanding of DevOps practices for Power Platform.

Candidates should have development experience that includes Power Platform services, JavaScript, JSON, TypeScript, C#, HTML, .NET, Microsoft Azure, Microsoft 365, RESTful web services, ASP.NET, and Microsoft Power BI.

So the PL-400 was announced on the Wednesday of Ignite this year (at least in my timezone). Waking up to hear of the announcement, I went right ahead to book it! Unfortunately, there seemed to be some issues with the Pearson Vue booking system. It took around 12 hours to be sorted out, & I then managed to get it booked Wednesday evening, to take it Thursday.

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) It’s also still in beta at the moment, which means that things can obviously change.

There were a few glitches during the actual exam. One or two questions with answers that didn’t make sense (eg line 30 does X, but the code sample finished at line 18), and question numbers that seemed to jump back & forth (first time it’s happened to me). I guess that I’ve gotten used to at least ONE glitch happening somewhere, so this was par for the course.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Model Apps.
    • Charts. How they work, what drives them, what they need in order to actually work, configuring them
    • Visualisation components for forms. What they are, examples of them, what each one does, when to use each one
    • Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
    • Entity alternate keys. What these are, when they should be used, how to set them up & configure them
    • Business Process Flows. What these are, how they can be used across different scenarios, limitations of them
    • Business Rules. What these are, how they can be used across different scenarios, limitations of them
  • Canvas apps
    • Different code types, expressions, how to use them & when to use them
    • Network connectivity, & how to handle this correctly within the app for data capture (this was an interesting one, which I’ve actually been looking at for a client project!)
    • Power Apps solution checker. How to run it, how to handle issues identified in it
  • Power Automate
    • Connectors – what these are, how to use them, security around them, querying/returning results in the correct way
    • Triggers. What is a trigger, how do they work, when to use/not use them
    • Actions. What these are, how they can be used, examples of them
    • Conditions. What these are, how to use them, types of conditions/expressions/data
    • Timeouts. How to use them, when to use them, how to configure
  • Power Virtual Agents. How to set them up, how to configure them, how to deploy them, how to connect them to other systems
  • Power App Portals. Different types, how to set them up, how to configure them, how they can work with underlying data & users
  • Solutions
    • Managed, unmanaged, differences between them, how to use each one.
    • Deploying solutions. Different methods that can be used to do it, best practice for each, when to use each one
    • Package Deployer & how to use it correctly
  • Security.
    • All of the different security types within Dynamics 365/Power Platform. Roles/Teams/Environment/Field level. How to set up, configure, use in the right way.
    • Hierarchy security
    • Wider platform security. How to use Azure Active Directory for authentication methods, what to know around this, how to set it up correctly to interact with CDS/Dynamics 365
    • What authentication methods are allowed, when/how they can be used, how to configure them
  • ‘Development type stuff’
    • APIs. The different APIs that can be used, methods that are valid with each one, the Organisation service
    • Discovery URLs. What these are, which ones are able to be used, how they’d be used/queried
    • Plugins. How to set up, how to register, how to deploy. Steps needed for each
    • Plugin debugging/troubleshooting. Synchronous vs asynchronous
    • Component types. Actions/conditions/expressions/data operations. What these are, when each is used
    • Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
    • JavaScript web resources. How to use these correctly, how to set them up on entities/forms/fields
    • Power Apps Component Framework (PCF). What these are, how to develop them, how to use them in the right way
  • System Design
    • Entity relationship types. What they are, what each one does, how they work, when to use them appropriately. Tools that can be used to display them for system design purposes
    • Storage considerations across different types, including CDS & Azure options
  • Azure items
    • Azure Consumption API. How to monitor, how to handle, how to change/update
    • Azure Event Grid. What it is, the different ways in which it can be used, when each source should be used
  • Dynamics 365 for Finance. Native functionality included in it

The biggest surprise for me, thinking back over it, was the inclusion of Dynamics 365 for Finance. Generally the world is split into ‘front of house’ (being Dynamics 365/Power Platform) and ‘back of house’ (Dynamics 365 for Finance & Supply Chain Management). The two don’t really overlap, though they’re supposed to be coming closer together over time. Given that this is going to happen, I guess it’s only natural that exam questions spanning both will come up!

Overall it was quite a good exam. Some of the more ‘code-style’ questions were somewhat out of my comfort zone, and I’ll freely admit to guessing some of the answers around them! Time will tell, as they say, how I’ve done in it.

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it!