MB-280: Microsoft Dynamics 365 Customer Experience Analyst

It’s been a while since I last took a Microsoft certification exam, but with the new MB-280 exam being launched in the last few days, I’ve obviously needed to take a look at it! It felt a little strange, as I’m now used to the certification renewal process (which is why I haven’t taken any exams in a while), but thankfully things went alright with the overall exam.

For those who haven’t been following the news, Microsoft made an announcement a few months back that some exams would be retiring, and the new MB-280 exam would be the replacement for them. In short, this is supposed to replace the MB-210 (Sales), MB-220 (Customer Insights – Journeys) & MB-260 (Customer Insights – Data). Malin Martnes wrote a good blog post in June – I’d suggest taking a look at it for more general information around it.

Now I’m all up for new certifications being created & made available. However, and I know this could be considered controversial, I have ABSOLUTELY NO IDEA as to why this exam was created in THIS specific way. If an exam had been created, for example, to bring together the two sides of Customer Insights (ie to cover both Data & Journeys in a single exam), I think that would have been quite good.

But having taken it, my thoughts (& feedback to Microsoft directly) are that they should un-deprecate (if that’s a word/phrase?) the MB-210 exam, and continue it forward. There’s no reason that I can see to have Marketing & Sales together in a single exam – it feels like two (or technically 3?) Lego bricks lumped together without any rhyme or reason.

The learning path for the exam was also launched in the last few days, and can be found at Study guide for Exam MB-280: Microsoft Dynamics 365 Customer Experience Analyst | Microsoft Learn

The official description of the exam is:

As a candidate for this exam, you’re a Microsoft Dynamics 365 customer experience analyst who has:

  • Participated in or plans to participate in Dynamics 365 Sales implementations.
  • An understanding of an organization’s sales process.
  • An understanding of the seller’s perspective (user experience).
  • The ability to demonstrate Dynamics 365 Customer Insights – Data and Customer Insights – Journeys capabilities.

You’re responsible for configuring, customizing, and expanding the functionality of Dynamics 365 Sales to create business solutions that support, automate, and accelerate the company’s sales process. You use your knowledge of customer experience capabilities in Dynamics 365 Sales and Microsoft Power Platform to inform the following design and implementation tasks:

  • Configure Dynamics 365 Sales standard and premium features.
  • Implement collaboration features.
  • Configure the security model.
  • Perform Dynamics 365 Sales customizations.
  • Extend Dynamics 365 Sales with Microsoft Power Platform.
  • Deploy the Dynamics 365 App for Outlook.

As a candidate, you need:

  • An understanding of the Dataverse security model and features, including business units, security roles, and row ownership and sharing.
  • Experience configuring model-driven apps in Microsoft Power Apps.
  • An understanding of accounts, contacts, and activities.
  • An understanding of leads and opportunities.
  • An understanding of the components of model-driven apps, including forms, views, charts, and dashboards.
  • An understanding of model-driven app personal settings.
  • Experience working with Dataverse solutions.
  • An understanding of Dataverse, including tables, columns, and relationships.
  • Familiarity with Power Automate cloud flow concepts, such as connectors, triggers, and actions.

More can be found at the exam page itself, which is located at Exam MB-280: Microsoft Dynamics 365 Customer Experience Analyst (beta) – Certifications | Microsoft Learn

Now during my exam, I was looking forward to seeing the ‘new’ capability of being able to use Microsoft Learn during the exam (new to me – as I haven’t taken any other exams in the year or so since it was announced!). However there didn’t seem to be any capability to launch Microsoft Learn – I’m not sure why it wasn’t available, as this isn’t a Fundamentals-level exam.

Questions also used the older terms of reference rather than the newer/accepted terms – ie using ‘field’ instead of ‘column’, and ‘entity’ instead of ‘table’. Again, I have no idea why this is – all other exams (including the renewals for them) use these properly (in my summary below I have ensured I use the correct terms).

So, as I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) It’s also in beta at the moment, which means that things can obviously change.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Sales Apps
    • Configuring forms, columns & tables
    • Configuring security roles & access to records
    • Configuring relationships between records (including deletion properties)
    • Sales Mobile App – security & deployment
    • Forecasting – setting up & configuring
    • Configuring Goals
    • Configuring Opportunities
    • Handling currencies
  • Copilot for Sales
    • Setting up & deploying to users
    • Configuring access
  • Outlook App
    • Deploying & setting up
    • Configuring forms & information
  • Exchange
    • Connecting to mailboxes
    • Configuring folder permissions
    • Configuring multiple domains
  • Product Families & Catalogue
    • Creating & setting up
    • Configuring options
    • Adding items to be used
  • Price Lists
    • Creating & setting up
    • Configuring options, including discounts
    • Using time-restricted price lists
    • Handling currencies
  • Document Management
    • Different document management capabilities
    • Usage of SharePoint in different ways
  • Data Import
    • Usage of Power Query
    • Data manipulation
    • Handling duplicate records
  • SMS
    • Setting up & configuring SMS provider
  • Journeys
    • Different triggers to use based on scenarios & requirements
    • How to trigger journeys
    • How to set up emails to be used within a journey
  • Segments
    • Different types of segments
    • Creating & modifying segments
  • Searching/Filtering
    • Using Advanced Find
    • Setting up/modifying queries to include/exclude records based on conditions
  • Business Process Flows
    • Modifying business process flows
    • Handling conditions within business process flows

As a Sales exam, it seemed alright. But as mentioned above, the Customer Insights questions just seemed strange to me – I’d expect a consultant to be very technically skilled in Customer Insights, but not in Sales (& vice versa), so I don’t understand bringing these two sides together.

I’m going to be quite interested in seeing how the exam is actually launched (as it’s currently in Beta of course). Having chatted with a few others who have taken the exam (whilst obviously respecting the NDA!), they also can’t really understand the landscape. Personally, I think that if it continues like this, Microsoft is going to hear quite a few complaints around it.

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Power Platform Capacity Monitoring

If I look back at customer engagements around Power Platform over the last few years, whether for a new capability or an existing one, there was ONE thing that stood out above all: the ability to track capacity usage over time. To be honest, most organisations weren’t really doing very well at it.

For those who are unaware, there are actually three different types of capacity present within Power Platform environments. These are:

  • Data
  • File
  • Log

Each one is used for a specific purpose – broadly speaking, File holds all attachments that are uploaded directly into Dataverse, Log is used for auditing purposes, and Data holds everything else (hence the name)!

Now this data is shown within the Power Platform Admin Centre, under the ‘Resources/Capacity’ section. An example of this is:

There’s also a nice little breakdown of capacity allocation through licenses etc, which essentially shows where the available capacity has come from:

If we drill down a bit further, we can open up a specific environment, and see not only the overall usage per capacity type, but also which tables are consuming the most amount of data:

All of this is well & good for someone wanting to take a look at what is currently happening. But it’s a manual process – it is possible to export the data manually, but again, this isn’t automated.

It’s also not possible (at least not at this point in time) to query the underlying records that hold these values. So we’re a little stuck. If an organisation wanted to see historical data usage, and/or predict data trends (such as ‘how much capacity would we need to have in 6 months if we continued our scaling’), there’s no way to do this. At least not automatically – someone would need to store the values down manually, then report on it. A hassle, to say the least.
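
As an aside, if someone did want to hand-roll that ‘store the values down, then report on them’ approach, a rough Python sketch of the idea is below. To be clear, this is purely illustrative – the custom Dataverse table & columns are made up, the capacity figures themselves would still need to be read from the admin centre (or an export), and you’d need your own Azure AD app registration for the authentication:

```python
# A rough sketch only: write a daily capacity snapshot into a (hypothetical)
# custom Dataverse table via the Web API, then project the trend forward.
# The table/columns (new_capacitysnapshot etc) are made up for illustration;
# the GB figures still have to be read from the admin centre or an export.
import datetime

import msal      # pip install msal
import requests  # pip install requests

ENV_URL = "https://yourorg.crm.dynamics.com"               # your environment
TENANT_ID, CLIENT_ID, CLIENT_SECRET = "...", "...", "..."  # your app registration


def get_token() -> str:
    """Client-credentials token for the Dataverse Web API."""
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    return app.acquire_token_for_client(scopes=[f"{ENV_URL}/.default"])["access_token"]


def store_snapshot(data_gb: float, file_gb: float, log_gb: float) -> None:
    """Create one snapshot row - run this daily (cron, Azure Function, etc)."""
    row = {
        "new_name": datetime.date.today().isoformat(),
        "new_datagb": data_gb,   # Data capacity used
        "new_filegb": file_gb,   # File capacity used
        "new_loggb": log_gb,     # Log capacity used
    }
    resp = requests.post(
        f"{ENV_URL}/api/data/v9.2/new_capacitysnapshots",
        headers={"Authorization": f"Bearer {get_token()}"},
        json=row,
    )
    resp.raise_for_status()


def project_usage(history: list[tuple[int, float]], days_ahead: int = 180) -> float:
    """Naive linear trend over (day_number, used_gb) pairs - e.g. ~6 months out."""
    n = len(history)
    mean_x = sum(x for x, _ in history) / n
    mean_y = sum(y for _, y in history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in history) / sum(
        (x - mean_x) ** 2 for x, _ in history
    )
    intercept = mean_y - slope * mean_x
    return intercept + slope * (history[-1][0] + days_ahead)
```

Scheduled daily (via cron, an Azure Function, or indeed a Power Automate flow), that would slowly build up exactly the historical series that the platform doesn’t keep for us – but it’s still a hand-rolled workaround rather than a proper answer.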

Now when it comes to looking at Power Platform overall, the Centre of Excellence Starter Toolkit is really quite amazing. The Microsoft PowerCAT team continue to iterate on existing functionality within it, as well as bring in new functionality.

At this point in time, however, it doesn’t have any capacity monitoring in it. Well, it sort of does – we can implement notifications to alert us when capacity reaches a certain value. But this doesn’t solve the challenge as laid out above.

So with this in mind, I set out to create a solution to handle it. I’ve always wanted to create some sort of tool for giving back to the community & helping others, and I saw this as my chance to do so (I’m in awe of the various XrmToolBox tool creators, for the record).

So, I’m releasing a capacity monitoring tool. I’m using GitHub as the host, and the repo can be accessed at https://github.com/thecrmninja/Power-Platform-Capacity-Monitoring (it was also a learning experience in how to use GitHub as a source repository, as I’ve not done that before!).

Model-Driven App:

Reporting Dashboard:

This is just the first version – I have various ideas about how to iterate on it, and tweak functionality. Each release will include release notes & important information to be aware of (such as the security needed to run it). Also importantly, thanks to the amazing Matt Collins-Jones for reviewing some of my work around this.

This tool is aimed at IT/Power Platform admins who are already familiar with the Microsoft CoE toolkit solution, and who have appropriate access to it.

If you find any issues, please raise an appropriate GitHub Issue item, and I’ll look into it. Also, if you have any ideas that you think could be worthwhile, please feel free to suggest them!

Finally, I’d be interested in hearing how you think this could support you or your organisation – feel free to drop a comment below!

MB-260: Microsoft Customer Data Platform Specialist

It’s been a while since I’ve taken an exam. Admittedly, this is for two reasons. Firstly, the renewal process for exams now (as updated last year) is not to take them again, but rather to re-qualify through Microsoft Learn. The second reason is that I’ve been waiting for some new exams to come out (OK – there’s the DA-100, which is still on my list of things to do…).

Well, there’s a new exam on the block. In fact, it’s a different type of exam – this is a ‘Speciality’ exam, rather than focusing on a specific type of application. It’s the first of its kind, though there are likely to be more to follow in the future.

It’s the MB-260, which is all around Customer Data. That’s right – it’s not about how to do sales, or customer service, or something else. It’s about taking the (holistic) approach to ALL of the data that we can hold on customers, and doing something with it.

The official page for it is at https://docs.microsoft.com/en-us/learn/certifications/exams/mb-260. The specification for it is:

Candidates for this exam implement solutions that provide insights into customer profiles and that track engagement activities to help improve customer experiences and increase customer retention.

Candidates should have firsthand experience with Dynamics 365 Customer Insights and one or more additional Dynamics 365 apps, Power Query, Microsoft Dataverse, Common Data Model, and Microsoft Power Platform. They should also have direct experience with practices related to privacy, compliance, consent, security, responsible AI, and data retention policy.

Candidates need experience with processes related to KPIs, data retention, validation, visualization, preparation, matching, fragmentation, segmentation, and enhancement. They should have a general understanding of Azure Machine Learning, Azure Synapse Analytics, and Azure Data Factory.

Note that there’s quite a bit of Azure in there – it’s not just about Power Platform, Dataverse, or Dynamics 365. People who handle reporting on customer data should have various Azure skills as well.

There’s also a new type of badge that will be available:

At the time of writing, there are no official Microsoft Learning paths available to use to study. I do expect this to change in the near future, and will update this article when they’re out. However the objectives/sub-objectives are available to view from the main exam page, and I’d highly recommend going ahead & taking a good look at these.

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

Overall, I had 51 questions, which is towards the higher end of what I’ve experienced in my exams over the last year or so. There was only a single case study though.

Some of the naming conventions hadn’t been updated to the latest terminology, which I would have expected. I still had a few references to ‘entities’ and ‘fields’ come up, though for the most part ‘tables’ and ‘columns’ were used. I guess it’s a matter of time until everything is brought up to speed with it.

  • Differences between Audience Insights and Engagement Insights
    • What are the benefits of each
    • When would you use each one
    • What types of users will benefit from each type
    • How to create customer insights
  • Environments
    • Types of environments
    • How to create a new environment
    • What options are available when creating an environment
    • What is possible to copy from an existing environment
  • Relationships
    • Different types of relationships
    • What is each one used for
    • Limitations of different relationship types
  • Business level measures vs customer level measures
    • What each one is, and what they’re used for
  • Power Query
    • How to use
    • How to configure
    • How to load data
  • Data mapping
    • Different types available to use
    • Scenarios each type should be used for
    • Limitations of each type
    • How to set it up
  • Segments
    • What are segments, how are they set up, how are they used
    • What are quick segments, how are they set up, how are they used
    • What are segment overlaps, how are they set up, how are they used
    • What are segment differentiators, how are they set up, how are they used
  • Measures
    • What are measures, how are they set up, how are they used
  • Data refresh
    • Automated vs manual options
    • Limitations of each type
    • Availability of each type
    • How to set up each type
    • How to apply each type
  • Data Unification
    • What is this
    • How it can be used
    • How to set it up
    • Limitations of it
    • Process validation
    • Changing existing models
  • AI for Audience Insights
    • What is this
    • What can it be used for
    • How to use it
    • Factors that can affect outcomes
  • Security
    • Using Azure Key Vault
    • Capabilities of this
    • How to set it up
    • How to use it
  • Dynamics 365
    • Capabilities for interacting with Dynamics 365
    • How to set it up
    • How to display data, and where it can be displayed
    • What actions users are able to carry out within Dynamics 365

Wow. It’s a lot of stuff. It’s definitely an exam where, if you’re not already hands-on with the skills needed, I’d highly recommend that you get a decent amount of experience before taking it!

I can’t tell you if I’ve passed it or not…YET! Results aren’t going to be out for several months, and to be honest, I’m not quite sure how well I’ve actually done.

So, if you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Workaround for sharing Canvas Apps

Don’t you find it absolutely frustrating when there’s a canvas app that you want to get access to, or give other users access to, but can’t see it? It’s REALLY annoying, but it’s sort of the way that Microsoft has designed the platform (at least at this point in time).

See, when a user creates a canvas app, only the creator is able to see & launch it. If other users want to get access to it, the creator needs to share it. This can be done by sharing the app directly with another user, or by sharing it with an AAD Security Group (which is sort of best practice).

Now, of course there’s the Microsoft Power Platform Centre of Excellence solution, which includes a very handy app to assign permissions for canvas apps. After all, if a user is on holiday, sick leave, or has left the company, there needs to be some way of assigning permissions for other users to gain access to it. It’s really helpful, but of course needs the CoE solution installed.

Let’s think of another scenario. What about if we have some canvas apps as part of a solution that’s deployed through (proper) ALM – such as using Azure DevOps with automated pipelines? Best practice for this is to use service principals (ie non-interactive user logins). This is great, but then the canvas app/s will be owned by this user. So without the use of the CoE ‘Set App Permissions’ canvas app, we’re sort of stuck, as we can’t gain access to the app.

Or can we…..?

So this is a scenario that I’ve been dealing with recently, and I’ve found a really cool workaround that doesn’t need the CoE ‘Set App Permissions’ canvas app to be able to handle the situation.

The example below (amusingly, in my opinion) is actually using the Microsoft CoE solution as an example, but this works with any canvas apps that are held within a solution (again, this heavily supports using solutions for ALL development items!).

So, this is what the actual installed apps look like in this environment:

As we can see, there are a lot of them! But what happens if I’m logged in as my regular user? What do I see if I go to the list of apps? Well, I’ll see the following:

Now, as we can see, I’m able to see the model-driven apps (as these aren’t hidden at all). But I’m not able to see ANY of the canvas apps! So how can I get access to them, or share them with other users?

Well, if I take a look at the solution itself, I can see the following when browsing to the list of apps (I’m really loving the new Solution Explorer layout, I’ll freely admit!):

I can try to play the canvas app (in this case, the ‘Set App Permissions’ app) directly from the solution. But when I try to do this, I’ll get the following error message:

Now, this is of course happening because I’m not the owner of the app, & the app hasn’t been shared with me at all. So really I was expecting this error to happen.

However, if I take a look at the menu options displayed for me, I can see that the ‘Share’ option isn’t greyed out. I wonder what happens if I click it…

Now this is EXCITING! When clicking the ‘Share’ option on the menu, I’m given the regular sharing screen, where I’m able to set app permissions. So it looks like I’m able to do something here. OK – let’s go ahead & try to share the app with my own user:

So I’ve looked up my own user, and then clicked ‘Share’. This is what happens next…

Exciting moment – will this work?

Waiting with bated breath, and then…

It’s worked! The app sharing has been successful with my user.

Note: The example that I’m using here is with my own user account. However it doesn’t need to be – I can select any user account or AAD Security Group, and share accordingly.

Going to my list of apps, I can now see that the app is showing up for me:

Clicking the app to launch it presents me with the permissions dialogue, and having confirmed permissions, then launches it properly:

So this is indeed a way in which it’s possible to share canvas apps with users and/or AAD security groups, even when a user isn’t the owner of the canvas app.

It is important to note that the user carrying this out does need to have one of the following security roles in the environment:

  • System Customiser
  • System Administrator

Without having one of these roles, it’s not going to be possible to carry out the above (mostly because it’s not possible to see solutions & dig down into them).

This is a handy little trick that hopefully will help clear up one of the headaches when trying to share canvas apps! Of course it’s possible to use the Microsoft CoE tool to set app permissions, but if a customer doesn’t have it installed, then this would be another way to approach things.

Have you ever had this issue? How did you go about solving it? I’d love to hear – please drop a comment below…

Omnichannel vs Customer Service Workspace

This is a question that I’m being asked on a semi-regular basis at the moment, so I thought it would be helpful to do a writeup around things. The difference definitely isn’t clear from the outset based on the existing documentation. However, being able to speak to wonderful people such as Tricia Sinclair has been amazing in helping me figure out the differences between the two applications.

So, where to start. Well, let’s first of all understand the similarities between the two applications.

Firstly, they are both multi-session apps. To put this in context (as mentioned elsewhere previously) – traditionally Dynamics 365 applications have been ‘single session’ applications. This means that users would navigate around, open/close records, create or edit as necessary. If users wanted to have multiple records open, they’d need to have multiple tabs open, or even multiple windows (yes, I still remember the days before browsers had tab functionality!).

What multi-session means in this context is that users are able to open up multiple records, and switch between them in the same tab. Open records pop into the left-hand navigation bar, and users can easily click between them. Not only that – users are also able to open further tabs within the same record pane, to access further information. These stay open whilst users switch to other records, which is really quite helpful!

So for example, a user could open a case record, then open the contact associated to the case, as well as the account related to the case. They could then further open the knowledge base to search for articles, and so on and so forth. All of these stay open.

Both apps are also web applications – they run in a browser, rather than needing to have a specific software application installed for them (unlike Unified Service Desk).

So, where do they actually differ? Well, this was a bit difficult for me to understand in the first instance, though that turned out to be because I had both Customer Service Workspace as well as Omnichannel configured within the same environment! It turns out that this wasn’t the best approach to take to compare the two & understand their capabilities. It was easily fixed though, by quickly spinning up a new trial to install just one of them in.

So with knowing how Omnichannel works (after all, I’ve written quite extensively around it), let’s take a look at the Customer Service Workspace app:

Customer Service workspace overview
  1. The session pane lists all the sessions that you are actively working on. Select the tabs to navigate among sessions.
  2. The Home session returns you to the Customer Service Agent Dashboard view.
  3. Each session has a tab in the session panel. Select a tab to navigate to the session you want to work on.
  4. Select a case to open a new session. A single click on a case replaces your view with the case form. Select the back arrow in the upper-left corner of the form to get back to your previous view.
  5. Select the tabs to navigate to your open activities, cases, forms and views.
  6. Select the + icon to expand the menu to view a list of forms, views, and activities. Select the one you want to open in a new tab.
  7. Select the drop-down selector to filter cases in queues you can choose to work on.
  8. Select Shift + mouse click to open a new session for an activity. A single click replaces your view with the activity form. Select the back arrow in the upper-left corner of the form to go back to your previous view.

Now, without Omnichannel installed in the same environment (& obviously licensed for users), it’s not possible to have native Dynamics 365 channels such as Chat, WhatsApp, etc. Conversations will not appear for customer service agents who are using the Customer Service Workspace.

Note: If you DO have Omnichannel installed in the same environment, and users are licensed to use it, then conversations will show up within the Customer Service Workspace app for them. They’ll have notifications pop up on the screen for incoming customer sessions.

That’s not to say that it’s not possible to have channels available within Customer Service Workspace. So how do they actually come in?

Well, as it turns out, channels within Customer Service Workspace need to be third-party channels. There’s a plethora of 3rd-party add-ons for Dynamics 365 that offer different communication capabilities. Some of these do date back a while (to before any native Microsoft capabilities).

For example, there are ISV add-ons for Customer Service that can embed a call dialler into the experience, so that customer service agents can call directly from a record. Alternatively, an add-on such as a 3rd-party web chat application can surface chats within the Customer Service Workspace. Each of these would obviously need to be purchased, licensed & integrated appropriately with your Dynamics 365 solution.

Now both applications also have other similar functionality, such as the Productivity Pane, Agent Scripts, Smart Assist & Knowledge Search. However there can be differences between them. For more information, I’d suggest taking a look at Tricia’s blog article that goes into depth on this.

So to summarise, Omnichannel is for the native Microsoft channels, giving customer service agents the ability to service customers using them. Licensing (currently) is with Customer Service Enterprise, and then either the Digital Chat or Digital Messaging add-on SKUs.

Customer Service Workspace, on the other hand, allows customer service agents to have a multi-session application for their work, as well as allowing communications through third-party channels. Licensing is as per the different Customer Service SKUs, with any 3rd-party add-on being licensed appropriately.

Hopefully this helps clarify the difference between these two, and makes them less confusing. If you have any further questions around this, please drop a comment below, and I’ll do my best to respond!

Canvas Apps & Power Automates

So it’s been a busy few weeks here, which is why I haven’t really been putting up any articles. March/April is always a busy time for our family with stuff going on, and this year I decided not to push myself to get articles out, as otherwise I’d be running very low on sleep!

That being said, I’ve still had some great ideas about things that I’d like to share, and have been keeping a series of short notes for me to pick up. Today’s topic is one of them, which I think has been a major pain to anyone involved in canvas app development!

So, the back story to this is that we’re able to use Power Automate flows together with canvas apps. What I mean by this is that we’re able to directly trigger them from within the canvas app, rather than needing to do something like edit or create a record, and then have the Power Automate flow trigger from the record creation or modification.

There’s a specific Power Apps trigger that’s available within Power Automate exactly for this purpose:

When clicked, it gives us the trigger line in the steps as follows:

So what we’d do is, within the canvas app, bind a button (or another control) so that when it’s selected, it goes away & triggers the Power Automate flow. Great – so many different things that we can get to happen! One of the benefits of doing things like this is that we can then pass information from the Power Automate flow back to the canvas app directly:

This can then mean that the user can know, within the canvas app itself, that the Power Automate flow has run, and use data (or other things) that have come out of it.

OK – all good so far.

The main issue to date has been with deploying canvas apps together with Power Automate flows. See, as per best practice, we would create a solution, place the canvas app, flows, and anything else that’s necessary for it to work within it, and then deploy the solution to our target environment/s. And that’s where things just…didn’t go quite right.

Obviously within the development environment, the canvas app would be hooked up to the flows, and everything would work. Clicking the button would cause the flow to run, etc. User authentication would be in place (along with licenses of course!), and it was just fine.

But when deploying a solution containing canvas apps and associated flows between environments (regardless of whether it’s deployed manually, or automated using a tool such as Azure DevOps), the connections to the flows would be broken. Ie, the canvas app would run, but the flows wouldn’t trigger. Looking at the connections in the canvas app within Studio would show something like the following:

All of the connections to Power Automate flows would show as ‘Not connected’. It’s not even possible to click the ellipsis next to them and re-connect them – the only option available is to remove it from the canvas app!

So in order to get things working again, we’d need to do the following steps:

  • Open up the canvas app
  • Remove all connections to Power Automate flows
  • Add a temporary button, set it to be a Power Automate trigger
  • Click through all of the Power Automate flows needing to be connected (waiting for each one to connect, then going to the next one)
  • Remove the temporary button
  • Save and publish the solution

This, in a nutshell, has been a (major) headache. For example, I’ve been working with a solution that has over 30 Power Automate flows that can be triggered from the canvas app (lots of different functionality!). Each deployment has needed the above process to be carried out, which has usually added on at least an hour to the deployment process!

Now, this hasn’t been something that’s been unknown. In fact, the official Microsoft documentation noted the following:

So this is something that Microsoft has been well aware of, but it’s been a pain point that we’ve had to work with.

However, this has now ALL changed, which I (and MANY others) are really pleased about!

Microsoft rolled out an update last month that means that canvas app connections to Power Automate flows will NOT break when they’re deployed across environments! This is such a massive time-saver that I’m now trying to work out what to do with all of my free time! Only kidding…more project work will commence!

So what we can now do is take our solution, deploy it across the different environment/s that we need to get it out to (whether manually, or automated using tools such as Azure DevOps), publish the solution, and then everything works! Amazing!!

One small caveat though – to ensure that this works, you will need to go into the app, and re-publish it on the latest Power Apps version. This should of course be done in a development environment; the app can then be exported and deployed as required.

Microsoft have also updated their documentation at https://docs.microsoft.com/en-us/powerapps/maker/data-platform/solutions-overview to remove the limitation text shown above. It’s a good place to keep an eye on changes that occur over time too.

This is definitely a welcome piece of development, and I know that we’ve been eagerly waiting for this for a while, and now it’s here!

PL-600: Microsoft Power Platform Solution Architect

Well, it’s FINALLY here. And by finally, I guess I’m saying that I’ve been waiting for this for a while? The PL-600 exam is the new ‘Holy Grail’ for Dynamics 365/Power Platform people, being the Solution Architect (3 star) exam. Ten minutes after it went live, I booked to take it, and four hours after it went live I sat it! (I would have taken it sooner, but had to have supper first, get the kids to bed, etc…)

The first solution architect exam that Microsoft did in this space was the MB-600 (see my exam experience write-up on it at MB-600 Solution Architect Exam). However with the somewhat recent shift towards certifications for the wider Power Platform, it was inevitable that this exam would change as well.

Interestingly enough, the MB-600 now counts towards some of the Microsoft Partner qualifications. I’d expect that when it retires (currently planned for June 2021), the PL-600 will take its place in the required certifications to have.

So, how to discuss it? Well, the obvious first start is to link to the official Microsoft page for it, which is at https://docs.microsoft.com/en-us/learn/certifications/power-platform-solution-architect-expert/. According to the specification for it:

Microsoft Power Platform solution architects lead successful implementations and focus on how solutions address the broader business and technical needs of organizations.
A solution architect has functional and technical knowledge of the Power Platform, Dynamics 365 customer engagement apps, related Microsoft cloud solutions, and other third-party technologies. A solution architect applies knowledge and experience throughout an engagement. The solution architect performs proactive and preventative work to increase the value of the customer’s investment and promote organizational health. This role requires the ability to identify opportunities to solve business problems.
Solution architects have experience across functional and technical disciplines of the Power Platform. Solution architects should be able to facilitate design decisions across development, configuration, integration, infrastructure, security, availability, storage, and change management. This role balances a project’s business needs while meeting functional and non-functional requirements.

So it’s not really changed that much from the MB-600, though obviously there’s now an expectation for solutions to bring in other parts of the Power Platform, as well as dip into Azure offerings. Pretty much par for the course, in my experience, with how recent projects that I’ve been on have been implemented.

At the time of writing, there are no official Microsoft Learning paths available to use to study. I do expect this to change in the near future, and will update this article when they’re out. However the objectives/sub-objectives are available to view from the main exam page, and I’d highly recommend going ahead & taking a good look at these.

Passing the exam (along with having either the PL-200 Microsoft Power Platform Functional Consultant or PL-400: Microsoft Power Platform Developer Exam qualifications as well) will result in a lovely (new) shiny badge. Oh, we do so love those three stars on it!

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

Overall, I had 47 questions, which is around the usual amount that I’ve experienced in my exams over the last year or so. What was slightly unusual was that instead of two case studies, I got three of them! (note that your own experience may likely vary from mine).

Some of the naming conventions hadn’t been updated to the latest terminology, which I would have expected. I still had a few references to ‘entities’ and ‘fields’ come up, though for the most part ‘tables’ and ‘columns’ were used. I guess it’s a matter of time until everything is brought up to speed with it.

  • Environments
    • Region locations, handling scenarios with multiple countries
    • Analytics
    • Data migrations
  • Requirement Gathering
    • Functional
    • Non-functional
  • Data structure
    • Tables
      • Types of tables
        • Standard vs custom functionality
        • Virtual tables. What these are, when they would be used, limitations to them
        • Activity types
      • Table relationships & behaviours
      • Types of columns, what each one is suited for
      • Business rules. What they are, how they can be used
      • Business process flows. What they are, how they can be used
  • App types (differences between them, scenarios each one is best suited for)
    • Model
    • Canvas
    • Portal
  • Model-driven apps
    • Form controls (standard vs custom)
    • Form layout (standard functionality vs custom functionality)
    • Formatting inputs
    • Restricting inputs
  • Automation
    • Power Automate flows. What they are, how they can be used, restrictions with them
    • Azure Logic Apps. What they are, how they can be used, restrictions with them
    • Power Virtual Agents
  • Communication channels
    • Self service abilities through Power Virtual Agent chatbots. How this works, when you’d use them, limitations that exist
    • Live agent abilities through Omnichannel. How this is implemented, how customers can connect to a live agent (directly, as well as through chatbots)
    • Teams. When this can be used, how other platform abilities can be used through it
  • Integration
    • Integration tools
    • Power Platform systems
    • Azure systems
    • Third party systems
    • Reporting across data held in different systems
    • Dynamics 365 API
  • Reporting
    • Power BI. What it is, how it’s used, how it’s configured, limitations with it, how to share information with other users
    • Interactive Dashboards. What these are, how these are set up and used, limitations to them
  • Troubleshooting
    • Canvas app issues
    • Model driven app issues
    • Data migration
  • Security
    • Data Protection. What is it, where it’s set up, how it’s used across different requirements in the platform
    • Types of users (interactive/non-interactive)
    • Azure Active Directory, and the role/s it can play, different types of AAD authentication
    • Power Platform security roles
    • Power Platform security teams, types
    • Portal security
    • Restricting who can view forms
    • Field level security
    • Hierarchy abilities
    • Auditing abilities and controls

Wow. It’s a lot of stuff. Not that I’m surprised by that, as essentially it’s the sort of thing that I was expecting (being familiar with the MB-600). I think that on a ‘day to day’ basis, I cover most of these items already, so didn’t have to do a massive amount of revision for items that I wasn’t familiar with.

From my experience in taking it, I’d say that around 30% of the questions seemed to be focused on Dynamics 365, with 70% being focused on Power Platform capabilities. It’s about what I thought it would be when the exam was first announced. Obviously some people are more Dynamics 365 focused, and others are more Power Platform focused, but the aim of the exam (& qualification) is to really understand the breadth of the offerings available.

I can’t tell you if I’ve passed it or not…YET! Results aren’t going to be out for several months, based on previous experience with Beta exams, but I’ve got a good feeling about this.

So, if you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Customising Case Resolutions

Well, the title is a bit of a mouthful, I’ll admit. Hopefully though this brings some good information, and can help people out.

Cases are wonderful things, and can be used for tracking client interactions, compliments/complaints, and so many other things. Cases also have the ability to be resolved, with information provided around the resolution.

Now, the standard way of doing this provides the following screen:

There’s the ability to set the Resolution Type (being a dropdown, aka Choice, field), & to put in free text for the Resolution itself (allowing us to track information around it). There are also time fields, which can be used for working out the time spent, as well as any time that’s going to be chargeable.

Now when going in to modify these, we’d think to open up the Case Resolution table. However, this isn’t actually the right place to do it. Instead, we need to update the Case table itself, as the Case Resolution items come from the Case Status field!

Somewhat annoyingly, it’s not possible to do this through the new ‘Maker’ interface:

In order to actually handle this, we need to switch across to the Classic editor to set this up. This could be because it’s actually a situation of having both parent & child entries. What I mean by this is that there’s the actual status (being Active, Resolved or Cancelled), and then a reason under each one. Hopefully at some point it’ll be updated into the new UI, so that we can do it from there.

We’ll need to change the Status item to ‘Resolved’, & can then add in the options that we want:

After adding them, we need to save & publish, and then they’ll show up for us, and are able to be selected:

So that’s great – we’re able to customise it. But what if we’re wanting to customise the actual ‘Resolve Case’ form itself? Not everyone wants to show Time/Billable Time on it (quite a few of our clients ask us to remove it), and perhaps they want to add additional custom fields.

So from the usual perspective of doing this, we’d open up the Case Resolution table, create new fields as required, and modify the existing form (we’re not able to create any other forms for this specific table). After all, this is how we’d do it for any table in the system (whether a standard one, or a custom one). This is going to be the Main form, rather than the Quick Create one:

We save & publish it, and then would open up a Case record, click ‘Resolve Case’, and expect to see it. However, that doesn’t happen, which has been most puzzling to me!

It turns out that there are two things needed to be done in order to get to see our ‘custom’ form (though it’s not really custom, as it’s modifying the default form, but whatever).

  1. We need to modify security permissions for users, which is a critical requirement. An example of this is shown below:
Security Role: Customer Service Representative

  2. We need to enable customisable dialogues. Yes, it’s a setting that needs to be updated in order for users to see the custom layout of the form. If we don’t do this, they’re shown the default form, even though we’ve modified it! It seems a little strange that the system has this concept of a ‘shadow’ form, but I guess that’s how it is.

To do this, we need to go into the Service Management settings area. I usually launch this through the Customer Service Hub app, though it’s available through several of the other standard apps as well:

Once there, we need to click into the Service Configuration menu item, and then change the ‘Resolve Case Dialogue’ option as shown below:

Remember to click the ‘Save’ button to save this.

Finally we can go back to our Case record, click ‘Resolve Case’, and look what appears!
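
On a related note, once custom columns have been added to the Case Resolution table, they can also be populated when resolving cases programmatically, using the CloseIncident action of the Dataverse Web API. Below is a minimal Python sketch of this – note that the custom column name, case id & token handling are all placeholders that you’d adapt to your own environment:

```python
# A minimal sketch: resolve a case through the Dataverse Web API 'CloseIncident'
# action, populating a custom column added to the Case Resolution table.
# The column name, case id & bearer token below are placeholders.
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"  # your environment
TOKEN = "<bearer token acquired via MSAL>"    # auth handled as per your setup
CASE_ID = "00000000-0000-0000-0000-000000000000"

payload = {
    "IncidentResolution": {
        "subject": "Resolved - replacement part shipped",
        "incidentid@odata.bind": f"/incidents({CASE_ID})",
        "timespent": 30,                      # billable time, in minutes
        "new_resolutioncategory": 100000001,  # hypothetical custom Choice column
    },
    # Status reason value: 5 is the default 'Problem Solved', or use one of
    # the custom values added through the classic editor (as above)
    "Status": 5,
}

resp = requests.post(
    f"{ENV_URL}/api/data/v9.2/CloseIncident",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()  # expect 204 No Content on success
```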

So in summary, it’s definitely possible to modify & change the way that Case resolutions works in the system. It does take a little bit of fiddling around with settings in different areas, which can be confusing if we’re not used to this, but can give a great result in the end.

Have you ever come across this, and wondered how to do it? Have you developed Case Resolutions any further? Drop a comment below – I’d love to hear!

Personalised Sound Notifications for Omnichannel

One of the themes running through the Wave 2 2020 update for Omnichannel is the personalisation aspect. Though systems work just fine on their own, it’s always nice to add a ‘personal touch’ to the parts that we can. Last week I shared how quick replies are now able to be personalised (Personalised Quick Replies). This week I’m going to go into how the sound notifications can be personalised as well!

These seem to be just small little features, but in my view they do bring things to the next level. Examples of this are the following:

  • If a customer session starts, wanting to know which channel it’s come in through, without needing to open the conversation
  • Many agents in a contact centre – if everyone is using the same sound, no-one knows if it’s their computer or not!
  • The difference between a new conversation starting, and a new message being received on an existing conversation
  • Wanting to ensure that sound volumes aren’t too high, else they’ll disturb other people.

All of these are extremely valid scenarios, along with other ones (such as disabling sound entirely, for example!). Though this seems simple to implement, and isn’t very difficult to set up, there’s a lot of flexibility involved. I’m therefore really happy that this is now available to be used.

So, let’s see how to go about setting it up. There are two parts to this – the Omnichannel Administrator side, and what the Agent can then do.

Omnichannel Administrator

In the Omnichannel Administrator Hub, the administrator should open the Notifications section, and go to the Sound Notification Settings tab:

There’s a single setting there, to toggle sound notifications on or off. Setting it to ‘Yes’ will then show the following section on the screen:

Once it’s enabled, there are then a number of system default options that are automatically loaded. Here the administrator can do the following tasks:

  • Choose to allow sounds to be played at a per channel level
  • Change the system default sound notification (more on loading in custom sounds below)
  • Allow the sound notification to be repeated until the call is answered
  • Set the maximum volume allowed for the sound (this is a lovely slider control!)

There are of course sound files that come included in the system by default. But what if we’re wanting to upload custom sound files to be used? Well, that’s not a problem. Simply by clicking in the lookup field to select a sound file, we are given the option to upload a new audio file:

Clicking this brings up the Audio File record, which we use to upload. We need to give it a name & save it, and then we’re given the ability to upload the file itself:

Note: There are specific file types that need to be used, with a maximum file size of 1MB. It does say that for the best experience, the OGG file format should be used. There are plenty of free resources out there to download OGG files, or to convert MP3 files to the OGG file format if you need to.
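
If you do need to convert a file yourself, it only takes a few lines of scripting. Below is a small Python sketch using the pydub library (an assumption on my part – any conversion tool will do, and pydub needs ffmpeg installed on the machine); the file names are just examples, and the size check mirrors the 1MB limit mentioned above:

```python
# A small sketch: convert an MP3 notification sound to OGG for Omnichannel,
# then check it against the 1MB upload limit. Uses pydub (pip install pydub),
# which requires ffmpeg to be installed; file names are just examples.
import os

from pydub import AudioSegment

AudioSegment.from_mp3("incoming-chat.mp3").export("incoming-chat.ogg", format="ogg")

size = os.path.getsize("incoming-chat.ogg")
if size > 1_000_000:
    print(f"Too big for Omnichannel ({size / 1_000_000:.2f}MB) - trim or re-encode")
else:
    print("Within the 1MB limit - ready to upload")
```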

Once we’ve uploaded the file, we get presented with a mini player to hear how it sounds. This is really cool!

All of the audio files in the system (both default & custom) are then available for agents to personalise their own experience.

Note: If a company wants to upload many different custom audio files, it may be easier to add the Audio Files entity to the sitemap, and then perform this function from there.

Note: To prevent agents from uploading their own audio files directly, the Omnichannel Agent security role only allows Read access, not Create/Edit access:

Omnichannel Agent

With the initial system setup performed by the Omnichannel Administrator, agents are then free to go ahead & personalise their own experience. This is done directly within the Omnichannel for Customer Service app, by selecting ‘Personalisation’ from the available menu:

Once this is selected, the agent is presented with a very similar interface to the Omnichannel Administrator:

Here the agent can change the system default for themselves (this does not affect any other Omnichannel users), change the various settings, modify the volume levels, etc.

Once saved, it’s then live & active, and will work as desired.

Incoming message alerts for active sessions

At the bottom of the sound notification settings screen, there is one further setting. This is around the behaviour of sounds for existing conversations:

This can be helpful (either from an overall system perspective, or an individual agent perspective) to either allow or turn off sounds from conversations that are already happening. Some people might find it very annoying that every time a customer sends a new message through, the system plays a sound. This is especially true when dealing with multiple conversations (which, after all, is what Omnichannel is all about!).

In summary, it’s a really good feature to now have available to use. Obviously I’d suggest not loading rock music into it, for example, unless of course your company specialises in rock music! How do you think this would be beneficial to your users? Drop a comment below – I’d love to hear!

Personalised Quick Replies

One of the things that customer service agents absolutely HATE is having to type full replies to customers. There are many things that they’ll do which are quite repetitive, and having to type the same response each & every time gets frustrating to say the least.

As I’ve covered previously at Quick Responses in Omnichannel, Omnichannel has the ability for Quick Replies. With these, agents are able to select the response that they’re wanting to use, and quickly populate it into the chat session that they’re having.

It’s also possible, using ‘slugs’, to set up responses that will automatically populate with specific pieces of information in the system. For example, something like ‘Good morning, my name is {Agent Name}, how may I assist you?’ will automatically populate the name of the agent into the chat session.

This is great; the main drawback to date has been that Omnichannel administrators are required to set these up, as well as maintain them. That’s not so great, when you consider that agents might want to personalise their responses as well. Until now, that hasn’t been possible within the system.

However, with Wave 2 2020, it’s now possible to allow agents to create their own quick replies, to be able to be used within chat sessions. It’s also not particularly difficult to go about getting this into the system, as we’ll see below.

The Omnichannel Administrator simply needs to go to the Personal Quick Replies section, and change the toggle to ‘Yes’, then save. This will enable personal quick replies for agents simply & swiftly.

Once the system setting has been set, and is active (it can take a few minutes to refresh through), agents are then able to start setting up their own quick replies.

To do this, agents will need to be in the Omnichannel for Customer Service app, and select the Personalisation option from the drop-down menu:

This will then open the agent personalisation tab, which has several different sections on it. The first one is the one that we’re interested in – Personal Quick Replies:

This will list any personal quick replies that have already been set up by the agent, as well as give the option to create further ones to use:

Clicking this option brings up the familiar interface to set this up:

Note: Personal quick replies aren’t localised in Omnichannel. That’s why you need to select a Locale for the record. To be able to provide the quick response in multiple languages, create a specific response for each language, and select the locale that’s appropriate for it.

Once the record is saved, it’s then possible to add tag/s to it for referencing:

Note: If you want to use the hash character (#), you can only use it at the beginning of the tag, not anywhere else in it.

Once these have saved, they’re then available to be selected from the chat by the agent. The chat interface will show both system & personal quick replies. Typing ‘/q’ into the chat window will bring these up:

We can select the tab at the top to show just the personal quick replies that the agent has set up:

Alternatively, if the agent starts searching with text, they can easily distinguish between system & personal quick replies by looking at the icon against each one. System replies have a globe-style icon, whereas personal replies have a person-style icon:

So in summary, I think that this is a really great feature to add onto the original way of quick replies working. It’ll free up time for the Omnichannel Administrators, and allow agents to put their own responses in that they need. It’s also possible to share this using the OOB record sharing functionality, which means that a team lead can set them up, and then share them with the rest of the team!

How do you think this could enable or help you? Drop a comment below – I’d love to hear!