Security Roles & Assigning Records

Let’s face it, and call a spade a spade (or a shovel, depending on where in the world you happen to be). Security roles are very important within Dataverse, to control what users can (& can’t!) do within the system. Setting them up can be quite time-consuming, and troubleshooting them can sometimes be a bit of a nightmare.

Obviously we need to ensure that users can carry out the actions that they’re supposed to do, and stop them doing any actions that they’re not supposed to do. This, believe it or not, is generally common sense (which can be lacking at times, I’ll admit).

Depending on the size of the organisation, and of course the project, the number of security roles can range from a few, to a LOT!

Testing out security can take quite a bit of time, to ensure that testing covers all necessary functionality. It’s a very granular approach, and can often feel like opening a door, to then find another closed door behind the first one. Error messages appear, a resolution is implemented, then another appears, etc…

Most of us aren’t new to this, and understand that it’s vitally important to work through these. We’ve seen lots of different errors over our lifetime of projects, and can usually identify (quickly) what’s going on, and what we need to resolve.

Last week, however, I had something new occur, that I’ve never seen before. I therefore thought it might be good to talk about it, so that if it happens to others, they’ll know how to handle it!

The scenario is as follows:

  • The client is using Leads to capture initial information (we’re not using Opportunities, but that’s a whole other story)
  • Different teams of users have varying access requirements to the Leads table. Some need to be able to view, some need to be able to create/edit, and others aren’t allowed to view it at all
  • The lead process is driven by both region (where the lead is located), as well as products (which products the lead is interested in)

Now, initially we had some issues with different teams not having the right level of access, but we managed to handle those. Typically we’d see an error message along the following lines:

We’d then use this to narrow down the necessary permissions, adjust the security role, re-test, and continue (sometimes onto the next error message, but hey, that’s par for the course!).

However, just as we thought we had figured out all of the security roles, we had a small sub-set of users report an error that I had NEVER seen before.

The scenario was as follows:

  • The users were able to access Lead records. All good there.
  • The users were able to edit Lead records. All good there.
  • The users were trying to assign records (ie change the record owner) to another user. This generally worked, but when trying to assign the record to certain users, they got the following error:

Now this was a strange error. After all, the users were able to open/edit the lead record, and on checking the permissions in the security role, everything seemed to be set up alright.

The next step was to go look at the error log. In general, error logs can be a massive help (well, most of the time), assuming that the person looking at them can interpret what they mean. The error log gave us the following:

As an aside, the most amusing thing about this particular error log, in my opinion, was that the HelpLink URL provided actually didn’t work! Ah well…

So on taking a look, we see that the user is missing the Read privilege (on what we’re assuming is the Lead table). This didn’t make sense – we went back to DOUBLE-check, and indeed the user who was trying to carry out the action had read privileges on the table. It also didn’t make sense given that the user was able to open the lead record itself (disclaimer – I’ve not yet tried setting up a security role where the user has create/write access to a table, but no read access…I’m wondering what would happen in such a scenario).

Then we had a lightbulb moment.


In truth, we should probably have figured this out before, which I’ll freely admit. See, if we take a look at the original error, the user was getting it when trying to assign the record to another user. We had also seen that the error was only happening when the record was being assigned to certain users (ie it wasn’t happening for all users). And finally, the error message title itself says ‘Assignee does not hold the required read permissions’.

So what was the issue? Well, it was actually quite simple (in hindsight!). The error was occurring when users attempted to assign the record to a user who did not have any permissions on the Lead table!

What was the resolution? Well, to simply grant (read) access to the Lead table, and ensure that all necessary users had this granted to them! Thankfully a quick resolution (once we had worked out what was going on), and users were able to continue testing out the rest of the system.
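As an aside, if you ever want to check this sort of thing programmatically rather than clicking through security roles, the same scenario can be illustrated against the Dataverse Web API. The sketch below (Python, using the requests library) is purely illustrative – the environment URL, record IDs and token are placeholder assumptions, and it’s not what we actually did on the project – but it shows both halves of the problem: checking what access the prospective assignee has to the record, and then carrying out the assignment by changing the record’s owner.

```python
import requests

# Placeholders/assumptions: environment URL, record IDs and token.
ENV = "https://yourorg.crm.dynamics.com"
API = f"{ENV}/api/data/v9.2"
HEADERS = {
    "Authorization": "Bearer <access token acquired via Azure AD>",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

lead_id = "00000000-0000-0000-0000-000000000001"      # the lead being assigned
assignee_id = "00000000-0000-0000-0000-000000000002"  # the prospective new owner

# 1. Check what rights the prospective assignee has on the lead.
#    RetrievePrincipalAccess is bound to the principal (the user), with the
#    target record passed in as a parameter.
check = requests.get(
    f"{API}/systemusers({assignee_id})/Microsoft.Dynamics.CRM.RetrievePrincipalAccess"
    f"(Target=@target)?@target={{'@odata.id':'leads({lead_id})'}}",
    headers=HEADERS,
)
check.raise_for_status()
rights = check.json().get("AccessRights", "")
print("Assignee access rights:", rights)  # e.g. "ReadAccess,WriteAccess" or "None"

# 2. Only attempt the assignment if the assignee can at least read the record -
#    otherwise we'd hit exactly the error described above.
if "ReadAccess" in rights:
    assign = requests.patch(
        f"{API}/leads({lead_id})",
        headers=HEADERS,
        json={"ownerid@odata.bind": f"/systemusers({assignee_id})"},
    )
    assign.raise_for_status()
    print("Lead assigned successfully")
else:
    print("Assignee has no read access to the Lead table - fix their security role first!")
```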

Has something like this ever happened to you? Drop a comment below – I’d love to hear the details!

MB-260: Microsoft Customer Data Platform Specialist

It’s been a while since I’ve taken an exam. Admittedly, this is for two reasons. Firstly, the renewal process for exams now (as updated last year) is not to take them again, but rather to re-qualify through Microsoft Learn. The second reason is that I’ve been waiting for some new exams to come out (OK – there’s the DA-100, which is still on my list of things to do…).

Well, there’s a new exam on the block. In fact, it’s a different type of exam – this is a ‘Speciality’ exam, rather than one focusing on a specific type of application. It’s the first of its kind, though there are likely to be more to follow in the future.

It’s the MB-260, which is all around Customer Data. That’s right – it’s not about how to do sales, or customer service, or something else. It’s about taking a (holistic) approach to ALL of the data that we can hold on customers, and doing something with it.

The official page for it is at https://docs.microsoft.com/en-us/learn/certifications/exams/mb-260. The specification for it is:

Candidates for this exam implement solutions that provide insights into customer profiles and that track engagement activities to help improve customer experiences and increase customer retention.

Candidates should have firsthand experience with Dynamics 365 Customer Insights and one or more additional Dynamics 365 apps, Power Query, Microsoft Dataverse, Common Data Model, and Microsoft Power Platform. They should also have direct experience with practices related to privacy, compliance, consent, security, responsible AI, and data retention policy.

Candidates need experience with processes related to KPIs, data retention, validation, visualization, preparation, matching, fragmentation, segmentation, and enhancement. They should have a general understanding of Azure Machine Learning, Azure Synapse Analytics, and Azure Data Factory.

Note that there’s quite a bit of Azure in there – it’s not just about Power Platform, Dataverse, or Dynamics 365. People who handle reporting on customer data should have various Azure skills as well.

There’s also a new type of badge that will be available:

At the time of writing, there are no official Microsoft Learning paths available to use to study. I do expect this to change in the near future, and will update this article when they’re out. However the objectives/sub-objectives are available to view from the main exam page, and I’d highly recommend going ahead & taking a good look at these.

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). I’ve tried to group things together as best as possible for the different subject areas.

Overall, I had 51 questions, which is towards the higher end of what I’ve experienced in my exams over the last year or so. There was only a single case study though.

Some of the naming conventions weren’t updated to the latest terminology, which I would have expected them to be. I still had a few references to ‘entities’ and ‘fields’ come up, though for the most part ‘tables’ and ‘columns’ were used. I guess it’s just a matter of time until everything is brought up to speed.

  • Differences between Audience Insights and Engagement Insights
    • What are the benefits of each
    • When would you use each one
    • What types of users will benefit from each type
    • How to create customer insights
  • Environments
    • Types of environments
    • How to create a new environment
    • What options are available when creating an environment
    • What is possible to copy from an existing environment
  • Relationships
    • Different types of relationships
    • What is each one used for
    • Limitations of different relationship types
  • Business level measures vs customer level measures
    • What each one is, and what they’re used for
  • Power Query
    • How to use
    • How to configure
    • How to load data
  • Data mapping
    • Different types available to use
    • Scenarios each type should be used for
    • Limitations of each type
    • How to set it up
  • Segments
    • What are segments, how are they set up, how are they used
    • What are quick segments, how are they set up, how are they used
    • What are segment overlaps, how are they set up, how are they used
    • What are segment differentiators, how are they set up, how are they used
  • Measures
    • What are measures, how are they set up, how are they used
  • Data refresh
    • Automated vs manual options
    • Limitations of each type
    • Availability of each type
    • How to set up each type
    • How to apply each type
  • Data Unification
    • What is this
    • How it can be used
    • How to set it up
    • Limitations of it
    • Process validation
    • Changing existing models
  • AI for Audience Insights
    • What is this
    • What can it be used for
    • How to use it
    • Factors that can affect outcomes
  • Security
    • Using Azure Key Vault
    • Capabilities of this
    • How to set it up
    • How to use it
  • Dynamics 365
    • Capabilities for interacting with Dynamics 365
    • How to set it up
    • How to display data, and where it can be displayed
    • What actions users are able to carry out within Dynamics 365

Wow. It’s a lot of stuff. It’s definitely an exam where, if you’re not already hands-on with the skills needed, I’d highly recommend getting a decent amount of experience before taking it!

I can’t tell you if I’ve passed it or not…YET! Results aren’t going to be out for several months, and to be honest, I’m not quite sure how well I’ve actually done.

So, if you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Solution deployments: Automated vs Manual

Over the holiday period, I’ve been playing around with solution deployments. OK – don’t judge me too much…I also took the necessary time off to relax & step away from work!

But with some spare time in the evenings, I decided to look a bit deeper into the world of DevOps (more specifically, Azure DevOps), and how it works. I’ll admit that I did have some ulterior motives around it (for a project that I’m working on), but it was good to be able to get some time to do this.

So why am I writing this post? Well, there’s a variety of great material out there already around DevOps, such as https://benediktbergmann.eu/ by Benedikt (check out his Twitter here), who’s really great at this. I chat to him from time to time around DevOps, to be able to understand it better.

However, I ran into some quite interesting behaviour (which I STILL have no idea why it’s the case, but more on this later), and thought that I would document it.

Right – let’s start off with manual deployments. As we know, manual deployments are done through the user interface. A user (with necessary permissions) would do the following:

  1. Go into the DEV environment, and export the solution (regardless of whether this is managed or unmanaged)
  2. Go into the target environment, and import the solution

Pretty simple, right?

Now, from a DevOps point of view, the process is similar, though not quite the same. Let’s see how it works:

  1. Run a Build pipeline, which will export the solution from the DEV environment, and put it into the repository
  2. Run a Release pipeline, which will get the solution from the repository, and deploy it to the necessary environment/s

All of that runs (usually) quite smoothly, which is great.
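As a side note, whether you deploy manually through the UI or via the Power Platform Build Tools tasks in a pipeline, what happens underneath boils down to the same Dataverse Web API actions. Below is a very rough Python sketch of that export/import flow – the org URLs, tokens, solution name and import job ID are all placeholder assumptions, and it’s obviously not literally what the DevOps tasks execute – but it can help in understanding (or scripting) the process.

```python
import base64
import requests

# Placeholders/assumptions: environment URLs, tokens and solution name.
DEV = "https://dev-org.crm.dynamics.com/api/data/v9.2"
UAT = "https://uat-org.crm.dynamics.com/api/data/v9.2"

def headers(token: str) -> dict:
    return {
        "Authorization": f"Bearer {token}",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
        "Accept": "application/json",
        "Content-Type": "application/json",
    }

# 1. 'Build' stage: export the managed solution from DEV.
export = requests.post(
    f"{DEV}/ExportSolution",
    headers=headers("<dev access token>"),
    json={"SolutionName": "MySolution", "Managed": True},
)
export.raise_for_status()
solution_zip = export.json()["ExportSolutionFile"]  # base64-encoded zip file

# In a real pipeline this file would be committed to the repository;
# here we just save it locally.
with open("MySolution_managed.zip", "wb") as f:
    f.write(base64.b64decode(solution_zip))

# 2. 'Release' stage: import the solution into the target environment.
import_resp = requests.post(
    f"{UAT}/ImportSolution",
    headers=headers("<uat access token>"),
    json={
        "CustomizationFile": solution_zip,  # same base64 content
        "OverwriteUnmanagedCustomizations": False,
        "PublishWorkflows": True,
        "ImportJobId": "00000000-0000-0000-0000-000000000003",
    },
)
import_resp.raise_for_status()
print("Solution imported (as an Update by default - see below!)")
```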

Now, let’s talk for a minute about managed solutions. I’m not going to get into the (heated) discussion around managed vs unmanaged solutions. There’s enough that’s been written, said, and debated around the topic to date, and I’m sure it will continue. Obviously we all know that the Microsoft Best Practise approach is to use managed solutions in all non-DEV environments…

Anyway – why am I bringing this up? Well, there’s one key difference in behaviour when deploying a managed solution vs an unmanaged solution (for a newer solution version), and this is to do with removing functionality from the solution in the DEV environment:

  • When deploying an unmanaged solution, it’s possible to remove items from the solution in the DEV environment, but when deploying to other environments, those items will still remain, even though they’re not present in the solution. Unmanaged solution deployments are additive only, and will not remove any components
  • When deploying a managed solution, any items removed from the solution in the DEV environment will also be removed from the other environments when the solution is deployed there. Managed deployments are both additive & subtractive (ie if a component isn’t present in the solution, it will be removed when the solution is deployed)

Now most of us know this already, which is great. It’s a very useful way to handle matters, and can assist with handling a variety of scenarios.

So, let’s go back to my first question – why am I writing this post? Well..it’s because of the different behaviour in manual vs automated deployment, which I discovered. Let’s look at this.

When deploying manually, we get the following options:

The default behaviour (outlined above) is to UPGRADE the solution. This will apply the solution with both additive & subtractive behaviour. This is what we’re generally used to, and essentially the behaviour that we’d expect with a managed solution.

Now, when running a release pipeline from Azure DevOps, we’d expect this to work in the same way. After all, systems should be built to all work in the same way, right?

Well, no, that’s not actually what happens. See, when an Azure DevOps release pipeline runs, the default behaviour is NOT to import the solution (we’re talking managed solutions here) as an upgrade. Instead (by default), it imports it as an UPDATE!!!

This is what was really confusing me. I had removed functionality in DEV, ran the build pipeline, then ran the release pipeline. However the functionality (which I had removed from DEV) was still present in UAT! It took me a while to find out what was actually happening underneath…

So how can we handle this? Well, apart from suggesting to Microsoft that they should (perhaps) make everything work in the SAME way, there’s a way to handle it within the release pipeline. For this, it’s necessary to do two things:

Firstly, on the ‘Import Solution’ task, we need to set it to import as a holding solution.

Secondly, we then need to use the ‘Apply Solution Upgrade’ task in the release pipeline

What this will do is then upgrade the existing solution in the target environment with the holding solution that’s just been deployed.

Note: You will need to change the solution version to a higher solution number, in order for this to work properly. I’m going to write more about this another time, but it is important to know!
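For anyone curious about what those two tasks translate to under the hood, the Web API equivalent (as far as I understand it) is an ImportSolution call with the HoldingSolution flag set, followed by the DeleteAndPromote action. Here’s a hedged Python sketch of that – placeholder URL, token and file name again, and it’s the API-level equivalent rather than the exact internals of the Build Tools tasks:

```python
import base64
import requests

UAT = "https://uat-org.crm.dynamics.com/api/data/v9.2"  # placeholder target environment
HEADERS = {
    "Authorization": "Bearer <uat access token>",  # placeholder token
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

with open("MySolution_managed.zip", "rb") as f:  # the new, higher-versioned solution
    solution_zip = base64.b64encode(f.read()).decode()

# 1. Import the new solution version as a holding solution
#    (the equivalent of ticking 'import as a holding solution' on the task).
requests.post(
    f"{UAT}/ImportSolution",
    headers=HEADERS,
    json={
        "CustomizationFile": solution_zip,
        "OverwriteUnmanagedCustomizations": False,
        "PublishWorkflows": True,
        "ImportJobId": "00000000-0000-0000-0000-000000000004",
        "HoldingSolution": True,
    },
).raise_for_status()

# 2. Apply the upgrade: delete the old version and promote the holding solution.
#    This is the point at which components removed in DEV actually disappear
#    from the target environment.
requests.post(
    f"{UAT}/DeleteAndPromote",
    headers=HEADERS,
    json={"UniqueName": "MySolution"},
).raise_for_status()
print("Solution upgraded - removed components are now gone from the target")
```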

So in my view, this is a bit annoying, and perhaps Microsoft will change the default behaviour within DevOps at some point. But for the moment, it’s necessary to do.

Has this (or something similar) tripped you up in the past? How did you figure it out? Drop a comment below – I’d love to hear!

Workaround for sharing Canvas Apps

Don’t you find it absolutely frustrating when there’s a canvas app that you want to get access to, or give other users access to, but can’t see it? It’s REALLY annoying, but it’s sort of the way that Microsoft has designed the platform (at least at this point in time).

See, when a user creates a canvas app, only the creator is able to see & launch it. If other users want to get access to it, the creator needs to share it. This can be done by sharing the app directly with another user, or by sharing it with an AAD Security Group (which is sort of best practise).

Now, of course there’s the Microsoft Power Platform Centre of Excellence solution, which includes a very handy app to assign permissions for canvas apps. After all, if a user is on holiday, sick leave, or has left the company, there needs to be some way of assigning permissions for other users to gain access to it. It’s really helpful, but of course needs the CoE solution installed.

Let’s think of another scenario. What about if we have some canvas apps as part of a solution, that’s deployed through (proper) ALM – such as using Azure DevOps with automated pipelines. Best practise for this is to use service principals (ie non-interactive user logins). This is great, but then the canvas app/s will be owned by this user. So without the use of the CoE ‘Set App Permissions’ canvas app, we’re sort of stuck, as we can’t gain access to the app.

Or can we…..?

So this is a scenario that I’ve been dealing with recently, and I’ve found a really cool workaround that doesn’t need the CoE ‘Set App Permissions’ canvas app to be able to handle the situation.

The example below (amusingly, in my opinion) is actually using the Microsoft CoE solution as an example, but this works with any canvas apps that are held within a solution (again, this heavily supports using solutions for ALL development items!).

So, this is what the actual installed apps look like in this environment:

As we can see, there are a lot of them! But what happens if I’m logged in as my regular user? What do I see if I go to the list of apps? Well, I’ll see the following:

Now, as we can see, I’m able to see the model-driven apps (as these aren’t hidden at all). But I’m not able to see ANY of the canvas apps! So how can I get access to them, or share them with other users?

Well, if I take a look at the solution itself, I can see the following when browsing to the list of apps (I’m really loving the new Solution Explorer layout, I’ll freely admit!):

I can try to play the canvas app (in this case, the ‘Set App Permissions’ app) directly from the solution. But when I try to do this, I’ll get the following error message:

Now, this is of course happening because I’m not the owner of the app, & the app hasn’t been shared with me at all. So really I was expecting this error to happen.

However, if I take a look at the menu options displayed for me, I can see that the ‘Share’ option isn’t greyed out. I wonder what happens if I click it…

Now this is EXCITING! When clicking the ‘Share’ option on the menu, I’m given the regular sharing screen, where I’m able to set app permissions. So it looks like I’m able to do something here. OK – let’s go ahead & try to share the app with my own user:

So I’ve looked up my own user, and then clicked ‘Share’. This is what happens next…

Exciting moment – will this work?

Waiting with bated breath, and then…

It’s worked! The app sharing has been successful with my user.

Note: The example that I’m using here is with my own user account. However it doesn’t need to be – I can select any user account or AAD Security Group, and share accordingly.

Going to my list of apps, I can now see that the app is showing up for me:

Clicking the app to launch it presents me with the permissions dialogue, and having confirmed permissions, then launches it properly:

So this is indeed a way in which it’s possible to share canvas apps with users and/or AAD security groups, even when a user isn’t the owner of the canvas app.

It is important to note that the user carrying this out does need to have one of the following security roles in the environment:

  • System Customiser
  • System Administrator

Without having one of these roles, it’s not going to be possible to carry out the above (mostly because it’s not possible to see solutions & dig down into them).

This is a handy little trick that hopefully will help clear up one of the headaches when trying to share canvas apps! Of course it’s possible to use the Microsoft CoE tool to set app permissions, but if a customer doesn’t have it installed, then this would be another way to approach things.

Have you ever had this issue? How did you go about solving it? I’d love to hear – please drop a comment below…

Omnichannel vs Customer Service Workspace

This is a question that I’m being asked on a semi-regular basis at the moment, so I thought it would be helpful to do a writeup around things. It’s definitely not clear from the outset based on existing documentation. However, being able to speak to wonderful people such as Tricia Sinclair has been amazing in being able to figure out the differences between the two applications.

So, where to start. Well, let’s first of all understand the similarities between the two applications.

Firstly, they are both multi-session apps. To put this in context (as mentioned elsewhere previously) – traditionally Dynamics 365 applications have been ‘single session’ applications. This means that users would navigate around, open/close records, create or edit as necessary. If users wanted to have multiple records open, they’d need to have multiple tabs open, or even multiple windows (yes, I still remember the days before browsers had tab functionality!).

What multi-session means in this context is that users are able to open up multiple records, and switch between them in the same tab. Open records pop into the left-hand navigation bar, and users can easily click between them. Not only that – users are also able to open further tabs within the same record pane, to access further information. These stay open whilst users switch to other records, which is really quite helpful!

So for example, a user could open a case record, then open the contact associated to the case, as well as the account related to the case. They could then further open the knowledge base to search for articles, and so on and so forth. All of these stay open.

Both apps are also web applications – they run in a browser, rather than needing to have a specific software application installed for them (unlike Unified Service Desk).

So, where do they actually differ? Well, this was a bit difficult for me to understand in the first instance, though that turned out to be because I had both Customer Service Workspace as well as Omnichannel configured within the same environment! Turns out that this wasn’t the best approach to take to compare the two, & understand their capabilities. Easily fixed though by quickly spinning up a new trial to install one in.

So with knowing how Omnichannel works (after all, I’ve written quite extensively around it), let’s take a look at the Customer Service Workspace app:

Customer Service workspace overview
  1. The session pane lists all the sessions that you are actively working on. Select the tabs to navigate among sessions.
  2. The Home session returns you to the Customer Service Agent Dashboard view.
  3. Each session has a tab in the session panel. Select a tab to navigate to the session you want to work on.
  4. Select a case to open a new session. A single click on a case replaces your view with the case form. Select the back arrow in the upper-left corner of the form to get back to your previous view.
  5. Select the tabs to navigate to your open activities, cases, forms and views.
  6. Select the + icon to expand the menu to view a list of forms, views, and activities. Select the one you want to open in a new tab.
  7. Select the drop-down selector to filter cases in queues you can choose to work on.
  8. Select Shift + mouse click to open a new session for an activity. A single click replaces your view with the activity form. Select the back arrow in the upper-left corner of the form to go back to your previous view.

Now, without Omnichannel installed in the same environment (& obviously licensed for users), it’s not possible to have native Dynamics 365 channels such as Chat, WhatsApp, etc. Conversations will not appear for customer service agents who are using the Customer Service Workspace.

Note: If you DO have Omnichannel installed in the same environment, and users are licensed to use it, then conversations will show up within the Customer Service Workspace app for them. They’ll have notifications pop up on the screen for incoming customer sessions.

That’s not to say that it’s not possible to have channels available within Customer Service Workspace. So how do they actually come in?

Well, as it turns out, channels within Customer Service Workspace need to be third-party channels. There’s a plethora of 3rd party add-ons for Dynamics 365 that offer different communication capabilities. Some of these do date back a while (to before any native Microsoft capabilities).

For example, there are ISV add-ons for Customer Service that can embed a call dialler into the experience, so that customer service agents can call directly from a record. Or alternatively an add-on such as a 3rd party web chat application, that can then surface these within the Customer Service Workspace. Each of these obviously would need to be purchased, licensed & integrated appropriately with your Dynamics 365 solution as necessary too.

Now both applications also have other similar functionality, such as the Productivity Pane, Agent Scripts, Smart Assist & Knowledge Search. However there can be differences between them. For more information, I’d suggest taking a look at Tricia’s blog article that goes into depth on this.

So to summarise, Omnichannel is for the native Microsoft channels, giving customer service agents the ability to service customers using them. Licensing (currently) is with Customer Service Enterprise, and then either the Digital Chat or Digital Messaging add-on SKUs.

Customer Service Workspace, on the other hand, allows customer service agents to have a multi-session application for their work, as well as allowing communications through third-party channels. Licensing is as per the different Customer Service SKUs, with any 3rd party add-on being licensed appropriately.

Hopefully this helps clarify the difference between these two, and makes them less confusing. If you have any further questions around this, please drop a comment below, and I’ll do my best to respond!

Omnichannel – Wave 2 2021

So last week the Wave 2 2021 information dropped. It’s taken me a few days to get round to it (family stuff happening), but I’m finally able to do a quick recap of it. As most people know, Microsoft releases features in two waves – one in the spring (Wave 1), and one in the autumn (Wave 2). As usual, I’ve included the links for the full release notes across both Dynamics 365 & Power Platform below, though I’ll be focusing on the product features for Omnichannel.

The links are here:

As I’ve done before, I’m going to include the dates that are applicable (at this point in time) for each item.

Enhancements to existing capabilities

Agent workspace inbox view

GA – Oct 2021


As more and more organisations move in the direction of omnichannel system capabilities, there is a growing need for the actual agent experience to be better optimised. The inbox view that this functionality will deliver is aimed at addressing the need to quickly triage requests, and allow agents to focus on customers & their issues. It will be integrated into the Customer Service workspace as well as the Omnichannel Engagement Hub, and will allow agents to effectively navigate their emails & conversations whilst handling customer interactions.

Usability improvements for agent workspaces

Early Access – Aug 2021. GA – Oct 2021


The Customer Service workspace and Omnichannel Engagement Hub are multi-session applications that allow users to be able to multi-task with customers to provide support on multiple cases simultaneously. This release provides usability improvements to help agents be more productive, including simplified navigation as well as the migration of productivity tools to the new extensible App side pane.

Increase agent productivity with contextual collaboration using embedded Microsoft Teams

GA – Nov 2021


Agents who use Dynamics 365 Customer Service can easily collaborate with anyone within their organization, such as agents from other departments, supervisors, customer service peers, or support experts, over Microsoft Teams to resolve customer issues, without leaving the case or conversation. Chats over Teams will be linked directly to Customer Service records, enabling a contextual experience.

Some of the key features coming in this release are:

  • Ability to chat with contacts from within Dynamics 365
  • Access to key Customer Service contacts, such as supervisors, queue members, and support experts.
  • Access to AI-driven suggestions of agents who resolved similar cases.
  • Access to recent Microsoft Teams chat lists.
  • Ability to link and unlink chats to case records.
  • Access to linked Microsoft Teams chats.
  • Message avatar and presence, where users can easily see profile pictures of a chat participant and their availability (presence)

Omnichannel Voice Channel

At Ignite in September 2020, Microsoft announced the new Voice channel for Dynamics 365 Customer Service. The aim of the solution is to provide simpler administration & management experiences within the platform itself, rather than needing traditional cloud component integration complexities.

With the release of this, voice, SMS, and digital messaging channels, and a PVA-powered intelligent interactive voice response (IVR), real-time voice intelligence, and insights across all channels, speech-based self-service, and intelligent skills-based routing are all brought together in a single package.

Voice channel powered by Azure Communication Services

GA – Nov 2021


As mentioned in the Wave 1 2021 post, there’s a new voice channel that’s coming in. This new solution for Customer Service enables an all-in-one customer service solution without fragmentation or requirement of manual data integration. It will provide a single view of the customer that empowers agents to provide personalised service across all channels, and true omnichannel analytics and insights for agents and supervisors alike. Providing organizations with a choice of telephony delivered directly by Microsoft enables quick and easy deployment of a voice channel for their business.

This feature enables organizations to adopt Azure Communication Services as a voice provider natively in Omnichannel for Customer Service, and facilitates the following features:
  • Phone number procurement and management
  • Ability to handle and distribute incoming calls
  • Ability to make outbound calls
  • Ability to manage SMS (inbound and outbound)
  • Deep integration of voice into core Omnichannel for Customer Service functionality
  • Real-time sentiment analysis
  • Real-time transcription
  • Real-time translation
  • Real-time smart assist suggestions
  • Operations management through supervisor dashboards
  • Ability to record and manage phone calls

Now there has been a slight delay in rolling this out. As a result, the GA dates for the below have been pushed back to Nov 2021:

  • Call intelligence
  • Call recording
  • Call transcription and real-time sentiment analysis
  • Consult and transfer
  • Direct outbound calling
  • Embedded analytics for voice channel
  • Intelligent voice bot via Power Virtual Agents and Microsoft Bot Framework
  • Modern administration experience for Omnichannel voice (number management)
  • Modern administration for Omnichannel SMS via Communication Services (number management)
  • Supervisor monitoring and barge
  • Topic clustering for voice

Unified routing

Traditionally, organizations use “queue-based routing,” where incoming service requests are routed to a relevant queue, and agents work on those service requests by picking them from the queue. Organizations can miss service-level agreements if agents pick the easier service requests and leave the higher-priority requests in the queue. To address this scenario, organizations either create custom workflows to periodically distribute service requests among their agents or have dedicated personnel to distribute the service requests equitably among agents while adhering to organizational and customer preferences. Both methods are inefficient and error prone and necessitate continuous queue supervision.

The intelligent routing service in Customer Service uses a combination of AI models and rules to assign incoming service requests from all channels (cases, entities, chat, digital messages, and voice) to the best-suited agents. The assignment rules take into account customer-specified criteria, such as priority and auto-skills matching. The new routing service uses AI to classify, route, and assign work items with full automation, eliminating the need for constant queue supervision and manual work distribution to offer operational efficiencies for organizations.

Improved historical analytics for unified routing scenarios

GA – Oct 2021


Administrators use unified routing and routing rules across the classification and assignment stages to help ensure the work item is assigned to the best suited agent. Embedded historical analytics provides an overview of routing performance of each channel to help optimize the routing strategy and improve the routing and workforce efficiency. Providing organizations a view of the effectiveness of configurations allows them to improve routing configurations to help increase their customer satisfaction and agent satisfaction scores.

Routing diagnostics for supervisors

GA – Oct 2021


Routing diagnostics helps an organization to better understand the path a work item takes after it comes into the routing system, through all the classification and assignment rules, to ultimately land in a queue or be assigned to an agent. Current routing diagnostics are available for administrators and are more focused on the workstream and queue routing. In this release, routing diagnostics are being introduced to supervisor experiences, and the quality of the diagnostics is being improved.

I’m really quite excited to see how the new Voice channel will be received, as I think it’s a great feature addition to the overall tools available. It will be interesting to see how clients may choose to use it over their existing voice channel setup.

I’ll be looking deeper into the different functionalities, and will share them here. If there’s anything you think would be helpful to focus on, drop a comment & let me know!

Troubleshooting the ‘Follow’ functionality

On a recent client project, we’ve come up against an interesting situation. Some of the users have the ‘Follow’ functionality available to them, but others don’t seem to have it. This, of course, is quite confusing, so I thought it would be good to write about it, for others who may come up against this.

But first, let’s take a step back. After all, before this had happened I had never heard of the ‘follow’ functionality within the system, and I’m quite sure that many others haven’t either! So what exactly is this all about?

What is ‘Follow’?

We’ve all been there – we have some customers who are ‘priority customers’, and we want to know/see everything that’s happening around them. Obviously we can go into their specific record/s, and see what’s going on. For example, seeing new cases added for these customers, other activities, etc. But what if we don’t want to have to manually open the records each time, or set up specific views in the system for them?

Well, this is where the Follow functionality comes in. It’s possible to track activities (in ‘real-time’) for records that a user follows. Microsoft has given us the ability to set this (or unset this) on a per-record basis, so that users can set their own preferences within the system. When a user follows a specific record, the details for that record then show up in the user’s activity feed. This can then be used further, such as displaying it within a dashboard, for example.

Follow functionality through views
Follow functionality on a specific record

It’s also possible to automatically follow records based on specific criteria.

How to set up Follow functionality

In order for records to be able to have the follow functionality available to them, they need to have the Activity Feed enabled for the specific table. The default system tables such as Accounts, Contacts & Leads already have this enabled, so these records are able to be followed without any additional configuration around them.

To enable other tables (such as custom tables that you may have created) to be able to have the records within them followed, we need to carry out the following steps:

  1. Go to the Advanced Settings menu, and open Activity Feeds Configuration

2. Find the table that we’re wanting to configure this for (if it’s not showing up, click the ‘Refresh’ button on the menu)

Here we can see that the Channel table isn’t enabled at this point

3. Click the ‘Activate’ button on the menu bar

4. Confirm the pop up screen

And voila – you’re done! Users will now be able to go into the table/s, and follow (or unfollow) records there

Troubleshooting

So we now understand what the follow functionality is, and how to enable it. But what happens when users can’t actually see it within the system, to be able to use it?

Well, there are several different things that we can do to look to solve the issue:

  • Have activity feeds been configured for the table? If they’ve not been configured, then they’ll need to have this set up (this is why I’ve put the steps above as to how to do this!)
  • Are security roles set up correctly?

The second one is what turns out to have been the issue for this project. It’s been quite confusing, as originally mentioned, that certain users did see the follow functionality, but other users didn’t see it.

The first place to check is the ‘follow’ privileges on each security role:

As you can see above, we had given organisation-level access on the security role (& actually across all security roles), though the users were still having issues. So the next step is to check a different security privilege within the security role. This is the ‘Post Configuration’ setting, which is found under the Custom Entities section (why it’s under Custom, I have NO idea):

Without this enabled, users with the security role will NOT be able to see/use the follow functionality within the system!
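If you’re troubleshooting this for a specific user and aren’t sure which of their security roles to inspect, it can help to quickly list the roles they actually hold. Below is a rough sketch of doing that via the Dataverse Web API (Python, with a placeholder URL, token and user ID – and note it only shows directly assigned roles, not ones inherited through team membership):

```python
import requests

# Placeholders/assumptions: environment URL, token and user ID.
ENV = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {
    "Authorization": "Bearer <access token>",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

user_id = "00000000-0000-0000-0000-000000000005"  # the user who can't see 'Follow'

# Pull back the security roles directly assigned to the user
# ('systemuserroles_association' is the user-to-role relationship).
resp = requests.get(
    f"{ENV}/systemusers({user_id})?$select=fullname"
    "&$expand=systemuserroles_association($select=name)",
    headers=HEADERS,
)
resp.raise_for_status()
user = resp.json()

print(f"Roles held by {user['fullname']}:")
for role in user["systemuserroles_association"]:
    # Each of these is a candidate to check for the Follow & Post Configuration
    # privileges mentioned above.
    print(" -", role["name"])
```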

Hopefully this should then sort out all issues, and users will be able to use the functionality as required.

Have you ever had issues with this feature? Have you found a different solution to fix it? Drop a comment below – I’d love to hear!

Omnichannel Admin Center (Part II)

We’ve started off looking at the new Omnichannel Admin Center in Part I. I’m going to continue going through the wonderful new app (interface?), showcasing the functionality that’s different (there’s no point in me mentioning things that are the same, right?).

So having taken a look at the general overview, let’s start delving deeper into how it really is better!

Queues

Queues are really the backbone of Omnichannel. Customer interactions come through to a queue, where agents can then pick them up & respond. Without a queue, nothing would ever happen!

In the new interface, the functionality around queues has been extended. This is what the new interface looks like overall:

You’ll note that the default queues aren’t showing up in here. I’m not quite sure why that is, but am looking into it, and will post about it when I find out the reason behind it.

Opening up a queue record gives us the following:

I’m loving the cleanliness of the new layout – it’s something I’m probably going to keep saying! The new UI is just so much nicer on the eye, in my opinion. We have the information laid out well.

New users can be added from the ‘Add Users’ button on the right top, which is a pretty standard interface (ie adding new/existing records into a subgrid on a form).

But there are several new features here that weren’t present through the old interface. The first to talk about is the ability to set Operation Hours (the block at the bottom of the screenshot above). It’s great to see the prompt that if no operating hours are set, it’ll default to 24/7 operation.

Previously, it was a slight pain (ie clicking around a lot!) to get these to be associated. Now all we need to do is click the ‘Set Operation Hours’ button at the bottom of the page, and we can then add an existing record for this, or set up a new one:

Choosing an existing record will also give us the option to modify the settings for it:

One of the really nice things about this is the Assignment Method, which shows how work items will be prioritised. It’s possible to change this, as well as create a NEW assignment method:

So there are quite a few additional functionality options available from the initial interface, rather than needing to click around. I’m liking it!

Workstreams

Just as with Queues, the Workstreams interface has been streamlined as well. One of the important things to note is that workstreams will need to be migrated over from the old interface to the new interface (I guess that there’s something happening behind the scenes?). I’m going to cover how to do this in a future post (stay tuned!), but let’s take a look at the functionality in the new interface:

Clicking into a workstream record gives us the following information:

That’s already MUCH better laid out than the previous way, I think!

So let’s see what we have here. Well firstly, we’re able to move between the channels that are associated to the workstream. This is really helpful, as it can allow us to flip quickly backwards & forwards, and see the relevant information for each channel. We’re able to directly edit each individual channel just by clicking on it (loving the ‘fly out’ side screens for this!), and change the behaviour of it:

The ability to do all of this quickly & easily is just wonderful, rather than needing to have a concrete understanding of the (complex) relationship structures within the system, and clicking around.

It’s also possible to add a new channel directly from this screen, which will easily walk (admin) users through setting up a new channel as needed:

Moving down the options available, we’re able to set routing rules, as well as work classifications. I’m going to talk about this in a separate post, but there’s some really interesting new capabilities here!

Looking at the Work Distribution information, we’re also able to view more information around this, as well as modify some of the settings available. Again, this comes in as a ‘fly out’ style window:

One of the neat pieces of functionality that has been slipped in is the ‘Keep same agent for entire conversation’ option. This means that if the customer interaction drops for some reason, & they come back, it can look for the same agent that they were chatting with previously, if it’s set as such.

Finally, we then have the ‘Advanced Settings’ tab, which gives us information around sessions, notifications, context variables, smart-assist bots, and quick replies. All of these are able to be viewed & configured directly from within the workstream, rather than needing to jump around different parts of the Omnichannel system, & then associating them together:

So to wrap up here (don’t worry, more to come shortly!), the new interface is really enabling admins to be able to quickly & easily create the necessary setup that’s needed. It’s avoiding needing to click around into different parts of the system. Omnichannel is complex enough as it is, and with being able to do the setup from one screen, it really makes life a LOT easier overall with getting the initial setup in place!

What are your thoughts on the new app? Have you used it yet? Have you found that it’s saving you time/effort? Drop a comment below – I’d love to hear!

Omnichannel Admin Center (Part I)

So there’s a new kid on the block. Or rather, it’s probably more accurate to say that there’s a new app available in Dynamics 365! This is the ‘Omnichannel Admin Center’ app that’s now present for anyone who currently has Omnichannel installed in their environment, or who is creating a new installation of Omnichannel.

So, what is this all about then?

Well, let’s back up a step here. Previously, to set up Omnichannel, users had to go into the Dynamics 365 Settings, find the Omnichannel App, start the setup of it, and then go ahead & manually configure everything in the Omnichannel Administration app.

This, to be frank, took quite a bit of time to do, and needed users to be very familiar with the different parts of the interface. I’ve previously covered the (multiple) steps needed to do all of this in various blog posts, to help users understand what is actually needing to be done.

Thankfully, Microsoft realised the complexity around this, and have come out with a simplified administration experience. I’m very much in support of this, as it reduces the complexity of getting things started for Omnichannel in the first instance!

So let’s go ahead & take a look at this new app

The first thing to notice when opening the new Omnichannel Admin Center app is the interface itself. I think that this is really nice – rather than a ‘typical’ model-driven app experience, users are able to see some useful information on the home page itself!

Also, very nicely done in my opinion, are the three links at the bottom of the page:

  • Release Notes. This takes users to the release notes section on the Microsoft Docs website. It’s a great little thing that can help users understand the latest/greatest features that are being released
  • Ideas forum. People come up with great ideas to suggest to Microsoft to be able to include in their products. The Ideas forum is the location for these, where users can upvote popular concepts, or submit their own ideas. The Microsoft engineering teams do actually keep an eye on this!
  • Support community. The community forums are really helpful in allowing users to raise questions around the products, and give the ability for other users to help them out by giving answers etc. Most users will have already experienced the support forums in one way or another, but having a link directly to it is definitely quite useful to have

Now one thing that’s usually asked is ‘how can we quickly/easily see & set up chat in Omnichannel’? It’s one of the first things asked, as people tend to want to deploy (web)chat capabilities first, and then add other capabilities later on. Setting this up manually does take several steps, along with some waiting time (or, as I like to refer to it, a coffee/snack break!)

It’s possible to quickly launch this through the button at the top of the page, rather than needing to go through the multiple configuration steps manually:

Click the button to launch it, and you’ll see the following window come up:

Clicking the ‘open chat demo’ will allow the system to start automatically configuring it for you – no more need for manual steps! You’re also able to use sample data if you wish to, to be able to show the experience without needing to load it in manually.

Yes, this really does only take a minute or two to happen!

Once the system has auto-configured everything, you’re now able to go ahead & launch the demo. Again, all the links & information are presented easily to us, telling/showing us what we need to do.

You’ll notice the chat widget in the lower right hand corner, which I’ve outlined in the image above. This launches into the chat widget directly, rather than needing to deploy it first to a webpage:

There’s no need to start needing to get into the setup of workstreams, queues, channels, routing capabilities, etc. It’s all configured right for you, to get you immediately started!

Of course, to test it out fully you’ll also need someone logged in as an Omnichannel Agent, to be able to respond to the chat instance. This could be the same user (in a different tab/browser on the same machine), or a different user on another machine. It’s really up to you as to how you would like to go about it.

So this is a really great feature to be able to have now. It’s not the ONLY great thing about the new app, however – stay tuned for Part II next week when I’ll go into more capabilities that it provides!

Environments & ‘Admin Mode’

With some recent events happening (both professional & personal), I’ve taken a slight step back from putting out posts on here. Thankfully things seem to be settling down, so I’m getting (back) into the swing of things!

I thought that it would be good to talk about a subject that I fell ‘foul’ of recently. This is around environments, and more specifically, the ‘admin mode’ that it’s possible to use on them.

So what exactly is this ‘admin mode’? Well, the aim of it is to restrict access to certain users, namely System Administrators & System Customisers. Why would we want to do this? There are several scenarios that come to mind:

  • Performing a system upgrade (such as enabling new features)
  • Changing environment type (eg Production to Sandbox, or vice-versa)
  • Restoring an environment

Essentially, any time we have operation-type work that we’re wanting to carry out. This way whatever we’re doing won’t affect users, and anything that the users are doing won’t affect things either (symbiotic relationship there!).

So as an example, if we’re doing a major release which changes functionality within a system, we wouldn’t want users in the system carrying out their usual work, as this could cause data issues if they’re saving records during the actual release. We of course SHOULD be communicating to users that a release is going to take place, and that they shouldn’t be in the system at the time, but ‘admin mode’ is how we can truly enforce it.

Something to bear in mind as well is that if you’re going ahead & restoring an environment to a previous state (whether that’s an automatic save point, or a manual one), it will automatically put the environment into ‘admin mode’ once the restore has been completed. This is very important to keep in mind!

There are three settings around administration mode:

  1. ‘Administration Mode’. This sets whether admin mode is on or off!
  2. ‘Background Operations’. This sets whether background processes, such as workflows, Power Automate flows, and Exchange synchronisation, are enabled (allowed to happen) or disabled (stopped from happening)
  3. ‘Custom Message’. This allows you to set a custom message that users (who are not system administrator/system customiser) will see when they attempt to access the environment

So this is the scenario that tripped me up a few weeks back:

  • I was needing to restore an environment to an earlier save point (to be clear, this was NOT a production environment)
  • I went ahead with the restore, and it completed successfully
  • Given that I was doing this at night, one of my children woke up, and I had to deal with them
  • I came back to things, saw that it completed, and then went ahead with the release that I was needing to do

All seemed to go well. However, when users were testing (which admittedly was a few days later), they reported that some functionality wasn’t working. This was strange, as it had been working before the release (& the release that I did hadn’t actually touched it!).

It turned out to be Power Automate flows that just didn’t seem to be running. OK – I started to look into them, but couldn’t figure out why they hadn’t run.

Creating a test Power Automate flow didn’t seem to work either – despite running it to test it, the trigger never activated! I was quite puzzled by this, and couldn’t (initially) work out the reason.

Then I thought to check environment settings! Lo & behold, the environment was STILL in administration mode, and the Background Process option was disabled! Aha – I’ve found the source!

Flipping this out of administration mode thankfully then allowed all Power Automate flows to work/run, and users confirmed that functionality was indeed running as expected. As you can imagine, I was quite relieved!


Something that I hadn’t realised previously is that if you manually put an environment into administration mode, it doesn’t automatically disable background processes. However, if you restore an environment, it DOES disable background processes by default. So if you’re wanting to try out automation items within a restored environment that’s still in administration mode, you’re going to need to ensure that you toggle the Background Processes toggle to allow it to work!

One further thing to learn as well (which I’ve been asked already by some people, so thought that I would mention it here). I’ve mentioned above that users were in the system, but reporting that things weren’t working. Now given that the environment was in administration mode, people have asked how users could be in it! The answer is that these users actually had the system customiser role applied to them, which is why they could get in! If they hadn’t had the role, then perhaps I might have realised things a little sooner (ie that the environment was in administration mode).

So a (good) little lesson learned, and I’ll definitely take it forwards. Has this, or anything else like it, ever tripped you up? Drop a comment below – I’d love to hear!