Omnichannel – Wave 1 2021

Today is a day that I’ve been looking forward to over the last few days. Leaving aside anything else that may be happening, it’s the day when the 2021 Wave 1 Release Notes come out! These cover the new functionality & features that will be released during the first half of 2021 for both Dynamics 365 & Power Platform.

The links are here:

There’s an amazing amount of functionality, but what I want to focus on specifically are the capabilities coming down the line around Omnichannel for Customer Service

As I’ve done before, I’m going to include the dates that are applicable (at this point in time) for each item.

Enhancements to existing capabilities

Embedded analytics for Chat and Digital Messaging

GA – April 2021

Traditional dashboards have limited interactive capabilities and provide a narrow view into the overall organization. Omnichannel’s Embedded analytics for chat and digital messaging allows service managers to identify problem areas and opportunities to improve from historical data, along with rich slice and dice capabilities powered by Power BI.

With this release, the embedded analytics for chat and digital messaging allows service managers to understand how agents and queues are performing. The analytics provide trends based on problem areas and opportunities allowing the service managers to analyze the corrective measures they can take, provide appropriate guidance to agents, and improve the customer support experience. Key Insights cards provide a glimpse into the notable trends on core metrics and topics that are important for a supervisor to further investigate the analytics.

Enhanced supervisor experiences for operational monitoring of Chat and Digital Messaging

GA – April 2021

Supervisors need key metrics and channel-specific performance measures to take operational decisions to meet and exceed service-level goals

As contact centres deploy multiple channels to provide an omnichannel experience in customer service, supervisors can view and track relevant metrics for operational efficiency in the following ways:

  • Equip team leads to monitor channel-specific performance metrics to handle agents who are dedicated to a single channel
  • Enable senior team leads and service delivery managers to monitor all-up metrics across all channels
  • Capability to quickly switch between the views

Historical topic clustering for all channels

GA – April 2021

Topics are automatically generated using AI to organize similar issues into groups. By aggregating metrics from issues grouped into the same topic, organizations get a full view of KPIs and metric impact for each topic. For example, organisations can view the average handling time, sentiment, and CSAT for a specific topic, and whether the topic is a key driver for any of those metrics.

Modern Administration Experience for Omnichannel Chat and Digital Messaging

GA – April 2021

With the modern administration experience, administrators can easily start the first chat conversation with only a few clicks and see the immediate value of chat conversation powered by Omnichannel for Customer Service. The modern administration experience is intuitive to follow and allows administrators to quickly understand and perform the configuration steps.

Introducing the first run experience to help administrators automatically set up the chat channel and start the first chat conversation. Also, introducing the modern administration experience to guide administrators to set up the end-to-end configurations in Omnichannel for Customer Service.
The key highlights of this feature include:

  • First run experience of chat channel
  • Streamlined and simplified administration user experience of work stream, queue, and global setting configurations for digital messaging channels

Omnichannel Voice Channel

At Ignite in September 2020, Microsoft announced the new Voice channel for Dynamics 365 Customer Service. The aim of the solution is to provide simpler administration & management experiences within the platform itself, rather than needing traditional cloud component integration complexities.

This release brings together voice, SMS, and digital messaging channels, a PVA-powered intelligent interactive voice response (IVR), real-time voice intelligence and insights across all channels, speech-based self-service, and intelligent skills-based routing, all in a single package.

This feature is currently in invite-only private preview, with general availability planned as part of the April 2021 wave

Call intelligence

GA – August 2021

The transcript of a call and an in-depth analysis of a particular call recording can help an organization better understand how the engagement with the customer progressed and present opportunities for agent training.

Through historical analytics, supervisors will be able to drill into a particular call to view more details. Each call will include voice-specific metrics such as talk-to-listen ratio, talking speed and more. Supervisors can also see the detailed sentiment throughout the call, shown alongside the transcript for further analysis. This view helps supervisors better understand how the call went and identify the areas to improve.

This capability leverages the call transcription and sentiment analysis to produce the following metrics:

  • Talking speed
  • Switches per hour
  • Pause before speaking
  • Longest customer monologue

Call Recording

GA – August 2021

Customer service agents typically need to review phone calls with customers. Call recording allows agents to record phone calls between agents and customers. This helps the organization to revisit the interaction to better understand the customer’s issues in his or her own words and increase the possibility of resolving the customer’s problems or questions. Call recordings are also helpful for training scenarios where an organization can share examples of great customer interactions among the team.

Call Transcription and Realtime Sentiment Analysis

GA – August 2021

Customer service agents often need to take notes while helping customers during a phone call. Call transcription converts a phone conversation into written words, reducing the number of notes an agent needs to take and helping with accessibility. Furthermore, sentiment analysis examines the conversation and identifies the general sentiment or “mood” of the customer, such as whether they are slightly angry or very disappointed. Call transcription and sentiment analysis are both used by the system to proactively analyse cases and provide agents with suggestions to resolve the issue.

Call transcription converts a phone conversation into written words and stores them as plain text in real time as the call is in progress. Sentiment analysis, built on award-winning AI, tags a sentiment at the top of a conversation, and is constantly updated as the conversation evolves.
Both call transcription and sentiment analysis are included out-of-the-box with no additional setup or configuration.

Consult and transfer

GA – August 2021

Omnichannel for Customer Service offers customer service agents the ability to easily consult with and transfer calls to other customer service representatives and helps agents have a greater chance to resolve customer issues.

While on a call with a customer, an agent can put the customer on hold and consult with another agent or manager on an issue that requires specific expertise. Agents can also transfer the call to a specific customer service agent, which is also referred to as a warm transfer. In other scenarios, the agent can transfer the call to a queue, from where it is routed to the best available agent based on rules configured by your business.

Direct outbound calling

GA – August 2021

The ability of agents to contact customers via voice calling remains one of the most important customer interaction methods in Customer Service. Direct outbound calling enables agents to contact customers using our native fully integrated voice channel based on Azure Communication Services, where voice is just another channel for agents and supervisors.

Agents can contact customers using voice calling. Direct outbound calls can be initiated via click-to-call directly from phone number fields in the following:

  • Cases
  • Customer profiles
  • Call back activities
  • Ongoing chat conversations
  • Via a phone dialler

Outbound calls are displayed as conversations in conversation history contextually per case/customer and timelines. Supervisors can monitor outbound calls just like any other customer interaction.

This feature includes the following key highlights:

  • Fully integrated outbound voice channel without third party voice integration
  • Sample outbound voice channel configured automatically on voice channel provisioning.
  • Easy channel administration within the Omnichannel admin experience.
  • Outbound voice conversations are just another conversation type in Omnichannel.
  • Supervisors can monitor outbound calls from within the ongoing conversations dashboard like any other agent/customer interaction.

Embedded analytics for voice channel

GA – August 2021

Traditional dashboards have limited interactive capabilities and provide a narrow view into the overall organization. With historical data, embedded analytics for voice channel empowers service managers to identify problem areas and opportunities to improve and provides rich slice and dice capabilities powered by Power BI.

Customer service managers or supervisors are responsible for managing the agents who work to resolve customer queries every day through phone channel. With this release, the embedded analytics provide trends over a period to understand how agents and queues are performing, so that service managers can take corrective measures, provide appropriate guidance to agents, and improve the customer support experience. Key Insights cards provide an “at a glance” view into notable trends on core metrics and topics that are important for a supervisor to investigate further in the comprehensive reports. Agent-focused views display core metrics to better understand the primary areas an agent worked in and identify opportunities for coaching.

With these views, supervisors can:

  • Monitor operational metrics, such as inbound calls, calls handled, abandon rate, average talk time, and average speed to answer calls, across channels, queues, agents, and topics.
  • Monitor support quality through sentiment analysis across channels, queues, agents, and topics.

Intelligent voice via PVA and Azure Bot Framework

GA – August 2021

With speech-enabled Power Virtual Agents, businesses can empower business users to build and update intelligent voice bots that use built-in natural language processing capabilities to engage conversationally with customers and provide personalized self-service at all times. Bots can be built once and deployed across messaging and voice channels for maximum efficiency and consistency. For more advanced scenarios, businesses can integrate bots built with the Microsoft Bot Framework to work on the voice channel.

With this feature, businesses have a familiar bot authoring experience for all customer service bots, across messaging and voice. Customers will enjoy flexible, free-form service experiences, instead of inflexible menu trees. Bots can easily hand off the call to human agents, with the conversation history and context gathered by the bot. This allows Omnichannel for Customer Service to route the customer from the bot to the best available live agent to provide a seamless, contextual hand-off.
The key highlights of this feature include:

  • Enable Power Virtual Agents and Azure Bot Framework bots to provide intelligent voice bots on the voice channel
  • Support for built-in dual-tone multi-frequency (DTMF) as a secondary means to interact with the bot
  • Transfer calls to human agents with full transcript and context
  • Use bots for post-call surveys

Modern Administration Experience for Omnichannel Voice (Number Management)

GA – August 2021

Typically, customer service organizations must manually integrate standalone telephony and customer relationship management (CRM) solutions, which results in fragmented experiences and error-prone data integration. Administrators need to manage resources and phone numbers in the telephony provider’s app and manually bring over this information to the CRM solution. Very often, this setup process requires collaboration between business and IT administrators, adding delay to an already lengthy process. With the availability of Azure Communication Services, Omnichannel for Customer Service now offers native voice channel. This all-in-one solution empowers business administrators to independently deploy a telephony resource and acquire phone numbers in a few steps, offering a fast and consistent experience.

Until now, administrators created resources and managed phone numbers in a separate telephony application and then manually deployed the numbers in the CRM solution. The long-fragmented process is inconsistent and requires continuous maintenance to keep both applications in sync.
With the native voice channel, business administrators can deploy the telephony resource and acquire phone numbers without leaving the Omnichannel Administration app.

The key highlights of this feature include:

  • Telephony resource deployment using connection string or sign into Azure account.
  • Acquiring phone numbers of various types and plans.
  • Releasing phone numbers.

Modern Administration for Omnichannel SMS via ACS (Number Management)

GA – August 2021

Typically, customer service organizations must manually integrate standalone telephony and CRM solutions, resulting in fragmented experiences and error-prone manual data integration. Administrators need to manage resources and phone numbers in the telephony provider’s app and manually bring over this information to the CRM solution. Very often, this setup process requires collaboration between business and IT administrators, adding more delay to an already long process. With the availability of Azure Communication Services, Omnichannel for Customer Service now offers a native voice channel. This all-in-one solution empowers business administrators to independently deploy a telephony resource and acquire phone numbers in a few steps, offering a fast and consistent experience.

Until now, administrators created resources and managed phone numbers in a separate telephony application and then manually deployed the numbers in the CRM solution. The long-fragmented process is inconsistent and requires continuous maintenance to keep both applications in sync.
With the native voice channel, business administrators can deploy the telephony resource and acquire phone numbers without leaving the Omnichannel Administration app.

The key highlights of this feature include:

  • Telephony resource deployment using connection string or sign into Azure account.
  • Acquiring phone numbers of various types and plans.
  • Releasing phone numbers.

Supervisor monitoring and barge

GA – August 2021

Service managers are responsible for the overall quality of customer service and often need to observe customer service representatives while they are on the phone with customers. Omnichannel for Customer Service allows supervisors to listen in on phone conversations and join a conversation, if needed. This helps supervisors increase the likelihood of resolving customer issues, enforce proper business practices, and identify training opportunities.

When supervisors log into the application, they are provided a list of phone calls that are in progress. From the list, they can choose to join a call with the option to join anonymously as a hidden participant. If they want to intervene, they can join the call, referred to as “barging”, which then becomes a group call.

Topic Clustering for Voice

GA – August 2021

Topics are automatically generated using AI to organize similar issues into groups. By aggregating metrics from issues grouped into the same topic, organizations get a full view of KPIs and metric impact for each topic. For example, organizations can view the average handling time, sentiment, and CSAT for a specific topic, and whether the topic is a key driver for any of those metrics.

Topics, which represent semantically similar support issues, help organizations better identify and respond to issues their customers are facing. Correlating these topics along with core historical analytics makes it quick and easy for a supervisor to see common issues by volume, CSAT impact and new cases, helping to identify where they should invest their time.
In this release, the same capability will now be applied to the voice channel, generating topics from the transcript. This will help organizations better understand issues that customers face and their impact on core business metrics across the spectrum of engagement.

I’m really quite excited to see how the new Voice channel will be received, as I think it’s a great feature addition to the overall tools available. It will be interesting to see how clients may choose to use it over their existing voice channel setup.

I’ll be looking deeper into the different functionalities, and will share them here. If there’s anything you think would be helpful to focus on, drop a comment & let me know!

Managed Solutions, & replacing a field

Well to start with, I’m sure that I’m going to get pulled up by some people for my use of the word ‘field’ in the title. After all, officially it’s now a ‘column’! But I (still) can’t let go of calling them fields, having done so for over a decade, so field it is.

Now to the actual topic of this blog post, which is centred around Managed Solutions. Leaving aside the whole debate about whether we should be using managed or unmanaged solutions (& when/where to do each), there is one definitive benefit of using a managed solution.

See, unmanaged solutions are additive in nature. Work is done in the development environment, then deployed. Further work is done (additional items added, etc), and deployed, and they then appear in the downstream environments. However, if you delete an item in the development environment, it’s not removed when the solution is deployed downstream.

Managed solutions, on the other hand, are both additive & detractive. As with unmanaged solutions, items added in the development environment are also added downstream when deployed. However, if an item is removed from the solution in the development environment, it will also be removed when the solution is deployed downstream. It’s one of the useful ways to ensure that you don’t end up with random unused items just lying around in Production (which have a habit then of popping up in the Advanced Find window, for example). So it’s really quite handy for a lot of reasons to go down this route.

Well, I found myself going down this route recently, but with slightly unexpected results, I’ll freely admit…

The scenario was that we had deployed a managed solution to the UAT (test) environment on a client project. Then the client changed their mind (shock & horror!!) as to a specific item, and we needed to change it from a text item to a lookup item. Obviously (as per best practice, of course) this would need to be done in the development environment, and then released downstream. Given that this is a managed solution, I’d expect this to work, without any issues. Well, it didn’t…

The change in the development environment (deleted the old item, ‘re-created’ it as a lookup with the same system name) was done, we exported it as managed, and then went to import it in the UAT environment. It took the solution file, thought about it for a while (it’s somewhat of a large solution), & then errored:

Exception type: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault] Message: Attribute mdm_field is a String, but a Lookup type was specified.

Now I was somewhat confused by this message occurring. It’s not been the first time I’ve seen it over the years, but in my previous experience I’ve seen it when handling unmanaged solutions. It’s when you delete an item in the development environment, re-create it as a different item type (with the same underlying system name), and then deploy it as unmanaged. The solution import in the second environment fails due to the difference in the type (as it sees the same name). This, of course, is to be expected.

But here we’ve been using managed solutions for deployment, and as mentioned above, they’re detractive as well. The expected behaviour (at least from my side of things) would be that the system would note that the item type has changed, remove the old item, & import the new item. In my mind, that’s logical, but apparently not?

See, even managed solutions have their limitations, of which this is one. Having checked with several other people who I reached out to around this, I’ve discovered that it can’t work in the way that I was expecting it to. Instead, a specific process has to be followed:

  1. In the development environment, remove the item, & export the solution as managed
  2. In the downstream environment(s), deploy this (interim) managed solution. This will remove the item from the environments
  3. In the development environment, re-create the item with the different system type. Then export it as managed
  4. In the downstream environments, deploy this solution. This will then add the item (with the new system type) into the environment.

This means that development & deployment teams (if separate ones) need to co-ordinate around this, to ensure it’s done in the right way. It could also be developed/exported in succession, and then imported in succession as well (either manually, or through an Azure DevOps Pipeline, for example).

This worked wonderfully for us, and to be honest, I was quite relieved after several hours of frustration with things. Even better, it was a Friday, so meant that the week could end well!

Have you ever come across this, and been frustrated as well? Have you got a similar story with something else that happened to you around solutions? Drop a comment below – I’d love to hear!

Customising Case Resolutions

Well, the title is a bit of a mouthful, I’ll admit. Hopefully though this brings some good information, and can help people out.

Cases are wonderful things, and can be used for tracking client interactions, compliments/complaints, and so many other things. What cases do have is the ability to resolve them, and provide information around the resolution.

Now, the standard way of doing this provides the following screen:

There’s the ability to set the Resolution Type (being a dropdown, aka Choice, field), & putting in free text for the Resolution itself (allowing us to track information around it). There are also time fields, which can be used for working out the time spent, as well as any time that’s going to be chargeable.

Now when going in to modify these, we’d think to open up the Case Resolution table. However, this isn’t actually the right place to do it. Instead, we need to update the Case table itself, as the Case Resolution items come from the Case Status field!

Somewhat annoyingly, it’s not possible to do this through the new ‘Maker’ interface:

In order to actually handle this, we need to switch across to the Classic editor to set this up. This could be because it’s actually a situation of having both parent & child entries. What I mean by this is that there’s the actual status (being Active, Resolved or Cancelled), and then a reason under each one. Hopefully at some point it’ll be updated into the new UI, so that we can do it from there.

We’ll need to change the Status item to ‘Resolved’, & can then add in the options that we want:

After adding them, we need to save & publish, and then they’ll show up for us, and are able to be selected:

So that’s great – we’re able to customise it. But what if we’re wanting to customise the actual ‘Resolve Case’ form itself? Not everyone wants to show Time/Billable Time on it (quite a few of our clients ask us to remove it), and perhaps they want to add additional custom fields.

So from the usual perspective of doing this, we’d open up the Case Resolution table, create new fields as required, and modify the existing form (we’re not able to create any other forms for this specific table). After all, this is how we’d do it for any table in the system (whether a standard one, or a custom one). This is going to be the Main form, rather than the QuickCreate one:

We save & publish it, and then would open up a Case record, click ‘Resolve Case’, and expect to see it. However, that doesn’t happen, which has been most puzzling to me!

It turns out that there are two things needed to be done in order to get to see our ‘custom’ form (though it’s not really custom, as it’s modifying the default form, but whatever).

  1. We need to modify security permissions for users, which is a critical requirement. An example of this is shown below:
Security Role: Customer Service Representative

  2. We need to enable customisable dialogues. Yes, it’s a setting that needs to be updated in order for users to see the custom layout of the form. If we don’t do this, they’re shown the default form, even though we’ve modified it! Seems a little strange that the system seems to have this concept of a ‘shadow’ form, but I guess that’s how it is.

To do this, we need to go into the Service Management settings area. I usually launch this through the Customer Service Hub app, though it’s available through several of the other standard apps as well:

Once there, we need to click into the Service Configuration menu item, and then change the ‘Resolve Case Dialogue’ option as shown below:

Remember to click the ‘Save’ button to save this.

Finally we can go back to our Case record, click ‘Resolve Case’, and look what appears!

So in summary, it’s definitely possible to modify & change the way that Case resolutions work in the system. It does take a little bit of fiddling around with settings in different areas, which can be confusing if we’re not used to this, but can give a great result in the end.

Have you ever come across this, and wondered how to do it? Have you developed Case Resolutions any further? Drop a comment below – I’d love to hear!

Record security with Power Automate

Today’s post is around record security, and how Power Automate can really be quite useful with this!

Let’s take a quick recap of how security works (which is applicable to both Dynamics 365, as well as Power Platform apps). We have the following:

  • Security roles, which are set up with specific privileges (Create/Read/Update/Delete etc) across each entity table, as well as for other system permissions
  • Users, who can have one (or more) security roles applied to them (security roles being additive in nature)
  • Teams, who can have one (or more) security roles applied to them. Users are added into the team, and inherit all permissions that the team has (much easier than applying multiple roles on a ‘per user’ basis)

That’s great for general security setup, but it does take a system admin to get it handled. Alternatively, of course, it’s possible to use AAD Security Groups which are connected to security teams within Power Platform, and users added to them will inherit the necessary permissions.

But what if we want to allow users who aren’t system administrators to allow other users access to the records? Well, it’s also possible to share a specific record with another user – doing this allows the second user to see/access the record, even if they usually wouldn’t be able to do so. This is really great, but does require a manual approach (in that each record would need to be opened, shared with the other user/s, and then closed).

I’ve been working on a project recently where we have the need to share/un-share a larger number of records, but with a different user for each record. We’ve been looking into different ways of doing this, and obviously Power Automate came into mind! We didn’t want to use code for this, for a variety of reasons.

The scenario we had in mind was to have a lookup to the User record; on populating this with a user, the record would then be shared with them. This would be great, as we could bulk-update records as needed (even from an integration perspective), and hopefully all would work well.

So with that, I started to investigate what options could be available. Unfortunately, there didn’t seem to be any out of the box connectors/actions that could be used for this, which was quite disheartening.

My next move was to look at the user forums, & see if anyone had done anything similar. I was absolutely excited to come across a series of responses from Chad Althaus around this exact subject! It turns out that there’s something called ‘Unbound Actions’, which is perfect for the scenario that we’re trying to achieve.

There are two types of actions available within Power Automate:

  • Bound actions. These are actions that target a single entity table or a set of records for a single entity table
  • Unbound actions. These aren’t bound to an entity type and are called as static operations. They can be used in different ways

There are quite a lot of unbound actions available to use:

The one I’m interested in for this scenario is the GrantAccess action. More information around this can be found at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/web-api/grantaccess?view=dynamics-ce-odata-9

It does require some JSON input, but when formatted correctly, it shows along the following lines:
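(The sketch below is illustrative rather than a copy of the original screenshot: it follows the shape described in the GrantAccess documentation, and the account table and GUIDs used are just placeholders – swap in your own table, record and user. The exact nesting may look slightly different depending on how the action is surfaced in your flow, and the documented AccessMask values are names such as ReadAccess, WriteAccess and ShareAccess.)

    {
      "Target": {
        "@odata.type": "Microsoft.Dynamics.CRM.account",
        "accountid": "00000000-0000-0000-0000-000000000001"
      },
      "PrincipalAccess": {
        "Principal": {
          "@odata.type": "Microsoft.Dynamics.CRM.systemuser",
          "systemuserid": "00000000-0000-0000-0000-000000000002"
        },
        "AccessMask": "ReadAccess"
      }
    }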

The different parts of this work as follows:

  • Target is the actual record we’re wanting to apply the action to
  • SystemUserID is the actual system user, and we also need to specify the odatatype
  • AccessMask is what we’re wanting to do when sharing the record (as there are different options available for sharing, ie ReadOnly, Edit, ShareOnwards, etc)

Using this, we’ve therefore built out the following scenario:

  1. Field added to the record, looking up to Users
  2. Relevant users who are able to access the record can set this lookup field to be a specific user record (who doesn’t have access to this record)
  3. Power Automate flow fires on the update of the record when it’s saved (filtering on just this attribute), sharing the record with the selected user
  4. The user then gets an email to notify them that the record has been shared with them, with a URL link to it (it’s somewhat annoying that there’s no built-in system notification when a record has been shared with you, but I guess that’s something we’re having to live with!)
  5. They can then go in & access the record as they need to

We’ve also given some thought to general record security, and have additionally implemented the following as well:

  1. If the user lookup value is changed, we obviously share the record with the new user that’s been saved to it
  2. Using a different Unbound Action (RevokeAccess), we remove the sharing of the record with the previous user (we have another field that’s being updated with the value of it, which we’re using to pass into the action, as otherwise we don’t actually know who the previous user was!) – a sketch of this is shown below as well
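(As with GrantAccess, this is only an illustrative sketch with placeholder table, record and user values, rather than the exact flow step: RevokeAccess takes the record in question (Target) and the user whose access should be removed (Revokee).)

    {
      "Target": {
        "@odata.type": "Microsoft.Dynamics.CRM.account",
        "accountid": "00000000-0000-0000-0000-000000000001"
      },
      "Revokee": {
        "@odata.type": "Microsoft.Dynamics.CRM.systemuser",
        "systemuserid": "00000000-0000-0000-0000-000000000002"
      }
    }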

All in all, we’re quite happy that we’ve managed to come up with this solution, which is working splendidly for us. Also, major thanks to Chad for his assistance in getting the syntax correct!

Have you ever needed to do something like this? Did you manage to implement it in some way? Drop a comment below – I’d love to hear how your experience was!

‘Ghost’ lookup value following deployment

This is something that stumped me fairly recently. It’s also something where I was trying to work out what I should use as the title for this post! Let me share what happened.

I’m working on a project that’s quite critical (COVID-19 related). It’s a project where we’ve built something around Dynamics 365 as an additional wrapper, to provide specific functionality for the pandemic. It’s being rolled out (the same solution) to multiple clients, and is only using the functionality from Power Platform. No custom code at all.

Now, before going into the specifics around it, let’s take a moment to revisit what a lookup field is, and what it does. Essentially a lookup field connects two tables together (wow – that felt strange not to use the word ‘entity’!). In the front interface, it’s used for a 1:N relationship.

So for example, we can have a lookup from Account to Contact, to set the primary contact for the account. The user navigates to the field, searches for the record they’re wanting to associate, and saves it.

Underneath, there’s a relationship that’s automatically created between the two tables, showing the way that the relationship will go (ie 1:N or N:1). This is created on both sides (more on that another time around dependencies), and most people will never need to modify it

When I first started with this particular project, I got the solution, and deployed it into the Dev environment (for the project that I was on). On testing it out, I found something very interesting. We’re using the Case (Incident) table, and there are various lookup fields on it. One of these was already populated with a value. Hmm – that’s interesting, I thought. It was a new deployment, and we hadn’t set any static data up yet at all. So how could it already be populated?

How is this being set, when I’ve not entered it into the system as a record…

Furthermore, I was unable to save the Case record. When I tried to, I was getting an interesting error:

On drilling down into the error log (which admittedly is actually getting better in the details shown in it, thankfully!), it turned out to be because I didn’t have access to the referenced record (in the lookup field). It just didn’t exist.

So the lookup field value was coming in with a hard-coded GUID (record identifier). But how was this being done, especially if there weren’t any records (of that type) in the system at all?

From my experience of things, I could think of two ways in which to populate a lookup field with a hard-coded value:

  • Through a ‘real-time’ Power Automate flow, on create of the record. It’s possible to set a GUID value in the flow, and then it would be set
  • Through custom code, running on the form. Again, it’s possible to hard-code a GUID there, and then set the field

However, on checking both options, neither of them was happening. No Power Automate flows touching the Case record, and no custom code at all on the Case.

It was then, digging through the other parts of the solution, that I saw various Business Rules. For those unfamiliar with these, I’ll quote from the official Microsoft documentation around them:

By combining conditions and actions, you can do any of the following with business rules:

  • Set column values
  • Clear column values
  • Set column requirement levels
  • Show or hide columns
  • Enable or disable columns
  • Validate data and show error messages
  • Create business recommendations based on business intelligence.

I’ve used Business Rules (somewhat extensively) before. However on going into the one for the Case table, I found that something was happening that I wasn’t aware could happen! It’s actually possible to set a lookup field value through it:

I spy a lookup option

Even though we’ve deployed the solution from the original development environment to a different environment, this is still set. But there are no records that are available:

I had never thought that it would be possible – to set a static value (eg a number, or some text), fine. But to set referential data? Wow.

Obviously this can be quite helpful. The bit where it’s NOT helpful, though, is when deploying the solution to another environment (as in this situation). It doesn’t help if you re-create the record that it’s referring to using the same record name, as it’s using the underlying GUID (which you can’t re-create). This really does take solution deployment into a whole new perspective, where you need to be careful around these sorts of things as well.

So something new that I’ve learned (I do try to learn something new each day), and specifically around an area I thought I knew quite well. It did take some time, but I’m glad that I (finally) found the root cause of it, and identified what was causing it.

Have you ever had something like this happen, where you’re searching & searching for the cause of it? Drop a line below – I’d love to hear!

Personalised Sound Notifications for Omnichannel

One of the themes running through the Wave 2 2020 update for Omnichannel is the personalisation aspect. Though systems work just fine on their own, it’s always nice to add a ‘personal touch’ to the parts that we can. Last week I shared how quick replies are now able to be personalised (Personalised Quick Replies). This week I’m going to go into how the sound notifications can be personalised as well!

These seem to be just small little features, but in my view they do bring things to the next level. Examples of this are the following:

  • If a customer session starts, wanting to know which channel it’s come in through, without needing to open the conversation
  • Many agents in a contact centre – if everyone is using the same sound, no-one knows if it’s their computer or not!
  • The difference between a new conversation starting, and a new message being received on an existing conversation
  • Wanting to ensure that sound volumes aren’t too high, else they’ll disturb other people.

All of these are extremely valid scenarios, along with other ones (such as disabling sound entirely, for example!). Though this seems simple to implement, and isn’t very difficult to set up, there’s a lot of flexibility involved. I’m therefore really happy that this is now available to be used.

So, let’s see how to go about setting it up. There are two parts to this – the Omnichannel Administrator side, and what the Agent can then do

Omnichannel Administrator

In the Omnichannel Administrator Hub, the administrator should open the Notifications section, and go to the Sound Notification Settings tab:

There’s a single setting there, to toggle sound notifications on or off. Setting it to ‘Yes’ will then show the following section on the screen:

Once it’s enabled, there are then a number of system default options that are automatically loaded. Here the administrator can do the following tasks:

  • Choose to allow sounds to be played at a per channel level
  • Change the system default sound notification (more on loading in custom sounds below)
  • Allow the sound notification to be repeated until the call is answered
  • Set the maximum volume allowed for the sound (this is a lovely slider control!)

There are of course sound files that come included in the system by default. But what if we’re wanting to upload custom sound files to be used? Well, that’s not a problem. Simply by clicking in the lookup field to select a sound file, we are given the option to upload a new audio file:

Clicking this brings up the Audio File record, which we use to upload. We need to give it a name & save it, and then we’re given the ability to upload the file itself:

Note: There are specific file types that need to be used, with a maximum file size of 1MB. It does say that for the best experience, the OGG file format should be used. There are plenty of free resources out there to download OGG files, or to convert MP3 files to the OGG file format if you need to.
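As an illustrative example (assuming you have ffmpeg installed with libvorbis support – plenty of other converters will do the same job), converting an MP3 file to OGG can be done with a single command; the file names here are just placeholders for whatever your custom sound is actually called:

    # convert an MP3 notification sound into an OGG (Vorbis) file
    ffmpeg -i notification.mp3 -c:a libvorbis notification.ogg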

Once we’ve uploaded the file, we get presented with a mini player to hear how it sounds. This is really cool!

All of the audio files in the system (both default & custom) are then available for agents to personalise their own experience

Note: If a company wants to upload many different custom audio files, it may be easier to add the Audio Files entity to the sitemap, and then perform this function from there

Note: To prevent agents from uploading their own audio files directly, the Omnichannel Agent security role only allows Read access, not Create/Edit access:

Omnichannel Agent

With the initial system setup performed by the Omnichannel Administrator, agents are then free to go ahead & personalise their own experience. This is done directly within the Omnichannel for Customer Service app, by selecting ‘Personalisation’ from the available menu:

Once this is selected, the agent is presented with a very similar interface to the Omnichannel Administrator:

Here the agent can change the system default for themselves (this does not affect any other Omnichannel users), change the various settings, modify the volume levels, etc.

Once saved, it’s then live & active, and will work as desired.

Incoming message alerts for active sessions

At the bottom of the sound notification settings screen, there is one further setting. This is around the behaviour of sounds for existing conversations:

This can be helpful (either from an overall system perspective, or an individual agent perspective) to either allow or turn off sounds from conversations that are already happening. Some people might find it very annoying that every time a customer sends a new message through, the system plays a sound. This is especially true when dealing with multiple conversations (which, after all, is what Omnichannel is all about!)

In summary, it’s a really good feature to have now at our convenience to use. Obviously I’d suggest not to load rock music into it, for example, unless of course your company specialises in rock music! How do you think this would be beneficial to your users? Drop a comment below – I’d love to hear!

Personalised Quick Replies

One of the things that customer service agents absolutely HATE is having to type full replies to customers. There are many things that they’ll do which are quite repetitive, and having to type the same response each & every time gets frustrating to say the least.

As I’ve covered previously at Quick Responses in Omnichannel, Omnichannel has the ability for Quick Replies. With these, agents are able to select the response that they’re wanting to use, and quickly populate it into the chat session that they’re having.

It’s also possible, using ‘slugs’, to set up responses that will automatically populate with specific pieces of information in the system. For example, something like ‘Good morning, my name is {Agent Name}, how may I assist you?’ will automatically populate the name of the agent into the chat session.

This is great; the main drawback to date has been that Omnichannel administrators are required to set these up, as well as maintain them. That’s not so great, when you consider that agents might want to personalise their responses as well. To date, that’s not able to be done within the system.

However, with Wave 2 2020, it’s now possible to allow agents to create their own quick replies, to be able to be used within chat sessions. It’s also not particularly difficult to go about getting this into the system, as we’ll see below.

The Omnichannel Administrator simply needs to go to the Personal Quick Replies section, and change the toggle to ‘Yes’, then save. This will enable personal quick replies for agents simply & swiftly.

Once the system setting has been set, and is active (it can take a few minutes to refresh through), agents are then able to start setting up their own quick replies.

To do this, agents will need to be in the Omnichannel for Customer Service app, and select the Personalisation option from the drop-down menu:

This will then open the agent personalisation tab, which has several different sections on it. The first one is the one that we’re interested in – Personal Quick Replies:

This lists any personal quick replies that have already been set up by the agent, as well as giving the option to create further ones to use:

Clicking this option brings up the familiar interface to set this up:

Note: Personal quick replies aren’t localised in Omnichannel. That’s why you need to select a Locale for the record. To be able to provide the quick response in multiple languages, create a specific response for each language, and select the locale that’s appropriate for it

Once the record is saved, it’s then possible to add tag/s to it for referencing:

Note: If you want to use the hash character (#), you can only use it at the beginning of the tag, not anywhere else in it

Once these have saved, they’re then available to be selected from the chat by the agent. The chat interface will show both system & personal quick replies. Typing ‘/q’ into the chat window will bring these up:

We can select the tab at the top to show just the personal quick replies that the agent has set up:

Alternatively, if the agent starts searching with text, they can easily distinguish between system & personal quick replies by looking at the icon against each one. System replies have a globe-style icon, whereas personal replies have a person-style icon:

So in summary, I think that this is a really great feature to add onto the original way of quick replies working. It’ll free up time for the Omnichannel Administrators, and allow agents to put their own responses in that they need. It’s also possible to share this using the OOB record sharing functionality, which means that a team lead can set them up, and then share them with the rest of the team!

How do you think this could enable or help you? Drop a comment below – I’d love to hear!

Data Export Service Connection Issues

This is a slightly different post from the usual stuff that I talk about. It’s much more ‘techy/developer’ focused, but I thought it would still be quite useful for people to keep in mind.

The background to this comes from a project that I’ve been working on with some colleagues. Part of the project involves setting up an Azure SQL database, and replicating CDS data to it. Why, I hear you ask? Well, there are some downstream systems that may be heavy users of the data, and as we well know, CDS isn’t specifically built to handle a large number of queries against it. In fact, if you start hammering the CDS layer, Microsoft is likely to reach out to ask what exactly you’re trying to do!

Therefore (as most people would do), we’re putting in database layer/s within Azure to handle the volume of data requests that we’re expecting to occur.

So with setting up things like databases, we need to create the name for them, along with access credentials. All regular ‘run of the mill’ stuff – no surprises there. To ensure adequate security, we usually use one of a handful of password generators that we keep to hand. These have many advantages, such as ensuring that it’s not something we (as humans) are dreaming up, which might be easier to guess. I’ve used password generators over the years for many different professional & personal projects, and they really are quite good overall.

Example of a password generation tool

Once we had the credentials & everything set up, we then logged in (using SQL Server Management Studio), and all was good. Everything that we needed was in place, and it was looking superb (from the front end, at least).

OK – on to getting the data actually loaded in. To do this, we’re using the Data Export Service (see https://docs.microsoft.com/en-us/power-platform/admin/replicate-data-microsoft-azure-sql-database for further information around this). The reason for using this is that the Data Export Service intelligently synchronises the entire database initially, and thereafter synchronises on a continuous basis as changes occur (delta changes) in the system. This is really good, and means we don’t need to build anything custom to handle it. Wonderful!

Setting up the Data Export Service takes a little bit of time. I’m not going to go into the details of how to set it up – instead there’s a wonderful walkthrough by the AMAZING Scott Durow at http://develop1.net/public/post/2016/12/09/Dynamic365-Data-Export-Service. Go take a look at it if you’re needing to find out how to do it.

So we were going through the process. Part of this is needing to copy the Azure connection string into a script that you run. When you do this, you need to re-insert the password (as Azure doesn’t include it in the string). For our purposes (as we had generated this), we copied/pasted the password, and ran things.

However all we were getting was a red star, and the error message ‘Unable to validate profile’.

As you’d expect, this was HIGHLY frustrating. We started to dig down to see what actual error log/s were available (with hopefully more information on them), but didn’t make much progress there. We logged in through the front end again – yes, no problems there, all was working fine. Back to the Export Service & scripts, but again the error. As you can imagine, we weren’t very positive about this, and were really trying to find out what could possibly be causing this. Was it a system error? Was there something that we had forgotten to do, somewhere, during the initial setup process?

It’s at these sorts of times that self-doubt can start to creep in. Did we miss something small & minor, but that was actually really important? We went over the deployment steps again & again. Each time, we couldn’t find anything that we had missed out. It was getting absolutely exasperating!

Finally, after much trial & error, we narrowed the issue down to one source. It’s something we hadn’t really expected, but had indeed caused all of this to happen!

What happened was that the password that we had auto-generated had a semi-colon (‘;’) in it. In & of itself, that’s not an issue (usually). As we had seen, we were able to log into SSMS (the ‘front-end’) successfully, with no issues at all.

However when put into code, the semi-colon is treated as a special character – it’s the separator between the key=value pairs in the connection string. The password was therefore being cut short, and the entire thing failed as a result! To resolve this was simple – we regenerated the password to ensure that it didn’t include a semi-colon character within it!
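To illustrate (with entirely made-up values), a connection string is parsed as semi-colon-separated key=value pairs, so a password containing a semi-colon gets truncated at that point:

    Server=tcp:myserver.database.windows.net,1433;Initial Catalog=MyDatabase;User ID=myuser;Password=Xy7;pQ9!rT2

Everything after the semi-colon in the password is treated as the start of a new key, so the profile can’t validate. In theory the value can be wrapped in quotes to escape it, but regenerating the password without a semi-colon was the simpler fix for us.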

Now, this is indeed something that’s quite simple, and should be at the core of programming knowledge. Most password generators will have an option to avoid this happening, but not all password generators have this. Unfortunately we had fallen subject to this, but thankfully all was resolved in the end.

The setup then carried on successfully, and we were able (after all of the effort above) to achieve what we had set out to do initially.

Have you ever had a similar issue? Either with passwords, or where something worked through a front-end system, but not in code? Drop a comment below – I’d love to hear!

Marketing & an unusual error

I’ll be the first to admit that I have limited experience of Dynamics 365 for Marketing. In fact, I think that it would be stretching the description to say that I have even ‘limited experience’! I’ve seen it once or twice, and have attended a few presentations on it, but apart from that, nada.

I do remember what it used to be like in its previous incarnation, but even then I didn’t really touch it. Customer Service (& Sales) are my forte, and I generally stick within those walls. Marketing traditionally was its own individual application, and only more recently has been rolled into the wider Dynamics 365 application suite. Even so, it still sometimes works in a somewhat interesting way, different from the rest of the system.

Inevitably I’ve had to actually do something with it for a client project, which has brought me to putting up this post. We had created a few marketing forms, surfaced them correctly, etc. It was great, and working well.

Then we realised that we needed to capture some additional information, in this case a list of Countries. There’s no standard entity for it within Dynamics 365, so we created our own, and loaded a list of countries (& associated data) into it. Fine – that was working without issues, including in the places that we needed to surface it.

Then we came to needing to surface the Country value on a marketing form, through a lookup. Simple, you’d have thought? Well, not so much. We went to create the field, and got presented with the following error as we did so:

The error says: ‘The role marketing services user does not have access to the entities you’ve chosen…’

In essence, the system was telling us that we weren’t able to access the entity. Though Country is a custom entity, we were logged in as users with the System Administrator role (which has access automatically to ALL entities). This left us puzzling around what to do.

The error message, thankfully, was quite clear. It was referring to a specific security role missing privileges. In this case, it was the ‘Marketing Services User’. I therefore went to check the permissions for it, and sure enough, it didn’t have permissions on the Country entity that I had created!

Now usually if a security role is missing permissions, what we do is create a custom security role (usually copying the existing role), and add the permissions to that. Best practice is NOT to edit the default security roles. The (main) reason behind this is that Microsoft could update the security role in a later update/release, which could impact on us. We therefore use custom roles to avoid this happening (& yes, I’ve seen it happen/impact in practice!).

The fly in the soup here (lovely phrase, I know) is that we couldn’t do that here. It seems that Dynamics 365 for Marketing uses an underlying security role that’s needed. Even if we had implemented a custom role, we didn’t have any idea of how to tell the system to actually use our custom role, rather than the default one that it’s currently using. Quite frustrating, I tell you!

So in the end we decided to give the default security role the necessary permissions, and see what happened:

With having granted the security permissions to the role, & saved it, we then attempted to create the marketing form field. This time, we were successful! No errors occurred during it, thankfully:

So in summary, I still have no idea why this has happened. I’ve taken a look around, but can’t find anything obvious as to how/why it actually works like this. I guess that I’d need to dig ‘under the hood’ somewhat to see what’s actually going on, and how to deal with it appropriately. For the moment, the solution is in place, and is working.

We’ve also been very careful (as mentioned above) to add just the specific custom entity to the default security role. We haven’t touched anything else within it – all other security permissions are done (as per best practice) with custom security roles, which are then allocated appropriately to users &/or teams. Hopefully this will be fine in the long-term, though we’ll definitely be keeping our eyes on it to make sure!

Have you ever come across something like this? How did you decide to go about solving it? Drop a comment below – I’d love to hear!

Update: Thanks to the amazing Carl Cookson, it turns out that this is due to an update from Microsoft in how Marketing works. See https://docs.microsoft.com/en-gb/dynamics365/marketing/marketing-fields for more information around it. Essentially it uses this role to sync to the Azure staged Marketing service, so this role needs to have the appropriate permission

AI Translation for Omnichannel

How to start off this post? I’ve been trying to work out how exactly I can express my excitement around this new feature for Omnichannel. Included in the Wave 2 2020 release, it’s just AMAZING. That, however, doesn’t give it true justice. So let’s see how I can describe it properly to give it due respect.

Previously I’ve mentioned the ability to use skills within Omnichannel (see https://thecrm.ninja/omnichannel-for-dynamics-365-queues-users-skills/). This can be used to indicate, for example, agents who can communicate in a certain language. That’s useful of course, but what happens when you don’t have anyone who can speak the language that the customer wants to use? It’s a problem, and one that’s really not easily solved. At least, not until now.

So, what exactly does this new translation feature do? Simple – it translates from one language to another. OK, it’s actually a little more awesome than just that. Having delved into it quite a bit over the last week or so, there are (in my view) three main benefits (with a bonus one as well!):

  1. It translates incoming text from the customer (through chat) from the language that they’re using to the language that the agent is using
  2. It translates outgoing text from the agent (through chat) from the language that the agent is using to the language that the customer is using
  3. It translates text between agents from one language to the other & vice versa (eg on an internal consult)

Now for the bonus. It doesn’t just translate text from one language to another. It follows the languages being used! So if the customer switches in mid-conversation to a different language, the system picks it up. Not only is the new incoming language translated into the agent’s language, but the replies from the agent are shown in the (new) language being used by the customer. It’ll automatically show text in the ‘last used’ language, which is really quite incredible (at least in my opinion).

There’s no fiddling around with agents needing to select the language that they need, or anything else. It’s a simple click to turn it on, and then another click to turn it off. I’m going to go through the setup of it below, as there are a few fiddly bits that did confuse me for a bit.

It’s also possible to use different translation tools. At the time of writing this post, it’s possible to use Bing, Google or Azure translation models. I’m sure that there will be other options available in the future as well to use, which really opens up possibilities for clients with differing digital estates.

Translation happens in real time, so there’s no waiting around for it to actually get on with it. It’s displayed immediately on the screen for the agent to see.

Setup for translation

I found the general guides to be alright, but they weren’t too clear on a few items. I’m therefore sharing below how I went about it, in order to get things working properly. Please be aware that this isn’t in the order specified in the documentation, but in retrospect it means less switching between screens:

  1. Ensure that you have the latest updates to your Omnichannel environment (this is always a good idea, regardless of anything else!)
  2. Go to https://github.com/microsoft/Dynamics365-Apps-Samples/tree/master/customer-service/omnichannel/real-time-translation & download the ‘webResourceV2.js’ file there (if you’re unfamiliar with how to do this, click to open the file, click the ‘Raw’ button, and then save the page – ensure it’s got the ‘.js’ extension when you save it!)
  3. Ensure you have an API key to enter into the web resource file! This is what tripped me up at first. You can use any text editor (I use Notepad++) to open it up. How you get the API key will depend on the provider. For example, to set up a free account in Azure, take a look at https://docs.microsoft.com/en-us/azure/cognitive-services/translator/translator-how-to-signup. There are also some additional things that you can configure in the web resource file, but I’m not going to go into that here
  4. Go to your solutions (this can either be through the Classic interface, or through http://make.powerapps.com). You can either create a new solution to hold the web resource file, or alternatively if you have existing solutions that you’d deploy, you can add the web resource file to that. Either:
    1. In the classic interface, navigate to Web Resources, click to create a new web resource, and upload the file (ensure you select the type to be ‘Script (JScript)’), or
    2. In the modern interface, click the ‘New’ button, select ‘Web Resource’ from the ‘Other’ section, and then follow the steps above

Once it’s saved, it’ll give you a URL. Copy that, and publish the solution.

  5. Go to the Omnichannel Administration Hub, find ‘Real Time Translation’ under Settings, and set this to Yes. You can also select a default input language from the selection. Also enter the URL that you copied above. Save it.
  6. You’re all done!

Agent Experience

Depending on how you’ve configured your web resource, auto translation will either be on by default, or be off. If it’s not on by default, the agent can simply click within their chat window to select it to be active:

Once active, it’ll then start to translate everything, in both directions. Below are side by side screens of the customer & agent experiences. You’ll note that the customer is seeing the initial agent response in English, as the agent was the first in the conversation

From the agent side of things, both the original language, as well as the translated language, are shown. The customer is only shown the language that they’re actually using

If the agent isn’t sure what language the customer is using (as it’s being auto-translated for them), they can hover over the text, and it’ll show the details for it:

If the agent consults with, or transfers the session to, another agent, the second agent will see the conversation in the language that they themselves are using (with the original text as well). This allows for the possibility of passing a customer to a specialist to assist them, even if they don’t speak the same language! It’s really cool to see this in action.

Even more wonderfully, this is even stored down to the transcript level:

This is really opening up major new concepts that Omnichannel can be used for, which will be supported entirely by this feature. As I said at the beginning of this post, I’m absolutely excited for it, and we’re already envisioning how this will be able to empower our clients even more.

Do you have any questions around this? Can you think of any scenarios that this could solve for you? Drop a comment below – I’d love to hear!