Sharon Sumner on The Oops Factor

Finding out how Sharon loves being an early technology adopter, the challenges that it can bring, and her love of space rockets. Delving into details of what happened when an electric car ran out of battery.

If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!

Click here to take a look at the other videos that are available to watch.

AI Translation for Omnichannel

How to start off this post? I’ve been trying to work out how exactly I can express my excitement around this new feature for Omnichannel. Included in the 2020 Wave 2 release, it’s just AMAZING. That, however, doesn’t really do it justice. So let’s see how I can describe it properly, to give it due respect.

Previously I’ve mentioned the ability to use skills within Omnichannel (see https://thecrm.ninja/omnichannel-for-dynamics-365-queues-users-skills/). This can be used to indicate, for example, agents who can communicate in a certain language. That’s useful of course, but what happens when you don’t have anyone who can speak the language that the customer wants to use? It’s a problem, and one that’s really not easily solved. At least, not until now.

So, what exactly does this new translation feature do? Simple – it translates from one language to another. OK, it’s actually a little more awesome than just that. Having delved into it quite a bit over the last week or so, there are (in my view) three main benefits (with a bonus one as well!):

  1. It translates incoming text from the customer (through chat) from the language that they’re using to the language that the agent is using
  2. It translates outgoing text from the agent (through chat) from the language that the agent is using to the language that the customer is using
  3. It translates text between agents from one language to the other & vice versa (eg on an internal consult)

Now for the bonus. It doesn’t just translate text from one language to another. It follows the languages being used! So if the customer switches mid-conversation to a different language, the system picks it up. Not only is the new incoming language translated into the agent’s language, but the replies from the agent are shown in the (new) language being used by the customer. It’ll automatically show text in the ‘last used’ language, which is really quite incredible (at least in my opinion).
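To give an idea of what’s happening behind the scenes: translation services such as Azure Translator detect the source language on every message. Here’s an illustrative sketch of the underlying Translator v3 REST API (this is NOT the actual Omnichannel internals – the key & region values are placeholders for your own Azure resource):

```javascript
// Illustrative sketch only – placeholder key & region values.
const KEY = '<your-translator-key>';
const REGION = '<your-resource-region>'; // eg 'westeurope'

async function translate(text, targetLanguage) {
  const response = await fetch(
    `https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=${targetLanguage}`,
    {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': KEY,
        'Ocp-Apim-Subscription-Region': REGION,
        'Content-Type': 'application/json'
      },
      // Note: no 'from' parameter – the service works out the source language itself
      body: JSON.stringify([{ Text: text }])
    }
  );
  const [result] = await response.json();
  // detectedLanguage comes back on every call, which is what makes a
  // 'follow the last used language' experience possible
  console.log(result.detectedLanguage.language, '->', result.translations[0].text);
  return result;
}

translate('Bonjour, mon application ne fonctionne plus', 'en');
```

Omnichannel handles all of this for you, of course – the point is just that the language detection comes for free with each translation call.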

There’s no fiddling around with agents needing to select the language that’s required, or anything else. It’s a simple click to turn it on, and then another click to turn it off. I’m going to go through the setup of it below, as there are a few fiddly bits that did confuse me for a bit.

It’s also possible to use different translation tools. At the time of writing this post, it’s possible to use Bing, Google or Azure translation models. I’m sure that there will be other options available in the future as well to use, which really opens up possibilities for clients with differing digital estates.

Translation happens in real time, so there’s no waiting around for it to actually get on with it. It’s displayed immediately on the screen for the agent to see.

Setup for translation

I found the general guides to be alright, but they weren’t too clear on a few items. I’m therefore sharing below how I went about it, in order to get things working properly. Please be aware that this isn’t in the order specified in the documentation, but in retrospect it means less switching between screens:

  1. Ensure that you have the latest updates to your Omnichannel environment (this is always a good idea, regardless of anything else!)
  2. Go to https://github.com/microsoft/Dynamics365-Apps-Samples/tree/master/customer-service/omnichannel/real-time-translation & download the ‘webResourceV2.js’ file there (if you’re unfamiliar with how to do this, click to open the file, click the ‘Raw’ button, and then save the page – ensure it’s got the ‘.js’ extension when you save it!)
  3. Ensure you have an API key to enter into the web resource file! This is what tripped me up at first. You can use any text editor (I use Notepad++) to open it up. How you get the API key will depend on the provider. For example, to set up a free account in Azure, take a look at https://docs.microsoft.com/en-us/azure/cognitive-services/translator/translator-how-to-signup. There are also some additional things that you can configure in the web resource file, but I’m not going to go into that here (though see the quick key check after these steps)
  4. Go to your solutions (this can either be through the Classic interface, or through http://make.powerapps.com). You can either create a new solution to hold the web resource file, or alternatively, if you have an existing solution that you’d deploy it in, you can add the web resource file to that. Either:
    1. In the classic interface, navigate to Web Resources, click to create a new web resource, and upload the file (ensure you select the type to be ‘Script (JScript)’), or
    2. In the modern interface, click the ‘New’ button, select ‘Web Resource’ from the ‘Other’ section, and then follow the steps above

Once it’s saved, it’ll give you a URL. Copy that, and publish the solution.

  5. Go to the Omnichannel Administration Hub, find ‘Real Time Translation’ under Settings, and set this to Yes. You can also select a default input language from the selection. Also enter the URL that you copied above. Save it
  6. You’re all done!
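As an aside: before wiring the key into the web resource, it’s worth a quick sanity check that the key itself actually works. Here’s a minimal sketch of doing that against the Azure Translator ‘detect’ endpoint (this assumes the Azure provider – the key & region values are placeholders, and you’d adjust accordingly for Bing or Google):

```javascript
// Quick sanity check of an Azure Translator key (run with Node 18+, which has
// built-in fetch). The key & region values are placeholders.
const KEY = '<your-translator-key>';
const REGION = '<your-resource-region>';

async function checkKey() {
  const response = await fetch(
    'https://api.cognitive.microsofttranslator.com/detect?api-version=3.0',
    {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': KEY,
        'Ocp-Apim-Subscription-Region': REGION,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify([{ Text: 'Hallo, ik heb een vraag' }])
    }
  );
  if (!response.ok) {
    // A 401 here usually means the key or region is wrong
    throw new Error(`Key check failed with status ${response.status}`);
  }
  console.log(await response.json()); // eg [{ language: 'nl', score: 1, ... }]
}

checkKey();
```

If that returns a detected language rather than a 401, the key is good to go into the web resource.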

Agent Experience

Depending on how you’ve configured your web resource, auto-translation will either be on by default, or off. If it’s not on by default, the agent can simply click within their chat window to set it to active:

Once active, it’ll then start to translate everything, in both directions. Below are side-by-side screens of the customer & agent experiences. You’ll note that the customer is seeing the initial agent response in English, as the agent was the first to write in the conversation (so there was no customer language to follow yet).

From the agent side of things, both the original text and the translated version are shown. The customer is only shown the language that they’re actually using.

If the agent isn’t sure what language the customer is using (as it’s being auto-translated for them), they can hover over the text, and it’ll show the details for it:

If the agent consults with, or transfers the session to, another agent, the second agent will see the conversation in the language that they themselves are using (with the original text shown as well). This opens up the possibility of passing a customer over to a specialist to assist them, even if they don’t speak the same language! It’s really cool to see this in action.

Even more wonderfully, this is even stored down to the transcript level:

This is really opening up major new scenarios that Omnichannel can be used for, supported entirely by this feature. As I said at the beginning of this post, I’m absolutely excited about it, and we’re already envisioning how this will be able to empower our clients even more.

Do you have any questions around this? Can you think of any scenarios that this could solve for you? Drop a comment below – I’d love to hear!

Matt Beard on The Oops Factor

Chatting to Matt about how he got into poker to begin with in his university days (some decent studying of the theory!), & actually going to Vegas, as well as getting married there. Also touching on printing out lots of documents, and duplicate merging.

If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!

Click here to take a look at the other videos that are available to watch.

AI-900: Microsoft Azure AI Fundamentals

One of my recent decisions has been to explore the Azure space. There are several reasons behind this. CDS, as we (hopefully!) know, sits on top of Azure, and it’s useful to know the broader digital estate that’s available on the platform.

I’ve also been looking into some of the Cognitive Services functions that are available within Power Platform. These all live in Azure, and are surfaced into Power Apps etc. It’s therefore good to know what can be done outside of the ‘Power Platform bubble’, and the options there.
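To make that concrete, here’s a minimal sketch of calling one of these services (the Text Analytics sentiment API) directly over REST – the resource name & key below are placeholders for your own Azure resource. It’s exactly this sort of service that gets surfaced up through the low-code layers:

```javascript
// Minimal sketch of calling the Text Analytics v3.0 sentiment endpoint.
// The resource name & key are placeholders for your own Azure resource.
const ENDPOINT = 'https://<your-resource>.cognitiveservices.azure.com';
const KEY = '<your-text-analytics-key>';

async function getSentiment(text) {
  const response = await fetch(`${ENDPOINT}/text/analytics/v3.0/sentiment`, {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': KEY,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ documents: [{ id: '1', language: 'en', text }] })
  });
  const data = await response.json();
  // Each document comes back with an overall label plus per-class confidence scores
  console.log(data.documents[0].sentiment, data.documents[0].confidenceScores);
  return data;
}

getSentiment('The support agent was brilliant, thank you!');
```

AI Builder & the Power Platform connectors are essentially wrapping calls like this up for you.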

Incidentally, a year ago I even built a canvas app that allowed you to take a picture of a motorbike tyre. Using AI Builder functionality, it then analysed if the tyre tread was legal or not! That was a really cool proof of concept.

So a good place to start, I thought, would be with the AI-900. This covers the fundamentals of the AI offerings that are in Azure. I had forgotten though that with fundamentals exams, there’s only 60 minutes available! Seeing the timer ticking down from that gave me a little surprise, though I managed to get through it (& pass!) in good time.

The official description of the exam is:

Candidates for this exam should have foundational knowledge of machine learning (ML) and artificial intelligence (AI) concepts and related Microsoft Azure services.

This exam is an opportunity to demonstrate knowledge of common ML and AI workloads and how to implement them on Azure.

This exam is intended for candidates with both technical and non-technical backgrounds. Data science and software engineering experience are not required; however, some general programming knowledge or experience would be beneficial.

The official page for the exam is at https://docs.microsoft.com/en-us/learn/certifications/exams/ai-900, where it gives quite a good overview of things. Go take a look at it, and also take a look at the associated learning paths.

Once again, I sat the exam through the proctored option (ie from home). Honestly, I think that my experience this time has probably been the best so far. I went through the usual system checks for signing in. The proctor came along, and within 30 seconds they had released the exam!

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

  • Image recognition types
    • What each one is, what it’s used for
    • When to use for a specific scenario
  • Facial recognition
    • Different types available
    • What each one is, what it’s used for, when to use for a specific scenario
    • Limitations & issues that can occur when using it
  • Text:
    • Different recognition types
    • What each one is, what it’s used for, when to use for a specific scenario
    • Analytics. How this works, how to set up & use
    • Translation. Different options available, how they work, when to use for a specific scenario
    • Sentiment analysis. How it works, limitations, what’s needed to train a model
  • QnA Maker
    • What this does, how to set it up, how to train it
    • Generating material with it
    • Use with chatbots
  • Machine Learning
    • What this actually is, and what it does
    • How it works
    • Different types that are available, how they work, how to train a model
    • Classification options
  • Machine Learning Designer
    • How to use & set up
    • Different types of data/options used within it
    • Training & evaluation models. The steps needed for this, how to set it up correctly
    • Types of modules available
    • Validation sets
  • Chatbots
    • What they are
    • How/where they can be used
    • Limitations
    • Integration with other systems
  • Charts
    • Different charts that are available for use
    • Reading them correctly
    • Model types shown on them
    • Metrics!
  • Microsoft AI Principles
    • The different principles that are published
    • What each one means/refers to

Overall, it was quite good. The Microsoft AI Principles were new to me, and I had to guess at those (I went to look them up afterwards!). Other than that, some bits I breezed through, other parts I took careful stock of.

This is definitely an area that I’m going to continue exploring, and will be writing up further exams that I take in it. I’m curious what your experience of it has been – please drop a comment below to let me know!

Dian Taylor on The Oops Factor

Finding out what the ‘D’ in her Twitter handle actually stands for, how she started in IT, and what happens when you don’t believe in hypnotism! Also including advice as to why it’s so important to keep an open mind with life & projects.

If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!

Click here to take a look at the other videos that are available to watch.

Igor Shvets on The Oops Factor

Working out Igor’s retirement plan to be based around his love of hiking (which came from the military). Also discussing how coffee, amongst other important things, is key to rapid-speed enterprise-scale project deployments!

If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!

Click here to take a look at the other videos that are available to watch.

Kaila Bloomfield on The Oops Factor

Discussing mutual interests of motorsport, racing motorbikes, seeing races, & having amazing adrenaline rushes! Also covering thoughts in our heads that we might have said out loud by mistake, and some quite interesting comments around dress codes at clients.

If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!

Click here to take a look at the other videos that are available to watch.

Beth Burrell on The Oops Factor

Finding out how exactly Beth took up golf, how that then progressed over time into relationships, and what exactly can happen when you’re too honest with people!

If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!

Click here to take a look at the other videos that are available to watch.

Handling ‘Out of Hours’

Let’s face it – we can be quite spoiled at times. As a customer, we can sometimes expect that companies be available 24/7 to service our requests, needs, issues, etc. That would be wonderful, wouldn’t it! Imagine that you have a mobile phone issue at 2am – you could call up your provider, and have it handled (or a new handset sent out) immediately. That would be quite nice!

Unfortunately the real world doesn’t (always) quite work like that. Of course there are companies that operate on a multi-national or even global scale, and there’s always customer service available (Amazon – I’m thinking of you right now!).

Previously I’ve gone into how we can set operating hours for a company, so that the ability to contact a customer support agent is only shown during these times. Take a look at Handling Company Hours for a refresher on this.

But sometimes not showing the ability to contact support could potentially be counter-productive. Customers may think that our website isn’t working properly, and attempt to reach us through other means instead. This could quite easily frustrate them.

Due to this, we have a nice little piece of functionality that’s now come out in Omnichannel. It’s small and simple, yet quite brilliant in my humble opinion. This is the ability to have a chat widget available, but let customers know that it’s currently outside of company hours.

To activate this, we need to open the Chat record in the Omnichannel Administration Hub, and go to the Design tab:

Quite helpfully, the section is labelled ‘Offline’! How much better could we get?

We do need to understand that (at the time of writing this post) it’s currently in Preview, with all of the usual caveats around how that works.

We have several items available here:

  • Show widget during offline hours. This is what actually activates the setting – leaving this set to ‘false’ won’t do anything for us!
  • Theme colour. This allows us to set the specific theme to be used during ‘offline’ hours. It’s actually really helpful, as it gives the customer a very visual indication that it’s out of hours
  • Title. The title of the chat widget, which will be displayed to the user
  • Subtitle. This allows us to place a subtitle as well, for the user to be able to see

So what does this then look like? Well, let’s take a look:

Personally I think that being able to set a theme colour for offline access gives it that little edge. Customers will become aware of this (subconsciously) when visiting the website, and come to the point of not even trying to start a chat when they see that it’s out of hours.

One MAJOR thing to bear in mind. We’re only going to be given the option to set this when we have a value set for Operating Hours. Without this being set, we won’t be shown this option. Go try it for yourself and see!

There’s not really much else to this, to be honest. But I’m liking it. I know that from a personal perspective I’ve been on various websites, and have no idea if the support chat is actually working or not. With this in place, I’m able to see that it is available for use at the correct time, and not have to wonder about it.

Have you ever thought about implementing something like this? Have you actually done so? I’d be really interested to hear from you about how you went about it – please drop a comment below!