Sharing the joys of go-karting and car racing, Larry also covers how important correct communication (in the context of the conversation) is to everything in life!
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
One of the things that customer service agents absolutely HATE is having to type out full replies to customers. Much of what they do is quite repetitive, and typing the same response each & every time gets frustrating, to say the least.
As I’ve covered previously in Quick Responses in Omnichannel, Omnichannel offers Quick Replies. With these, agents can select the response they want to use and quickly populate it into the chat session that they’re having.
It’s also possible, using ‘slugs’, to set up responses that will automatically populate with specific pieces of information in the system. For example, something like ‘Good morning, my name is {Agent Name}, how may I assist you?’ will automatically populate the name of the agent into the chat session.
This is great; the main drawback to date has been that Omnichannel administrators are required to set these up, as well as maintain them. That’s not so great when you consider that agents might want to personalise their responses as well – something that, until now, hasn’t been possible within the system.
However, with Wave 2 2020, it’s now possible to allow agents to create their own quick replies to use within chat sessions. It’s also not particularly difficult to get this working in the system, as we’ll see below.
The Omnichannel Administrator simply needs to go to the Personal Quick Replies section, change the toggle to ‘Yes’, then save. This will enable personal quick replies for agents quickly & easily.
Once the system setting has been set, and is active (it can take a few minutes to refresh through), agents are then able to start setting up their own quick replies.
To do this, agents will need to be in the Omnichannel for Customer Service app, and select the Personalisation option from the drop-down menu:
This will then open the agent personalisation tab, which has several different sections on it. The first one is the one that we’re interested in – Personal Quick Replies:
This lists any personal quick replies that the agent has already set up, and gives the option to create further ones to use:
Clicking this option brings up the familiar interface to set this up:
Note: Personal quick replies aren’t localised in Omnichannel. That’s why you need to select a Locale for the record. To be able to provide the quick response in multiple languages, create a specific response for each language, and select the locale that’s appropriate for it
Once the record is saved, it’s then possible to add tag/s to it for referencing:
Note: If you want to use the hash character (#), you can only use it at the beginning of the tag, not anywhere else in it
Once these have been saved, they’re available for the agent to select from the chat. The chat interface will show both system & personal quick replies. Typing ‘/q’ into the chat window will bring these up:
We can select the tab at the top to show just the personal quick replies that the agent has set up:
Alternatively, if the agent starts searching with text, they can easily distinguish between system & personal quick replies by looking at the icon against each one. System replies have a globe-style icon, whereas personal replies have a person-style icon:
So in summary, I think that this is a really great addition to the original quick replies functionality. It’ll free up time for Omnichannel Administrators, and allow agents to add the responses that they need. It’s also possible to share these using the OOB record sharing functionality, which means that a team lead can set them up, and then share them with the rest of the team!
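If a team lead wanted to script that sharing rather than click through the Share dialog each time, the standard Dataverse sharing message (GrantAccess) can be called through the Web API. Below is a rough, hedged sketch only – the entity logical name, its id field, and the GUIDs are placeholders (I haven’t checked the actual logical name of the personal quick reply entity), so treat it as illustrative rather than copy & paste ready:

```javascript
// Hedged sketch: the shape of the standard Dataverse 'GrantAccess' request (the message
// behind the OOB Share dialog), here sharing a personal quick reply record with a team
// so that the team gets read access to it.
// 'msdyn_personalquickreply' and its id attribute are PLACEHOLDERS - check the real
// logical name in your own environment before using anything like this.
const shareRequest = {
  Target: {
    "@odata.type": "Microsoft.Dynamics.CRM.msdyn_personalquickreply",  // placeholder logical name
    msdyn_personalquickreplyid: "00000000-0000-0000-0000-000000000001" // placeholder record id
  },
  PrincipalAccess: {
    Principal: {
      "@odata.type": "Microsoft.Dynamics.CRM.team",
      teamid: "00000000-0000-0000-0000-000000000002"                   // placeholder team id
    },
    AccessMask: "ReadAccess"
  }
};

// Run from a session already authenticated to the environment (e.g. the browser console
// while logged into the app), so the relative Web API URL resolves correctly.
await fetch("/api/data/v9.1/GrantAccess", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    Accept: "application/json"
  },
  body: JSON.stringify(shareRequest)
});
```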
How do you think this could enable or help you? Drop a comment below – I’d love to hear!
Reece comes on & captivates us, chatting about the joys of ethical hacking, #DevOps, & what can actually happen when using Power Apps Portals as a public website!
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
This is a slightly different post from the usual stuff that I talk about. It’s much more ‘techy/developer’ focused, but I thought it would still be quite useful for people to keep in mind.
The background to this comes from a project that I’ve been working on with some colleagues. Part of the project involves setting up an Azure SQL database, and replicating CDS data to it. Why, I hear you ask? Well, there are some downstream systems that may be heavy users of the data, and as we well know, CDS isn’t specifically built to handle a large number of queries against it. In fact, if you start hammering the CDS layer, Microsoft is likely to reach out to ask what exactly you’re trying to do!
Therefore (as most people would do), we’re putting in database layer/s within Azure to handle the volume of data requests that we’re expecting to occur.
So, when setting up things like databases, we need to create names for them, along with access credentials. All regular ‘run of the mill’ stuff – no surprises there. To ensure adequate security, we usually use one of a handful of password generators that we keep to hand. These have many advantages, such as ensuring that the password isn’t something we (as humans) have dreamt up, which might be easier to guess. I’ve used password generators over the years for many different professional & personal projects, and they really are quite good overall.
Once we had the credentials & everything set up, we then logged in (using SQL Server Management Studio), and all was good. Everything that we needed was in place, and it was looking superb (from the front end, at least).
OK – on to getting the data actually loaded in. To do this, we’re using the Data Export Service (see https://docs.microsoft.com/en-us/power-platform/admin/replicate-data-microsoft-azure-sql-database for further information around this). The reason for using this is that the Data Export Service intelligently synchronises the entire database initially, and thereafter synchronises on a continuous basis as changes occur (delta changes) in the system. This is really good, and means we don’t need to build anything custom to handle it. Wonderful!
Setting up the Data Export Service takes a little bit of time. I’m not going to go into the details of how to set it up – instead, there’s a wonderful walkthrough by the AMAZING Scott Durow at http://develop1.net/public/post/2016/12/09/Dynamic365-Data-Export-Service. Go take a look at it if you need to find out how to do it.
So we were going through the process. Part of this involves copying the Azure connection string into a script that you run. When you do this, you need to re-insert the password (as Azure doesn’t include it in the string). As we had generated the password ourselves, we copied & pasted it in, and ran things.
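For reference, the connection string that you copy out of Azure looks something along these lines (the server, database & login names below are placeholders) – note the {your_password} part, which is what you have to fill in yourself:

```
Server=tcp:<your-server>.database.windows.net,1433;Initial Catalog=<your-database>;User ID=<your-login>;Password={your_password};Encrypt=True;Connection Timeout=30;
```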
However, all we were getting was a red star, and the error message ‘Unable to validate profile’.
As you’d expect, this was HIGHLY frustrating. We started to dig down to see what actual error log/s were available (with hopefully more information on them), but didn’t make much progress there. We logged in through the front end again – yes, no problems there, all was working fine. Back to the Export Service & scripts, but again the error. As you can imagine, we weren’t very positive about this, and were really trying to find out what could possibly be causing this. Was it a system error? Was there something that we had forgotten to do, somewhere, during the initial setup process?
It’s at these sorts of times that self-doubt can start to creep in. Had we missed something small & minor that was actually really important? We went over the deployment steps again & again. Each time, we couldn’t find anything that we had missed. It was getting absolutely exasperating!
Finally, after much trial & error, we narrowed the issue down to one source. It’s something we hadn’t really expected, but had indeed caused all of this to happen!
What happened was that the password that we had auto-generated had a semi-colon (‘;’) in it. In & of itself, that’s not an issue (usually). As we had seen, we were able to log into SSMS (the ‘front-end’) successfully, with no issues at all.
However, when put into the connection string in code, the semi-colon is treated as a special character – it’s the separator between the key/value pairs that make up the string. The password was therefore being cut short, and the whole thing failed as a result! Resolving it was simple – we regenerated the password to ensure that it didn’t include a semi-colon character!
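To illustrate what was going on, here’s a small sketch (not the actual Data Export Service code – the server, database & password values are all made up) of what happens when a password containing a semi-colon gets dropped into a connection string that’s then parsed into key/value pairs:

```javascript
// Connection-string parsers split on ';' first, then split each piece on '='.
// The made-up password below is "Pa55;word!" - watch what happens to it.
const connectionString =
  "Server=tcp:example.database.windows.net,1433;Initial Catalog=exampledb;" +
  "User ID=exampleadmin;Password=Pa55;word!;Encrypt=True;";

const parsed = {};
for (const token of connectionString.split(";")) {
  const [key, ...rest] = token.split("=");
  if (key && rest.length) parsed[key.trim()] = rest.join("=");
}

console.log(parsed["Password"]); // "Pa55" - everything after the ';' is lost
// The orphaned "word!" piece has no '=' in it, so it gets silently discarded.
// Regenerating the password without ';' (or wrapping the value in quotes,
// e.g. Password='Pa55;word!', which most real connection-string parsers honour)
// keeps the full value intact.
```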
Now, this is indeed something quite simple, and should be core programming knowledge. Most password generators have an option to exclude special characters like this, but not all of them do. Unfortunately we had fallen foul of it, but thankfully all was resolved in the end.
The setup then carried on successfully, and we were able (after all of the effort above) to achieve what we had set out to do initially.
Have you ever had a similar issue? Either with passwords, or where something worked through a front-end system, but not in code? Drop a comment below – I’d love to hear!
Talking about his love of gaming, Fausto shares with us how it’s actually an integral part of his family networking & communication. We then move on to more serious matters involving our mental health, the stigma attached to discussing it, and how we can work through & overcome things.
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
I’ll be the first to admit that I have limited experience of Dynamics 365 for Marketing. In fact, I think that it would be stretching the description to say that I have even ‘limited experience’! I’ve seen it once or twice, and have attended a few presentations on it, but apart from that, nada.
I do remember what it used to be like in its previous incarnation, but even then I didn’t really touch it. Customer Service (& Sales) are my forte, and I generally stick within those walls. Marketing traditionally was its own individual application, and only more recently has been rolled into the wider Dynamics 365 application suite. Even so, it still sometimes works in a somewhat interesting way, different from the rest of the system.
Inevitably, I’ve had to actually do something with it for a client project, which is what has brought me to write this post. We had created a few marketing forms, surfaced them correctly, etc. It was great, and all working well.
Then we realised that we needed to capture some additional information, in this case a list of Countries. There’s no standard entity for it within Dynamics 365, so we created our own, and loaded a list of countries (& associated data) into it. Fine – that was working without issues, including in the places that we needed to surface it.
Then we came to needing to surface the Country value on a marketing form, through a lookup. Simple, you’d have thought? Well, not so much. We went to create the field, and got presented with the following error as we did so:
The error says: ‘The role marketing services user does not have access to the entities you’ve chosen…’
In essence, the system was telling us that we weren’t able to access the entity. Though Country is a custom entity, we were logged in as users with the System Administrator role (which automatically has access to ALL entities). This left us puzzled as to what to do.
The error message, thankfully, was quite clear. It was referring to a specific security role missing privileges. In this case, it was the ‘Marketing Services User’. I therefore went to check the permissions for it, and sure enough, it didn’t have permissions on the Country entity that I had created!
Now, usually if a security role is missing permissions, what we do is create a custom security role (usually copying the existing role), and add the necessary permissions to it. Best practice is NOT to edit the default security roles. The (main) reason behind this is that Microsoft could update the security role in a later update/release, which could then impact us. We therefore use custom roles to avoid this happening (& yes, I’ve seen it happen in practice!).
The fly in the soup (lovely phrase, I know) is that we couldn’t do that here. It seems that Dynamics 365 for Marketing relies on this specific underlying security role. Even if we had created a custom role, we had no idea how to tell the system to actually use our custom role, rather than the default one that it’s currently using. Quite frustrating, I tell you!
So in the end we decided to give the default security role the necessary permissions, and see what happened:
Having granted the security permissions to the role & saved it, we then attempted to create the marketing form field. This time, we were successful! No errors occurred, thankfully:
So in summary, I still have no idea why this has happened. I’ve taken a look around, but can’t find anything obvious as to how/why it actually works like this. I guess that I’d need to dig ‘under the hood’ somewhat to see what’s actually going on, and how to deal with it appropriately. For the moment, the solution is in place, and is working.
We’ve also been very careful (as mentioned above) to add just the specific custom entity to the default security role. We haven’t touched anything else within it – all other security permissions are done (as per best practice) with custom security roles, which are then allocated appropriately to users &/or teams. Hopefully this will be fine in the long term, though we’ll definitely be keeping our eyes on it to make sure!
Have you ever come across something like this? How did you decide to go about solving it? Drop a comment below – I’d love to hear!
Update: Thanks to the amazing Carl Cookson, it turns out that this is due to an update from Microsoft in how Marketing works. See https://docs.microsoft.com/en-gb/dynamics365/marketing/marketing-fields for more information around it. Essentially the app uses this role to sync to the Marketing services staged in Azure, so the role needs to have the appropriate permissions.
Finding out how Sharon loves being an early technology adopter, the challenges that it can bring, and her love of space rockets. Delving into details of what happened when an electric car ran out of battery.
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
How to start off this post? I’ve been trying to work out how exactly I can express my excitement around this new feature for Omnichannel. Included in the Wave 2 2020 release, it’s just AMAZING. That, however, doesn’t do it justice. So let’s see how I can describe it properly to give it due respect.
Previously I’ve mentioned the ability to use skills within Omnichannel (see https://thecrm.ninja/omnichannel-for-dynamics-365-queues-users-skills/). This can be used to indicate, for example, agents who can communicate in a certain language. That’s useful of course, but what happens when you don’t have anyone who can speak the language that the customer wants to use? It’s a problem, and one that’s really not easily solved. At least, not until now.
So, what exactly does this new translation feature do? Simple – it translates from one language to another. OK, it’s actually a little more awesome than just that. Having delved into it quite a bit over the last week or so, I see three main benefits (with a bonus one as well!):
It translates incoming text from the customer (through chat) from the language that they’re using to the language that the agent is using
It translates outgoing text from the agent (through chat) from the language that the agent is using to the language that the customer is using
It translates text between agents from one language to the other & vice versa (eg on an internal consult)
Now for the bonus. It doesn’t just translate text from one language to another. It follows the languages being used! So if the customer switches mid-conversation to a different language, the system picks it up. Not only is the new incoming language translated into the agent’s language, but the replies from the agent are shown in the (new) language being used by the customer. It’ll automatically show text in the ‘last used’ language, which is really quite incredible (at least in my opinion).
There’s no fiddling around with agents needing to select the language, or anything else. It’s a simple click to turn it on, and then another click to turn it off. I’m going to go through the setup of it below, as there are a few fiddly bits that did confuse me for a bit.
It’s also possible to use different translation tools. At the time of writing this post, it’s possible to use Bing, Google or Azure translation models. I’m sure that there will be other options available in the future as well to use, which really opens up possibilities for clients with differing digital estates.
Translation happens in real time, so there’s no waiting around for it to actually get on with it. It’s displayed immediately on the screen for the agent to see.
Setup for translation
I found the general guides to be alright, but they weren’t too clear on a few items. I’m therefore sharing below how I went about it, in order to get things working properly. Please be aware that this isn’t in the order specified in the documentation, but in retrospect it means less switching between screens:
Ensure that you have the latest updates to your Omnichannel environment (this is always a good idea, regardless of anything else!)
Ensure you have an API key to enter into the web resource file! This is what tripped me up at first. You can use any text editor (I use Notepad++) to open it up. How you get the API key will depend on the provider. For example, to set up a free account in Azure, take a look at https://docs.microsoft.com/en-us/azure/cognitive-services/translator/translator-how-to-signup. There are also some additional things that you can configure in the web resource file, but I’m not going to go into that here (I have included a rough sketch after these steps of the sort of call that the key gets used for)
Go to your solutions (this can be either through the Classic interface, or through http://make.powerapps.com). You can either create a new solution to hold the web resource file, or alternatively, if you have an existing solution that you deploy, you can add the web resource file to that. Either:
In the classic interface, navigate to Web Resources, click to create a new web resource, and upload the file (ensure you select the type to be ‘Script (JScript)’, or
In the modern interface, click the ‘New’ button, select ‘Web Resource’ from the ‘Other’ section, and then follow the steps above
Once it’s saved, it’ll give you a URL. Copy that, and publish the solution.
Go to the Omnichannel Administration Hub, find ‘Real Time Translation’ under Settings, and set this to Yes. You can also select a default input language. Enter the web resource URL that you copied above, then save the record
You’re all done!
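To give an idea of what that API key actually gets used for, here’s a rough sketch of the sort of call made to the Azure Translator service (assuming that’s the provider you chose). The key & region values are placeholders, and this isn’t the actual contents of Microsoft’s web resource file – it’s purely to illustrate what the key unlocks:

```javascript
// Hedged sketch of a direct call to the Azure Translator Text API (v3.0).
// Replace the key & region placeholders with the values from your own Azure resource.
const response = await fetch(
  "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=en",
  {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": "<your-translator-key>",     // placeholder
      "Ocp-Apim-Subscription-Region": "<your-resource-region>", // placeholder (needed for regional resources)
      "Content-Type": "application/json"
    },
    body: JSON.stringify([{ Text: "Bonjour, pouvez-vous m'aider ?" }])
  }
);

const [result] = await response.json();
console.log(result.detectedLanguage.language); // e.g. "fr" - the source language is auto-detected
console.log(result.translations[0].text);      // the English translation that the agent would see
```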
Agent Experience
Depending on how you’ve configured your web resource, auto translation will either be on by default, or be off. If it’s not on by default, the agent can simply click within their chat window to set it to active:
Once active, it’ll start to translate everything, in both directions. Below are side-by-side screens of the customer & agent experiences. You’ll note that the customer sees the initial agent response in English, as the agent was the first to write in the conversation
From the agent’s side of things, both the original text and the translation are shown. The customer is only shown the language that they’re actually using
If the agent isn’t sure what language the customer is using (as it’s being auto-translated for them), they can hover over the text, and it’ll show the details for it:
If the agent consults with, or transfers the session to, another agent, the second agent will see the conversation in the language that they themselves are using (with the original text shown as well). This opens up the possibility of passing a customer to a specialist who can assist them, even if they don’t speak the same language! It’s really cool to see this in action.
Even more wonderfully, this is stored right down to the transcript level:
This is really opening up major new concepts that Omnichannel can be used for, which will be supported entirely by this feature. As I said at the beginning of this post, I’m absolutely excited for it, and we’re already envisioning how this will be able to empower our clients even more.
Do you have any questions around this? Can you think of any scenarios that this could solve for you? Drop a comment below – I’d love to hear!
Chatting to Matt about how he got into poker to begin with in his university days (some decent studying of the theory!), & actually going to Vegas, as well as getting married there. Also touching on printing out lots of documents, and duplicate merging
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
One of my recent decisions has been to explore the Azure space. There are several reasons behind this. CDS, as we (hopefully!) know, sits on top of Azure, and it’s useful to know the broader digital estate available on the platform.
I’ve also been looking into some of the Cognitive Services functions that are available within Power Platform. These all live in Azure, and are surfaced into Power Apps etc. It’s therefore good to know what can be done outside of the ‘Power Platform bubble’, and the options there.
Incidentally, a year ago I even built a canvas app that allowed you to take a picture of a motorbike tyre. Using AI Builder functionality, it then analysed if the tyre tread was legal or not! That was a really cool proof of concept.
So a good place to start, I thought, would be the AI-900. This covers the fundamentals of the AI offerings that are in Azure. I had forgotten, though, that with fundamentals exams there’s only 60 minutes available! Seeing the timer ticking down from that gave me a little surprise, though I managed to get through it (& pass!) in good time.
The official description of the exam is:
Candidates for this exam should have foundational knowledge of machine learning (ML) and artificial intelligence (AI) concepts and related Microsoft Azure services.
This exam is an opportunity to demonstrate knowledge of common ML and AI workloads and how to implement them on Azure.
This exam is intended for candidates with both technical and non-technical backgrounds. Data science and software engineering experience are not required; however, some general programming knowledge or experience would be beneficial.
Once again, I sat the exam through the proctored option (i.e. from home). Honestly, I think that my experience this time has probably been the best so far. I went through the usual system checks for signing in. The proctor came along, and within 30 seconds they had released the exam!
So, as before, it’s not permitted to share any of the exam questions; this is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.
Image recognition types
What each one is, what it’s used for
When to use for a specific scenario
Facial recognition
Different types available
What each one is, what it’s used for, when to use for a specific scenario
Limitations & issues that can occur when using it
Text:
Different recognition types
What each one is, what it’s used for, when to use for a specific scenario
Analytics. How this works, how to set up & use
Translation. Different options available, how they work, when to use for a specific scenario
Sentiment analysis. How it works, limitations, what’s needed to train a model
QnA Maker
What this does, how to set it up, how to train it
Generating material with it
Use with chatbots
Machine Learning
What this actually is, and what it does
How it works
Different types that are available, how they work, how to train a model
Classification options
Machine Learning Designer
How to use & set up
Different types of data/options used within it
Training & evaluation models. The steps needed for this, how to set it up correctly
Types of modules available
Validation sets
Chatbots
What they are
How/where they can be used
Limitations
Integration with other systems
Charts
Different charts that are available for use
Reading them correctly
Model types shown on them
Metrics!
Microsoft AI Principles
The different principles that are published
What each one means/refers to
Overall, it was quite good. The Microsoft AI Principles were new to me, and I had to guess at those (I went to look them up afterwards!). Other than that, some bits I breezed through, other parts I took careful stock of.
This is definitely an area that I’m going to continue exploring, and will be writing up further exams that I take in it. I’m curious what your experience of it has been – please drop a comment below to let me know!