Talking about how online gaming can empower & connect families, as well as what ‘might just happen’ when editing a solution (unmanaged!) directly in Production
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
Sharing the joys of go-karting and car racing, Larry also covers how important correct communication (in the context of the conversation) is to everything in life!
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
Reece comes on & captivates us, chatting about the joys of ethical hacking, #DevOps, & what can actually occur when using Power Apps Portals as a public website!
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
This is a slightly different post from the usual stuff that I talk about. It’s much more ‘techy/developer’ focused, but I thought it would still be quite useful for people to keep in mind.
The background to this comes from a project that I’ve been working on with some colleagues. Part of the project involves setting up an Azure SQL database, and replicating CDS data to it. Why, I hear you ask? Well, there are some downstream systems that may be heavy users of the data, and as we well know, CDS isn’t specifically built to handle a large number of queries against it. In fact, if you start hammering the CDS layer, Microsoft is likely to reach out to ask what exactly you’re trying to do!
Therefore (as most people would do), we’re putting in a database layer (or layers) within Azure to handle the volume of data requests that we’re expecting.
So with setting up things like databases, we need to create names for them, along with access credentials. All regular ‘run of the mill’ stuff – no surprises there. To ensure adequate security, we usually use one of a handful of password generators that we keep to hand. These have many advantages, such as ensuring the password isn’t something we (as humans) have dreamt up, which might be easier to guess. I’ve used password generators over the years for many different professional & personal projects, and they really are quite good overall.
Once we had the credentials & everything set up, we then logged in (using SQL Server Management Studio), and all was good. Everything that we needed was in place, and it was looking superb (from the front end, at least).
OK – on to getting the data actually loaded in. To do this, we’re using the Data Export Service (see https://docs.microsoft.com/en-us/power-platform/admin/replicate-data-microsoft-azure-sql-database for further information around this). The reason for using this is that the Data Export Service intelligently synchronises the entire database initially, and thereafter synchronises on a continuous basis as changes occur (delta changes) in the system. This is really good, and means we don’t need to build anything custom to handle it. Wonderful!
Setting up the Data Export Service takes a little bit of time. I’m not going to go into the details of how to set it up – instead there’s a wonderful walkthrough by the AMAZING Scott Durow at http://develop1.net/public/post/2016/12/09/Dynamic365-Data-Export-Service. Go take a look at it if you’re needing to find out how to do it.
So we were going through the process. Part of this involves copying the Azure connection string into a script that you run. When you do this, you need to re-insert the password (as Azure doesn’t include it in the string). For our purposes (as we had generated this), we copied/pasted the password, and ran things.
However, all we were getting was a red star, and the error message ‘Unable to validate profile’.
As you’d expect, this was HIGHLY frustrating. We started to dig down to see what error logs were available (hopefully with more information in them), but didn’t make much progress there. We logged in through the front end again – yes, no problems there, all was working fine. Back to the Data Export Service & scripts, but again the error. As you can imagine, we weren’t very positive about this, and were really trying to find out what could possibly be causing it. Was it a system error? Was there something that we had forgotten to do, somewhere, during the initial setup process?
It’s at these sorts of times that self-doubt can start to creep in. Did we miss something small & minor, but that was actually really important? We went over the deployment steps again & again. Each time, we couldn’t find anything that we had missed out. It was getting absolutely exasperating!
Finally, after much trial & error, we narrowed the issue down to one source. It’s something we hadn’t really expected, but had indeed caused all of this to happen!
What happened was that the password we had auto-generated had a semi-colon (‘;’) in it. In & of itself, that’s not usually an issue. As we had seen, we were able to log into SSMS (the ‘front end’) successfully, with no issues at all.
However, within a connection string the semi-colon is a special character – it’s the delimiter between the ‘Key=Value’ pairs that make up the string. The unescaped semi-colon in our password was therefore cutting the Password value short, with the remainder treated as a meaningless extra parameter, causing the whole thing to fail! The resolution was simple – we regenerated the password to ensure that it didn’t include a semi-colon!
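To illustrate the failure mode, here’s a minimal sketch (my own illustrative parsing, not the Data Export Service’s actual internals) of how a naive ‘split on semi-colons’ parse truncates a password containing one, and how escaping the value can avoid it:

```python
# Minimal sketch of why a semi-colon in a password breaks a connection string.
# The parsing below mimics the common 'Key=Value;Key=Value' convention - it is
# illustrative only, not the actual service code.

def parse_connection_string(conn_str: str) -> dict:
    """Naive parser: split on ';', then on the first '='."""
    parts = {}
    for segment in conn_str.split(";"):
        if "=" in segment:
            key, _, value = segment.partition("=")
            parts[key.strip()] = value.strip()
    return parts

password = "Xy7;Qw9pLm"  # hypothetical auto-generated password containing ';'

conn_str = (
    "Server=myserver.database.windows.net;Database=mydb;"
    f"User ID=admin;Password={password}"
)
print(parse_connection_string(conn_str)["Password"])  # -> 'Xy7' (truncated!)
# The trailing 'Qw9pLm' segment has no '=', so it's silently dropped.

# ADO.NET-style strings let you wrap a value in {braces} or quotes to escape it:
conn_str_escaped = (
    "Server=myserver.database.windows.net;Database=mydb;"
    f"User ID=admin;Password={{{password}}}"
)
```

ADO.NET-style parsers accept values wrapped in braces or quotes, but not every consumer of the string supports escaping – which is why regenerating the password was the safer fix for us.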
Now, this is indeed something quite simple, and should be at the core of programming knowledge. Most password generators have an option to exclude special characters like this, but not all of them do. Unfortunately we had fallen subject to this, but thankfully all was resolved in the end.
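If you’re generating credentials yourself, it’s easy to rule out the troublesome characters up front. Here’s a minimal sketch using Python’s standard secrets module (the exclusion set is my own choice – tailor it to whatever the consuming system treats as special):

```python
import secrets
import string

# Characters that commonly have special meaning in connection strings or shells.
# This exclusion set is an assumption - adjust it for your target systems.
EXCLUDED = set(";'\"{}=`\\")

ALPHABET = [c for c in string.ascii_letters + string.digits + string.punctuation
            if c not in EXCLUDED]

def generate_password(length: int = 24) -> str:
    """Cryptographically secure password that avoids delimiter characters."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'r4@T!xPb...' - no semi-colons in sight
```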
The setup then carried on successfully, and we were able (after all of the effort above) to achieve what we had set out to do initially.
Have you ever had a similar issue? Either with passwords, or where something worked through a front-end system, but not in code? Drop a comment below – I’d love to hear!
Talking about his love of gaming, Fausto shares with us how it’s actually an integral part of his family networking & communication. We then move on to more serious matters involving our mental health, the stigma attached to discussing it, and how we can work through & overcome things.
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
Finding out how Sharon loves being an early technology adopter, the challenges that it can bring, and her love of space rockets. We also delve into the details of what happened when an electric car ran out of battery.
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
How to start off this post? I’ve been trying to work out how exactly I can express my excitement around this new feature for Omnichannel. Included in the Wave 2 2020 release, it’s just AMAZING. That, however, doesn’t do it justice. So let’s see how I can describe it properly, to give it due respect.
Previously I’ve mentioned the ability to use skills within Omnichannel (see https://thecrm.ninja/omnichannel-for-dynamics-365-queues-users-skills/). This can be used to indicate, for example, agents who can communicate in a certain language. That’s useful of course, but what happens when you don’t have anyone who can speak the language that the customer wants to use? It’s a problem, and one that’s really not easily solved. At least, not until now.
So, what exactly does this new translation feature do? Simple – it translates from one language to another. OK, it’s actually a little more awesome than just that. Having delved into it quite a bit over the last week or so, there are (in my view) three main benefits (with a bonus one as well!):
It translates incoming text from the customer (through chat) from the language that they’re using to the language that the agent is using
It translates outgoing text from the agent (through chat) from the language that the agent is using to the language that the customer is using
It translates text between agents from one language to another & vice versa (e.g. on an internal consult)
Now for the bonus. It doesn’t just translate text from one language to another – it follows the languages being used! So if the customer switches mid-conversation to a different language, the system picks it up. Not only is the new incoming language translated into the agent’s language, but the replies from the agent are shown in the (new) language being used by the customer. It’ll automatically show text in the ‘last used’ language, which is really quite incredible (at least in my opinion).
There’s no fiddling around with agents needing to select the language they need, or anything else. It’s a simple click to turn it on, and another click to turn it off. I’m going to go through the setup below, as there are a few fiddly bits that did confuse me for a while.
It’s also possible to use different translation tools. At the time of writing this post, it’s possible to use the Bing, Google or Azure translation models. I’m sure that there will be other options available to use in the future as well, which really opens up possibilities for clients with differing digital estates.
Translation happens in real time, so there’s no waiting around for it to actually get on with it. It’s displayed immediately on the screen for the agent to see.
Setup for translation
I found the general guides to be alright, but they weren’t too clear on a few items. I’m therefore sharing below how I went about getting things working properly. Please be aware that this isn’t in the order specified in the documentation, but in retrospect it means less switching between screens:
Ensure that you have the latest updates to your Omnichannel environment (this is always a good idea, regardless of anything else!)
Ensure you have an API key to enter into the web resource file! This is what tripped me up at first. You can use any text editor (I use Notepad++) to open it up. How you get the API key will depend on the provider. For example, to set up a free account in Azure, take a look at https://docs.microsoft.com/en-us/azure/cognitive-services/translator/translator-how-to-signup (there’s a sketch after these steps for sanity-checking the key). There are also some additional things that you can configure in the web resource file, but I’m not going to go into that here
Go to your solutions (either through the Classic interface, or through http://make.powerapps.com). You can either create a new solution to hold the web resource file, or add it to an existing solution that you’ll be deploying. Either:
In the classic interface, navigate to Web Resources, click to create a new web resource, and upload the file (ensure you select the type to be ‘Script (JScript)’, or
In the modern interface, click the ‘New’ button, select ‘Web Resource’ from the ‘Other’ section, and then follow the steps above
Once it’s saved, it’ll give you a URL. Copy that, and publish the solution.
Go to the Omnichannel Administration Hub, find ‘Real Time Translation’ under Settings, and set this to Yes. You can also select a default input language from the selection. Also enter the URL that you copied above. Save it
You’re all done!
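If you’re using Azure as the provider, it can be worth confirming the key works before embedding it in the web resource. Here’s a minimal sketch against the Translator v3 REST API (the key, region and target language are placeholders – substitute your own); leaving out the ‘from’ parameter lets the service auto-detect the source language, which is essentially what powers the language-following behaviour described above:

```python
import requests  # third-party: pip install requests

# Placeholders - substitute your own values.
SUBSCRIPTION_KEY = "<your-translator-key>"
REGION = "westeurope"  # the Azure region your Translator resource lives in

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def translate(text: str, to_lang: str = "fr") -> None:
    """Translate text, letting the service auto-detect the source language."""
    response = requests.post(
        ENDPOINT,
        params={"api-version": "3.0", "to": to_lang},  # no 'from' -> auto-detect
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    response.raise_for_status()
    result = response.json()[0]
    print("Detected:", result["detectedLanguage"]["language"])
    print("Translation:", result["translations"][0]["text"])

translate("Hello, how can I help you today?")
```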
Agent Experience
Depending on how you’ve configured your web resource, auto translation will either be on by default, or off. If it’s not on by default, the agent can simply click within their chat window to select it to be active:
Once active, it’ll then start to translate everything, in both directions. Below are side-by-side screens of the customer & agent experiences. You’ll note that the customer sees the initial agent response in English, as the agent was the first to speak in the conversation.
From the agent side of things, both the original language and the translated language are shown. The customer is only shown the language that they’re actually using.
If the agent isn’t sure what language the customer is using (as it’s being auto-translated for them), they can hover over the text, and it’ll show the details for it:
If the agent consults with another agent, or transfers the session to them, the second agent will see the conversation in the language that they themselves are using (with the original text as well). This opens up the possibility of passing a customer to a specialist who can assist them, even if they don’t speak the same language! It’s really cool to see this in action.
Even more wonderfully, this is even stored down to the transcript level:
This really opens up major new scenarios that Omnichannel can be used for, supported entirely by this feature. As I said at the beginning of this post, I’m absolutely excited by it, and we’re already envisioning how it will be able to empower our clients even more.
Do you have any questions around this? Can you think of any scenarios that this could solve for you? Drop a comment below – I’d love to hear!