Recently, with some of the system updates that have come out for Omnichannel, an interesting issue has been observed. Essentially, the Agent Presence (which signifies the availability of the agent) doesn’t load within the system.
This is of course a problem. Without agents being able to set their status, conversation sessions can’t come through to them. As a result, they’re not able to do their work!
It’s an interesting issue, and one that I’ve discussed with several other people who are deeply involved with Omnichannel for Customer Service. We’re not quite sure why it’s happening, but there seem to be several different things going on.
The symptoms range from the presence icon not showing up at all, to it showing up but not being selectable or changeable.
There are already some stellar resources out there around this, so I’d like to link to them rather than just repeat the information!
One of the themes running through the Wave 2 2020 update for Omnichannel is the personalisation aspect. Though systems work just fine on their own, it’s always nice to add a ‘personal touch’ to the parts that we can. Last week I shared how quick replies are now able to be personalised (Personalised Quick Replies). This week I’m going to go into how the sound notifications can be personalised as well!
These seem like just small features, but in my view they bring things to the next level. Examples include the following:
Knowing which channel a customer session has come in through, without needing to open the conversation
Many agents in a contact centre – if everyone is using the same sound, no-one knows if it’s their computer or not!
The difference between a new conversation starting, and a new message being received on an existing conversation
Wanting to ensure that sound volumes aren’t too high, else they’ll disturb other people.
All of these are extremely valid scenarios, along with other ones (such as disabling sound entirely, for example!). Though this seems simple to implement, and isn’t very difficult to set up, there’s a lot of flexibility involved. I’m therefore really happy that this is now available to be used.
So, let’s see how to go about setting it up. There are two parts to this – the Omnichannel Administrator side, and what the Agent can then do.
Omnichannel Administrator
In the Omnichannel Administrator Hub, the administrator should open the Notifications section, and go to the Sound Notification Settings tab:
There’s a single setting there, to toggle sound notifications on or off. Setting it to ‘Yes’ will then show the following section on the screen:
Once it’s enabled, there are then a number of system default options that are automatically loaded. Here the administrator can do the following tasks:
Choose to allow sounds to be played at a per channel level
Change the system default sound notification (more on loading in custom sounds below)
Allow the sound notification to be repeated until the call is answered
Set the maximum volume allowed for the sound (this is a lovely slider control!)
There are of course sound files that come included in the system by default. But what if we’re wanting to upload custom sound files to be used? Well, that’s not a problem. Simply by clicking in the lookup field to select a sound file, we are given the option to upload a new audio file:
Clicking this brings up the Audio File record, which we use for the upload. We need to give it a name & save it, and then we’re given the ability to upload the file itself:
Note: There are specific file types that need to be used, with a maximum file size of 1MB. It does say that for the best experience, the OGG file format should be used. There are plenty of free resources out there to download OGG files, or to convert MP3 files to the OGG format if you need to
Once we’ve uploaded the file, we get presented with a mini player to hear how it sounds. This is really cool!
All of the audio files in the system (both default & custom) are then available for agents to personalise their own experience
Note: If a company wants to upload many different custom audio files, it may be easier to add the Audio Files entity to the sitemap, and then perform this function from there
Note: To prevent agents from uploading their own audio files directly, the Omnichannel Agent security role only allows Read access, not Create/Edit access:
Omnichannel Agent
With the initial system setup performed by the Omnichannel Administrator, agents are then free to go ahead & personalise their own experience. This is done directly within the Omnichannel for Customer Service app, by selecting ‘Personalisation’ from the available menu:
Once this is selected, the agent is presented with a very similar interface to the Omnichannel Administrator:
Here the agent can change the system default for themselves (this does not affect any other Omnichannel users), change the various settings, modify the volume levels, etc.
Once saved, it’s then live & active, and will work as desired.
Incoming message alerts for active sessions
At the bottom of the sound notification settings screen, there is one further setting. This is around the behaviour of sounds for existing conversations:
This can be helpful (from either an overall system perspective, or an individual agent one) for allowing or turning off sounds from conversations that are already happening. Some people might find it very annoying that every time a customer sends a new message through, the system plays a sound. This is especially true when dealing with multiple conversations (which, after all, is what Omnichannel is all about!)
In summary, it’s a really good feature to now have available for use. Obviously I’d suggest not loading rock music into it, for example – unless of course your company specialises in rock music! How do you think this would be beneficial to your users? Drop a comment below – I’d love to hear!
This is a slightly different post from the usual stuff that I talk about. It’s much more ‘techy/developer’ focused, but I thought it would still be quite useful for people to keep in mind.
The background to this comes from a project that I’ve been working on with some colleagues. Part of the project involves setting up an Azure SQL database, and replicating CDS data to it. Why, I hear you ask? Well, there are some downstream systems that may be heavy users of the data, and as we well know, CDS isn’t specifically built to handle a large number of queries against it. In fact, if you start hammering the CDS layer, Microsoft is likely to reach out to ask what exactly you’re trying to do!
Therefore (as most people would do), we’re putting in database layer/s within Azure to handle the volume of data requests that we’re expecting to occur.
So with setting up things like databases, we need to create names for them, along with access credentials. All regular ‘run of the mill’ stuff – no surprises there. To ensure adequate security, we usually use one of a handful of password generators that we keep to hand. These have many advantages, such as ensuring the password isn’t something we (as humans) have dreamt up, which might be easier to guess. I’ve used password generators over the years for many different professional & personal projects, and they really are quite good overall.
Once we had the credentials & everything set up, we then logged in (using SQL Server Management Studio), and all was good. Everything that we needed was in place, and it was looking superb (from the front end, at least).
OK – on to getting the data actually loaded in. To do this, we’re using the Data Export Service (see https://docs.microsoft.com/en-us/power-platform/admin/replicate-data-microsoft-azure-sql-database for further information around this). The reason for using this is that the Data Export Service intelligently synchronises the entire database initially, and thereafter synchronises on a continuous basis as changes occur (delta changes) in the system. This is really good, and means we don’t need to build anything custom to handle it. Wonderful!
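As a purely conceptual sketch (in TypeScript, and definitely not the service’s actual code), the sync model the Data Export Service gives you looks something like this:

```typescript
// Conceptual sketch of the Data Export Service sync model - this is
// NOT the service's actual code, just an illustration of the idea:
// one full initial load, then continuous delta synchronisation.
interface ChangeRecord {
  entity: string;
  id: string;
  operation: "create" | "update" | "delete";
}

// Initial sync: the entire database is copied across once.
async function initialSync(copyAllRecords: () => Promise<void>): Promise<void> {
  await copyAllRecords();
}

// Continuous sync: only the delta changes are pushed to Azure SQL.
async function continuousSync(
  getChangesSince: (token: string) => Promise<{ changes: ChangeRecord[]; nextToken: string }>,
  applyToAzureSql: (change: ChangeRecord) => Promise<void>,
): Promise<void> {
  let token = ""; // watermark of the last change already applied
  while (true) {
    const batch = await getChangesSince(token);
    for (const change of batch.changes) {
      await applyToAzureSql(change);
    }
    token = batch.nextToken;
    await new Promise((resolve) => setTimeout(resolve, 60_000)); // poll interval
  }
}
```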
Setting up the Data Export Service takes a little bit of time. I’m not going to go into the details of how to set it up – instead there’s a wonderful walkthrough by the AMAZING Scott Durow at http://develop1.net/public/post/2016/12/09/Dynamic365-Data-Export-Service. Go take a look at it if you’re needing to find out how to do it.
So we were going through the process. Part of this involves copying the Azure connection string into a script that you run. When you do this, you need to re-insert the password (as Azure doesn’t include it in the string). For our purposes (as we had generated this), we copied/pasted the password, and ran things.
However all we were getting was a red star, and the error message ‘Unable to validate profile’.
As you’d expect, this was HIGHLY frustrating. We started to dig down to see what actual error log/s were available (with hopefully more information on them), but didn’t make much progress there. We logged in through the front end again – yes, no problems there, all was working fine. Back to the Export Service & scripts, but again the error. As you can imagine, we weren’t very positive about this, and were really trying to find out what could possibly be causing this. Was it a system error? Was there something that we had forgotten to do, somewhere, during the initial setup process?
It’s at these sorts of times that self-doubt can start to creep in. Did we miss something small & minor, but that was actually really important? We went over the deployment steps again & again. Each time, we couldn’t find anything that we had missed out. It was getting absolutely exasperating!
Finally, after much trial & error, we narrowed the issue down to one source. It’s something we hadn’t really expected, but had indeed caused all of this to happen!
What happened was that the password that we had auto-generated had a semi-colon (‘;’) in it. In & of itself, that’s not an issue (usually). As we had seen, we were able to log into SSMS (the ‘front-end’) successfully, with no issues at all.
However, when put into code, the semi-colon is treated as a special character (the separator between the different parts of the connection string). The entire password was therefore not being recognised, which caused the whole thing to fail! Resolving this was simple – we regenerated the password, ensuring that it didn’t include a semi-colon character!
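To make the failure mode concrete, here’s a minimal sketch (in TypeScript, with entirely made-up values) of what a naive key=value parser sees when the password contains a semi-colon:

```typescript
// Minimal sketch of the failure mode, using made-up values.
// Connection strings are key=value pairs separated by semi-colons.
const password = "p@ss;word123"; // hypothetical auto-generated password

const connectionString =
  "Server=tcp:myserver.database.windows.net;Database=mydb;" +
  `User ID=myadmin;Password=${password};`;

// A parser that splits on ';' sees the password as two fragments:
const pairs = connectionString.split(";").filter((p) => p.length > 0);
console.log(pairs);
// [ 'Server=tcp:myserver.database.windows.net', 'Database=mydb',
//   'User ID=myadmin', 'Password=p@ss', 'word123' ]
// 'word123' isn't a valid key=value pair, so the profile can't be
// validated - even though SSMS happily accepted the same password.
```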
Now, this is indeed something quite simple, and should be core programming knowledge. Most password generators have an option to avoid this happening, but not all of them do. Unfortunately we had fallen foul of this, but thankfully all was resolved in the end.
The setup then carried on successfully, and we were able (after all of the effort above) to achieve what we had set out to do initially.
Have you ever had a similar issue? Either with passwords, or where something worked through a front-end system, but not in code? Drop a comment below – I’d love to hear!
I’ll be the first to admit that I have limited experience of Dynamics 365 for Marketing. In fact, I think that it would be stretching the description to say that I have even ‘limited experience’! I’ve seen it once or twice, and have attended a few presentations on it, but apart from that, nada.
I do remember what it used to be like in its previous incarnation, but even then I didn’t really touch it. Customer Service (& Sales) are my forte, and I generally stick within those walls. Marketing traditionally was its own individual application, and only more recently has been rolled into the wider Dynamics 365 application suite. Even so, it still sometimes works in a somewhat interesting way, different from the rest of the system.
Inevitably I’ve had to actually do something with it for a client project, which has brought me to putting up this post. We had created a few marketing forms, surfaced them correctly, etc. It was great, and working well.
Then we realised that we needed to capture some additional information, in this case a list of Countries. There’s no standard entity for it within Dynamics 365, so we created our own, and loaded a list of countries (& associated data) into it. Fine – that was working without issues, including in the places that we needed to surface it.
Then we came to needing to surface the Country value on a marketing form, through a lookup. Simple, you’d have thought? Well, not so much. We went to create the field, and were presented with the following error as we did so:
The error says: ‘The role marketing services user does not have access to the entities you’ve chosen…’
In essence, the system was telling us that we weren’t able to access the entity. Though Country is a custom entity, we were logged in as users with the System Administrator role (which automatically has access to ALL entities). This left us puzzling over what to do.
The error message, thankfully, was quite clear. It was referring to a specific security role missing privileges. In this case, it was the ‘Marketing Services User’. I therefore went to check the permissions for it, and sure enough, it didn’t have permissions on the Country entity that I had created!
Now, usually if a security role is missing permissions, what we’d do is create a custom security role (usually copying the existing role), and add the necessary permissions to it. Best practice is NOT to edit the default security roles. The (main) reason behind this is that Microsoft could update the security role in a later update/release, which could then impact us. We therefore use custom roles to avoid this happening (& yes, I’ve seen it happen & cause impact in practice!).
The fly in the soup here (lovely phrase, I know) is that we couldn’t do that in this case. It seems that Dynamics 365 for Marketing relies on this specific underlying security role. Even if we had implemented a custom role, we had no idea how to tell the system to actually use it, rather than the default one that it currently uses. Quite frustrating, I tell you!
So in the end we decided to give the default security role the necessary permissions, and see what happened:
With the security permissions granted to the role, & the role saved, we then attempted to create the marketing form field. This time, we were successful! No errors occurred, thankfully:
So in summary, I still have no idea why this happened. I’ve taken a look around, but can’t find anything obvious as to how/why it actually works like this. I guess I’d need to dig ‘under the hood’ somewhat to see what’s actually going on, and how to deal with it appropriately. For the moment, the solution is in place, and is working.
We’ve also been very careful (as mentioned above) to add just the specific custom entity to the default security role. We haven’t touched anything else within it – all other security permissions are done (as per best practice) with custom security roles, which are then allocated appropriately to users &/or teams. Hopefully this will be fine in the long term, though we’ll definitely be keeping our eyes on it to make sure!
Have you ever come across something like this? How did you decide to go about solving it? Drop a comment below – I’d love to hear!
Update: Thanks to the amazing Carl Cookson, it turns out that this is due to an update from Microsoft in how Marketing works. See https://docs.microsoft.com/en-gb/dynamics365/marketing/marketing-fields for more information around it. Essentially Marketing uses this role to sync to the Azure-staged Marketing service, so the role needs to have the appropriate permissions.
How to start off this post? I’ve been trying to work out how exactly I can express my excitement around this new feature for Omnichannel. Included in the Wave 2 2020 release, it’s just AMAZING. That, however, doesn’t do it true justice. So let’s see how I can describe it properly, to give it due respect.
Previously I’ve mentioned the ability to use skills within Omnichannel (see https://thecrm.ninja/omnichannel-for-dynamics-365-queues-users-skills/). This can be used to indicate, for example, agents who can communicate in a certain language. That’s useful of course, but what happens when you don’t have anyone who can speak the language that the customer wants to use? It’s a problem, and one that’s really not easily solved. At least, not until now.
So, what exactly does this new translation feature do? Simple – it translates from one language to another. OK, it’s actually a little more awesome than just that. Having delved into it quite a bit over the last week or so, there are (in my view) three main benefits (with a bonus one as well!):
It translates incoming text from the customer (through chat) from the language that they’re using to the language that the agent is using
It translates outgoing text from the agent (through chat) from the language that the agent is using to the language that the customer is using
It translates text between agents from one language to the other & vice versa (eg on an internal consult)
Now for the bonus. It doesn’t just translate text from one language to another – it follows the languages being used! So if the customer switches to a different language mid-conversation, the system picks it up. Not only is the new incoming language translated into the agent’s language, but the replies from the agent are shown in the (new) language being used by the customer. It’ll automatically show text in the ‘last used’ language, which is really quite incredible (at least in my opinion).
There’s no fiddling around with agents needing to select the language required, or anything else. It’s a simple click to turn it on, and then another click to turn it off. I’m going to go through the setup below, as there are a few fiddly bits that did confuse me for a while.
It’s also possible to use different translation tools. At the time of writing this post, it’s possible to use Bing, Google or Azure translation models. I’m sure that there will be other options available in the future as well to use, which really opens up possibilities for clients with differing digital estates.
Translation happens in real time, so there’s no waiting around for it to actually get on with it. It’s displayed immediately on the screen for the agent to see.
Setup for translation
I found the general guides to be alright, but they weren’t too clear on a few items. I’m therefore sharing below how I went about it, in order to get things working properly. Please be aware that this isn’t in the order specified in the documentation, but in retrospect it means less switching between screens:
Ensure that you have the latest updates to your Omnichannel environment (this is always a good idea, regardless of anything else!)
Ensure you have an API key to enter into the web resource file! This is what tripped me up at first. You can use any text editor (I use Notepad++) to open it up. How you get the API key will depend on the provider. For example, to set up a free account in Azure, take a look at https://docs.microsoft.com/en-us/azure/cognitive-services/translator/translator-how-to-signup (see the sketch after these steps for a quick way to sanity-check the key). There are also some additional things that you can configure in the web resource file, but I’m not going to go into that here
Go to your solutions (this can be either through the Classic interface, or through http://make.powerapps.com). You can either create a new solution to hold the web resource file, or add the web resource file to an existing solution that you’ll be deploying. Either:
In the classic interface, navigate to Web Resources, click to create a new web resource, and upload the file (ensure you select the type to be ‘Script (JScript)’, or
In the modern interface, click the ‘New’ button, select ‘Web Resource’ from the ‘Other’ section, and then follow the steps above
Once it’s saved, it’ll give you a URL. Copy that, and publish the solution.
Go to the Omnichannel Administration Hub, find ‘Real Time Translation’ under Settings, and set this to Yes. You can also select a default input language from the selection. Also enter the URL that you copied above. Save it
You’re all done!
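One extra tip on the API key step above: before pasting the key into the web resource, it can be worth sanity-checking that it actually works. Here’s a small TypeScript sketch that calls the Azure Translator v3 REST endpoint directly (this assumes an Azure key – Bing/Google keys would be checked against their own endpoints, and the region value is just an example):

```typescript
// Quick sanity check of an Azure Translator key, before pasting it
// into the web resource. Replace the key & region with your own values.
const endpoint =
  "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr";

async function testKey(apiKey: string, region: string): Promise<void> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": apiKey,
      "Ocp-Apim-Subscription-Region": region, // needed for regional resources
      "Content-Type": "application/json",
    },
    body: JSON.stringify([{ Text: "Hello, how can I help you today?" }]),
  });
  console.log(await response.json());
  // A valid key returns a translations array; a bad key returns a 401 error.
}

testKey("<your-api-key>", "westeurope");
```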
Agent Experience
Depending on how you’ve configured your web resource, auto-translation will either be on by default, or off. If it’s not on by default, the agent can simply click within their chat window to set it to active:
Once active, it’ll then start to translate everything, in both directions. Below are side by side screens of the customer & agent experiences. You’ll note that the customer is seeing the initial agent response in English, as the agent was the first in the conversation
From the agent’s side of things, both the original text and the translated text are shown. The customer is only shown the language that they’re actually using
If the agent isn’t sure what language the customer is using (as it’s being auto-translated for them), they can hover over the text, and it’ll show the details for it:
If the agent consults with, or transfers the session to, another agent, the second agent will see the conversation in the language that they themselves are using (with the original text shown as well). This opens up the possibility of passing a customer to a specialist to assist them, even if they don’t speak the same language! It’s really cool to see this in action.
Even more wonderfully, this is even stored down to the transcript level:
This is really opening up major new concepts that Omnichannel can be used for, which will be supported entirely by this feature. As I said at the beginning of this post, I’m absolutely excited for it, and we’re already envisioning how this will be able to empower our clients even more.
Do you have any questions around this? Can you think of any scenarios that this could solve for you? Drop a comment below – I’d love to hear!
Well, the last week has been quite busy, on many fronts! One of those is having a few new exams come out in Beta. I’ve already taken the PL-400 (see PL-400: Microsoft Power Platform Developer Exam for my review of it). Last Friday, the new PL-200 exam was released as well, so I scheduled it in for as soon as I could sit it.
Now, the PL-200 is scheduled to replace the MB-200 exam at the end of this year (2020), assuming it comes out of beta by then of course. I remember sitting my MB-200, though I didn’t write it up at the time. Compared to some of the other exams I’ve taken, it was hefty. I’ll freely admit that I didn’t pass on my first go of it – it took me 3 tries! People will be required to take this as a pre-requisite for attaining the Microsoft Certified: Power Platform Functional Consultant Associate badge.
So I’ve been expecting this new PL-200 to be quite similar, but with more of a Power Platform focus. It’s still heavy on Dynamics 365, and I wasn’t expecting that part to change. The existing MB-2xx series are also staying in place (for the moment, anyhow).
According to the official description for the exam:
Candidates for this exam perform discovery, capture requirements, engage subject matter experts and stakeholders, translate requirements, and configure Power Platform solutions and apps. They create application enhancements, custom user experiences, system integrations, data conversions, custom process automation, and custom visualizations.
Candidates implement the design provided by and in collaboration with a solution architect and the standards, branding, and artifacts established by User Experience Designers. They design integrations to provide seamless integration with third party applications and services.
Candidates actively collaborate with quality assurance team members to ensure that solutions meet functional and non-functional requirements. They identify, generate, and deliver artifacts for packaging and deployment to DevOps engineers, and provide operations and maintenance training to Power Platform administrators.
The official Microsoft Learn page for the exam is at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-200, and I’d highly recommend people to go check it out. I didn’t use it that much, but felt that I was on reasonable grounds with existing knowledge. It’s mostly there, but (at least in my exam) there were some sneaky extras that I was NOT really expecting. Hopefully I managed to get them (mostly) accurate!
Once again, I sat the exam through the proctored option (ie from home). The experience went without issues for once – sign in was fine, no issues with my headset during check-in, exam loaded & worked without problems at all.
So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). I’ve tried to group things together as best as possible for the different subject areas.
Environments
Different types of environments, what each one is used for, how to set/switch them between the different types
How to handle security/restrict access as necessary
Field types. All of the available field types, the benefits of each, and when each type should be used
Data storage types. Differences between Office documents (eg Excel), CDS, SQL Server, Azure SQL. When to use each one best
Charts. How they’re set up, how they can be shared with other users.
System views. What these are, who can access them, how to set them up
Entity forms. The different types of forms available, how to set them up, limitations of each. When each one should be used for a given scenario
Model apps. Site map. What this is, how it’s used. Implementing/customising it, the different controls available & what each one does
Entity editable grids
What these are, how they can be used, how to enable & set them up
Limitations that they have within the system
Entity/record ownership. The different types of ownerships available, benefits of each, when each should be used for a given scenario
Data management
Data importing from different sources, different methods to import data
What is data mapping for import, and how it’s used
Duplicate detection. What it is, what it does, how it works. How to implement & configure it
Microsoft Word templates. How they can interact with Dynamics 365, how to set them up/adjust them, what they can be used for
Canvas Apps
Expression/function types, what they are, how they’re used
Handling data (eg collections)
Offline usage & data storage
Controls that can be used, navigating around, loading/saving data.
Power Virtual Agent/Chatbots.
Setting them up, deploying them onto websites, deploying them into Teams
System auditing, what it is, how it’s used, how to implement & configure
How to access & run user audit log reports
Power BI. Setting up & sharing dashboards, setting up & configuring alerts, security options/roles & how they work with data
Dynamics 365 integrations. What other systems can integrate directly with Dynamics 365, & any limitations that they may have
The main surprise for me was mostly around the UI flows, and the various questions I had on them. I’ve not played around with them (yet!), but they are really cool!
If you’re going to take this, I’d love to hear how your experience of it went. Drop a comment below for me to see!
I’ve been continuing with taking new exams as they come out. Having recently taken the MB-400 exam (see MB-400 Power Apps & Dynamics 365 Developer Exam), I was slightly surprised to see the announcement that it was going to be replaced!
Admittedly, I was also surprised (in a good way) that I passed the MB-400, not being a developer! It’s been quite amusing to tell people that I’m a certified Microsoft Dynamics Developer. It definitely puts a certain look on their faces, which always cracks me up.
Then again, the general approach seems to be to move all of the ‘traditional’ Dynamics 365 exams to the new Power Platform (PL) format. This includes obviously re-doing the exams to be more Power Platform centric, covering the different parts of the platform than just the ‘first party apps’. It’s going to be interesting to see how this landscape extends & matures over time.
The learning path came out in the summer, and is located at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-400. It’s actually quite good. There’s quite a lot that overlaps with the MB-400 exam material, as well as the information that’s recently been covered by Julian Sharp & Joe Griffin.
The official description of the exam is:
Candidates for this exam design, develop, secure, and troubleshoot Power Platform solutions. Candidates implement components of a solution, including application enhancements, custom user experience, system integrations, data conversions, custom process automation, and custom visualizations.
Candidates must have strong applied knowledge of Power Platform services, including in-depth understanding of capabilities, boundaries, and constraints. Candidates should have a basic understanding of DevOps practices for Power Platform.
Candidates should have development experience that includes Power Platform services, JavaScript, JSON, TypeScript, C#, HTML, .NET, Microsoft Azure, Microsoft 365, RESTful web services, ASP.NET, and Microsoft Power BI.
So the PL-400 was announced on the Wednesday of Ignite this year (at least in my timezone). Waking up to hear of the announcement, I went right ahead to book it! Unfortunately, there seemed to be some issues with the Pearson Vue booking system. It took around 12 hours to be sorted out, & I then managed to get it booked Wednesday evening, to take it Thursday.
So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change.
There were a few glitches during the actual exam. One or two questions with answers that didn’t make sense (eg line 30 does X, but the code sample finished at line 18), and question numbers that seemed to jump back & forth (first time it’s happened to me). I guess that I’ve gotten used to at least ONE glitch happening somewhere, so this was par for the course.
I’ve tried to group things together as best as I can (from my recollection), to make it easier to revise.
Model Apps.
Charts. How they work, what drives them, what they need in order to actually work, configuring them
Visualisation components for forms. What they are, examples of them, what each one does, when to use each one
Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
Entity alternate keys. What these are, when they should be used, how to set them up & configure them
Business Process Flows. What these are, how they can be used across different scenarios, limitations of them
Business Rules. What these are, how they can be used across different scenarios, limitations of them
Canvas apps
Different code types, expressions, how to use them & when to use them
Network connectivity, & how to handle this correctly within the app for data capture (this was an interesting one, which I’ve actually been looking at for a client project!)
Power Apps solution checker. How to run it, how to handle issues identified in it
Power Automate
Connectors – what these are, how to use them, security around them, querying/returning results in the correct way
Triggers. What is a trigger, how do they work, when to use/not use them
Actions. What these are, how they can be used, examples of them
Conditions. What these are, how to use them, types of conditions/expressions/data
Timeouts. How to use them, when to use them, how to configure
Power Virtual Agents. How to set them up, how to configure them, how to deploy them, how to connect them to other systems
Power App Portals. Different types, how to set them up, how to configure them, how they can work with underlying data & users
Solutions
Managed, unmanaged, differences between them, how to use each one.
Deploying solutions. Different methods that can be used to do it, best practice for each, when to use each one
Package Deployer & how to use it correctly
Security.
All of the different security types within Dynamics 365/Power Platform. Roles/Teams/Environment/Field level. How to set up, configure, use in the right way.
Hierarchy security
Wider platform security. How to use Azure Active Directory for authentication methods, what to know around this, how to set it up correctly to interact with CDS/Dynamics 365
What authentication methods are allowed, when/how they can be used, how to configure them
‘Development type stuff’
APIs. The different APIs that can be used, methods that are valid with each one, the Organisation service
Discovery URLs. What these are, which ones are able to be used, how they’d be used/queried
Plugins. How to set up, how to register, how to deploy. Steps needed for each
Plugin debugging/troubleshooting. Synchronous vs asynchronous
Component types. Actions/conditions/expressions/data operations. What these are, when each is used
JavaScript web resources. How to use these correctly, how to set them up on entities/forms/fields
Power Apps Component Framework (PCF). What these are, how to develop them, how to use them in the right way
System Design
Entity relationship types. What they are, what each one does, how they work, when to use them appropriately. Tools that can be used to display them for system design purposes
Storage considerations across different types, including CDS & Azure options
Azure items
Azure Consumption API. How to monitor, how to handle, how to change/update
Azure Event Grid. What it is, the different ways in which it can be used, when each source should be used
Dynamics 365 for Finance. Native functionality included in it
The biggest surprise that I had really when thinking back to things was the inclusion of Dynamics 365 for Finance in it. Generally the world is split into ‘front of house’ (being Dynamics 365/Power Platform), and ‘back of house’ (Dynamics 365 for Finance & Supply Chain Management). The two don’t really overlap, though they’re supposed to be coming more together over time. Being that this is going to happen, I guess it’s only natural that exam questions around each other will come up!
Overall it was quite a good exam. Some of the more ‘code-style’ questions were somewhat out of my comfort zone, and I’ll freely admit to guessing some of the answers around them! Time will tell, as they say, to see how I’ve done in it.
I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it!
As a starter for 10, this wasn’t actually the blog post that I was going to write today. In fact, the subject of the post wasn’t even going to be about Power Automate! However, there was some really amazing news that dropped today from Microsoft, which I just couldn’t pass up being able to talk about.
You’ve guessed it – it’s about Power Automate! Well, I suppose that the post title was somewhat of a giveaway, wasn’t it…ah well. So let’s go ahead and find out what this is all about then!
To date, we’ve been able to put Power Automate flows into a solution. Well, it wasn’t there exactly at the beginning of things, but it happened somewhere along the way. This was very convenient, as we didn’t then need to deploy each one individually to different environments. Some solutions can contain dozens & dozens of flows, and we really do love to package them all up together for ease of movement.
So that was good. But there was still a (major) ‘bugbear’ (as I like to refer to them as). This is the fact that after we deploy a Power Automate flow, we then need to go into it & (re)authenticate it. This is due to the fact that the connector/s that it uses contains what is referred to as a ‘secret’, and these can’t be moved across environments. As a result, we need to essentially recreate the ‘secret’ in the connector (ie authentication details) every time we move it. This is an annoyance (if you have one or two flows), and an absolute bloody nightmare if you have lots.
For the technical minded – every action in a flow is bound to a specific instance of a connection that it will use to “execute” that action. This is why when moving flows across environments, users are required to rebind every operation to a connection.
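To picture that binding (and this is a loose illustration only – the property names below are mine, not the actual flow definition schema), the shift looks something like this:

```typescript
// Loose illustration only - the property names here are made up,
// not the actual flow definition schema.

// Before: each action's binding points at a concrete connection,
// which is environment-specific & breaks when the flow is imported.
const actionBindingBefore = {
  connector: "shared_office365", // eg Office 365 Outlook
  connection: { id: "1a2b3c-dev-environment-connection" },
};

// After: the flow points at a connection reference's logical name.
const actionBindingAfter = {
  connector: "shared_office365",
  connectionReferenceLogicalName: "new_outlook_connref",
};

// The reference itself is bound to a real connection once per
// environment (at solution import time), not once per flow.
const connectionReference = {
  logicalName: "new_outlook_connref",
  boundConnectionId: "9z8y7x-prod-environment-connection", // set on import
};
```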
For example, I’ve been working with COVID-19 triage solutions. These contain lots of flows within them, connecting to multiple different sources, and doing different things. Every time we’ve performed a release (even if it’s just a simple update), we’ve needed to manually go through each flow, (re)authenticate them, and turn them on. If you forget one, then everything can come crashing down & not work! But there’s been no other way to do it. To represent this visually, we have the following diagram:
For each & every Power Automate, the connection line gets ‘broken’ when it’s deployed, and needs to be re-made.
Until now, that is. For today, Microsoft has announced the Public Preview for ‘Connection References’. Now when something is put into Preview, I usually caveat the usage of it with saying things like ‘it might go away, or not be released for a while’. But I’m going to be quietly confident about this particular piece of functionality, as I really don’t think it’s going to be pulled!
So what exactly are these? Well, in (mostly) simple terms, Connection References provide an ‘in-between’ or ‘abstraction’ layer for the connections that use them. Let’s show this visually as well
We still need to re-authenticate the Connection Reference once we deploy things. But let’s now see how we can save ourselves a massive headache, and LOTS of time:
Oooo…now this is looking better. Instead of having to update three Power Automate flows, we only have to update the SINGLE Connection Reference that’s sitting in the middle. Now multiply that by however many flows you have (eg sending emails out, etc), and start calculating how much time you’ll now be able to spend on coffee breaks, rather than doing this manually one at a time…
We can create Connection References directly from within the solution:
We then give it a name & description, choose which connector we’re going to be using, and either select an existing connection or set a new one up:
Once we’re finished, we click ‘Create’ at the bottom. Voila – we can now see it within our solution!
Note: Interestingly enough, I couldn’t actually see this within the solution after I created it, even with the component selector set to show ‘All’. How I actually got them to display was changing the component selector to ‘Connection Reference’, and they then showed up. I’m thinking this is due to it being new today/in the process of rolling out, and am expecting it to display without any issues in the near future
Let’s take a look at a Power Automate flow itself now to see how it’s referenced. When we open an item with a connector, we can now see the following:
We’re able to select the Connection Reference that we’re wanting to use. Simple, yet so powerful.
When importing a solution containing a Connection Reference, we will be prompted during the import process to set the actual connection that should be used with it:
If you don’t have any connections set up already in the environment, you’ll be able to create a new one from the dropdown.
Some things to note around this:
During the preview phase, Microsoft has specified that a single Connection Reference can only be used by up to 16 flows. This limitation will be removed once it goes GA
Existing flows will not be automatically upgraded. What you can do though is export the unmanaged solution, re-import it to the same environment, and then they will be automatically created for you. The flow/s can then be edited to update them to the correct connection reference record
The connection name and the connection reference name are not currently synchronised, so they can differ. It’s therefore best to keep the naming consistent – don’t set different names for connections and their associated connection references.
In summary – this is an awesome step forward with Power Automate functionality. I’m already tasking some of the developers on the team to re-do existing solutions to use it for ease of use. How do you think it’ll best benefit you? Drop a comment below!
I’ve recently been spending time looking at, and talking about, how we can handle company hours within Omnichannel. This has covered both how to use them within chat (Handling Company Hours) as well as being able to change the chat widget functionality (Handling ‘Out of Hours’).
Imagine my surprise, therefore, when someone asked me ‘how do we go about setting them up properly?’. When I originally looked at how to use them within chat, I used the Quick Create functionality. I had meant to come back and look at it in more detail, but that somehow fell by the wayside. So, I’m now going to make up for it!
As a quick recap – Operating Hours are what we set to show when the company is ‘open’ (or for our purposes, active). This doesn’t need to reflect the actual store hours that might exist – customer support could well start before/end after the normal store times. It’s also the case that we usually can’t just set blanket times – we’ll need to handle holidays, seasonal occasions, etc. This is where Operating Hours really comes into its own.
So to start off, it’s simple to enter operating hours. Really simple. We go to the navigation area, select it, and click ‘New’:
We’ll create a new record, and click Save:
Once we do that, the magic starts to happen – we get to see the ‘Working Hours’ tab. Clicking on it will give us the following screen (which I can only describe as absolutely amazing!):
I don’t know about you, but I’m loving being able to see the hours for each day in a calendar-style view. It’s so much easier than needing to scroll down a list of records, trying to find a specific date. It’s also much simpler for the eye to follow/see.
At the top, we can navigate between dates, change the view to switch between a specific day, week or month, and enter new information:
There are two options for inputting new settings here:
Working Hours
Holiday
For working hours, we can input the times, whether it repeats or not, and whether it’s a full day event or not:
We can also edit an existing Working Hours entry simply by clicking on it to change it. When we do this, we get the option as to whether to modify the single item that we’ve selected, or the entire series:
It’s important to note that we’re not limited to entering just a single range per day. We can enter multiple records for a single date, or a date range, to fit what we’re actually trying to do.
For Holidays, we don’t need as many options. We assume that by setting holiday, the company is closed. We’re therefore prompted just for a date (range) to then set this:
So what we then do is build up our calendar. This will result in (hopefully!) a full overview of our company, that we can then use.
What’s important to remember is that we could have different dimensions to our company though. We may allow Sales to be open 20 hours, but Customer Service to be open only for 12 hours.
We’d therefore create multiple Operating Hour entries, one for each requirement, and point each channel towards the applicable record. If we only have a single scenario to handle, we can point multiple channels towards the same operating hours record – that’s not a problem at all.
So with this, we can really tweak operating hours as we need to, for each possible usage. It’s really powerful, so easy to set up, and gives us full control over things.
Have you ever struggled with something like this? How did you overcome it? Drop a comment below – I’d love to hear!
We’ve all been there. We’re in the middle of a chat session with a support agent, or talking to a salesperson, etc. Suddenly things go wrong – our browser hangs, the internet loses connection, or something else…
Alternatively, I do know of situations where kids have pulled out the internet cables during ‘playtime’ – it really does happen!
Immediately we’re frustrated. Not only have we not finished what we were trying to achieve, but we’re going to need to start all over again. Perhaps the agent took notes & logged them against our contact record, but the likelihood is that it hasn’t happened. It’s going to take time to get through to an agent again, then we have to explain the whole situation from the absolute beginning. It’s heartrending, and can cause our day to absolutely go down the tubes!
Well, what if we could just re-connect to the chat session with all our data saved? Better still, what if we could go back and continue chatting with the specific agent that we had been communicating with? Sounds amazing, but wishful, right?
Well, we now have this ability within Omnichannel, letting us empower our customers even further. There are even two ways in which we can offer this:
Reconnecting with a link (URL). If the agent is concerned that the chat session may be interrupted, they can provide a URL at the start of the session. If the customer becomes disconnected from the session for whatever reason, they can click the link, and it’ll take them right back to it. This works for both authenticated & unauthenticated users
Reconnecting through a prompt. For authenticated chat users, if the session drops they can be presented with a prompt. This will allow them to choose whether to connect to the previous session, or start a new session.
Let’s take a look at it, and how it works.
In the Omnichannel Administration Centre, we need to go to the specific Chat record that we’re wanting to set this up for. We open the record, and are now presented with the following (we do need to scroll down the screen a bit):
Note that this is in Preview currently, so just be a bit careful with it!
There are several options available. We don’t need to use each one, but let’s understand what each one does:
Turn on reconnect to previous chat. This is the option to enable if we’re wanting to offer this. Without it set, it’s not going to work!
Reconnect time limit. How long we’ll offer the option to the customer to reconnect for. See the note below around this
Reconnect to previous agent for. How long we’ll allow the customer to connect back to the same agent. This needs to be equal to or less than the ‘Reconnect Time Limit’ value that we’ve set. During this period of time, the agent’s capacity is blocked, unless the agent uses the ‘Close’ button on their interface to end the conversation (which then releases the agent’s availability)
Portal URL. As mentioned higher up, the agent can provide a URL for the customer to auto-reconnect if the session drops. This value is the URL that the chat widget is deployed to
Redirection URL. If the connection drops, and the re-connection timeout occurs, we can redirect the customer to a specific web-page. If this isn’t set, the customer will see the option to start a new chat conversation
Note: The ‘Reconnect Time Limit’ value is auto-set by the system to the value specified in the work-stream that’s associated with the chat widget. It’s not possible to manually change this in the chat widget itself. Instead, the work-stream ‘Auto-close after inactivity’ value would need to be changed. This is shown below:
Note: It’s also important that the customer hasn’t closed THEIR chat window! All of this relies on the customer chat still being there. If the customer has closed their window/browser, they won’t be offered this option.
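As a final illustration, the reconnect link an agent shares is essentially the chat widget’s portal URL with a unique conversation identifier appended. Here’s a hypothetical sketch (the actual query parameter name that Omnichannel uses may well differ):

```typescript
// Hypothetical sketch of how a reconnect link is composed - the real
// query parameter name used by Omnichannel may differ.
const portalUrl = "https://contoso.example.com/support"; // the 'Portal URL' setting
const reconnectId = "0f8fad5b-d9cb-469f-a165-70867728950e"; // per-conversation id

const reconnectLink = `${portalUrl}/?reconnectId=${reconnectId}`;
console.log(reconnectLink);
// Opening this link within the 'Reconnect time limit' window takes the
// customer straight back into their previous conversation; after that,
// they're sent to the 'Redirection URL' (if one has been set).
```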
Have you ever needed to offer customer capability along these lines? How did you go about it? Drop a comment below – I’d love to hear!