Developer Environment Deletion!

Strong title for a blog post, right? Well, I did want to catch your attention! So what exactly are we talking about here?

For the last few years, it’s been possible for users to sign up for a ‘Developer’ plan, which gives them a full-capability Power Platform environment for free (though with some limitations). This used to be called the ‘Community’ plan, and is an amazing resource for everyone, whether they’re a professional or citizen developer, to have their own personal ‘sandpit’ to play in, and try things out.

Let’s wind back a few months in time now – earlier this year, Microsoft announced that users would be able to create THREE of these Developer environments, rather than having just a single one! This was mind-blowing news, and something that has been warmly welcomed. If you’re wanting to see more on the announcement, Phil Topness has a great video on it at Dataverse Environments For Everyone – New Developer Plan – Power CAT Live – YouTube.

Incidentally, I’m curious as to how much storage space Microsoft has in the background to handle these. After all, each environment takes up a minimum of 1 GB of space (& can grow to 2 GB). That means that each user could have up to 6 GB of storage in use (three environments at up to 2 GB each)…which, when multiplied across all users, gives a VERY large number!

Microsoft has now announced that these developer environments, however, need to be utilised. Ie if they’ve been created, but aren’t being used, Microsoft is going to delete them! Now, from a certain perspective, this is actually quite good – after all, there are the storage considerations for environments that have been created, but aren’t being used. However, from a different perspective, this could be a problem. What about if you’re doing something occasionally in an environment, but not too often? What about if you decide to go on a ‘Round the World’ cruise for several months?

So let’s look at the definition for this. Microsoft states that an environment is considered to be inactive when it hasn’t been used for 90 days. At that point in time, it is disabled, and the administrator or environment owner is notified. If there is no action taken within the next 30 days, then the developer environment is automatically deleted.

Now, how does Microsoft define ‘Activity’? It goes something like this:

  • User activity: Launch an app, execute a flow (whether automatic or not), chat with a Power Virtual Agents bot
  • Maker activity: Create, read, update, or delete an app, flow (desktop and cloud flows), Power Virtual Agents bot, custom connector
  • Admin activity: Environment operations such as copy, delete, back up, recover, reset

The above is all user-driven – ie a user needs to interact with something within the environment. However, it’s also important to note how automation is viewed:

  • Activity includes automated behaviors such as scheduled flow runs. For example, if there’s no user, maker, or admin activity in an environment, but it contains a cloud flow that runs daily, then the environment is considered active.

It’s also important to note that at this point in time, the above only applies to Developer environments. Other types of environment (Production, Sandbox etc) don’t have any auto-deletion policies called out for them – well, at least not yet (if something does pop up around these, I’ll definitely look to talk about them too!).

So to answer our question above about what happens with a (developer) environment that is only being used infrequently – the way to stop it being auto-deleted is to put some automation in place. This doesn’t need to be anything heavyweight – it can be something simple & easy, just to ensure that the environment registers activity happening within it.
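To make that concrete, here’s a minimal sketch of the sort of ‘ping’ I mean. The documented-safe route is a scheduled cloud flow INSIDE the environment (as per the note above, a daily flow run counts as activity). Purely as an illustration though, the Python below shows an external equivalent, run daily from a scheduler – the environment URL & token are placeholders, and whether a raw API call registers as activity is an assumption on my part, so the in-environment flow remains the safer bet:

```python
import requests

# Hypothetical environment URL & token - both are placeholders for illustration.
ORG_URL = "https://yourdevenv.crm.dynamics.com"
TOKEN = "<bearer token from Azure AD>"

# WhoAmI is about the lightest-weight call the Dataverse Web API offers - it
# just returns the IDs of the calling user, business unit & organisation.
# NOTE: whether an unattended API call counts as 'activity' is an assumption;
# a scheduled cloud flow inside the environment is the documented approach.
response = requests.get(
    f"{ORG_URL}/api/data/v9.2/WhoAmI",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
print(response.json()["UserId"])
```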

In my view, it would be nice to have some granularity & control over this as well – allowing organisations to set their own deletion policies. We have this in place for things like audit log retention – it would be nice to have it in here too.

PL-500: Microsoft Power Automate RPA Developer

RPA (or Robotic Process Automation) is a capability that Microsoft has been developing for a while within the Power Platform space. Whilst cloud flows can be used to interact with any system that has an API in place, many organisations have (legacy) systems that have no API, so interacting with them can be challenging. RPA capabilities allow organisations to interact with any system overall, thereby enabling & empowering businesses holistically.

I’ve been aware for a while that there’s been an exam coming out for RPA, though it’s taken a bit of time to land. That’s fine though – I can’t really think of any absolute rush to have it in place. I do think that over time, just as with some of the other certifications, it will become a requirement for solution or specialisation status.

The official page for it is at https://docs.microsoft.com/en-us/certifications/exams/pl-500. The specification for it is:

Candidates for this exam automate time-consuming and repetitive tasks by using Microsoft Power Automate. They review solution requirements, create process documentation, and design, develop, troubleshoot, and evaluate solutions.

Candidates work with business stakeholders to improve and automate business workflows. They collaborate with administrators to deploy solutions to production environments, and they support solutions.

Additionally, candidates should have experience with JSON, cloud flows and desktop flows, integrating solutions with REST and SOAP services, analyzing data by using Microsoft Excel, VBScript, Visual Basic for Applications (VBA), HTML, JavaScript, one or more programming languages, and the Microsoft Power Platform suite of tools (AI Builder, Power Apps, Dataverse, and Power Virtual Agents).

Now here’s the thing. I occasionally work in the automation space, either on customer projects, or when training users in the technologies. I wouldn’t describe myself as an advanced automation developer (whether cloud or RPA capabilities). I’m most definitely NOWHERE near the level of legends such as Matt Collins-Jones, for example (go check him out if you don’t know about him!).

So I knew that I might be a bit challenged when taking the exam, especially in the more ‘pro dev’ space (JSON etc). In fact, I didn’t actually realise that the exam specification included that sort of thing. I know, I should have – it’s aimed at developers overall…shows that I need to brush up on reading things properly!

Also, there’s still quite a bit of a focus on Power Automate cloud flows – it’s not JUST about RPA capabilities.

Now, really nicely, there are already Microsoft Learn pathways available (which have been around for a while, and updated appropriately). This really is a big help, I feel, especially for people who are new’ish to RPA.

Of course, there’s a lovely shiny two star badge awarded when passing the exam, along with the title of ‘Microsoft Certified: Power Automate RPA Developer Associate’:

As with previous exams, I sat it from home (the proctored experience). Learning from previous times that I’ve taken exams, I ensured that my workspace was entirely clear from everything. As a result, the check-in process happened automatically, and I didn’t need to engage with any proctors at all (which was quite nice actually).

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

  • Cloud flows vs RPA flows
    • Capabilities of each
    • When to use each (ie how to handle different scenarios)
    • How to trigger each one
  • Cloud flows
    • Different types of triggers, & when each type should be used
    • Different types of actions, and the capabilities of them (at a high’ish level – expected to know common Microsoft actions, but not all of the hundreds of different ones!)
    • Controls/operators. What they are, how they can be used to accomplish different requirements
    • JSON formatting & syntax
  • Business Process Flows vs Business Rules
    • What each is
    • When to use each one
    • Capabilities
  • RPA flows
    • Common actions, how they work, capabilities of them
    • How expression syntax works within them
    • Debugging capabilities, and what to use when
    • How to interact with desktop applications
    • How to interact with websites
      • How data values can be used
      • How data tables can be used
      • How to use data that’s extracted from a website
    • Troubleshooting functionality
  • Usage of automation capabilities from Office 365 applications such as Excel & Visio
  • Loops
    • How they work for cloud & RPA flows
    • Troubleshooting
    • Implementing success/fail criteria
    • Error handling
  • Process Advisor
    • What it is
    • What it does
    • How it can help organisations
    • Limitations
    • What it cannot do
    • Process Mining vs Task Mining, & the important differences between them
  • Variables
    • How to handle variables across different environments
    • How to declare them (cloud flow vs RPA flow)
  • Runtime operations
    • How flows are triggered (async vs sync)
    • How flows are queued (cloud vs RPA)
    • How RPA flows are carried out when using machine groups
  • Artificial Intelligence (AI) capabilities
    • How AI can be used within flows
    • Different AI capability types (what each one can be used for)
    • AI within Power Platform, & AI within Azure Cognitive Services
  • Sharing flows
    • Different ways to share cloud flows
    • Different ways to share RPA flows
  • Application Lifecycle Management (ALM)
    • Solutions (managed vs unmanaged). Capabilities of each, when to use each type
    • Azure DevOps (ADO). What it is, when/how to use it, capabilities
    • Solution imports
    • Solution layers. What these are, troubleshooting functionality
    • Upgrade/Stage for Upgrade/Update. What each is, what each does, how/when to use each one
    • Moving desktop flows between users
  • Security
    • Security roles needed to create
    • Security roles needed to share/modify
    • Security roles needed to register machine for RPA
    • Security roles needed to register machine groups for RPA
    • Security requirements to run different types of RPA flows (how it interacts with desktop/s)
    • Data Loss Prevention (DLP) – how it affects creation & runtime of flows

Overall, I had 46 questions, with a single case study. I’m used to having at least two case studies, so it was nice to have just one of them this time.

So….it’s a lot of stuff. Definitely targeted much more at the ‘pro-developer’ end of the scale than someone who might occasionally automate things. It’s absolutely necessary to understand coding conventions, ALM, etc.

If you’re not already hands-on with the skills needed, I’d highly recommend getting a decent amount of experience before taking the exam! Ensure that you have an environment in which you’re able to be hands-on with all types of automation (cloud & desktop flows), and really understand how they can be handled with an eye on the enterprise scale!

If you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

‘Swarming’ for Customer Service

You might be wondering as to what I mean by ‘swarming’ in the title for this post. Don’t worry – it’ll become clear pretty soon! But first of all, let’s understand the story behind this new functionality.

Where to begin? Well, let’s take a look within an organisation. It doesn’t really matter what sort of organisation it is, as most organisations will have similar scenarios overall. So, what are we actually talking about?

Customer Service is, of course, a very important function of any organisation. Customers who have purchased products may need support, or perhaps are having issues, and need them to be resolved. Customer service agents are there to handle the customer queries, and look to resolve them as soon as possible.

However, it’s possible that the customer service agents don’t actually know how to resolve the customer query/issue themselves. They can, of course, use the Knowledge Base, but that requires knowledge articles to be created & maintained.

Now within the organisation, there will be SMEs (Subject Matter Experts). These are the people who know the matter in precise detail, often being the people who have created the product and/or process to begin with. But these people aren’t usually carrying out the customer service function.

So what this means is that the customer service agents need to try to work out who might actually know the answer/s, and be able to help resolve the customer issue. This can take time, be laborious, and perhaps not even be possible (depending on the organisation).

Hmm. So, what if the system might be able to actually SUGGEST the right people for a problem or issue? Even better, what if the system could support them being involved directly with the record/s, regardless of whether they’re a user within Dynamics 365 or not?

Enter the swarming capability onto the Dynamics 365 scene….

The aim of swarming is to bring together the necessary experts within Dynamics 365. Now, having said that, not all users will actually be interacting directly within Dynamics 365. What happens is that a specific Teams chat is created, so that users outside of the system can see the necessary information, and give input on the situation.

This builds on the existing functionality of being able to use Teams chats directly within Dynamics 365, but takes it to a whole new level, by having the system automatically suggest relevant people within the organisation, and bring them into the swarm chat!

There are some necessary steps to configure to enable this to happen.

Firstly, Teams needs to be enabled within Dynamics 365:

Once we start to turn things on, we can then see the following. This allows us to specify the types of records that we can use swarming on. This is great, as we may be building out custom functionality using other tables, and can enable swarming on these as well.

Once Teams chat has been enabled, we can then start setting up the swarming capabilities:

As part of the setup, we have:

  • The ability to set the general message that users will see when they create a swarm
  • Activating the case form that’s used for swarming (as this form includes the swarming functionality)
  • A Power Automate flow that will be used for sending notifications & invites within Teams for suggested (internal) users
  • Creating swarm condition rules, which allows us to bring in specific conditions around skills etc

So, how does this work in practice, once the system has been initially configured?

Users can go to the relevant record, such as a case record. They’re able to select ‘Create swarm’ from the menu bar:

This then allows the user to provide a summary of what the swarm is for, the scenario, as well as selecting the skills needed for the swarm. Dynamics 365 can also suggest skills that it thinks would be helpful as well:

Users from across the organisation are matched, according to the skills identified:

Notifications are sent to them within Teams, requesting their help with the matter:

When they accept the invitation, they’re then brought into the swarm:

In fact, the members of the swarm aren’t actually accessing the swarm information within Dynamics 365. Instead, they’re seeing & interacting with the swarm within Teams itself!

Once the swarm is active, information can be shared, and a solution found. The swarm can then be successfully closed down:

This is truly amazing. Obviously collaboration on issues is important, especially when considering that we’re trying to resolve customer issues as quickly as possible! I’m also really excited about this, as I was part of the initial group that Microsoft reached out to for feedback on its capabilities.

To now be able to collaborate with users who sit outside of Dynamics 365, but have them access the necessary information to help resolve things, is just mind-blowing. So many scenarios that come to mind as to how this can really empower organisations!

Can you think of a way in which this could change things in your own organisation, or at a client? Drop a comment below – I’d love to hear more!

Canvas Apps & Power Automate Flows

So it’s been a busy few weeks here, which is why I haven’t really been putting up any articles. March/April is always a busy time for our family with stuff going on, and this year I decided not to push myself to get articles out, as otherwise I’d be running very low on sleep!

That being said, I’ve still had some great ideas about things that I’d like to share, and have been keeping a series of short notes for me to pick up. Today’s topic is one of them, which I think has been a major pain to anyone involved in canvas app development!

So, the back story to this is that we’re able to use Power Automate flows together with canvas apps. What I mean by this is that we’re able to directly trigger them from within the canvas app, rather than needing to do something like edit or create a record, and then have the Power Automate flow trigger from the record creation or modification.

There’s a specific Power Apps trigger that’s available within Power Automate exactly for this purpose:

When clicked, it gives us the trigger line in the steps as follows:

So what we’d do is, within the canvas app, bind a button (or another control) so that when it’s selected, it goes off & triggers the Power Automate flow. Great – so many different things that we can get to happen! One of the benefits of doing things like this is that we can then pass information from the Power Automate flow back to the canvas app directly:

This can then mean that the user can know, within the canvas app itself, that the Power Automate flow has run, and use data (or other things) that have come out of it.

OK – all good so far.

The main issue to date has been with deploying canvas apps together with Power Automate flows. See, as per best practice, we would create a solution, place the canvas app, flows, and anything else that’s necessary for it to work within it, and then deploy the solution to our target environment/s. And that’s where things just…didn’t go quite right.

Obviously within the development environment, the canvas app would be hooked up to the flows, and everything would work. Clicking the button would cause the flow to run, etc. User authentication would be in place (along with licenses of course!), and it was just fine.

But when deploying a solution containing canvas apps and associated flows between environments (regardless of whether it’s deployed manually, or automated using a tool such as Azure DevOps), the connections to the flows would be broken. Ie, the canvas app would run, but the flows wouldn’t trigger. Looking at the connections in the canvas app within Studio would show something like the following:

All of the connections to Power Automate flows would show as ‘Not connected’. It’s not even possible to click the ellipsis next to them and re-connect them – the only option available is to remove them from the canvas app!

So in order to get things working again, we’d need to do the following steps:

  • Open up the canvas app
  • Remove all connections to Power Automate flows
  • Add a temporary button, set it to be a Power Automate trigger
  • Click through all of the Power Automate flows needing to be connected (waiting for each one to connect, then going to the next one)
  • Remove the temporary button
  • Save and publish the app

This, in a nutshell, has been a (major) headache. For example, I’ve been working with a solution that has over 30 Power Automate flows that can be triggered from the canvas app (lots of different functionality!). Each deployment has needed the above process to be carried out, which has usually added on at least an hour to the deployment process!

Now, this hasn’t been something that’s been unknown. In fact, the official Microsoft documentation noted the following:

So this is something that Microsoft has been well aware of, but it’s been a pain point that we’ve had to work with.

However, this has now ALL changed, which I (and MANY others) are really pleased about!

Microsoft rolled out an update last month which means that canvas app connections to Power Automate flows will NOT break when they’re deployed across environments! This is such a massive time-saver that I’m now trying to work out what to do with all of my free time! Only kidding…more project work will commence!

So what we can now do is take our solution, deploy it across the different environment/s that we need to get it out to (whether manually, or automated using tools such as Azure DevOps), publish the solution, and then everything works! Amazing!!

One small caveat though – to ensure that this works, you will need to go into the app, and re-publish it on the latest Power Apps version. This should of course be done in a development environment, and then it can be exported and deployed as required.

Microsoft have also updated their documentation at https://docs.microsoft.com/en-us/powerapps/maker/data-platform/solutions-overview to remove the limitation text shown above. It’s a good place to keep an eye on changes that occur over time too.

This is definitely a welcome piece of development. I know that we’ve been eagerly awaiting it for a while, and now it’s here!

Reconnecting to previous chat session

We’ve all been there. We’re in the middle of a chat session with a support agent, or talking to a salesperson, etc. Suddenly things go wrong – our browser hangs, the internet loses connection, or something else…

Alternatively, I do know of situations where kids have pulled out the internet cables during ‘playtime’ – it really does happen!

Immediately we’re frustrated. Not only have we not finished what we were trying to achieve, but we’re going to need to start all over again. Perhaps the agent took notes & logged them against our contact record, but the likelihood is that it hasn’t happened. It’s going to take time to get through to an agent again, then we have to explain the whole situation from the absolute beginning. It’s heartrending, and can cause our day to absolutely go down the tubes!

Well, what if we could just re-connect to the chat session with all our data saved? Better still, what if we could go back and continue chatting with the specific agent that we had been communicating with? Sounds amazing, but wishful, right?

Well, we now have this ability within Omnichannel, to empower our customers even further. There are even two ways in which we can offer this:

  • Reconnecting with a link (URL). If the agent is concerned that the chat session may be interrupted, they can provide a URL at the start of the session. If the customer becomes disconnected from the session for whatever reason, they can click the link, and it’ll take them right back to it. This works for both authenticated & unauthenticated users
  • Reconnecting through a prompt. For authenticated chat users, if the session drops they can be presented with a prompt. This will allow them to choose whether to connect to the previous session, or start a new session.

Let’s take a look at it, and how it works.

In the Omnichannel Administration Centre, we need to go to the specific Chat record that we’re wanting to set this up for. We open the record, and are now presented with the following (we do need to scroll down the screen a bit):

Note that this is in Preview currently, so just be a bit careful with it!

There are several options available. We don’t need to use each one, but let’s understand what each one does:

  • Turn on reconnect to previous chat. This is the option to enable if we’re wanting to offer this. Without it set, it’s not going to work!
  • Reconnect time limit. How long we’ll offer the option to the customer to reconnect for. See the note below around this
  • Reconnect to previous agent for. How long we’ll allow the customer to connect back to the same agent. This needs to be equal to or less than the ‘Reconnect time limit’ value that we’ve set. During this period of time, the agent’s capacity is blocked, unless the agent uses the ‘Close’ button on their interface to end the conversation (which then releases the agent’s availability)
  • Portal URL. As mentioned higher up, the agent can provide a URL for the customer to auto-reconnect if the session drops. This value is the URL that the chat widget is deployed to
  • Redirection URL. If the connection drops, and the re-connection timeout occurs, we can redirect the customer to a specific web-page. If this isn’t set, the customer will see the option to start a new chat conversation

Note: The ‘Reconnect Time Limit’ value is auto-set by the system to the value specified in the work-stream that’s associated with the chat widget. It’s not possible to manually change this in the chat widget itself. Instead, the work-stream ‘Auto-close after inactivity’ value would need to be changed. This is shown below:

Note: It’s also important that the customer hasn’t closed THEIR chat window! All of this relies on the customer chat still being there. If the customer has closed their window/browser, they won’t be offered this option.

Have you ever needed to offer customer capability along these lines? How did you go about it? Drop a comment below – I’d love to hear!

Lookup fields & Power Automate

This is an interesting post, for several reasons. Firstly, it’s the first one in 3 weeks – I was off on holiday, and decided to take an (almost) absolute break from all things digital, which included this blog. It was actually quite refreshing, though now coming back & starting to write again does seem a bit daunting, I’ll admit!

Thankfully, whilst wondering what exactly to start with, a scenario came up that I was working on. It seemed quite simple at first, but then actually got somewhat complicated. I therefore thought it would be helpful to others if I wrote about it, so here it is.

The scenario was as follows. We had records being auto-created in the system, and needed to create child records for them. This, as I’m sure you’ll agree, is really quite simple to do with Power Automate. We also needed to set lookup values on the child record that were already populated on the parent record (for reference purposes).

So for example, the parent record has a lookup to Country (being a separate entity), and the child record also has a lookup to Country. These need to be the same.

With both being lookup fields, I figured that I’d be able to take the value from the parent record, and simply plop it into the corresponding field on the child record in Power Automate:

So I did that – and immediately hit an error. Not just any error, but the fabled ‘Resource not found for the segment’ error!

Obviously, I did what anyone would do at first – I put it into Google & Twitter, and took a look at what came up.

The ‘problem’ was coming from using the ‘CDS Current Environment’ connector, which is the latest version available (the old one is no longer available to use). It’s really great for a lot of things, but unfortunately not so great in a few areas. See, in the old CDS Connector, you could just drop the lookup field value into the field you were wanting to populate. Power Automate had no issues with that, & it would run just fine.

However in the ‘new’ CDS Connector, you can’t just do that. Instead, you need to use an OData reference (which I haven’t done much of before, to tell the truth). So based on the blogs I had come across, I went to work to try to get this working.

Part of the challenge was that there didn’t seem to be any consensus on how to do it. I came across the following variations:

  • /entityname(Lookup Field Value)
  • /entityname/(Lookup Field Value)
  • /pluralentityname(Lookup Field Value)
  • /pluralentityname/(Lookup Field Value)

Somewhat confusing, as I’m sure you’d agree. Nevertheless, I ploughed through all of the different possibilities. But nothing was working – every single time, I still got the ‘segment not found’ error message. This, as you can imagine, was extremely frustrating!

Thankfully, one of my good friends was around & able to help out. Namely, Tricia Sinclair came to the rescue!

We took a look at the code I was using, and she took a look at some of her own use cases (where it had worked for her). I was starting to think down the path of needing a capital letter in the entity name (some systems can be REALLY finicky around things like that), but thankfully it wasn’t.

Instead, it was the following. See, this was a custom entity. It turns out that for a custom entity (& heck, for all I know system entities as well) the syntax needed is ‘publisherprefix_pluralentityname(lookupfieldvalue)’. Now that’s not something that I had come across ANYWHERE at all!

Looking at it, I guess it makes sense. After all, it would technically be possible to have multiple entities with the same name, though with different publishers. As a result, the system needs to know WHICH exact entity is needed for the Power Automate flow, so it uses this. Somewhat complicated (and hey – it worked without all of this in the OLD CDS Connector), but we got it to work!
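To show the syntax in action (with entirely hypothetical names – a custom ‘Country’ entity whose plural logical name is new_countries, carrying the publisher prefix, and a child record with a new_Country lookup), here’s a rough Python sketch of the equivalent call against the Dataverse Web API, which is what the connector is building under the hood:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",  # acquired separately
    "Content-Type": "application/json",
}

# The lookup is set via @odata.bind, using the PLURAL entity name - which for
# a custom entity already includes the publisher prefix (new_countries here).
child_record = {
    "new_name": "Child record",
    "new_Country@odata.bind": "/new_countries(1f3c4e5d-0000-0000-0000-000000000000)",
}

response = requests.post(
    f"{ORG_URL}/api/data/v9.2/new_childrecords",  # hypothetical child entity
    headers=HEADERS,
    json=child_record,
)
response.raise_for_status()
```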

Testing it out, everything worked smoothly. The Power Automate flows fired off without any issues, the data got created & populated, and everyone was happy.

So there you go. Another interesting little twist in syntax needed, which hopefully will NOT change in the (near) future!

Have you come across anything like this? I’d love to hear – drop a comment below around it!

Dynamics 365 Security & AAD

I come from an ‘on-premise’ background. I’ve spent years in organisations with on-premise systems such as Dynamics 365. Take me into a server room that’s alive with whirring fans, and I get quite nostalgic. Those were the days…well, in some ways, anyhow. But having recently discovered some quite helpful functionality, I thought I’d share it with others!

See, when it came to Dynamics 365 security, very little could be automated. Yes, users had to be created in Active Directory (and also within an organisational unit that the Dynamics install could refer to!), but they then had to be manually added to Dynamics 365. There was no way to automate this (from recollection – then again, my memory grows dim with the fog of time).

So what the system administrators needed to do was to manually go to Settings/Security within the system, and there they could either add a single user at a time, or multiple users. They would then assign role/s (for multiple users, all of the users would need to have the same role/s – it wasn’t possible to modify individual users within this process).

One way to slightly speed things up when handling different security roles was to have teams relating to the business needs. The security role/s would be created, assigned to a team, and then any user added to the team would automatically get all of the permissions that they needed.

Then came the heady world of Dynamics 365 being online! Well, nothing much changed really, at least not for a little while.

But then, things really did change, in May 2019. Functionality for security teams within Dynamics 365 was increased. Notably, there was now something called an ‘AAD Security Group Team’:

So what was this magical new item?

When we create a team, and we set the Team Type to ‘AAD Security Group’, we’re now able to set an AAD Object ID. In fact, it’s required! After we’ve created this object within Dynamics 365, we can then apply security role/s to it directly (as we could to any other team records beforehand):

Let’s take a moment to reflect & think on this. Until now, we’ve had to handle security directly within Dynamics 365. Now, we have the ability to have an Azure Active Directory (for that is what AAD stands for) group, and reference it within Dynamics 365.
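As a hedged illustration of what’s actually being stored, here’s a sketch of creating such a team via the Dataverse Web API – the name & IDs are placeholders, with the attribute names coming from the standard team table:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",  # acquired separately
    "Content-Type": "application/json",
}

# An 'AAD Security Group' team - teamtype 2 maps to AAD Security Group
# (0 = Owner, 1 = Access, 3 = AAD Office Group), and the AAD Object ID of
# the group is required, just as it is in the UI.
team = {
    "name": "Dynamics 365 Sales Users",  # hypothetical team name
    "teamtype": 2,
    "azureactivedirectoryobjectid": "00000000-0000-0000-0000-000000000001",
}

response = requests.post(f"{ORG_URL}/api/data/v9.2/teams", headers=HEADERS, json=team)
response.raise_for_status()
```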

Suddenly new possibilities open up. As part of the on-boarding process (for example) we can add users to specific AAD security groups, which will then give them access with appropriate permissions within Dynamics 365. We’re also able to have multiple AAD groups, each inheriting a different set of Dynamics 365 roles, and thereby create a multi-layering approach to different business & security needs.

We’re also able to use tools such as PowerShell, LogicApps, Power Apps & Power Automate to carry out automation around this. There’s an Azure AD connector (https://docs.microsoft.com/en-us/connectors/azuread/) which gives the ability to set up & administer these.
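For example, here’s a rough sketch of the onboarding step itself, going straight at the Microsoft Graph API rather than through the connector – adding a user to the AAD group, at which point they pick up the Dynamics 365 roles assigned to the mapped team. The IDs are placeholders, and the token would need the GroupMember.ReadWrite.All permission:

```python
import requests

# Placeholder IDs: the AAD security group that's mapped to a Dynamics 365
# team, and the user being onboarded.
GROUP_ID = "00000000-0000-0000-0000-000000000001"
USER_ID = "00000000-0000-0000-0000-000000000002"
TOKEN = "<bearer token with GroupMember.ReadWrite.All>"

# Adding the user to the AAD group; Dynamics 365 then picks up their team
# membership (& the security roles on that team) automatically.
response = requests.post(
    f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/members/$ref",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json={"@odata.id": f"https://graph.microsoft.com/v1.0/directoryObjects/{USER_ID}"},
)
response.raise_for_status()
```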

We’re actually using this functionality now in some of our COVID-19 response apps. Instead of needing our own support desk to manage the (external) users, we’ve provided an interface where client IT departments can quickly log in, upload a list of users, and assign them to the relevant AAD group/s. It’s very quick, and allows the users to onboard to the Power Apps within minutes!

So, knowing this, how do you feel it might benefit you? Comment below – I’d love to hear!

New functionality for Routing Rules

Within Omnichannel, we have the ability to route conversations based on different factors. At this point in time, there’s Skill-Based routing (covered at https://thecrm.ninja/omnichannel-for-dynamics-365-skills-part-ii/), and Workstream routing (covered at https://thecrm.ninja/omnichannel-pre-survey-responses-routing/).

Routing is used to send customer interactions to specific queues, in order to have them handled by the agents best suited. This could be based on the language that the customer is using, the query that the customer has (involving pre-chat survey questions), etc.

The way that this is done (included in the previous articles) is by selecting the details that we want to use. This could be the contact (when recognised as a record in the system), pre-chat survey responses, or several other options.

However, to date we’ve only been able to use fields/variables from the chat session itself. It’s not been possible to connect to other data that we’re holding within the system, and use that for routing. We’ve only been able to use items that are directly linked to the conversation:

  • Account
  • Case
  • Contact
  • Context Variable
  • Live Chat Context

So if we had identified the customer as existing in the system already, we weren’t able to query related records to them, eg accounts etc. That’s all changed now though – we are now able to do this!


Let’s see an example of this. We have a customer, and we know from within Dynamics 365 that his company is a VERY large customer of ours. They spend a great deal on our products every year, and as a result, we want to route any interactions with them to a special VIP queue. Previously we were unable to do this, unless we somehow set a flag on the contact record to display this.

What we’re now able to do is go and get values from the linked account record, and use these as the routing variables within the workstream:

We can add multiple rows here, all connecting different parts of the data.

Note: The only caveat is that the entity needs to be directly linked to one of the Omnichannel items (which are listed above). We can’t daisy-chain through indirectly related entities, eg Contact-Account-Invoices

These can obviously be put together in groups, to satisfy more complicated conditions, using AND/OR conditions. With this, we can therefore address very specific scenarios, tying together conditions across multiple entities.

Even nicer, we’re not restricted by the relationship type. We can therefore select an entity that’s related to the primary Omnichannel entities in any of the following ways:

  • One to Many
  • Many to One
  • Many to Many

With this being in place, we’re now really able to ‘fine tune’ how we route customer interactions, and set up specific places for them to be directed to. Through this, we can identify & serve specific sectors of our customers in different ways, as we feel is most appropriate.

This is also applicable to skill attachment routing, where the same level of functionality is provided.

So with this in mind, how would YOU think that you would go ahead and use this? Leave a note in the comments below!

Omnichannel Macros

I’ve previously touched on macros in https://thecrm.ninja/omnichannel-productivity-tools/, but with some new functionality that’s now come out, I thought it would be quite interesting to dive deeper into them. By doing so, we can see how they work, the functionality that they offer, and some really cool & interesting scenarios!

Let’s have a quick reminder of what macros are all about (for those who don’t know, yet):

Macros allow customer service agents to carry out repetitive tasks that can span multiple entities. Eg opening forms (model-driven apps), pre-populating data into the form, etc. Through this, not only are there fewer manual tasks/steps to carry out, there’s now the ability to carry out the same tasks without worrying about a step being missed, or the wrong data being copied in, etc.

With that in mind, let’s see what there is for macros in Omnichannel. As a default, there were always the following 3 pre-defined automation actions:

With these, we’re able to do things like:

  • Opening a form to create a new record. This could be used to create a new contact automatically
  • Opening an existing record. This could be used to open an existing contact (based on pre-survey questions, such as email address etc)
  • Searching the Knowledge Base using specified keywords/phrases
  • Opening an email form with a pre-defined template
  • Linking records together

There’s now a new option available:

Hmm. This looks interesting. What happens when we select it?

We get a condition block! Clicking ‘Add an action’ will allow us to then add either one of the pre-defined automation actions, or another Control/Condition block.

OK – so you’re now thinking that I’m getting over excited about this. But hold on – let me explain further why I’m really liking this.

So when using Power Automate, frequently I’ll use condition blocks to check/satisfy things (it’s obviously available in Logic Apps as well, but I have minimal experience of those to date). Some of them can get quite advanced, but it comes in useful. However for Omnichannel macros to date, it’s not been possible to do this. We’ve been limited to just a few options, without being able to specify branching criteria based on variables.

Now we’re (finally) able to do this. The Condition field works in the same way as it does in Power Automate, allowing us to string multiple statements together, and have actions that result from them. We’re also able to use slugs in them, to populate variables & use customer-entered data.

Let’s see an example of this. We have a customer who’s opening an Omnichannel chat session. They’ve filled in the pre-survey questions, in which we’ve asked for the following pieces of information:

  • First Name (required)
  • Last Name (required)
  • Email Address (required)
  • Company Name

With the condition check in place, we can either create just a contact record (if the customer didn’t fill in the company name field), or we can create both account & contact records, and link the two together. We could also check if the customer already exists as a contact, and then not need to create any records for them.
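The macro itself is configured declaratively within Omnichannel, of course, but purely to make the branching concrete, here’s the equivalent logic sketched in Python against the Dataverse Web API (the URL & token are placeholders; the field names are the standard contact & account ones):

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",  # acquired separately
    "Content-Type": "application/json",
}

def create_records(first_name, last_name, email, company_name=None):
    """Same branching as the macro: contact only, or account + linked contact."""
    contact = {"firstname": first_name, "lastname": last_name, "emailaddress1": email}

    if company_name:
        # Company name supplied - create the account first...
        resp = requests.post(f"{ORG_URL}/api/data/v9.2/accounts",
                             headers=HEADERS, json={"name": company_name})
        resp.raise_for_status()
        # ...then link the contact to it via the customer lookup.
        account_id = resp.headers["OData-EntityId"].split("(")[1].rstrip(")")
        contact["parentcustomerid_account@odata.bind"] = f"/accounts({account_id})"

    resp = requests.post(f"{ORG_URL}/api/data/v9.2/contacts",
                         headers=HEADERS, json=contact)
    resp.raise_for_status()
```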

This means that there will be much less manual work for the agent to carry out, as they won’t have to manually create all of these records.

We’re able to string these together in ‘multi’ step scenarios, to allow things to flow on from each other:

There are also other options available to use, such as the ability to clone, and the ability to open a new application tab. I’ve covered application tabs at https://thecrm.ninja/omnichannel-application-tabs/, so we can see how helpful this could actually be. We wouldn’t need to automatically open a specific system for all customers contacting us; instead we’re able to selectively open things based on the actual customer. This makes for a much cleaner & better agent experience, in my opinion.

In summary, this is a really helpful & useful feature that’s been added, bringing even better functionality to macros. We’ve been able to do these sorts of things elsewhere to date, and being able to do it here now as well is great. All I can say is that I’m wondering what else we could do…perhaps kick off a Power Automate Flow as well? We’ll have to wait and see 🙂

Omnichannel & Sentiment Analysis (II)

I’ve previously touched upon sentiment analysis within Omnichannel in several articles (https://thecrm.ninja/omnichannel-sentiment-analysis/ and https://thecrm.ninja/omnichannel-supervisor-tools/). It’s really a great feature that allows agents to quickly & easily see how the customer is interacting. It also allows for supervisors to see at a glance how interactions are going overall.

With all of that, I thought it would be helpful to take a further look into how sentiment analysis actually works, so that we can understand it a little better.

Now, the actual nuts & bolts for sentiment analysis are provided by Azure Cognitive Services. There are a wide range of tools available through this, but we have no need to go into Azure to configure this. It’s a simple setting within Omnichannel to get it working, rather than needing to fiddle around with many different things:

However, what’s actually going on during a conversation, and how is the sentiment analysis worked out/calculated? We see the pretty little face icons (with the different colours), but how are these actually being set?

Well, there are two ways in which algorithms are used to calculate the sentiment that’s shown:

  • Natural language processing (NLP)
  • Machine learning (ML) algorithms

With these two methods, it’s possible to not only see what the current interactions are showing, but also to enhance the model to understand sentiment better.

Note: In a session that I presented recently, one of the attendees asked if it’s possible to train the model, to result in a custom algorithm. Unfortunately this isn’t possible to do – the machine learning that takes place is the general Azure one, rather than one for a single company or customer

The following diagram shows the sentiments that are used. They’re nicely colour-coded, for ease of reference as well:

When a customer interacts through Omnichannel, the sentiment shown is based on the last 6 messages received from the customer. As a result, the sentiment shown can very well fluctuate & change during the conversation, based on how it’s going.
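Omnichannel does all of this for us behind the scenes, but to get a feel for the kind of scoring involved, here’s a rough sketch against a standalone Azure Text Analytics resource (the endpoint & key are placeholders), scoring a set of six customer messages:

```python
import requests

# Placeholders for a standalone Azure Text Analytics (Language) resource -
# Omnichannel calls the service for you; this is purely illustrative.
ENDPOINT = "https://my-language-resource.cognitiveservices.azure.com"
KEY = "<subscription key>"

# The last 6 messages received from the customer:
last_six = [
    "Hi, I can't log in to my account.",
    "I've reset my password twice already.",
    "This is the third time this week it's happened.",
    "OK, trying that now...",
    "That worked, thank you!",
    "Really appreciate the quick help.",
]

response = requests.post(
    f"{ENDPOINT}/text/analytics/v3.1/sentiment",
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={"documents": [
        {"id": str(i), "language": "en", "text": text}
        for i, text in enumerate(last_six)
    ]},
)
response.raise_for_status()

for doc in response.json()["documents"]:
    # Each message comes back with a sentiment label & confidence scores.
    print(doc["id"], doc["sentiment"], doc["confidenceScores"])
```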


Obviously, customers aren’t just going to use English to communicate. Companies are based around the world, and will use their native/local language when providing support. Omnichannel allows for this without an issue, utilising the Azure Text Translator API behind the scenes to provide this. If you’re interested to see which languages are supported for this, head to https://docs.microsoft.com/en-us/azure/cognitive-services/translator/language-support which is the latest source of information for this.

There are some interesting things to know around how this actually works:

  • When a language other than English is used, the Text Translator API translates the text to English, and then it’s analysed/scored for sentiment (see the sketch after this list)
  • If a language isn’t supported by the Text Translator API, it won’t be scored
  • If profanity (eg a swearword) is detected, the sentiment will automatically be shown as Negative or Very Negative, regardless of the rest of the last 6 lines of conversation
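To make that first point concrete, here’s a rough sketch of the translate-then-score step, using the standalone Azure Translator API (the key & region are placeholders). The detected language & the English translation come back in a single call, and it’s the English text that then gets scored for sentiment:

```python
import requests

# Placeholders for a standalone Azure Translator resource - again, Omnichannel
# handles this internally; the sketch just illustrates the mechanics.
KEY = "<translator key>"
REGION = "westeurope"

customer_message = "Je n'arrive pas à me connecter à mon compte."

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": "en"},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Ocp-Apim-Subscription-Region": REGION,
        "Content-Type": "application/json",
    },
    json=[{"Text": customer_message}],
)
response.raise_for_status()

result = response.json()[0]
print(result["detectedLanguage"])         # eg {'language': 'fr', 'score': 1.0}
print(result["translations"][0]["text"])  # the English text that gets scored
```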

Some people have expressed their concern to me around how accurate the Azure translation actually is, but to date I haven’t seen any major concerns resulting from it. As with the other Azure services, Microsoft is continually refining & improving it. That being said, there are several languages with very nuanced terms. I’d like to think that these would be supported without issues.

There is, however, somewhat of an interesting behaviour when starting off the analysis at the beginning of the conversation:

  • If the initial language is detected as English, it’s assumed that all of the subsequent conversation will be in English. As a result, if the customer switches away from English, the system won’t recognise this, and a Neutral sentiment score will be shown
  • If the initial conversation is not in English, then the system will check every conversation line & re-detect the language as necessary.

This seems somewhat strange to me, as I’d have thought that the system would automatically check the language for each conversation line. I can think of plenty of scenarios where different languages are used in a single conversation, even if it does start with English being used. I’d like to think that this will be updated at some point, to make the experience better.