Active or inactive, that is the question?!?!

Catchy title, right? Well, I was wondering what exactly I should call this blog post, and as you’ll see as we go through things, this is probably quite a good paraphrase to use.

So, where to start? Well, with a customer, of course! Now, this customer has been running live with a custom Dynamics 365 solution for a little while. Importantly for this story, there have not been ANY releases in quite a few months. This is of course good to bear in mind, given that we can all, um, occasionally find that a release could cause an issue, somewhere, sometimes…

Part of the functionality that they’re using involves bringing in Leads, and qualifying them appropriately. As part of this process, there are various custom attributes (aka columns) that have been added to the Lead table, along with corresponding columns added to the Contact table. There’s also some custom logic that, when a lead is qualified, copies the values from the Lead record to the Contact record, updating it (essentially extending the standard capabilities of the system).
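(A quick aside: the copy-on-qualify logic itself doesn’t need to be anything exotic. Purely as an illustration – this is NOT the customer’s actual implementation, and the custom column names below are made up – something along these lines would do the job against the Dataverse Web API, though in reality you’d more likely use a plugin or a Power Automate flow.)

```python
import requests

# Hypothetical example - replace the org URL, token and column names with your own
ORG_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {
    "Authorization": "Bearer <access token>",  # obtained via Azure AD / MSAL
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

# Custom columns that exist on both Lead and Contact (names made up for illustration)
CUSTOM_COLUMNS = ["new_preferredbranch", "new_marketingsource"]


def copy_custom_columns(lead_id: str, contact_id: str) -> None:
    """Copy the custom column values from a qualified Lead onto its Contact."""
    # Read the custom values from the lead
    select = ",".join(CUSTOM_COLUMNS)
    lead = requests.get(
        f"{ORG_URL}/api/data/v9.2/leads({lead_id})?$select={select}",
        headers=HEADERS,
    ).json()

    # Update the contact with the same values
    payload = {col: lead.get(col) for col in CUSTOM_COLUMNS}
    resp = requests.patch(
        f"{ORG_URL}/api/data/v9.2/contacts({contact_id})",
        headers=HEADERS,
        json=payload,
    )
    resp.raise_for_status()
```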

This has all been working well to date, and the customer team has been very happy with their system. Until it stopped working, last week. Which was strange, as nothing seemed to have changed at all!

When trying to qualify leads in the system, they were getting the following error message:

Cryptic, right? This seemed even more interesting given that when only basic information was entered into a Lead record (eg First Name, Last Name, Phone Number), it didn’t matter how many leads existed with the same information – the lead qualified without a problem.

However, using any custom columns that had been added to the table caused this error to occur.

The first thing that I did was to check that there had been no updates released to Production. This was confirmed as being the case. I then also checked that there had been no OTHER solutions released to Production (as this could have impacted things). Thankfully there hadn’t been – the system looked to be in as fine a shape as it had been for quite a while.

OK – on to the next step. What updates have been released by Microsoft? Well, since we were able to pinpoint the date that the functionality had stopped working, we went to find the corresponding Learn article about the release (Update 22102 – Release Notes | Microsoft Learn). Don’t worry about clicking through to read it – there’s essentially not much in it, and there’s nothing at all around the Lead table or its functionality!

Continuing to dig around, I really wasn’t sure what was causing this, but obviously had to work it out & figure out a fix! It was quite a dilemma.

This is where the amazing Microsoft community came into play. I noticed a post by Jeroen Scheper on one of the channels that I’m on. It turns out that he was having the same issues, so we started to collaborate on it. This both reassured me (it wasn’t just me) and increased the confusion, as we couldn’t work out what was going on underneath to cause this!

After raising it with Microsoft (we both actually raised support incidents), I had an amazing support call almost immediately. After demonstrating the problem, I was told that it was due to Duplicate Detection rules.

Now I’ll admit that this confused me somewhat. See, I had already checked the Duplicate Detection rules, but nothing had been changed, and no new rules had been implemented.

Getting the support agent to walk me through things, they told me that I had to unpublish the rules, modify a setting on them, and then re-publish the rules. This was the setting (on each one) that had to be updated:
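If you have a lot of rules to work through, you don’t have to click through each one by hand. Below is a rough sketch of scripting the same unpublish → update → republish loop via the Dataverse Web API. Treat it as an assumption-laden example: PublishDuplicateRule / UnpublishDuplicateRule are standard bound actions in Dataverse, but the column name I’ve used for the setting (excludeinactiverecords) is my assumption – check it against the duplicaterule table in your own environment before running anything like this.

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {
    "Authorization": "Bearer <access token>",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}
API = f"{ORG_URL}/api/data/v9.2"

# Assumed logical name of the setting - verify against your environment's metadata
SETTING_COLUMN = "excludeinactiverecords"


def republish_rules_with_setting(new_value: bool) -> None:
    """Unpublish each duplicate detection rule, update the setting, then republish it."""
    rules = requests.get(
        f"{API}/duplicaterules?$select=name,statuscode",
        headers=HEADERS,
    ).json()["value"]

    for rule in rules:
        rule_id = rule["duplicateruleid"]
        # (You may want to skip rules that aren't currently published)

        # 1. Unpublish the rule (bound action)
        requests.post(
            f"{API}/duplicaterules({rule_id})/Microsoft.Dynamics.CRM.UnpublishDuplicateRule",
            headers=HEADERS, json={},
        ).raise_for_status()

        # 2. Update the setting on the rule
        requests.patch(
            f"{API}/duplicaterules({rule_id})",
            headers=HEADERS, json={SETTING_COLUMN: new_value},
        ).raise_for_status()

        # 3. Publish the rule again (bound action, runs as an async job)
        requests.post(
            f"{API}/duplicaterules({rule_id})/Microsoft.Dynamics.CRM.PublishDuplicateRule",
            headers=HEADERS, json={},
        ).raise_for_status()
```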

This again left me confused. Why was the system having issues with inactive records? Surely qualified leads are active records, just qualified (& then locked down as a result)?

Well, it turns out that my understanding of how this works was actually incorrect. As we (hopefully) all know, whilst all records have a Status value (eg Active, Inactive), there are some records that also have a Status Reason value.

In fact, the ‘State Code’ choice in Dataverse is restricted (we can’t modify it), and seems to have some quite interesting functionality running behind it. Depending on which table is accessed, there are different options available within it.

For example, the Lead table shows:

Whereas the Contact table shows:

And the Task table shows:

Anyhow – it turns out that when a Lead record is qualified or disqualified, though it’s not made obvious in the user interface, behind the scenes the record is actually being deactivated!

More information on this can be found at Qualify and convert leads to opportunity | Microsoft Learn.
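If you want to see this for yourself, here’s a small sketch against the Dataverse Web API (you’d need to supply your own org URL & access token). It reads the State Code of a lead, and lists the State Code options for any table. For the Lead table the options are Open, Qualified and Disqualified, rather than the plain Active/Inactive that you get on the Contact table – and once a lead is qualified, it’s no longer in the ‘Open’ state, which ties in with the duplicate detection setting mentioned above.

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {
    "Authorization": "Bearer <access token>",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}
API = f"{ORG_URL}/api/data/v9.2"


def lead_state(lead_id: str) -> dict:
    """Return the state/status of a lead. For leads, statecode 0 = Open,
    1 = Qualified, 2 = Disqualified - only 'Open' counts as active."""
    return requests.get(
        f"{API}/leads({lead_id})?$select=statecode,statuscode",
        headers=HEADERS,
    ).json()


def state_options(table: str) -> list:
    """List the State Code options for any table (e.g. 'lead', 'contact', 'task')."""
    resp = requests.get(
        f"{API}/EntityDefinitions(LogicalName='{table}')/Attributes(LogicalName='statecode')"
        "/Microsoft.Dynamics.CRM.StateAttributeMetadata?$expand=OptionSet",
        headers=HEADERS,
    ).json()
    return [
        (o["Value"], o["Label"]["UserLocalizedLabel"]["Label"])
        for o in resp["OptionSet"]["Options"]
    ]


# Example: state_options("lead")    -> Open / Qualified / Disqualified
#          state_options("contact") -> Active / Inactive
#          state_options("task")    -> Open / Completed / Canceled
```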

So, this was the underlying reason behind the error message. Obviously Microsoft had updated something, which then caused this to fail. I don’t know how many different customers may have been (or still be?) experiencing the issue, but I think that the error message at least could be a little clearer? Perhaps including a link to the relevant Microsoft documentation page, for a start.

Well, this was finally put to bed, and I was quite thankful (as was the customer). And this is how I came up with the title of this blog post!

Have you ever had something similar happen to you? Drop a comment below – I’d love to hear!

Interacting with Microsoft

People sometimes wonder about the best way to interact with Microsoft. In fact, this post isn’t strictly aimed at interacting with Microsoft, but can also be taken as a general guide to interacting with any organisation. The reason for deciding to write about this comes from a conversation that I had last week with a good friend, who was struggling to find a resolution to an issue.

Let’s start at the beginning. We, or our customers, have relationships with suppliers such as Microsoft. We’ll order software (licenses), need to have them supplied to us (show up in our account), and sometimes there may be issues that we need help/support with. There are obviously general support channels available that support tickets can be raised through, but there are also other avenues to consider as well.

Apart from the ‘professional’ relationship/s that may be in place, we may also have ‘personal’ relationships with members of different teams. These can happen in various different ways, such as speaking together at events, organising communities, etc. They are very valuable to have in place, & many people that I know, as well as myself, strive to improve & increase the network & connections that we have with Microsoft & other organisations.

However, there’s something very important to keep in mind. Just as we are doing our day job (what we’re paid to do), they are as well. At the end of the day, they (as with ourselves) need to ensure that the job gets done.

So if we reach out to ask something from them, we’re essentially asking for a favour, usually without being able to offer anything reciprocal in return. A really good analogy for this, shared previously with me by Mark Smith & Chris Huntingford, is the ‘Sweet Jar Concept’.

Here’s how it goes. Imagine that the person has a jar with 100 sweets in. There are a limited number (the number itself isn’t important though) available, and the person has to choose who to give the sweets to. If we ask for a favour without knowing them, it’s highly unlikely to be granted. Even if we do know them somewhat, it may still be unlikely – they’re not going to be getting any return on the sweet that they’re giving out. Potentially if we know them well, and have proven in the past that we’re of value to them, we’ll get a sweet.

But even if we do know them well, if we keep asking for sweets (aka favours), the likelihood of them being granted will diminish (rapidly). Again – there’s a limited supply of them, and we’re not going to be looked on favourably if we keep coming back & asking for more, whilst not giving anything in return.

So HOW could we go about this, to set ourselves up for success (ie getting the outcome that we’re after)? Well, this is actually quite simple – we need to identify who will be gaining something by helping us. Let’s explain this in more detail.

Within Microsoft (& any organisation really), people have metrics that they need to meet for their role. These are usually referred to as KPI’s (Key Performance Indicators), and are used for things like salary & role progression. What we should be doing is finding the right person (or team) that has (one or more) KPI’s aligned to what we’re trying to accomplish.

Let’s use the example here of the situation with my friend last week. He had a client who had ordered licenses from Microsoft that were needed for a project to start, but hadn’t appeared in the customer account yet. If the licenses weren’t there on-time, the project would need to be delayed, which would be expensive (& very annoying) for the customer.

On hearing the situation, I suggested to him to find the person (or people) within Microsoft who’d be aligned towards ensuring the situation was remedied ASAP. Examples of these people could be:

  • Microsoft Account Manager. This person would be aligned from the Microsoft side to ensure that the customer would have everything that they needed to be successful
  • Microsoft Sales Team. If there was a sales team involved with the license purchase, they would be very aligned to ensuring that the licenses had actually been procured and showing up in the customer account!
  • Microsoft Account Technology Strategist. This is the person responsible for designing the strategy and architecture to drive digitalisation and innovation for the customer

Now the above list isn’t exhaustive, and is of course specific to the scenario above. Additionally, the people mentioned might not be able to deal with the situation themselves, but if not, they’re more than likely to know the right person/team who can.

With this approach, we’d be lined up for success in three ways:

  1. We’d (hopefully) get the immediate situation looked at and resolved
  2. We’d be giving our connections the ability to align to their KPI’s, and show results for them
  3. We’d be showing our value to our connections, which can then help if we have a favour to ask in the future (one that’s not necessarily aligned to KPI’s)

So in a nutshell – when we look to try to get something dealt with/resolved, we should ask ourselves who’s best aligned professionally to help us, with it being in line with their professional goals. This way we can drive value, as well as giving goodwill all round.

Have you ever been in a situation where this may have helped? How did you handle it? I’d love to hear – please drop a comment below!

Power Platform ALM Changes

As a starter for 10, if you haven’t yet looked into ALM for Power Platform, you should most definitely be doing so! ALM is, of course, Application Lifecycle Management. This is how, in a nutshell, we move solutions between environments.

In the good old days, this was done manually of course (CRM 4.0, I’m looking at you!). Today, though it is of course still possible to export/import solutions manually, it’s not the Microsoft best practice method. Doing it manually also means that it’s unlikely that you’ll have appropriate source control for your solutions, which, let’s face it, isn’t the best.

Want to look at a previous solution version? Hmm – do you still have it saved on your machine or not?

So we should generally know why we’d want to use ALM. But which tooling do we actually use for it? Going back to the on-premise days, there was TFS (or Team Foundation Server, to give it its full name). This was a full source control repository, allowing developers to check in/check out code, build solutions, deploy them, etc.

With the move to ‘cloud based systems’, the TFS replacement is Azure DevOps (or ADO, as it’s usually referred to). ADO works in essentially the same way as TFS did (some differences, but they’re not really relevant here), but does so through the cloud.

When it comes to Power Platform solutions, ADO uses the ‘Power Platform Build Tools’ capabilities to hook into Dataverse & pick up solutions. The toolset essentially gives ADO the ability to connect in to a Power Platform environment, build/export solutions, deploy solutions, etc.

More information on the toolset can be found at Microsoft Power Platform Build Tools for Azure DevOps – Power Platform | Microsoft Docs

Now there are some limitations to the Power Platform Build Tools. In fact, I’d be so bold as to say that currently they’re not in a fully mature state. It’s not possible to do everything that you can manually (well, not with the inbuilt capabilities – there are some ‘hacks’ around that can extend them). At the moment, it’s essentially 1.0.

Well, Microsoft is announcing that they’re now releasing 2.0 of the Power Platform Build Tools this week!

In fact, this is so new that at the time of writing, there’s no Microsoft Docs available for this! So what does version 2.0 bring, and why is Microsoft releasing a new version?

So Microsoft has actually had this in planning for a while. There’s a lot going on with GitHub, as we well know, and Microsoft wants to drive the consistency of the experience for users forwards. At the moment, they work in somewhat different ways, and the aim is to bring this to parity.

The main change that the new version has is that instead of tasks being PowerShell based (which they are currently), now the tasks will be Power Platform CLI based. So Microsoft is changing the underlying working method from PS to CLI. Some of us will, of course, already be familiar with the way that the CLI works, and it’s really nice to see that the capabilities will now be part of ADO.

Now don’t start worrying that your current ADO pipelines (v0) will suddenly stop working. Microsoft is not doing anything with v0 at this point in time (though they may potentially deprecate it in the future). So all of your existing ADO pipelines using the Power Platform Build Tools will continue to work, but no new features will be released for it.

In terms of switching to using v2, it’s really quite simple – you’ll need to change the task version type as so:

If you are currently using YAML (as so many wonderful developers do) to author pipelines, you’ll need to do the following in the YAML code:

It’s very important to note that it’s not possible to mix and match task versions. If you do this, the ADO pipeline will fail, so please don’t try this!

I’m really excited about this, and to see that the CLI capabilities are being brought into play for ADO. I’ll admit that I’m wondering what else will be released (in the fullness of time), as I’m sure that this is just the start of some great new stuff!

One of the things that I’m REALLY hoping for is the ability to use ADO pipelines to migrate Power Apps Portals (or Power Pages), as currently that’s only possible using the Power Platform CLI, or the Configuration Migration Tool. It would be amazing to be able to do these with ADO pipelines as well!

PL-500: Microsoft Power Automate RPA Developer

RPA (or Robotic Process Automation) is a capability that Microsoft has been developing for a while within the Power Platform space. Whilst cloud flows can be used to interact with any system that has an API in place, many organisations have (legacy) systems that have no API, so interacting with them can be challenging. RPA capabilities allow organisations to interact with any system overall, thereby enabling & empowering businesses holistically.

I’ve been aware for a while that there’s been an exam coming out for RPA, though it’s taken a bit of time to land. That’s fine though – I can’t really think of any absolute rush to have it in place. I do think that over time, just as with some of the other certifications, it will become a requirement for solution or specialisation status.

The official page for it is at https://docs.microsoft.com/en-us/certifications/exams/pl-500. The specification for it is:

Candidates for this exam automate time-consuming and repetitive tasks by using Microsoft Power Automate. They review solution requirements, create process documentation, and design, develop, troubleshoot, and evaluate solutions.

Candidates work with business stakeholders to improve and automate business workflows. They collaborate with administrators to deploy solutions to production environments, and they support solutions.

Additionally, candidates should have experience with JSON, cloud flows and desktop flows, integrating solutions with REST and SOAP services, analyzing data by using Microsoft Excel, VBScript, Visual Basic for Applications (VBA), HTML, JavaScript, one or more programming languages, and the Microsoft Power Platform suite of tools (AI Builder, Power Apps, Dataverse, and Power Virtual Agents).

Now here’s the thing. I occasionally work in the automation space, either on customer projects, or when training users in the technologies. I wouldn’t describe myself as an advanced automation developer (whether cloud or RPA capabilities). I’m most definitely NOWHERE near the level of legends such as Matt Collins-Jones, for example (go check him out if you don’t know about him!).

So I knew that I may be a bit challenged when taking the exam, especially in the more ‘pro dev’ space (aka JSON etc). In fact, I didn’t actually realise that the exam specification included that sort of thing. I know, I should have – it’s aimed at developers overall…shows that I need to brush up on reading things properly!

Also, there’s still quite a bit of a focus on Power Automate cloud flows – it’s not JUST about RPA capabilities.

Now, really nicely, there are already Microsoft Learn pathways available (which have been around for a while, and updated appropriately). This really is a big help, I feel, especially for people who are new’ish to RPA.

Of course, there’s a lovely shiny two star badge awarded when passing the exam, along with the title of ‘Microsoft Certified: Power Automate RPA Developer Associate’:

As with previous exams, I sat it from home (the proctored experience). Learning from previous times that I’ve taken exams, I ensured that my workspace was entirely clear from everything. As a result, the check-in process happened automatically, and I didn’t need to engage with any proctors at all (which was quite nice actually).

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

  • Cloud flows vs RPA flows
    • Capabilities of each
    • When to use each (ie how to handle different scenarios)
    • How to trigger each one
  • Cloud flows
    • Different types of triggers, & when each type should be used
    • Different types of actions, and the capabilities of them (at a high’ish level – expected to know common Microsoft actions, but not need to know all of the hundreds of different ones!)
    • Controls/operators. What they are, how they can be used to accomplish different requirements
    • JSON formatting & syntax
  • Business Process flow vs Business Rules
    • What each is
    • When to use each one
    • Capabilities
  • RPA flows
    • Common actions, how they work, capabilities of them
    • How expression syntax works within them
    • Debugging capabilities, and what to use when
    • How to interact with desktop applications
    • How to interact with websites
      • How data values can be used
      • How data tables can be used
      • How to use data that’s extracted from a website
    • Troubleshooting functionality
  • Usage of automation capabilities from Office 365 applications such as Excel & Visio
  • Loops
    • How they work for cloud & RPA flows
    • Troubleshooting
    • Implementing success/fail criteria
    • Error handling
  • Process Advisor
    • What it is
    • What it does
    • How it can help organisations
    • Limitations
    • What it cannot do
    • Process Mining vs Task Mining, & the important differences between them
  • Variables
    • How to handle variables across different environments
    • How to declare them (cloud flow vs RPA flow)
  • Runtime operations
    • How flows are triggered (async vs sync)
    • How flows are queued (cloud vs RPA)
    • How RPA flows are carried out when using machine groups
  • Artificial Intelligence (AI) capabilities
    • How AI can be used within flows
    • Different AI capability types (what each one can be used for)
    • AI within Power Platform, & AI within Azure Cognitive Services
  • Sharing flows
    • Different ways to share cloud flows
    • Different ways to share RPA flows
  • Application Lifecycle Management (ALM)
    • Solutions (managed vs unmanaged). Capabilities of each, when to use each type
    • AzureDevOps (ADO). What it is, when/how to use it, capabilities
    • Solution imports
    • Solution layers. What these are, troubleshooting functionality
    • Upgrade/Stage for Upgrade/Update. What each is, what each does, how/when to use each one
    • Moving desktop flows between users
  • Security
    • Security roles needed to create
    • Security roles needed to share/modify
    • Security roles needed to register machine for RPA
    • Security roles needed to register machine groups for RPA
    • Security requirements to run different types of RPA flows (how it interacts with desktop/s)
    • Data Loss Prevention (DLP) – how it affects creation & runtime of flows

Overall, I had 46 questions, with a single case study. I’m used to having at least two case studies, so it was nice to have just one of them this time.

So….it’s a lot of stuff. Definitely targeted much more at the ‘pro-developer’ end of the scale than someone who might occasionally automate things. It’s absolutely necessary to understand coding conventions, ALM, etc.

If you’re not already hands-on with the skills needed, I’d highly recommend getting a decent amount of experience before taking the exam! Make sure that you have an environment in which you’re able to be hands on with all types of automation (cloud & desktop flows), and really understand how they can be handled with an eye on enterprise scale!

If you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Wave 2 2022 – Omnichannel

I know it’s taken me a week (or two) to get round to this, but I’ve had other things on the go (such as starting my new job, for instance). However it wouldn’t be this time of year without doing a summary of new features for the Wave 2 2022 release.

As with previous posts in this area, I’ll be focusing on the Customer Service side of things, and also more precisely with a focus on the Omnichannel capabilities. However, whilst previously I’ve tended to focus just on the Omnichannel items, Customer Service is now being much more tied together with the Omnichannel offering, so it makes sense to broaden things out a bit.

So let’s start taking a look at the wonders that will (hopefully!) be in store for us within a few months:

Customer Service Workspace – enhanced layout

Public Preview – August 2022. GA – October 2022

I’ve previously taken a look at some of the capabilities of the Customer Service Workspace (see Omnichannel vs Customer Service Workspace), and how they compare to Omnichannel. With Microsoft now rolling out the ability to have multi-session capabilities within it, it’s sometimes a hard decision for organisations to decide which one to use (there are some key differences though).

With the upcoming release, there are going to be new layouts for the site map (navigation menu), sessions & tabs. Some of the key changes coming are:

  • Sessions and child tabs are displayed horizontally
  • Improved handling of overflowing tabs and sessions
  • Tab bar is visible only if multiple tabs are present in a session
  • Improved site map that’s accessed from the hamburger icon with support for grouping and areas
  • Improved accessibility with 400% zoom mode
  • Increased predictability of session closure in multisession apps
  • In-app notifications aligned with the multisession navigation

These look to be quite good (I definitely wouldn’t have thought of all of them!), and I can’t wait to try them out for myself.

Single sign-on capabilities

GA – October 2022

One of the things that can be quite frustrating for customers is that if they’re interacting through live chat capabilities, and then switch over to a Power Virtual Agent, they need to re-authenticate. This is of course not quite optimal for a seamless customer experience.

Microsoft are therefore enabling single sign-on capabilities. What this means in practice is the following:

  • Authentication contexts are shared between Power Virtual Agents and Omnichannel live chat sessions. If a user authenticates in one of them, then they become authenticated across all of the capabilities. There’s no need to authenticate per communication type
  • Customers can start with an unauthenticated conversation, and then authenticate at a later point in the conversation. This will then continue as authenticated across the different channels that they’re communicating through

Voice channel – expansion of availability

GA – October 2022

The voice channel (which I still need to do a write up on!) is really amazing, allowing customers to call in directly via phone etc. It’s been rolled out already to several regions, but customers in other regions have been asking for it.

Microsoft has now confirmed that the voice channel will be available in the following countries:

  • United Kingdom
  • Canada
  • India
  • Switzerland

This is a great move – it still doesn’t mean that every country has the voice channel available, so I expect that Microsoft will keep on adding more countries for the availability of this (I know that there’s a decent amount of back-end systems that are needed, which is why it’s taking this long to get in place).

Voicemails

GA – January 2023

This one is getting me really excited. Obviously, being able to connect to a customer service agent is important. But what if the agent isn’t around? We could of course send an email, but if we’re already connected through a specific method of contact, ideally we’d like to continue with that method.

Especially when it comes to actually calling into an organisation, it can be quite frustrating to not reach the person we’re trying to get hold of, and then need to send an email.

Voicemail capabilities, coming in early 2023, will mean that customers will be able to leave voicemails for customer service agents to pick up. The agents will be able to set up welcome messages, as well as manage & play back voicemails that have been left.

This is really cool – I’m wondering if there are going to be AI capabilities included in this in the future, so as to automatically transcribe voicemails for the agents, for instance. I don’t think that it would take a LOT more technical capabilities – we already have Azure Cognitive Services that audio can be fed through to for a written transcription to be produced.

Customer Callbacks

GA – January 2023

One of the frustrations that I think is shared universally is when contacting an organisation, and being told that you’re in a queue. Not only are you in a queue, but there may be dozens/hundreds/thousands of people ahead of you…and the number doesn’t seem to be decreasing at a rapid rate.

Some organisations offer the ability to ‘reserve’ your spot in the queue, and will call you back when you’re next. To date, this hasn’t been a feature of Omnichannel.

However, coming in early 2023, this feature will be rolling out! It will give customers the ability to keep their queue position, and to choose if they’d like a callback when they’re at the front of the queue. Note that this would require a phone number to be provided, for the customer service agent to use to contact the customer.

I think that this is a nice feature, but will be curious to see how it plays out ‘in the real world’. I know that when my local doctor’s surgery implemented this, it was supposed to be great, but in practice actually didn’t work well.

I’ll be looking deeper into the different functionalities when they land, and will share them here. If there’s anything you think would be helpful to focus on, drop a comment & let me know!

New Platform DLP Capabilities

DLP (or Data Loss Prevention) is a very important capability in the Power Platform. Being able to bring together multiple data sources, both within the Microsoft technology stack as well as from other providers, gives users amazing capabilities.

However, with such great capabilities comes great responsibility. Of course, we trust users to be able to make proper judgements as to how different data sources can be used together. But certain industries require proper auditing around this, and so being able to specify DLP policies is extremely important to any governance team.

Being able to set how data connectors can be used together (or, in the reverse, not used together) across both Power Apps as well as Power Automate flows is imperative in any modern organisation.

To date, Power Platform DLP capabilities have allowed us to categorise connectors (whether Microsoft provided or custom) into three categories. These categories specify how the connectors are able to function – they’re able to work with other connectors that are in the same category group, but cannot work with connectors that are in a different category group.

So for example, it’s been possible to allow a user to create a Power App or a Power Automate flow that interacts with data from Dataverse, but cannot interact with Twitter (in the same app or flow).

With this approach, it’s possible to create multiple DLP policies, and ‘layer’ them as needed (much like baking a 7 layer cake!) to give the functionality required per environment (or also at the tenant level).

Now this has been great, but what has been missing has been the ability to be more granular in the approach to this. What if, for example, we need to read data from Twitter, but don’t want to allow pushing data out to Twitter?

Well, Microsoft has now iterated on the DLP functionality available! It’s important to note that this is per connector, and will depend on the capabilities of the connector. What we’re now able to do is control the specific actions that are contained within a connector, and either allow or block each of them.

Let’s take the Twitter connector as an example:

We’re able to see all of the actions that the connector is capable of (the scroll bar on the side is a nice touch for connectors that have too many actions to fit on a single screen!). We’re then able to toggle each one to either allow or disallow it.

What’s also really nice are the options for new connector capabilities.

This follows in the footsteps of handling connectors overall – we’re able to specify which grouping they should come under (ie Business, Non-Business, or Blocked). As new connectors are released by Microsoft, we don’t need to worry that users will automatically get access to them.

So too with new actions being released for existing connectors (that we’ve already classified). We’re able to set whether we want them to be automatically allowed, or automatically blocked. This means that we don’t need to worry that a new connector action will suddenly become available for users to use that they perhaps should not be using.

From my perspective, I think that any organisation that’s blocking one or more action capabilities for a connector will want this to be blocked by default, just to ensure that everything remains secure until they confirm whether the action should be allowed or not.

So I’m really pleased about this. The question did cross my mind as to whether it would be nice to be able to specify this on a per environment basis when creating a tenant-level policy, but I guess that this would be handled by creating multiple policies. The only issue I could see around this would be the number of policies that could need to be handled, and ensuring that they’re named properly!

Have you ever wanted these capabilities? How have you managed until now, and how do you think you’ll roll this out going forward? Drop a comment below – I’d love to hear!

Environment types, capabilities & backups

Interesting title to start a blog post with, right? I can’t tell you how much I tried to work out what to call this, but then I figured that I’d just put at a high level what I’m going to be talking about!

So let’s start at the beginning. Environments within Dataverse. An environment is essentially a container for all sorts of different components, such as data models, apps, code, etc.

Examples of what an environment can contain

Within the Power Platform, there are different types of environments. As a quick recap, currently we have the following:

  • Default. Every Power Platform tenant has a default environment. We of course shouldn’t be using this for any proper development!
  • Production. Used for any Line of Business application
  • Sandbox. A sandbox environment is any non-production Dataverse environment. Isolated from production, a sandbox environment is the place to safely develop and test application changes with low risk.
  • Trial. Used to take out a trial
  • Trial (Subscription Based). Used to take out a trial when there’s subscription licensing in place
  • Developer. Personal environment, limited to one user. Previously called the Community plan.
  • Teams. Used when an app is created within Teams, to use a Dataverse for Teams environment. Doesn’t have full Dataverse capabilities, and has various limitations
  • Support. Only able to be created by Microsoft support during a support case. Is essentially a clone of an existing environment, used for diagnosis purposes.

Now, sandbox & production environments are automatically backed up – backups occur continuously, using Azure SQL Databases underneath. It’s also possible to create a manual backup instance of an environment as well, which usually takes a few seconds to carry out (restoring a backup, on the other hand, takes quite a bit longer…).

When restoring an environment, it’s not possible to restore to a production environment (though the backup could be from a production environment). It’s only possible to restore the backup to a sandbox environment – you’d then need to promote the environment from sandbox to production.

Let’s move away from backups for a moment. When we create an environment, we have the ability to select that the environment should be enabled for Dynamics 365:

This is actually a REALLY IMPORTANT CONSIDERATION! At this point in time, it’s not possible to update from a Power Platform Dataverse environment to then bring in Dynamics 365 capabilities. What this means is that if an organisation starts with just Power Apps, and then wants to expand into using Dynamics 365, IT’S NOT POSSIBLE TO DO THIS NATIVELY. Even Microsoft Support can’t do anything around this – you’d need to create a new environment, enable it for Dynamics 365, and then restore a backup to it.

It’s something that a lot of us would like to be in place, but we’re not sure if it’ll ever come about. This is a tweet of mine from 2019 that Charles Lamanna responded to (I was SO thrilled that he actually responded to me!!):

https://twitter.com/clamanna/status/1176629306484637696

However, it’s still not in place. As a result, we recommend to all clients that when they deploy a Dataverse environment, they toggle the switch above (Note: A Dynamics 365 license is NOT needed to toggle this). Once this has been toggled (without deploying any of the Dynamics 365 apps), the Dynamics 365 apps and functionality can be installed/deployed at a later point in time.

There are actually various capabilities, such as the Data Export Service (yes, I know it’s now been deprecated) that actually relied on having the environment enabled as a Dynamics 365 environment in order to work. We found this out the hard way at a client, and had to do an overnight environment re-build to get the capabilities in place.

But there’s one other thing to consider around the differences between a native Dataverse environment, and an environment which has been enabled for Dynamics 365. This is around backups.

Now, backups are of course very important (thankfully they now occur automatically, as mentioned above – I remember my on-premise days when I needed to run these manually!). But there are also some important differences in backup behaviour when it comes to environment types. See, it turns out that environments aren’t actually equal in backup behaviour. This is what actually happens:

  • Sandbox environments (all types) – backups retained for 7 days
  • Dataverse production environment (not enabled for Dynamics 365) – backups retained for 7 days
  • Dataverse production environment (enabled for Dynamics 365) – backups retained for 28 days

See that? Having Dynamics 365 enabled for an environment gives you FOUR TIMES as much backup retention time! That’s incredible!

Dataverse Environment enabled for Dynamics 365 – 28 days of backups available!

So not only are you able to then upgrade to Dynamics 365 applications at a later date, you then also have more peace of mind (hopefully you don’t need to use it though!) around keeping backups for longer.

This is really cool – I hope it helps you plan your environment implementation strategy! Have you ever come up against issues when using environments, or the type/s of environment? Drop a comment below – I’d love to hear!

Staying up to date with release information

Microsoft releasing new functionality can be an interesting experience, to say the least. As a cloud platform (SAAS – Software As A Service), functionality is released the entire time. A user could log off on Friday for the weekend, and come back on Monday morning to find that something has changed slightly, or a new button is present in the interface. Over time, most of us have come to accept this.

However this is for the ‘smaller’ functionality parts within the system, whether that’s Dynamics 365, or Power Platform related. There are of course two MAIN release announcements each year. These are the Wave 1 (Spring) and Wave 2 (Autumn) release windows, with information announced about what is included in each one publicly. This information usually starts to be available around 4-6 weeks or so before the release starts to hit.

Now that’s not to say that everything within a Wave release is released in a ‘Big Bang’ moment. Far from it actually, based on my experience. Microsoft will announce what is coming as part of the Wave release, along with projected timeframes as to when it will be available. Obviously, just because something has been announced for Day X doesn’t mean that it actually happens then, at least some of the time.

But there’s an inherent time-sink to being on top of all of this information. Firstly, people need to download the Wave release information (there’s one for Dynamics 365, and a second one for Power Platform), wade through all of the information, and somehow then remember it. Let’s just say that this can be challenging for a lot of people…

But what if there was somewhere where we could track this? Well, to date there hasn’t been, at least not until now.

Microsoft have created & made available the ‘Dynamics 365 & Power Platform Release Planner’, which can be found at https://experience.dynamics.com/releaseplans:

So just as a start, this is already MUCH better than the downloadable PDF documents for wave release information (admittedly the information is also available online as a Microsoft document, but still it’s lacking in certain areas).

But there’s more to this functionality than simply presenting a list of areas. Let’s take a look into some of these.

To begin with, there’s the sitemap on the left hand side. This allows us to select a specific area of interest, whether it’s Dynamics 365 or Power Platform (amusingly this reminds me a little of a model-driven app!).

Once in an area, we can then select between Planned features, Coming Soon features, and Try Now features by using the options in the menu bar. This is a nice little piece of functionality, in my opinion, allowing us to see what falls under each ‘category’:

By default, the items are displayed in a list format. However, we’re also able to toggle the view from the menu bar to a release date format, which shows us all items grouped by release month:

There’s also some filtering functionality, allowing us to narrow down the results even further:

Opening a line item (regardless of whether it’s being displayed as a list, or arranged by date) will give further information around the specific item. It also includes a lovely little timeline widget, showing the release dates information, as well as where it’s actually up to currently (which I think is great to have it as a visual reference!):

In here, links are included to documentation around the release overview, as well as specific documentation around the selected functionality item.

Now if this was all that there was, I think that truthfully I would be quite satisfied. It’s a much more modern interface, and really looks nice. I know that various colleagues of mine would be quite satisfied as well.

But….it doesn’t stop there. There’s something else, which is really the cherry on top of the cake icing! So what is it? Well, it’s the ability to create a PERSONALISED release plan information overview.

So on each item of functionality, there’s a button called ‘+ To my plan’:

Note: You do need to be signed into the portal to have this option available to you

Clicking this will add it to a personalised release plan, which you can access from the left-side menu. Here, all of the items that you’ve selected will show up. This is really cool, I think, as it allows you to see the overall picture, but also then focus on just the areas that you’re interested in:

It’s still got all of the functionality available for filtering, date/item sorting, etc. It’s also possible to toggle back to the ‘main’ view of all release information.

So in summary, I think that this is really cool. Admittedly (as it says on the site), it’s in BETA currently. I’m hoping that it’ll stick around, and come out of Beta pretty soon! Regardless, I’m definitely starting to make use of this already in tracking the upcoming features that I’m interested in.

Updates to the Power Platform Admin Center

There’s a saying amongst seasoned IT professionals who deal with Microsoft software. It goes something like this – ‘Why make do with one admin centre, when you could just have MULTIPLE admin centres to carry out functions!’.

It’s a bit of a tongue-in-cheek response to the numerous different admin centres that Microsoft technology seems to have. Now, I/we totally understand that over time, different (standalone) products have come together to co-exist, but their administration centres still differ.

Over time, Microsoft has been applying efforts to make them work better together, but it can still sometimes be quite frustrating not to know exactly where to go to in order to carry out specific function/s, or not to be able to see capabilities holistically overall in a single place.

So for example, we have:

  • Microsoft 365 Admin Centre
  • Power BI Admin Centre
  • Power Platform Admin Centre (which, for Dynamics 365 deployments, still leads users to the Classic Advanced Settings for some of the functionality…)
  • etc….

Now when it comes to Power Platform related items, admins would usually go to the Power Platform Admin Centre (which, though it has a URL of admin.powerplatform.com, auto-resolves to admin.powerplatform.microsoft.com – I have no idea why this is, given that no other admin centre seems to have this structure in place….another mystery…)

From here, we’d be presented with a list of environments, similar to the screenshot below:

The menu on the left hand side gave us a few of the different admin centres that we’re able to switch to. Alternatively, we could expand the overall menu to show us more capabilities, including other apps that we may wish to access:

So this is what we’ve been used to for the last few years. Essentially, information in different areas, and we’d need to go to each admin centre to find out what’s happening. So for example, if a Power Platform Admin user wanted to see any health advisories, they’d need to go to the Microsoft 365 Admin Centre to view the Service Health area there.

Not anymore! As part of the focus on unifying information across admin centres, Microsoft has now updated the functionality for this!

Now, with the new functionality, there’s a Home screen. On this, information is able to be presented to users, as well as applying one of several themes to the interface, such as a rainbow:

Now, in terms of information available to users, these are presented as ‘cards’. Within each card, information is shown, based on the card type:

At the moment, there are three cards to choose from:

Service Health

This section outlines any service health issues, such as outages or advisory information that users should be aware of. Clicking through it will bring users to the Service Health section of the Microsoft 365 Admin Centre:

From here, users can choose to switch across to other categories, such as Incidents, History & Reported Issues.

It’s (at least) one less click from the previous method, and I’m quite liking this. In my mind, it’s about making the information as accessible as possible (leaving aside that I think that Power Platform specific alerts should actually show within the Power Platform Admin Centre…)

Message Center

The second section is the Message Center. Here we’re able to see specific messages (yes, I know I have a LOT of messages sitting here!), and clicking on them will bring up the corresponding information directly within the same interface (which again, I’m really liking). So for example:

Nicely for messages, we also have options to filter the types of services that we want to see here. This, in my mind, is quite important, as we wouldn’t want Power Platform admins to be overwhelmed by messages that have absolutely no (usual) interest for them:

We also have the ability to specify which email notifications we want to be receiving. Again, we may be interested in some non-Power Platform notifications, but not want to see them directly within the Power Platform Admin Centre. Instead, we can specify to receive these via email – another nice touch!

Documentation

Finally, we have links out to various Power Platform (& Dynamics 365) related resources on the Microsoft website. These are all static (ie they’re provided by Microsoft), but hopefully in the future admins will have the capability to add custom links to other resources as well.

What is nice about the documentation section though is that it links to the various Community forums. Microsoft has recently started to promote these within the products, and they can be a very helpful resource at times!

There are also links to the Microsoft Centre of Excellence toolkit, which is a great resource that organisations should look to implement.

All in all, I think that this is a VERY good start to things. I’m hopeful that with Microsoft implementing this ‘home screen’ functionality with the ability to add cards to it, there will be additional cards that are released, bringing more information & functionality into the interface. I’m also hopeful that Microsoft will allow admins to add custom functionality here as well.

It’s a good first step – now let’s wait to see how this functionality iterates over time, and hopefully enables admin users in better ways!

‘Swarming’ for Customer Service

You might be wondering as to what I mean by ‘swarming’ in the title for this post. Don’t worry – it’ll become clear pretty soon! But first of all, let’s understand the story behind this new functionality.

Where to begin? Well, let’s take a look within an organisation. It doesn’t really matter what sort of organisation it is, as most organisations will have similar scenarios overall. So, what are we actually talking about?

Customer Service is, of course, a very important function of any organisation. Customers who have purchased products may need support, or perhaps are having issues, and need them to be resolved. Customer service agents are there to handle the customer queries, and look to resolve them as soon as possible.

However, it’s possible that the customer service agents don’t actually know how to resolve the customer query/issue themselves. They can, of course, use the Knowledge Base, but that requires knowledge articles to be created & maintained.

Now within the organisation, there will be SME’s (Subject Matter Experts). These are the people who know the matter in precise detail, often being the people who have created the product and/or process to begin with. But these people aren’t usually carrying out the customer service function.

So what this means is that the customer service agents need to try to work out who might actually know the answer/s, be able to help resolve the customer issue, etc. This can take time, be laborious, and perhaps not even be able to be carried out (depending on the organisation).

Hmm. So, what if the system might be able to actually SUGGEST the right people for a problem or issue? Even better, what if the system could support them being involved directly with the record/s, regardless of whether they’re a user within Dynamics 365 or not?

Enter the swarming capability onto the Dynamics 365 scene….

The aim of swarming is to bring together the necessary experts within Dynamics 365. Now, having said that, not all users will actually be interacting directly within Dynamics 365. What happens is that a specific Teams chat is created, so that users outside of the system can see the necessary information, and give input on the situation.

This builds on the existing functionality of being able to use Teams chats directly within Dynamics 365, but takes it to a whole new level, by having the system automatically suggest relevant people within the organisation, and bring them into the swarm chat!

There are some necessary steps to configure to enable this to happen.

Firstly, Teams needs to be enabled within Dynamics 365:

Once we start to turn things on, we can then see the following. This allows us to specify the types of records that we can use swarming on. This is great, as we may be building out custom functionality using other tables, and can enable swarming on these as well.

Once Teams chat has been enabled, we can then start setting up the swarming capabilities:

As part of the setup, we have:

  • The ability to set the general message that users will see when they create a swarm
  • Activating the case form that’s used for swarming (as this will include the functionality for swarming on the case form)
  • A Power Automate flow that will be used for sending notifications & invites within Teams for suggested (internal) users
  • Creating swarm condition rules, which allows us to bring in specific conditions around skills etc

So, how does this work in practice, once the system has been initially configured?

Users can go to the relevant record, such as a case record, and select ‘Create swarm’ from the menu bar:

This then allows the user to provide a summary of what the swarm is for, the scenario, as well as selecting the skills needed for the swarm. Dynamics 365 can also suggest skills that it thinks would be helpful as well:

Users from across the organisation are matched, according to the skills identified:

Notifications are sent to them within Teams, requesting their help with the matter:

When they accept the invitation, they’re then brought into the swarm:

In fact, the members of the swarm aren’t actually accessing the swarm information within Dynamics 365. Instead, they’re seeing & interacting with the swarm within Teams itself!

Once the swarm is active, information can be shared, and a solution found. The swarm can then be successfully closed down:

This is truly amazing. Obviously collaboration on issues is important, especially when considering that we’re trying to resolve customer issues as quickly as possible! I’m also really excited about this, as I was part of the group that Microsoft initially reached out to for feedback on its capabilities.

To now be able to collaborate with users who sit outside of Dynamics 365, but have them access the necessary information to help resolve things, is just mind-blowing. So many scenarios that come to mind as to how this can really empower organisations!

Can you think of a way in which this could change things in your own organisation, or at a client? Drop a comment below – I’d love to hear more!