Active or inactive, that is the question?!?!

Catchy title, right? Well I was wondering what exactly I should use for this blog post, and as you’ll see as we go through things, this is probably quite a good paraphrase to use.

So, where to start? Well, with a customer, of course! Now, this customer has been running live with a custom Dynamics 365 solution for a little while. Importantly for this story, there have not been ANY releases in quite a few months. This is of course good to bear in mind, given that we can all, um, occasionally find that a release could cause an issue, somewhere, sometimes…

Part of the capabilities that they’re using is bringing in Leads, and qualifying them appropriately. As part of this process, various custom attributes (aka columns) have been added to the Lead table, along with corresponding columns on the Contact table. There’s also some custom logic that, when a lead is qualified, copies the values from the Lead record to the Contact record, updating it (essentially extending the standard capabilities of the system).

This has all been working well to date, and the customer team has been very happy with their system. Until it stopped working last week. Which was strange, as nothing seemed to have changed at all!

When trying to qualify leads in the system, they were getting the following error message:

Cryptic, right? This seemed a little more interesting as well, given that when only basic information was entered on a Lead record (eg First Name, Last Name, Phone Number), it didn’t matter how many leads existed with the same information – they qualified without a problem.

However, using any custom columns that had been added to the table caused this error to occur.

The first thing that I did was to check that there had been no updates released to Production. This was confirmed as being the case. I then also checked that there had been no OTHER solutions released to Production (as this could have impacted it). Thankfully there hadn’t – the system looked to be in as fine a shape as it had been in for a while.

OK – on to the next step. What updates have been released by Microsoft? Since we were able to pinpoint the date that the functionality had stopped working, we went to find the corresponding Learn article about the release (Update 22102 – Release Notes | Microsoft Learn). Don’t worry about clicking through to read it – there’s essentially not much in it, and there’s nothing at all around the Lead table or its functionality!

Continuing to dig around, I really wasn’t sure what was causing this, but obviously had to work it out & figure out a fix! It was quite a dilemma.

This is where the amazing Microsoft community came into play. I noticed a post by Jeroen Scheper on one of the channels that I’m on. It turns out that he was having the same issues, so we started to collaborate on it. This both reassured me (that it wasn’t just me), but also increased the confusion, as we couldn’t work out what was going on underneath to cause this!

Raising it with Microsoft (we both actually raised support incidents), I had an amazing support call almost immediately. Demonstrating the problem, I was told that it was due to Duplicate Detection rules.

Now I’ll admit that this confused me somewhat. See, I had already checked the Duplicate Detection rules, but nothing had been changed, and no new rules had been implemented.

Getting the support agent to walk me through things, they told me that I had to unpublish the rules, modify a setting on them, and then re-publish the rules. The setting that had to be updated (on each rule) was the ‘Exclude inactive matching records’ option.

This again caused me to be confused. Why was the system having issues with inactive records? Surely qualified leads are active records, but just qualified (& then being locked down as a result)?

Well, it turns out that my perspective of how this works was actually incorrect. As we (hopefully) all know, whilst all records have a Status value (eg Active, Inactive), there are some records that also have a Status Reason value.

In fact, the ‘State Code’ choice in Dataverse is restricted (we can’t modify it), and seems to have some quite interesting functionality running behind it. Depending on which table is accessed, there are different options available within it.

For example, the Lead table shows ‘Open’, ‘Qualified’ and ‘Disqualified’; the Contact table shows ‘Active’ and ‘Inactive’; and the Task table shows ‘Open’, ‘Completed’ and ‘Cancelled’.

Anyhow – it turns out that when a Lead record is qualified or disqualified, though it’s not obvious in the user interface, behind the scenes the record is actually being deactivated!

More information on this can be found at Qualify and convert leads to opportunity | Microsoft Learn.
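If you want to see this behaviour for yourself, the record state is easy to inspect via the Dataverse Web API. Here’s a minimal PowerShell sketch – the environment URL and token are placeholders, and it assumes you’ve already acquired an OAuth bearer token for the environment (eg via the Power Platform CLI or Azure CLI):

```powershell
# Query qualified leads, and show their state/status values.
# In the default metadata, statecode 1 = 'Qualified' on the Lead table -
# which is not the 'Open' (0) state, hence these records counting as inactive.
$orgUrl = 'https://yourorg.crm.dynamics.com'   # placeholder
$token  = '<bearer token>'                     # placeholder

$response = Invoke-RestMethod `
    -Uri "$orgUrl/api/data/v9.2/leads?`$select=fullname,statecode,statuscode&`$filter=statecode eq 1" `
    -Headers @{ Authorization = "Bearer $token"; Accept = 'application/json' }

$response.value | Select-Object fullname, statecode, statuscode
```

Any leads returned here are ‘inactive’ from a duplicate detection perspective, despite looking like perfectly normal qualified leads in the UI.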

So, this was the underlying reason behind the error message. Obviously Microsoft had updated something, which then caused this to fail. I don’t know how many different customers may have been (or still be?) experiencing the issue, but I think that the error message at least could be a little clearer? Perhaps including a link to the relevant Microsoft documentation page, for a start.

Well, thankfully this was put to bed, and I was quite relieved (as was the customer). And this is how I came up with the title of this blog post!

Have you ever had something similar happen to you? Drop a comment below – I’d love to hear!

Power Platform Capacity Monitoring

If I look back at customer engagements over the last few years around Power Platform, whether for a new capability or an existing one, there was ONE thing that stood out above all: the ability to track capacity usage over time. To be honest, most organisations weren’t really doing very well at it.

For those who are unaware, there are actually three different types of capacity present within Power Platform environments. These are:

  • Data
  • File
  • Log

Each one is used for a specific purpose – broadly speaking, File holds all attachments that are uploaded directly into Dataverse, Log is used for auditing purposes, and Data holds everything else (hence the name)!

Now this data is shown within the Power Platform Admin Centre, under the ‘Resources/Capacity’ section. An example of this is:

There’s also a nice little breakdown of capacity allocation through licenses etc, which essentially shows where the available capacity has come from:

If we drill down a bit further, we can open up a specific environment, and see not only the overall usage per capacity type, but also which tables are consuming the most amount of data:

All of this is well & good so far, for someone wanting to take a look at what is currently happening. But this is a manual action – it is possible to manually export the data, but again, this isn’t automated.

It’s also not possible (at least not at this point in time) to query the underlying records that hold these values. So we’re a little stuck. If an organisation wanted to see historical data usage, and/or predict data trends (such as ‘how much capacity would we need to have in 6 months if we continued our scaling’), there’s no way to do this. At least not automatically – someone would need to store the values down manually, then report on it. A hassle, to say the least.
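Just to illustrate how manual that workaround is, here’s a minimal PowerShell sketch of it – the file name and parameters are purely illustrative inventions of mine. You’d read the values off the Admin Centre screen, save this as (say) Log-Capacity.ps1, run it, and then trend the CSV in Excel or Power BI:

```powershell
# Append a manually-read capacity snapshot to a running CSV log,
# so that usage can be trended over time.
param(
    [Parameter(Mandatory)] [string]  $EnvironmentName,
    [Parameter(Mandatory)] [decimal] $DataGB,
    [Parameter(Mandatory)] [decimal] $FileGB,
    [Parameter(Mandatory)] [decimal] $LogGB
)

[pscustomobject]@{
    Date            = (Get-Date).ToString('yyyy-MM-dd')
    EnvironmentName = $EnvironmentName
    DataGB          = $DataGB
    FileGB          = $FileGB
    LogGB           = $LogGB
} | Export-Csv -Path '.\capacity-history.csv' -Append -NoTypeInformation
```

Workable, but hardly something you’d want a person doing by hand every week – which is exactly the gap I wanted to close.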

Now when it comes to looking overall at Power Platform, the Center of Excellence (CoE) Starter Kit is really quite amazing. The Microsoft PowerCAT team continue to iterate on existing functionality within it, as well as bring in new functionality.

At this point in time, however, it doesn’t have any capacity monitoring in it. Well, it sort of does – we can implement notifications to alert us when capacity reaches a certain value. But this doesn’t solve the challenge as laid out above.

So with this in mind, I set out to create a solution to handle it. I’ve always wanted to create some sort of tool for giving back to the community & helping others, and I saw this as my chance to do so (I’m in awe of the various XrmToolBox tool creators, for the record).

So, I’m releasing a capacity monitoring tool. I’m using GitHub as the host, and the repo can be accessed at https://github.com/thecrmninja/Power-Platform-Capacity-Monitoring (it was also a learning experience in how to use GitHub as a source repository, as I’ve not done that before!).

Model-Driven App:

Reporting Dashboard:

This is just the first version – I have various ideas about how to iterate on it, and tweak functionality. Each release will include release notes & important information to be aware of (such as the security roles needed to run it). Also importantly, thanks to the amazing Matt Collins-Jones for reviewing some of my work around this.

The tool is aimed at IT/Power Platform admins who are already familiar with the Microsoft CoE toolkit solution, and have appropriate access to it.

If you find any issues, please raise an appropriate GitHub Issue item, and I’ll look into it. Also, if you have any ideas that you think could be worthwhile, please feel free to suggest them!

Finally, I’d be interested in hearing how you think this could support you or your organisation – feel free to drop a comment below!

Interacting with Microsoft

People sometimes wonder about the best way to interact with Microsoft. In fact, this post isn’t strictly aimed at interacting with Microsoft, but can also be taken as a general guide to interacting with any organisation. The reason for deciding to write about this comes from a conversation that I had last week with a good friend, who was having trouble finding a resolution to an issue.

Let’s start at the beginning. We, or our customers, have relationships with suppliers such as Microsoft. We’ll order software (licenses), need to have them supplied to us (show up in our account), and sometimes there may be issues that we need help/support with. There are obviously general support channels available that support tickets can be raised through, but there are also other avenues to consider as well.

Apart from the ‘professional’ relationship/s that may be in place, we may also have ‘personal’ relationships with members of different teams. These can happen in various different ways, such as speaking together at events, organising communities, etc. They are very valuable to have in place, & many people that I know, as well as myself, strive to improve & increase the network & connections that we have with Microsoft & other organisations.

However, there’s something very important to keep in mind. Just as we are doing our day job (what we’re paid to do), they are as well. At the end of the day, they (as with ourselves) need to ensure that the job gets done.

So if we reach out to ask something from them, we’re essentially asking for a favour, usually without anything reciprocal being able to be offered. A really good analogy for this, shared previously with me by Mark Smith & Chris Huntingford, is the ‘Sweet Jar Concept’.

Here’s how it goes. Imagine that the person has a jar with 100 sweets in. There are a limited number (the number itself isn’t important though) available, and the person has to choose who to give the sweets to. If we ask for a favour without knowing them, it’s highly unlikely to be granted. Even if we do know them somewhat, it may still be unlikely – they’re not going to be getting any return on the sweet that they’re giving out. Potentially if we know them well, and have proven in the past that we’re of value to them, we’ll get a sweet.

But even if we do know them well, if we keep asking for sweets (aka favours), the likelihood of them being granted will diminish (rapidly). Again – there’s a limited supply of them, and we’re not going to be looked on favourably if we keep coming back & asking for more, whilst not giving anything in return.

So HOW could we go about this, to set ourselves up for success (ie getting the outcome that we’re after)? Well, this is actually quite simple – we need to identify who will be gaining something by helping us. Let’s explain this in more detail.

Within Microsoft (& any organisation, really), people have metrics that they need to meet for their role. These are usually referred to as KPIs (Key Performance Indicators), and are used for things like salary & role progression. What we should be doing is finding the right person (or team) that has one or more KPIs aligned to what we’re trying to accomplish.

Let’s use the example here of the situation with my friend last week. He had a client who had ordered licenses from Microsoft that were needed for a project to start, but hadn’t appeared in the customer account yet. If the licenses weren’t there on-time, the project would need to be delayed, which would be expensive (& very annoying) for the customer.

On hearing the situation, I suggested to him to find the person (or people) within Microsoft who’d be aligned towards ensuring the situation was remedied ASAP. Examples of these people could be:

  • Microsoft Account Manager. This person would be aligned from the Microsoft side to ensure that the customer would have everything that they needed to be successful
  • Microsoft Sales Team. If there was a sales team involved with the license purchase, they would be very aligned to ensuring that the licenses had actually been procured and were showing up in the customer account!
  • Microsoft Account Technology Strategist. This is the person responsible for designing the strategy and architecture to drive digitalisation and innovation for the customer

Now the above list isn’t exhaustive, and is specific to the scenario above. Additionally, the people mentioned might not be able to deal with the situation themselves, but if not, they’re more than likely to know the right person/team who can.

With this approach, we’d be lined up for success in three ways:

  1. We’d (hopefully) get the immediate situation looked at and resolved
  2. We’d be giving our connections the ability to align to their KPI’s, and show results for them
  3. We’d be showing our value to our connections, which can then help if we have a favour to ask in the future (that’s not necessarily aligned to KPIs)

So in a nutshell – when we look to try to get something dealt with/resolved, we should ask ourselves who’s best aligned professionally to help us, with it being in line with their professional goals. This way we can drive value, as well as giving goodwill all round.

Have you ever been in a situation where this may have helped? How did you handle it? I’d love to hear – please drop a comment below!

Power Platform ALM Changes

As a starter for 10, if you haven’t yet looked into ALM for Power Platform, you should most definitely be doing so! ALM is, of course, Application Lifecycle Management. This is how, in a nutshell, we move solutions between environments.

In the good old days, this was done manually of course (CRM 4.0, I’m looking at you!). Today, though it is of course still possible to export/import solutions manually, it’s not the Microsoft best practice method. Doing it manually also means that it’s unlikely that you’ll have appropriate source control for your solutions, which, let’s face it, isn’t the best.

Want to look at a previous solution version? Hmm – do you still have it saved on your machine or not?

So we should generally know why we’d want to use ALM. But which tooling do we actually use for it? Going back to the on-premise days, there was TFS (or Team Foundation Server, to give its full name). This was a full source control repository, allowing developers to check in/check out code, build solutions, deploy them, etc.

With the move to ‘cloud based systems’, the TFS replacement is Azure DevOps (or ADO, as it’s usually referred to). ADO works in essentially the same way as TFS did (some differences, but they’re not really relevant here), but does so through the cloud.

When it comes to Power Platform solutions, ADO uses the ‘Power Platform Build Tools’ capabilities to hook into Dataverse & pick up solutions. The tools essentially give ADO the ability to connect to a Power Platform environment, build/export solutions, deploy solutions, etc.

More information on the toolset can be found at Microsoft Power Platform Build Tools for Azure DevOps – Power Platform | Microsoft Docs

Now there are some limitations to the Power Platform Build Tools. In fact, I’d be so bold as to say that currently they’re not in a fully mature state. It’s not possible to do everything that you can manually (well, not with the inbuilt capabilities – there are some ‘hacks’ around that can extend them). At the moment, it’s essentially 1.0.

Well, Microsoft is announcing that they’re now releasing 2.0 of the Power Platform Build Tools this week!

In fact, this is so new that at the time of writing, there’s no Microsoft Docs page available for it! So what does version 2.0 bring, and why is Microsoft releasing a new version?

So Microsoft has actually had this in planning for a while. There’s a lot going on with GitHub, as we well know, and Microsoft wants to drive the consistency of the experience for users forwards. At the moment, they work in somewhat different ways, and the aim is to bring this to parity.

The main change that the new version has is that instead of tasks being PowerShell based (which they are currently), now the tasks will be Power Platform CLI based. So Microsoft is changing the underlying working method from PS to CLI. Some of us will, of course, already be familiar with the way that the CLI works, and it’s really nice to see that the capabilities will now be part of ADO.
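If you’ve not yet played with the CLI, here’s a hedged sketch (run from PowerShell, and assuming the Power Platform CLI is installed) of the sort of operations that the new tasks wrap – the URLs and solution name are placeholders:

```powershell
# Authenticate against the source environment (placeholder URL)
pac auth create --url https://yourdevorg.crm.dynamics.com

# Export an unmanaged solution - broadly what the v2 export task does under the hood
pac solution export --name MySolution --path .\MySolution.zip --managed false

# Authenticate against the target environment, and import the solution
pac auth create --url https://yourtestorg.crm.dynamics.com
pac solution import --path .\MySolution.zip
```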

Now don’t start worrying that your current ADO pipelines (v0) will suddenly stop working. Microsoft is not doing anything with v0 at this point in time (though they may potentially deprecate it in the future). So all of your existing ADO pipelines using the Power Platform Build Tools will continue to work, but no new features are going to be released for them.

In terms of switching to using v2, it’s really quite simple – in the classic pipeline editor, you’ll need to change the task version for each Power Platform Build Tools task (via the ‘Task version’ dropdown) from 0.* to 2.*.

If you are currently using YAML (as so many wonderful developers do) to author pipelines, you’ll need to bump the version suffix on each task in the YAML code – for example, PowerPlatformToolInstaller@0 becomes PowerPlatformToolInstaller@2.

It’s very important to note that it’s not possible to mix and match task versions. If you do this, the ADO pipeline will fail, so please don’t try this!

I’m really excited about this, and to see that the CLI capabilities are being brought into play for ADO. I’ll admit that I’m wondering what else will be released (in the fullness of time), as I’m sure that this is just the start of some great new stuff!

One of the things that I’m REALLY hoping for is the ability to use ADO pipelines to migrate Power Apps Portals (or Power Pages), as currently this is only possible using the Power Platform CLI, or the Configuration Migration Tool. It would be amazing to be able to do these with ADO pipelines as well!

PL-500: Microsoft Power Automate RPA Developer

RPA (or Robotic Process Automation) is a capability that Microsoft has been developing for a while within the Power Platform space. Whilst cloud flows can be used to interact with any system that has an API in place, many organisations have (legacy) systems that have no API, so interacting with them can be challenging. RPA capabilities allow organisations to interact with any system overall, thereby enabling & empowering businesses holistically.

I’ve been aware for a while that there’s been an exam coming out for RPA, though it’s taken a bit of time to land. That’s fine though – I can’t really think of any absolute rush to have it in place. I do think that over time, just as with some of the other certifications, it will become a requirement for solution or specialisation status.

The official page for it is at https://docs.microsoft.com/en-us/certifications/exams/pl-500. The specification for it is:

Candidates for this exam automate time-consuming and repetitive tasks by using Microsoft Power Automate. They review solution requirements, create process documentation, and design, develop, troubleshoot, and evaluate solutions.

Candidates work with business stakeholders to improve and automate business workflows. They collaborate with administrators to deploy solutions to production environments, and they support solutions.

Additionally, candidates should have experience with JSON, cloud flows and desktop flows, integrating solutions with REST and SOAP services, analyzing data by using Microsoft Excel, VBScript, Visual Basic for Applications (VBA), HTML, JavaScript, one or more programming languages, and the Microsoft Power Platform suite of tools (AI Builder, Power Apps, Dataverse, and Power Virtual Agents).

Now here’s the thing. I occasionally work in the automation space, either on customer projects, or when training users in the technologies. I wouldn’t describe myself as an advanced automation developer (whether cloud or RPA capabilities). I’m most definitely NOWHERE near the level of legends such as Matt Collins-Jones, for example (go check him out if you don’t know about him!).

So I knew that I may be a bit challenged when taking the exam, especially in the more ‘pro dev’ space (aka JSON etc). In fact, I didn’t actually realise that the exam specification included that sort of thing. I know, I should have – it’s aimed at developers overall…shows that I need to brush up on reading things properly!

Also, there’s still quite a bit of a focus on Power Automate cloud flows – it’s not JUST about RPA capabilities.

Now, really nicely, there are already Microsoft Learn pathways available (which have been around for a while, and updated appropriately). This really is a big help, I feel, especially for people who are new’ish to RPA.

Of course, there’s a lovely shiny two star badge awarded when passing the exam, along with the title of ‘Microsoft Certified: Power Automate RPA Developer Associate’:

As with previous exams, I sat it from home (the proctored experience). Learning from previous times that I’ve taken exams, I ensured that my workspace was entirely clear from everything. As a result, the check-in process happened automatically, and I didn’t need to engage with any proctors at all (which was quite nice actually).

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

  • Cloud flows vs RPA flows
    • Capabilities of each
    • When to use each (ie how to handle different scenarios)
    • How to trigger each one
  • Cloud flows
    • Different types of triggers, & when each type should be used
    • Different types of actions, and the capabilities of them (at a high’ish level – expected to know common Microsoft actions, but not need to know all of the hundreds of different ones!)
    • Controls/operators. What they are, how they can be used to accomplish different requirements
    • JSON formatting & syntax
  • Business Process flow vs Business Rules
    • What each is
    • When to use each one
    • Capabilities
  • RPA flows
    • Common actions, how they work, capabilities of them
    • How expression syntax works within them
    • Debugging capabilities, and what to use when
    • How to interact with desktop applications
    • How to interact with websites
      • How data values can be used
      • How data tables can be used
      • How to use data that’s extracted from a website
    • Troubleshooting functionality
  • Usage of automation capabilities from Office 365 applications such as Excel & Visio
  • Loops
    • How they work for cloud & RPA flows
    • Troubleshooting
    • Implementing success/fail criteria
    • Error handling
  • Process Advisor
    • What it is
    • What it does
    • How it can help organisations
    • Limitations
    • What it cannot do
    • Process Mining vs Task Mining, & the important differences between them
  • Variables
    • How to handle variables across different environments
    • How to declare them (cloud flow vs RPA flow)
  • Runtime operations
    • How flows are triggered (async vs sync)
    • How flows are queued (cloud vs RPA)
    • How RPA flows are carried out when using machine groups
  • Artificial Intelligence (AI) capabilities
    • How AI can be used within flows
    • Different AI capability types (what each one can be used for)
    • AI within Power Platform, & AI within Azure Cognitive Services
  • Sharing flows
    • Different ways to share cloud flows
    • Different ways to share RPA flows
  • Application Lifecycle Management (ALM)
    • Solutions (managed vs unmanaged). Capabilities of each, when to use each type
    • AzureDevOps (ADO). What it is, when/how to use it, capabilities
    • Solution imports
    • Solution layers. What these are, troubleshooting functionality
    • Upgrade/Stage for Upgrade/Update. What each is, what each does, how/when to use each one
    • Moving desktop flows between users
  • Security
    • Security roles needed to create
    • Security roles needed to share/modify
    • Security roles needed to register machine for RPA
    • Security roles needed to register machine groups for RPA
    • Security requirements to run different types of RPA flows (how it interacts with desktop/s)
    • Data Loss Prevention (DLP) – how it affects creation & runtime of flows

Overall, I had 46 questions, with a single case study. I’m used to having at least two case studies, so it was nice to have just one of them this time.

So… it’s a lot of stuff. Definitely targeted much more at the ‘pro-developer’ end of the scale than someone who might occasionally automate things. It’s absolutely necessary to understand coding conventions, ALM, etc.

It’s definitely an exam where, if you’re not already hands-on with the skills needed, I’d highly recommend getting a decent amount of experience before taking it! Make sure that you have an environment in which you’re able to be hands-on with all types of automation (cloud & desktop flows), and really understand how they can be handled with an eye on enterprise scale!

If you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Wave 2 2022 – Omnichannel

I know it’s taken me a week (or two) to get round to this, but I’ve had other things on the go (such as starting my new job, for instance). However it wouldn’t be this time of year without doing a summary of new features for the Wave 2 2022 release.

As with previous posts in this area, I’ll be focusing on the Customer Service side of things, and also more precisely with a focus on the Omnichannel capabilities. However, whilst previously I’ve tended to focus just on the Omnichannel items, Customer Service is now being much more tied together with the Omnichannel offering, so it makes sense to broaden things out a bit.

So let’s start taking a look at the wonders that will (hopefully!) be in store for us within a few months:

Customer Service Workspace – enhanced layout

Public Preview – August 2022. GA – October 2022

I’ve previously taken a look at some of the capabilities of the Customer Service Workspace (see Omnichannel vs Customer Service Workspace), and how they compare to Omnichannel. With Microsoft now rolling out the ability to have multi-session capabilities within it, it’s sometimes a hard decision for organisations to decide which one to use (there are some key differences though).

With the upcoming release, there are going to be new layouts for the site map (navigation menu), sessions & tabs. Some of the key changes coming are:

  • Sessions and child tabs are displayed horizontally
  • Improved handling of overflowing tabs and sessions
  • Tab bar is visible only if multiple tabs are present in a session
  • Improved site map that’s accessed from the hamburger icon with support for grouping and areas
  • Improved accessibility with 400% zoom mode
  • Increased predictability of session closure in multisession apps
  • In-app notifications aligned with the multisession navigation

These look to be quite good (I definitely wouldn’t have thought of all of them!), and I can’t wait to try them out for myself.

Single sign-on capabilities

GA – October 2022

One of the things that can be quite frustrating for customers is that if they’re interacting through live chat capabilities, and then switch over to a Power Virtual Agent, they need to re-authenticate. This is of course not quite optimal for a seamless customer experience.

Microsoft are therefore enabling single sign-on capabilities. What this means in practice is the following:

  • Authentication contexts are shared between Power Virtual Agents and Omnichannel live chat sessions. If a user authenticates in one of them, then they become authenticated across all of the capabilities. There’s no need to authenticate per communication type
  • Customers can start with an unauthenticated conversation, and then authenticate at a later point in the conversation. This will then continue as authenticated across the different channels that they’re communicating through

Voice channel – expansion of availability

GA – October 2022

The voice channel (which I still need to do a write up on!) is really amazing, allowing customers to call in directly via phone etc. It’s been rolled out already to several regions, but customers in other regions have been asking for it.

Microsoft has now confirmed that the voice channel will be available in the following countries:

  • United Kingdom
  • Canada
  • India
  • Switzerland

This is a great move – it still doesn’t mean that every country has the voice channel available, so I expect that Microsoft will keep on adding more countries for the availability of this (I know that there’s a decent amount of back-end systems that are needed, which is why it’s taking this long to get in place).

Voicemails

GA – January 2023

This one is getting me really excited. Obviously, being able to connect to a customer service agent is important. But what if the agent isn’t around? We could of course send an email, but if we’re already connected through a specific method of contact, ideally we’d like to continue with that method.

Especially when it comes to actually calling into an organisation, it can be quite frustrating to not reach the person we’re trying to get hold of, and to then need to send an email.

Voicemail capabilities, coming in early 2023, will mean that customers will be able to leave voicemails for customer service agents to pick up. The agents will be able to set up welcome messages, as well as manage & playback voicemails that have been left.

This is really cool – I’m wondering if there are going to be AI capabilities included in this in the future, so as to automatically transcribe voicemails for the agents, for instance. I don’t think that it would take a LOT more technical capability – we already have Azure Cognitive Services that audio can be fed through for a written transcription to be produced.

Customer Callbacks

GA – January 2023

One of the frustrations that I think is shared universally is when contacting an organisation, and being told that you’re in a queue. Not only are you in a queue, but there may be dozens/hundreds/thousands of people ahead of you…and the number doesn’t seem to be decreasing at a rapid rate.

Some organisations offer the ability to ‘reserve’ your spot in the queue, and will call you back when you’re next. To date, this hasn’t been a feature of Omnichannel.

However, coming in early 2023, this feature will be rolling out! It will give customers the ability to keep their queue position, and to choose if they’d like a callback to happen when they’re at the front of the queue. Note that this would require a phone number to be provided, for the customer service agent to use to contact the customer.

I think that this is a nice feature, but I’ll be curious to see how it plays out ‘in the real world’. I know that when my local doctor’s surgery implemented this, it was supposed to be great, but in practice it actually didn’t work well.

I’ll be looking deeper into the different functionalities when they land, and will share them here. If there’s anything you think would be helpful to focus on, drop a comment & let me know!

Recognition as Microsoft Partner for Business Application Solutions

It’s been a little while since I’ve previously blogged around developing customer solutions and the Microsoft Specialisations. Since I spoke about it last year (Apps & Microsoft Partner Specialisations) the landscape has moved on a little, and I thought that it would be good to take a look again at it.

Currently in the Business Applications space, there’s a single specialisation. This is the ‘Microsoft Low Code Application Development Advanced Specialisation’, which is covered in detail at the Microsoft page for it (Microsoft Low Code Application Development Advanced Specialization).

In essence, this specialisation is aimed at partners who are developing Power Apps (yes, this is specifically aimed at Power Apps), and has been around for a year or so.

In order for Microsoft to track the qualifying metrics against this specialisation, it’s very important to carry out the PAL (Partner Attach Link) process. The details of how to do this are in my earlier post, which includes some of my thoughts at the time around how a partner should best implement the procedure.
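For reference, the registration itself is done with the Az PowerShell module. A quick sketch below – the partner ID is a placeholder, so substitute your organisation’s MPN ID:

```powershell
# One-time install of the management partner module
Install-Module -Name Az.ManagementPartner -Scope CurrentUser
Connect-AzAccount   # sign in as the user doing the development work

# Link this user account to your partner (MPN) ID - '1234567' is a placeholder
New-AzManagementPartner -PartnerId '1234567'

# Check, update or remove the association later if needed
Get-AzManagementPartner
Update-AzManagementPartner -PartnerId '1234567'
Remove-AzManagementPartner -PartnerId '1234567'
```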

Since then, my blog post has gained a good amount of traction, and several Microsoft partners have engaged with me directly to understand this better, and to implement the process into their project playbook. I’m really delighted at having been able to help others understand the process, and the reasoning behind it.

Now that’s all good for a partner who is staying in place at a customer. However there are multiple scenarios that can differ from this. Examples of this are:

  1. Multiple partners developing a single application together
  2. One partner handing over the application to a second partner for further development
  3. One partner implementing a solution, with a second partner providing support

Now, there’s really a single answer to all of the above scenarios, but it’s a matter of how to go about implementing this properly. Let me explain.

Originally, all developers would register PAL, and this would then be tracked through the environment, and associated appropriately to the partner. This worked because the developers were the creators of the apps.

This has now changed a little bit. Microsoft now recognises PAL using both the Owner of the app, as well as any Co-Owners of the app. This is a little more subtle, so let’s explain it in some detail.

It is possible, of course, to change the owner of an app. More common, however, is the practice of adding co-owner/s to an app (I always recommend this as best practice actually, to remove key-person responsibility risks).

Note: Changing the actual owner of an app requires the usage of a PowerShell command
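For anyone curious, this is roughly what that looks like, using the Power Apps admin PowerShell module (the IDs below are placeholders for illustration):

```powershell
Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Scope CurrentUser
Add-PowerAppsAccount   # sign in as a Power Platform admin

# Change the owner of a canvas app - app, environment & user IDs are placeholders
Set-AdminPowerAppOwner `
    -AppName '<app-id>' `
    -EnvironmentName '<environment-id>' `
    -AppOwner '<new-owner-object-id>'
```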

So what happens now is that Microsoft will track the owners/co-owners of any app that’s deployed, and PAL association will flow through this. But there are a couple of caveats which it’s important to be very aware of!

  1. All owners/co-owners must have registered PAL with their user accounts (if using a service principal/service account as an owner, there’s a way of doing this using PowerShell)
  2. Microsoft will recognise the LATEST owner/co-owner association with the app as the partner organisation that will receive PAL recognition

Now if a customer adds co-owners to an app, this shouldn’t be an issue (as none of the users would have registered PAL). But if there are multiple partners in place, ONLY THE LATEST ONE WILL BE RECOGNISED.

Therefore to take the three scenarios above, let’s see how this would apply.

  1. Multiple partners developing a single app. Recognition would not work for all partners involved, just the latest one to associate with the app
  2. Partner 1 handing over app to Partner 2. Recognition would stop for Partner 1, and would then start for Partner 2
  3. Partner 1 implementing solution, Partner 2 providing support. Care would need to be taken that the appropriate partner is associated as owner/co-owner to the app, for PAL recognition.

It’s also important for both partners & customers to understand this, in the wider context of being careful about app ownership, and the recognition that it brings from Microsoft for partners delivering solutions. If a partner were to go into a customer, and suddenly start taking ownership of apps that it’s not involved in, I don’t think that Microsoft would look on it very approvingly.

Now, all of the above relates to Power Apps specifically, as I’ve noted. However, the PAL article was updated last week (located at Link a partner ID to your Power Platform and Dynamics Customer Insights accounts with your Azure credentials | Microsoft Docs), and it also interestingly talks about:

Note the differences between each item

Reading between the lines here, I think that we’re going to be seeing more advanced specialisations coming out at some point. Either that, or else partner status will be including these as well, as I can’t think of any other reason why PAL would need to be tracked for these as well! I’m also wondering if other capabilities (eg Power Virtual Agents, Power Pages, etc) will be added at some point as well…

Have you had any challenges with the PAL process? Is there anything more you’d like to find out about it? Drop a comment below, and I’ll do my best to respond!

New Platform DLP Capabilities

DLP (or Data Loss Prevention) is a very important capability in the Power Platform. Being able to bring together multiple data sources, both within the Microsoft technology stack as well as from other providers, gives users amazing capabilities.

However, with such great capabilities comes great responsibility. Of course, we trust users to make proper judgements as to how different data sources can be used together. But certain industries require proper auditing around this, and so being able to specify DLP policies is extremely important to any governance team.

Being able to set how data connectors can be used together (or, in the reverse, not used together) across both Power Apps as well as Power Automate flows is imperative in any modern organisation.

To date, Power Platform DLP capabilities have allowed us to categorise connectors (whether Microsoft provided or custom) into three categories. These categories specify how the connectors are able to function – they’re able to work with other connectors that are in the same category group, but cannot work with connectors that are in a different category group.

So for example, it’s been possible to allow a user to create a Power App or a Power Automate flow that interacts with data from Dataverse, but cannot interact with Twitter (in the same app or flow).

With this approach, it’s possible to create multiple DLP policies, and ‘layer’ them as needed (much like baking a 7 layer cake!) to give the functionality required per environment (or also at the tenant level).

Now this has been great, but what has been missing is the ability to be more granular in this approach. What if, for example, we want to allow reading data from Twitter, but not allow pushing data out to it?

Well, Microsoft has now iterated on the DLP functionality available! It’s important to note that this is per connector, and will depend on the capabilities of the connector. What we’re now able to do is to control the specific actions that are contained within a connector, and either allow or not allow them to be able to be utilised.

Let’s take the Twitter connector as an example:

We’re able to see all of the actions that the connector is capable of (the scroll bar on the side is a nice touch for connectors that have too many actions to fit on a single screen!). We’re then able to toggle each one to either allow or disallow it.
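As an aside, this doesn’t have to be done through the UI – the Power Apps admin PowerShell module has DLP connector-configuration cmdlets for the same thing. The sketch below is a hedged example only: treat the exact property names, the ‘PostTweet’ action ID, and the behaviour values as assumptions to verify against the current documentation before using:

```powershell
# Hedged sketch: allow the Twitter connector overall, but block posting.
# The shape of this object, and the action ID, are assumptions - check the docs!
$connectorConfigurations = [pscustomobject]@{
    connectorActionConfigurations = @(
        [pscustomobject]@{
            connectorId = '/providers/Microsoft.PowerApps/apis/shared_twitter'
            actionRules = @(
                [pscustomobject]@{ actionId = 'PostTweet'; behavior = 'Block' }
            )
            # Default behaviour for actions added to the connector in future
            defaultConnectorActionRuleBehavior = 'Allow'
        }
    )
}

New-PowerAppDlpPolicyConnectorConfigurations `
    -TenantId $tenantId `
    -PolicyName $policyName `
    -NewDlpPolicyConnectorConfigurations $connectorConfigurations
```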

What’s also really nice are the options for new connector capabilities.

This follows in the footsteps of handling connectors overall – we’re able to specify which grouping they should come under (ie Business, Non-Business, or Blocked). As new connectors are released by Microsoft, we don’t need to worry that users will automatically get access to them.

So too with new actions being released for existing connectors (that we’ve already classified). We’re able to set whether we want them to be automatically allowed, or automatically blocked. This means that we don’t need to worry that a new connector action will suddenly become available that users perhaps should not be using.

From my perspective, I think that any organisation that’s blocking one or more action capabilities for a connector will want this to be blocked by default, just to ensure that everything remains secure until they confirm whether the action should be allowed or not.

So I’m really pleased about this. The question did cross my mind as to whether it would be nice to be able to specify this on a per environment basis when creating a tenant-level policy, but I guess that this would be handled by creating multiple policies. The only issue I could see around this would be the number of policies that could need to be handled, and ensuring that they’re named properly!

Have you ever wanted these capabilities? How have you managed until now, and how do you think you’ll roll this out going forward? Drop a comment below – I’d love to hear!

Environment types, capabilities & backups

Interesting title to start a blog post with, right? I can’t tell you how much I tried to work out what to call this, but then I figured that I’d just put at a high level what I’m going to be talking about!

So let’s start at the beginning. Environments within Dataverse. An environment is essentially a container for all sorts of different components, such as data models, apps, code, etc.

Examples of what an environment can contain

Within the Power Platform, there are different types of environments. As a quick recap, currently we have the following:

  • Default. Every Power Platform tenant has a default environment. We of course shouldn’t be using this for any proper development!
  • Production. Used for any Line of Business application
  • Sandbox. A sandbox environment is any non-production Dataverse environment. Isolated from production, a sandbox environment is the place to safely develop and test application changes with low risk.
  • Trial. Used to take out a trial
  • Trial (Subscription Based). Used to take out a trial when there’s subscription licensing in place
  • Developer. Personal environment, limited to one user. Previously called the Community plan.
  • Teams. Used when an app is created within Teams, to use a Dataverse for Teams environment. Doesn’t have full Dataverse capabilities, and has various limitations
  • Support. Only able to be created by Microsoft support during a support case. Is essentially a clone of an existing environment, used for diagnosis purposes.

Now, sandbox & production environments are automatically backed up – backups occur continuously, using Azure SQL Databases underneath. It’s also possible to create a manual backup instance of an environment as well, which usually takes a few seconds to carry out (restoring a backup, on the other hand, takes quite a bit longer…).

When restoring an environment, it’s not possible to restore to a production environment (though the backup could be from a production environment). It’s only possible to restore the backup to a sandbox environment – you’d then need to promote the environment from sandbox to production.

Let’s move away from backups for a moment. When we create an environment, we have the ability to select that the environment should be enabled for Dynamics 365:

This is actually a REALLY IMPORTANT CONSIDERATION! At this point in time, it’s not possible to update from a Power Platform Dataverse environment to then bring in Dynamics 365 capabilities. What this means is that if an organisation starts with just Power Apps, and then wants to expand into using Dynamics 365, IT’S NOT POSSIBLE TO DO THIS NATIVELY. Even Microsoft Support can’t do anything around this – you’d need to create a new environment, enable it for Dynamics 365, and then restore a backup to it.

It’s something that a lot of us would like to be in place, but we’re not sure if it’ll ever come about. This is a tweet of mine from 2019 that Charles Lamanna responded to (I was SO thrilled that he actually responded to me!!):

https://twitter.com/clamanna/status/1176629306484637696

However, it’s still not in place. As a result, we recommend to all clients that when they deploy a Dataverse environment, they toggle the switch above (note: a Dynamics 365 license is NOT needed to toggle this). Once this has been toggled (without deploying any of the Dynamics 365 apps), the Dynamics 365 apps and functionality can be installed/deployed at a later point in time.
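For anyone scripting environment creation, the admin PowerShell module can create a Dataverse environment with Dynamics 365 apps enabled at the same time. A hedged sketch below – the parameter values are illustrative, and the -Templates value in particular is an assumption to verify against the docs:

```powershell
Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Scope CurrentUser
Add-PowerAppsAccount   # sign in as a Power Platform admin

# Create a production environment with a Dataverse database that's
# enabled for Dynamics 365 apps ('D365_CustomerService' is illustrative)
New-AdminPowerAppEnvironment `
    -DisplayName 'Contoso Production' `
    -Location europe `
    -EnvironmentSku Production `
    -ProvisionDatabase `
    -CurrencyName EUR `
    -LanguageName 1033 `
    -Templates 'D365_CustomerService'
```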

There are various capabilities, such as the Data Export Service (yes, I know it’s now been deprecated), that relied on having the environment enabled as a Dynamics 365 environment in order to work. We found this out the hard way at a client, and had to do an overnight environment re-build to get the capabilities in place.

But there’s one other thing to consider around the differences between a native Dataverse environment, and an environment which has been enabled for Dynamics 365. This is around backups.

Now, backups are of course very important (thankfully they now occur automatically, as mentioned above – I remember my on-premise days, when we needed to run these manually!). But there are also some important differences when it comes to environment types – it turns out that environments aren’t actually equal in backup behaviour. This is what actually happens:

  • Sandbox environments (all types) – backups retained for 7 days
  • Dataverse production environment (not enabled for Dynamics 365) – backups retained for 7 days
  • Dataverse production environment (enabled for Dynamics 365) – backups retained for 28 days

See that? Having Dynamics 365 enabled for an environment gives you FOUR TIMES as much backup retention time! That’s incredible!

Dataverse Environment enabled for Dynamics 365 – 28 days of backups available!

So not only are you able to upgrade to Dynamics 365 applications at a later date, you also have more peace of mind (hopefully you don’t need to use it though!) around keeping backups for longer.

This is really cool – I hope it helps you plan your environment implementation strategy! Have you ever come up against issues when using environments, or the type/s of environment? Drop a comment below – I’d love to hear!

Sharing multiple knowledge articles

Most of us are aware of the knowledge article functionality within Dynamics 365. For those who aren’t familiar with it, knowledge articles can empower users within any organisation with access to existing information.

Types of knowledge articles can include solutions to common issues, product or feature documentation, answers to frequently asked questions (FAQs), product briefs, and more. Being able to have access to this means that customer service agents can easily answer queries, without needing to spend (lots of) time investigating what’s happening, and find resolutions.

Note: At this point in time, the Knowledge Article functionality is still a restricted table within Dataverse. It requires either a Dynamics 365 plan, Customer Engagement plan, or Customer Service Enterprise plan

It’s great to be able to share information around within an organisation. There is native functionality for this, with the ability to share a knowledge article record directly with other users by clicking the ‘Email a link’ button:

Note: This is performed by going to the Knowledge Article table, opening up a record, and carrying out the functionality from there. It cannot be done through accessing Knowledge Articles on the Case form.

This will create an email (in Outlook) with a URL to the specific record:

This is of course very helpful, but is only internally facing. It’s not possible to send this to a customer who’s having an issue, as the customer wouldn’t be able to access the URL!

It’s also not particularly useful if we think about how customer service agents work, as they’d need to be moving through different areas of Dynamics 365.

Thankfully knowledge management is built into the customer service capability within the system. So for example, when we open a case record, we have the ability to search for knowledge articles directly in here:

This of course works much better from a customer service agent perspective – they have all of the functionality that they need in just one area.

So how can we share information directly with customers? Copying and pasting the information into a chat or email interaction seems quite manual and bothersome.

But there’s no need to do this manually, thankfully! Again, we have in-built functionality to handle this:

Clicking the little email icon on a knowledge article creates an email within Dynamics 365 (so you’ll need to have email enabled for users, to make this work!) with the information copied into it:

OK – this is much easier. We can send customers the exact information, so that they then have it to hand.

But here’s a new scenario – what if we wanted to send MULTIPLE knowledge articles to a customer at once? We could of course click the email icon each time, but that results in a separate email being created for each one, which means the customer will be receiving multiple emails. Not the most ideal scenario, surely?

Well, thanks to my amazing colleague Ryan Hunter-Stott, there is actually a way to do this! In fact, technically you could say that there are two approaches, but holistically it’s the same thing – it involves an email.

So, you can either:

  • Select to email the knowledge article to the customer, or
  • Create a blank new email

Within the email message, we have the following option to insert a knowledge article:

Clicking this brings up an interface to be able to search for knowledge articles. Clicking the envelope icon will then insert the information in the email:

Now it’s not possible to select multiple knowledge articles in this window. BUT, it IS possible to click the button to open it up again, and insert a second one. And then a third one!

This can then be sent out to the customer, with all of the information contained in it!

It’s a nice little touch, and I think it definitely beats copying & pasting information into an email manually!
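As a final aside – if you ever needed to pull article content out programmatically (for example, to build this kind of email in an automated process), the Knowledge Article table is queryable via the Dataverse Web API. A hedged PowerShell sketch below, assuming you already have a bearer token, and noting that statecode 3 is ‘Published’ in the default metadata (worth verifying in your org):

```powershell
# Retrieve published knowledge articles matching a keyword, including
# the article content for re-use (eg in an email body).
$orgUrl = 'https://yourorg.crm.dynamics.com'   # placeholder
$token  = '<bearer token>'                     # placeholder

$filter = "contains(title,'password') and statecode eq 3"
$result = Invoke-RestMethod `
    -Uri "$orgUrl/api/data/v9.2/knowledgearticles?`$select=title,articlepublicnumber,content&`$filter=$filter" `
    -Headers @{ Authorization = "Bearer $token"; Accept = 'application/json' }

$result.value | Select-Object title, articlepublicnumber
```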

Have you ever thought about this scenario? Did you find this functionality, or end up doing it in a different way? Drop a comment below – I’d love to hear!