Power BI & Dataverse Solutions

With the recent announcement that Power BI content can now be included in Power Platform solutions, LOTS of people were celebrating. Finally we’d be able to not only include Power BI reports within solutions, but also automate their deployment (aka ALM) as well! Celebrations all round… well, for the most part.

See, although the documentation (see Power Platform solutions can now include Power BI reports and datasets – Power Platform Release Plan | Microsoft Learn) states that Power BI reports & datasets can now be included in solutions, it doesn’t actually quite work like that.

What happens is that when Power BI reports and datasets (depending on what you’re wanting to do) are included in solutions, though they appear in the solution explorer window, they’re actually just a sort of shortcut to where they really live. Exporting the solution then brings the components into the exported solution file. This can be seen quite clearly when extracting the file on your computer:

As we can see from the image above, we now have the Power BI components within it.
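
If you want to poke around the exported file yourself, a couple of lines of PowerShell will do it. This is just a minimal sketch – the file paths and solution name below are made up, so substitute your own:

```powershell
# Hypothetical paths & solution name - replace with your own
$solutionZip = "C:\Temp\MySolution.zip"
$extractTo   = "C:\Temp\MySolution_extracted"

# Extract the exported solution file
Expand-Archive -Path $solutionZip -DestinationPath $extractTo -Force

# List everything that was exported - the Power BI components appear here
# alongside the usual solution.xml & customizations.xml files
Get-ChildItem -Path $extractTo -Recurse | Select-Object FullName, Length
```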

Note: If you were hoping to just go into it & see the Power BI report nicely, unfortunately you’re going to be disappointed. Instead, it’s exported as a ‘.pbipkg’ file, which it doesn’t seem possible to open with Power BI Desktop at all!

But it’s there, and supposed to work. So let’s go ahead & import it into the destination environment. After all, this is the whole point of solutions – being able to move components between places!

Note: For the purpose of this blog post, I’m using manual ALM (ie manually exporting & importing the solution). However, the same will be true for automated ALM (eg using Azure DevOps).

Now this can be easier said than actually done. See, it’s quite possible that you could experience an error when importing the solution into the target environment, such as the following:

The error message (‘This solution contains Power BI components, so it couldn’t be imported here’) seems to be helpful – well, to a point. We know that there are Power BI components in the solution – after all, that’s the point of it – but how come we’re not able to import it?!

Usually at this point I’d go to download the log file, and try to pinpoint the exact cause of the error. When presented with this specific error though, the log file doesn’t really seem to be of much help, despite trawling through each & every line in it. All it does is confirm that there has indeed been an import error, and that it seems to be due to the Power BI components in the solution.

Just to double-check this, I removed the Power BI components, exported the solution, and then imported it into a different environment. This worked absolutely fine without any errors! So it’s indeed got something to do with the Power BI components – but WHAT exactly is happening?

Well, the cause of this goes back to how Power BI components in Power Platform solutions actually work. As mentioned above, the Power BI items (report, dataset etc) are actually stored within Power BI itself. Yes, they’re included in the solution when we export it, but when importing them, they don’t actually save to Dataverse.

This is the absolutely KEY thing to know and understand. When importing a solution with Power BI components, they come in as part of the solution, but are published to Power BI. Not only are they published to Power BI, but a Power BI workspace is CREATED for them to live in (which will be specific per environment – a single Power BI workspace will not be shared across multiple Power Platform environments):

What this means in reality is that when the solution is imported, the Power BI workspace is created. However, it’s not created by the system itself – underneath everything, the creation of the Power BI workspace is driven by the USER who’s importing the solution. Now, if that user account does NOT have permissions to create Power BI workspaces… well then, it’s going to error out, which is EXACTLY what is happening here!

So, it’s absolutely vital that if you are including Power BI components in a solution, you ensure that however you’re importing it, the user account has privileges to create Power BI workspaces (as well as to publish reports to an existing workspace). Without this in place, you’re going to get some very confusing errors!
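
If you want to sanity-check the importing account before you even attempt the import, a rough way to do it is with the MicrosoftPowerBIMgmt PowerShell module. This is only a sketch of the idea (the test workspace name is made up, and it assumes the module is installed) – it’s not part of the import process itself:

```powershell
# Assumes the module is installed:
# Install-Module MicrosoftPowerBIMgmt -Scope CurrentUser

# Sign in as the account that will be importing the solution
Connect-PowerBIServiceAccount

# Can the account see Power BI workspaces at all?
Get-PowerBIWorkspace | Select-Object Name, Type

# Try creating a test workspace (hypothetical name). If workspace creation is
# blocked for this account in the Power BI admin settings, this will fail -
# and so will the solution import. Delete the test workspace from the Power BI
# service afterwards.
New-PowerBIWorkspace -Name "Workspace permission check - safe to delete"
```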

It’s also important to note that even if the solution is managed, it is still possible (with the appropriate user permissions) to edit the Power BI report & dataset. Including it in a managed solution does not lock it.

Also, I’d like to thank Laura GB for her inspiration on this topic – with my limited Power BI knowledge, I usually turn to her for advice & help with Power BI.

Have you been considering including Power BI components in your solutions, or already been doing so? Have you run into this error/issue before? Drop a note below – I’d love to hear how you managed to work out the issue!

The story of MFA & the Centre of Excellence

I’ve been rolling out the Microsoft Centre of Excellence solution for several years now at customers. It’s a great place to start getting a handle on what exactly is going on within a Power Platform tenant, though there’s obviously so much more that takes place within a Centre of Excellence team.

The solution gathers telemetry around environments, Power Apps, Power Automate flows etc through the use of the Power Automate Admin connectors for Power Platform (see Power Platform for Admins – Connectors | Microsoft Learn for further information on these).

Now obviously we need a user account to run these, and this has usually been done through the use of a ‘pseudo service account’, as using a service principal has been tricky, to say the least. So we would get customers to set up an appropriate account with licensing & permissions in place, and use this to own & run the Power Automate flows that bring the information into the CoE solution.

It is important to note that usage of these connectors does require a pretty high level of permissions – in fact, we usually suggest applying the Power Platform Administrator role (within the Microsoft 365 Admin Centre) to the user account. All good so far.
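
For a sense of what these connectors surface, the same sort of tenant inventory can be pulled with the Power Platform admin PowerShell module. A minimal sketch, run as the pseudo service account (and assuming the module is installed), might look like this:

```powershell
# Assumes the admin module is installed:
# Install-Module Microsoft.PowerApps.Administration.PowerShell -Scope CurrentUser

# Sign in as the (pseudo) service account that owns the CoE flows
Add-PowerAppsAccount

# Tenant-wide inventory - broadly the same data the CoE flows collect
$environments = Get-AdminPowerAppEnvironment
$apps         = Get-AdminPowerApp
$flows        = Get-AdminFlow

"{0} environments, {1} apps, {2} flows found" -f $environments.Count, $apps.Count, $flows.Count
```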

The tricky part has, to date, been around security. Organisations usually require (for good reasons) multi-factor authentication (aka MFA) to be in place. Now this is fine for users logging in & accessing systems. However, it proves to be somewhat trickier for automations.

See, when a user logs in & authenticates through MFA, a token is stored to allow them to access systems. Automations can also use this. However, the token will expire at some point (based on how each organisation has implemented MFA access/controls). When the token expires, the automations will stop running, and fail silently. There’s no prompt that the token has expired, and the only way of knowing is to take a look at the Power Automate flow run history. This can be interesting though, as signing in (with the pseudo service account) will prompt for MFA authentication, and then everything will start running again!

So this has usually resulted in conversations with the client to politely point out that implementing MFA on the service account will mean that, at some point, the Power Automate flows are going to start failing. Discussions with security teams take place, mitigation using tools such as Azure Sentinel is implemented, and things move ahead (cautiously). It’s been, to date, the most annoying pain point for the technical implementation (that I can think of at least, in my experience).

Now you’d think that a change in this would be shouted from the rooftops, people talking about it, social media blowing up, etc. Well, I was starting an implementation recently for a customer, and was talking to them around this, as I’d usually do. Imagine my surprise when Todd, one of the Microsoft technical people attached to the client, asked why we weren’t recommending MFA.

Taking a look at the online documentation, I noticed that something had slipped in. Finally there was the ability to use MFA!

Trawling back through the GitHub history (after all, I wanted to find out EXACTLY when this had slipped in), I discovered that it was a few months old. I was still very surprised that there hadn’t been more publicity around this (though it was definitely a good incentive to write about it, and a great blog post to start off 2023 with!).

So moving forward, we’re now able to use MFA for the CoE user account. This is definitely going to put a lot of minds at rest (especially those who are in security and/or governance). The specifics around the MFA implementation can be found at Conditional access and multi-factor authentication in Flow – Power Automate | Microsoft Learn – but it’s important to note that specific MFA policies will need to be set up & implemented for this account.

So, now the job will be to retro-fit this to all organisations that already have the CoE toolkit in place. Thankfully this shouldn’t be too difficult to do, and will most definitely enhance the security controls around it!

Have you implemented any mitigation in the past to handle non-MFA? I’m curious if you have – please drop a comment below!

Active or inactive, that is the question?!?!

Catchy title, right? Well I was wondering what exactly I should use for this blog post, and as you’ll see as we go through things, this is probably quite a good paraphrase to use.

So, where to start? Well, with a customer, of course! Now, this customer has been running live with a custom Dynamics 365 solution for a little while. Importantly for this story, there have not been ANY releases in quite a few months. This is of course good to bear in mind, given that we can all, um, occasionally find that a release could cause an issue, somewhere, sometimes…

Part of the capabilities that they’re using is bringing in Leads, and qualifying them appropriately. As part of this process, there are various custom attributes (aka columns) that have been added to the Lead table, along with corresponding columns added to the Contact table. There’s also some custom logic that, when a lead is qualified, copies the values from the Lead to the Contact record, updating it (essentially extending the standard capabilities of the system).

This has all been working well to date, and the customer team has been very happy with their system. Until it stopped working, last week. Which was strange, as nothing seemed to have changed at all?

When trying to qualify leads in the system, they were getting the following error message:

Cryptic, right? This seemed a little more interesting as well, given that when only inputting basic information into a Lead record (eg First Name, Last Name, Phone Number), it didn’t matter how many leads existed with the same information, it qualified without a problem.

However, using any custom columns that had been added to the table caused this error to occur.

The first thing that I did was to check that there had been no updates released to Production. This was confirmed as being the case. I then also checked that there had been no OTHER solutions released to Production (as this could have impacted on it). Thankfully there hadn’t – the system looked to be in as fine a shape as it’s been running for a while.

OK – on to the next step. What updates have been released by Microsoft? Well, with the fact that we were able to pinpoint the date that the functionality had stopped working, we went to find the corresponding Learn article about the release (Update 22102 – Release Notes | Microsoft Learn). Don’t worry about clicking through to read it – there’s essentially not much in it, and there’s nothing at all around the Lead table or its functionality!

Continuing to dig around, I really wasn’t sure what was causing this, but obviously had to work it out & figure out a fix! It was quite a dilemma.

This is where the amazing Microsoft community came into play. I noticed a post by Jeroen Scheper on one of the channels that I’m on. It turns out that he was having the same issues, so we started to collaborate on it. This both reassured me (that it wasn’t just me), but also increased the confusion, as we couldn’t work out what was going on underneath to cause this!

Raising it with Microsoft (we both actually raised support incidents), I had an amazing support call almost immediately. Demonstrating the problem, I was told that it was due to Duplicate Detection rules.

Now I’ll admit that this confused me somewhat. See, I had already checked the Duplicate Detection rules, but nothing had been changed, and no new rules had been implemented.

Getting the support agent to walk me through things, they told me that I had to unpublish the rules, modify a setting on them, and then re-publish the rules. This was the setting (on each one) that had to be updated:

This again caused me to be confused. Why was the system having issues with inactive records? Surely qualified leads are active records, but just qualified (& then being locked down as a result)?

Well, it turns out that my perspective of how this works is actually incorrect. As we (hopefully) all know, whilst all records have a Status value (eg Active, Inactive), there are some records that also have a Status Reason value.

In fact, the ‘State Code’ choice in Dataverse is restricted (we can’t modify its options), and seems to have some quite interesting functionality running behind it. Depending on which table is accessed, there are different options available within it.

For example, the Lead table shows:

Whereas the Contact table shows:

And the Task table shows:

Anyhow – it turns out that when a Lead record is qualified or disqualified, though it’s not shown in the user interface, behind the scenes the record is actually being deactivated!

More information on this can be found at Qualify and convert leads to opportunity | Microsoft Learn.
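
If you’re curious, you can see these restricted State Code options for yourself by querying the table metadata through the Dataverse Web API. The snippet below is purely illustrative – the org URL is made up, and it assumes you’ve already obtained an OAuth access token for Dataverse:

```powershell
# Hypothetical org URL & a pre-acquired bearer token - both are assumptions
$orgUrl = "https://yourorg.crm.dynamics.com"
$token  = "<access token for Dataverse>"

# Retrieve the statecode (State Code) options for the Lead table
$uri = "$orgUrl/api/data/v9.2/EntityDefinitions(LogicalName='lead')" +
       "/Attributes(LogicalName='statecode')" +
       "/Microsoft.Dynamics.CRM.StateAttributeMetadata?`$expand=OptionSet"

$response = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" }

# Each option shows its value & label - for Lead, this is where Qualified
# and Disqualified show up as actual *states*, not just status reasons
$response.OptionSet.Options | ForEach-Object {
    "{0} - {1}" -f $_.Value, $_.Label.UserLocalizedLabel.Label
}
```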

So, this was the underlying reason behind the error message. Obviously Microsoft had updated something, which then caused this to fail. I don’t know how many different customers may have been (or still be?) experiencing the issue, but I think that the error message at least could be a little clearer? Perhaps including a link to the relevant Microsoft documentation page, for a start.

Well, thankfully this was put to bed, and I was quite thankful (as was the customer). And this is how I decided to come up with the title of this blog post!

Have you ever had something similar happen to you? Drop a comment below – I’d love to hear!

Power Platform Capacity Monitoring

If I look back at customer engagements over the last few years around Power Platform, whether for a new capability or an existing one, there was ONE thing that stood out above all. This was the ability to track capacity usage over time, and to be honest, most organisations weren’t really doing very well at it.

For those who are unaware, there are actually three different types of capacity present within Power Platform environments. These are:

  • Data
  • File
  • Log

Each one is used for a specific purpose – broadly speaking, File holds all attachments that are uploaded directly into Dataverse, Log is used for auditing purposes, and Data holds everything else (hence the name)!

Now this data is shown within the Power Platform Admin Centre, under the ‘Resources/Capacity’ section. An example of this is:

There’s also a nice little breakdown of capacity allocation through licenses etc, which essentially shows where the available capacity has come from:

If we drill down a bit further, we can open up a specific environment, and see not only the overall usage per capacity type, but also which tables are consuming the most amount of data:

All of this is well & good so far, for someone wanting to take a look at what is currently happening. But this is a manual action – it is possible to manually export the data, but again, this isn’t automated.

It’s also not possible (at least not at this point in time) to query the underlying records that hold these values. So we’re a little stuck. If an organisation wanted to see historical data usage, and/or predict data trends (such as ‘how much capacity would we need to have in 6 months if we continued our scaling’), there’s no way to do this. At least not automatically – someone would need to store the values down manually, then report on it. A hassle, to say the least.

Now when it comes to looking overall at Power Platform, the Centre of Excellence Starter Toolkit is really quite amazing. The Microsoft PowerCAT team continue to iterate the existing functionality within it, as well as bringing in new functionality.

At this point in time, however, it doesn’t have any capacity monitoring in it. Well, it sort of does – we can implement notifications to alert us when capacity reaches a certain value. But this doesn’t solve the challenge as laid out above.

So with this in mind, I set out to create a solution to handle it. I’ve always wanted to create some sort of tool for giving back to the community & helping others, and I saw this as my chance to do so (I’m in awe of the various XrmToolBox tool creators, for the record).

So, I’m releasing a capacity monitoring tool. I’m using GitHub as the host, and the repo can be accessed at https://github.com/thecrmninja/Power-Platform-Capacity-Monitoring (it was also a learning experience in how to use GitHub as a source repository, as I’ve not done that before!).

Model-Driven App:

Reporting Dashboard:

This is just the first version – I have various ideas about how to iterate on it, and tweak functionality. Each release will include release notes & important information to be aware of (such as the security needed to run it). Also importantly, thanks to the amazing Matt Collins-Jones for reviewing some of my work around this.

The tool is aimed at IT/Power Platform admins who are already familiar with the Microsoft CoE toolkit solution, and who have appropriate access to it.

If you find any issues, please raise an appropriate GitHub Issue item, and I’ll look into it. Also, if you have any ideas that you think could be worthwhile, please feel free to suggest them!

Finally, I’d be interested in hearing how you think this could support you or your organisation – feel free to drop a comment below!

Interacting with Microsoft

People sometimes wonder about the best way to interact with Microsoft. In fact, this post isn’t strictly aimed at interacting with Microsoft, but can also be taken as a general guide to interacting with any organisation. The reason for deciding to write about this comes from a conversation that I had last week with a good friend, who was struggling to find a resolution to an issue.

Let’s start at the beginning. We, or our customers, have relationships with suppliers such as Microsoft. We’ll order software (licenses), need to have them supplied to us (show up in our account), and sometimes there may be issues that we need help/support with. There are obviously general support channels available that support tickets can be raised through, but there are also other avenues to consider as well.

Apart from the ‘professional’ relationship/s that may be in place, we may also have ‘personal’ relationships with members of different teams. These can happen in various different ways, such as speaking together at events, organising communities, etc. They are very valuable to have in place, & many people that I know, as well as myself, strive to improve & increase the network & connections that we have with Microsoft & other organisations.

However, there’s something very important to keep in mind. Just as we are doing our day job (what we’re paid to do), they are as well. At the end of the day, they (as with ourselves) need to ensure that the job gets done.

So if we reach out to ask something from them, we’re essentially asking for a favour, usually without anything reciprocal being able to be offered. A really good analogy for this, shared previously with me by Mark Smith & Chris Huntingford, is the ‘Sweet Jar Concept’.

Here’s how it goes. Imagine that the person has a jar with 100 sweets in. There are a limited number (the number itself isn’t important though) available, and the person has to choose who to give the sweets to. If we ask for a favour without knowing them, it’s highly unlikely to be granted. Even if we do know them somewhat, it may still be unlikely – they’re not going to be getting any return on the sweet that they’re giving out. Potentially if we know them well, and have proven in the past that we’re of value to them, we’ll get a sweet.

But even if we do know them well, if we keep asking for sweets (aka favours), the likelihood of them being granted will diminish (rapidly). Again – there’s a limited supply of them, and we’re not going to be looked on favourably if we keep coming back & asking for more, whilst not giving anything in return.

So HOW could we go about this, to set ourselves up for success (ie getting the outcome that we’re after)? Well, this is actually quite simple – we need to identify who will be gaining something by helping us. Let’s explain this in more detail.

Within Microsoft (& any organisation really), people have metrics that they need to meet for their role. These are usually referred to as KPIs (Key Performance Indicators), and are used for things like salary & role progression. What we should be doing is finding the right person (or team) that has (one or more) KPIs aligned to what we’re trying to accomplish.

Let’s use the example here of the situation with my friend last week. He had a client who had ordered licenses from Microsoft that were needed for a project to start, but hadn’t appeared in the customer account yet. If the licenses weren’t there on-time, the project would need to be delayed, which would be expensive (& very annoying) for the customer.

On hearing the situation, I suggested to him to find the person (or people) within Microsoft who’d be aligned towards ensuring the situation was remedied ASAP. Examples of these people could be:

  • Microsoft Account Manager. This person would be aligned from the Microsoft side to ensure that the customer would have everything that they needed to be successful
  • Microsoft Sales Team. If there was a sales team involved with the license purchase, they would be very aligned to ensuring that the licenses had actually been procured and were showing up in the customer account!
  • Microsoft Account Technology Strategist. This is the person responsible for designing the strategy and architecture to drive digitalisation and innovation for the customer

Now the above list isn’t exhaustive, and is specific to the scenario above. Additionally, the people mentioned might not be able to deal with the situation themselves, but if not, they’re more than likely to know the right person/team who can deal with it.

With this approach, we’d be lined up for success in three ways:

  1. We’d (hopefully) get the immediate situation looked at and resolved
  2. We’d be giving our connections the ability to align to their KPIs, and show results for them
  3. We’d be showing our value to our connections, which can then help if we have a favour to ask in the future (that’s not necessarily aligned to KPIs)

So in a nutshell – when we look to try to get something dealt with/resolved, we should ask ourselves who’s best aligned professionally to help us, with it being in line with their professional goals. This way we can drive value, as well as giving goodwill all round.

Have you ever been in a situation where this may have helped? How did you handle it? I’d love to hear – please drop a comment below!

Power Platform ALM Changes

As a starter for 10, if you haven’t yet looked into ALM for Power Platform, you should most definitely be doing so! ALM is, of course, Application Lifecycle Management. This is how, in a nutshell, we move solutions between environments.

In the good old days, this was done manually of course (CRM 4.0, I’m looking at you!). Today, though it is of course still possible to export/import solutions manually, it’s not the Microsoft best practice method. Doing it manually also means that it’s unlikely that you’ll have appropriate source control for your solutions, which, let’s face it, isn’t the best.

Want to look at a previous solution version? Hmm – do you still have it saved on your machine or not?

So we should generally know why we’d want to use ALM. But which tooling do we actually use for it? Going back to the on-premise days, there was TFS (or Team Foundation Server, to give it its full name). This was a full source control repository, allowing developers to check in/check out code, build solutions, deploy them, etc.

With the move to ‘cloud based systems’, the TFS replacement is Azure DevOps (or ADO, as it’s usually referred to). ADO works in essentially the same way as TFS did (some differences, but they’re not really relevant here), but does so through the cloud.

When it comes to Power Platform solutions, ADO uses the ‘Power Platform Build Tools’ capabilities to hook into Dataverse & pick up solutions. The toolset essentially gives ADO the ability to connect in to a Power Platform environment, build/export solutions, deploy solutions, etc.

More information on the toolset can be found at Microsoft Power Platform Build Tools for Azure DevOps – Power Platform | Microsoft Docs

Now there are some limitations to the Power Platform Build Tools. In fact, I’d be so bold as to say that currently they’re not in a fully mature state. It’s not possible to do everything that you can manually (well, not with the inbuilt capabilities – there are some ‘hacks’ around that can extend them). At the moment, it’s essentially 1.0.

Well, Microsoft is announcing that they’re now releasing 2.0 of the Power Platform Build Tools this week!

In fact, this is so new that at the time of writing, there’s no Microsoft Docs available for this! So what does version 2.0 bring, and why is Microsoft releasing a new version?

So Microsoft has actually had this in planning for a while. There’s a lot going on with GitHub, as we well know, and Microsoft wants to drive the consistency of the experience for users forwards. At the moment, they work in somewhat different ways, and the aim is to bring this to parity.

The main change in the new version is that instead of the tasks being PowerShell based (which they currently are), the tasks will now be Power Platform CLI based. So Microsoft is changing the underlying working method from PowerShell to the CLI. Some of us will, of course, already be familiar with the way that the CLI works, and it’s really nice to see that its capabilities will now be part of ADO.
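
To give a feel for what that means in practice, the v2 tasks essentially wrap the same commands that you can already run yourself with the Power Platform CLI (pac). A rough sketch of an export/import sequence – the environment URLs & solution name below are placeholders, and exact flags can vary between CLI versions:

```powershell
# Roughly what the v2 build tool tasks drive under the hood.
# URLs & solution name are placeholders.

# Authenticate against the source environment
pac auth create --url https://source-env.crm.dynamics.com

# Export the solution to a local file
pac solution export --name MySolution --path .\MySolution.zip

# Authenticate against the target environment & import the solution
pac auth create --url https://target-env.crm.dynamics.com
pac solution import --path .\MySolution.zip
```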

Now don’t start worrying that your current ADO pipelines (v0) will suddenly stop working. Microsoft is not doing anything with v0 at this point in time (though they may potentially deprecate it in the future). So all of your existing ADO pipelines using the Power Platform Build Tools will continue to work, but no new features are going to be released for them.

In terms of switching to using v2, it’s really quite simple – you’ll need to change the task version like so:

If you are currently using YAML (as so many wonderful developers do) to author pipelines, you’ll need to do the following in the YAML code:

It’s very important to note that it’s not possible to mix and match task versions. If you do this, the ADO pipeline will fail, so please don’t try this!

I’m really excited about this, and to see that the CLI capabilities are being brought into play for ADO. I’ll admit that I’m wondering what else will be released (in the fullness of time), as I’m sure that this is just the start of some great new stuff!

One of the things that I’m REALLY hoping for is the ability to use ADO pipelines to migrate Power Apps Portals (or Power Pages), as currently it’s only possible to do this using the Power Platform CLI, or the Configuration Migration Tool. It would be amazing to be able to do these with ADO pipelines as well!
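
For reference, this is (roughly) what that portal migration looks like with the CLI today – the website ID & paths below are placeholders, and the command names may change as the Power Pages tooling evolves:

```powershell
# Download the portal content from the source environment,
# then upload it to the target. IDs & paths are placeholders.

# Source environment
pac auth create --url https://source-env.crm.dynamics.com
pac paportal download --path .\portal-content --webSiteId 00000000-0000-0000-0000-000000000000

# Target environment (the folder name under .\portal-content depends on your portal)
pac auth create --url https://target-env.crm.dynamics.com
pac paportal upload --path .\portal-content\your-portal-name
```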

PL-500: Microsoft Power Automate RPA Developer

RPA (or Robotic Process Automation) is a capability that Microsoft has been developing for a while within the Power Platform space. Whilst cloud flows can be used to interact with any system that has an API in place, many organisations have (legacy) systems that have no API, so interacting with them can be challenging. RPA capabilities allow organisations to interact with any system overall, thereby enabling & empowering businesses holistically.

I’ve been aware for a while that there’s been an exam coming out for RPA, though it’s taken a bit of time to land. That’s fine though – I can’t really think of any absolute rush to have it in place. I do think that over time, just as with some of the other certifications, it will become a requirement for solution or specialisation status.

The official page for it is at https://docs.microsoft.com/en-us/certifications/exams/pl-500. The specification for it is:

Candidates for this exam automate time-consuming and repetitive tasks by using Microsoft Power Automate. They review solution requirements, create process documentation, and design, develop, troubleshoot, and evaluate solutions.

Candidates work with business stakeholders to improve and automate business workflows. They collaborate with administrators to deploy solutions to production environments, and they support solutions.

Additionally, candidates should have experience with JSON, cloud flows and desktop flows, integrating solutions with REST and SOAP services, analyzing data by using Microsoft Excel, VBScript, Visual Basic for Applications (VBA), HTML, JavaScript, one or more programming languages, and the Microsoft Power Platform suite of tools (AI Builder, Power Apps, Dataverse, and Power Virtual Agents).

Now here’s the thing. I occasionally work in the automation space, either on customer projects, or when training users in the technologies. I wouldn’t describe myself as an advanced automation developer (whether cloud or RPA capabilities). I’m most definitely NOWHERE near the level of legends such as Matt Collins-Jones, for example (go check him out if you don’t know about him!).

So I knew that I may be a bit challenged when taking the exam, especially in the more ‘pro dev’ space (aka JSON etc). In fact, I didn’t actually realise that the exam specification included that sort of thing. I know, I should have – it’s aimed at developers overall…shows that I need to brush up on reading things properly!

Also, there’s still quite a bit of a focus on Power Automate cloud flows – it’s not JUST about RPA capabilities.

Now, really nicely, there are already Microsoft Learn pathways available (which have been around for a while, and updated appropriately). This really is a big help, I feel, especially for people who are new’ish to RPA.

Of course, there’s a lovely shiny two star badge awarded when passing the exam, along with the title of ‘Microsoft Certified: Power Automate RPA Developer Associate’:

As with previous exams, I sat it from home (the proctored experience). Learning from previous times that I’ve taken exams, I ensured that my workspace was entirely clear from everything. As a result, the check-in process happened automatically, and I didn’t need to engage with any proctors at all (which was quite nice actually).

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

  • Cloud flows vs RPA flows
    • Capabilities of each
    • When to use each (ie how to handle different scenarios)
    • How to trigger each one
  • Cloud flows
    • Different types of triggers, & when each type should be used
    • Different types of actions, and the capabilities of them (at a high’ish level – expected to know common Microsoft actions, but not need to know all of the hundreds of different ones!)
    • Controls/operators. What they are, how they can be used to accomplish different requirements
    • JSON formatting & syntax
  • Business Process flow vs Business Rules
    • What each is
    • When to use each one
    • Capabilities
  • RPA flows
    • Common actions, how they work, capabilities of them
    • How expression syntax works within them
    • Debugging capabilities, and what to use when
    • How to interact with desktop applications
    • How to interact with websites
      • How data values can be used
      • How data tables can be used
      • How to use data that’s extracted from a website
    • Troubleshooting functionality
  • Usage of automation capabilities from Office 365 applications such as Excel & Visio
  • Loops
    • How they work for cloud & RPA flows
    • Troubleshooting
    • Implementing success/fail criteria
    • Error handling
  • Process Advisor
    • What it is
    • What it does
    • How it can help organisations
    • Limitations
    • What it cannot do
    • Process Mining vs Task Mining, & the important differences between them
  • Variables
    • How to handle variables across different environments
    • How to declare them (cloud flow vs RPA flow)
  • Runtime operations
    • How flows are triggered (async vs sync)
    • How flows are queued (cloud vs RPA)
    • How RPA flows are carried out when using machine groups
  • Artificial Intelligence (AI) capabilities
    • How AI can be used within flows
    • Different AI capability types (what each one can be used for)
    • AI within Power Platform, & AI within Azure Cognitive Services
  • Sharing flows
    • Different ways to share cloud flows
    • Different ways to share RPA flows
  • Application Lifecycle Management (ALM)
    • Solutions (managed vs unmanaged). Capabilities of each, when to use each type
    • Azure DevOps (ADO). What it is, when/how to use it, capabilities
    • Solution imports
    • Solution layers. What these are, troubleshooting functionality
    • Upgrade/Stage for Upgrade/Update. What each is, what each does, how/when to use each one
    • Moving desktop flows between users
  • Security
    • Security roles needed to create
    • Security roles needed to share/modify
    • Security roles needed to register machine for RPA
    • Security roles needed to register machine groups for RPA
    • Security requirements to run different types of RPA flows (how it interacts with desktop/s)
    • Data Loss Prevention (DLP) – how it affects creation & runtime of flows

Overall, I had 46 questions, with a single case study. I’m used to having at least two case studies, so it was nice to have just one of them this time.

So… it’s a lot of stuff. Definitely targeted much more at the ‘pro-developer’ end of the scale than at someone who might occasionally automate things. It’s absolutely necessary to understand coding conventions, ALM, etc.

It’s definitely an exam where, if you’re not already hands-on with the skills needed, I’d highly recommend getting a decent amount of experience before taking it! Make sure that you have an environment in which you’re able to be hands-on with all types of automation (cloud & desktop flows), and really understand how they can be handled with an eye on enterprise scale!

If you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Wave 2 2022 – Omnichannel

I know it’s taken me a week (or two) to get round to this, but I’ve had other things on the go (such as starting my new job, for instance). However it wouldn’t be this time of year without doing a summary of new features for the Wave 2 2022 release.

As with previous posts in this area, I’ll be focusing on the Customer Service side of things, and also more precisely with a focus on the Omnichannel capabilities. However, whilst previously I’ve tended to focus just on the Omnichannel items, Customer Service is now being much more tied together with the Omnichannel offering, so it makes sense to broaden things out a bit.

So let’s start taking a look at the wonders that will (hopefully!) be in store for us within a few months:

Customer Service Workspace – enhanced layout

Public Preview – August 2022. GA – October 2022

I’ve previously taken a look at some of the capabilities of the Customer Service Workspace (see Omnichannel vs Customer Service Workspace), and how they compare to Omnichannel. With Microsoft now rolling out the ability to have multi-session capabilities within it, it’s sometimes a hard decision for organisations to decide which one to use (there are some key differences though).

With the upcoming release, there are going to be new layouts for the site map (navigation menu), sessions & tabs. Some of the key changes coming are:

  • Sessions and child tabs are displayed horizontally
  • Improved handling of overflowing tabs and sessions
  • Tab bar is visible only if multiple tabs are present in a session
  • Improved site map that’s accessed from the hamburger icon with support for grouping and areas
  • Improved accessibility with 400% zoom mode
  • Increased predictability of session closure in multisession apps
  • In-app notifications aligned with the multisession navigation

These look to be quite good (I definitely wouldn’t have thought of all of them!), and I can’t wait to try them out for myself.

Single sign-on capabilities

GA – October 2022

One of the things that can be quite frustrating for customers is that if they’re interacting through live chat capabilities, and then switch over to a Power Virtual Agent, they need to re-authenticate. This is of course not quite optimal for a seamless customer experience.

Microsoft are therefore enabling single sign-on capabilities. What this means in practice is the following:

  • Authentication contexts are shared between Power Virtual Agents and Omnichannel live chat sessions. If a user authenticates in one of them, then they become authenticated across all of the capabilities. There’s no need to authenticate per communication type
  • Customers can start with an unauthenticated conversation, and then authenticate at a later point in the conversation. This will then continue as authenticated across the different channels that they’re communicating through

Voice channel – expansion of availability

GA – October 2022

The voice channel (which I still need to do a write up on!) is really amazing, allowing customers to call in directly via phone etc. It’s been rolled out already to several regions, but customers in other regions have been asking for it.

Microsoft has now confirmed that the voice channel will be available in the following countries:

  • United Kingdom
  • Canada
  • India
  • Switzerland

This is a great move – it still doesn’t mean that every country has the voice channel available, so I expect that Microsoft will keep on adding more countries for the availability of this (I know that there’s a decent amount of back-end systems that are needed, which is why it’s taking this long to get in place).

Voicemails

GA – January 2023

This one is getting me really excited. Obviously, being able to connect to a customer service agent is important. But what if the agent isn’t around? We could of course send an email, but if we’re already connected through a specific method of contact, ideally we’d like to continue with that method.

Especially when it comes to actually calling into an organisation, it can be quite frustrating to not reach the person we’re trying to get hold of, and then need to send an email.

Voicemail capabilities, coming in early 2023, will mean that customers will be able to leave voicemails for customer service agents to pick up. The agents will be able to set up welcome messages, as well as manage & playback voicemails that have been left.

This is really cool – I’m wondering if there are going to be AI capabilities included in this in the future, so as to automatically transcribe voicemails for the agents, for instance. I don’t think that it would take a LOT more technical capability – we already have Azure Cognitive Services that audio can be fed through for a written transcription to be produced.

Customer Callbacks

GA – January 2023

One of the frustrations that I think is shared universally is when contacting an organisation, and being told that you’re in a queue. Not only are you in a queue, but there may be dozens/hundreds/thousands of people ahead of you…and the number doesn’t seem to be decreasing at a rapid rate.

Some organisations offer the ability to ‘reserve’ your spot in the queue, and will call you back when you’re next. To date, this hasn’t been a feature of Omnichannel.

However, coming in early 2023, this feature will be rolling out! It will give customers the ability to keep their queue position, and to choose if they’d like a callback to happen when they’re at the front of the queue. Note that this would require a phone number to be provided, for the customer service agent to use to contact the customer.

I think that this is a nice feature, but I’ll be curious to see how it plays out ‘in the real world’. I know that when my local doctor’s surgery implemented this, it was supposed to be great, but in practice it actually didn’t work well.

I’ll be looking deeper into the different functionalities when they land, and will share them here. If there’s anything you think would be helpful to focus on, drop a comment & let me know!

Recognition as Microsoft Partner for Business Application Solutions

It’s been a little while since I’ve previously blogged around developing customer solutions and the Microsoft Specialisations. Since I spoke about it last year (Apps & Microsoft Partner Specialisations) the landscape has moved on a little, and I thought that it would be good to take a look again at it.

Currently in the Business Applications space, there’s a single specialisation. This is the ‘Microsoft Low Code Application Development Advanced Specialisation’, which is covered in detail at the Microsoft page for it (Microsoft Low Code Application Development Advanced Specialization).

In essence, this specialisation is aimed at partners who are developing Power Apps (yes, this is specifically aimed at Power Apps), and has been around for a year or so.

In order for Microsoft to track the qualifying metrics against this specialisation, it’s very important to carry out the PAL (Partner Admin Link) process. The details of how to do this are in my earlier post, which includes some of my thoughts at the time around how a partner should best implement the procedure.

Since then, my blog post has gained a good amount of traction, and several Microsoft partners have engaged with me directly to understand this better, and to implement the process into their project playbook. I’m really delighted at having been able to help others understand the process, and the reasoning behind it.

Now that’s all good for a partner who is staying in place at a customer. However there are multiple scenarios that can differ from this. Examples of this are:

  1. Multiple partners developing a single application together
  2. One partner handing over the application to a second partner for further development
  3. One partner implementing a solution, with a second partner providing support

Now, there’s really a single answer to all of the above scenarios, but it’s a matter of how to go about implementing this properly. Let me explain.

Originally, all developers would register PAL, and this would then be tracked through the environment cadence, and associated appropriately to the partner. This would be from the developers having been the creators of the apps.

This has now changed a little bit. Microsoft now recognises PAL association using both the Owner of the app, as well as any Co-Owners of the app. This is a little more subtle, so let’s explain it in some detail.

It is possible, of course, to change the owner of an app. More common, however, is the practice of adding co-owner/s to an app (I always recommend this as best practice actually, to remove key-person responsibility risks).

Note: Changing the actual owner of an app requires the usage of a PowerShell command
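
For illustration, this is the sort of thing involved (a minimal sketch using the Power Apps admin PowerShell module – the environment, app & user IDs below are all made up):

```powershell
# Assumes Microsoft.PowerApps.Administration.PowerShell is installed
Add-PowerAppsAccount

# All IDs below are placeholders
$environmentName = "Default-00000000-0000-0000-0000-000000000000"
$appName         = "11111111-1111-1111-1111-111111111111"   # the app's GUID
$userObjectId    = "22222222-2222-2222-2222-222222222222"   # Azure AD object ID

# Change the actual owner of the app
Set-AdminPowerAppOwner -AppName $appName -EnvironmentName $environmentName -AppOwner $userObjectId

# Adding a co-owner (rather than changing the owner) looks like this
Set-AdminPowerAppRoleAssignment -AppName $appName -EnvironmentName $environmentName `
    -RoleName CanEdit -PrincipalType User -PrincipalObjectId $userObjectId
```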

So what happens now is that Microsoft will track the owners/co-owners of any app that’s deployed, and PAL association will flow through this. But there are a couple of caveats which it’s important to be very aware of!

  1. All owners/co-owners must have registered PAL with their user accounts (if using a service principal/service account as an owner, there’s a way of doing this using PowerShell – see the sketch after this list)
  2. Microsoft will recognise the LATEST owner/co-owner association with the app as the partner organisation that will receive PAL recognition
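
For completeness, registering PAL against an account is done with the Az management partner cmdlets – a minimal sketch, with a made-up Partner ID (sign in as the user, or as the service principal, whose work should be attributed):

```powershell
# Assumes the Az.ManagementPartner module is installed:
# Install-Module Az.ManagementPartner -Scope CurrentUser

# Sign in as the user (or service principal) to associate
Connect-AzAccount

# Link the account to your Microsoft Partner Network ID (placeholder value)
New-AzManagementPartner -PartnerId 1234567

# Check or update the association later if needed
Get-AzManagementPartner
Update-AzManagementPartner -PartnerId 1234567
```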

Now if a customer adds co-owners to an app, this shouldn’t be an issue (as none of the users would have registered PAL). But if there are multiple partners in place, ONLY THE LATEST ONE WILL BE RECOGNISED.

Therefore to take the three scenarios above, let’s see how this would apply.

  1. Multiple partners developing a single app. Recognition would not work for all partners involved, just the latest one to associate with the app
  2. Partner 1 handing over app to Partner 2. Recognition would stop for Partner 1, and would then start for Partner 2
  3. Partner 1 implementing solution, Partner 2 providing support. Care would need to be taken that the appropriate partner is associated as owner/co-owner to the app, for PAL recognition.

It’s also important for both partners & customers to understand this, in the wider context of being careful about app ownership, and the recognition that it brings from Microsoft for partners delivering solutions. If a partner were to go into a customer and suddenly start taking ownership of apps that it’s not involved in, I don’t think that Microsoft would be very approving of that.

Now, all of the above is in relation to Power Apps specifically, as I’ve noted. However, the PAL article was updated last week (located at Link a partner ID to your Power Platform and Dynamics Customer Insights accounts with your Azure credentials | Microsoft Docs) and also interestingly talks about:

Note the differences between each item

Reading between the lines here, I think that we’re going to be seeing more advanced specialisations coming out at some point. Either that, or else partner status will include these too, as I can’t think of any other reason why PAL would need to be tracked for them! I’m also wondering if other capabilities (eg Power Virtual Agents, Power Pages, etc) will be added at some point as well…

Have you had any challenges with the PAL process? Is there anything more you’d like to find out about it? Drop a comment below, and I’ll do my best to respond!