Environment Grouping

One of the main ‘complaints’ that Power Platform administrators have is around how policies are applied to environments. Within Azure, it’s possible to set up security policies and apply them in bulk, or group together components under a single set of policies. However, when it comes to Power Platform, this has not been possible – each environment has needed to be configured on its own.

I’m not talking here about DLP policies, as these are set up and then relevant environments selected/deselected as needed. I’m talking about things like setting Canvas App sharing limits, welcoming new makers, and other items.

Well, Microsoft has now made this possible – though this first iteration (now in Public Preview) only has a few options within it, I’m quite certain that many more items will be coming down the line to fall under the new Environment Grouping feature.

At the moment, there are 6 options available for Power Platform administrators to set and configure. Note that you do need either the Global Administrator or Power Platform Administrator role (M365 security roles) to be able to access and carry this out.

To be clear, Environment Grouping is a feature of Managed Environments. I’m not going to go into the debate about whether you should or shouldn’t adopt Managed Environments (at least not here – I may be speaking about it publicly later on this year), but you do need to have these in order to use this functionality. More specifically, you will ONLY be able to add environments that are set as ‘Managed’ to Environment Groups (though they don’t have to have Dataverse in play):

So, what exactly is the purpose of Environment Grouping? Well, it’s to minimise the amount of time that Power Platform administrators need to spend in setting up & applying policies.

Think of the users within your organisation. You’re going to have different personas, such as developers, testers, end users, etc.

You’re also likely (especially in larger organisations) to have different business units & functions requiring different items. For example, you may lock down access to social media, but Marketing and Recruitment may indeed need access to social media to be able to carry out their jobs.

With these personas in mind, you can then start to look into building out different rule groupings, which will apply to all environments included under the Environment Group. It’s somewhat similar to the way in which DLP policies work – you create the policy once, and everything that sits under it picks up its settings.

There are many ways to manage pockets of environments within your tenant using environment groups. For example, global organisations can create an environment group for all environments in each geographic region to ensure compliance with legal and regulatory requirements. You can also organise environment groups by department or other criteria.

One of the other features around Environment Groups is the ability to use Environment Routing. I’ve talked about this previously when the feature was first released (Developer Environment Routing!) – Environment Groups now take this to the next level, by being able to automatically set the Environment Group that new developer environments will fall under (so policies will be automatically applied). It’s important to note here that all developer environments created through this WILL be set as ‘Managed’.

More information on the new capabilities can of course be found on Microsoft Learn, at https://learn.microsoft.com/en-us/power-platform/admin/environment-groups.

I think that this is a great new feature to have in place for Power Platform administrators, and look forward to seeing new functionality rolled out within this to enable organisations in a better way. Being able to cut down on administration/governance time whilst being more effective is, in my view, a win-win for ALL of us, and I can’t wait to see how it will develop over time.

So, my question to you is how would YOU look to use such functionality? What features might you like to appear within Environment Grouping to enable you and your organisation? Drop a comment below – I’d love to hear!

AI, Microsoft Copilot, and Copyright

Firstly, a Happy New Year to everyone – I’m sure that you have amazing plans for 2024, and wish you the best of luck with them!

Over the last few months, I’ve been keeping a close eye on Copilot (since it was announced as being in GA), and various happenings around the wider AI scene. It seems almost impossible to find someone who is NOT aware of the OpenAI happenings towards the end of 2023, and so many more conversations now include mention of ChatGPT, Azure OpenAI, etc.

But there’s one item I’d like to pick up on & discuss which, given the news events of last week, is extremely pertinent. This is the topic of copyright – a very important one to understand. However, in order to understand it properly, we first need to understand how AI offerings are generated.

All of the various AI offerings rely on LLMs. This stands for ‘Large Language Model’ – these are deep learning models that are pre-trained on vast amounts of data, and by ‘vast’, I mean that the number of data points is truly staggering:

Incidentally, users should consider the use case that they’re using AI for, and look to use the optimal model for it. As an example – if wanting to find out the best routine for making a cup of coffee, it is unlikely that a GPT-4 model would be needed; a suitable model could be one with fewer parameters. This is in part due to the amount of resources needed to run queries on more advanced data models.

When using an AI offering, these datasets are used to present answers back to the users. Some AI models are not limited to just their LLM datasets, and are able to actively trawl & access websites as well for more up-to-date information.

But there are two inherent problems with the way that this can work:

Data

When users interact with chatbots or other AI capabilities, they’re inputting data into them. This data could be used by the AI capability to further train its models, ingesting the data provided to it. Given that the data could be sensitive or proprietary, this can be problematic. Not all AI organisations limit the data to processing the request itself, as various users have discovered.

Copyright

Given that responses back to users (whether in text format, image format, or other formats) are based on the datasets from the underlying LLMs, users could potentially be provided with information that is actually copyrighted, and which they’re not able to use. This is very problematic, as it can result in users passing off material as their own, whilst it actually belongs to someone else.

Microsoft’s approach

Microsoft’s approach to AI capabilities has been made extremely clear. You may be familiar with seeing slides similar to the following:

What this means is the following:

  • Microsoft will not use any customer data to train AI models & capabilities for any standard AI offering
  • Any custom Copilots created by Microsoft customers will remain their own – Microsoft will not use data or capabilities from customer-created collateral within the Microsoft AI offerings. This means that a bespoke Copilot will only offer its functionality to the customer that created it; other organisations, even within the same sector & creating Copilots themselves, will not benefit from it

Microsoft has also confirmed publicly that any information generated through the usage of Copilot or Azure OpenAI can be used without concern about copyright claims (Microsoft announces new Copilot Copyright Commitment for customers – Microsoft On the Issues). In fact, Microsoft has gone so far as to say that it will assume responsibility for any potential legal risks involved.

If a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters,

Brad Smith, Microsoft Chief Legal Officer

This is quite important – customers are able to use Copilot & Azure OpenAI capabilities, and be assured that they will not have to be concerned about copyright issues or challenges. There are of course some conditions around this, in the way that prompts & interactions need to be handled (see Customer Copyright Commitment Required Mitigations | Microsoft Learn for further information on this).

Microsoft has this called out specifically within their Universal License Terms, available to view in full at Microsoft Product Terms.

Recent news events

With the announcement last week that the New York Times has filed suit against the OpenAI Corporation & Microsoft, this is very timely to look at (New York Times Sues OpenAI and Microsoft Over Use of Copyrighted Work – The New York Times (nytimes.com)).

The implications of such a lawsuit will affect how AI capabilities can be created & used on an ongoing basis. Copyright is of course very important to respect, and it will be quite interesting to see how this plays out. Having taken a look at some of the material included in the lawsuit, there are most definitely similarities between the New York Times information and the AI-generated output.

So, in my opinion, this is going to be a very interesting space moving forward, and I look forward to seeing how it goes, and any effects that it has on the usage of AI within organisations.

Developer environments – new capabilities to create for users

Developer environments are awesome. There – I’ve said it for the record. Formerly known as the ‘Community Plan’, developer environments are there for users to be able to play with things, get up to speed, test out new functionality, etc. They’re free to use – even with premium capabilities & connectors, users do not need premium licensing in place (caveat – if it’s enabled as a Managed Environment, it will require premium licensing).

Originally, users were only able to create a single developer environment. However, earlier this year Microsoft lifted this restriction – users are now able to create up to THREE developer environments for their own usage (which makes it even easier now for users to get used to ALM capabilities, and try it out for themselves).

Now, the ability for users to create developer environments is controlled at the tenant level, and it’s either On or Off for everyone. It requires a global tenant admin to modify this setting, and it’s not possible to say ‘User Group A will not be able to create developer environments for themselves, but User Group B will be able to’.
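As an aside, if you’d rather script this tenant-level switch than click through the admin centre, the Get-TenantSettings / Set-TenantSettings cmdlets in the admin PowerShell module are the likely route. Below is a rough sketch – note that the property name for the developer environment toggle is hypothetical, so inspect what your own tenant actually returns first:

```powershell
# Requires the Microsoft.PowerApps.Administration.PowerShell module
Add-PowerAppsAccount

# Read the current tenant-level settings object
$settings = Get-TenantSettings

# Inspect this section to locate the developer environment toggle -
# the property name below is hypothetical, not guaranteed
$settings.powerPlatform.powerApps

# Hypothetical property name - adjust to match what your tenant returns
$settings.powerPlatform.powerApps.disableDeveloperEnvironmentCreation = $true

# Write the amended settings back
Set-TenantSettings -RequestBody $settings
```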

Organisations have differing viewpoints on whether they should allow their users the ability to create developer environments or not. I know this well, as usually I’m part of conversations with them when they’re debating this.

One of the main challenges that comes when organisations don’t allow users to create their own developer environments has been that historically, it’s not been possible for someone else to create the environment on their behalf. If we think of ‘traditional IT’, if we’re not able to do something due to locked down permissions, we can usually ask ‘IT’ to do it for us, and grant us access. This has not been the case with developer environments though – well, not until recently.

Something that I do from time to time is chat with the Microsoft Product Engineering groups, to provide feedback and (try to!) help iterate products forward. One of the conversations I had in the summer was with the team responsible for developer environments. I was able to share experiences & conversations that I had been having with large-scale enterprise organisations, and (very politely!) asked if they could look to open up the ability to do something around this.

Around a month ago or so, the first iteration of this dropped – in the Power Platform Admin Centre interface, it was now possible to specify the user for whom an environment was to be created!

This was an amazing start to things, and definitely would start unblocking Power Platform IT teams to enable their users, in circumstances where their organisations had decided to turn off the ability for users to create their own developer environments.

However, this still needed to be done manually. Unless looking into an RPA process (which, let’s face it, would be clunky & undesirable), it meant that someone with appropriate privileges would need to go & actually create the environment, and associate it to the user.

However, this has now taken another MASSIVE step forward – I’m delighted to announce that this capability has been implemented in the Power Platform CLI, and is live RIGHT NOW (you’ll need to upgrade to the latest version – it’s present in 1.28.3 onwards).

So, with this in place, it’s now possible to use PowerShell commands to create developer environments on behalf of users, and assign them to those users. Organisations usually already have PowerShell scripts to handle new joiners, and will therefore be able to integrate this capability into them, to automatically set up developer environments for users. Alternatively, existing users could look to raise internal requests, and have them automated through the use of PowerShell (along with appropriate approval processes, of course!).
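To give an idea of what this could look like folded into a joiner script, here’s a minimal sketch. The --type and --user parameter names are my assumption of the new capability’s syntax – verify against ‘pac admin create --help’ on your version before relying on it:

```powershell
# Requires the Power Platform CLI (pac) v1.28.3 or later,
# signed in with an account that has Power Platform admin rights
pac auth create --tenant "contoso.onmicrosoft.com"

# Hypothetical list of new joiners, eg fed in from your joiner process
$newJoiners = @("jane.smith@contoso.com", "amit.patel@contoso.com")

foreach ($upn in $newJoiners) {
    # Create a developer environment and assign it to the target user -
    # parameter names assumed; check 'pac admin create --help'
    pac admin create --name "Dev - $upn" --type Developer --user $upn
}
```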

So this is really nice to see. However, I think it can still go one step further (at least!), and I’m trying to use my network of connections to raise this with the right people.

See, we have the Power Platform for Admins connector within Power Platform already. One of the functions available in this is to be able to create Power Platform environments:

However, if we look at the action (& the advanced settings within this action), there’s no ability to set this:

Interestingly enough, the API version listed by default is actually several years old. By doing some digging around, I can see that there are multiple later API versions, so I’m not sure why it’s using an older one by default:
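For the curious, it’s possible to experiment with this by calling the underlying API directly and specifying a newer api-version. The sketch below is indicative only – the endpoint path, api-version value, and request body shape are all assumptions from my digging around, not documented values:

```powershell
# Indicative only - endpoint path, api-version and body shape are assumptions.
# $token needs to be an access token for the BAP service (eg obtained via
# MSAL) for an account with Power Platform admin rights
$token = "<access token>"

$body = @{
    location   = "europe"
    properties = @{
        displayName    = "Sandbox - Marketing"
        environmentSku = "Sandbox"
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://api.bap.microsoft.com/providers/Microsoft.BusinessAppPlatform/environments?api-version=2021-04-01" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
```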

What would be really amazing is to have these capabilities surfaced directly within Power Platform, using this connector. Then we could look to have everything handled directly within Power Platform. Given that the CoE toolkit already includes an Environment Request feature, I would see this as building on top & enabling it even further. Obviously organisations wouldn’t need the CoE toolkit itself, as they could look to build out something custom to handle this.

What are your thoughts on this – how do you see these features enabling your organisation? If your organisation HAS locked down the ability for users to provision developer environments, are you able to share some insights as to why? I’d love to hear more – drop a comment below!

Developer Environment Routing!

Recently I talked about the wider vision that organisations would be able to use, for helping users get access to the right environments (Default Environment – How to handle? » The CRM Ninja). As part of this, I discussed the Microsoft vision of having environment routing in place, to move users automatically to specific environments.

At the point of writing, there wasn’t anything that I could publicly talk about. However, overnight Microsoft has released functionality around this – what I see as being the first step in this direction. The documentation for this is at https://powerapps.microsoft.com/en-us/blog/default-environment-routing-public-preview/

The functionality released enables users new to Power Platform to automatically have a developer environment created for them to access, rather than landing in the Default environment within their tenant. Many organisations struggle with users creating content in the Default environment, when it’s not really (at least not in my opinion) the right place to do this.

Now, when we say ‘new users’, this doesn’t actually mean users newly created in M365 (or Entra ID/AAD). What this means is ‘users who have not accessed anything within Power Platform before’. In the back end, there’s a counter on each user record that keeps track of this, which this functionality is using to determine if users have accessed Power Platform beforehand or not.

What is important to note on this as well is that the Default environment DOES NOT need to be set to Managed for this to work. Microsoft documentation doesn’t make this clear at the moment, but hopefully it’ll be updated soon to clarify this.

Two settings do need to be toggled on within the Power Platform Admin Centre for this to work:

Once these have been set & saved, let’s take a look at how things actually happen. I’ve created a new user for testing purposes:

When signing in, it shows the general interface that we’re used to for a few seconds:

But, then we get this exciting NEW screen!

And then after a minute or so, we get placed nicely in the new environment:

Looking at the Power Platform Admin Centre, we can see the new environment that’s been created:

To be candid, during my testing things didn’t always work – I had some differing behaviour, or (on one occasion) the interface just hung. I’m going to put this down to being newly released & the product team working through potential issues (remember of course – this is in PREVIEW), and am hoping that they’re resolved very soon.

Also, it’s important to note that the developer environments created through this are MANAGED. Users will be able to create collateral in them, but premium licensing will need to be in place to run apps etc.

Moving forward, it would be great to have some information displayed to users if something hasn’t worked, as well as (configurable) notifications to admins so that they’re aware as well. Examples of this could include where an organisation has maxed out the number of (free) developer licenses available (yes, I know this sounds strange, but there’s a default limit of 9,999 developer licenses per org).

But I think it’s a great first step forward, and hopefully there will be many different ways that this product will be developed forward. My initial thoughts would include:

  • Creating developer environments for existing Power Platform users who don’t have a personal developer environment
  • Routing existing Power Platform users who have their own Developer environment to it
  • Being able to route to other places as well, including being able to specify which users/groups of users should be routed

It’s an exciting place to be in, and I look forward to seeing more of it!

What are your thoughts around this? Does your organisation allow users to have personal developer environments, or does it lock them down?

Default Environment – How to handle?

As we’re all aware, the default (Power Platform) environment in any Azure tenant is a very ‘interesting’ thing to have. It’s there by default when an Azure tenant is created, all users within the Azure tenant automatically have access to it, we’re not able to restrict users from being in it, etc etc.

Though it’s able to be backed up, it’s not able to be restored over itself, there’s no SLA/support available on it… the list goes on & on!

Many of us have come up against issues caused by people using the default environment whilst not knowing about challenges involving it, which usually results in pulling out our hair, banging our head against the wall, and other like-minded productive approaches.

However, it is the first place that users new to Power Platform land, and instinctively they’ll start building applications, automations etc within it (though usually without using solutions as a container for their development). So to date, there’s not really been anything that could be done around this, apart from monitoring users & chasing them after the fact.

Now, we’re all about enabling our users in the right way, helping educate & support them. Telling them a big NO doesn’t help, and can even be an initial blocker to having people start playing around & building technological solutions.

So how can we go about enabling our users, but also having the appropriate level of governance over the top? Well, there are several steps that I think we can take, which will help us with these. Now, not all of these are yet in place, though they have been talked about publicly. So let’s go take a look at them.

  1. The first step, in my mind, is to start off with enabling the default environment as a managed environment (yes, this can ACTUALLY be done!). Managed environments have many different properties associated with them, but the one of most interest (for this at least) is the requirement to have a premium license in place.

All users within an organisation should by default have an M365 license SKU against them (usually this would be an E3 or E5). Users with these can immediately use the seeded Power Platform capabilities within them to create Power Platform collateral (using standard connector capabilities). However, with the default environment being managed, they will NOT be able to access it!

Note: For the moment, I’m leaving out users who have premium Power Platform licenses – this is deliberate

  2. Environment routing. Announced recently are the environment routing capabilities. These will enable users to be automatically routed to an appropriate environment, based on various conditions that can be set. With this, we could create appropriate business unit ‘sandboxes’, and we could route users to these. The user experience would be that when logging in, they would automatically go to the right environment, rather than trying to work out which environment they should actually go to. This will save on confusion, and be a good user experience (in my opinion).
  3. Just-In-Time (JIT) Environment Creation. One of the items mentioned by Charles Lamanna at the European Power Platform Conference 2023 in Dublin is a new capability that’s coming in soon (I hope!). From the sound of it, this will give the ability to automatically create a new environment for users who do not already have one.

This sounds really cool. With the recent advent of developer environments (& the ability for all users to have multiples of these), this could work REALLY well with the environment routing capability mentioned above. When a user logs in for the first time, it could look to see if they have a developer environment – if yes, then route them to it. But if they didn’t, it could automatically spin up & create a new developer environment, and route them to it.

Now there are some caveats with this approach, leaving aside that some of the functionality isn’t GA yet.

It would mean that organisations would need to be alright with changing the default environment to become a managed environment. Obviously, risk assessments would need to be carried out with this, and non-premium solutions migrated elsewhere.
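For reference, flipping an environment to Managed doesn’t have to be done through the admin centre UI – there’s a PowerShell route too. A sketch below, based on the Set-AdminPowerAppEnvironmentGovernanceConfiguration cmdlet (test this somewhere non-critical first, and bear in mind the caveat that follows):

```powershell
# Requires the Microsoft.PowerApps.Administration.PowerShell module
Add-PowerAppsAccount

# The environment ID (GUID) of the environment to switch - here, default
$envId = "<environment id>"

# protectionLevel 'Standard' = Managed Environment; 'Basic' = unmanaged
$governance = [pscustomobject]@{ protectionLevel = "Standard" }

Set-AdminPowerAppEnvironmentGovernanceConfiguration `
    -EnvironmentName $envId `
    -UpdatedGovernanceConfiguration $governance
```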

It’s also important to call out that organisations which have a CDS 1.0 implementation (ie before Power Platform became GA etc) will only have the ability to upgrade default to managed. They are not able to downgrade back to an unmanaged default environment, given limitations of the original CDS implementation (I’ve heard some truly HORRIFIC stories around this, so be careful!)

The above, however, is just the start of things. There are many other concepts to keep in mind, such as Landing Zones, Policies, etc. I’m going to be looking to cover these in upcoming posts, so keep an eye out for them!

New Power Platform Security Role Editor

We’ve all been there. Security role wise, that is. It’s the point in any project where we start looking at configuring user security. To do this, we’ve used the Security Role section in the Settings area (once it’s actually loaded, of course):

Ah, the joys of this – dating back to CRM 3.0 (to my recollection – though it possibly might be 4.0). All of these lovely little circles, which fill up more & more as we click on them, whilst trying to work out what each one actually does:

And that’s not to mention the ‘Missing Entities’ tab (did anyone ever figure out what this was supposed to be used for?), or the ‘Custom Entities’ tab, which seemed like a catch-all place. Plus the fact that non-table permissions (eg Export to Excel) were placed on random tabs, which meant we needed to hunt through each tab to find the appropriate item.

Many of us spent hours in here (then further hours once we started troubleshooting user issues that were down to security role misconfiguration). The absolute ‘JOYS‘ of the header title row not being scrollable (though it was possible to hover over each permission, and it would tell you what it was). The power of clicking on the line item, and seeing ALL of the little circles fill up – if you haven’t ever done it, you won’t have experienced the bliss that this could bring!

But all things come to an end (or, as the Wheel of Time series says: ‘The Wheel of Time turns, and Ages come and pass, leaving memories that become legend‘), and now we have a NEW security role experience.

First of all, the UI has changed. It’s cleaner, responsive, gives more information to users upfront… and the headings SCROLL!!!

We’re able to show just the tables that have permissions assigned to them (rather than wading through dozens or hundreds of entries that have no relevance), or show everything:

Oh, and those random non-table privileges that we had to try to find beforehand – these are now grouped very nicely. This is SO much easier to manage!

We can also take permissions that have been set on a specific table, and then copy them to another table (it prompts us to select – and we can select MULTIPLE tables to copy to!):

But best of all is the way that we can now set permissions for items. There are several different ways of doing this.

Firstly, Microsoft has now provided us with the ability to select standard pre-defined options. Using these will set permissions across all categories for the item appropriately:

This is really neat, and is likely to save quite a bit of time overall. However, if we need to tweak security permissions to custom settings, we can do this as well. Instead of clicking on circles, we now have lovely dropdowns to use:

In short, I’m absolutely loving this. The interface is quick to load, intuitive, and works well without fuss.

Given how much time I’ve spent over the years in wrestling with security roles, I think this is going to be a definite timesaver for so many people (though we’ll still need to troubleshoot interesting error messages at times that testing will throw up, and work out how/what we’re needing to tweak for security access to work).

There are still some tweaks that I think Microsoft could make to get this experience even better. My top three suggestions would include:

  • The ability to select multiple lines, and then set a permission across all of them (sort of like bulk editing)
  • Being able to have this area solution aware. When we have various different projects going on, it would be great to have the ability to filter the permissions grid by a solution. This would be a timesaver, rather than having to wade through items that aren’t relevant
  • Export to Excel. Having a report generated to save digitally or print off is amazing for documentation purposes. There are 3rd party tools for this (thank you XrmToolBox!), but it would be great to have it built in here

Overall, I’m really quite happy and impressed with it (it’s definitely taken enough time for Microsoft to pay attention to this, and get it out), and hope that it’ll continue to improve!

What have your bugbears with the legacy security editor been over the years, and how are you liking the new experience? Drop a comment below – I’d love to hear!

Developer Environment Deletion!

Strong title for a blog post, right? Well, I did want to catch your attention! So what exactly are we talking about here?

For the last few years, it’s been possible for users to sign up for a ‘Developer’ plan, which gives them a full-capability Power Platform environment for free (though with some limitations). This used to be called the ‘Community’ plan, and is an amazing resource for everyone, whether they’re a professional or citizen developer, to have their own personal ‘sandpit’ to play in, and try things out.

Let’s wind back a few months in time now – earlier this year, Microsoft announced that users would be able to create THREE of these Developer environments, rather than having just a single one! This was mind-blowing news, and something that has been extremely welcomed. If you’re wanting to see more on the announcement, Phil Topness has a great video on it at Dataverse Environments For Everyone – New Developer Plan – Power CAT Live – YouTube.

Incidentally, I’m curious as to how much storage space Microsoft has in the background to handle these. After all, each environment takes up a minimum of 1GB of space (& can grow to 2GB). That means that each user could have 6GB of storage in use… which, when multiplied across all users, gives a VERY large number!

Microsoft has now announced that these developer environments, however, need to be utilised – ie if they’ve been created, but aren’t being used, Microsoft is going to delete them! Now, from a certain perspective, this is actually quite good – after all, there are all of the storage considerations for environments that have been created, but aren’t being used. However from a different perspective, this could be a problem. What about if you’re doing something occasionally in an environment, but not too often? What about if you decide to go on a ‘Round the World’ cruise for several months?

So let’s look at the definition for this. Microsoft states that an environment is considered to be inactive when it hasn’t been used for 90 days. At that point in time, it is disabled, and the administrator or environment owner is notified. If there is no action taken within the next 30 days, then the developer environment is automatically deleted.

Now, how does Microsoft define ‘Activity’? It goes something like this:

  • User activity: Launch an app, execute a flow (whether automatic or not), chat with a Power Virtual Agents bot
  • Maker activity: Create, read, update, or delete an app, flow (desktop and cloud flows), Power Virtual Agents bot, custom connector
  • Admin activity: Environment operations such as copy, delete, back up, recover, reset

The above is all user driven – ie a user needs to interact with something within the environment. However, it’s also important to note how automation is viewed:

  • Activity includes automated behaviors such as scheduled flow runs. For example, if there’s no user, maker, or admin activity in an environment, but it contains a cloud flow that runs daily, then the environment is considered active.

It’s also important to note that at this point in time, the above only applies to Developer environments. Other types of environment (Production, Sandbox etc) don’t have any auto-deletion policies called out for them – well, at least not yet (if something does pop up around these, I’ll definitely look to talk about them too!).

So to answer our question above about what happens with a (developer) environment that is only being used infrequently – the way to stop it being auto-deleted is to put some automation in place. This doesn’t need to be anything heavy – it can be something simple & easy, just to ensure that the environment registers activity happening within it.
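If you’re wanting to get ahead of this, a quick inventory of the developer environments in your tenant would be a sensible first step. A sketch below – I’m assuming the EnvironmentType property surfaced by Get-AdminPowerAppEnvironment, and property names have shifted between module versions, so do check the raw objects:

```powershell
# Requires the Microsoft.PowerApps.Administration.PowerShell module
Add-PowerAppsAccount

# List developer environments with owner & creation date, oldest first -
# property names assumed; inspect one object with Format-List * to confirm
Get-AdminPowerAppEnvironment |
    Where-Object { $_.EnvironmentType -eq "Developer" } |
    Select-Object DisplayName,
                  @{ n = "CreatedBy"; e = { $_.CreatedBy.displayName } },
                  CreatedTime |
    Sort-Object CreatedTime
```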

In my view, it would be nice to have some granularity & control over this as well – allowing organisations to set their own deletion policies. We have this in place for things like audit log retention – it would be nice to have it in here too.

Power BI & Dataverse Solutions

With the recent announcement of Power BI being able to be included in Power Platform solutions, LOTS of people were celebrating. Finally there was the ability not only to include Power BI reports within solutions, but also to automate (aka ALM) them as well! Celebrations all round… well, for the most part.

See, although the documentation (see Power Platform solutions can now include Power BI reports and datasets – Power Platform Release Plan | Microsoft Learn) states that Power BI reports & datasets can now be included in solutions, it doesn’t actually quite work like that.

What happens is that when Power BI reports and datasets (depending on what you’re wanting to do) are included in solutions, though they do appear in the solution explorer window, they’re actually just a sort of shortcut to where the components actually live. Exporting the solution then brings the components into the exported solution file. This can be seen quite clearly when extracting the file on your computer:

As we can see from the image above, we now have the Power BI components within it.

Note: If you were hoping to just go into it & see the Power BI report nicely, unfortunately you’re going to be disappointed. Instead, it’s exported as a ‘.pbipkg’ file, which doesn’t seem possible to open with Power BI Desktop at all!

But it’s there, and supposed to work. So let’s go ahead & import it into the destination environment. After all, this is the whole point of solutions – being able to move components between places!

Note: For the purpose of this blog post, I’m using manual ALM (ie manually exporting & importing the solution). However, the same will be true for automated ALM (eg using Azure DevOps).

Now, this can be easier said than done. See, it’s quite possible that you could experience an error when importing the solution into the target environment, such as the following:

The error message (‘This solution contains Power BI components, so it couldn’t be imported here’) seems to be helpful – well, to a point. We know that there are Power BI components in the solution – after all, that’s the whole point of it – but how come we’re not able to import it?!

Usually at this point I’d go to download the log file, and try to pinpoint the exact cause of the error. When presented with this specific error though, the log file doesn’t really seem to be of much help, despite trawling through each & every line in it. All it does is confirm that there has indeed been an import error, and that it seems to be due to the Power BI components in the solution.

Just to double-check this, I did remove the Power BI components, export the solution, and then import it in a different environment. This worked absolutely fine without any errors! So indeed it’s got something to do with the Power BI components – but WHAT exactly is happening?

Well, the cause of this goes back to how Power BI components in Power Platform solutions actually work. As mentioned above, the Power BI items (report, dataset etc) are actually stored within Power BI itself. Yes, they’re included in the solution when we export it, but when importing them, they don’t actually save to Dataverse.

This is the absolutely KEY thing to know and understand. When importing a solution with Power BI components, they come in as part of the solution, but are published to Power BI. Not only are they published to Power BI, a Power BI workspace is CREATED for them to live in (which will be specific per environment – a single Power BI workspace will not be shared with multiple Power Platform environments):

What this means in reality is that when the solution is imported, the Power BI workspace is created. However it’s not created by the system itself – underneath everything, the creation of the Power BI workspace is being driven by the USER that’s importing the solution. Now, if the user account does NOT have permissions to create Power BI workspaces… well then, it’s going to error out, which is EXACTLY what is happening here!

So, it’s absolutely vital that if you are including Power BI components in a solution, you must ensure that however you’re importing it, the user account has privileges to create Power BI workspaces (as well as publish reports to an existing workspace). Without this in place, you’re going to be getting some very confusing errors happening!
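One way to catch this early would be a simple pre-flight check: before running the import, confirm that the deployment account can actually create a Power BI workspace. A sketch using the MicrosoftPowerBIMgmt module (the workspace name is arbitrary – tidy up afterwards):

```powershell
# Requires the MicrosoftPowerBIMgmt module
# Sign in as the account that will be performing the solution import
Connect-PowerBIServiceAccount

try {
    # Attempt to create a throwaway workspace as a permissions check
    $ws = New-PowerBIWorkspace -Name "ALM pre-flight check"
    Write-Host "Workspace created OK ($($ws.Id)) - the import account has the required privileges"
}
catch {
    Write-Warning "This account can't create Power BI workspaces - the solution import will fail: $_"
}
```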

It’s also important to note that even if the solution is managed, it is still possible (with the appropriate user permissions) to edit the Power BI report & dataset. Including it in a managed solution does not lock it.

Also, I’d like to thank Laura GB for her inspiration on this topic – with my limited Power BI knowledge, I usually turn to her for advice & help with Power BI.

Have you been considering including Power BI components in your solutions, or already been doing so? Have you run into this error/issue before? Drop a note below – I’d love to hear how you managed to work out the issue!

The story of MFA & the Centre of Excellence

I’ve been rolling out the Microsoft Centre of Excellence solution for several years now at customers. It’s a great place to start getting a handle on what exactly is going on within a Power Platform tenant, though there’s obviously so much more that takes place within a Centre of Excellence team.

The solution gathers telemetry around environments, Power Apps, Power Automates etc through the usage of the Power Automate Admin connectors for Power Platform (see Power Platform for Admins – Connectors | Microsoft Learn for further information on these).

Now obviously we need a user account to run these, and this usually has been through the use of a ‘pseudo service account’, as using a service principal has been tricky, to say the least. So we would get customers to set up an appropriate account with licensing & permissions in place, and use this to own & run the Power Automate flows that bring in the information to the CoE solution.

It is important to note that usage of these connectors does require a pretty high level of permissions – in fact, we usually suggest applying the Power Platform Admin security role (within the Microsoft 365 Admin Centre) to the user account. All good so far.

The tricky part has, to date, been around security. Organisations usually require (for good reasons) multi-factor authentication to be in place (aka MFA). Now this is fine for users logging in & accessing systems. However, it proves to be somewhat trickier for automations.

See, when a user logs in & authenticates through MFA, a token is stored to allow them to access systems. Automations can also use this. However the token will expire at some point (based on how each organisation has implemented MFA access/controls). When the token expires, the automations will stop running, and fail silently. There’s no prompt that the token has expired, and the only way of knowing is to take a look at the Power Automate flow run history. This can be interesting though, as signing in (with the pseudo service account) will prompt for MFA authentication, and then everything will start running again!
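One possible mitigation is a scheduled check for flows that have quietly stopped succeeding. A rough sketch below, using the maker-side Microsoft.PowerApps.PowerShell module signed in as the pseudo service account – the flow names are illustrative, and the run ordering & property names are assumptions worth verifying:

```powershell
# Requires the Microsoft.PowerApps.PowerShell (maker) module,
# signed in as the pseudo service account that owns the CoE flows
Add-PowerAppsAccount

# Illustrative flow names - swap in the CoE sync flows you care about
$flowsToWatch = @("Admin | Sync Template v3", "Admin | Sync Template v3 (Apps)")

foreach ($name in $flowsToWatch) {
    $flow = Get-Flow | Where-Object { $_.DisplayName -eq $name }

    # Most recent run first - StartTime/Status properties assumed
    $lastRun = Get-FlowRun -FlowName $flow.FlowName |
        Sort-Object StartTime -Descending |
        Select-Object -First 1

    if ($lastRun.Status -ne "Succeeded") {
        Write-Warning "$name - last run status was '$($lastRun.Status)'"
    }
}
```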

So this has usually resulted in conversations with the client to politely point out that implementing MFA on the service account will mean that, at some point, the Power Automate flows are going to start failing. Discussions with security teams take place, mitigation using tools such as Azure Sentinel are implemented, and things move ahead (cautiously). It’s been, to date, the most annoying pain for the technical implementation (that I can think of at least, in my experience).

Now you’d think that a change in this would be shouted from the rooftops, people talking about it, social media blowing up, etc. Well, I was starting an implementation recently for a customer, and was talking to them around this, as I’d usually do. Imagine my surprise when Todd, one of the Microsoft technical people attached to the client, asked why we weren’t recommending MFA.

Taking a look at the online documentation, I noticed that something had slipped in. Finally there was the ability to use MFA!

Trawling back through the GitHub history (after all, I wanted to find out EXACTLY when this had slipped in), I discovered that it was a few months old. I was still very surprised that there hadn’t been more publicity around this (though it’s definitely a good incentive to write about it, and a great blog post to start off 2023 with!).

So moving forward, we’re now able to use MFA for the CoE user account. This is definitely going to put a lot of minds at rest (especially those who are in security and/or governance). The specifics around the MFA implementation can be found at Conditional access and multi-factor authentication in Flow – Power Automate | Microsoft Learn – but it’s important to note that specific MFA policies will need to be set up & implemented for this account.

So, now the job will be to retro-fit this to all organisations that already have the CoE toolkit in place. Thankfully this shouldn’t be too difficult to do, and will most definitely enhance the security controls around it!

Have you implemented any mitigation in the past to handle non-MFA? I’m curious if you have – please drop a comment below!