Environment Grouping

One of the main ‘complaints’ that Power Platform administrators have is around how policies are applied to environments. Within Azure, it’s possible to set up security policies and apply them in bulk, or group together components under a single set of policies. However, when it comes to Power Platform, this has not been possible – each environment has needed to be configured on its own.

I’m not talking here about DLP policies, as these are set up and then relevant environments selected/deselected as needed. I’m talking about things like setting Canvas App sharing limits, welcoming new makers, and other items.

Well, Microsoft has now made this possible. Though the first iteration (currently in Public Preview) only has a few options within it, I’m quite certain that many more items will come under the new Environment Grouping feature down the line.

At the moment, there are six options available for Power Platform administrators to set and configure. Note that you do need to have either the Global Administrator or Power Platform Administrator (Microsoft 365) security role to be able to access and configure this.

To be clear, Environment Grouping is a feature of Managed Environments. I’m not going to go into the debate about whether you should or shouldn’t adopt Managed Environments (at least not here – I may be speaking about it publicly later on this year), but you do need to have these in order to use this functionality. More specifically, you will ONLY be able to add environments that are set as ‘Managed’ to Environment Groups (though they don’t have to have Dataverse in play):
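If you’d rather script the Managed Environment part than click through the admin centre, below is a minimal sketch using the Microsoft.PowerApps.Administration.PowerShell module to flip an environment over to ‘Managed’ so it can then be added to a group. The environment ID is a placeholder, and the cmdlet/parameter names are as per the admin module docs at the time of writing – do double-check them against your module version.

```powershell
# Minimal sketch - requires the Power Platform admin PowerShell module
Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Scope CurrentUser
Add-PowerAppsAccount   # sign in as a Power Platform admin or Global admin

# Placeholder - the ID (GUID) of the environment you want to add to a group
$environmentId = "00000000-0000-0000-0000-000000000000"

# 'Standard' turns Managed Environments ON; 'Basic' would turn it back off
$governance = New-AdminPowerAppEnvironmentGovernanceConfiguration -ProtectionLevel "Standard"
Set-AdminPowerAppEnvironmentGovernanceConfiguration -EnvironmentName $environmentId -UpdatedGovernanceConfiguration $governance
```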

So, what exactly is the purpose of Environment Grouping? Well, it’s to minimise the amount of time that Power Platform administrators need to spend setting up & applying policies.

Think of the users within your organisation. You’re going to have different personas, such as developers, testers, end users, etc.

You’re also likely (especially in larger organisations) to have different business units & functions with different requirements. For example, you may lock down access to social media, but Marketing and Recruitment may indeed need access to social media to be able to carry out their jobs.

With these personas in mind, you can then start to look into building out different rule groupings, which will apply to all environments that are included under the Environment Group. It’s somewhat similar to the way in which DLP policies work – you create a DLP policy, and every environment that comes under it gets that policy’s settings.

There are many ways to manage pockets of environments within your tenant using environment groups. For example, global organisations can create an environment group for all environments in each geographic region to ensure compliance with legal and regulatory requirements. You can also organise environment groups by department or other criteria.

One of the other features around Environment Groups is the ability to use Environment Routing. I’ve talked about this previously when the feature was first released (Developer Environment Routing!) – Environment Groups now take this to the next level, by being able to automatically set the Environment Group that new developer environments will fall under (so that policies are applied automatically). It’s important to note here that all developer environments created through this WILL be set as ‘Managed’.

More information on the new capabilities can of course be found on Microsoft Learn, at https://learn.microsoft.com/en-us/power-platform/admin/environment-groups.

I think that this is a great new feature for Power Platform administrators to have in place, and I look forward to seeing new functionality rolled out within it to better enable organisations. Being able to cut down on administration/governance time whilst being more effective is, in my view, a win-win for ALL of us, and I can’t wait to see how it develops over time.

So, my question to you is how would YOU look to use such functionality? What features might you like to appear within Environment Grouping to enable you and your organisation? Drop a comment below – I’d love to hear!

Developer Environment Routing!

Recently I talked about the wider vision that organisations would be able to use, for helping users get access to the right environments (Default Environment – How to handle? » The CRM Ninja). As part of this, I discussed the Microsoft vision of having environment routing in place, to move users automatically to specific environments.

At the point of writing, there wasn’t anything that I could publicly talk about. However, overnight Microsoft have released functionality around this – what I see as the first step in this direction. The documentation for this is at https://powerapps.microsoft.com/en-us/blog/default-environment-routing-public-preview/

The functionality released is to enable new users to Power Platform to automatically have a developer environment created for them to access, rather than landing in the Default environment within their tenant. Many organisations struggle with users creating content in the Default environment, when it’s not really (at least not in my opinion) the right place to do this.

Now, when we say ‘new users’, this doesn’t actually mean users newly created in M365 (or Entra ID/AAD). What this means is ‘users who have not accessed anything within Power Platform before’. In the back end, there’s a counter on each user record that keeps track of this, which this functionality is using to determine if users have accessed Power Platform beforehand or not.

What is important to note on this as well is that the Default environment DOES NOT need to be set to Managed for this to work. Microsoft documentation doesn’t make this clear at the moment, but hopefully it’ll be updated soon to clarify this.

Two settings do need to be toggled on within the Power Platform Admin Centre for this to work:

Once these have been set & saved, let’s take a look at how things actually happen. I’ve created a new user for testing purposes:

When signing in, it then briefly shows the general interface that we’re used to for a few seconds:

But, then we get this exciting NEW screen!

And then after a minute or so, we get placed nicely in the new environment:

Looking at the Power Platform Admin Centre, we can see the new environment that’s been created:

To be candid, during my testing things didn’t always work – I had some differing behaviour, or (on one occasion) the interface just hung. I’m going to put this down to being newly released & the product team working through potential issues (remember of course – this is in PREVIEW), and am hoping that they’re resolved very soon.

Also, it’s important to note that the developer environments created through this are MANAGED. Users will be able to create collateral in them, but running apps etc. will need premium licensing in place.

Moving forward, it would be great to have some information displayed to users if something hasn’t worked, as well as (configurable) notifications to admins so that they’re aware as well. Examples of this could include where an organisation has maxed out the number of (free) developer licenses available (yes, I know this sounds strange, but there’s a default limit of 9,999 developer licenses per org).

But I think it’s a great first step forward, and hopefully there will be many different ways that this product will be developed forward. My initial thoughts would include:

  • Creating developer environments for existing Power Platform users who don’t have a personal developer environment
  • Routing existing Power Platform users who have their own Developer environment to it
  • Being able to route to other places as well, including being able to specify which users/groups of users should be routed

It’s an exciting place to be in, and I look forward to seeing more of it!

What are your thoughts around this? Does your organisation allow users to have personal developer environments, or does it lock this down?

Default Environment – How to handle?

As we’re all aware, the default (Power Platform) environment in any Azure tenant is a very ‘interesting’ thing to have. It’s there by default when an Azure tenant is created, all users within the Azure tenant automatically have access to it, we’re not able to restrict users from being in it, etc etc.

Though it’s able to be backed up, it’s not able to be restored over itself, there’s no SLA/support available on it….the list goes on & on…!

Many of us have come up against issues caused by people using the default environment whilst not knowing about challenges involving it, which usually results in pulling out our hair, banging our head against the wall, and other like-minded productive approaches.

However, it is the first place that users new to Power Platform land, and instinctively they’ll start building applications, automations etc. within it (though usually without using solutions as a container for their development). To date, there’s not really been anything that could be done about this, apart from monitoring users & chasing them after the fact.

Now, we’re all about enabling our users in the right way, helping educate & support them. Telling them a big NO doesn’t help, and can even be an initial blocker to having people start playing around & building technological solutions.

So how can we go about enabling our users, but also having the appropriate level of governance over the top? Well, there are several steps that I think we can take, which will help us with these. Now, not all of these are yet in place, though they have been talked about publicly. So let’s go take a look at them.

  1. The first step, in my mind, is to start off with enabling the default environment as a managed environment (yes, this can ACTUALLY be done!). Managed environments have many different properties associated with them, but the one of most interest (for this at least) is the requirement to have a premium license in place.

All users within an organisation should by default have an M365 license SKU against them (usually this would be an E3 or E5). Users with these can immediately use the seeded Power Platform capabilities within them to create Power Platform collateral (using standard connector capabilities). However, with the default environment being managed, they will NOT be able to access it!

Note: For the moment, I’m leaving out users who have premium Power Platform licenses – this is deliberate

  2. Environment routing. Recently announced are the environment routing capabilities. These will enable users to be automatically routed to an appropriate environment, based on various conditions that can be set. With this, we could create appropriate business unit ‘sandboxes’, and route users to these. The user experience would be that when logging in, they would automatically go to the right environment, rather than trying to work out which environment they should actually go to. This will save on confusion, and be a good user experience (in my opinion).
  3. Just-In-Time (JIT) Environment Creation. One of the items mentioned by Charles Lamanna at the European Power Platform Conference 2023 in Dublin is a new capability that’s coming in soon (I hope!). From the sound of it, this will give the ability to automatically create a new environment for users who do not already have one.

This sounds really cool. With the recent advent of Developer Environments (& the ability for all users to have multiples of these), this could work REALLY well with the environment routing capability mentioned above. When a user logs in for the first time, it could look to see if they have a developer environment – if they do, route them to it; if not, automatically spin up a new developer environment and route them to that.

Now there are some caveats with this approach, leaving aside that some of the functionality isn’t GA yet.

It would mean that organisations would need to be alright with changing the default environment to become a managed environment. Obviously, risk assessments would need to be carried out with this, and non-premium solutions migrated elsewhere.
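As a hedged example (using the same admin PowerShell module mentioned earlier – cmdlet and parameter names per the docs at the time of writing, so do verify them), something along these lines should flip the default environment over to Managed. Please test this somewhere non-critical first.

```powershell
# Sketch: make the default environment a Managed Environment
Add-PowerAppsAccount
$defaultEnv = Get-AdminPowerAppEnvironment -Default

# 'Standard' = Managed; 'Basic' = not Managed
$governance = New-AdminPowerAppEnvironmentGovernanceConfiguration -ProtectionLevel "Standard"
Set-AdminPowerAppEnvironmentGovernanceConfiguration -EnvironmentName $defaultEnv.EnvironmentName -UpdatedGovernanceConfiguration $governance
```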

It’s also important to call out that organisations which have a CDS 1.0 implementation (ie before Power Platform became GA etc) will only have the ability to upgrade default to managed. They are not able to downgrade back to an unmanaged default environment, given limitations of the original CDS implementation (I’ve heard some truly HORRIFIC stories around this, so be careful!)

The above, however, is just the start of things. There are many other concepts to keep in mind, such as Landing Zones, Policies, etc. I’m going to be looking to cover these in upcoming posts, so keep an eye out for them!

Power BI & Dataverse Solutions

With the recent announcement of Power BI being able to be included in Power Platform solutions, LOTS of people were celebrating. Finally there would be the ability to not only include Power BI reports within solutions, but we could then also automate (aka ALM) it as well! Celebrations all round….well, for the most part.

See, although the documentation (see Power Platform solutions can now include Power BI reports and datasets – Power Platform Release Plan | Microsoft Learn) states that Power BI reports & datasets can now be included in solutions, it doesn’t actually quite work like that.

What happens is that when Power BI reports and datasets (depending on what you’re wanting to do) are included in solutions, though they do appear in the solution explorer window, they’re actually just a sort of shortcut to where they actually live. Exporting the solution then brings the components into the exported solution file. This can be seen quite clearly when extracting the file on your computer:

As we can see from the image above, we now have the Power BI components within it

Note: If you were hoping to just go into it & see the Power BI report nicely, unfortunately you’re going to be disappointed. Instead, it’s exported as a ‘.pbipkg’ file, which doesn’t seem possible to open with Power BI Desktop at all!

But it’s there, and supposed to work. So let’s go ahead & import it into the destination environment. After all, this is the whole point of solutions – being able to move components between places!

Note: For the purpose of this blog post, I’m using manual ALM (ie manually exporting & importing the solution). However, the same will be true for automated ALM (eg using Azure DevOps).
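For reference, the manual export/import can equally be scripted with the Power Platform CLI – a rough sketch below, where the org URLs and solution name are placeholders.

```powershell
# Assumes the Power Platform CLI (pac) is installed; URLs & solution name are placeholders
pac auth create --url https://source-org.crm.dynamics.com

# Export the solution from the source environment (unmanaged in this example)
pac solution export --name MyPowerBISolution --path .\MyPowerBISolution.zip --managed false

# Switch authentication to the target environment and import
pac auth create --url https://target-org.crm.dynamics.com
pac solution import --path .\MyPowerBISolution.zip
```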

Now this can be easier said than actually done. See, it’s quite possible that you could experience an error when importing the solution into the target environment, such as the following:

The error message (‘This solution contains Power BI components, so it couldn’t be imported here’) seems to be helpful – well, to a point. We know that there are Power BI components in the solution – after all, that’s the point of it – but how come we’re not able to import it?!

Usually at this point I’d go to download the log file, and try to pinpoint the exact cause of the error. When presented with this specific error though, the log file doesn’t really seem to be of much help, despite trawling through each & every line in it. All it does is confirm that there has indeed been an import error, and that it seems to be due to the Power BI components in the solution.

Just to double-check this, I did remove the Power BI components, export the solution, and then import it in a different environment. This worked absolutely fine without any errors! So indeed it’s got something to do with the Power BI components – but WHAT exactly is happening?

Well, the cause of this goes back to how Power BI components in Power Platform solutions actually work. As mentioned above, the Power BI items (report, dataset etc) are actually stored within Power BI itself. Yes, they’re included in the solution when we export it, but when importing them, they don’t actually save to Dataverse.

This is the absolutely KEY thing to know and understand. When importing a solution with Power BI components, they come in as part of the solution, but are published to Power BI. Not only are they published to Power BI, a Power BI workspace is CREATED for them to live in (which will be specific per environment – a single Power BI workspace will not be shared with multiple Power Platform environments):

What this means in reality is that when the solution is imported, the Power BI workspace is created. However, it’s not created by the system itself – underneath everything, the creation of the Power BI workspace is driven by the USER that’s importing the solution. Now, if that user account does NOT have permissions to create Power BI workspaces…well then, it’s going to error out, which is EXACTLY what is happening here!

So, it’s absolutely vital that if you are including Power BI components in a solution, you must ensure that however you’re importing it, the user account has privileges to create Power BI workspaces (as well as publish reports to an existing workspace). Without this in place, you’re going to be getting some very confusing errors happening!
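One way to sanity-check this up front is to sign in as the deployment account and see whether it can actually create a workspace. A hedged sketch using the MicrosoftPowerBIMgmt module – the workspace name below is just a throwaway placeholder.

```powershell
# Sketch: verify the deployment account can create Power BI workspaces
Install-Module -Name MicrosoftPowerBIMgmt -Scope CurrentUser
Connect-PowerBIServiceAccount   # sign in as the account that will import the solution

# If this fails, the solution import containing Power BI components will likely fail too
New-PowerBIWorkspace -Name "Deployment permission check"

# Confirm it exists (clean it up afterwards via the Power BI portal or REST API)
Get-PowerBIWorkspace -Name "Deployment permission check"
```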

It’s also important to note that even if the solution is managed, it is still possible (with the appropriate user permissions) to edit the Power BI report & dataset. Including it in a managed solution does not lock it.

Also, I’d like to thank Laura GB for her inspiration on this topic – with my limited Power BI knowledge, I usually turn to her for advice & help with Power BI.

Have you been considering including Power BI components in your solutions, or already been doing so? Have you run into this error/issue before? Drop a note below – I’d love to hear how you managed to work out the issue!

Power Platform Capacity Monitoring

If I look back at customer engagements over the last few years around Power Platform, whether it was a new capability or an existing capability, there was ONE thing that stood out above all. This was the ability to be able to track capacity usage over time, and to be honest, most organisations weren’t really doing very well at it.

For those who are unaware, there are actually three different types of capacity present within Power Platform environments. These are:

  • Data
  • File
  • Log

Each one is used for a specific purpose – broadly speaking, File holds all attachments that are uploaded directly into Dataverse, Log is used for auditing purposes, and Data holds everything else (hence the name)!

Now this data is shown within the Power Platform Admin Centre, under the ‘Resources/Capacity’ section. An example of this is:

There’s also a nice little breakdown of capacity allocation through licenses etc, which essentially shows where the available capacity has come from:

If we drill down a bit further, we can open up a specific environment, and see not only the overall usage per capacity type, but also which tables are consuming the most amount of data:

All of this is well & good so far, for someone wanting to take a look at what is currently happening. But this is a manual action – it is possible to manually export the data, but again, this isn’t automated.

It’s also not possible (at least not at this point in time) to query the underlying records that hold these values. So we’re a little stuck. If an organisation wanted to see historical data usage, and/or predict data trends (such as ‘how much capacity would we need to have in 6 months if we continued our scaling’), there’s no way to do this. At least not automatically – someone would need to store the values down manually, then report on it. A hassle, to say the least.

Now when it comes to looking overall at Power Platform, the Centre of Excellence Starter Toolkit is really quite amazing. The Microsoft PowerCAT team continue to iterate existing functionality within it, as well as bringing in new functionality.

At this point in time, however, it doesn’t have any capacity monitoring in it. Well, it sort of does – we can implement notifications to alert us when capacity reaches a certain value. But this doesn’t solve the challenge as laid out above.

So with this in mind, I set out to create a solution to handle it. I’ve always wanted to create some sort of tool for giving back to the community & helping others, and I saw this as my chance to do so (I’m in awe of the various XrmToolBox tool creators, for the record).

So, I’m releasing a capacity monitoring tool. I’m using GitHub as the host, and the repo can be accessed at https://github.com/thecrmninja/Power-Platform-Capacity-Monitoring (it was a learning experience as well as how to use GitHub as a source repository, as I’ve not done that before!).

Model-Driven App:

Reporting Dashboard:

This is just the first version – I have various ideas about how to iterate on it, and tweak functionality. Each release will include release notes & important information to be aware of (such as security needing to run it). Also importantly, thanks to the amazing Matt Collins-Jones for reviewing some of my work around this.

This tool is aimed at IT/Power Platform admins who are already familiar with the Microsoft CoE toolkit solution, and who have appropriate access to it.

If you find any issues, please raise an appropriate GitHub Issue item, and I’ll look into it. Also, if you have any ideas that you think could be worthwhile, please feel free to suggest them!

Finally, I’d be interested in hearing how you think this could support you or your organisation – feel free to drop a comment below!

Power Platform ALM Changes

As a starter for 10, if you haven’t yet looked into ALM for Power Platform, you should most definitely be doing so! ALM is, of course, Application Lifecycle Management. This is how, in a nutshell, we move solutions between environments.

In the good old days, this was done manually of course (CRM 4.0, I’m looking at you!). Today, though it is of course still possible to export/import solutions manually, it’s not the Microsoft Best Practice method. Doing it manually also means that it’s unlikely that you’ll have appropriate source control for your solutions too, which, let’s face it, isn’t the best.

Want to look at a previous solution version? Hmm – do you still have it saved on your machine or not?

So we should generally know why we’d want to use ALM. But which tooling do we actually use for it? Going back to the on-premise days, there was TFS (or Team Foundation Server, to give its full name). This was a full source control repository, allowing developers to check in/check out code, build solutions, deploy them, etc.

With the move to ‘cloud based systems’, the TFS replacement is Azure DevOps (or ADO, as it’s usually referred to). ADO works in essentially the same way as TFS did (some differences, but they’re not really relevant here), but does so through the cloud.

When it comes to Power Platform solutions, ADO uses the ‘Power Platform Build Tools’ capabilities to hook into Dataverse & pick up solutions. The tools essentially give ADO the ability to connect in to a Power Platform environment, build/export solutions, deploy solutions, etc.

More information on the toolset can be found at Microsoft Power Platform Build Tools for Azure DevOps – Power Platform | Microsoft Docs

Now there are some limitations to the Power Platform Build Tools. In fact, I’d be so bold as to say that currently they’re not in a fully mature state. It’s not possible to do everything that you can manually (well, not with the inbuilt capabilities – there are some ‘hacks’ around that can extend them). At the moment, it’s essentially 1.0.

Well, Microsoft is announcing that they’re now releasing 2.0 of the Power Platform Build Tools this week!

In fact, this is so new that at the time of writing, there’s no Microsoft Docs available for this! So what does version 2.0 bring, and why is Microsoft releasing a new version?

So Microsoft has actually had this in planning for a while. There’s a lot going on with GitHub, as we well know, and Microsoft wants to drive the consistency of the experience for users forwards. At the moment, they work in somewhat different ways, and the aim is to bring this to parity.

The main change that the new version has is that instead of tasks being PowerShell based (which they are currently), now the tasks will be Power Platform CLI based. So Microsoft is changing the underlying working method from PS to CLI. Some of us will, of course, already be familiar with the way that the CLI works, and it’s really nice to see that the capabilities will now be part of ADO.

Now don’t start worrying that your current ADO pipelines (v0) will suddenly stop working. Microsoft is not doing anything with v0 at this point in time (though they may potentially deprecate it in the future). So all of your existing ADO pipelines using the Power Platform Build Tools will continue to work, but no new features are going to be released for it.

In terms of switching to using v2, it’s really quite simple – you’ll need to change the task version type as so:

If you are currently using YAML (as so many wonderful developers do) to author pipelines, you’ll need to do the following in the YAML code:
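As a hedged illustration (the service connection and solution names below are placeholders, and the input names are taken from the Build Tools documentation, so do check them against your setup), the only real change is bumping the @ version on each task:

```yaml
steps:
  # Previously PowerPlatformToolInstaller@0 (PowerShell-based) - now @2 (CLI-based)
  - task: PowerPlatformToolInstaller@2

  # Previously PowerPlatformExportSolution@0 - bump every task to @2, don't mix versions
  - task: PowerPlatformExportSolution@2
    inputs:
      authenticationType: 'PowerPlatformSPN'
      PowerPlatformSPN: 'MyServiceConnection'      # placeholder service connection
      SolutionName: 'MySolution'                   # placeholder solution name
      SolutionOutputFile: '$(Build.ArtifactStagingDirectory)/MySolution.zip'
```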

It’s very important to note that it’s not possible to mix and match task versions. If you do this, the ADO pipeline will fail, so please don’t try this!

I’m really excited about this, and to see that the CLI capabilities are being brought into play for ADO capabilities. I’ll admit that I’m wondering what else will be released (in the fullness of time), as I’m sure that this is just the start of some great new stuff!

One of the things that I’m REALLY hoping for is the ability to use ADO pipelines to migrate Power Apps Portals (or Power Pages), as currently it’s only possible to do this using the Power Platform CLI or the Configuration Migration Tool. It would be amazing to be able to do these with ADO pipelines as well!
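For anyone who hasn’t used it, this is roughly what the CLI route looks like today – a sketch only, where the website ID, org URLs and folder paths are all placeholders.

```powershell
# Sketch: moving a portal/Power Pages site with the Power Platform CLI
pac auth create --url https://source-org.crm.dynamics.com
pac paportal list                                   # shows the website IDs available to download

pac paportal download --path .\portal-src --webSiteId 00000000-0000-0000-0000-000000000000

pac auth create --url https://target-org.crm.dynamics.com
pac paportal upload --path .\portal-src\my-site     # folder created by the download step
```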

Recognition as Microsoft Partner for Business Application Solutions

It’s been a little while since I’ve previously blogged around developing customer solutions and the Microsoft Specialisations. Since I spoke about it last year (Apps & Microsoft Partner Specialisations) the landscape has moved on a little, and I thought that it would be good to take a look again at it.

Currently in the Business Applications space, there’s a single specialisation. This is the ‘Microsoft Low Code Application Development Advanced Specialisation’, which is covered in detail at the Microsoft page for it (Microsoft Low Code Application Development Advanced Specialization).

In essence, this specialisation is aimed at partners who are developing Power Apps (yes, this is specifically aimed at Power Apps), and has been around for a year or so.

In order for Microsoft to track the qualifying metrics against this specialisation, it’s very important to carry out the PAL (Partner Attach Link) process. The details of how to do this are in my earlier post, which includes some of my thoughts at the time around how a partner should best implement the procedure.
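For completeness, here’s a minimal sketch of registering PAL for a user account with the Az.ManagementPartner module – the Partner ID below is obviously a placeholder, so use your own organisation’s Location MPN ID.

```powershell
# Sketch: link this signed-in user account to your partner organisation for PAL
Install-Module -Name Az.ManagementPartner -Scope CurrentUser
Connect-AzAccount          # sign in with the work account used in the customer tenant

New-AzManagementPartner -PartnerId "1234567"   # placeholder Location MPN ID

# Check (or later change) the association
Get-AzManagementPartner
# Update-AzManagementPartner -PartnerId "7654321"
```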

Since then, my blog post has gained a good amount of traction, and several Microsoft partners have engaged with me directly to understand this better, and to implement the process into their project playbook. I’m really delighted at having been able to help others understand the process, and the reasoning behind it.

Now that’s all good for a partner who is staying in place at a customer. However there are multiple scenarios that can differ from this. Examples of this are:

  1. Multiple partners developing a single application together
  2. One partner handing over the application to a second partner for further development
  3. One partner implementing a solution, with a second partner providing support

Now, there’s really a single answer to all of the above scenarios, but it’s a matter of how to go about implementing this properly. Let me explain.

Originally, all developers would register PAL, and this would then be tracked through the environment cadence, and associated appropriately to the partner. This would be from the developers having been the creators of the apps.

This has now changed a little bit. Microsoft now recognises PAL through both the Owner of the app, as well as any Co-Owners of the app. This is a little more subtle, so let’s explain this in some detail.

It is possible, of course, to change the owner of an app. More common, however, is the practice of adding co-owners to an app (I always recommend this as best practice actually, to remove key-person responsibility risks).

Note: Changing the actual owner of an app requires the usage of a PowerShell command
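A hedged example of what that looks like with the admin PowerShell module (all of the GUIDs below are placeholders):

```powershell
# Sketch: reassign the owner of a canvas app
Add-PowerAppsAccount

$params = @{
    AppName         = "11111111-1111-1111-1111-111111111111"   # the app's ID
    EnvironmentName = "22222222-2222-2222-2222-222222222222"   # the environment's ID
    AppOwner        = "33333333-3333-3333-3333-333333333333"   # new owner's Entra ID object ID
}
Set-AdminPowerAppOwner @params
```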

So what happens now is that Microsoft will track the owners/co-owners of any app that’s deployed, and PAL association will flow through this. But there are a couple of caveats which it’s important to be very aware of!

  1. All owners/co-owners must have registered PAL with their user accounts (if using a service principal/service account as an owner, there’s a way of doing this using PowerShell)
  2. Microsoft will recognise the LATEST owner/co-owner association with the app as the partner organisation that will receive PAL recognition

Now if a customer adds co-owners to an app, this shouldn’t be an issue (as none of the users would have registered PAL). But if there are multiple partners in place, ONLY THE LATEST ONE WILL BE RECOGNISED.

Therefore to take the three scenarios above, let’s see how this would apply.

  1. Multiple partners developing a single app. Recognition would not work for all partners involved, just the latest one to associate with the app
  2. Partner 1 handing over app to Partner 2. Recognition would stop for Partner 1, and would then start for Partner 2
  3. Partner 1 implementing solution, Partner 2 providing support. Care would need to be taken that the appropriate partner is associated as owner/co-owner to the app, for PAL recognition.

It’s also important for both partners & customers to understand this, in the wider context of being careful about app ownership, and the recognition that it brings from Microsoft for partners delivering solutions. If a partner would go into a customer, and suddenly start taking ownership of apps that it’s not involved in, I don’t think that Microsoft would be very approving of it.

Now, all of the above is in relation to Power Apps specifically, as I’ve noted. However, the PAL article was updated last week (located at Link a partner ID to your Power Platform and Dynamics Customer Insights accounts with your Azure credentials | Microsoft Docs) and also interestingly talks about:

Note the differences between each item

Reading between the lines here, I think that we’re going to be seeing more advanced specialisations coming out at some point. Either that, or else partner status will be including these as well, as I can’t think of any other reason why PAL would need to be tracked for these as well! I’m also wondering if other capabilities (eg Power Virtual Agents, Power Pages, etc) will be added at some point as well…

Have you had any challenges with the PAL process? Is there anything more you’d like to find out about it? Drop a comment below, and I’ll do my best to respond!

New Platform DLP Capabilities

DLP (or Data Loss Prevention) is a very important capability in the Power Platform. Being able to bring together multiple data sources, both within the Microsoft technology stack as well as from other providers, gives users amazing capabilities.

However, with such great capabilities comes great responsibility. Of course, we trust users to be able to make proper judgements as to how different data sources can be used together. But certain industries require proper auditing around this, and so being able to specify DLP policies is extremely important to any governance team.

Being able to set how data connectors can be used together (or, in the reverse, not used together) across both Power Apps as well as Power Automate flows is imperative in any modern organisation.

To date, Power Platform DLP capabilities have allowed us to categorise connectors (whether Microsoft-provided or custom) into three categories. These categories specify how the connectors are able to function – they’re able to work with other connectors that are in the same category group, but cannot work with connectors that are in a different category group.

So for example, it’s been possible to allow a user to create a Power App or a Power Automate flow that interacts with data from Dataverse, but cannot interact with Twitter (in the same app or flow).

With this approach, it’s possible to create multiple DLP policies, and ‘layer’ them as needed (much like baking a 7 layer cake!) to give the functionality required per environment (or also at the tenant level).
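As a rough, hedged sketch, DLP policies can also be created and layered via PowerShell. The policy display name and connector are placeholders, and the cmdlets shown are from the older admin cmdlet set – newer tenants may prefer the Get-DlpPolicy/New-DlpPolicy family instead, so treat the names below as an assumption to verify.

```powershell
# Sketch: list existing DLP policies, then create one and classify a connector
Add-PowerAppsAccount

# What's already in place across the tenant?
Get-AdminDlpPolicy

# Create a policy and move the Dataverse connector into the Business data group
$policy = New-AdminDlpPolicy -DisplayName "Finance environments policy"
Add-ConnectorToBusinessDataGroup -PolicyName $policy.PolicyName -ConnectorName "shared_commondataserviceforapps"
```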

Now this has been great, but what has been missing has been the ability to be more granular in this approach. What if we need to be able to read data from Twitter, but not push data out to it?

Well, Microsoft has now iterated on the DLP functionality available! It’s important to note that this is per connector, and will depend on the capabilities of the connector. What we’re now able to do is to control the specific actions that are contained within a connector, and either allow or not allow them to be able to be utilised.

Let’s take the Twitter connector as an example:

We’re able to see all of the actions that the connector is capable of (the scroll bar on the side is a nice touch for connectors that have too many actions to fit on a single screen!). We’re then able to toggle each one to either allow or disallow it.

What’s also really nice are the options for new connector capabilities.

This follows in the footsteps of handling connectors overall – we’re able to specify which grouping they should come under (ie Business, Non-Business, or Blocked). As new connectors are released by Microsoft, we don’t need to worry that users will automatically get access to them.

So too with new actions being released for existing connectors (that we’ve already classified). We’re able to set whether we want them to be automatically allowed, or automatically blocked. This means that we don’t need to be worried that suddenly a new connector action will be available for users to use, that they perhaps should not be using.

From my perspective, I think that any organisation that’s blocking one or more action capabilities for a connector will want this to be blocked by default, just to ensure that everything remains secure until they confirm whether the action should be allowed or not.

So I’m really pleased about this. The question did cross my mind as to whether it would be nice to be able to specify this on a per environment basis when creating a tenant-level policy, but I guess that this would be handled by creating multiple policies. The only issue I could see around this would be the number of policies that could need to be handled, and ensuring that they’re named properly!

Have you ever wanted these capabilities? How have you managed until now, and how do you think you’ll roll this out going forward? Drop a comment below – I’d love to hear!

Environment types, capabilities & backups

Interesting title to start a blog post with, right? I can’t tell you how much I tried to work out what to call this, but then I figured that I’d just put at a high level what I’m going to be talking about!

So let’s start at the beginning. Environments within Dataverse. An environment is essentially a container for all sorts of different components, such as data models, apps, code, etc.

Examples of what an environment can contain

Within the Power Platform, there are different types of environments. As a quick recap, currently we have the following:

  • Default. Every Power Platform tenant has a default environment. We of course shouldn’t be using this for any proper development!
  • Production. Used for any Line of Business application
  • Sandbox. A sandbox environment is any non-production Dataverse environment. Isolated from production, a sandbox environment is the place to safely develop and test application changes with low risk.
  • Trial. Used to take out a trial
  • Trial (Subscription Based). Used to take out a trial when there’s subscription licensing in place
  • Developer. Personal environment, limited to one user. Previously called the Community plan.
  • Teams. Used when an app is created within Teams, to use a Dataverse for Teams environment. Doesn’t have full Dataverse capabilities, and has various limitations
  • Support. Only able to be created by Microsoft support during a support case. Is essentially a clone of an existing environment, used for diagnosis purposes.

Now, sandbox & production environments are automatically backed up – backups occur continuously, using Azure SQL Databases underneath. It’s also possible to create a manual backup instance of an environment as well, which usually takes a few seconds to carry out (restoring a backup, on the other hand, takes quite a bit longer…).

When restoring an environment, it’s not possible to restore to a production environment (though the backup could be from a production environment). It’s only possible to restore the backup to a sandbox environment – you’d then need to promote the environment from sandbox to production.

Let’s move away from backups for a moment. When we create an environment, we have the ability to select that the environment should be enabled for Dynamics 365:

This is actually a REALLY IMPORTANT CONSIDERATION! At this point in time, it’s not possible to update from a Power Platform Dataverse environment to then bring in Dynamics 365 capabilities. What this means is that if an organisation starts with just Power Apps, and then wants to expand into using Dynamics 365, IT’S NOT POSSIBLE TO DO THIS NATIVELY. Even Microsoft Support can’t do anything around this – you’d need to create a new environment, enable it for Dynamics 365, and then restore a backup to it.

It’s something that a lot of us would like to be in place, but we’re not sure if it’ll ever come about. This is a tweet of mine from 2019 that Charles Lamanna responded to (I was SO thrilled that he actually responded to me!!):

https://twitter.com/clamanna/status/1176629306484637696

However, it’s still not in place. As a result, we recommend to all clients that when they deploy a Dataverse environment, they toggle the switch above (Note: A Dynamics 365 license is NOT needed to toggle this). Once this has been toggled (without deploying any of the Dynamics 365 apps), the Dynamics 365 apps and functionality can be installed/deployed at a later point in time.

There are various capabilities, such as the Data Export Service (yes, I know it’s now been deprecated), that relied on having the environment enabled as a Dynamics 365 environment in order to work. We found this out the hard way at a client, and had to do an overnight environment re-build to get the capabilities in place.

But there’s one other thing to consider around the differences between a native Dataverse environment, and an environment which has been enabled for Dynamics 365. This is around backups.

Now, backups are of course very important (thankfully they now occur automatically, as mentioned above – I remember my on-premise days when needing to run these manually!). But there are also some important differences for backup behaviour when it comes to environment types. See, it turns out that environments aren’t actually equal in backup behaviour. This is what actually happens:

  • Sandbox environments (all types) – backups retained for 7 days
  • Dataverse production environment (not enabled for Dynamics 365) – backups retained for 7 days
  • Dataverse production environment (enabled for Dynamics 365) – backups retained for 28 days

See that? Having Dynamics 365 enabled for an environment gives you FOUR TIMES as much backup retention time! That’s incredible!

Dataverse Environment enabled for Dynamics 365 – 28 days of backups available!

So not only are you able to then upgrade to Dynamics 365 applications at a later date, you then also have more peace of mind (hopefully you don’t need to use it though!) around keeping backups for longer.

This is really cool – I hope it helps you plan your environment implementation strategy! Have you ever come up against issues when using environments, or the type/s of environment? Drop a comment below – I’d love to hear!