Error in Customer Insights – Data

Not a long blog post, but something that may come in handy for some people!

I was recently playing around with Customer Insights – both the Data & Journeys sides of things – for a Proof of Concept I was creating for a customer. It’s definitely interesting to see how Microsoft have been evolving the product over the last year or so (which was the last time I played around with it).

One of the components that we were particularly interested in playing around with is the ‘Discovery’ part of Customer Insights – Data. As shown in the screenshot below, this is where you’re able to use natural language to query your data and get results back using AI. This means that you don’t have to understand any specific query language (SQL, R, M etc), but rather can just ‘converse’ with it as you would with another person.

You’ll perhaps note that there’s NO mention of Copilot here, though Microsoft may at some point decide to call this Copilot functionality as well?

The team had loaded in the data – we had a fair number of rows (multiple millions of them!) – and gone through the unification process, enrichment process, etc etc. All of this was set up & working properly.

However, when I tried to go to the ‘Discovery’ tab in my own browser, I was getting an extremely strange error:

As you can see, it’s incredibly informative…NOT!!! I mean, what does ‘Value cannot be null. (Parameter ‘key’)’ actually mean to the average person?

At first, I thought it was something to do with the underlying data, so went back to check that. However, the data seemed fine. Furthermore, other people on the team were able to access Discovery in their own browsers without any issues.

Having no other option (turning it off & on again didn’t work), I raised a support ticket with Microsoft. This was responded to in a timely fashion, and I found myself working with Rohan, the Microsoft Support Representative.

In my initial ticket submission, I had included details of what was going on, what I had clicked on, the fact that others in the team didn’t have the problem, the Organisation ID, URLs – you name it!

Rohan jumped on a call with me, and it turned out to be the shortest support session I have EVER had. He asked me to change the system language to another language (I had been using English, so decided to change it to German). Once the language change had been applied, we navigated back to the ‘Discovery’ tab (all in German, I may add), and when the screen loaded, there was no error! Holding our breath, we then changed the language back to English (more complicated than I had imagined, given that I was navigating in a different language).

Once this was done, and everything was back in familiar English, the ‘Discovery’ tab then loaded without issues (again!), and I was able to go ahead and start running queries in it using natural language. It was great!

In fact, it’s actually taken me longer to type out the above than the length of the support call – it was indeed that quick! Obviously lots of praise to Rohan (he did mention he had seen this once before, which is why he knew how to fix the issue).

The bigger question in my mind is what exactly was happening/going wrong underneath – I have asked Microsoft about this, but haven’t gotten a response. My guess is that something in the user/language settings hadn’t been populated properly, and this resulted in that error message. Updating/changing the language forced it to populate properly, and it then worked.

Have you ever seen something like this, where changing a system setting (such as language) helped resolve an issue? I’d love to hear more about it –

MB-280: Microsoft Dynamics 365 Customer Experience Analyst

It’s been a while since I last took a Microsoft certification exam, but with the new MB-280 exam being launched in the last few days, I’ve obviously needed to take a look at it! It felt a little strange, as I’m now used to the certification renewal process (which is why I haven’t taken any exams in a while), but thankfully things went alright with the overall exam.

For those who haven’t been following the news, Microsoft made an announcement a few months back that some exams would be retiring, and the new MB-280 exam would be the replacement for them. In short, this is supposed to replace the MB-210 (Sales), MB-220 (Customer Insights – Journeys) & MB-260 (Customer Insights – Data) exams. Malin Martnes wrote a good blog post in June – I’d suggest taking a look at it for more general information.

Now, I’m all for new certifications being created & made available. However, and I know this could be considered controversial, I have ABSOLUTELY NO IDEA as to why this exam was created in THIS specific way. If an exam had been created, for example, to bring together the two sides of Customer Insights (ie to cover both Data & Journeys in a single exam), I think that would have been quite good.

But having taken this, my thoughts (& feedback to Microsoft directly) are that they should un-deprecate (if that’s a word/phrase?) the MB-210 exam, and continue it forward. There’s no reason that I can see for having Marketing & Sales together in a single exam – it feels like two (or technically 3?) Lego bricks lumped together without any rhyme or reason.

The learning path for the exam was also launched in the last few days, and can be found at Study guide for Exam MB-280: Microsoft Dynamics 365 Customer Experience Analyst | Microsoft Learn

The official description of the exam is:

As a candidate for this exam, you’re a Microsoft Dynamics 365 customer experience analyst who has:

  • Participated in or plans to participate in Dynamics 365 Sales implementations.
  • An understanding of an organization’s sales process.
  • An understanding of the seller’s perspective (user experience).
  • The ability to demonstrate Dynamics 365 Customer Insights – Data and Customer Insights – Journeys capabilities.

You’re responsible for configuring, customizing, and expanding the functionality of Dynamics 365 Sales to create business solutions that support, automate, and accelerate the company’s sales process. You use your knowledge of customer experience capabilities in Dynamics 365 Sales and Microsoft Power Platform to inform the following design and implementation tasks:

  • Configure Dynamics 365 Sales standard and premium features.
  • Implement collaboration features.
  • Configure the security model.
  • Perform Dynamics 365 Sales customizations.
  • Extend Dynamics 365 Sales with Microsoft Power Platform.
  • Deploy the Dynamics 365 App for Outlook.

As a candidate, you need:

  • An understanding of the Dataverse security model and features, including business units, security roles, and row ownership and sharing.
  • Experience configuring model-driven apps in Microsoft Power Apps.
  • An understanding of accounts, contacts, and activities.
  • An understanding of leads and opportunities.
  • An understanding of the components of model-driven apps, including forms, views, charts, and dashboards.
  • An understanding of model-driven app personal settings.
  • Experience working with Dataverse solutions.
  • An understanding of Dataverse, including tables, columns, and relationships.
  • Familiarity with Power Automate cloud flow concepts, such as connectors, triggers, and actions.

More can be found at the exam page itself, which is located at Exam MB-280: Microsoft Dynamics 365 Customer Experience Analyst (beta) – Certifications | Microsoft Learn

Now, during my exam I was looking forward to seeing the ‘new’ capability of being able to use Microsoft Learn during the exam (new to me – I haven’t taken any other exams in the year or so since it was announced!). However, there didn’t seem to be any way to launch Microsoft Learn – I’m not sure why it wasn’t available, as this isn’t a Fundamentals-level exam.

Questions also used the older terms of reference rather than the newer/accepted terms – ie using ‘field’ instead of ‘column’, and ‘entity’ instead of ‘table’. Again, I have no idea why this is – all other exams (including the renewals for them) use these properly (in my summary below I have ensured I use the correct terms).

So, as I’ve posted before around my exam experiences, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Sales Apps
    • Configuring forms, columns & tables
    • Configuring security roles & access to records
    • Configuring relationships between records (including deletion properties)
    • Sales Mobile App – security & deployment
    • Forecasting – setting up & configuring
    • Configuring Goals
    • Configuring Opportunities
    • Handling currencies
  • Copilot for Sales
    • Setting up & deploying to users
    • Configuring access
  • Outlook App
    • Deploying & setting up
    • Configuring forms & information
  • Exchange
    • Connecting to mailboxes
    • Configuring folder permissions
    • Configuring multiple domains
  • Product Families & Catalogue
    • Creating & setting up
    • Configuring options
    • Adding items to be used
  • Price Lists
    • Creating & setting up
    • Configuring options, including discounts
    • Using time-restricted price lists
    • Handling currencies
  • Document Management
    • Different document management capabilities
    • Usage of SharePoint in different ways
  • Data Import
    • Usage of Power Query
    • Data manipulation
    • Handling duplicate records
  • SMS
    • Setting up & configuring SMS provider
  • Journeys
    • Different triggers to use based on scenarios & requirements
    • How to trigger journeys
    • How to set up emails to be used within a journey
  • Segments
    • Different types of segments
    • Creating & modifying segments
  • Searching/Filtering
    • Using Advanced Find
    • Setting up/modifying queries to include/exclude records based on conditions
  • Business Process Flows
    • Modifying business process flows
    • Handling conditions within business process flows

As a Sales exam, it seemed alright. But as mentioned above, the Customer Insights questions just seemed strange to me – I’d expect a consultant to be very technically skilled in Customer Insights, but not in Sales (& vice versa), so I don’t understand bringing these two sides together.

I’m going to be quite interested in seeing how the exam is actually launched (as it’s currently in Beta of course). Having chatted with a few others who have taken the exam (whilst obviously respecting the NDA!), they also can’t really understand the landscape. Personally, I think that if it continues like this, Microsoft is going to hear quite a few complaints around it.

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it! I’d also be interested in your thoughts/opinions around the direction that Microsoft has taken for this!

Environment Grouping

One of the main ‘complaints’ that Power Platform administrators have is around how policies are applied to environments. Within Azure, it’s possible to set up security policies and apply them in bulk, or group components together under a single set of policies. However, when it comes to Power Platform, this has not been possible – each environment has needed to be configured on its own.

I’m not talking here about DLP policies, as these are set up and then relevant environments selected/deselected as needed. I’m talking about things like setting Canvas App sharing limits, welcoming new makers, and other items.

Well, Microsoft has now made this possible to do – though the current first iteration (now in Public Preview) only has a few options within it, I’m quite certain that many more items will be coming down the line to fall under the new Environment Grouping feature.

At the moment, there are 6 options available for Power Platform administrators to set and configure. Note that you do need either the Microsoft 365 Global Administrator or Power Platform Administrator role to be able to access and carry this out.

To be clear, Environment Grouping is a feature of Managed Environments. I’m not going to go into the debate about whether you should or shouldn’t adopt Managed Environments (at least not here – I may be speaking about it publicly later on this year), but you do need to have these in order to use this functionality. More specifically, you will ONLY be able to add environments that are set as ‘Managed’ to Environment Groups (though they don’t have to have Dataverse in play):

So, what exactly is the purpose of Environment Grouping? Well, it’s to minimise the amount of time that Power Platform administrators need to spend in setting up & applying policies.

Think of the users within your organisation. You’re going to have different personas, such as developers, testers, end users, etc.

You’re also likely (especially in larger organisations) to have different business units & functions requiring different things. For example, you may lock down access to social media, but Marketing and Recruitment may indeed need access to social media to be able to carry out their jobs.

With these personas in mind, you can then start to look at building out different rule groupings, which will apply to all environments that are included under the Environment Group. It’s somewhat similar to the way in which DLP policies work – you create a DLP policy, and then everything that comes under it gets that policy’s settings.

There are many ways to manage pockets of environments within your tenant using environment groups. For example, global organisations can create an environment group for all environments in each geographic region to ensure compliance with legal and regulatory requirements. You can also organise environment groups by department or other criteria.

One of the other features around Environment Groups is the ability to use Environment Routing. I’ve talked about this previously when the feature was first released (Developer Environment Routing!) – Environment Groups now takes this to the next level, by being able to automatically set the Environment Group that new developer environments will fall under (so therefore policies will be automatically applied). Important to note here that all developer environments created through this WILL be set as ‘Managed’.

More information on the new capabilities can of course be found on Microsoft Learn, at https://learn.microsoft.com/en-us/power-platform/admin/environment-groups.

I think that this is a great new feature to have in place for Power Platform administrators, and look forward to seeing new functionality rolled out within this to enable organisations in a better way. Being able to cut down on administration/governance time, whilst being able to be more effective is, in my view, a win-win for ALL of us, and I can’t wait to see how it will develop over time.

So, my question to you is how would YOU look to use such functionality? What features might you like to appear within Environment Grouping to enable you and your organisation? Drop a comment below – I’d love to hear!

AI, Microsoft Copilot, and Copyright

Firstly, a Happy New Year to everyone – I’m sure that you have amazing plans for 2024, and wish you the best of luck with them!

Over the last few months, I’ve been keeping a close eye on Copilot (since it was announced as being in GA), and various happenings around the wider AI scene. It seems almost impossible to find someone who is NOT aware of the OpenAI happenings towards the end of 2023, and so many more conversations now include mention of ChatGPT, Azure OpenAI, etc.

But there’s one item I’d like to pick up on & discuss, which, given the news events of last week, is extremely pertinent. This is the topic of copyright, which is a very important topic to understand. However, in order to understand it properly, we need to understand how AI offerings are generated in the first instance.

All of the various AI offerings rely on LLMs. This stands for ‘Large Language Models’, and these are deep learning models that are pre-trained on vast amounts of data – and by vast, the number of data points is truly staggering:

Incidentally, users should consider the use case that they’re using AI for, and look to use the optimum model for it. As an example – if wanting to find out the best routine for making a cup of coffee, it is unlikely that a GPT-4 model would be needed; a suitable model could be one with fewer parameters. This is in part due to the amount of resources needed to run queries on more advanced models.

When using an AI offering, these datasets are used to present answers back to the users. Some AI models are not limited to just their LLM datasets, and are able to actively trawl & access websites as well for more up-to-date information.

But there are two inherent problems with the way that this can work:

Data

When users interact with chatbots or other AI capabilities, they’re inputting data into them. This data could be used by the AI capability to further train models going forward, ingesting the data provided to them. Given that this data could be sensitive or proprietary, this can be problematic. Not all AI organisations use the data just for processing, as various users have discovered.

Copyright

Given that responses back to users (whether in text format, image format, or other formats) are based on the datasets from the underlying LLMs, users could potentially be provided with information that is actually copyrighted, and which they’re not able to use. This is very problematic, as it can result in users passing off material as their own, whilst it actually belongs to someone else.

Microsoft’s approach

Microsoft’s approach to AI capabilities has been made extremely clear. You may be familiar with seeing slides similar to the following slide:

What this means is the following:

  • Microsoft will not use any customer data to train AI models & capabilities for any standard AI offering
  • Any custom Copilots created by Microsoft customers will remain their own – Microsoft will not use data or capabilities from customer created collateral within the Microsoft AI offerings. This means that a bespoke Copilot will only offer its functionality to the customer that created it – other organisations, even within the same sector & creating Copilots themselves, will not benefit from this

Microsoft has also confirmed publicly that any information generated through the usage of Copilot or Azure OpenAI can be used without concern about copyright claims (Microsoft announces new Copilot Copyright Commitment for customers – Microsoft On the Issues). In fact, Microsoft has even gone so far as to say that Microsoft will assume responsibility for any potential legal risks involved.

If a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters,

Brad Smith, Microsoft Chief Legal Officer

This is quite important – customers are able to use Copilot & Azure OpenAI capabilities, and be assured that they will not have to be concerned about copyright issues or challenges. There are of course some conditions around this, in the way that prompts & interactions need to be handled (see Customer Copyright Commitment Required Mitigations | Microsoft Learn for further information on this).

Microsoft has this called out specifically within their Universal License Terms, available to view in full at Microsoft Product Terms.

Recent news events

With the announcement last week that the New York Times has filed suit against the OpenAI Corporation & Microsoft, this is very timely to look at (New York Times Sues OpenAI and Microsoft Over Use of Copyrighted Work – The New York Times (nytimes.com)).

The implications of such a lawsuit will affect how AI capabilities will be able to be created & used on an on-going basis. Copyright is of course very important to respect, and it will be quite interesting to see how this plays out. Having taken a look at some of the material included in the lawsuit, there are most definitely similarities between the New York Times information, and the AI generated output.

So, in my opinion, this is going to be a very interesting space moving forward, and I look forward to seeing how it goes, and any effects that it has on the usage of AI within organisations.

Developer environments – new capabilities to create for users

Developer environments are awesome. There – I’ve said it for the record. Formerly known as the ‘Community Plan’, developer environments are there for users to be able to play with things, get up to speed, test out new functionality, etc. They’re free to use – even with premium capabilities & connectors, users do not need premium licensing in place (caveat – if it’s enabled as a Managed Environment, it will require premium licensing).

Originally, users were only able to create a single developer environment. However, earlier this year Microsoft lifted this restriction – users are now able to create up to THREE developer environments for their own usage (which makes it even easier now for users to get used to ALM capabilities, and try things out for themselves).

Now, the ability for users to create developer environments is controlled at the tenant level, and it’s either On or Off. It requires a global tenant admin to modify this setting, but it’s not possible to say ‘User Group A will not be able to create developer environments for themselves, but User Group B will be able to’.

Organisations have differing viewpoints on whether they should allow their users the ability to create developer environments or not. I know this well, as usually I’m part of conversations with them when they’re debating this.

One of the main challenges when organisations don’t allow users to create their own developer environments has been that, historically, it’s not been possible for someone else to create the environment on their behalf. If we think of ‘traditional IT’, if we’re not able to do something due to locked-down permissions, we can usually ask ‘IT’ to do it for us, and grant us access. This has not been the case with developer environments though – well, not until recently.

Something that I do from time to time is chat with the Microsoft Product Engineering groups, to provide feedback to (try to!) help iterate products forward. One of the conversations I had in the summer was with the team responsible for developer environments. I was able to share experiences & conversations that I had been having with large-scale enterprise organisations, and (very politely!) asked if they could look to open up the ability to do something around this.

Around a month ago or so, the first iteration of this dropped – in the Power Platform Admin Centre interface, it was now possible to specify the user for whom an environment was to be created!

This was an amazing start to things, and definitely would start unblocking Power Platform IT teams to enable their users, in circumstances where their organisations had decided to turn off the ability for users to create their own developer environments.

However, this still required the need to do it manually. Unless looking into an RPA process (which, let’s face it, would be clunky & undesirable), it meant that someone with appropriate privileges would need to go & actually create the environment, and associate it to the user.

However, this has now taken another MASSIVE step forward – I’m delighted to announce that this capability has been implemented in the Power Platform CLI, and is live RIGHT NOW (you’ll need to upgrade to the latest version – it’s present in 1.28.3 onwards).

So, with this in place, it’s now possible to use PowerShell commands to create developer environments on behalf of users, and assign them to those users. Organisations usually already have PowerShell scripts to handle new joiners, and will therefore be able to integrate this capability into these, to automatically set up developer environments for users. Alternatively, existing users could look to raise internal requests, and have them automated through the use of PowerShell (along with appropriate approval processes, of course!).
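To make that a little more concrete, here’s a rough sketch of what this could look like. To be clear, this is illustrative only – the parameter names & exact behaviour should be checked against the Power Platform CLI documentation for your version, and the account/user names below are obviously made up:

```powershell
# Illustrative sketch only – verify against 'pac admin create --help' on your
# version of the Power Platform CLI (the on-behalf-of capability appeared
# around version 1.28.3).

# Authenticate with an account that holds the Power Platform admin role
pac auth create --name AdminProfile

# Create a developer environment on behalf of a specific user (UPN is made up)
pac admin create --name "Dev - Jane Bloggs" --type Developer --user "jane.bloggs@contoso.com"

# Folding this into a new joiner script – loop over a list of UPNs
$newJoiners = @("jane.bloggs@contoso.com", "raj.patel@contoso.com")
foreach ($upn in $newJoiners) {
    pac admin create --name "Dev - $upn" --type Developer --user $upn
}
```

The nice part is that once it sits in a script like this, it can be triggered from whatever joiner/mover/leaver process an organisation already runs, with approvals layered on top as needed.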

So this is really nice to see. However, I think it can still go one step further (at least!), and I am trying to use my network of connections to raise this with the right people.

See, we have the Power Platform for Admins connector within Power Platform already. One of the functions available in this is to be able to create Power Platform environments:

However, if we look at the action (& the advanced settings within this action), there’s no ability to set this:

Interestingly enough, the API version listed by default is actually several years old. By doing some digging around, I can see that there are multiple later API versions, so I’m not sure why it’s using an older one by default:

What would be really amazing is to have these capabilities surfaced directly within Power Platform, using this connector. Then we could look to have everything handled directly within Power Platform. Given that the CoE toolkit already includes an Environment Request feature, I would see this as building on top & enabling it even further. Obviously organisations wouldn’t need the CoE toolkit itself, as they could look to build out something custom to handle this.

What are your thoughts on this – how do you see these features enabling your organisation? If your organisation HAS locked down the ability for users to provision developer environments, are you able to share some insights as to why? I’d love to hear more – drop a comment below!

Developer Environment Routing!

Recently I talked about the wider vision that organisations would be able to use, for helping users get access to the right environments (Default Environment – How to handle? » The CRM Ninja). As part of this, I discussed the Microsoft vision of having environment routing in place, to move users automatically to specific environments.

At the point of writing, there wasn’t anything that I could publicly talk about. However, overnight Microsoft have released functionality around this – what I see as being the first step that this direction is taking. The documentation for this is at https://powerapps.microsoft.com/en-us/blog/default-environment-routing-public-preview/

The functionality released enables users who are new to Power Platform to automatically have a developer environment created for them to access, rather than landing in the Default environment within their tenant. Many organisations struggle with users creating content in the Default environment, when it’s not really (at least not in my opinion) the right place to do this.

Now, when we say ‘new users’, this doesn’t actually mean users newly created in M365 (or Entra ID/AAD). What this means is ‘users who have not accessed anything within Power Platform before’. In the back end, there’s a counter on each user record that keeps track of this, which this functionality is using to determine if users have accessed Power Platform beforehand or not.

What is important to note on this as well is that the Default environment DOES NOT need to be set to Managed for this to work. Microsoft documentation doesn’t make this clear at the moment, but hopefully it’ll be updated soon to clarify this.

Two settings do need to be toggled on within the Power Platform Admin Centre for this to work:

Once these have been set & saved, let’s take a look at how things actually happen. I’ve created a new user for testing purposes:

When signing in, it then briefly shows the general interface that we’re used to for a few seconds:

But, then we get this exciting NEW screen!

And then after a minute or so, we get placed nicely in the new environment:

Looking at the Power Platform Admin Centre, we can see the new environment that’s been created:

To be candid, during my testing things didn’t always work – I had some differing behaviour, or (on one occasion) the interface just hung. I’m going to put this down to being newly released & the product team working through potential issues (remember of course – this is in PREVIEW), and am hoping that they’re resolved very soon.

Also, it’s important to note that the developer environments created through this are MANAGED. Users will be able to create collateral in them, but to run apps etc will need premium licensing in place.

Moving forward, it would be great to have some information displayed to users if something hasn’t worked, as well as notifications to admins (configurable) so that they’re aware as well. Examples of this could include where an organisation has maxed out the number of (free) developer licenses available (yes, I know this sounds strange, but there’s a default limit of 9,999 developer licenses per org).

But I think it’s a great first step forward, and hopefully there will be many different ways that this product will be developed forward. My initial thoughts would include:

  • Creating developer environments for existing Power Platform users who don’t have a personal developer environment
  • Routing existing Power Platform users who have their own Developer environment to it
  • Being able to route to other places as well, including being able to specify which users/groups of users should be routed

It’s an exciting place to be in, and I look forward to seeing more of it!

What are your thoughts around this? Does your organisation allow users to have personal developer environments, or does it lock this down?

Default Environment – How to handle?

As we’re all aware, the default (Power Platform) environment in any Azure tenant is a very ‘interesting’ thing to have. It’s there by default when an Azure tenant is created, all users within the Azure tenant automatically have access to it, we’re not able to restrict users from being in it, etc etc.

Though it’s able to be backed up, it’s not able to be restored over itself, there’s no SLA/support available on it….the list goes on & on…!

Many of us have come up against issues caused by people using the default environment whilst not knowing about challenges involving it, which usually results in pulling out our hair, banging our head against the wall, and other like-minded productive approaches.

However, it is the first place that users new to Power Platform land, and instinctively they’ll start building applications, automations etc within it (though usually without using solutions as a container for the development of items). So to date, there’s not really been anything that can be done around this, apart from monitoring users & chasing them after the fact.

Now, we’re all about enabling our users in the right way, helping educate & support them. Telling them a big NO doesn’t help, and can even be an initial blocker to having people start playing around & building technological solutions.

So how can we go about enabling our users, but also having the appropriate level of governance over the top? Well, there are several steps that I think we can take, which will help us with these. Now, not all of these are yet in place, though they have been talked about publicly. So let’s go take a look at them:

  1. The first step, in my mind, is to start off with enabling the default environment as a managed environment (yes, this can ACTUALLY be done!). Managed environments have many different properties associated with them, but the one of most interest (for this at least) is the requirement to have a premium license in place.

All users within an organisation should by default have an M365 license SKU against them (usually this would be an E3 or E5). Users with these can immediately use the seeded Power Platform capabilities within them to create Power Platform collateral (using standard connector capabilities). However, with the default environment being managed, they will NOT be able to access it!

Note: For the moment, I’m leaving out users who have premium Power Platform licenses – this is deliberate

  2. Environment routing. Recently announced are the environment routing capabilities. These will enable users to be automatically routed to an appropriate environment, based on various conditions that can be set. With this, we could create appropriate business unit ‘sandboxes’, and route users to these. The user experience would be that when logging in, they would automatically go to the right environment, rather than trying to work out which environment they should actually go to. This will save on confusion, and be a good user experience (in my opinion).
  3. Just-In-Time (JIT) Environment Creation. One of the items mentioned by Charles Lamanna at the European Power Platform Conference 2023 in Dublin is a new capability that’s coming in soon (I hope!). From the sound of it, this will give the ability to automatically create a new environment for users who do not already have one.

This sounds really cool. With the recent advent of developer environments (& the ability for all users to have multiples of these), this could work REALLY well with the environment routing capability mentioned above. When a user logs in for the first time, it could look to see if they have a developer environment – if yes, then route them to it. If they didn’t, it could automatically spin up & create a new developer environment, and route them to it.

Now there are some caveats with this approach, leaving aside that some of the functionality isn’t GA yet.

It would mean that organisations would need to be alright with changing the default environment to become a managed environment. Obviously, risk assessments would need to be carried out with this, and non-premium solutions migrated elsewhere.
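For anyone wanting to script that change rather than using the admin centre, here’s a minimal sketch. I’m assuming the Microsoft.PowerApps.Administration.PowerShell module, and the cmdlet & property names are to the best of my recollection – please do verify them against the current documentation (and test somewhere safe!) before pointing anything at your default environment:

```powershell
# Sketch only – cmdlet & property names assumed from the PowerApps admin
# module documentation; verify before running, especially against Default.
Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Scope CurrentUser
Add-PowerAppsAccount

# Environment ID of the environment to flip to Managed (placeholder GUID)
$environmentId = "00000000-0000-0000-0000-000000000000"

# 'Standard' = Managed Environment enabled, 'Basic' = not managed (as I understand it)
$governanceConfiguration = [pscustomobject]@{
    protectionLevel = "Standard"
}

Set-AdminPowerAppEnvironmentGovernanceConfiguration `
    -EnvironmentName $environmentId `
    -UpdatedGovernanceConfiguration $governanceConfiguration
```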

It’s also important to call out that organisations which have a CDS 1.0 implementation (ie before Power Platform became GA etc) will only have the ability to upgrade default to managed. They are not able to downgrade back to an unmanaged default environment, given limitations of the original CDS implementation (I’ve heard some truly HORRIFIC stories around this, so be careful!)

The above, however, is just the start of things. There are many other concepts to keep in mind, such as Landing Zones, Policies, etc. I’m going to be looking to cover these in upcoming posts, so keep an eye out for them!

New Power Platform Security Role Editor

We’ve all been there. Security role wise, that is. It’s the point in any project where we start looking at configuring user security. To do this, we’ve used the Security Role section in the Settings area (once it’s actually loaded, of course):

Ah, the joys of this – dating back to CRM 3.0 (to my recollection – though it possibly might be 4.0). All of these lovely little circles, which fill up more & more as we click on them, whilst trying to work out what each one actually does:

And that’s not to mention the ‘Missing Entities’ tab (did anyone ever figure out what this was supposed to be used for?), or the ‘Custom Entities’ tab, which seemed like a catch-all place. Plus the fact that non-table permissions (eg Export to Excel) were placed on random tabs, which meant we needed to hunt through each tab to find the appropriate item.

Now, many of us spent hours in here (then further hours once we started troubleshooting user issues that were down to security role misconfiguration). The absolute ‘JOYS‘ of the header title row not being scrollable (though it was possible to hover over each permission, and it would tell you what it was). The power of clicking on the line item, and seeing ALL of the little circles fill up – if you haven’t ever done it, you’ll not have experienced the bliss that this could bring!

But all things come to an end (or, as the Wheel of Time series says: ‘The Wheel of Time turns, and Ages come and pass, leaving memories that become legend‘), and now we have a NEW security role experience.

First of all, the UI has changed. It’s cleaner, responsive, gives more information to users upfront… and the headings SCROLL!!!

We’re able to show just the tables that have permissions assigned to them (rather than wading through dozens or hundreds of entries that have no relevance), or show everything:

Oh, and those random non-table privileges that we had to try to find beforehand – these are now grouped very nicely. This is SO much easier to manage!

We can also take permissions that have been set on a specific table, and then copy them to another table (it prompts us to select – and we can select MULTIPLE tables to copy to!):

But best of all is the way that we can now set permissions for items. There are several different ways of doing this.

Firstly, Microsoft has now provided us with the ability to select standard pre-defined options. Using these will set permissions across all categories for the item appropriately:

This is really neat, and is likely to save quite a bit of time overall. However, if we need to tweak security permissions to custom settings, we can do this as well. Instead of clicking on circles, we now have lovely dropdowns to use:

In short, I’m absolutely loving this. The interface is quick to load, intuitive, and works well without fuss.

Given how much time I’ve spent over the years in wrestling with security roles, I think this is going to be a definite timesaver for so many people (though we’ll still need to troubleshoot interesting error messages at times that testing will throw up, and work out how/what we’re needing to tweak for security access to work).

There are still some tweaks that I think Microsoft could make to get this experience even better. My top three suggestions would include:

  • The ability to select multiple lines, and then set a permission across all of them (sort of like bulk editing)
  • Being able to have this area solution aware. When we have various different projects going on, it would be great to have the ability to filter the permissions grid by a solution. This would be a timesaver, rather than having to wade through items that aren’t relevant
  • Export to Excel. Having a report generated to save digitally or print off is amazing for documentation purposes. There are 3rd party tools (thank you XrmToolBox!), but it would be great for this to be built in here

Overall, I’m really quite happy and impressed with it (it’s definitely taken enough time for Microsoft to pay attention to this, and get it out), and hope that it’ll continue to improve!

What have your bugbears with the legacy security editor been over the years, and how are you liking the new experience? Drop a comment below – I’d love to hear!

Justin Wilkinson on The Oops Factor

Finding out from Justin how he got into his love of watches, starting off with on-premises systems, and learning the importance of saying no rather than taking on/helping everyone out.


If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!

Click here to take a look at the other videos that are available to watch.

Kartik Kanakasabesan on The Oops Factor

Talking to Kartik about his love of reading & science fiction (Isaac Asimov, anyone?), as well as touching on development tools, capabilities & limitations. Technology moves on in interesting ways!


If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!

Click here to take a look at the other videos that are available to watch.