Workaround for sharing Canvas Apps

Don’t you find it absolutely frustrating when there’s a canvas app that you want to get access to, or give other users access to, but can’t see it? It’s REALLY annoying, but it’s sort of the way that Microsoft has designed the platform (at least at this point in time).

See, when a user creates a canvas app, only the creator is able to see & launch it. If other users want to get access to it, the creator needs to share it. This can be done by sharing the app directly with another user, or by sharing it with an AAD Security Group (which is sort of best practise).

Now, of course there’s the Microsoft Power Platform Centre of Excellence solution, which includes a very handy app to assign permissions for canvas apps. After all, if a user is on holiday, sick leave, or has left the company, there needs to be some way of assigning permissions for other users to gain access to it. It’s really helpful, but of course needs the CoE solution installed.

Let’s think of another scenario. What about if we have some canvas apps as part of a solution, that’s deployed through (proper) ALM – such as using Azure DevOps with automated pipelines. Best practise for this is to use service principals (ie non-interactive user logins). This is great, but then the canvas app/s will be owned by this user. So without the use of the CoE ‘Set App Permissions’ canvas app, we’re sort of stuck, as we can’t gain access to the app.

Or can we…..?

So this is a scenario that I’ve been dealing with recently, and I’ve found a really cool workaround that doesn’t need the CoE ‘Set App Permissions’ canvas app to be able to handle the situation.

The example below (amusingly, in my opinion) actually uses the Microsoft CoE solution itself, but this works with any canvas apps that are held within a solution (again, this heavily supports using solutions for ALL development items!).

So, this is what the actual installed apps look like in this environment:

As we can see, there are a lot of them! But what happens if I’m logged in as my regular user? What do I see if I go to the list of apps? Well, I’ll see the following:

Now, as we can see, I’m able to see the model-driven app (as these aren’t hidden at all). But I’m not able to see ANY of the canvas apps! So how can I get access to them, or share them with other users?

Well, if I take a look at the solution itself, I can see the following when browsing to the list of apps (I’m really loving the new Solution Explorer layout, I’ll freely admit!):

I can try to play the canvas app (in this case, the ‘Set App Permissions’ app) directly from the solution. But when I try to do this, I’ll get the following error message:

Now, this is of course happening because I’m not the owner of the app, & the app hasn’t been shared with me at all. So really I was expecting this error to happen.

However, if I take a look at the menu options displayed for me, I can see that the ‘Share’ option isn’t greyed out. I wonder what happens if I click it…

Now this is EXCITING! When clicking the ‘Share’ option on the menu, I’m given the regular sharing screen, where I’m able to set app permissions. So it looks like I’m able to do something here. OK – let’s go ahead & try to share the app with my own user:

So I’ve looked up my own user, and then clicked ‘Share’. This is what happens next…

Exciting moment – will this work?

Waiting with bated breath, and then…

It’s worked! The app sharing has been successful with my user.

Note: The example that I’m using here is with my own user account. However it doesn’t need to be – I can select any user account or AAD Security Group, and share accordingly.

Going to my list of apps, I can now see that the app is showing up for me:

Clicking the app to launch it presents me with the permissions dialogue, and having confirmed permissions, then launches it properly:

So this is indeed a way in which it’s possible to share canvas apps with users and/or AAD security groups, even when a user isn’t the owner of the canvas app.

It is important to note that the user carrying this out does need to have one of the following permissions in the environment:

  • System Customiser
  • System Administrator

Without having one of these roles, it’s not going to be possible to carry out the above (mostly because it’s not possible to see solutions & dig down into them).
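As a side note, for those comfortable with a bit of scripting: once you do have one of these roles, it’s also possible to list out the canvas apps sitting in an environment (regardless of who they’ve been shared with) by querying the canvasapp table through the Dataverse Web API. The sketch below is purely illustrative – the environment URL and token are placeholders, and the column names are my assumption, so do verify them against your own environment:

```python
import requests

# Placeholders - swap in your own environment URL and an Azure AD access token
ENV_URL = "https://yourorg.crm11.dynamics.com"   # hypothetical environment
TOKEN = "<access token acquired via MSAL / Azure AD>"

def list_canvas_apps() -> None:
    """List canvas apps held in Dataverse, regardless of who they've been shared with."""
    response = requests.get(
        f"{ENV_URL}/api/data/v9.2/canvasapps",
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
        params={"$select": "name,displayname"},  # column names assumed - check in your org
        timeout=30,
    )
    response.raise_for_status()
    for app in response.json().get("value", []):
        print(f"{app.get('displayname')} ({app.get('name')})")

if __name__ == "__main__":
    list_canvas_apps()
```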

This is a handy little trick that hopefully will help clear up one of the headaches when trying to share canvas apps! Of course it’s possible to use the Microsoft CoE tool to set app permissions, but if a customer doesn’t have it installed, then this would be another way to approach things.

Have you ever had this issue? How did you go about solving it? I’d love to hear – please drop a comment below…

Troubleshooting the ‘Follow’ functionality

On a recent client project, we’ve come up against an interesting situation. Some of the users have the ‘Follow’ functionality available to them, but others don’t seem to have it. This, of course, is quite confusing, so I thought it would be good to write about it, for others who may come up against this.

But first, let’s take a step back. After all, before this had happened I had never heard of the ‘follow’ functionality within the system, and I’m quite sure that many others haven’t either! So what exactly is this all about?

What is ‘Follow’?

We’ve all been there – we have some customers who are ‘priority customers’, and we want to know/see everything that’s happening around them. Obviously we can go into their specific record/s, and see what’s going on. For example, seeing new cases added for these customers, other activities, etc. But what if we don’t want to have to manually open the records each time, or set up specific views in the system for them?

Well, this is where the Follow functionality comes in. It’s possible to track activities (in ‘real-time’) for records that a user follows. Microsoft has given us the ability to set this (or unset this) on a per record basis, so that users can set their own preferences within the system. When a user follows a specific record, the details for that record then show up in the user’s activity feed. This can then be used further, such as displaying it within a dashboard, for example.

Follow functionality through views
Follow functionality on a specific record

It’s also possible to automatically follow records based on specific criteria.
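For a bit of background on what’s actually happening under the bonnet: when a user follows a record, a Post Follow (postfollow) record gets created for them. This also means that follows can be created programmatically – for example, from a script or flow that applies whatever ‘specific criteria’ you have in mind. The sketch below is an assumption-heavy illustration (the table name, bind syntax and column names should be checked against your environment), rather than a definitive implementation:

```python
import requests

# Placeholders - swap in your own environment URL, token and record id
ENV_URL = "https://yourorg.crm11.dynamics.com"
TOKEN = "<access token>"
ACCOUNT_ID = "00000000-0000-0000-0000-000000000000"

def follow_account(account_id: str) -> None:
    """Create a Post Follow record so that the calling user follows the given account."""
    payload = {
        # Polymorphic 'regarding' lookup, bound to the account being followed
        # (the attribute/bind name is assumed - check the postfollow table in your org)
        "regardingobjectid_account@odata.bind": f"/accounts({account_id})"
    }
    response = requests.post(
        f"{ENV_URL}/api/data/v9.2/postfollows",
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()

if __name__ == "__main__":
    follow_account(ACCOUNT_ID)
```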

How to set up Follow functionality

In order for records to be able to have the follow functionality available to them, they need to have the Activity Feed enabled for the specific table. The default system tables such as Accounts, Contacts & Leads already have this enabled, so these records are able to be followed without any additional configuration around them.

To enable other tables (such as custom tables that you may have created) to be able to have the records within them followed, we need to carry out the following steps:

  1. Go to the Advanced Settings menu, and open Activity Feeds Configuration
  2. Find the table that we’re wanting to configure this for (if it’s not showing up, click the ‘Refresh’ button on the menu)

Here we can see that the Channel table isn’t enabled at this point

  3. Click the ‘Activate’ button on the menu bar
  4. Confirm the pop up screen

And voila – you’re done! Users will now be able to go into the table/s, and follow (or unfollow) records there

Troubleshooting

So we now understand what the follow functionality is, and how to enable it. But what happens when users can’t actually see it within the system, to be able to use it?

Well, there are several different things that we can look at to try to solve the issue:

  • Have activity feeds been configured for the table? If they’ve not been configured, then they’ll need to have this set up (this is why I’ve put the steps above as to how to do this!)
  • Are security roles set up correctly?

The second one is what turned out to be the issue for this project. It’s been quite confusing, as originally mentioned, that certain users did see the follow functionality, but other users didn’t.

The first place to check is the ‘follow’ privileges on each security role:

As you can see above, we had given organisation-level access on the security role (& actually across all security roles), though the users were still having issues. So the next step is to check a different security privilege within the security role. This is the ‘Post Configuration’ setting, which is found under the Custom Entities section (why it’s under Custom, I have NO idea):

Without this enabled, users with the security role will NOT be able to see/use the follow functionality within the system!

Hopefully this should then sort out all issues, and users will be able to use the functionality as required.

Have you ever had issues with this feature? Have you found a different solution to fix it? Drop a comment below – I’d love to hear!

Environments & ‘Admin Mode’

With some recent events happening (both professional & personal), I’ve taken a slight step back from putting out posts on here. Thankfully things seem to be settling down, so I’m getting (back) into the swing of things!

I thought that it would be good to talk about a subject that I fell ‘foul’ of recently. This is around environments, and more specifically, the ‘admin mode’ that it’s possible to use on them.

So what exactly is this ‘admin mode’? Well, the aim of it is to restrict access to certain users, namely System Administrators & System Customisers. Why would we want to do this? There are several scenarios that come to mind:

  • Performing a system upgrade (such as enabling new features)
  • Changing environment type (eg Production to Sandbox, or vice-versa)
  • Restoring an environment

Essentially, any time we have operation-type work that we’re wanting to carry out. This way whatever we’re doing won’t affect users, and anything that the users are doing won’t affect things either (symbiotic relationship there!).

So as an example, if we’re doing a major release, which changes functionality within a system, we wouldn’t want users in the system carrying out their usual work, as this could cause data issues if they save during the actual release. We of course SHOULD be communicating to users that a release is going to take place, and that they shouldn’t be in the system at the time, but ‘admin mode’ is how we can truly enforce it.

Something to bear in mind as well is that if you’re going ahead & restoring an environment to a previous state (whether that’s an automatic save point, or a manual one), it will automatically put the environment into ‘admin mode’ once the restore has been completed. This is very important to keep in mind!

There are three settings around administration mode:

  1. ‘Administration Mode’. This sets whether admin mode is on or off!
  2. ‘Background Operations’. This sets whether background processes, such as workflows, Power Automate flows, and Exchange synchronisation are enabled (allowed to happen) or disabled (stopped from happening)
  3. ‘Custom Message’. This allows you to set a custom message that users (who are not system administrator/system customiser) will see when they attempt to access the environment

So this is the scenario that tripped me up a few weeks back:

  • I was needing to restore an environment to an earlier save point (to be clear, this was NOT a production environment)
  • I went ahead with the restore, and it completed successfully
  • Given that I was doing this at night, one of my children woke up, and I had to deal with them
  • I came back to things, saw that it completed, and then went ahead with the release that I was needing to do

All seemed to go well. However, when users were testing (which admittedly was a few days later), they reported that some functionality wasn’t working. This was strange, as it had been working before the release (& the release that I did hadn’t actually touched it!).

It turned out to be Power Automate flows that just didn’t seem to be running. OK – I started to look into them, but couldn’t figure out why they hadn’t run.

Creating a test Power Automate flow didn’t seem to work either – despite running it to test it, the trigger never activated! I was quite puzzled by this, and couldn’t (initially) work out the reason.

Then I thought to check environment settings! Lo & behold, the environment was STILL in administration mode, and the Background Operations option was disabled! Aha – I’ve found the source!

Flipping this out of administration mode thankfully then allowed all Power Automate flows to work/run, and users confirmed that functionality was indeed running as expected. As you can imagine, I was quite relieved!


Something that I hadn’t realised previously is that if you manually put an environment into administration mode, it doesn’t automatically disable background processes. However, if you restore an environment, it DOES disable background processes by default. So if you’re wanting to try out automation items within a restored environment that’s still in administration mode, you’re going to need to ensure that you toggle the Background Operations setting to allow it to work!

One further thing to learn as well (which I’ve been asked already by some people, so thought that I would mention it here). I’ve mentioned above that users were in the system, but reporting that things weren’t working. Now given that the environment was in administration mode, people have asked how users could be in it! The answer is that these users actually had the system customiser role applied to them, which is why they could get in! If they hadn’t had the role, then perhaps I might have realised things a little sooner (ie that the environment was in administration mode).

So a (good) little lesson learned, and I’ll definitely take it forwards. Has this, or anything else like it, ever tripped you up? Drop a comment below – I’d love to hear!

PL-600: Microsoft Power Platform Solution Architect

Well, it’s FINALLY here. And by finally, I guess I’m saying that I’ve been waiting for this for a while? The PL-600 exam is the new ‘Holy Grail’ for Dynamics 365/Power Platform people, being the Solution Architect (3 star) exam. Ten minutes after it went live, I booked to take it, and four hours after it went live I sat it! (I would have taken it sooner, but had to have supper first, get the kids to bed, etc…)

The first solution architect exam that Microsoft has done in this space has been the MB-600 (see my exam experience write-up on it at MB-600 Solution Architect Exam). However with the somewhat recent shift moving towards certifications for the wider Power Platform, it was inevitable that this exam would change as well.

Interestingly enough, the MB-600 now counts towards some of the Microsoft Partner qualifications. I’d expect that when it retires (currently planned for June 2021), the PL-600 will take the place of it in the required certifications to have.

So, how to discuss it? Well, the obvious first start is to link to the official Microsoft page for it, which is at https://docs.microsoft.com/en-us/learn/certifications/power-platform-solution-architect-expert/. According to the specification for it:

Microsoft Power Platform solution architects lead successful implementations and focus on how solutions address the broader business and technical needs of organizations.
A solution architect has functional and technical knowledge of the Power Platform, Dynamics 365 customer engagement apps, related Microsoft cloud solutions, and other third-party technologies. A solution architect applies knowledge and experience throughout an engagement. The solution architect performs proactive and preventative work to increase the value of the customer’s investment and promote organizational health. This role requires the ability to identify opportunities to solve business problems.
Solution architects have experience across functional and technical disciplines of the Power Platform. Solution architects should be able to facilitate design decisions across development, configuration, integration, infrastructure, security, availability, storage, and change management. This role balances a project’s business needs while meeting functional and non-functional requirements.

So not really changed that much from the MB-600, though obviously there’s now an expectation for solutions to bring in other parts of the Power Platform, as well as dip into Azure offerings as well. Pretty much par for the course, in my experience, with how recent projects that I’ve been on have been implemented.

At the time of writing, there are no official Microsoft Learning paths available to use to study. I do expect this to change in the near future, and will update this article when they’re out. However the objectives/sub-objectives are available to view from the main exam page, and I’d highly recommend going ahead & taking a good look at these.

Passing the exam (along with having either the PL-200 Microsoft Power Platform Functional Consultant or PL-400: Microsoft Power Platform Developer Exam qualifications as well) will result in a lovely (new) shiny badge. Oh, we do so love those three stars on it!

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

Overall, I had 47 questions, which is around the usual amount that I’ve experienced in my exams over the last year or so. What was slightly unusual was that instead of two case studies, I got three of them! (Note that your own experience may well vary from mine.)

Some of the naming conventions weren’t updated to the latest methods, which I would have expected. I still had a few references to ‘entities’ and ‘fields’ come up, though for the most part ‘tables’ and ‘columns’ were used. I guess it’s a matter of time to get everything up to speed with it.

  • Environments
    • Region locations, handling scenarios with multiple countries
    • Analytics
    • Data migrations
  • Requirement Gathering
    • Functional
    • Non-functional
  • Data structure
    • Tables
      • Types of tables
        • Standard vs custom functionality
        • Virtual tables. What these are, when they would be used, limitations to them
        • Activity types
      • Table relationships & behaviours
      • Types of columns, what each one is suited for
      • Business rules. What they are, how they can be used
      • Business process flows. What they are, how they can be used
  • App types (differences between them, scenarios each one is best suited for)
    • Model
    • Canvas
    • Portal
  • Model-driven apps
    • Form controls (standard vs custom)
    • Form layout (standard functionality vs custom functionality)
    • Formatting inputs
    • Restricting inputs
  • Automation
    • Power Automate flows. What they are, how they can be used, restrictions with them
    • Azure Logic Apps. What they are, how they can be used, restrictions with them
    • Power Virtual Agents
  • Communication channels
    • Self service abilities through Power Virtual Agent chatbots. How this works, when you’d use them, limitations that exist
    • Live agent abilities through Omnichannel. How this is implemented, how customers can connect to a live agent (directly, as well as through chatbots)
    • Teams. When this can be used, how other platform abilities can be used through it
  • Integration
    • Integration tools
    • Power Platform systems
    • Azure systems
    • Third party systems
    • Reporting across data held in different systems
    • Dynamics 365 API
  • Reporting
    • Power BI. What it is, how it’s used, how it’s configured, limitations with it, how to share information with other users
    • Interactive Dashboards. What these are, how these are set up and used, limitations to them
  • Troubleshooting
    • Canvas app issues
    • Model driven app issues
    • Data migration
  • Security
    • Data Protection. What is it, where it’s set up, how it’s used across different requirements in the platform
    • Types of users (interactive/non-interactive)
    • Azure Active Directory, and the role/s it can play, different types of AAD authentication
    • Power Platform security roles
    • Power Platform security teams, types
    • Portal security
    • Restricting who can view forms
    • Field level security
    • Hierarchy abilities
    • Auditing abilities and controls

Wow. It’s a lot of stuff. Not that I’m surprised by that, as essentially it’s the sort of thing that I was expecting (being familiar with the MB-600). I think that on a ‘day to day’ basis, I cover most of these items already, so didn’t have to do a massive amount of revision for items that I wasn’t familiar with.

From my experience in taking it, I’d say that around 30% of the questions seemed to be focused on Dynamics 365, with 70% being focused on Power Platform capabilities. It’s about what I thought it would be when the exam was first announced. Obviously some people are more Dynamics 365 focused, and others are more Power Platform focused, but the aim of the exam (& qualification) is to really understand the breadth of the offerings available.

I can’t tell you if I’ve passed it or not…YET! Results aren’t going to be out for several months, based on previous experience with Beta exams, but I’ve got a good feeling about this.

So, if you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Solution Dependencies & Management

Solutions are marvellous things. They enable us to be able to package up lots of components, and deploy them to different environments all together as one single package.

However, there have been changes over time as to how solutions are used. I’m not (for the most part) going to go into the Managed VS Unmanaged debate, which I leave to people who are more in the know….

Microsoft Dynamics 365 apps are installed using solutions. Third party apps provided by Independent Software Vendors (ISVs) also use solutions.

In Power Apps, solutions are leveraged to transport apps and components from one environment to another or to apply a set of customisations to existing apps. A solution can contain one or more apps as well as other components such as entities, option sets, etc. You can get a solution from AppSource or from an independent software vendor (ISV).

Custom development should also take place within a solution, to allow it to be deployed appropriately.

But it’s important to take a closer look at how solutions work overall, as we can be involved on multiple projects within the same environment. Not only that, some solutions may require other solutions to be present first, in order to actually work! A great example of this is Master Data Management (or MDM), which is where companies have a ‘backbone’ of data, which other parts of the system then hang off.

To understand this concept better, let’s take a quick look at solution layering.

Solution Layering

Layering occurs on the import of solutions and describes the dependency chain of components from the root solution introducing it, through each solution that extends or changes the components’ behaviours. Layers are created through an extension of an existing component (taking a dependency on it) or creation of a new component or version of a solution.

Managed and unmanaged solutions exist at different levels within a Microsoft Dataverse environment. In Dataverse, there are two distinct layer levels:

  • Unmanaged layer. All imported unmanaged solutions and unmanaged customizations exist at this layer. The unmanaged layer is a single layer.
  • Managed layers. All imported managed solutions and the system solution exist at this level. When multiple managed solutions are installed, the last one installed is above the managed solution installed previously. This means that the second solution installed can customize the one installed before it. When two managed solutions have conflicting definitions, the runtime behaviour is either “Last one wins” or a merge logic is implemented. If you uninstall a managed solution, the managed solution below it takes effect. If you uninstall all managed solutions, the default behaviour defined within the system solution is applied. At the base of the managed layers level is the system layer. The system layer contains the tables and components that are required for the platform to function.

The following diagram introduces how managed and unmanaged solutions interact with the system solution to control application behavior.

  • The system solution represents the solution components defined within Dynamics 365 or the Power Platform. Without any managed solutions or customisations, the system solution defines the default application behaviour. Many of the components in the system solution are customisable and can be used in managed solutions or unmanaged customisations.
  • Managed solutions are installed on top of the system solution and can modify any customisable solution components or add more solution components. Managed solutions can also be layered on top of other managed solutions. As long as a managed solution enables customization of its solution components, other managed solutions can be installed on top of it and modify any customisable solution components that it provides.
  • Unmanaged customisations. All customisable solution components provided by the system solution or any managed solutions can be customized in the unmanaged customisations.
  • Unmanaged solutions are groups of unmanaged customisations. Any unmanaged customized solution component can be associated with any number of unmanaged solutions. These can be edited & modified, regardless of the environment they’ve been deployed to.
  • The ultimate behaviour of an instance of Dynamics 365 or Power Platform application is the culmination of the system solution, any managed solutions, and any unmanaged customisations.

The official stance of Microsoft, according to its Application Lifecycle Management (ALM) documentation, is that unmanaged solutions are used for development, and that managed solutions are released downstream to further environments. For bespoke solutions, however, this may not fit, and an appropriate balance must be found.

Data ‘Backbone’ & Solution Dependencies

Given the way that companies are adopting Power Platform (and Dynamics 365, of course!), it’s highly likely that we will build out system structures that will form the backbone for multiple applications on an on-going basis. With this in mind, it’s appropriate to put proper planning in place, with appropriate system designs, to avoid any issues that could occur in the future.

Solution Dependencies

When creating system structures within an environment, using unmanaged solutions, connecting two (or more) tables together will create dependencies on each other. In simple terms, if we connect Table A to Table B, there’s a reciprocal relationship created back from Table B to Table A:

This happens even if Table A is in Solution 1, and Table B is in Solution 2. If they’re in the same environment (& both solutions are unmanaged), it will create the two-way dependency.

This will cause issues if trying to deploy each solution individually – the import will fail, as the system will require all items to be available in the solution.

Workable scenario

The way in which to handle the issue of solution dependencies is to ensure that the ‘master backbone’ of system design is created in the main development environment, and then to use that in secondary development environments as the core of additional solutions:

This is in line with the recently emerging Microsoft best practise guidance around solution management (which is likely to be moving towards having a single environment per developer, rather than multiple developers working in the same environment).

The steps for doing this are as follows:

  1. Main ‘core solution’ exists (as unmanaged) within the main development environment
  2. When a project requires this to build upon:
    1. Secondary development environment is created
    2. ‘Core solution’ is exported as managed from the main development environment, & imported into the secondary development environment
    3. Project work is carried out within the secondary development environment
    4. Once project solution is complete (or when appropriate for deployment), it can be exported from the secondary development environment
      1. If deploying directly from the secondary development environment to downstream environments, it should be exported as managed
    5. The solution should be exported as unmanaged, and imported back into the main development environment. This will not cause dependencies to be created with the ‘core solution’ in it

Note: The main ‘core solution’ should consist of the items that are needed for core system work. If additional items are needed for multiple projects to work off (eg Account Manager field), this would need to be added to the core solution, rather than the individual project solution/s, as otherwise there could be further issues downstream.
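To make the ‘export as managed, import into the secondary environment’ step a little more concrete, this can also be scripted using the standard ExportSolution and ImportSolution actions on the Dataverse Web API (a real pipeline would more likely use the Power Platform Build Tools, but the principle is the same). The sketch below is illustrative only – the solution name, environment URLs and token handling are all placeholders:

```python
import base64
import uuid
import requests

# Placeholders - environment URLs, tokens and the solution's unique name
DEV_URL = "https://maindev.crm11.dynamics.com"            # main development environment
SECONDARY_URL = "https://projectdev.crm11.dynamics.com"   # secondary development environment
TOKEN_DEV = "<access token for main dev>"
TOKEN_SECONDARY = "<access token for secondary dev>"
SOLUTION_NAME = "CoreSolution"  # assumed unique name of the 'core solution'

def export_managed(env_url: str, token: str, solution_name: str) -> bytes:
    """Export a solution as managed, returning the solution zip contents."""
    response = requests.post(
        f"{env_url}/api/data/v9.2/ExportSolution",
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        json={"SolutionName": solution_name, "Managed": True},
        timeout=300,
    )
    response.raise_for_status()
    return base64.b64decode(response.json()["ExportSolutionFile"])

def import_solution(env_url: str, token: str, solution_zip: bytes) -> None:
    """Import a solution zip into the target environment."""
    response = requests.post(
        f"{env_url}/api/data/v9.2/ImportSolution",
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        json={
            "CustomizationFile": base64.b64encode(solution_zip).decode("ascii"),
            "OverwriteUnmanagedCustomizations": False,
            "PublishWorkflows": True,
            "ImportJobId": str(uuid.uuid4()),
        },
        timeout=600,
    )
    response.raise_for_status()

if __name__ == "__main__":
    core_zip = export_managed(DEV_URL, TOKEN_DEV, SOLUTION_NAME)
    import_solution(SECONDARY_URL, TOKEN_SECONDARY, core_zip)
```

The same two functions can be reused for the later step of bringing the project solution back into the main development environment as unmanaged – just flip Managed to False on the export.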

If the project is completed, but requires further work to be carried out later on (or development support), then the following should be done:

  1. Secondary development environment is created
  2. ‘Core solution’ exported from the main development environment as a managed solution, and imported into the secondary development environment
  3. Project solution exported as unmanaged from the main development environment, and imported into the secondary development environment
  4. Work and/or support can be carried out within the secondary development environment, and released appropriately

I’m expecting further information around this to be released by Microsoft in due course (I’m a little surprised there’s not more out there at the moment, to be honest!). It’s vital that we ensure that we’re working with solutions in the right way, to stop any issues occurring later on down the line.

Have you ever had a problem around this? Drop a comment below – I’d love to hear your experiences!

Managed Solutions, & replacing a field

Well to start with, I’m sure that I’m going to get pulled up by some people for my use of the word ‘field’ in the title. After all, officially it’s now a ‘column’! But I (still) can’t let go of calling them as I’ve done so for over a decade, so field it is.

Now to the actual topic of this blog post, which is centred around Managed Solutions. Leaving aside the whole debate about whether we should be using managed or unmanaged solutions (& when/where to do each), there is one definitive benefit of using a managed solution.

See, unmanaged solutions are additive in nature. Work is done in the development environment, then deployed. Further work is done (additional items added, etc), and deployed, and they then appear in the downstream environments. However, if you delete an item in the development environment, it’s not removed when the solution is deployed downstream.

Managed solutions, on the other hand, are both additive & detractive. As with unmanaged solutions, items added in the development environment are also added downstream when deployed. However, if an item is removed from the solution in the development environment, it will also be removed when the solution is deployed downstream. It’s one of the useful ways to ensure that you don’t end up with random unused items just lying around in Production (which have a habit then of popping up in the Advanced Find window, for example). So it’s really quite handy for a lot of reasons to go down this route.

Well, I found myself going down this route recently, but with slightly unexpected results, I’ll freely admit…

The scenario was that we had deployed a managed solution to the UAT (test) environment on a client project. Then the client changed their mind (shock & horror!!) as to a specific item, and we needed to change it from a text item to a lookup item. Obviously (as per best practise, of course) this would need to be done in the development environment, and then released downstream. Given that this is a managed solution, I’d expect this to work, without any issues. Well, it didn’t…

The change in the development environment (deleted the old item, ‘re-created’ it as a lookup with the same system name) was done, we exported it as managed, and then went to import it in the UAT environment. It took the solution file, thought about it for a while (it’s somewhat of a large solution), & then errored:

Exception type: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault] Message: Attribute mdm_field is a String, but a Lookup type was specified.

Now I was somewhat confused by this message occurring. It’s not been the first time I’ve seen it over the years, but in my previous experience I’ve seen it when handling unmanaged solutions. It’s when you delete an item in the development environment, re-create it as a different item type (with the same underlying system name), and then deploy it as unmanaged. The solution import in the second environment fails due to the difference in type (as it sees the same name). This, of course, is to be expected.

But here we’ve been using managed solutions for deployment, and as mentioned above, they’re detractive as well. The expected behaviour (at least from my side of things) would be that the system would note that the item type has changed, remove the old item, & import the new item. In my mind, that’s logical, but apparently not?

See, even managed solutions have their limitations, of which this is one. Having checked with several other people who I reached out to around this, I’ve discovered that it can’t work in the way that I was expecting it to. Instead, a specific process has to be followed:

  1. In the development environment, remove the item, & export the solution as managed
  2. In the downstream environment(s), deploy this (interim) managed solution. This will remove the item from the environments
  3. In the development environment, re-create the item with the different system type. Then export it as managed
  4. In the downstream environments, deploy this solution. This will then add the item (with the new system type) into the environment.

This means that development & deployment teams (if separate ones) need to co-ordinate around this, to ensure it’s done in the right way. It could also be developed/exported in succession, and then imported in succession as well (either manually, or through an Azure DevOps Pipeline, for example).
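One small thing that can help with this co-ordination is checking what type the column currently is in the target environment, before deciding which of the two solution files needs to go in next. A quick metadata query against the Web API does the job – a rough sketch below, using the mdm_field name from the error message as an example (the environment URL, token and table name are placeholders):

```python
import requests

# Placeholders - target (e.g. UAT) environment URL and token
ENV_URL = "https://uat.crm11.dynamics.com"
TOKEN = "<access token>"

def get_attribute_type(table_logical_name: str, column_logical_name: str) -> str:
    """Return the current AttributeType of a column in the target environment."""
    response = requests.get(
        f"{ENV_URL}/api/data/v9.2/EntityDefinitions(LogicalName='{table_logical_name}')"
        f"/Attributes(LogicalName='{column_logical_name}')",
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
        params={"$select": "AttributeType"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["AttributeType"]

if __name__ == "__main__":
    # Prints e.g. 'String' before the interim solution has gone in, 'Lookup' after the final one
    print(get_attribute_type("incident", "mdm_field"))  # table name is a placeholder
```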

This worked wonderfully for us, and to be honest, I was quite relieved after several hours of frustration with things. Even better, it was a Friday, so meant that the week could end well!

Have you ever come across this, and been frustrated as well? Have you got a similar story with something else that happened to you around solutions? Drop a comment below – I’d love to hear!

Customising Case Resolutions

Well, the title is a bit of a mouthful, I’ll admit. Hopefully though this brings some good information, and can help people out.

Cases are wonderful things, and can be used for tracking client interactions, compliments/complaints, and so many other things. What cases do have is the ability to resolve them, and provide information around the resolution.

Now, the standard way of doing this provides the following screen:

There’s the ability to set the Resolution Type (being a dropdown, aka Choice, field), & putting in free text for the Resolution itself (allowing us to track information around it). There are also time fields, which can be used for working out the time spent, as well as any time that’s going to be chargeable.

Now when going in to modify these, we’d think to open up the Case Resolution table. However, this isn’t actually the right place to do it. Instead, we’re needing to update the Case table itself, as the Case Resolution items come from the Case Status field!

Somewhat annoyingly, it’s not possible to do this through the new ‘Maker’ interface:

In order to actually handle this, we need to switch across to the Classic editor to set this up. This could be because it’s actually a situation of having both parent & child entries. What I mean by this is that there’s the actual status (being Active, Resolved or Cancelled), and then a reason under each one. Hopefully at some point it’ll be updated into the new UI, so that we can do it from there.

We’ll need to change the Status item to ‘Resolved’, & can then add in the options that we want:

After adding them, we need to save & publish, and then they’ll show up for us, and are able to be selected:

So that’s great – we’re able to customise it. But what if we’re wanting to customise the actual ‘Resolve Case’ form itself? Not everyone wants to show Time/Billable Time on it (quite a few of our clients ask us to remove it), and perhaps they want to add additional custom fields.

So from the usual perspective of doing this, we’d open up the Case Resolution table, create new fields as required, and modify the existing form (we’re not able to create any other forms for this specific table). After all, this is how we’d do it for any table in the system (whether a standard one, or a custom one). This is going to be the Main form, rather than the QuickCreate one:

We save & publish it, and then would open up a Case record, click ‘Resolve Case’, and expect to see it. However, that doesn’t happen, which has been most puzzling to me!

It turns out that there are two things needed to be done in order to get to see our ‘custom’ form (though it’s not really custom, as it’s modifying the default form, but whatever).

  1. We need to modify security permissions for users, which is a critical requirement. An example of this is shown below:
Security Role: Customer Service Representative

  2. We need to enable customisable dialogues. Yes, it’s a setting that needs to be updated in order for users to see the custom layout of the form. If we don’t do this, they’re shown the default form, even though we’ve modified it! Seems a little strange that the system seems to have this concept of a ‘shadow’ form, but I guess that’s how it is.

To do this, we need to go into the Service Management settings area. I usually launch this through the Customer Service Hub app, though it’s available through several of the other standard apps as well:

Once there, we need to click into the Service Configuration menu item, and then change the ‘Resolve Case Dialogue’ option as shown below:

Remember to click the ‘Save’ button to save this.

Finally we can go back to our Case record, click ‘Resolve Case’, and look what appears!

So in summary, it’s definitely possible to modify & change the way that Case resolutions works in the system. It does take a little bit of fiddling around with settings in different areas, which can be confusing if we’re not used to this, but can give a great result in the end.
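As a follow-on thought: if you ever need to resolve cases programmatically (from an integration or a script, say) rather than through the dialogue, the same custom columns on the Case Resolution table can be populated through the standard CloseIncident action. The sketch below is illustrative only – the new_rootcause column is hypothetical, and the Status value of 5 is the default ‘Problem Solved’ reason, which may well differ if you’ve customised the status reasons as described above:

```python
import requests

# Placeholders - environment URL, token and the Case record id
ENV_URL = "https://yourorg.crm11.dynamics.com"
TOKEN = "<access token>"
CASE_ID = "00000000-0000-0000-0000-000000000000"

def resolve_case(case_id: str) -> None:
    """Resolve a Case via the CloseIncident action, populating a custom resolution column."""
    payload = {
        "IncidentResolution": {
            "subject": "Resolved via script",
            "incidentid@odata.bind": f"/incidents({case_id})",
            "timespent": 30,                  # billable time, in minutes
            "new_rootcause": "Config issue",  # hypothetical custom column on Case Resolution
        },
        "Status": 5,  # status reason of the resolved Case - 5 is the default 'Problem Solved'
    }
    response = requests.post(
        f"{ENV_URL}/api/data/v9.2/CloseIncident",
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()

if __name__ == "__main__":
    resolve_case(CASE_ID)
```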

Have you ever come across this, and wondered how to do it? Have you developed Case Resolutions any further? Drop a comment below – I’d love to hear!

Record security with Power Automate

Today’s post is around record security, and how Power Automate can really be quite useful with this!

Let’s take a quick recap of how security works (which is applicable to both Dynamics 365, as well as Power Platform apps). We have the following:

  • Security roles, which are set up with specific privileges (Create/Read/Update/Delete etc) across each entity table, as well as for other system permissions
  • Users, who can have one (or more) security roles applied to them (security roles being additive in nature)
  • Teams, who can have one (or more) security roles applied to them. Users are added into the team, and inherit all permissions that the team has (much easier than applying multiple roles on a ‘per user’ basis)

That’s great for general security setup, but it does take a system admin to get it handled. Alternatively, of course, it’s possible to use AAD Security Groups which are connected to security teams within Power Platform, and users added to them will inherit the necessary permissions.

But what if we want to allow users who aren’t system administrators to allow other users access to the records? Well, it’s also possible to share a specific record with another user – doing this allows the second user to see/access the record, even if they usually wouldn’t be able to do so. This is really great, but does require a manual approach (in that each record would need to be opened, shared with the other user/s, and then closed).

I’ve been working on a project recently where we have the need to share/un-share a larger number of records, but with a different user for each record. We’ve been looking into different ways of doing this, and obviously Power Automate came into mind! We didn’t want to use code for this, for a variety of reasons.


The scenario we had in mind was to have a lookup to the User record, and with populating this with a user, it would then share the record with them. This would be great, as we could bulk-update records as needed (even from an integration perspective), and hopefully all would work well.

So with that, I started to investigate what options could be available. Unfortunately, there didn’t seem to be any out of the box connectors/actions that could be used for this, which was quite disheartening.

My next move was to look at the user forums, & see if anyone had done anything similar. I was absolutely excited to come across a series of responses from Chad Althaus around this exact subject! It turns out that there’s something called ‘Unbound Actions’, which is perfect for the scenario that we’re trying to achieve.

There are two types of actions available within Power Automate:

  • Bound actions. These are actions that target a single entity table or a set of records for a single entity table
  • Unbound actions. These aren’t bound to an entity type and are called as static operations. They can be used in different ways

There are quite a lot of unbound actions available to use:

The one I’m interested in for this scenario is the GrantAccess action. More information around this can be found at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/web-api/grantaccess?view=dynamics-ce-odata-9

It does require some JSON input, but when formatted correctly, it shows along the following lines:
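(The snippet below is a sketch using the Case/incident table as an example, with placeholder GUIDs – swap in whichever table you’re sharing. The exact parameter nesting may differ slightly depending on how the unbound action step presents it, so treat it as illustrative rather than definitive.)

```json
{
  "Target": {
    "incidentid": "00000000-0000-0000-0000-000000000000",
    "@odata.type": "Microsoft.Dynamics.CRM.incident"
  },
  "PrincipalAccess": {
    "Principal": {
      "systemuserid": "11111111-1111-1111-1111-111111111111",
      "@odata.type": "Microsoft.Dynamics.CRM.systemuser"
    },
    "AccessMask": "ReadAccess,WriteAccess"
  }
}
```

In the flow itself, these values would of course come from the trigger outputs (the record identifier and the user set in the lookup field), rather than being hard-coded.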

The different parts of this work as follows:

  • Target is the actual record we’re wanting to apply the action to
  • SystemUserID is the actual system user, and we also need to specify the odatatype
  • AccessMask is what we’re wanting to do when sharing the record (as there are different options available for sharing, ie ReadOnly, Edit, ShareOnwards, etc)

Using this, we’ve therefore built out the following scenario:

  1. Field added to the record, looking up to Users
  2. Relevant users who are able to access the record can set this lookup field to be a specific user record (who doesn’t have access to this record)
  3. Power Automate flow fires on the update of the record when it’s saved (filtering on just this attribute), sharing the record with the selected user
  4. The user then gets an email to notify them that the record has been shared with them, with a URL link to it (it’s somewhat annoying that there’s no inbuilt system notification when a record has been shared with you, but I guess that’s something we’re having to live with!)
  5. They can then go in & access the record as they need to

We’ve also given some thought to general record security, and have additionally implemented the following as well:

  1. If the user lookup value is changed, we obviously share the record with the new user that’s been saved to it
  2. Using a different Unbound Action (RevokeAccess), we remove the sharing of the record from the previous user (we have another field that’s updated with the previous user’s value, which we use to pass into the action, as otherwise we don’t actually know who the previous user was!) – a sketch of this payload is shown below as well
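For reference, the RevokeAccess payload is even simpler – just the target record, and the principal whose access is being removed. Again, this is a sketch (the table and GUIDs are placeholders):

```json
{
  "Target": {
    "incidentid": "00000000-0000-0000-0000-000000000000",
    "@odata.type": "Microsoft.Dynamics.CRM.incident"
  },
  "Revokee": {
    "systemuserid": "11111111-1111-1111-1111-111111111111",
    "@odata.type": "Microsoft.Dynamics.CRM.systemuser"
  }
}
```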

All in all, we’re quite happy that we’ve managed to come up with this solution, which is working splendidly for us. Also, major thanks to Chad for his assistance in getting the syntax correct!

Have you ever needed to do something like this? Did you manage to implement it in some way? Drop a comment below – I’d love to hear how your experience was!

‘Ghost’ lookup value following deployment

This is something that stumped me fairly recently. It’s also something where I was trying to work out what I should use as the title for this post! Let me share what happened.

I’m working on a project that’s quite critical (COVID-19 related). It’s a project where we’ve built something around Dynamics 365 as an additional wrapper, to provide specific functionality for the pandemic. The same solution is being rolled out to multiple clients, and it only uses functionality from the Power Platform. No custom code at all.

Now, before going into the specifics around it, let’s take a moment to revisit what a lookup field is, and what it does. Essentially a lookup field connects two tables together (wow – that felt strange not to use the word ‘entity’!). In the front interface, it’s used for a 1:N relationship.

So for example, we can have a lookup from Account to Contact, to set the primary contact for the account. The user navigates to the field, searches for the record they’re wanting to associate, and saves it.

Underneath, there’s a relationship that’s automatically created between the two tables, showing the way that the relationship will go (ie 1:N or N:1). This is created on both sides (more on that another time around dependencies), and most people will never need to modify it.

When I first started with this particular project, I got the solution, and deployed it into the Dev environment (for the project that I was on). On testing it out, I found something very interesting. We’re using the Case (Incident) table, and there are various lookup fields on it. One of these was already populated with a value. Hmm – that’s interesting, I thought. It was a new deployment, and we hadn’t set any static data up yet at all. So how could it already be populated?

How is this being set, when I’ve not entered it into the system as a record…

Furthermore, I was unable to save the Case record. When I tried to, I was getting an interesting error:

On drilling down into the error log (which admittedly is actually getting better in the details shown in it, thankfully!), it turned out to be because I didn’t have access to the referenced record (in the lookup field). It just didn’t exist.

So the lookup field value was coming in with a hard-coded GUID (record identifier). But how was this being done, especially if there weren’t any records (of that type) in the system at all?

From my experience of things, I could think of two ways in which to populate a lookup field with a hard-coded value:

  • Through a ‘real-time’ Power Automate flow, on create of the record. It’s possible to set a GUID value in the flow, and then it would be set
  • Through custom code, running on the form. Again, it’s possible to hard-code a GUID there, and then set the field

However on checking both options, neither of them was happening. No Power Automate flows touching the Case record, and no custom code at all on the Case.

It was then, digging through the other parts of the solution, that I saw various Business Rules. For those unfamiliar with these, I’ll quote from the official Microsoft documentation around them:

By combining conditions and actions, you can do any of the following with business rules:

  • Set column values
  • Clear column values
  • Set column requirement levels
  • Show or hide columns
  • Enable or disable columns
  • Validate data and show error messages
  • Create business recommendations based on business intelligence.

I’ve used Business Rules (somewhat extensively) before. However on going into the one for the Case table, I found that something was happening that I wasn’t aware could happen! It’s actually possible to set a lookup field value through it:

I spy a lookup option

Even though we’ve deployed the solution from the original development environment to a different environment, this is still set. But there are no records that are available:

I had never thought that it would be possible – to set a static value (eg a number, or some text), fine. But to set referential data? Wow.

Obviously this can be quite helpful. Where it’s NOT helpful though is when deploying the solution to another environment (as in this situation). It doesn’t help to re-create the record that it’s referring to using the same record name, as the lookup is using the underlying GUID (which you can’t re-create). This really does take solution deployment into a whole new perspective, where you need to be careful around these sorts of things as well.
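If you ever find yourself hunting for where a value like this is coming from, one trick that can help is to list out the business rules defined against the table – under the bonnet, they’re stored as rows in the Process (workflow) table with a category of 2. A rough sketch below (the environment URL and token are placeholders, and the category value is my understanding of how business rules are stored, so do verify it):

```python
import requests

# Placeholders - environment URL and token
ENV_URL = "https://yourorg.crm11.dynamics.com"
TOKEN = "<access token>"

def list_business_rules(table_logical_name: str) -> None:
    """List business rules (Process rows with category 2) defined against a given table."""
    response = requests.get(
        f"{ENV_URL}/api/data/v9.2/workflows",
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
        params={
            "$select": "name,statecode",
            "$filter": f"category eq 2 and primaryentity eq '{table_logical_name}'",
        },
        timeout=30,
    )
    response.raise_for_status()
    for rule in response.json().get("value", []):
        state = "Activated" if rule.get("statecode") == 1 else "Draft"
        print(f"{rule.get('name')} ({state})")

if __name__ == "__main__":
    list_business_rules("incident")  # the Case table from the example above
```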

So something new that I’ve learned (I do try to learn something new each day), and specifically around an area I thought I knew quite well. It did take some time, but I’m glad that I (finally) found the root cause of it, and identified what was causing it.

Have you ever had something like this happen, where you’re searching & searching for the cause of it? Drop a line below – I’d love to hear!

Data Export Service Connection Issues

This is a slightly different post from the usual stuff that I talk about. It’s much more ‘techy/developer’ focused, but I thought it would still be quite useful for people to keep in mind.

The background to this comes from a project that I’ve been working on with some colleagues. Part of the project involves setting up an Azure SQL database, and replicating CDS data to it. Why, I hear you ask? Well, there are some downstream systems that may be heavy users of the data, and as we well know, CDS isn’t specifically built to handle a large number of queries against it. In fact, if you start hammering the CDS layer, Microsoft is likely to reach out to ask what exactly you’re trying to do!

Therefore (as most people would do), we’re putting in database layer/s within Azure to handle the volume of data requests that we’re expecting to occur.


So with setting up things like databases, we need to create the name for them, along with access credentials. All regular ‘run of the mill’ stuff – no surprises there. In order for adequate security, we usually use one of a handful of password generators that we keep to hand. These have many advantages to them, such as ensuring that it’s not something we (as humans) are dreaming up, that might be easier to be guessed at. I’ve used password generators over the years for many different professional & personal projects, and they really are quite good overall.

Example of a password generation tool

Once we had the credentials & everything set up, we then logged in (using SQL Server Management Studio), and all was good. Everything that we needed was in place, and it was looking superb (from the front end, at least).

OK – on to getting the data actually loaded in. To do this, we’re using the Data Export Service (see https://docs.microsoft.com/en-us/power-platform/admin/replicate-data-microsoft-azure-sql-database for further information around this). The reason for using this is that the Data Export Service intelligently synchronises the entire database initially, and thereafter synchronises on a continuous basis as changes occur (delta changes) in the system. This is really good, and means we don’t need to build anything custom to handle it. Wonderful!

Setting up the Data Export Service takes a little bit of time. I’m not going to go into the details of how to set it up – instead there’s a wonderful walkthrough by the AMAZING Scott Durow at http://develop1.net/public/post/2016/12/09/Dynamic365-Data-Export-Service. Go take a look at it if you’re needing to find out how to do it.

So we were going through the process. Part of this is needing to copy the Azure connection string into a script that you run. When you do this, you need to re-insert the password (as Azure doesn’t include it in the string). For our purposes (as we had generated this), we copied/pasted the password, and ran things.

However all we were getting was a red star, and the error message ‘Unable to validate profile’.

As you’d expect, this was HIGHLY frustrating. We started to dig down to see what actual error log/s were available (with hopefully more information on them), but didn’t make much progress there. We logged in through the front end again – yes, no problems there, all was working fine. Back to the Export Service & scripts, but again the error. As you can imagine, we weren’t very positive about this, and were really trying to find out what could possibly be causing this. Was it a system error? Was there something that we had forgotten to do, somewhere, during the initial setup process?

It’s at these sorts of times that self-doubt can start to creep in. Did we miss something small & minor, but that was actually really important? We went over the deployment steps again & again. Each time, we couldn’t find anything that we had missed out. It was getting absolutely exasperating!

Finally, after much trial & error, we narrowed the issue down to one source. It’s something we hadn’t really expected, but had indeed caused all of this to happen!

What happened was that the password that we had auto-generated had a semi-colon (‘;’) in it. In & of itself, that’s not an issue (usually). As we had seen, we were able to log into SSMS (the ‘front-end’) successfully, with no issues at all.

However when put into code, Azure treats the semi-colon as a special character (a command separator). It was therefore not recognising the entire password, which was causing the entire thing to fail! To resolve this was simple – we regenerated the password to ensure that it didn’t include a semi-colon character within it!

Now, this is indeed something that’s quite simple, and should be at the core of programming knowledge. Most password generators will have an option to avoid this happening, but not all password generators have this. Unfortunately we had fallen subject to this, but thankfully all was resolved in the end.
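As a small illustration of how to avoid this in the first place, it’s easy enough to generate passwords that simply exclude the characters which have special meaning in connection strings or scripts. A minimal sketch (the excluded character set is just an example – adjust it for whatever your downstream systems are sensitive to):

```python
import secrets
import string

# Characters that can have special meaning in connection strings or scripts
# (example set only - extend it for anything else your downstream systems dislike)
EXCLUDED = ";'\"={} "

def generate_password(length: int = 24) -> str:
    """Generate a random password that avoids connection-string separator characters."""
    alphabet = [c for c in string.ascii_letters + string.digits + string.punctuation
                if c not in EXCLUDED]
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```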

The setup then carried on successfully, and we were able (after all of the effort above) to achieve what we had set out to do initially.

Have you ever had a similar issue? Either with passwords, or where something worked through a front-end system, but not in code? Drop a comment below – I’d love to hear!