Calculated columns not working with data migration

Interesting title, isn’t it? I thought I’d do something that might grab people’s attention, and this was the best that I could come up with! So, let’s get into the scenario, the issue experienced, and how we managed to resolve it.

The scenario on this project was as follows. We’ve been implementing a customer service solution for a sales company that manufactures multiple products, under multiple brands. Currently there are multiple systems used for order entry, which at some point will be consolidated into a single system.

However, for the moment they want to be able to carry out holistic customer service across all brands: all customer service agents have access to the same data, customers are serviced in the same way regardless of brand, and so on.


As a result, Dynamics 365 Customer Service was the ticket, and has many standard capabilities that address the customer’s needs.

Now, whilst sales (aka orders) will not be handled within Dynamics 365 itself, we didn’t want the customer service agents to have to look up order information in the ordering systems. Instead, we wanted to be able to bring the sales/order information into Dynamics 365 for reference (at some point it’s likely that the customer will actually use Dynamics 365 capabilities for sales as well).

In order to do this, we’ve had some amazing data architects bringing the data together into Azure Data Factory (ADF) from the multiple order systems, and then pushing the data into Dynamics 365 (users have a read-only view of it).

When bringing in the data, we were looking to capitalise on the native functionality of Dynamics 365, namely the ability for columns to be automatically calculated. An example of this would be bringing in the order line amount and the tax amount, and then having the total order line amount automatically calculated. This is standard system functionality, and has many different uses across Dynamics 365.

Now, it’s important to note here that as we’re not actually handling orders within Dynamics 365, we’re also not holding a ‘proper’ product list within Dynamics 365 itself. However, orders need to show product information on them (bit useless otherwise!), so we’re using the capability of ‘write-in products’.

Note: If you haven’t come across write-in products before, it’s actually a really great item. Essentially, it allows products to be entered for opportunities, quotes, orders etc (wherever products are used), but for when the product/s aren’t in the system product catalogue. Write-in products allow you to simply type the name of a product or service, & then type in the price. This is very useful if, for instance, a product isn’t yet available in the product catalogue, but you still want to be able to quote it. In our scenario, we’re using write-in products to avoid the need to manage the product catalogue itself. It’s also helpful for when you don’t want to use price lists, as all products need to be associated to a price list.
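To make the expectation concrete, here’s a minimal sketch of what one of these order lines looks like when created through the Dataverse Web API, rather than through ADF. The org URL, token and GUID are placeholders, and the exact column list is an assumption based on the standard Order Line (salesorderdetail) table; the point is that we only push the raw inputs (price, quantity, tax) and leave the totals for the platform to calculate.

```python
import requests

# Placeholders - in reality the integration ran through ADF, not a script like this
ORG_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {
    "Authorization": "Bearer <access-token>",
    "Content-Type": "application/json",
}

order_line = {
    # Write-in product: no product catalogue lookup, just a description & a manual price
    "isproductoverridden": True,
    "productdescription": "Widget X (Brand A)",
    "ispriceoverridden": True,
    "priceperunit": 100.00,
    "quantity": 2,
    "tax": 40.00,
    # Link the line to its parent order (placeholder GUID)
    "salesorderid@odata.bind": "/salesorders(00000000-0000-0000-0000-000000000000)",
}

resp = requests.post(f"{ORG_URL}/salesorderdetails", json=order_line, headers=HEADERS)
resp.raise_for_status()

# With the system pricing calculation enabled, Dynamics 365 works out the extended
# amount for the line (and rolls the totals up to the order) - we never send them in.
```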

So we start off the data migration, and it’s looking good. No issues being reported by the integration…

But, then users go in to the UAT system to check through things, and find that when looking at orders, the totals aren’t being calculated:

Order line not calculating
Order not calculating either!

Hmm. That’s strange. So we started to look at what could have caused this problem…

  • Is the environment in ‘admin mode’? If an environment is in admin mode, then auto-calculations won’t work at all. Well, the environment wasn’t in admin mode, so it wasn’t that
  • Is there a plugin not firing correctly? Well, this is native Microsoft standard functionality within the platform, so unlikely, but we double-checked to make sure. No, there wasn’t anything causing issues in that dimension
  • Does it work for users, when it’s created manually within the system? Yes, it DOES work when users enter an order/order line with a product. Hmm…this was getting VERY confusing

For clarification, we didn’t want to auto-calculate the information within ADF, and then push it into the relevant Dynamics 365 columns. We wanted to be able to rely on the system working in the way that it should!

Finally, we found out why the calculated columns weren’t working. There’s actually a system setting that governs how this works:

With this set, the auto calculations are now working in the system:

So, thankfully we managed to get this working, and everything went smoothly from that point.

Have you ever been caught out by something similar? I’d love to hear – please drop a comment below!

Security Roles & Assigning Records

Let’s face it, and call a spade a spade (or a shovel, depending on where in the world you happen to be). Security roles are very important within Dataverse, to control what users can (& can’t!) do within the system. Setting them up can be quite time-consuming, and troubleshooting them can sometimes be a bit of a nightmare.

Obviously we need to ensure that users can carry out the actions that they’re supposed to do, and stop them doing any actions that they’re not supposed to do. This, believe it or not, is generally common sense (which can be lacking at times, I’ll admit).

Depending on the size of the organisation, and of course the project, the number of security roles can range from a few, to a LOT!

Testing out security can take quite a bit of time, to ensure that testing covers all necessary functionality. It’s a very granular approach, and can often feel like opening a door, to then find another closed door behind the first one. Error messages appear, a resolution is implemented, then another appears, etc…

Most of us aren’t new to this, and understand that it’s vitally important to work through these. We’ve seen lots of different errors over our lifetime of projects, and can usually identify (quickly) what’s going on, and what we need to resolve.

Last week, however, I had something new occur, that I’ve never seen before. I therefore thought it might be good to talk about it, so that if it happens to others, they’ll know how to handle it!

The scenario is as follows:

  • The client is using Leads to capture initial information (we’re not using Opportunities, but that’s a whole other story)
  • Different teams of users have varying access requirements to the Leads table. Some need to be able to view, some need to be able to create/edit, and others aren’t allowed to view it at all
  • The lead process is driven by both region (where the lead is located), as well as products (which products the lead is interested in)

Now, initially we had some issues with different teams not having the right level of access, but we managed to handle those. Typically we’d see an error message along the following lines:

We’d then use this to narrow down the necessary permissions, adjust the security role, re-test, and continue (sometimes onto the next error message, but hey, that’s par for the course!).

However, just as we thought we had figured out all of the security roles, we had a small sub-set of users report an error that I had NEVER seen before.

The scenario was as follows:

  • The users were able to access Lead records. All good there.
  • The users were able to edit Lead records. All good there.
  • The users were trying to assign records (ie change the record owner) to another user. This generally worked, but when trying to assign the record to certain users, they got the following error:

Now this was a strange error. After all, the users were able to open/edit the lead record, and on checking the permissions in the security role, everything seemed to be set up alright.

The next step was to go look at the error log. In general, error logs can be a massive help (well, most of the time), assuming that the person looking at it can interpret what it means. The error log gave us the following:

As an aside, the most amusing thing about this particular error log, in my opinion, was that the HelpLink URL provided actually didn’t work! Ah well…

So on taking a look, we see that the user is missing the Read privilege (on what we’re assuming is the Lead table). This didn’t make sense – we went back to DOUBLE-check, and indeed the user who was trying to carry out the action had read privileges on the table. It also didn’t make sense because the user was able to open the lead record itself (disclaimer – I’ve not yet tried building a security role where the user has create/write access to a table, but no read access… I’m wondering what would happen in such a scenario).

Then we had a lightbulb moment.


In truth, we should have probably figured this out before, which I’ll freely admit. See, if we take a look at the original error that the user was getting, they were getting this when trying to assign the record to another user. We had also seen that the error was only happening when the record was being assigned to certain users (ie it wasn’t happening for all users). And finally, after all, the error message title itself says ‘Assignee does not hold the required read permissions’.

So what was the issue? Well, it was actually quite simple (in hindsight!). The error was occurring when the record was being attempted to be assigned to a user that did not have any permissions to the Lead table!

What was the resolution? Well, to simply grant (read) access to the Lead table, and ensure that all necessary users had this granted to them! Thankfully a quick resolution (once we had worked out what was going on), and users were able to continue testing out the rest of the system.
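If you want to script or reproduce this, assigning a record is simply an update of its owner. Here’s a minimal sketch against the Dataverse Web API (the org URL, token and GUIDs are placeholders); the same rule applies whether the assignment happens through the UI, a flow or code: the assignee needs at least Read on the Lead table.

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {
    "Authorization": "Bearer <access-token>",
    "Content-Type": "application/json",
}

lead_id = "11111111-1111-1111-1111-111111111111"      # placeholder
assignee_id = "22222222-2222-2222-2222-222222222222"  # placeholder systemuser

# Assigning a record = changing its owner
payload = {"ownerid@odata.bind": f"/systemusers({assignee_id})"}

resp = requests.patch(f"{ORG_URL}/leads({lead_id})", json=payload, headers=HEADERS)
if not resp.ok:
    # If the target user has no Read privilege on the Lead table, this is where the
    # 'Assignee does not hold the required read permissions' error comes back
    print(resp.json().get("error", {}).get("message"))
resp.raise_for_status()
```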

Has something like this ever happened to you? Drop a comment below – I’d love to hear the details!

MB-260: Microsoft Customer Data Platform Specialist

It’s been a while since I’ve taken an exam. Admittedly, this is for two reasons. Firstly, the renewal process for exams now (as updated last year) is not to take them again, but rather to re-qualify through Microsoft Learn. The second reason is that I’ve been waiting for some new exams to come out (OK – there’s the DA-100, which is still on my list of things to do…).

Well, there’s a new exam on the block. In fact, it’s a different type of exam – this is a ‘Speciality’ exam, rather than focusing on a specific type of application. It’s the first of its kind, though there are likely to be more to follow in the future.

It’s the MB-260, which is all around Customer Data. That’s right – it’s not about how to do sales, or customer service, or something else. It’s about taking a (holistic) approach to ALL of the data that we can hold on customers, and doing something with it.

The official page for it is at https://docs.microsoft.com/en-us/learn/certifications/exams/mb-260. The specification for it is:

Candidates for this exam implement solutions that provide insights into customer profiles and that track engagement activities to help improve customer experiences and increase customer retention.

Candidates should have firsthand experience with Dynamics 365 Customer Insights and one or more additional Dynamics 365 apps, Power Query, Microsoft Dataverse, Common Data Model, and Microsoft Power Platform. They should also have direct experience with practices related to privacy, compliance, consent, security, responsible AI, and data retention policy.

Candidates need experience with processes related to KPIs, data retention, validation, visualization, preparation, matching, fragmentation, segmentation, and enhancement. They should have a general understanding of Azure Machine Learning, Azure Synapse Analytics, and Azure Data Factory.

Note that there’s quite a bit of Azure in there – it’s not just about Power Platform, Dataverse, or Dynamics 365. People who handle reporting on customer data should have various Azure skills as well.

There’s also a new type of badge that will be available:

At the time of writing, there are no official Microsoft Learning paths available to use to study. I do expect this to change in the near future, and will update this article when they’re out. However the objectives/sub-objectives are available to view from the main exam page, and I’d highly recommend going ahead & taking a good look at these.

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). I’ve tried to group things together as best as possible for the different subject areas.

Overall, I had 51 questions, which was towards the higher number of questions that I’ve experienced in my exams over the last year or so. There was only a single case study though.

Some of the naming conventions hadn’t been updated to the latest terminology, which I would have expected. I still had a few references to ‘entities’ and ‘fields’ come up, though for the most part ‘tables’ and ‘columns’ were used. I guess it’s a matter of time until everything is brought up to speed.

  • Differences between Audience Insights and Engagement Insights
    • What are the benefits of each
    • When would you use each one
    • What types of users will benefit from each type
    • How to create customer insights
  • Environments
    • Types of environments
    • How to create a new environment
    • What options are available when creating an environment
    • What is possible to copy from an existing environment
  • Relationships
    • Different types of relationships
    • What is each one used for
    • Limitations of different relationship types
  • Business level measures vs customer level measures
    • What each one is, and what they’re used for
  • Power Query
    • How to use
    • How to configure
    • How to load data
  • Data mapping
    • Different types available to use
    • Scenarios each type should be used for
    • Limitations of each type
    • How to set it up
  • Segments
    • What are segments, how are they set up, how are they used
    • What are quick segments, how are they set up, how are they used
    • What are segment overlaps, how are they set up, how are they used
    • What are segment differentiators, how are they set up, how are they used
  • Measures
    • What are measures, how are they set up, how are they used
  • Data refresh
    • Automated vs manual options
    • Limitations of each type
    • Availability of each type
    • How to set up each type
    • How to apply each type
  • Data Unification
    • What is this
    • How it can be used
    • How to set it up
    • Limitations of it
    • Process validation
    • Changing existing models
  • AI for Audience Insights
    • What is this
    • What can it be used for
    • How to use it
    • Factors that can affect outcomes
  • Security
    • Using Azure Key Vault
    • Capabilities of this
    • How to set it up
    • How to use it
  • Dynamics 365
    • Capabilities for interacting with Dynamics 365
    • How to set it up
    • How to display data, and where it can be displayed
    • What actions users are able to carry out within Dynamics 365

Wow. It’s a lot of stuff. It’s definitely an exam where, if you’re not already hands-on with the skills needed, I’d highly recommend getting a decent amount of experience before taking it!

I can’t tell you if I’ve passed it or not…YET! Results aren’t going to be out for several months, and to be honest, I’m not quite sure how well I’ve actually done.

So, if you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Workaround for sharing Canvas Apps

Don’t you find it absolutely frustrating when there’s a canvas app that you want to get access to, or give other users access to, but can’t see it? It’s REALLY annoying, but it’s sort of the way that Microsoft has designed the platform (at least at this point in time).

See, when a user creates a canvas app, only the creator is able to see & launch it. If other users want to get access to it, the creator needs to share it. This can be done by sharing the app directly with another user, or by sharing it with an AAD Security Group (which is sort of best practice).

Now, of course there’s the Microsoft Power Platform Centre of Excellence solution, which includes a very handy app to assign permissions for canvas apps. After all, if a user is on holiday, sick leave, or has left the company, there needs to be some way of assigning permissions for other users to gain access to it. It’s really helpful, but of course needs the CoE solution installed.

Let’s think of another scenario. What about if we have some canvas apps as part of a solution that’s deployed through (proper) ALM – such as using Azure DevOps with automated pipelines? Best practice for this is to use service principals (ie non-interactive user logins). This is great, but the canvas app/s will then be owned by the service principal. So without the CoE ‘Set App Permissions’ canvas app, we’re sort of stuck, as we can’t gain access to the app.

Or can we…..?

So this is a scenario that I’ve been dealing with recently, and I’ve found a really cool workaround that doesn’t need the CoE ‘Set App Permissions’ canvas app to be able to handle the situation.

The example below (amusingly, in my opinion) actually uses the Microsoft CoE solution itself, but this works with any canvas apps that are held within a solution (again, this heavily supports using solutions for ALL development items!).

So, this is what the actual installed apps look like in this environment:

As we can see, there are a lot of them! But what happens if I’m logged in as my regular user? What do I see if I go to the list of apps? Well, I’ll see the following:

Now, as we can see, I’m able to see the model-driven app (as these aren’t hidden at all). But I’m not able to see ANY of the canvas apps! So how can I get access to them, or share them with other users?

Well, if I take a look at the solution itself, I can see the following when browsing to the list of apps (I’m really loving the new Solution Explorer layout, I’ll freely admit!):

I can try to play the canvas app (in this case, the ‘Set App Permissions’ app) directly from the solution. But when I try to do this, I’ll get the following error message:

Now, this is of course happening because I’m not the owner of the app, & the app hasn’t been shared with me at all. So really I was expecting this error to happen.

However, if I take a look at the menu options displayed for me, I can see that the ‘Share’ option isn’t greyed out. I wonder what happens if I click it…

Now this is EXCITING! When clicking the ‘Share’ option on the menu, I’m given the regular sharing screen, where I’m able to set app permissions. So it looks like I’m able to do something here. OK – let’s go ahead & try to share the app with my own user:

So I’ve looked up my own user, and then clicked ‘Share’. This is what happens next…

Exciting moment – will this work?

Waiting with bated breath, and then…

It’s worked! The app sharing has been successful with my user.

Note: The example that I’m using here is with my own user account. However it doesn’t need to be – I can select any user account or AAD Security Group, and share accordingly.

Going to my list of apps, I can now see that the app is showing up for me:

Clicking the app to launch it presents me with the permissions dialogue, and having confirmed permissions, then launches it properly:

So this is indeed a way in which it’s possible to share canvas apps with users and/or AAD security groups, even when a user isn’t the owner of the canvas app.

It is important to note that the user carrying this out does need to have one of the following security roles in the environment:

  • System Customiser
  • System Administrator

Without having one of these roles, it’s not going to be possible to carry out the above (mostly because it’s not possible to see solutions & dig down into them).

This is a handy little trick that hopefully will help clear up one of the headaches when trying to share canvas apps! Of course it’s possible to use the Microsoft CoE tool to set app permissions, but if a customer doesn’t have it installed, then this would be another way to approach things.
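As a related aside, if you first want to work out which apps are actually owned by the deployment service principal (and so are invisible to everyone else), solution-aware canvas apps are stored as rows in the canvasapp table in Dataverse. A rough sketch, assuming that table is queryable through the Web API in your environment, with the usual org URL and token placeholders:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {"Authorization": "Bearer <access-token>"}

# List canvas apps with their owners, to spot which ones are owned by the
# ALM service principal and still need sharing out to users / AAD groups
resp = requests.get(
    f"{ORG_URL}/canvasapps",
    params={"$select": "displayname,name,_ownerid_value"},
    headers=HEADERS,
)
resp.raise_for_status()

for app in resp.json().get("value", []):
    print(app["displayname"], "-> owner:", app["_ownerid_value"])
```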

Have you ever had this issue? How did you go about solving it? I’d love to hear – please drop a comment below…

Troubleshooting the ‘Follow’ functionality

On a recent client project, we’ve come up against an interesting situation. Some of the users have the ‘Follow’ functionality available to them, but others don’t seem to have it. This, of course, is quite confusing, so I thought it would be good to write about it, for others who may come up against this.

But first, let’s take a step back. After all, before this had happened I had never heard of the ‘follow’ functionality within the system, and I’m quite sure that many others haven’t either! So what exactly is this all about?

What is ‘Follow’?

We’ve all been there – we have some customers who are ‘priority customers’, and we want to know/see everything that’s happening around them. Obviously we can go into their specific record/s, and see what’s going on. For example, seeing new cases added for these customers, other activities, etc. But what if we don’t want to have to manually open the records each time, or set up specific views in the system for them?

Well, this is where the Follow functionality comes in. It’s possible to track activities (in ‘real-time’) for records that a user follows. Microsoft has given us the ability to set (or unset) this on a per record basis, so that users can set their own preferences within the system. When a user follows a specific record, the details for that record then show up in the user’s activity feed. This can then be used further, such as displaying it within a dashboard, for example.

Follow functionality through views
Follow functionality on a specific record

It’s also possible to automatically follow records based on specific criteria.

How to set up Follow functionality

In order for records to be able to have the follow functionality available to them, they need to have the Activity Feed enabled for the specific table. The default system tables such as Accounts, Contacts & Leads already have this enabled, so these records are able to be followed without any additional configuration around them.

To enable other tables (such as custom tables that you may have created) to be able to have the records within them followed, we need to carry out the following steps:

  1. Go to the Advanced Settings menu, and open Activity Feeds Configuration

2. Find the table that we’re wanting to configure this for (if it’s not showing up, click the ‘Refresh’ button on the menu)

Here we can see that the Channel table isn’t enabled at this point

3. Click the ‘Activate’ button on the menu bar

4. Confirm the pop up screen

And voila – you’re done! Users will now be able to go into the table/s, and follow (or unfollow) records there
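If you’d rather check this programmatically than click through Advanced Settings, the activity feed configuration is held in the Post Configuration (postconfig) table. A rough sketch, with org URL and token placeholders, and assuming the entityname/statecode columns used here are still the relevant ones in your version:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {"Authorization": "Bearer <access-token>"}

# Each postconfig row represents a table's activity feed configuration;
# statecode 0 is normally Active (ie feeds enabled for that table)
resp = requests.get(
    f"{ORG_URL}/postconfigs",
    params={"$select": "entityname,statecode"},
    headers=HEADERS,
)
resp.raise_for_status()

for cfg in resp.json().get("value", []):
    status = "enabled" if cfg["statecode"] == 0 else "disabled"
    print(f"{cfg['entityname']}: activity feed {status}")
```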

Troubleshooting

So we now understand what the follow functionality is, and how to enable it. But what happens when users can’t actually see it within the system, to be able to use it?

Well, there are several different things that we can do to look to solve the issue:

  • Have activity feeds been configured for the table? If they’ve not been configured, then they’ll need to have this set up (this is why I’ve put the steps above as to how to do this!)
  • Are security roles set up correctly?

The second one is what turned out to be the issue for this project. It’s been quite confusing, as originally mentioned, that certain users did see the follow functionality, but other users didn’t.

The first place to check is the ‘follow’ privileges on each security role:

As you can see above, we had given organisation-level access on the security role (& actually across all security roles), though the users were still having issues. So the next step is to check a different security privilege within the security role. This is the ‘Post Configuration’ setting, which is found under the Custom Entities section (why it’s under Custom, I have NO idea):

Without this enabled, users with the security role will NOT be able to see/use the follow functionality within the system!

Hopefully this should then sort out all issues, and users will be able to use the functionality as required.

Have you ever had issues with this feature? Have you found a different solution to fix it? Drop a comment below – I’d love to hear!

Environments & ‘Admin Mode’

With some recent events happening (both professional & personal), I’ve taken a slight step back from putting out posts on here. Thankfully things seem to be settling down, so I’m getting (back) into the swing of things!

I thought that it would be good to talk about a subject that I fell ‘foul’ of recently. This is around environments, and more specifically, the ‘admin mode’ that it’s possible to use on them.

So what exactly is this ‘admin mode’? Well, the aim of it is to restrict access to certain users, namely System Administrators & System Customisers. Why would we want to do this? There are several scenarios that come to mind:

  • Performing a system upgrade (such as enabling new features)
  • Changing environment type (eg Production to Sandbox, or vice-versa)
  • Restoring an environment

Essentially, any time we have operation-type work that we’re wanting to carry out. This way whatever we’re doing won’t affect users, and anything that the users are doing won’t affect things either (symbiotic relationship there!).

So as an example, if we’re doing a major release which changes functionality within a system, we wouldn’t want users in the system carrying out their usual work, as this could cause data issues if they save records during the actual release. We of course SHOULD be communicating to users that a release is going to take place, and that they shouldn’t be in the system at the time, but ‘admin mode’ is how we can truly enforce it.

Something to bear in mind as well is that if you’re going ahead & restoring an environment to a previous state (whether that’s an automatic save point, or a manual one), it will automatically put the environment into ‘admin mode’ once the restore has been completed. This is very important to keep in mind!

There are three settings around administration mode:

  1. ‘Administration Mode’. This sets whether admin mode is on or off!
  2. ‘Background Operations’. This sets whether background processes, such as workflows, Power Automate flows, and Exchange synchronisation, are enabled (allowed to happen) or disabled (stopped from happening)
  3. ‘Custom Message’. This allows you to set a custom message that users (who are not system administrator/system customiser) will see when they attempt to access the environment

So this is the scenario that tripped me up a few weeks back:

  • I was needing to restore an environment to an earlier save point (to be clear, this was NOT a production environment)
  • I went ahead with the restore, and it completed successfully
  • Given that I was doing this at night, one of my children woke up, and I had to deal with them
  • I came back to things, saw that it completed, and then went ahead with the release that I was needing to do

All seemed to go well. However, when users were testing (which admittedly was a few days later), they reported that some functionality wasn’t working. This was strange, as it had been working before the release (& the release that I did hadn’t actually touched it!).

It turned out to be Power Automate flows that just didn’t seem to be running. OK – I started to look into them, but couldn’t figure out why they hadn’t run.

Creating a test Power Automate flow didn’t seem to work either – despite running it to test it, the trigger never activated! I was quite puzzled by this, and couldn’t (initially) work out the reason.

Then I thought to check environment settings! Lo & behold, the environment was STILL in administration mode, and the Background Process option was disabled! Aha – I’ve found the source!

Flipping this out of administration mode thankfully then allowed all Power Automate flows to work/run, and users confirmed that functionality was indeed running as expected. As you can imagine, I was quite relieved!


Something that I hadn’t realised previously is that if you manually put an environment into administration mode, it doesn’t automatically disable background processes. However, if you restore an environment, it DOES disable background processes by default. So if you’re wanting to try out automation items within a restored environment that’s still in administration mode, you’re going to need to ensure that you toggle the Background Processes toggle to allow it to work!

One further thing to learn as well (which I’ve been asked already by some people, so thought that I would mention it here). I’ve mentioned above that users were in the system, but reporting that things weren’t working. Now given that the environment was in administration mode, people have asked how users could be in it! The answer is that these users actually had the system customiser role applied to them, which is why they could get in! If they hadn’t had the role, then perhaps I might have realised things a little sooner (ie that the environment was in administration mode).

So a (good) little lesson learned, and I’ll definitely take it forwards. Has this, or anything else like it, ever tripped you up? Drop a comment below – I’d love to hear!

PL-600: Microsoft Power Platform Solution Architect

Well, it’s FINALLY here. And by finally, I guess I’m saying that I’ve been waiting for this for a while? The PL-600 exam is the new ‘Holy Grail’ for Dynamics 365/Power Platform people, being the Solution Architect (3 star) exam. Ten minutes after it went live, I booked to take it, and four hours after it went live I sat it! (I would have taken it sooner, but had to have supper first, get the kids to bed, etc…)

The first solution architect exam that Microsoft did in this space was the MB-600 (see my exam experience write-up on it at MB-600 Solution Architect Exam). However, with the somewhat recent shift towards certifications for the wider Power Platform, it was inevitable that this exam would change as well.

Interestingly enough, the MB-600 now counts towards some of the Microsoft Partner qualifications. I’d expect that when it retires (currently planned for June 2021), the PL-600 will take the place of it in the required certifications to have.

So, how to discuss it? Well, the obvious first start is to link to the official Microsoft page for it, which is at https://docs.microsoft.com/en-us/learn/certifications/power-platform-solution-architect-expert/. According to the specification for it:

Microsoft Power Platform solution architects lead successful implementations and focus on how solutions address the broader business and technical needs of organizations.
A solution architect has functional and technical knowledge of the Power Platform, Dynamics 365 customer engagement apps, related Microsoft cloud solutions, and other third-party technologies. A solution architect applies knowledge and experience throughout an engagement. The solution architect performs proactive and preventative work to increase the value of the customer’s investment and promote organizational health. This role requires the ability to identify opportunities to solve business problems.
Solution architects have experience across functional and technical disciplines of the Power Platform. Solution architects should be able to facilitate design decisions across development, configuration, integration, infrastructure, security, availability, storage, and change management. This role balances a project’s business needs while meeting functional and non-functional requirements.

So it’s not really changed that much from the MB-600, though obviously there’s now an expectation for solutions to bring in other parts of the Power Platform, as well as to dip into Azure offerings. Pretty much par for the course, in my experience, with how recent projects that I’ve been on have been implemented.

At the time of writing, there are no official Microsoft Learning paths available to use to study. I do expect this to change in the near future, and will update this article when they’re out. However the objectives/sub-objectives are available to view from the main exam page, and I’d highly recommend going ahead & taking a good look at these.

Passing the exam (along with having either the PL-200 Microsoft Power Platform Functional Consultant or PL-400: Microsoft Power Platform Developer Exam qualifications as well) will result in a lovely (new) shiny badge. Oh, we do so love those three stars on it!

As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). I’ve tried to group things together as best as possible for the different subject areas.

Overall, I had 47 questions, which is around the usual amount that I’ve experienced in my exams over the last year or so. What was slightly unusual was that instead of two case studies, I got three of them! (note that your own experience may likely vary from mine).

Some of the naming conventions hadn’t been updated to the latest terminology, which I would have expected. I still had a few references to ‘entities’ and ‘fields’ come up, though for the most part ‘tables’ and ‘columns’ were used. I guess it’s a matter of time until everything is brought up to speed.

  • Environments
    • Region locations, handling scenarios with multiple countries
    • Analytics
    • Data migrations
  • Requirement Gathering
    • Functional
    • Non-functional
  • Data structure
    • Tables
      • Types of tables
        • Standard vs custom functionality
        • Virtual tables. What these are, when they would be used, limitations to them
        • Activity types
      • Table relationships & behaviours
      • Types of columns, what each one is suited for
      • Business rules. What they are, how they can be used
      • Business process flows. What they are, how they can be used
  • App types (differences between them, scenarios each one is best suited for)
    • Model
    • Canvas
    • Portal
  • Model-driven apps
    • Form controls (standard vs custom)
    • Form layout (standard functionality vs custom functionality)
    • Formatting inputs
    • Restricting inputs
  • Automation
    • Power Automate flows. What they are, how they can be used, restrictions with them
    • Azure Logic Apps. What they are, how they can be used, restrictions with them
    • Power Virtual Agents
  • Communication channels
    • Self service abilities through Power Virtual Agent chatbots. How this works, when you’d use them, limitations that exist
    • Live agent abilities through Omnichannel. How this is implemented, how customers can connect to a live agent (directly, as well as through chatbots)
    • Teams. When this can be used, how other platform abilities can be used through it
  • Integration
    • Integration tools
    • Power Platform systems
    • Azure systems
    • Third party systems
    • Reporting across data held in different systems
    • Dynamics 365 API
  • Reporting
    • Power BI. What it is, how it’s used, how it’s configured, limitations with it, how to share information with other users
    • Interactive Dashboards. What these are, how these are set up and used, limitations to them
  • Troubleshooting
    • Canvas app issues
    • Model driven app issues
    • Data migration
  • Security
    • Data Protection. What is it, where it’s set up, how it’s used across different requirements in the platform
    • Types of users (interactive/non-interactive)
    • Azure Active Directory, and the role/s it can play, different types of AAD authentication
    • Power Platform security roles
    • Power Platform security teams, types
    • Portal security
    • Restricting who can view forms
    • Field level security
    • Hierarchy abilities
    • Auditing abilities and controls

Wow. It’s a lot of stuff. Not that I’m surprised by that, as essentially it’s the sort of thing that I was expecting (being familiar with the MB-600). I think that on a ‘day to day’ basis, I cover most of these items already, so didn’t have to do a massive amount of revision for items that I wasn’t familiar with.

From my experience in taking it, I’d say that around 30% of the questions seemed to be focused on Dynamics 365, with 70% being focused on Power Platform capabilities. It’s about what I thought it would be when the exam was first announced. Obviously some people are more Dynamics 365 focused, and others are more Power Platform focused, but the aim of the exam (& qualification) is to really understand the breadth of the offerings available.

I can’t tell you if I’ve passed it or not…YET! Results aren’t going to be out for several months, based on previous experience with Beta exams, but I’ve got a good feeling about this.

So, if you’re aiming to take it – I wish you the very best of luck, and let me know your experience!

Solution Dependencies & Management

Solutions are marvellous things. They enable us to be able to package up lots of components, and deploy them to different environments all together as one single package.

However, there have been changes over time as to how solutions are used. I’m not (for the most part) going to go into the Managed VS Unmanaged debate, which I leave to people who are more in the know….

Microsoft Dynamics 365 apps are installed using solutions. Third party apps provided by Independent Software Vendors (ISVs) also use solutions.

In Power Apps, solutions are leveraged to transport apps and components from one environment to another or to apply a set of customisations to existing apps. A solution can contain one or more apps as well as other components such as entities, option sets, etc. You can get a solution from AppSource or from an independent software vendor (ISV).

Custom development should also take place within a solution, to allow it to be deployed appropriately.

But it’s important to take a closer look at how solutions work overall, as we can be involved in multiple projects within the same environment. Not only that, some solutions may require other solutions to be present first in order to actually work! A great example of this is Master Data Management (or MDM), which is where companies have a ‘backbone’ of data that other parts of the system then hang off.

To understand this concept better, let’s take a quick look at solution layering.

Solution Layering

Layering occurs on the import of solutions and describes the dependency chain of components from the root solution introducing it, through each solution that extends or changes the components’ behaviours. Layers are created through an extension of an existing component (taking a dependency on it) or creation of a new component or version of a solution.

Managed and unmanaged solutions exist at different levels within a Microsoft Dataverse environment. In Dataverse, there are two distinct layer levels:

  • Unmanaged layer. All imported unmanaged solutions and unmanaged customizations exist at this layer. The unmanaged layer is a single layer.
  • Managed layers. All imported managed solutions and the system solution exist at this level. When multiple managed solutions are installed, the last one installed is above the managed solution installed previously. This means that the second solution installed can customize the one installed before it. When two managed solutions have conflicting definitions, the runtime behaviour is either “Last one wins” or a merge logic is implemented. If you uninstall a managed solution, the managed solution below it takes effect. If you uninstall all managed solutions, the default behaviour defined within the system solution is applied. At the base of the managed layers level is the system layer. The system layer contains the tables and components that are required for the platform to function.

The following diagram introduces how managed and unmanaged solutions interact with the system solution to control application behavior.

  • The system solution represents the solution components defined within Dynamics 365 or the Power Platform. Without any managed solutions or customisations, the system solution defines the default application behaviour. Many of the components in the system solution are customisable and can be used in managed solutions or unmanaged customisations.
  • Managed solutions are installed on top of the system solution and can modify any customisable solution components or add more solution components. Managed solutions can also be layered on top of other managed solutions. As long as a managed solution enables customization of its solution components, other managed solutions can be installed on top of it and modify any customisable solution components that it provides.
  • Unmanaged customisations. All customisable solution components provided by the system solution or any managed solutions can be customized in the unmanaged customisations
  • Unmanaged solutions are groups of unmanaged customisations. Any unmanaged customized solution component can be associated with any number of unmanaged solutions. These can be edited & modified, regardless of the environment in which they’ve been deployed to
  • The ultimate behaviour of an instance of Dynamics 365 or Power Platform application is the culmination of the system solution, any managed solutions, and any unmanaged customisations.

The official stance of Microsoft, according to its Application Lifecycle Management (ALM) documentation, is that unmanaged solutions are used for development, and that managed solutions are released downstream to further environments. For bespoke solutions, however, this may not fit, and an appropriate balance must be found.
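One quick way of seeing the layering order in a given environment is simply to list the installed solutions by install date, since the most recently installed managed solution sits on top of the ones before it. A minimal sketch against the Dataverse Web API (org URL and token are placeholders):

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {"Authorization": "Bearer <access-token>"}

# Solutions are rows in the 'solutions' table; ordering by installedon gives a
# rough view of the managed layering order (later install = higher layer)
resp = requests.get(
    f"{ORG_URL}/solutions",
    params={
        "$select": "uniquename,friendlyname,version,ismanaged,installedon",
        "$orderby": "installedon asc",
    },
    headers=HEADERS,
)
resp.raise_for_status()

for sol in resp.json()["value"]:
    kind = "managed" if sol["ismanaged"] else "unmanaged"
    print(f"{sol['installedon']}  {sol['uniquename']} ({sol['version']}, {kind})")
```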

Data ‘Backbone’ & Solution Dependencies

Given the way that companies are adopting Power Platform (and Dynamics 365, of course!), it’s highly likely that we will build out system structures that will form the backbone for multiple applications on an on-going basis. With this in mind, it’s appropriate to put proper planning (and appropriate system designs) in place, to avoid any issues that could occur in the future.

Solution Dependencies

When creating system structures within an environment, using unmanaged solutions, connecting two (or more) tables together will create dependencies on each other. In simple terms, if we connect Table A to Table B, there’s a reciprocal relationship created back from Table B to Table A:

This happens even if Table A is in Solution 1, and Table B is in Solution 2. If they’re in the same environment (& both solutions are unmanaged), it will create the two-way dependency.

This will cause issues if trying to deploy each solution individually: the import will fail, as the system will require all items to be available in the solution.
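Before attempting an import into a target environment, it’s worth asking that environment what the solution depends on that isn’t already there. The platform exposes a RetrieveMissingDependencies function for this; a rough sketch (org URL and token are placeholders, and ‘ProjectSolution’ stands in for your solution’s unique name):

```python
import requests

ORG_URL = "https://target.crm.dynamics.com/api/data/v9.2"
HEADERS = {"Authorization": "Bearer <access-token>"}

solution_name = "ProjectSolution"  # placeholder unique name

# Ask the target environment which required components are missing for this solution
resp = requests.get(
    f"{ORG_URL}/RetrieveMissingDependencies(SolutionUniqueName='{solution_name}')",
    headers=HEADERS,
)
resp.raise_for_status()

# The response lists the dependent/required component pairs that aren't present in
# this environment (inspect the shape in your version); empty = safe to import
print(resp.json())
```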

Workable scenario

The way in which to handle the issue of solution dependencies is to ensure that the ‘master backbone’ of system design is created in the main development environment, and then to use that in secondary development environments as the core of additional solutions:

This is in line with emerging Microsoft best practice around solution management (which is likely to move towards having a single development environment per developer, rather than multiple developers working in the same environment).

The steps for doing this are as follows:

  1. Main ‘core solution’ exists (as unmanaged) within the main development environment
  2. When a project requires this to build upon:
    1. A secondary development environment is created
    2. The ‘core solution’ is exported as managed from the main development environment, & imported into the secondary development environment (see the sketch below)
    3. Project work is carried out within the secondary development environment
    4. Once the project solution is complete (or when appropriate for deployment), it can be exported from the secondary development environment
      1. If deploying directly from the secondary development environment to downstream environments, it should be exported as managed
    5. The solution should also be exported as unmanaged, and imported back into the main development environment. This will not cause dependencies to be created with the ‘core solution’ in it

Note: The main ‘core solution’ should consist of the items that are needed for core system work. If an additional item is needed for multiple projects to build on (eg an Account Manager column), it would need to be added to the core solution, rather than to the individual project solution/s, as otherwise there could be further issues downstream.
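The export/import steps above can also be scripted (or wired into an Azure DevOps pipeline) rather than done through the maker portal. Here’s a rough sketch of pulling the ‘core solution’ across as managed, using the ExportSolution and ImportSolution Web API actions; the environment URLs, solution name and token are placeholders, and in practice each environment would need its own token:

```python
import uuid
import requests

MAIN_DEV = "https://maindev.crm.dynamics.com/api/data/v9.2"          # placeholder
SECONDARY_DEV = "https://projectdev.crm.dynamics.com/api/data/v9.2"  # placeholder
HEADERS = {
    "Authorization": "Bearer <access-token>",  # in reality, one token per environment
    "Content-Type": "application/json",
}

# 1. Export the 'core solution' as MANAGED from the main development environment
export = requests.post(
    f"{MAIN_DEV}/ExportSolution",
    json={"SolutionName": "CoreSolution", "Managed": True},  # placeholder name
    headers=HEADERS,
)
export.raise_for_status()
solution_file = export.json()["ExportSolutionFile"]  # base64-encoded zip

# 2. Import it into the secondary (project) development environment
imp = requests.post(
    f"{SECONDARY_DEV}/ImportSolution",
    json={
        "CustomizationFile": solution_file,
        "OverwriteUnmanagedCustomizations": False,
        "PublishWorkflows": True,
        "ImportJobId": str(uuid.uuid4()),
    },
    headers=HEADERS,
)
imp.raise_for_status()
```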

If the project is completed, but requires further work to be carried out later on (or development support), then the following should be done:

  1. Secondary development environment is created
  2. ‘Core solution’ exported from the main development environment as a managed solution, and imported into the secondary development environment
  3. Project solution exported as unmanaged from the main development environment, and imported into the secondary development environment
  4. Work and/or support can be carried out within the secondary development environment, and released appropriately

I’m expecting further information around this to be released by Microsoft in due course (I’m a little surprised there’s not more out there at the moment, to be honest!). It’s vital that we ensure that we’re working with solutions in the right way, to stop any issues occurring later on down the line.

Have you ever had a problem around this? Drop a comment below – I’d love to hear your experiences!

Managed Solutions, & replacing a field

Well to start with, I’m sure that I’m going to get pulled up by some people for my use of the word ‘field’ in the title. After all, officially it’s now a ‘column’! But I (still) can’t let go of calling them as I’ve done so for over a decade, so field it is.

Now to the actual topic of this blog post, which is centred around Managed Solutions. Leaving aside the whole debate about whether we should be using managed or unmanaged solutions (& when/where to do each), there is one definitive benefit of using a managed solution.

See, unmanaged solutions are additive in nature. Work is done in the development environment, then deployed. Further work is done (additional items added, etc), and deployed, and they then appear in the downstream environments. However, if you delete an item in the development environment, it’s not removed when the solution is deployed downstream.

Managed solutions, on the other hand, are both additive & detractive. As with unmanaged solutions, items added in the development environment are also added downstream when deployed. However, if an item is removed from the solution in the development environment, it will also be removed when the solution is deployed downstream. It’s one of the useful ways to ensure that you don’t end up with random unused items just lying around in Production (which have a habit then of popping up in the Advanced Find window, for example). So it’s really quite handy for a lot of reasons to go down this route.

Well, I found myself going down this route recently, but with slightly unexpected results, I’ll freely admit…

The scenario was that we had deployed a managed solution to the UAT (test) environment on a client project. Then the client changed their mind (shock & horror!!) about a specific item, and we needed to change it from a text column to a lookup column. Obviously (as per best practice, of course) this would need to be done in the development environment, and then released downstream. Given that this is a managed solution, I’d expect this to work without any issues. Well, it didn’t…

The change in the development environment (deleted the old item, ‘re-created’ it as a lookup with the same system name) was done, we exported it as managed, and then went to import it in the UAT environment. It took the solution file, thought about it for a while (it’s somewhat of a large solution), & then errored:

Exception type: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault] Message: Attribute mdm_field is a String, but a Lookup type was specified.

Now I was somewhat confused by this message occurring. It’s not the first time I’ve seen it over the years, but in my previous experience I’ve seen it when handling unmanaged solutions. It happens when you delete an item in the development environment, re-create it as a different item type (with the same underlying system name), and then deploy it as unmanaged. The solution import in the second environment fails due to the difference in type (as it sees the same name). This, of course, is to be expected.

But here we’ve been using managed solutions for deployment, and as mentioned above, they’re detractive as well. The expected behaviour (at least from my side of things) would be that the system would note that the item type has changed, remove the old item, & import the new item. In my mind, that’s logical, but apparently not?

See, even managed solutions have their limitations, of which this is one. Having checked with several other people who I reached out to around this, I’ve discovered that it can’t work in the way that I was expecting it to. Instead, a specific process has to be followed:

  1. In the development environment, remove the item, & export the solution as managed
  2. In the downstream environment(s), deploy this (interim) managed solution. This will remove the item from the environments
  3. In the development environment, re-create the item with the different system type. Then export it as managed
  4. In the downstream environments, deploy this solution. This will then add the item (with the new system type) into the environment.

This means that development & deployment teams (if separate ones) need to co-ordinate around this, to ensure it’s done in the right way. It could also be developed/exported in succession, and then imported in succession as well (either manually, or through an Azure DevOps Pipeline, for example).
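One small check that can save a wasted import is to query the target environment’s metadata first and confirm what type the column currently has there. A rough sketch (the table logical name below is made up, as the error doesn’t tell us which table mdm_field sits on; the org URL and token are placeholders too):

```python
import requests

ORG_URL = "https://uat.crm.dynamics.com/api/data/v9.2"
HEADERS = {"Authorization": "Bearer <access-token>"}

table = "mdm_order"   # hypothetical table logical name
column = "mdm_field"  # the column from the error message

# Ask the target environment what type the column currently is
resp = requests.get(
    f"{ORG_URL}/EntityDefinitions(LogicalName='{table}')"
    f"/Attributes(LogicalName='{column}')",
    params={"$select": "LogicalName,AttributeType"},
    headers=HEADERS,
)
resp.raise_for_status()

print(f"{column} is currently a {resp.json()['AttributeType']}")
# If this still says 'String', the interim managed solution (with the column removed)
# needs deploying before the version that re-creates it as a Lookup
```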

This worked wonderfully for us, and to be honest, I was quite relieved after several hours of frustration with things. Even better, it was a Friday, so meant that the week could end well!

Have you ever come across this, and been frustrated as well? Have you got a similar story with something else that happened to you around solutions? Drop a comment below – I’d love to hear!

Customising Case Resolutions

Well, the title is a bit of a mouthful, I’ll admit. Hopefully though this brings some good information, and can help people out.

Cases are wonderful things, and can be used for tracking client interactions, compliments/complaints, and so many other things. What cases do have is the ability to resolve them, and provide information around the resolution.

Now, the standard way of doing this provides the following screen:

There’s the ability to set the Resolution Type (a dropdown, aka Choice, field), & to put in free text for the Resolution itself (allowing us to track information around it). There are also time fields, which can be used for working out the time spent, as well as any time that’s going to be chargeable.

Now when going in to modify these, we’d think to open up the Case Resolution table. However, this isn’t actually the right place to do it. Instead, we need to update the Case table itself, as the Case Resolution options come from the Case Status field!

Somewhat annoyingly, it’s not possible to do this through the new ‘Maker’ interface:

In order to actually handle this, we need to switch across to the Classic editor to set this up. This could be because it’s actually a situation of having both parent & child entries. What I mean by this is that there’s the actual status (being Active, Resolved or Cancelled), and then a reason under each one. Hopefully at some point it’ll be updated into the new UI, so that we can do it from there.

We’ll need to change the Status item to ‘Resolved’, & can then add in the options that we want:

After adding them, we need to save & publish, and then they’ll show up for us, and are able to be selected:

So that’s great – we’re able to customise it. But what if we’re wanting to customise the actual ‘Resolve Case’ form itself? Not everyone wants to show Time/Billable Time on it (quite a few of our clients ask us to remove it), and perhaps they want to add additional custom fields.

So from the usual perspective of doing this, we’d open up the Case Resolution table, create new fields as required, and modify the existing form (we’re not able to create any other forms for this specific table). After all, this is how we’d do it for any table in the system (whether a standard one, or a custom one). This is going to be the Main form, rather than the QuickCreate one:

We save & publish it, and then would open up a Case record, click ‘Resolve Case’, and expect to see it. However, that doesn’t happen, which was most puzzling to me!

It turns out that there are two things needed to be done in order to get to see our ‘custom’ form (though it’s not really custom, as it’s modifying the default form, but whatever).

  1. We need to modify security permissions for users, which is a critical requirement. An example of this is shown below:
Security Role: Customer Service Representative

2. We need to enable customisable dialogues. Yes, it’s a setting that needs to be updated in order for users to see the custom layout of the form. If we don’t do this, they’re shown the default form, even though we’ve modified it! It seems a little strange that the system has this concept of a ‘shadow’ form, but I guess that’s how it is.

To do this, we need to go into the Service Management settings area. I usually launch this through the Customer Service Hub app, though it’s available through several of the other standard apps as well:

Once there, we need to click into the Service Configuration menu item, and then change the ‘Resolve Case Dialogue’ option as shown below:

Remember to click the ‘Save’ button to save this.

Finally we can go back to our Case record, click ‘Resolve Case’, and look what appears!

So in summary, it’s definitely possible to modify & change the way that Case resolutions works in the system. It does take a little bit of fiddling around with settings in different areas, which can be confusing if we’re not used to this, but can give a great result in the end.
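As a final aside, a useful way of seeing how the pieces hang together is to resolve a case programmatically: the CloseIncident action takes an incidentresolution (Case Resolution) record for the free-text resolution and time spent, plus a Status value, and that Status is simply one of the Case table’s ‘Resolved’ status reasons, including any custom ones added as described above. A minimal sketch, with placeholder org URL, token and case GUID (5 is the out-of-the-box ‘Problem Solved’ value):

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {
    "Authorization": "Bearer <access-token>",
    "Content-Type": "application/json",
}

case_id = "33333333-3333-3333-3333-333333333333"  # placeholder

payload = {
    # The resolution details become an incidentresolution (Case Resolution) record
    "IncidentResolution": {
        "@odata.type": "Microsoft.Dynamics.CRM.incidentresolution",
        "subject": "Resolved - replacement part shipped",
        "incidentid@odata.bind": f"/incidents({case_id})",
        "timespent": 30,  # billable time, in minutes
    },
    # Status is a status reason under the Case table's 'Resolved' state;
    # 5 is the standard 'Problem Solved', custom reasons use their own values
    "Status": 5,
}

resp = requests.post(f"{ORG_URL}/CloseIncident", json=payload, headers=HEADERS)
resp.raise_for_status()
```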

Have you ever come across this, and wondered how to do it? Have you developed Case Resolutions any further? Drop a comment below – I’d love to hear!