Customising Case Resolutions

Well, the title is a bit of a mouthful, I’ll admit. Hopefully though this brings some good information, and can help people out.

Cases are wonderful things, and can be used for tracking client interactions, compliments/complaints, and so many other things. What cases also offer is the ability to resolve them, and to record information around the resolution.

Now, the standard way of doing this provides the following screen:

There’s the ability to set the Resolution Type (being a dropdown, aka Choice, field), & to put in free text for the Resolution itself (allowing us to track information around it). There are also time fields, which can be used for working out the time spent, as well as any time that’s going to be chargeable.

Now when going in to modify these, we’d think to open up the Case Resolution table. However, this isn’t actually the right place to do it. Instead, we need to update the Case table itself, as the Case Resolution items actually come from the Case Status field!

Somewhat annoyingly, it’s not possible to do this through the new ‘Maker’ interface:

In order to actually handle this, we need to switch across to the Classic editor to set this up. This could be because it’s actually a situation of having both parent & child entries. What I mean by this is that there’s the actual status (being Active, Resolved or Cancelled), and then a reason under each one. Hopefully at some point it’ll be updated into the new UI, so that we can do it from there.

We’ll need to change the Status item to ‘Resolved’, & can then add in the options that we want:

After adding them, we need to save & publish, and then they’ll show up for us, and are able to be selected:

So that’s great – we’re able to customise it. But what if we’re wanting to customise the actual ‘Resolve Case’ form itself? Not everyone wants to show Time/Billable Time on it (quite a few of our clients ask us to remove it), and perhaps they want to add additional custom fields.

So from the usual perspective of doing this, we’d open up the Case Resolution table, create new fields as required, and modify the existing form (we’re not able to create any other forms for this specific table). After all, this is how we’d do it for any table in the system (whether a standard one, or a custom one). This is going to be the Main form, rather than the QuickCreate one:

We save & publish it, and then would open up a Case record, click ‘Resolve Case’, and expect to see it. However, that doesn’t happen, which was most puzzling to me!

It turns out that there are two things needed to be done in order to get to see our ‘custom’ form (though it’s not really custom, as it’s modifying the default form, but whatever).

  1. We need to modify security permissions for users, which is a critical requirement. An example of this is shown below:
Security Role: Customer Service Representative

  2. We need to enable customisable dialogues. Yes, it’s a setting that needs to be updated in order for users to see the custom layout of the form. If we don’t do this, they’re shown the default form, even though we’ve modified it! It seems a little strange that the system has this concept of a ‘shadow’ form, but I guess that’s how it is.

To do this, we need to go into the Service Management settings area. I usually launch this through the Customer Service Hub app, though it’s available through several of the other standard apps as well:

Once there, we need to click into the Service Configuration menu item, and then change the ‘Resolve Case Dialogue’ option as shown below:

Remember to click the ‘Save’ button to save this.

Finally we can go back to our Case record, click ‘Resolve Case’, and look what appears!

So in summary, it’s definitely possible to modify & change the way that Case resolutions work in the system. It does take a little bit of fiddling around with settings in different areas, which can be confusing if we’re not used to this, but it can give a great result in the end.

Have you ever come across this, and wondered how to do it? Have you developed Case Resolutions any further? Drop a comment below – I’d love to hear!

Record security with Power Automate

Today’s post is around record security, and how Power Automate can really be quite useful with this!

Let’s take a quick recap of how security works (which is applicable to both Dynamics 365, as well as Power Platform apps). We have the following:

  • Security roles, which are set up with specific privileges (Create/Read/Update/Delete etc) across each entity table, as well as for other system permissions
  • Users, who can have one (or more) security roles applied to them (security roles being additive in nature)
  • Teams, who can have one (or more) security roles applied to them. Users are added into the team, and inherit all permissions that the team has (much easier than applying multiple roles on a ‘per user’ basis)

That’s great for general security setup, but it does take a system administrator to handle it. Alternatively, of course, it’s possible to use AAD Security Groups, which are connected to security teams within Power Platform; users added to them will inherit the necessary permissions.

But what if we want to allow users who aren’t system administrators to allow other users access to the records? Well, it’s also possible to share a specific record with another user – doing this allows the second user to see/access the record, even if they usually wouldn’t be able to do so. This is really great, but does require a manual approach (in that each record would need to be opened, shared with the other user/s, and then closed).

I’ve been working on a project recently where we have the need to share/un-share a larger number of records, but with a different user for each record. We’ve been looking into different ways of doing this, and obviously Power Automate came into mind! We didn’t want to use code for this, for a variety of reasons.

The scenario we had in mind was to have a lookup to the User record; when this was populated with a user, the record would then be shared with them. This would be great, as we could bulk-update records as needed (even from an integration perspective), and hopefully all would work well.

So with that, I started to investigate what options could be available. Unfortunately, there didn’t seem to be any out of the box connectors/actions that could be used for this, which was quite disheartening.

My next move was to look at the user forums, & see if anyone had done anything similar. I was absolutely excited to come across a series of responses from Chad Althaus around this exact subject! It turns out that there’s something called ‘Unbound Actions’, which is perfect for the scenario that we’re trying to achieve.

There are two types of actions available within Power Automate:

  • Bound actions. These are actions that target a single entity table or a set of records for a single entity table
  • Unbound actions. These aren’t bound to an entity type and are called as static operations. They can be used in different ways

There are quite a lot of unbound actions available to use:

The one I’m interested in for this scenario is the GrantAccess action. More information around this can be found at https://docs.microsoft.com/en-us/dynamics365/customer-engagement/web-api/grantaccess?view=dynamics-ce-odata-9

It does require some JSON input, but when formatted correctly, it shows along the following lines:
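(The below is a rough sketch rather than the exact screen we used – the GUIDs are just placeholders, the Case table is assumed here, and the precise parameter shape should be checked against the GrantAccess documentation linked above.)

    {
      "Target": {
        "@odata.type": "Microsoft.Dynamics.CRM.incident",
        "incidentid": "00000000-0000-0000-0000-000000000001"
      },
      "PrincipalAccess": {
        "Principal": {
          "@odata.type": "Microsoft.Dynamics.CRM.systemuser",
          "systemuserid": "00000000-0000-0000-0000-000000000002"
        },
        "AccessMask": "ReadAccess,WriteAccess"
      }
    }

Note that in the documented shape, the user & the AccessMask sit together inside the ‘PrincipalAccess’ parameter.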

The different parts of this work as follows:

  • Target is the actual record we’re wanting to apply the action to
  • SystemUserID is the actual system user, and we also need to specify the @odata.type for it
  • AccessMask is what we’re wanting to do when sharing the record (as there are different options available for sharing, ie ReadOnly, Edit, ShareOnwards, etc)

Using this, we’ve therefore built out the following scenario:

  1. Field added to the record, looking up to Users
  2. Relevant users who are able to access the record can set this lookup field to be a specific user record (who doesn’t have access to this record)
  3. Power Automate flow fires on the update of the record when it’s saved (filtering on just this attribute), sharing the record with the selected user
  4. The user then gets an email to notify them that the record has been shared with them, with a URL link to it (it’s somewhat annoying that there’s no inbuilt system notification when a record has been shared with you, but I guess that’s something we’re having to live with!)
  5. They can then go in & access the record as they need to

We’ve also given some thought to general record security, and have additionally implemented the following as well:

  1. If the user lookup value is changed, we obviously share the record with the new user that’s been saved to it
  2. Using a different Unbound Action (RevokeAccess), we remove the sharing of the record with the previous user (we have another field that holds the previous value of the lookup, which we use to pass into the action, as otherwise we wouldn’t actually know who the previous user was!). A sketch of this is shown below.
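As a rough sketch (again with placeholder GUIDs, and assuming the Case table – the exact parameter shape should be checked against the RevokeAccess documentation), the input looks something along these lines:

    {
      "Target": {
        "@odata.type": "Microsoft.Dynamics.CRM.incident",
        "incidentid": "00000000-0000-0000-0000-000000000001"
      },
      "Revokee": {
        "@odata.type": "Microsoft.Dynamics.CRM.systemuser",
        "systemuserid": "00000000-0000-0000-0000-000000000002"
      }
    }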

All in all, we’re quite happy that we’ve managed to come up with this solution, which is working splendidly for us. Also, major thanks to Chad for his assistance in getting the syntax correct!

Have you ever needed to do something like this? Did you manage to implement it in some way? Drop a comment below – I’d love to hear how your experience was!

‘Ghost’ lookup value following deployment

This is something that stumped me fairly recently. It’s also something where I struggled to work out what I should use as the title for this post! Let me share what happened.

I’m working on a project that’s quite critical (COVID-19 related). It’s a project where we’ve built an additional wrapper around Dynamics 365, to provide specific functionality for the pandemic. It’s being rolled out (the same solution) to multiple clients, and is only using the functionality from Power Platform. No custom code at all.

Now, before going into the specifics around it, let’s take a moment to revisit what a lookup field is, and what it does. Essentially a lookup field connects two tables together (wow – that felt strange not to use the word ‘entity’!). In the front interface, it’s used for a 1:N relationship.

So for example, we can have a lookup from Account to Contact, to set the primary contact for the account. The user navigates to the field, searches for the record they’re wanting to associate, and saves it.

Underneath, there’s a relationship that’s automatically created between the two tables, showing the way that the relationship will go (ie 1:N or N:1). This is created on both sides (more on that another time around dependencies), and most people will never need to modify it.

When I first started with this particular project, I got the solution, and deployed it into the Dev environment (for the project that I was on). On testing it out, I found something very interesting. We’re using the Case (Incident) table, and there are various lookup fields on it. One of these was already populated with a value. Hmm – that’s interesting, I thought. It was a new deployment, and we hadn’t set any static data up yet at all. So how could it already be populated?

How is this being set, when I’ve not entered it into the system as a record…

Furthermore, I was unable to save the Case record. When I tried to, I was getting an interesting error:

On drilling down into the error log (which admittedly is actually getting better in the details shown in it, thankfully!), it turned out to be because I didn’t have access to the referenced record (in the lookup field). It just didn’t exist.

So the lookup field value was coming in with a hard-coded GUID (record identifier). But how was this being done, especially if there weren’t any records (of that type) in the system at all?

From my experience of things, I could think of two ways in which to populate a lookup field with a hard-coded value:

  • Through a ‘real-time’ Power Automate flow, on create of the record. It’s possible to set a GUID value in the flow, and then it would be set
  • Through custom code, running on the form. Again, it’s possible to hard-code a GUID there, and then set the field

However on checking both options, neither of them was happening. There were no Power Automate flows touching the Case record, and no custom code at all on the Case.

It was then, digging through the other parts of the solution, that I saw various Business Rules. For those unfamiliar with these, I’ll quote from the official Microsoft documentation around them:

By combining conditions and actions, you can do any of the following with business rules:

  • Set column values
  • Clear column values
  • Set column requirement levels
  • Show or hide columns
  • Enable or disable columns
  • Validate data and show error messages
  • Create business recommendations based on business intelligence.

I’ve used Business Rules (somewhat extensively) before. However on going into the one for the Case table, I found that something was happening that I wasn’t aware could happen! It’s actually possible to set a lookup field value through it:

I spy a lookup option

Even though we’ve deployed the solution from the original development environment to a different environment, this is still set. But there are no records that are available:

I had never thought that it would be possible – to set a static value (eg a number, or some text), fine. But to set referential data? Wow.

Obviously this can be quite helpful. The bit where it’s NOT helpful though is when deploying the solution to another environment (as this situation was). It doesn’t help if you re-create the record that it’s referring to using the same record name, as it’s referencing the underlying GUID (which you can’t re-create). This really does take solution deployment into a whole new perspective, where you need to be careful around these sorts of things as well.

So something new that I’ve learned (I do try to learn something new each day), and specifically around an area I thought I knew quite well. It did take some time, but I’m glad that I (finally) found the root cause of it, and identified what was causing it.

Have you ever had something like this happen, where you’re searching & searching for the cause of it? Drop a line below – I’d love to hear!

Data Export Service Connection Issues

This is a slightly different post from the usual stuff that I talk about. It’s much more ‘techy/developer’ focused, but I thought it would still be quite useful for people to keep in mind.

The background to this comes from a project that I’ve been working on with some colleagues. Part of the project involves setting up an Azure SQL database, and replicating CDS data to it. Why, I hear you ask? Well, there are some downstream systems that may be heavy users of the data, and as we well know, CDS isn’t specifically built to handle a large number of queries against it. In fact, if you start hammering the CDS layer, Microsoft is likely to reach out to ask what exactly you’re trying to do!

Therefore (as most people would do), we’re putting in database layer/s within Azure to handle the volume of data requests that we’re expecting to occur.

So with setting up things like databases, we need to create the name for them, along with access credentials. All regular ‘run of the mill’ stuff – no surprises there. For adequate security, we usually use one of a handful of password generators that we keep to hand. These have many advantages, such as ensuring that the password isn’t something we (as humans) are dreaming up, which might be easier to guess. I’ve used password generators over the years for many different professional & personal projects, and they really are quite good overall.

Example of a password generation tool

Once we had the credentials & everything set up, we then logged in (using SQL Server Management Studio), and all was good. Everything that we needed was in place, and it was looking superb (from the front end, at least).

OK – on to getting the data actually loaded in. To do this, we’re using the Data Export Service (see https://docs.microsoft.com/en-us/power-platform/admin/replicate-data-microsoft-azure-sql-database for further information around this). The reason for using this is that the Data Export Service intelligently synchronises the entire database initially, and thereafter synchronises on a continuous basis as changes occur (delta changes) in the system. This is really good, and means we don’t need to build anything custom to handle it. Wonderful!

Setting up the Data Export Service takes a little bit of time. I’m not going to go into the details of how to set it up – instead there’s a wonderful walkthrough by the AMAZING Scott Durow at http://develop1.net/public/post/2016/12/09/Dynamic365-Data-Export-Service. Go take a look at it if you’re needing to find out how to do it.

So we were going through the process. Part of this is needing to copy the Azure connection string into a script that you run. When you do this, you need to re-insert the password (as Azure doesn’t include it in the string). For our purposes (as we had generated this), we copied/pasted the password, and ran things.

However all we were getting was a red star, and the error message ‘Unable to validate profile’.

As you’d expect, this was HIGHLY frustrating. We started to dig down to see what actual error log/s were available (with hopefully more information on them), but didn’t make much progress there. We logged in through the front end again – yes, no problems there, all was working fine. Back to the Export Service & scripts, but again the error. As you can imagine, we weren’t very positive about this, and were really trying to find out what could possibly be causing this. Was it a system error? Was there something that we had forgotten to do, somewhere, during the initial setup process?

It’s at these sorts of times that self-doubt can start to creep in. Did we miss something small & minor, but that was actually really important? We went over the deployment steps again & again. Each time, we couldn’t find anything that we had missed out. It was getting absolutely exasperating!

Finally, after much trial & error, we narrowed the issue down to one source. It’s something we hadn’t really expected, but had indeed caused all of this to happen!

What happened was that the password that we had auto-generated had a semi-colon (‘;’) in it. In & of itself, that’s not an issue (usually). As we had seen, we were able to log into SSMS (the ‘front-end’) successfully, with no issues at all.

However, when put into the connection string, the semi-colon is treated as a special character (it’s the separator between the different parts of the string). It was therefore not reading in the entire password, which was causing the entire thing to fail! To resolve this was simple – we regenerated the password to ensure that it didn’t include a semi-colon character within it!
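To illustrate (with entirely made-up server, database & password values here), a connection string is simply a set of key=value pairs separated by semi-colons, so a password containing one gets cut short at that point:

    Server=tcp:contoso-des.database.windows.net,1433;Initial Catalog=CDSReplica;User ID=desadmin;Password=xK9;vB2!pQ;Encrypt=True;

In the above, only ‘xK9’ would be read as the password, with ‘vB2!pQ’ treated as a (meaningless) separate entry – which is why the profile validation fails.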

Now, this is indeed something that’s quite simple, and should be at the core of programming knowledge. Most password generators have an option to avoid this happening, but not all of them do. Unfortunately we had fallen subject to this, but thankfully all was resolved in the end.

The setup then carried on successfully, and we were able (after all of the effort above) to achieve what we had set out to do initially.

Have you ever had a similar issue? Either with passwords, or where something worked through a front-end system, but not in code? Drop a comment below – I’d love to hear!

Marketing & an unusual error

I’ll be the first to admit that I have limited experience of Dynamics 365 for Marketing. In fact, I think that it would be stretching the description to say that I have even ‘limited experience’! I’ve seen it once or twice, and have attended a few presentations on it, but apart from that, nada.

I do remember what it used to be like in its previous incarnation, but even then I didn’t really touch it. Customer Service (& Sales) are my forte, and I generally stick within those walls. Marketing traditionally was its own individual application, and only more recently has been rolled into the wider Dynamics 365 application suite. Even so, it still sometimes works in a somewhat interesting way, different from the rest of the system.

Inevitably I’ve had to actually do something with it for a client project, which has brought me to putting up this post. We had created a few marketing forms, surfaced them correctly, etc. It was great, and working well.

Then we realised that we needed to capture some additional information, in this case a list of Countries. There’s no standard entity for it within Dynamics 365, so we created our own, and loaded a list of countries (& associated data) into it. Fine – that was working without issues, including in the places that we needed to surface it.

Then we came to needing to surface the Country value on a marketing form, through a lookup. Simple, you’d have thought? Well, not so much. We went to create the field, and got presented with the following error as we did so:

The error says: ‘The role marketing services user does not have access to the entities you’ve chosen…’

In essence, the system was telling us that we weren’t able to access the entity. Though Country is a custom entity, we were logged in as users with the System Administrator role (which has access automatically to ALL entities). This left us puzzling around what to do.

The error message, thankfully, was quite clear. It was referring to a specific security role missing privileges. In this case, it was the ‘Marketing Services User’. I therefore went to check the permissions for it, and sure enough, it didn’t have permissions on the Country entity that I had created!

Now usually if a security role is missing permissions, what we do is create a custom security role (usually copying the existing role), and add the permissions to it. Best practice is NOT to edit the default security roles. The (main) reason behind this is that Microsoft could update the security role in a later update/release, which could impact on us. We therefore use custom roles to avoid this happening (& yes, I’ve seen it happen/impact in practice!).

The fly in the soup here (lovely phrase, I know) is that we couldn’t do that here. It seems that Dynamics 365 for Marketing uses an underlying security role that’s needed. Even if we had implemented a custom role, we didn’t have any idea of how to tell the system to actually use our custom role, rather than the default one that it’s currently using. Quite frustrating, I tell you!

So in the end we decided to give the default security role the necessary permissions, and see what happened:

With having granted the security permissions to the role, & saved it, we then attempted to create the marketing form field. This time, we were successful! No errors occurred during it, thankfully:

So in summary, I still have no idea why this has happened. I’ve taken a look around, but can’t find anything obvious as to how/why it actually works like this. I guess that I’d need to dig ‘under the hood’ somewhat to see what’s actually going on, and how to deal with it appropriately. For the moment, the solution is in place, and is working.

We’ve also been very careful (as mentioned above) to add just the specific custom entity to the default security role. We haven’t touched anything else within it – all other security permissions are done (as per best practice) with custom security roles, which are then allocated appropriately to users &/or teams. Hopefully this will be fine in the long-term, though we’ll definitely be keeping our eyes on it to make sure!

Have you ever come across something like this? How did you decide to go about solving it? Drop a comment below – I’d love to hear!

Update: Thanks to the amazing Carl Cookson, it turns out that this is due to an update from Microsoft in how Marketing works. See https://docs.microsoft.com/en-gb/dynamics365/marketing/marketing-fields for more information around it. Essentially it uses this role to sync to the Azure staged Marketing service, so this role needs to have the appropriate permissions.

AI Translation for Omnichannel

How to start off this post? I’ve been trying to work out how exactly I can express my excitement around this new feature for Omnichannel. Included in the Wave 2 2020 release, it’s just AMAZING. That, however, doesn’t do it true justice. So let’s see how I can describe it properly to give it due respect.

Previously I’ve mentioned the ability to use skills within Omnichannel (see https://thecrm.ninja/omnichannel-for-dynamics-365-queues-users-skills/). This can be used to indicate, for example, agents who can communicate in a certain language. That’s useful of course, but what happens when you don’t have anyone who can speak the language that the customer wants to use? It’s a problem, and one that’s really not easily solved. At least, not until now.

So, what exactly does this new translation feature do? Simple – it translates from one language to another. OK, it’s actually a little more awesome than just that. Having delved into it quite a bit over the last week or so, there are (in my view) three main benefits (with a bonus one as well!):

  1. It translates incoming text from the customer (through chat) from the language that they’re using to the language that the agent is using
  2. It translates outgoing text from the agent (through chat) from the language that the agent is using to the language that the customer is using
  3. It translates text between agents from one language to the other & vice versa (eg on an internal consult)

Now for the bonus. It doesn’t just translate text from one language to another. It follows the languages being used! So if the customer switches in mid-conversation to a different language, the system picks it up. Not only is the new incoming language translated into the agent’s language, but the replies from the agent are shown in the (new) language being used by the customer. It’ll automatically show text in the ‘last used’ language, which is really quite incredible (at least in my opinion).

There’s no fiddling around with agents needing to select the language to use, or anything else. It’s a simple click to turn it on, and then another click to turn it off. I’m going to go through the setup of it below, as there are a few fiddly bits that did confuse me for a bit.

It’s also possible to use different translation tools. At the time of writing this post, it’s possible to use Bing, Google or Azure translation models. I’m sure that there will be other options available in the future as well to use, which really opens up possibilities for clients with differing digital estates.

Translation happens in real time, so there’s no waiting around for it to actually get on with it. It’s displayed immediately on the screen for the agent to see.

Setup for translation

I found the general guides to be alright, but they weren’t too clear on a few items. I’m therefore sharing below how I went about it, in order to get things working properly. Please be aware that this isn’t in the order specified in the documentation, but in retrospect it means less switching between screens:

  1. Ensure that you have the latest updates to your Omnichannel environment (this is always a good idea, regardless of anything else!)
  2. Go to https://github.com/microsoft/Dynamics365-Apps-Samples/tree/master/customer-service/omnichannel/real-time-translation & download the ‘webResourceV2.js’ file there (if you’re unfamiliar with how to do this, click to open the file, click the ‘Raw’ button, and then save the page – ensure it’s got the ‘.js’ extension when you save it!)
  3. Ensure you have an API key to enter into the web resource file! This is what tripped me up at first. You can use any text editor (I use Notepad++) to open it up. How you get the API key will depend on the provider. For example, to set up a free account in Azure, take a look at https://docs.microsoft.com/en-us/azure/cognitive-services/translator/translator-how-to-signup. There are also some additional things that you can configure in the web resource file, but I’m not going to go into that here
  4. Go to your solutions (this can either be through the Classic interface, or through http://make.powerapps.com). You can either create a new solution to hold the web resource file, or alternatively if you have existing solutions that you’d deploy, you can add the web resource file to that. Either:
    1. In the classic interface, navigate to Web Resources, click to create a new web resource, and upload the file (ensure you select the type to be ‘Script (JScript)’), or
    2. In the modern interface, click the ‘New’ button, select ‘Web Resource’ from the ‘Other’ section, and then follow the steps above

Once it’s saved, it’ll give you a URL. Copy that, and publish the solution.

  5. Go to the Omnichannel Administration Hub, find ‘Real Time Translation’ under Settings, and set this to Yes. You can also select a default input language from the selection. Also enter the URL that you copied above. Save it
  6. You’re all done!

Agent Experience

Depending on how you’ve configured your web resource, auto translation will either be on by default, or off. If it’s not on by default, the agent can simply click within their chat window to select it to be active:

Once active, it’ll then start to translate everything, in both directions. Below are side by side screens of the customer & agent experiences. You’ll note that the customer is seeing the initial agent response in English, as the agent was the first in the conversation.

From the agent side of things, both the original language, as well as the translated language, are shown. The customer is only shown the language that they’re actually using.

If the agent isn’t sure what language the customer is using (as it’s being auto-translated for them), they can hover over the text, and it’ll show the details for it:

If the agent consults with, or transfers the session to, another agent, the second agent will see the conversation in the language that they themselves are using (with the original text as well). This allows for the possibility to pass a customer to a specialist to assist them, even if they don’t speak the same language! It’s really cool to see this in action.

Even more wonderfully, this is even stored down to the transcript level:

This is really opening up major new concepts that Omnichannel can be used for, which will be supported entirely by this feature. As I said at the beginning of this post, I’m absolutely excited for it, and we’re already envisioning how this will be able to empower our clients even more.

Do you have any questions around this? Can you think of any scenarios that this could solve for you? Drop a comment below – I’d love to hear!

PL-200 Microsoft Power Platform Functional Consultant

Well, the last week has been quite busy, on many fronts! One of those is having a few new exams come out in Beta. I’ve already taken the PL-400 (see PL-400: Microsoft Power Platform Developer Exam for my review of it). Last Friday, the new PL-200 exam was released as well, so I scheduled it in for as soon as I could sit it.

Now the PL-200 is scheduled to replace the MB-200 exam at the end of this year (2020), assuming it comes out of beta by then of course. I remember sitting my MB-200, though I didn’t write up about it at the time. Compared to some of the other exams I’ve taken, it was hefty. I’ll freely admit that I didn’t pass on the first go of it – it took me 3 tries to gain it! People will be required to take the PL-200 as a pre-requisite for attaining the Microsoft Certified: Power Platform Functional Consultant Associate badge.

So I’ve been expecting this new PL-200 to be quite similar, but with more of a Power Platform focus. It’s still heavy on Dynamics 365, and I wasn’t expecting that part to change. The existing MB-2xx series are also staying in place (for the moment, anyhow).

According to the official description for the exam:

Candidates for this exam perform discovery, capture requirements, engage subject matter experts and stakeholders, translate requirements, and configure Power Platform solutions and apps. They create application enhancements, custom user experiences, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates implement the design provided by and in collaboration with a solution architect and the standards, branding, and artifacts established by User Experience Designers. They design integrations to provide seamless integration with third party applications and services.

Candidates actively collaborate with quality assurance team members to ensure that solutions meet functional and non-functional requirements. They identify, generate, and deliver artifacts for packaging and deployment to DevOps engineers, and provide operations and maintenance training to Power Platform administrators.

The official Microsoft Learn page for the exam is at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-200, and I’d highly recommend people to go check it out. I didn’t use it that much, but felt that I was on reasonable grounds with existing knowledge. It’s mostly there, but (at least in my exam) there were some sneaky extras that I was NOT really expecting. Hopefully I managed to get them (mostly) accurate!

Once again, I sat the exam through the proctored option (ie from home). The experience went without issues for once – sign in was fine, no issues with my headset during check-in, exam loaded & worked without problems at all.

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). I’ve tried to group things together as best as possible for the different subject areas.

  • Environments
    • Different types of environments, what each one is used for, how to set/switch them between the different types
    • How to handle security/restrict access as necessary
  • Field types. All of the available field types, what are the benefits of each, and when each type should be used
  • Data storage types. Differences between Office documents (eg Excel), CDS, SQL Server, Azure SQL. When to use each one best
  • Charts. How they’re set up, how they can be shared with other users.
  • System views. What these are, who can access them, how to set them up
  • Entity forms. The different types of forms available, how to set them up, limitations of each. When each one should be used for a given scenario
  • Model apps. Site map. What this is, how it’s used. Implementing/customising it, the different controls available & what each one does
  • Entity editable grids
    • What these are, how they can be used, how to enable & set them up
    • Limitations that they have within the system
  • Entity/record ownership. The different types of ownerships available, benefits of each, when each should be used for a given scenario
  • Data management
    • Data importing from different sources, different methods to import data
    • What is data mapping for import, and how it’s used
  • Duplicate detection. What it is, what it does, how it works. How to implement & configure it
  • Microsoft Word templates. How they can interact with Dynamics 365, how to set them up/adjust them, what they can be used for
  • Canvas Apps
    • Expression/function types, what they are, how they’re used
    • Handling data (eg collections)
    • Offline usage & data storage
    • Controls that can be used, navigating around, loading/saving data.
  • Power Virtual Agent/Chatbots.
    • Setting them up, deploying them onto websites, deploying them into Teams
    • Configuring topics, routing, handling unknown questions
    • Bot model data, including being able to access across multiple chatbots
    • Reporting on their usage, & how customer engagements have been processed
  • Power App portals
    • Registering users, registration code process
    • Validating/confirming user accounts
    • Forms security, displaying/hiding forms & data
  • AI capabilities. AI models available. Pre-built models vs custom training, capabilities (eg text scanning), and when to use each one.
  • Omnichannel
    • What it is, when it’s used
    • How to implement, deploy & configure customers being able to be sent through to it
  • Automation
    • Workflows, Power Automate, Business Process Flows
    • What each one is, benefits/use cases for each one, when to use each for specific scenarios
  • Power Automate
    • What are triggers, & how do they work
    • What are actions, and how do they work
    • What are connectors, and how do they work
    • Prebuilt vs custom connectors, capabilities, and when to use each one
    • How to set up each type & configure them
    • Instant vs Scheduled vs Triggered
    • Security – how to enable/disable their use by users
  • Business Process Flows
    • What they are, how they’re used, limitations that they have
    • How to handle security for them
  • Business rules
    • What they are, how they’re used, how to set up/configure
    • How to use them in different parts of the system (eg forms, apps, etc)
    • Actions vs Conditions vs Recommendations
  • UI Flows (RPA)
    • What these are, how they are used
    • Requirements in order to use them
    • Desktop vs Cloud
    • Implementation, customisation, configuration & deployment
    • Limitations of them
    • Data extraction from runs
  • Security & Compliance
    • Security roles, security teams, security groups
    • What each one is, how it’s used
    • System auditing, what it is, how it’s used, how to implement & configure
    • How to access & run user audit log reports
  • PowerBI. Setting up & sharing dashboards, setting up & configuring alerts, security options/roles & how they work with data
  • Dynamics 365 integrations. What other systems can integrate directly with Dynamics 365, & any limitations that they may have

The main surprise for me was mostly around the UI flows, and the various questions I had on them. I’ve not played around with them (yet!), but they are really cool!

If you’re going to take this, I’d love to hear how your experience of it went. Drop a comment below for me to see!

PL-400: Microsoft Power Platform Developer Exam

I’ve been continuing with taking new exams as they come out. Having recently taken the MB-400 exam (see MB-400 Power Apps & Dynamics 365 Developer Exam), I was slightly surprised to see the announcement that it was going to be replaced!

Admittedly, I was also surprised (in a good way) that I passed the MB-400, not being a developer! It’s been quite amusing to tell people that I’m a certified Microsoft Dynamics Developer. It definitely puts a certain look on their faces, which always cracks me up.

Then again, the general approach seems to be to move all of the ‘traditional’ Dynamics 365 exams to the new Power Platform (PL) format. This obviously includes re-doing the exams to be more Power Platform centric, covering the different parts of the platform rather than just the ‘first party apps’. It’s going to be interesting to see how this landscape extends & matures over time.

The learning path came out in the summer, and is located at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-400. It’s actually quite good. There’s quite a lot that overlaps with the MB-400 exam material, as well as the information that’s recently been covered by Julian Sharp & Joe Griffin.

The official description of the exam is:

Candidates for this exam design, develop, secure, and troubleshoot Power Platform solutions. Candidates implement components of a solution, including application enhancements, custom user experience, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates must have strong applied knowledge of Power Platform services, including in-depth understanding of capabilities, boundaries, and constraints. Candidates should have a basic understanding of DevOps practices for Power Platform.

Candidates should have development experience that includes Power Platform services, JavaScript, JSON, TypeScript, C#, HTML, .NET, Microsoft Azure, Microsoft 365, RESTful web services, ASP.NET, and Microsoft Power BI.

So the PL-400 was announced on the Wednesday of Ignite this year (at least in my timezone). Waking up to hear of the announcement, I went right ahead to book it! Unfortunately, there seemed to be some issues with the Pearson Vue booking system. It took around 12 hours to be sorted out, & I then managed to get it booked Wednesday evening, to take it Thursday.

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change.

There were a few glitches during the actual exam. One or two questions with answers that didn’t make sense (eg line 30 does X, but the code sample finished at line 18), and question numbers that seemed to jump back & forth (first time it’s happened to me). I guess that I’ve gotten used to at least ONE glitch happening somewhere, so this was par for the course.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Model Apps.
    • Charts. How they work, what drives them, what they need in order to actually work, configuring them
    • Visualisation components for forms. What they are, examples of them, what each one does, when to use each one
    • Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
    • Entity alternate keys. What these are, when they should be used, how to set them up & configure them
    • Business Process Flows. What these are, how they can be used across different scenarios, limitations of them
    • Business Rules. What these are, how they can be used across different scenarios, limitations of them
  • Canvas apps
    • Different code types, expressions, how to use them & when to use them
    • Network connectivity, & how to handle this correctly within the app for data capture (this was an interesting one, which I’ve actually been looking at for a client project!)
    • Power Apps solution checker. How to run it, how to handle issues identified in it
  • Power Automates
    • Connectors – what these are, how to use them, security around them, querying/returning results in the correct way
    • Triggers. What is a trigger, how do they work, when to use/not use them
    • Actions. What these are, how they can be used, examples of them
    • Conditions. What these are, how to use them, types of conditions/expressions/data
    • Timeouts. How to use them, when to use them, how to configure
  • Power Virtual Agents. How to set them up, how to configure them, how to deploy them, how to connect them to other systems
  • Power App Portals. Different types, how to set them up, how to configure them, how they can work with underlying data & users
  • Solutions
    • Managed, unmanaged, differences between them, how to use each one.
    • Deploying solutions. Different methods that can be used to do it, best practise for each, when to use each one
    • Package Deployer & how to use it correctly
  • Security.
    • All of the different security types within Dynamics 365/Power Platform. Roles/Teams/Environment/Field level. How to set up, configure, use in the right way.
    • Hierarchy security
    • Wider platform security. How to use Azure Active Directory for authentication methods, what to know around this, how to set it up correctly to interact with CDS/Dynamics 365
    • What authentication methods are allowed, when/how they can be used, how to configure them
  • ‘Development type stuff’
    • APIs. The different APIs that can be used, methods that are valid with each one, the Organisation service
    • Discovery URLs. What these are, which ones are able to be used, how they’d be used/queried
    • Plugins. How to set up, how to register, how to deploy. Steps needed for each
    • Plugin debugging/troubleshooting. Synchronous vs asynchronous
    • Component types. Actions/conditions/expressions/data operations. What these are, when each is used
    • Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
    • Javascript web resources. How to use these correctly, how to set them up on entities/forms/fields
    • Powerapps Component Framework (PCF). What these are, how to develop them, how to use them in the right way
  • System Design
    • Entity relationship types. What they are, what each one does, how they work, when to use them appropriately. Tools that can be used to display them for system design purposes
    • Storage considerations across different types, including CDS & Azure options
  • Azure items
    • Azure Consumption API. How to monitor, how to handle, how to change/update
    • Azure Event Grid. What it is, the different ways in which it can be used, when each source should be used
  • Dynamics 365 for Finance. Native functionality included in it

The biggest surprise that I had really when thinking back to things was the inclusion of Dynamics 365 for Finance in it. Generally the world is split into ‘front of house’ (being Dynamics 365/Power Platform), and ‘back of house’ (Dynamics 365 for Finance & Supply Chain Management). The two don’t really overlap, though they’re supposed to be coming more together over time. Being that this is going to happen, I guess it’s only natural that exam questions around each other will come up!

Overall it was quite a good exam. Some of the more ‘code-style’ questions were somewhat out of my comfort zone, and I’ll freely admit to guessing some of the answers around them! Time will tell, as they say, to see how I’ve done in it.

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it!

Good news for Power Automate Flows!

As a starter for 10, this wasn’t actually the blog post that I was going to write today. In fact, the subject of the post wasn’t even going to be about Power Automate! However, there was some really amazing news that dropped today from Microsoft, which I just couldn’t pass up being able to talk about.

You’ve guessed it – it’s about Power Automate! Well, I suppose that the post title was somewhat of a giveaway, wasn’t it…ah well. So let’s go ahead and find out what this is all about then!

To date, we’ve been able to put Power Automate flows into a solution. Well, it wasn’t there exactly at the beginning of things, but it happened somewhere along the way. This was very convenient, as we didn’t then need to deploy each one individually to different environments. Some solutions can contain dozens & dozens of flows, and we really do love to package them all up together for ease of movement.

So that was good. But there was still a (major) ‘bugbear’ (as I like to refer to them as). This is the fact that after we deploy a Power Automate flow, we then need to go into it & (re)authenticate it. This is due to the fact that the connector/s that it uses contains what is referred to as a ‘secret’, and these can’t be moved across environments. As a result, we need to essentially recreate the ‘secret’ in the connector (ie authentication details) every time we move it. This is an annoyance (if you have one or two flows), and an absolute bloody nightmare if you have lots.

For the technical minded – every action in a flow is bound to a specific instance of a connection that it will use to “execute” that action. This is why when moving flows across environments, users are required to rebind every operation to a connection.

For example, I’ve been working with COVID-19 triage solutions. These contain lots of flows within them, connecting to multiple different sources, and doing different things. Every time we’ve performed a release (even if it’s just a simple update), we’ve needed to manually go through each flow, (re)authenticate them, and turn them on. If you forgot one, then everything can come crashing down & not work! But there’s been no other way to do it. To represent this visually, we have the following diagram

For each & every Power Automate, the connection line gets ‘broken’ when it’s deployed, and needs to be re-made.

Until now, that is. For today, Microsoft has announced the Public Preview for ‘Connection References’. Now when something is put into Preview, I usually caveat the usage of it with saying things like ‘it might go away, or not be released for a while’. But I’m going to be quietly confident about this particular piece of functionality, as I really don’t think it’s going to be pulled!

So what exactly are these? Well, in (mostly) simple terms, Connection References provide an ‘in-between’ or ‘abstraction’ layer for the connections that use them. Let’s show this visually as well

We still need to re-authenticate the Connection Reference once we deploy things. But let’s now see how we can save ourselves a massive headache, and LOTS of time:

Oooo…now this is looking better. Instead of having to update three Power Automate flows, we only have to update the SINGLE Connection Reference that’s sitting in the middle. Now multiply that by however many flows you have (eg sending emails out, etc), and start calculating how much time you’ll now be able to spend on coffee breaks, rather than doing this manually one at a time…

We can create Connection References directly from within the solution:

We then give it a name & description, choose which connector we’re going to be using, and either select an existing connection or set a new one up:

Once we’re finished, we click ‘Create’ at the bottom. Voila – we can now see it within our solution!

Note: Interestingly enough I couldn’t actually see this within the solution after I created it, even with the component selector set to show ‘All’. How I actually got them to display was changing the component selector to ‘Connection Reference’, and they then showed up. I’m thinking that this is due to it being new today/in the process of rolling out, and am expecting it to display without any issues in the near future

Let’s take a look at a Power Automate flow itself now to see how it’s referenced. When we open an item with a connector, we can now see the following:

We’re able to select the Connection Reference that we’re wanting to use. Simple, yet so powerful.

When importing a solution containing a Connection Reference, we will be prompted during the import process to set the actual connection that should be used with it:

If you don’t have any connections set up already in the environment, you’ll be able to create a new one from the dropdown.

Some things to note around this:

  • During the preview phase, Microsoft has specified that a single Connection Reference can only be used by up to 16 flows. This limitation will be removed once it goes GA
  • Existing flows will not be automatically upgraded. What you can do though is export the unmanaged solution, re-import it to the same environment, and then they will be automatically created for you. The flow/s can then be edited to update them to the correct connection reference record
  • The connection name and the connection reference name are not currently synchronised, so they can end up being different. It’s therefore best to keep the naming consistent – don’t set different names for connections and their associated connection references

In summary – this is an awesome step forward with Power Automate functionality. I’m already tasking some of the developers on the team to re-do existing solutions to use it for ease of use. How do you think it’ll best benefit you? Drop a comment below!

Handling ‘Out of Hours’

Let’s face it – we can be quite spoiled at times. As a customer, we can sometimes expect that companies be available 24/7 to service our requests, needs, issues, etc. That would be wonderful, wouldn’t it! Imagine that you have a mobile phone issue at 2am – you could call up your provider, and have it handled (or a new handset sent out) immediately. That would be quite nice!

Unfortunately the real world doesn’t (always) quite work like that. Of course there are companies that operate on a multi-national or even global scale, and there’s always customer service available (Amazon – I’m thinking of you right now!).

Previously I’ve gone into how we can set operating hours for a company, so that the ability to contact a customer support agent is only shown during these times. Take a look at Handling Company Hours for a refresher on this.

But sometimes not showing the ability to contact support could potentially be counter-productive. Customers may think that our website isn’t working properly, and possibly attempt to try to reach us through other means. This could quite well frustrate them.

Due to this, we have a nice little piece of functionality that’s now come out in Omnichannel. It’s small, simple, but yet quite brilliant in my humble opinion. This is the ability to have a chat widget available, but let customers know that it’s currently out of company hours.

To activate this, we need to open the Chat record in the Omnichannel Administration Hub, and go to the Design tab:

Quite helpfully, the section is labelled ‘Offline’! How much better could we get.

We do need to understand that (at the time of writing this post) it’s currently in Preview, with all of the usual caveats around how that works.

We have several items available here:

  • Show widget during offline hours. This is what actually activates the setting – leaving this to false won’t do anything for us!
  • Theme colour. This allows us to set the specific theme to be used during ‘offline’ hours. It’s actually really helpful, as it gives the customer a clear visual cue that it’s out of hours
  • Title. The title of the chat widget, which will be displayed to the user
  • Subtitle. This allows us to place a subtitle as well, for the user to be able to see

So what does this then look like? Well, let’s take a look:

Personally I think that being able to set a theme colour for offline access gives it that little edge. Customers will become aware of this (subconsciously) when visiting the website, and come to the point of not even trying to start a chat when they see that it’s out of hours.

One MAJOR thing to bear in mind. We’re only going to be given the option to set this when we have a value set for Operating Hours. Without this being set, we won’t be shown this option. Go try it for yourself and see!

There’s not really much else to this, to be honest. But I’m liking it. I know that from a personal perspective I’ve been on various websites, and have no idea if the support chat is actually working or not. With this in place, I’m able to see that it is available for use at the correct time, and not have to wonder about it.

Have you ever thought about implementing something like this? Have you actually done so? I’d be really interested to hear from you about how you went about it – please drop a comment below!