PL-400: Microsoft Power Platform Developer Exam

I’ve been continuing to take new exams as they come out. Having recently taken the MB-400 exam (see MB-400 Power Apps & Dynamics 365 Developer Exam), I was slightly surprised to see the announcement that it was going to be replaced!

Admittedly, I was also surprised (in a good way) that I passed the MB-400, not being a developer! It’s been quite amusing to tell people that I’m a certified Microsoft Dynamics Developer. It definitely puts a certain look on their faces, which always cracks me up.

Then again, the general approach seems to be to move all of the ‘traditional’ Dynamics 365 exams over to the new Power Platform (PL) format. This obviously includes re-doing the exams to be more Power Platform centric, covering the different parts of the platform rather than just the ‘first party apps’. It’s going to be interesting to see how this landscape extends & matures over time.

The learning path came out in the summer, and is located at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-400. It’s actually quite good. There’s quite a lot that overlaps with the MB-400 exam material, as well as the information that’s recently been covered by Julian Sharp & Joe Griffin.

The official description of the exam is:

Candidates for this exam design, develop, secure, and troubleshoot Power Platform solutions. Candidates implement components of a solution, including application enhancements, custom user experience, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates must have strong applied knowledge of Power Platform services, including in-depth understanding of capabilities, boundaries, and constraints. Candidates should have a basic understanding of DevOps practices for Power Platform.

Candidates should have development experience that includes Power Platform services, JavaScript, JSON, TypeScript, C#, HTML, .NET, Microsoft Azure, Microsoft 365, RESTful web services, ASP.NET, and Microsoft Power BI.

So the PL-400 was announced on the Wednesday of Ignite this year (at least in my timezone). Waking up to hear of the announcement, I went right ahead to book it! Unfortunately, there seemed to be some issues with the Pearson Vue booking system. It took around 12 hours to be sorted out, & I then managed to get it booked Wednesday evening, to take it Thursday.

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change.

There were a few glitches during the actual exam. One or two questions with answers that didn’t make sense (eg line 30 does X, but the code sample finished at line 18), and question numbers that seemed to jump back & forth (first time it’s happened to me). I guess that I’ve gotten used to at least ONE glitch happening somewhere, so this was par for the course.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Model Apps.
    • Charts. How they work, what drives them, what they need in order to actually work, configuring them
    • Visualisation components for forms. What they are, examples of them, what each one does, when to use each one
    • Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
    • Entity alternate keys. What these are, when they should be used, how to set them up & configure them
    • Business Process Flows. What these are, how they can be used across different scenarios, limitations of them
    • Business Rules. What these are, how they can be used across different scenarios, limitations of them
  • Canvas apps
    • Different code types, expressions, how to use them & when to use them
    • Network connectivity, & how to handle this correctly within the app for data capture (this was an interesting one, which I’ve actually been looking at for a client project!)
    • Power Apps solution checker. How to run it, how to handle issues identified in it
  • Power Automate flows
    • Connectors – what these are, how to use them, security around them, querying/returning results in the correct way
    • Triggers. What is a trigger, how do they work, when to use/not use them
    • Actions. What these are, how they can be used, examples of them
    • Conditions. What these are, how to use them, types of conditions/expressions/data
    • Timeouts. How to use them, when to use them, how to configure
  • Power Virtual Agents. How to set them up, how to configure them, how to deploy them, how to connect them to other systems
  • Power App Portals. Different types, how to set them up, how to configure them, how they can work with underlying data & users
  • Solutions
    • Managed, unmanaged, differences between them, how to use each one.
    • Deploying solutions. Different methods that can be used to do it, best practice for each, when to use each one
    • Package Deployer & how to use it correctly
  • Security.
    • All of the different security types within Dynamics 365/Power Platform. Roles/Teams/Environment/Field level. How to set up, configure, use in the right way.
    • Hierarchy security
    • Wider platform security. How to use Azure Active Directory for authentication methods, what to know around this, how to set it up correctly to interact with CDS/Dynamics 365
    • What authentication methods are allowed, when/how they can be used, how to configure them
  • ‘Development type stuff’
    • APIs. The different APIs that can be used, methods that are valid with each one, the Organisation service
    • Discovery URLs. What these are, which ones can be used, how they’d be used/queried
    • Plugins. How to set up, how to register, how to deploy. Steps needed for each
    • Plugin debugging/troubleshooting. Synchronous vs asynchronous
    • Component types. Actions/conditions/expressions/data operations. What these are, when each is used
    • Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
    • JavaScript web resources. How to use these correctly, how to set them up on entities/forms/fields
    • Power Apps Component Framework (PCF). What it is, how to develop components, how to use them in the right way
  • System Design
    • Entity relationship types. What they are, what each one does, how they work, when to use them appropriately. Tools that can be used to display them for system design purposes
    • Storage considerations across different types, including CDS & Azure options
  • Azure items
    • Azure Consumption API. How to monitor, how to handle, how to change/update
    • Azure Event Grid. What it is, the different ways in which it can be used, when each source should be used
  • Dynamics 365 for Finance. Native functionality included in it

The biggest surprise for me, thinking back over things, was the inclusion of Dynamics 365 for Finance. Generally the world is split into ‘front of house’ (being Dynamics 365/Power Platform), and ‘back of house’ (Dynamics 365 for Finance & Supply Chain Management). The two don’t really overlap, though they’re supposed to be coming closer together over time. Given that this is happening, I guess it’s only natural that exam questions crossing between them will come up!

Overall it was quite a good exam. Some of the more ‘code-style’ questions were somewhat out of my comfort zone, and I’ll freely admit to guessing some of the answers around them! Time will tell, as they say, to see how I’ve done in it.

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it!

Good news for Power Automate Flows!

As a starter for 10, this wasn’t actually the blog post that I was going to write today. In fact, the subject of the post wasn’t even going to be about Power Automate! However, there was some really amazing news that dropped today from Microsoft, which I just couldn’t pass up being able to talk about.

You’ve guessed it – it’s about Power Automate! Well, I suppose that the post title was somewhat of a giveaway, wasn’t it…ah well. So let’s go ahead and find out what this is all about then!

To date, we’ve been able to put Power Automate flows into a solution. Well, it wasn’t there exactly at the beginning of things, but it happened somewhere along the way. This was very convenient, as we didn’t then need to deploy each one individually to different environments. Some solutions can contain dozens & dozens of flows, and we really do love to package them all up together for ease of movement.

So that was good. But there was still a (major) ‘bugbear’ (as I like to refer to them). This is the fact that after we deploy a Power Automate flow, we then need to go into it & (re)authenticate it. This is due to the fact that the connector/s that it uses contain what is referred to as a ‘secret’, and these can’t be moved across environments. As a result, we need to essentially recreate the ‘secret’ in the connector (ie the authentication details) every time we move it. This is an annoyance if you have one or two flows, and an absolute bloody nightmare if you have lots.

For the technical minded – every action in a flow is bound to a specific instance of a connection that it will use to “execute” that action. This is why when moving flows across environments, users are required to rebind every operation to a connection.

For example, I’ve been working with COVID-19 triage solutions. These contain lots of flows within them, connecting to multiple different sources, and doing different things. Every time we’ve performed a release (even if it’s just a simple update), we’ve needed to manually go through each flow, (re)authenticate it, and turn it on. If you forget one, then everything can come crashing down & not work! But there’s been no other way to do it. To represent this visually, we have the following diagram:

For each & every Power Automate flow, the connection line gets ‘broken’ when it’s deployed, and needs to be re-made.

Until now, that is. Today, Microsoft announced the Public Preview of ‘Connection References’. Now, when something is put into Preview, I usually caveat its usage by saying things like ‘it might go away, or not be released for a while’. But I’m going to be quietly confident about this particular piece of functionality, as I really don’t think it’s going to be pulled!

So what exactly are these? Well, in (mostly) simple terms, Connection References provide an ‘in-between’ or ‘abstraction’ layer for the connections that use them. Let’s show this visually as well

We still need to re-authenticate the Connection Reference once we deploy things. But let’s now see how we can save ourselves a massive headache, and LOTS of time:

Oooo…now this is looking better. Instead of having to update three Power Automate flows, we only have to update the SINGLE Connection Reference that’s sitting in the middle. Now multiply that by however many flows you have (eg sending emails out, etc), and start calculating how much time you’ll now be able to spend on coffee breaks, rather than doing this manually one flow at a time…

We can create Connection References directly from within the solution:

We then give it a name & description, choose which connector we’re going to be using, and either select an existing connection or set a new one up:

Once we’re finished, we click ‘Create’ at the bottom. Voila – we can now see it within our solution!

Note: Interestingly enough I couldn’t actually see this within the solution after I created it, even with the component selector set to show ‘All’. How I actually got them to display was changing the component selector to ‘Connection Reference’, and they then showed up. I’m thinking that this is due to it being new today/in the process of rolling out, and am expecting it to display without any issues in the near future

Let’s take a look at a Power Automate flow itself now to see how it’s referenced. When we open an item with a connector, we can now see the following:

We’re able to select the Connection Reference that we’re wanting to use. Simple, yet so powerful.
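For the technically curious, this is (roughly) what’s happening under the hood. A solution-aware flow’s definition now points at the connection reference by its logical name, rather than at a specific connection. Here’s a simplified sketch of the relevant part of the flow JSON, with illustrative names:

    {
      "properties": {
        "connectionReferences": {
          "shared_commondataserviceforapps": {
            "api": { "name": "shared_commondataserviceforapps" },
            "connection": {
              "connectionReferenceLogicalName": "new_sharedcommondataservice_ab123"
            }
          }
        }
      }
    }

Because only the logical name travels with the solution, the actual connection (with its ‘secret’) can be swapped out per environment without touching the flow itself.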

When importing a solution containing a Connection Reference, we will be prompted during the import process to set the actual connection that should be used with it:

If you don’t have any connections set up already in the environment, you’ll be able to create a new one from the dropdown.

Some things to note around this:

  • During the preview phase, Microsoft has specified that a single Connection Reference can only be used by up to 16 flows. This limitation will be removed once it goes GA
  • Existing flows will not be automatically upgraded. What you can do though is export the unmanaged solution, re-import it to the same environment, and then they will be automatically created for you. The flow/s can then be edited to update them to the correct connection reference record
  • The connection name and the connection reference name are not currently synchronised, so they can differ. It’s therefore best to keep the naming consistent – don’t set different names for connections and their associated connection references

In summary – this is an awesome step forward for Power Automate functionality. I’m already tasking some of the developers on the team to re-do existing solutions to use Connection References for ease of use. How do you think it’ll best benefit you? Drop a comment below!

Handling ‘Out of Hours’

Let’s face it – we can be quite spoiled at times. As a customer, we can sometimes expect that companies be available 24/7 to service our requests, needs, issues, etc. That would be wonderful, wouldn’t it! Imagine that you have a mobile phone issue at 2am – you could call up your provider, and have it handled (or a new handset sent out) immediately. That would be quite nice!

Unfortunately the real world doesn’t (always) quite work like that. Of course there are companies that operate on a multi-national or even global scale, and there’s always customer service available (Amazon – I’m thinking of you right now!).

Previously I’ve gone into how we can set operating hours for a company, so that the ability to contact a customer support agent is only shown during these times. Take a look at Handling Company Hours for a refresher on this.

But sometimes not showing the ability to contact support could potentially be counter-productive. Customers may think that our website isn’t working properly, and attempt to reach us through other means. This could quite well frustrate them.

Due to this, we have a nice little piece of functionality that’s now come out in Omnichannel. It’s small & simple, yet quite brilliant in my humble opinion. This is the ability to have a chat widget available, but let customers know that it’s currently outside of company hours.

To activate this, we need to open the Chat record in the Omnichannel Administration Hub, and go to the Design tab:

Quite helpfully, the section is labelled ‘Offline’! How much clearer could it be?

We do need to understand that (at the time of writing this post) it’s currently in Preview, with all of the usual caveats around how that works.

We have several items available here:

  • Show widget during offline hours. This is what actually activates the setting – leaving this to false won’t do anything for us!
  • Theme colour. This allows us to set the specific theme to be used during ‘offline’ hours. It’s actually really helpful, as it gives the customer a clear visual indication that it’s out of hours
  • Title. The title of the chat widget, which will be displayed to the user
  • Subtitle. This allows us to place a subtitle as well, for the user to be able to see

So what does this then look like? Well, let’s take a look:

Personally I think that being able to set a theme colour for offline access gives it that little edge. Customers will become aware of this (subconsciously) when visiting the website, and come to the point of not even trying to start a chat when they see that it’s out of hours.

One MAJOR thing to bear in mind. We’re only going to be given the option to set this when we have a value set for Operating Hours. Without this being set, we won’t be shown this option. Go try it for yourself and see!

There’s not really much else to this, to be honest. But I’m liking it. I know that from a personal perspective I’ve been on various websites, and have no idea if the support chat is actually working or not. With this in place, I’m able to see that it is available for use at the correct time, and not have to wonder about it.

Have you ever thought about implementing something like this? Have you actually done so? I’d be really interested to hear from you about how you went about it – please drop a comment below!

Strange behaviour with views

Normally when I write a blog post, it’s about sharing some cool features, new functionality, etc. However this post is going to be a little different, because I don’t actually have an answer (yet!) to what is going on here.

Let me explain the situation.

I’m needing to show some very specific data for reference purposes. For this example, let’s say that I’m looking at Contacts, and needing to report on Phone Calls. The reason is to identify Contacts who are frequent callers. My criteria are as follows:

  • At least one phone call (that has the Contact as the Regarding value) needs to have a specific field set
  • At least one phone call (that has the Contact as the Regarding value) needs to have its Activity Status as Open

These two conditions are separate. So the contact essentially needs to have at least 2 phone calls against them, with each one meeting one of the conditions. There can be more than one phone call record meeting the same condition – that’s not an issue here.

Back in the (good old) day, I’d have written some cool SQL to return this data. Two joins, and we’d be done. However I can’t do that now (I’ve recently started dipping into FetchXML, which is a whole other story to cover at some point). So I’m having to use the Advanced Find to check that I’m getting the right data.
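(Purely for illustration, this is roughly the sort of SQL I mean – a sketch against the old on-premise filtered views, where ‘new_specificfield’ is a hypothetical field name:)

    -- Each join checks one of the conditions; DISTINCT collapses the
    -- duplicate rows that multiple matching phone calls would create
    SELECT DISTINCT c.contactid, c.fullname
    FROM FilteredContact c
    JOIN FilteredPhoneCall pc1
      ON pc1.regardingobjectid = c.contactid
      AND pc1.new_specificfield IS NOT NULL    -- condition 1: specific field set
    JOIN FilteredPhoneCall pc2
      ON pc2.regardingobjectid = c.contactid
      AND pc2.statecode = 0                    -- condition 2: Activity Status is Open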

This isn’t the easiest of things to do. I’m needing to start from Contact, go to Phone Call, go back up to Contact, & go back down to Phone Call. But hey, this is what it looks like:

So with this set up, I run the query, and get some results (in this specific scenario/time, there are 3 results). I go through the data to check that the results are actually satisfying my requirements, which they are:

Wonderful – let’s move forward then!

My next step is to look to set this up as a system view. To do this, I go to the Power Apps Maker (http://make.powerapps.com/), open my solution & find the Contact entity. Opening it, I switch to the Views tab:

I create a new view, add the columns I need, and then open up the Filter Criteria to start setting this up. I’m using the Advanced Find as a reference guide for the conditions I’m needing to use. Going through it, I replicate the values across:

That looks about the same as the Advanced Find, right? It’s laid out slightly differently, but that’s just the designer. OK – let’s go ahead to save/publish it, and see it in the app:

Hold on. There’s only 1 record showing up there. Admittedly it’s in the list that came from Advanced Find, but what’s happened to the other 2 records?

So I go to check the data. I had already done this before, but I thought that perhaps I overlooked something, so I checked again. Nope – all of the data is fine/correct. There should indeed be 3 records showing up in the system view, but 2 are missing…

Note: As an aside, I do know that this isn’t permissions related. I’m doing all of this as a systems administrator with full privileges to everything. So it’s not that

OK – next steps:

  • Clear browser cache, reload and see if they’re showing up (useful tip – Control+F5 does this!). Nope, they’re not showing
  • Use Incognito mode, log in and see if they’re showing there. No, they’re still hiding away
  • Use a different browser, with a different system administrator login. Unbelievably they’re still being very shy, and refusing to appear!

Even more perplexing was the following. I can open up Advanced Find, select the system view (without doing ANYTHING else) & click ‘Results’. When doing this, all of the records appear! So through the entity view they’re not showing, but when I use that same system view through Advanced Find, they are!

I’m scratching my head at this. It just doesn’t make sense. I have no idea why this is happening. Reaching out to others, they also don’t seem to have any idea either.

My next step (I’m feeling SO proud of this, and so dev!) was to check the FetchXML. Perhaps there was something underlying in it that’s causing this? Using the FetchXML Builder in XrmToolBox, I loaded both views up, and compared them. It’s crazy – they seem to be exactly the same! (well, some cosmetic differences with where aliases appeared on the line, but this wouldn’t affect it):

At this point, I’m thinking that there are some magic elves under the hood, squirrelling away the data. It has to be the only logical reason for this, right?

The only thing I could find in the FetchXML that might make a difference is that there’s a ‘Distinct’ clause at the top of it in the one that’s working:

Why this would cause the issue, I have NO idea. Views return distinct results in them anyhow, so I’m not sure what this is actually doing here.
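For reference, here’s roughly the shape of the FetchXML in question (simplified, with the same hypothetical field name as before). The only difference between the two views was the distinct attribute on the fetch element:

    <fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="true">
      <entity name="contact">
        <attribute name="fullname" />
        <!-- condition 1: the specific (hypothetical) field is set -->
        <link-entity name="phonecall" from="regardingobjectid" to="contactid" alias="pc1">
          <filter type="and">
            <condition attribute="new_specificfield" operator="not-null" />
          </filter>
        </link-entity>
        <!-- condition 2: the activity status is Open -->
        <link-entity name="phonecall" from="regardingobjectid" to="contactid" alias="pc2">
          <filter type="and">
            <condition attribute="statecode" operator="eq" value="0" />
          </filter>
        </link-entity>
      </entity>
    </fetch>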

Regardless, using FetchXML Builder I updated the code, and WOW – it worked! I’m now returning 3 records in my system view! Absolutely strange, but hey – if it’s working now, who am I to question it…

I’m going to try to raise this through official Microsoft Channels, and see what I might be able to find out from them. However if you’ve come across this (or similar), or have some ideas about how to work around it, I’d LOVE to hear from you!

MB-400 Power Apps & Dynamics 365 Developer Exam

I haven’t usually been putting up posts around the exams that I take. A few months back I did decide to write one on the MB-600 exam (MB-600 Solution Architect Exam), which just took off! It was quite amazing (& pleasing) how many people were looking at it, & asking me questions around the exam.

As a result, I’ve decided to continue this, and am therefore now writing this post on the MB-400 exam.

There are several different ‘ranges’ of exams within the Dynamics 365/Power Platform space. These are aimed at different types of roles, or specific specialisation/s within a role. A good example of this is the MB-2xx range, which covers the functional side of things, and is split across the different ‘main’ areas of Dynamics 365.

The MB-400 (the only one in the range at the moment) is aimed at developers. According to the official description for the exam:

Candidates for this exam are Developers who work with Microsoft Power Apps model-driven apps in Dynamics 365 to design, develop, secure, and extend a Dynamics 365 implementation. Candidates implement components of a solution that include application enhancements, custom user experience, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates must have strong applied knowledge of Power Apps model-driven apps in Dynamics 365, including in-depth understanding of customization, configuration, integration, and extensibility, as well as boundaries and constraints. Candidates should have a basic understanding of DevOps practices for Power Apps model-driven apps in Dynamics 365. Candidates must expose, store, and report on data.

Candidates should have development experience that includes JavaScript, TypeScript, C#, HTML, .NET, Microsoft Azure, Office 365, RESTful Web Services, ASP.NET, and Power BI.

As anyone who knows me will attest, I am NOT a developer. However I decided (for several reasons) to give this one a go, and see what would happen! I knew I’d be pushing myself out of my comfort zone, there would be things I wouldn’t understand/know at ALL, but hey – I was curious to see what would happen! Even more challenging, I decided to book & take it within a 24 hour period!

Now as this has been out for a little while (& isn’t in Beta), there are thankfully some good resources on Microsoft Learn about it. Take a look at https://docs.microsoft.com/en-us/learn/certifications/exams/mb-400, where there are several learning paths that can be followed.

A big shout out as well to Julian Sharp & Joe Griffin, who recently ran a multi-week course around it. The official Microsoft learning paths are great of course, but seem to miss out quite a bit of what actually needs to be known for this. The course that they ran covered a lot more. Hopefully there will be more courses like this run in the future!

When passing it (& assuming that you’ve passed the MB-200 as well), you get a lovely shiny badge!

Microsoft Certified: Power Apps + Dynamics 365 Developer Associate
I’m SO proud of this!

Once again, I sat the exam through the proctored option (ie from home). The experience went somewhat better than previous times. Amusingly I got told off by the proctor during the exam for ‘looking down at the keyboard’, rather than looking at the screen! I explained that I was using a different computer, & kept clicking the wrong mouse button on it (leaving aside that I was exhausted when doing it!).

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!).

  • Model driven apps:
    • User experience
    • Show/hide fields
    • Change field labels
  • Canvas apps – functionality, online/offline capabilities, field types (including searching/filtering data)
  • Plugin debugging
  • Configuring security for system connections (security types)
  • D365 Web API – how it’s used, types of calls made from/to it
  • Azure API – making calls to/from it
  • Code for importing data (debugging, variables)
  • Advanced Find
  • Types of calls (synchronous, asynchronous)
  • Data modelling
  • Creating & deploying solutions through different methods
  • Publisher versioning
  • Identifying code variables, and saying what would happen in given scenarios
  • Power Apps Component Framework (PCF) – how to use, how to package components, how to deploy
  • PCF components & classes
  • JavaScript – code examples, what happens when a given scenario happens
  • JavaScript functions
  • Dynamics 365 Ribbon – what it is, what you can do with it, different types of functionality & ways to do things with it
  • Security & Permissions, including roles, teams, field level security, business units
  • Workflows, Power Automate Flows (how they’re set up, different functionality within them, how to do things with them given a specific scenario)
  • Business Rules (what they can/can’t do, different scopes, etc)
  • Field types (eg option-sets, calculated fields, roll-up fields, multi-select, etc)
  • Importing solutions – requirements for this, versioning, deployment between environments
  • Compatibility with Microsoft Teams

Now many of these (as I said above) are outside of my comfort zone. In fact, I’d say that even with absolutely cramming for a whole day for the exam, I still felt that I was guessing the answer for at least 30% of the questions. Admittedly though, as Julian Sharp says, a ‘gut feeling’ answer is usually right most of the time, coming from what the subconscious has absorbed during revision.

I was REALLY happy that I got a passing mark for this, & admittedly was VERY relieved as well. So now another lovely shiny badge in my collection, and I’m now going to go and update it on LinkedIn as well!

If you have any questions on this, feel free to drop them below, and I’ll try to help out as best as I can!

Omnichannel & LogMeIn

Overview

Many people in the IT scene will know of LogMeIn (https://www.logmein.com/), or LMI for short. For as long as I can remember (which means going back almost 2 decades!) they’ve been one of the main remote access solutions. With their product range, it was possible to leave your computer at home, travel abroad, and easily log into it from practically any computer anywhere.

It’s also a great product range for IT professionals – being able to deliver customer support through remote sessions, manage identity solutions, etc. The number of products has grown over the years, and it’s been quite pleasing to watch:

Of course, LogMeIn Free (a great starter product for personal usage) was removed some years back, which I still believe is a great pity. Obviously the company decided to focus on the more enterprise side of things, which I can understand as a business.

So, why am I now writing about them? Quite simple, actually. LogMeIn are one of the providers that are working with Microsoft to provide Co-Browse solutions for Omnichannel! It’s a very new piece of functionality that’s been launched in the Dynamics 365 product, and there aren’t many providers out there that have integration points to it.

What is Co-Browse?

It’s important to understand what co-browsing is, and some useful stats:

“Co-browsing” refers to the ability to have a service provider & customer jointly navigate an application in real time through the web.

Co-browsing: The Gateway to Happy Customers & Better Financial Results, 2015

So co-browsing is useful. But just how useful can it actually be? Well, apparently it can be VITAL:

Co-browse has the potential to bridge the gap between human & AI-driven customer interaction, & to enable organisations to differentiate their customer service.

By 2022, co-browsing will be used in 2% of customer service interactions, up from 0.1% in 2017 (2000% growth).

Gartner 2017: How Co-browsing Can Differentiate Your Customer Service

LogMeIn has had their Rescue offering available on the general market for a while as a standalone product (alongside the rest of their offerings). They’ve now built it out into a new standalone product called Rescue Live Guide, and provided an integration into Omnichannel for Dynamics 365. Customers obviously need to have licenses for the product, but with these, they now have the ability to co-browse during support sessions. Not only can agents see what’s going on, but they can also interact with the customer’s browser itself, providing an even better support experience.

So, let’s go ahead and take a look at how to set it up, the experience itself, and my thoughts on things.

Setup

When I first started testing out the LogMeIn offering, I had to go through a manual install process. This was due to the product just being released (in May 2020), but it wasn’t actually that difficult to carry out.

However, they were in the process of switching over to an automatic installation through AppSource, as most of the other apps have. It’s great to be able to see that this has gone live, and is now available for users – it really does make the install that much easier!

Clicking ‘Get It Now’ takes you through the usual route of installing a solution from AppSource: selecting the environment, confirming the installation, etc. After around 5 minutes, I can now see the following:

Once it’s installed, we’ll need to set it as the co-browse provider for the channel that we’re wanting it for. To do this, open the chat record, go to Conversation Options, and select it there:

We’ll also need to put in two records for the LogMeIn co-browse configuration:

Finally, there’s a script block that needs to be added to the webpage where the chat widget is located. This enables the LogMeIn co-browsing ability from the customer side. It can be added right under the chat widget code itself; in the fullness of time, this may be able to be auto-generated as part of the chat widget code, but it’s not at the moment (this is dependent on Microsoft being able to offer it):

Right – setup all done, but before we see it in action, let’s take a quick look at the Rescue Live Guide admin console side of things.

Rescue Live Guide Admin Console

Although the functionality is within Omnichannel for Dynamics 365, administering agent licenses and groups takes place within the Rescue Live Guide admin console at https://console.logmeinrescue.com/admin. As companies will need to have Rescue Live Guide licenses, they would usually be familiar with this.

There’s the ability to create new users or groups, and manage them as well:

It’s also possible to set the names that are used for the agent & customer. These can be either the actual name of the agent, or instead potentially a job role/title:

I’m not going to go further into the admin functionality here – documentation can be found on the Rescue Live Guide site around this. Let’s instead take a look at the experience within Omnichannel, which after all is what we’re here to see!

Agent Experience

So how does this actually work, in practice? Well, from the customer side, they start a chat like they would usually do. When the agent responds, they’re given an option for ‘Live Guide’:

When the agent clicks on this, two things happen:

  1. Firstly, there’s a URL that’s posted in the chat. This contains a link for the customer to click, with an auto-generated ID number
  2. The agent is taken to the LogMeIn Rescue site page in a new tab.

Note: At the moment, the agent will have to sign in manually. LogMeIn have told me that their roadmap includes Single Sign On, so that after the initial setup they’ll be signed in automatically, and not have to perform this step in the future.

Once logged in, the agent will see that the session is ready, & waiting for the customer to connect to it. Once the customer has clicked the URL provided in the chat, it will open the Rescue Live Guide session, and authorise the agent to co-browse with them. They’ll then see the following prompt. This tells them that the session is connected to the agent, and that they can begin:

Once the customer has accepted to start browsing together with the agent, they get some small extra items appearing on their screen:

  • They can see that there is indeed a shared browsing session happening
  • They can also see where the agent’s mouse cursor is pointing to (by default, without the agent actually doing anything)

It’s important to note that the co-browse session is taking place within the specific browser tab that is open. Therefore if the user navigates away, the session is paused until they navigate back to it.

On the agent’s side, they can view the customer’s browser. They can only see what’s happening in the actual tab that’s open for the co-browse session (see below for some more information around this though). It’s quite similar to the customer’s side, though it has some LogMeIn features available – after all, the agent is seeing the customer’s browser window!

They can of course still access the Omnichannel chat itself, and send information through that as well if they wish to.

Just as the customer can see the agent’s mouse position, the agent can see the customer’s mouse position. There are also gesture indicators so that each person can see what the other clicks etc as well, which can be really helpful when walking through a process.

The functionality currently available to the agents covers scrolling (within the page), highlighting, drawing and ‘virtual tabs’. As shown in the image above, the agent is able to highlight text/images, which will then be displayed as being highlighted to the customer. Agents are also able to enter text into text fields, click on buttons, and interact with the native webpage functionality.

Note: The Rescue Live Guide admin centre provides granular controls around these, so that customers can allow agents certain rights, rather than allow them to do everything.

The agent is also able to ‘draw’ on the webpage to be able to point something out, highlight a part of the page, etc.

Note: These annotations will disappear once the customer or agent starts scrolling up/down the page again.

As I’ve mentioned above, the session is taking place within a single browser tab. If the user navigates away (to a different tab), the session is paused. The agent isn’t able to see any other tabs. So what happens if we do indeed need to open a new tab for something?

Well, there’s a really nice feature that the agent is able to use for this. It’s sort of a ‘virtual tab’ within the browser tab. Sounds interesting!

The customer is able to see this, and can navigate between the tabs. They’re now also able to open a new virtual tab themselves (which is an update to the functionality – originally they weren’t able to, and had to request the agent to do it).

Customer view of the support session

If the customer wants to pause or stop the session, the user simply has to click the ‘Stop’ button in the bottom left. They’ll then be presented with the following screen:

Whilst the session is paused, the customer can continue to use their machine as normal, but the agent won’t be able to see what’s going on. Only if the customer allows the session to resume by clicking ‘Continue Browsing’ will the agent be able to see the customer’s browser once again.

Alternatively, the agent can end the support session themselves, and the customer will be notified about this.

Security

I’m not going to dwell too much on security, as there’s a great document available at https://logmeincdn.azureedge.net/legal/gdpr-v2/Rescue_Live_Guide_SPOC_2020.pdf which goes into quite some detail.

Suffice it to say that LogMeIn have been a market leader for many years in this sector, and I’m happy that sessions through their products are adequately encrypted & protected.

Other functionality

Apart from the above, which is obviously the core of the product, there’s other functionality that’s possible to enable through the LogMeIn Rescue console:

  • Session recordings. It’s possible to record these for playback, which is then available from the LogMeIn portal. All recordings are carried out from the agent’s viewpoint, not the customer’s – there is therefore no risk of sensitive information from the customer’s side being seen
  • Data masking. It’s possible to use data masking to hide sensitive information. At the moment the setup for this is a very manual process, so I’m not going to go into how to set it up here. Having played with it a little, it’s really quite useful. Agents can’t see sensitive information on their screen, and if a customer needs to enter/update information, the session pauses whilst this is being done. However I understand that part of the LogMeIn roadmap for the near future is to make the setup process much more user friendly. When this is released & available, I’m planning to do a post on it
  • Reporting happens through the LogMeIn portal (see my thoughts below on this). It looks nice, and can be downloaded as a CSV file. Again, the functionality of this is going to be expanded in the near future.
Reporting in the LogMeIn website console

My thoughts

Having gone through testing out the product, I think that LogMeIn has brought a really great product of theirs into the Omnichannel experience. I used to use their products regularly (I ran an IT MSP some years back, in which we used LogMeIn products as well), and always found that they behaved well.

Now having the ability for agents to not only see, but also interact with the customer browsing experience really does take things to the next level. Audio and/or video support is great of course, but sometimes being able to see what the customer is seeing in their browser results in a much quicker resolution. This of course results in happy customers, which is what we’re striving to achieve!

As I’ve said above, I’ve used LogMeIn over the years, and always found their products to be pretty much amazing. With Rescue Live Guide, there are several differentiators that the solution brings to market:

  • For the standalone Rescue Live Guide solution, dedicated web resources aren’t needed. It’s an easy solution to set up, and for the customer to engage with – all it requires is a URL to be provided to them to get the session going. Obviously, as mentioned above, there is some slight coding needed for the Omnichannel integration, but this is really minor. Any company having Omnichannel installed/configured will already have power users/admins familiar with what’s needed for this, so it’s a very small additional step
  • It’s possible to co-browse on any website that the customer wants to, not just a single specific website. Once the co-browse session is active, the customer can change to any other website, as long as they do so within the co-browse session tab. Most other co-browse solutions out there can’t do this, so this is a really strong point in favour of this solution.
  • The data masking is really cool, and for most customers, will be a ‘must have’ rather than ‘nice to have’. I’m looking forward to when the setup for this is updated to be more business-user friendly, and will then do a separate blog post around it, together with a video!

A few things that I think would be nice to have:

  • The agent is already able to draw on a webpage during the co-browse session, and select different colours for this. It would be great if the agent could also type text in to display on the screen (not in a specific field) in colour. Sometimes being able to see an example written in front of you (without it going into the actual field) can be quite handy.
  • Being able to transfer the co-browse session to another agent. This could be either another Omnichannel agent, or a separate specialist team. It is of course possible to transfer the chat session to another Omnichannel agent, but then they’d have to start the whole co-browse session again (with a new PIN, etc)
  • Reporting (for the most part) all occurs in LogMeIn at the moment, as Dynamics 365 only has very limited reporting on this natively. However I understand that this is due to change at some point this year, with the ability to report properly on it within Dynamics 365 itself.

At the point when new items do get released, I’ll be aiming to do a review of them, and add to the knowledge around the product.

So, with all of that, how do you think this could best help you & your customers? Please comment below – I’d love to hear!

AAD Security Teams, & saving personal views

Previously I’ve touched on how it’s possible to use Azure Active Directory for Dynamics 365 security. This can be of great benefit to an organisation, especially when needing to invite in external users. The details that I go into around it can be found at Dynamics 365 Security & AAD. As I point out there, it’s a very helpful feature, and can also help with onboarding new users within an organisation.

What I’ve found out about it, however, is that there can be some very interesting little quirks with how security actually works. Originally I thought it was a bug, and raised it with Microsoft Support, but it turns out not to be. Let me take you through the journey that I experienced last week…

The scenario is as follows. We had security set up in place, which was working perfectly (or so we thought). We’d gone through all of the following steps:

  1. Create Dynamics 365 security role/s with appropriate permissions
  2. Create AAD security group
  3. Create Dynamics 365 AAD Security Team, and link it to the AAD security group
  4. Assign users to the AAD security group

This was working exceptionally well (except, of course, when the external users hadn’t followed the setup instructions correctly). Users were logging in, searching for information, creating/updating records, etc. All was good…or so we thought.

Now, the users who are actually using the application don’t have a Dynamics 365 background. It’s the first time that they’re using the specific system, and as such, are going through a learning curve. We’re not expecting them to understand the advanced functionality at this point, though some of them are indeed venturing further/deeper into the capabilities that it brings.


One of these, of course, is the Advanced Find. Now, those experienced with Dynamics 365 will know all about it. There are good points, and there are not so good points. Functionality in it has expanded over time, though to be honest it’s still easier to run a SQL query/extract for more advanced information retrieval.

Users seemed to be fine with the Advanced Find. We showed them how it works, how to filter, set up columns, etc. We even showed them how to export data to Excel, and keep a live data connection back to refresh it! Brilliant – they were most pleased.

Then I got an email in from a user needing support. They reported that they weren’t able to save custom searches. Being able to save searches is of course very helpful, to avoid having to set up the same search/layout every time. This seemed puzzling to me, and I started to take a look into it.

Always download the error log file – it can be SO useful!

I was able to replicate the problem immediately with a test user, having assigned it the same security role. Opening the log file (which can be extremely helpful at times with troubleshooting), I looked to see what the issues were. I was thinking it was a problem with security permissions – if I assigned the system administrator role to my user, everything worked just fine.

Incidentally, there’s a really good blog post at https://www.powerobjects.com/blog/2015/02/13/access-denied-identify-fix-security-role-issue/ which covers troubleshooting security role issues. I’ve used it on several occasions previously.

In my error log, there were repeated references to ‘ObjectTypeCode”:4230’. This corresponds to the Saved View entity (ie personal views) in the security role. I therefore went to the security role, and ensured that it was set to allow access to Saved View across all permissions:

It’s only possible to set User-level permissions for Saved Views

Right – permissions set, all should be good. Let’s go ahead & try to save an Advanced Find as a view…but no! It’s still not working, and showing the same error message!

What I then tried to do was apply the security role directly to the user, rather than through the AAD security team. To my surprise (well, not really, actually), it worked. I was able to save Advanced Find views. I changed back to the user getting permissions through the security group (ie not directly), and again I had the issue.

OK – so I thought I had discovered a bug. As far as I was aware, I couldn’t see any reason why the user wouldn’t be able to save the Advanced Find view. After all, they’re able to create & save records within the system. There surely shouldn’t be any difference between saving records, and saving an Advanced Find view?


My next step was to raise a support ticket with Microsoft, and then carry out the obligatory ‘show & tell’ to the support agent. Ivan (the agent assigned to my case) was very helpful, understood exactly what I was trying to accomplish, and what the issue seemed to be. I left him with the support case, and focused on trying to find a workaround for the situation.

After a few days, Ivan came back to me with a resolution. It wasn’t a bug in the system (which was a shame – I was looking forward to having it attributed to me!), but rather a specific case of permissions.

See, there’s something called ‘privilege inheritance’. In a nutshell, there are two ways of giving access through a security role:

  1. User privileges. This is when the user is given the permissions directly
  2. Team privileges. This is when the user is given the permissions as a member of the team. If they don’t have User privileges of their own, they can only create records with the team as the owner

There’s a good article on this at https://docs.microsoft.com/en-gb/power-platform/admin/security-roles-privileges#team-members-privilege-inheritance

So what was actually happening was as follows:

  • Users were able to read, create, update records without issues, as the team was the owner of these records
  • However as views need to be owned by a user (though they can be shared with a team), the user was unable to save them!

Thankfully it’s quite easy to fix – on the security role itself, you change the ‘Member’s privilege inheritance’ setting (to ‘Direct User (Basic) access level and Team privileges’):

With this in place, everything then worked just fine. The user was still getting the role through the Security Team, but was now able to save views directly.

Quite an interesting little quirk, but one that is likely to come in useful when looking at other functionality within the system.

Have you come across this before? Have you found anything else that seems a little strange? Comment below – I’d love to hear!

Canvas Apps, Patch command, & Business Rules

Recently I’ve been doing a LOT of work with canvas apps. As I think I’ve mentioned before (at least once or twice!) my background is the traditional ‘model’ style app. As a result, it’s been quite a steep curve to skill up, but I think I’m handling it alright. I’m (slowly) getting used to the way that canvas apps work, the ability to put different controls on the screens, and reference each other.

Heck, I’m even starting to play with more advanced navigation concepts, based on some REALLY great ideas that I’ve seen (Clarissa, I can’t say how grateful I am to you for all of your assistance & guidance!).


Amongst all of this incredible & wonderous journey, I’ve also been learning some code. Yup – you heard me correctly! I’ve always said that I’m not a developer – I respect them greatly, but I don’t develop code.

True, I’ve picked up some SQL here & there, and will freely admit that running SQL queries against the Dynamics 365 database is SO much more powerful than running an Advanced Find. Of course, it’s necessary to know the joins, conditions & such. Redgate’s SQL Helper has been amazing along the way. With moving to cloud systems, things got a little more….complicated. XrmToolBox has the SQL4CDS tool which I’ve used several times, but I was really excited by the recent announcement/release of being able to (properly) run SQL commands against the CDS database from SQL Management Studio….

Anyhow, I’m digressing. So, I’ve been needing to learn canvas app style code. It’s like Excel commands, though (slightly) different at times. Things don’t always make sense (to me, at least) – I STILL haven’t figured out why some expressions need to be in a certain order. After all, according to mathematical principles it doesn’t matter if you write A>B, or B<A. Going to still need to wrap my mind around all of this.


So, one of the commands that I’m using quite frequently is the Patch command. If you’re really interested, you can check this out in detail at https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-patch.

In short, Patch allows you to save values to a record from places other than a form table. It also allows you to save field values that aren’t available on the canvas form table (due to limitations). I’ve referred to this previously at https://thecrm.ninja/canvas-app-record-set-regarding-field/. The scenario that I talk about there is just one of the things that can be done in this way. Since that post, we’ve come a long way, and are doing most things with Patch statements (due to the scenario requirements).
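As a (very) simple illustration of the sort of thing I mean – a minimal sketch, where the data source, controls & variable are all hypothetical names:

    // Create a new Phone Call record; values come from screen controls
    // AND from a variable that isn't part of any form table.
    // 'Phone Calls', txtSubject, txtDescription & varSelectedContact
    // are all illustrative names
    Patch(
        'Phone Calls',
        Defaults('Phone Calls'),
        {
            Subject: txtSubject.Text,
            Description: txtDescription.Text,
            Regarding: varSelectedContact    // set from outside the form table
        }
    )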

So that’s all well & good. However, there IS actually a reason for me writing this blog post….crazy, right? And it’s not to waffle on and on about patch statements. It’s about a very specific scenario that we hadn’t come across to date, but that came up last week.

Now, obviously you’re now VERY interested in hearing all about it, and learning for your own situations. I mean, otherwise you wouldn’t have stuck with me through this article for so long. So, let me set out what happened.

As mentioned above, we’re mostly using Patch statements throughout this specific app. That’s….quite a lot of Patch statements (especially as we also have IF statements governing which one is being used, as it’s not possible to use IF inside a Patch statement, but I digress…). I’d say we’re pretty familiar with this now.

However, even with being familiar with it, we suddenly had a problem. One of the forms that we’re saving down started to NOT save down. Records weren’t being saved, which obviously is a problem!

Bear in mind here that we hadn’t touched the code for this specific action for a few weeks. Nothing had changed in our code, and nothing had changed from a platform perspective (ie Microsoft hadn’t changed any of the underlying functionality).

Going into the statement, we immediately started testing it out, and saw something interesting. We were getting an error that a required field could not be NULL:

This was quite puzzling – although in a model app we can set fields as required, and users can’t save the record until they populate them, this isn’t true in a canvas app (well, when using Patch, at least). See, it’s technically possible to use a Patch statement to create/update a record without passing in required field values. It’s a sort of workaround (& can actually be used for benefit in some scenarios). So this happening all of a sudden was quite strange to us.

It was even stranger as we hadn’t been using the field on the form at all. The field that was being referred to was being used for a totally different process, in a different team, & not surfaced into the canvas app at all. This really was causing us to scratch our heads, and try to think (more) out of the box. It didn’t seem to be the code (we could set a value in code, but didn’t want to as it wasn’t relevant), yet we weren’t able to ignore it. Really frustrating!

With all of this in mind, I decided to go back to absolute basics after a few hours of troubleshooting. The field that seemed to be causing all of these issues was a relatively new addition, so I checked all of the details around it:

  • Was the field type correct for what it should be? Yes
  • Was it set as required on the CDS field definition? No (not that I thought this would help, but still checked)
  • Was the field on the entity form? Yes
  • Was the field set as required on the entity form? No (again, I didn’t think I’d get any joy from this)
    • Hold on….on the form designer it’s not set as Required. But when I open the form, and put some values in, suddenly it IS required.

Aha! OK – I’m now starting to see some light shining on this. I headed over to Business Rules to check out what might be there. Lo & behold, there was a business rule that set the field as required (when certain conditions were met). An example of this would be:

Now this field hadn’t been in place when the code was developed (as mentioned above) – it had come in since. I was very curious as to whether a Business Rule could force canvas apps to set the value, and so did some testing.

Disabling the business rule removed the error from the patch statement. Re-enabling it caused the issue again. OK – so we’ve found what’s been causing this, and could put in an adequate solution to handle it.

So in short, if you’re setting a field as required through a Business Rule, you’re going to need to address it in any canvas app that’s saving data down to the same form the rule applies to. Why this happens through a Business Rule, when just setting the field as Required on the form doesn’t, I have NO idea.
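For reference, our fix was simply to mirror the Business Rule’s condition within the Patch statement itself. A rough sketch, with hypothetical names throughout:

    // If the Business Rule's condition is met, supply a value for the
    // conditionally-required field; otherwise leave it blank.
    // Cases, txtTitle, ddCategory, txtApprovalRef & 'Approval Reference'
    // are all illustrative names
    Patch(
        Cases,
        Defaults(Cases),
        {
            Title: txtTitle.Text,
            'Approval Reference': If(
                ddCategory.Selected.Value = "Finance",
                txtApprovalRef.Text,
                Blank()
            )
        }
    )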

But it’s a good concept to keep in the back of your mind, I believe. Especially if there are multiple people working on developing a single entity, as otherwise you could find yourself in exactly the same scenario that we did!

Have you come across anything like this, or a different piece of strange behaviour? Comment below – I’d love to hear about it!

Dynamics 365 Security & AAD

I come from an ‘on-premise’ background. I’ve spent years in organisations with on-premise systems such as Dynamics 365. Take me into a server room that’s alive with whirring fans, and I get quite nostalgic. Those were the days…well, in some ways, anyhow. But having recently discovered some quite helpful functionality, I thought I’d share it with others!

See, when it came to Dynamics 365 security, there was no way to automate things. Yes, users had to be created in Active Directory (and within an organisational unit in AD that the Dynamics install was pointed at!), but they then had to be manually added to Dynamics 365. There was no way to automate this (from recollection – then again, my memory grows dim with the fog of time).

So what the system administrators needed to do was to manually go to Settings/Security within the system, and there they could either add a single user at a time, or multiple users. They would then assign role/s (for multiple users, all of the users would need to have the same role/s – it wasn’t possible to modify individual users within this process).

One way to speed things up when handling different security roles was to use teams relating to the business needs. The security role/s would be created, assigned to a team, and then any user added to the team would automatically get all of the permissions that they needed.

Then came the heady world of Dynamics 365 being online! Well, nothing much changed really, at least not for a little while.

But then, things really did change, in May 2019. Functionality for security teams within Dynamics 365 was increased. Notably, there was now something called an 'AAD Security Group Team'.

So what was this magical new item?

When we create a team, and set the Team Type to 'AAD Security Group', we're now able to set an AAD Object ID. In fact, it's required! After we've created this object within Dynamics 365, we can then apply security role/s to it directly (just as we could to any other team record beforehand).
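If you want to script the setup, here's a hedged sketch of what it looks like through the SDK. The teamtype & membershiptype option values and the attribute names are from the team entity's metadata as I understand them, and all of the GUIDs are placeholders – do verify against your own environment:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch: create an 'AAD Security Group' team, then give it a security
// role (members of the AAD group then inherit that role). All IDs are
// placeholders; option values are worth double-checking in your org.
public static Guid CreateAadGroupTeam(IOrganizationService service,
    Guid aadGroupObjectId, Guid businessUnitId, Guid securityRoleId)
{
    var team = new Entity("team");
    team["name"] = "Client Portal Users";
    team["teamtype"] = new OptionSetValue(2);        // 2 = AAD Security Group
    team["membershiptype"] = new OptionSetValue(0);  // 0 = Members and guests
    team["azureactivedirectoryobjectid"] = aadGroupObjectId;
    team["businessunitid"] = new EntityReference("businessunit", businessUnitId);
    var teamId = service.Create(team);

    // Apply a security role to the team via the teamroles N:N relationship
    service.Associate("team", teamId,
        new Relationship("teamroles_association"),
        new EntityReferenceCollection { new EntityReference("role", securityRoleId) });

    return teamId;
}
```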

Let’s take a moment to reflect & think on this. Until now, we’ve had to handle security directly within Dynamics 365. Now, we have the ability to have an Azure Active Directory (for that is what AAD stands for) group, and reference it within Dynamics 365.

Suddenly new possibilities open up. As part of the on-boarding process (for example) we can add users to specific AAD security groups, which will then give them access with appropriate permissions within Dynamics 365. We're also able to have multiple AAD groups, each inheriting a different set of Dynamics 365 roles, and thereby create a multi-layered approach to different business & security needs.

We’re also able to use tools such as PowerShell, LogicApps, Power Apps & Power Automate to carry out automation around this. There’s an Azure AD connector (https://docs.microsoft.com/en-us/connectors/azuread/) which gives the ability to set up & administer these.

We’re actually using this functionality now in some of our COVID-19 response apps. Instead of needing our own support desk to manage the (external) users, we’ve provided an interface where client IT departments can quickly log in, upload a list of users, and assign them to the relevant AAD group/s. It’s very quick, and allows the users to onboard to the Power Apps within minutes!

So, knowing this, how do you feel it might benefit you? Comment below – I'd love to hear!

DateTime fields, XrmToolBox, & Dynamics 365 behaviour

Recently we’ve been rapid producing & deploying solutions, due to the current pandemic. One of the apps that I’ve been working on required quite a few fields for data capture. Well, truthfully most apps require quite a few fields, but I thought that I’d talk about this one in particular, due to something that I discovered.

Now, we all know how to create fields in the Power Platform maker experience. It's really quite simple – you select that you want to add a new field, put in the details/type of field, & save. Hey presto – you have yourself a nice new field! You can then go on to add it to forms, views, etc. We all know how it's done.

What I’ve found myself doing recently though is not to create fields through the Maker interface (make.powerapps.com), especially when there are lots of fields to create. Instead, I’ve been using the XrmToolBox to do this. There’s a very helpful tool within it called Attribute Editor, which allows you to use an Excel spreadsheet. It takes this, and creates the relevant fields through the Dynamics 365 API.

One of the reasons for doing things this way was that it allows me to get on with other things whilst the fields are being created. Although it doesn’t happen in the blink of an eye (especially when there are a lot of fields to create), I can leave it whizzing along, and do something else. This, of course, makes me feel VERY productive!

Right – back to what I was saying. So I had a lot of fields to create, and many of them needed to be datetime fields. Actually, all I needed was the time component, but unfortunately Dynamics 365 DOESN’T allow you to just show the time. It’s either Date, or DateTime, but no option for JUST Time. A flaw, in my opinion, for what it’s worth….

So I created the Excel template, started the process, and went on to do something else. I of course made sure to specify that the field type should be ‘DateTime’.
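For context, a tool like this is essentially issuing a CreateAttributeRequest per spreadsheet row. Here's a rough sketch of what that looks like (the entity & field names are made up), with the Format property being the bit that decides Date-only vs Date-and-time:

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

// Rough sketch of creating a DateTime field through the API, using
// hypothetical entity/field names. Format is where Date-only vs
// Date-and-time is set.
public static void CreateDateTimeField(IOrganizationService service)
{
    var request = new CreateAttributeRequest
    {
        EntityName = "new_covidresponse",
        Attribute = new DateTimeAttributeMetadata
        {
            SchemaName = "new_ArrivalTime",
            LogicalName = "new_arrivaltime",
            DisplayName = new Label("Arrival Time", 1033),
            RequiredLevel = new AttributeRequiredLevelManagedProperty(
                AttributeRequiredLevel.None),
            Format = DateTimeFormat.DateAndTime,   // what I was expecting
            DateTimeBehavior = DateTimeBehavior.UserLocal
        }
    };
    service.Execute(request);
}
```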

Coming back to it when it had finished, I started to place fields on forms, and noticed something strange. All of the datetime fields that I had created through this were date ONLY. This was…puzzling! Going to check the fields themselves, they were set as Date ONLY, not DateTime!

I went back to check my upload spreadsheet, and it was set correctly there. I even tried uploading another field, but still the same issue was occurring.
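At this point, one way to confirm what had actually been created (rather than trusting what the UI displayed) was to pull the attribute's metadata back directly – a quick sketch, again with placeholder names:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

// Sketch: retrieve a field's metadata to see which format it REALLY
// has, independently of what the maker UI shows.
public static void CheckDateTimeFormat(IOrganizationService service)
{
    var response = (RetrieveAttributeResponse)service.Execute(
        new RetrieveAttributeRequest
        {
            EntityLogicalName = "new_covidresponse",
            LogicalName = "new_arrivaltime",
            RetrieveAsIfPublished = true
        });

    var metadata = (DateTimeAttributeMetadata)response.AttributeMetadata;
    Console.WriteLine(metadata.Format); // DateOnly or DateAndTime
}
```

In our case this simply confirmed what the designer showed – the fields genuinely had been created as Date-only.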

Now, with the way that Dynamics 365/Power Platform works, once you’ve created a field & saved it, you can’t change the field type. When it’s created it’s saved down to the underlying database structure as the specified field type, and that’s it. No way to change it…or at least not through the front end!

With this in mind, I fired up another of the XrmToolBox tools, namely Attribute Manager. What this handy tool does, behind the scenes, is allow you to change the field type. Well, it doesn't ACTUALLY change it directly – it clones the field, deletes the original, then clones it back as the new type. There are some caveats to it working properly (eg the field can't be used in a view somewhere), but it's really helpful.

Note: It only works for custom created fields, not the default OOB fields!

Depending on the field type that you’re wanting to change it to, you can select different options. However for DateTime, there’s only one option. OK – I was going to see what happened.

Well, I ran the update, but nothing changed. It was still 'Date' only within the interface, which was incredibly annoying. It wasn't as if I could just delete & recreate each field (well, I could have, of course, but I had dozens & dozens of these to do, and quite frankly didn't want to spend all of that time on it).

Thankfully (with the help of one of my colleagues, who’s an experienced & devoted developer – thanks Sid!), we found the solution.

See, I had been doing everything within the 'new' interface. This is the one that Microsoft keeps pushing everyone towards, as they don't want people to be using the Classic interface anymore. That's all very well & good, but the 'new' interface isn't at parity with Classic (for some things).

Reverting back to the Classic interface (note that this option is only available when working within a solution!), we discovered some hidden behaviour.

We located the entity that we needed, and the field itself, and opened it in Classic. On the screen that's presented (I do miss this in some ways – I remember the days when I almost lived permanently in here!) we AMAZINGLY have the following option:

We can CHANGE THE TYPE!! Now, this is just with the field that we’ve selected. To be frank, I have no idea at this point about any other field types, and would need to explore that separately. But for the moment, my problem has been solved! (well, to the point that I have ‘time’ values available – I’d still like to see JUST time values being an option).

So with this in mind, I merrily waded through the dozens of fields in the Classic UI, changing them all as needed. It wasn't just a few minutes of work, but it was definitely much less time than deleting & manually recreating each one!

So, really quite helpful. The only other thought I had was that it would be nice if the various tools within the XrmToolBox could do this as well. However, the fact that they don't seem able to appears to be a limitation of the API. Having gone to check the different field types & how they're set programmatically (https://docs.microsoft.com/en-us/powerapps/maker/common-data-service/types-of-fields), there really doesn't seem to be any way (at least on that page) to specify the different sub-type, which is a shame!

Have you ever had a similar situation with fields? Drop a comment below – I'd love to hear about it.