Security Roles & Assigning Records

Let’s face it, and call a spade a spade (or a shovel, depending on where in the world you happen to be). Security roles are very important within Dataverse, to control what users can (& can’t!) do within the system. Setting them up can be quite time-consuming, and troubleshooting them can sometimes be a bit of a nightmare.

Obviously we need to ensure that users can carry out the actions that they’re supposed to do, and stop them doing any actions that they’re not supposed to do. This, believe it or not, is generally common sense (which can be lacking at times, I’ll admit).

Depending on the size of the organisation, and of course the project, the number of security roles can range from a few, to a LOT!

Testing out security can take quite a bit of time, to ensure that testing covers all necessary functionality. It’s a very granular approach, and can often feel like opening a door, to then find another closed door behind the first one. Error messages appear, a resolution is implemented, then another appears, etc…

Most of us aren’t new to this, and understand that it’s vitally important to work through these. We’ve seen lots of different errors over our lifetime of projects, and can usually identify (quickly) what’s going on, and what we need to resolve.

Last week, however, I had something new occur, that I’ve never seen before. I therefore thought it might be good to talk about it, so that if it happens to others, they’ll know how to handle it!

The scenario is as follows:

  • The client is using Leads to capture initial information (we’re not using Opportunities, but that’s a whole other story)
  • Different teams of users have varying access requirements to the Leads table. Some need to be able to view, some need to be able to create/edit, and others aren’t allowed to view it at all
  • The lead process is driven by both region (where the lead is located), as well as products (which products the lead is interested in)

Now, initially we had some issues with different teams not having the right level of access, but we managed to handle those. Typically we’d see an error message along the following lines:

We’d then use this to narrow down the necessary permissions, adjust the security role, re-test, and continue (sometimes onto the next error message, but hey, that’s par for the course!).

However, just as we thought we had figured out all of the security roles, we had a small sub-set of users report an error that I had NEVER seen before.

The scenario was as follows:

  • The users were able to access Lead records. All good there.
  • The users were able to edit Lead records. All good there.
  • The users were trying to assign records (ie change the record owner) to another user. This generally worked, but when trying to assign the record to certain users, they got the following error:

Now this was a strange error. After all, the users were able to open/edit the lead record, and on checking the permissions in the security role, everything seemed to be set up alright.

The next step was to go look at the error log. In general, error logs can be a massive help (well, most of the time), assuming that the person looking at them can interpret what they mean. The error log gave us the following:

As an aside, the most amusing thing about this particular error log, in my opinion, was that the HelpLink URL provided actually didn’t work! Ah well…

So on taking a look, we see that the user is missing the Read privilege (on what we’re assuming is the Lead table). This didn’t make sense – we then went back to DOUBLE-check, and indeed the user who was trying to carry out the action had read privileges on the table. It also didn’t make sense, as the user was able to open the lead record itself (disclaimer – I’ve not yet tried setting up a security role where the user has create/write access to a table, but no read access… I’m wondering what would happen in such a scenario).

Then we had a lightbulb moment.


In truth, we should probably have figured this out before, which I’ll freely admit. See, if we take a look at the original error, the user was only getting it when trying to assign the record to another user. We had also seen that it was only happening when the record was being assigned to certain users (ie it wasn’t happening for all users). And after all, the error message title itself says ‘Assignee does not hold the required read permissions’.

So what was the issue? Well, it was actually quite simple (in hindsight!). The error occurred when the record was being assigned to a user who didn’t have any permissions on the Lead table!

What was the resolution? Well, to simply grant (read) access to the Lead table, and ensure that all necessary users had this granted to them! Thankfully a quick resolution (once we had worked out what was going on), and users were able to continue testing out the rest of the system.
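
For anyone curious what ‘assigning’ actually does under the hood, it’s just an update of the record’s owner. Here’s a minimal sketch against the Dataverse Web API (nothing from the actual project – the org URL, GUIDs and token below are placeholders):

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"            # placeholder environment URL
LEAD_ID = "00000000-0000-0000-0000-000000000001"        # placeholder lead GUID
NEW_OWNER_ID = "00000000-0000-0000-0000-000000000002"   # placeholder systemuser GUID
TOKEN = "<oauth-access-token>"                          # obtain via your usual OAuth flow

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
}

# Assigning = updating the ownerid lookup to point at the new owner.
# If the new owner has no Read privilege on the Lead table, the platform
# rejects the update - which is exactly the error described above.
body = {"ownerid@odata.bind": f"/systemusers({NEW_OWNER_ID})"}

resp = requests.patch(
    f"{ORG_URL}/api/data/v9.2/leads({LEAD_ID})",
    headers=headers,
    json=body,
)
resp.raise_for_status()
print("Lead reassigned")
```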

Has something like this ever happened to you? Drop a comment below – I’d love to hear the details!

Solution deployments: Automated vs Manual

Over the holiday period, I’ve been playing around with solution deployments. OK – don’t judge me too much…I also took the necessary time off to relax & switch off from work!

But with some spare time in the evenings, I decided to look a bit deeper into the world of DevOps (more specifically, Azure DevOps), and how it works. I’ll admit that I did have some ulterior motives around it (for a project that I’m working on), but it was good to be able to get some time to do this.

So why am I writing this post? Well, there’s a variety of great material out there already around DevOps, such as https://benediktbergmann.eu/ by Benedikt (check out his Twitter here), who’s really great at this. I chat to him from time to time around DevOps, to be able to understand it better.

However, I ran into some quite interesting behaviour (which I STILL have no explanation for, but more on this later), and thought that I would document it.

Right – let’s start off with manual deployments. As we know, manual deployments are done through the user interface. A user (with necessary permissions) would do the following:

  1. Go into the DEV environment, and export the solution (regardless of whether this is managed or unmanaged)
  2. Go into the target environment, and import the solution

Pretty simple, right?

Now, from a DevOps point of view, the process is similar, though not quite the same. Let’s see how it works:

  1. Run a Build pipeline, which will export the solution from the DEV environment, and put it into the repository
  2. Run a Release pipeline, which will get the solution from the repository, and deploy it to the necessary environment/s

All of that runs (usually) quite smoothly, which is great.

Now, let’s talk for a minute about managed solutions. I’m not going to get into the (heated) discussion around managed vs unmanaged solutions. There’s enough that’s been written, said, and debated around the topic to date, and I’m sure it will continue. Obviously we all know that the Microsoft best practice approach is to use managed solutions in all non-DEV environments.

Anyway – why am I bringing this up? Well, there’s one key difference in behaviour when deploying a managed solution vs an unmanaged solution (for a newer solution version), and this is to do with removing functionality from the solution in the DEV environment:

  • When deploying an unmanaged solution, it’s possible to remove items from the solution in the DEV environment, but when deploying to other environments, those items will still remain, even though they’re no longer present in the solution. Unmanaged solution deployments are additive only, and will not remove any components
  • When deploying a managed solution, any items removed from the solution in the DEV environment will also be removed from the other environments when the solution is deployed to them. Managed deployments are both additive & subtractive (ie if a component isn’t present in the solution, it will be removed when the solution is deployed)

Now most of us know this already, which is great. It’s a very useful way to handle matters, and can assist with handling a variety of scenarios.

So, let’s go back to my first question – why am I writing this post? Well..it’s because of the different behaviour in manual vs automated deployment, which I discovered. Let’s look at this.

When deploying manually, we get the following options:

The default behaviour (outlined above) is to UPGRADE the solution. This will apply the solution with both additive & subtractive behaviour. This is what we’re generally used to, and essentially the behaviour that we’d expect with a managed solution.

Now, when running a release pipeline from Azure DevOps, we’d expect this to work in the same way. After all, systems should be built to all work in the same way, right?

Well, no, that’s not actually what happens. See, when an Azure DevOps release pipeline runs, the default behaviour is NOT to import the solution (we’re talking managed solutions here) as an upgrade. Instead (by default), it imports it as an UPDATE!!!

This is what was really confusing me. I had removed functionality in DEV, ran the build pipeline, then ran the release pipeline. However the functionality (which I had removed from DEV) was still present in UAT! It took me a while to find out what was actually happening underneath…

So how can we handle this? Well, apart from suggesting to Microsoft that they should (perhaps) make everything work in the SAME way, there’s a way to handle it within the release pipeline. For this, it’s necessary to do two things:

Firstly, on the ‘Import Solution’ task, we need to set it to import as a holding solution.

Secondly, we then need to use the ‘Apply Solution Upgrade’ task in the release pipeline.

What this will do is then upgrade the existing solution in the target environment with the holding solution that’s just been deployed.

Note: You will need to change the solution version to a higher solution number, in order for this to work properly. I’m going to write more about this another time, but it is important to know!
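
If you want to script that version bump rather than remembering to do it by hand, the version lives in the solution manifest. Here’s a rough sketch that assumes the usual SolutionPackager layout (Other/Solution.xml inside the unpacked solution folder); treat the path and element names as assumptions to verify against your own repository:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Path to the manifest in an unpacked solution (SolutionPackager layout assumed)
manifest_path = Path("MySolution/Other/Solution.xml")

tree = ET.parse(manifest_path)
version_element = tree.getroot().find("SolutionManifest/Version")

# Increment the last (revision) segment, e.g. 1.0.0.3 -> 1.0.0.4
parts = version_element.text.strip().split(".")
parts[-1] = str(int(parts[-1]) + 1)
version_element.text = ".".join(parts)

tree.write(manifest_path, encoding="utf-8", xml_declaration=True)
print(f"Solution version bumped to {version_element.text}")
```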

So in my view, this is a bit annoying, and perhaps Microsoft will change the default behaviour within DevOps at some point. But for the moment, it’s necessary to do.

Has this (or something similar) tripped you up in the past? How did you figure it out? Drop a comment below – I’d love to hear!

Environments & ‘Admin Mode’

With some recent events happening (both professional & personal), I’ve taken a slight step back from putting out posts on here. Thankfully things seem to be settling down, so I’m getting (back) into the swing of things!

I thought that it would be good to talk about a subject that I fell ‘foul’ of recently. This is around environments, and more specifically, the ‘admin mode’ that it’s possible to use on them.

So what exactly is this ‘admin mode’? Well, the aim of it is to restrict access to certain users, namely System Administrators & System Customisers. Why would we want to do this? There are several scenarios that come to mind:

  • Performing a system upgrade (such as enabling new features)
  • Changing environment type (eg Production to Sandbox, or vice-versa)
  • Restoring an environment

Essentially, any time we have operation-type work that we’re wanting to carry out. This way whatever we’re doing won’t affect users, and anything that the users are doing won’t affect things either (symbiotic relationship there!).

So as an example, if we’re doing a major release, which changes functionality within a system, we wouldn’t want users in the system carrying out their usual work, as this could cause data issues if they’re saving records during the actual release. We of course SHOULD be communicating to users that a release is going to take place, and that they shouldn’t be in the system at the time, but ‘admin mode’ is how we can truly enforce it.

Something to bear in mind as well is that if you’re going ahead & restoring an environment to a previous state (whether that’s an automatic save point, or a manual one), it will automatically put the environment into ‘admin mode’ once the restore has been completed. This is very important to keep in mind!

There are three settings around administration mode:

  1. ‘Administration Mode’. This sets whether admin mode is on or off!
  2. ‘Background Operations’. This sets whether background processes, such as workflows, Power Automate flows, and Exchange synchronisation are enabled (allowed to happen) or disabled (stopped from happening)
  3. ‘Custom Message’. This allows you to set a custom message that users (who are not a system administrator/system customiser) will see when they attempt to access the environment

So this is the scenario that tripped me up a few weeks back:

  • I was needing to restore an environment to an earlier save point (to be clear, this was NOT a production environment)
  • I went ahead with the restore, and it completed successfully
  • Given that I was doing this at night, one of my children woke up, and I had to deal with them
  • I came back to things, saw that it completed, and then went ahead with the release that I was needing to do

All seemed to go well. However, when users were testing (which admittedly was a few days later), they reported that some functionality wasn’t working. This was strange, as it had been working before the release (& the release that I did hadn’t actually touched it!).

It turned out to be Power Automate flows that just didn’t seem to be running. OK – I started to look into them, but couldn’t figure out why they hadn’t run.

Creating a test Power Automate flow didn’t seem to work either – despite running it to test it, the trigger never activated! I was quite puzzled by this, and couldn’t (initially) work out the reason.

Then I thought to check the environment settings! Lo & behold, the environment was STILL in administration mode, and the Background Operations option was disabled! Aha – I’ve found the source!

Flipping this out of administration mode thankfully then allowed all Power Automate flows to work/run, and users confirmed that functionality was indeed running as expected. As you can imagine, I was quite relieved!


Something that I hadn’t realised previously is that if you manually put an environment into administration mode, it doesn’t automatically disable background processes. However, if you restore an environment, it DOES disable background processes by default. So if you’re wanting to try out automation items within a restored environment that’s still in administration mode, you’re going to need to ensure that you toggle Background Operations back on to allow it to work!

One further thing to note as well (which I’ve already been asked about by some people, so thought that I would cover it here). I’ve mentioned above that users were in the system, but reporting that things weren’t working. Now given that the environment was in administration mode, people have asked how users could be in it! The answer is that these users actually had the System Customiser role applied to them, which is why they could get in! If they hadn’t had the role, then perhaps I might have realised things a little sooner (ie that the environment was in administration mode).

So a (good) little lesson learned, and I’ll definitely take it forwards. Has this, or anything else like it, ever tripped you up? Drop a comment below – I’d love to hear!

Good news for Power Automate Flows!

As a starter for 10, this wasn’t actually the blog post that I was going to write today. In fact, the subject of the post wasn’t even going to be about Power Automate! However, there was some really amazing news that dropped today from Microsoft, which I just couldn’t pass up being able to talk about.

You’ve guessed it – it’s about Power Automate! Well, I suppose that the post title was somewhat of a giveaway, wasn’t it…ah well. So let’s go ahead and find out what this is all about then!

To date, we’ve been able to put Power Automate flows into a solution. Well, it wasn’t there exactly at the beginning of things, but it happened somewhere along the way. This was very convenient, as we didn’t then need to deploy each one individually to different environments. Some solutions can contain dozens & dozens of flows, and we really do love to package them all up together for ease of movement.

So that was good. But there was still a (major) ‘bugbear’ (as I like to refer to them). This is the fact that after we deploy a Power Automate flow, we then need to go into it & (re)authenticate it. This is due to the fact that the connector/s that it uses contain what is referred to as a ‘secret’, and these can’t be moved across environments. As a result, we need to essentially recreate the ‘secret’ in the connector (ie authentication details) every time we move it. This is an annoyance (if you have one or two flows), and an absolute bloody nightmare if you have lots.

For the technical minded – every action in a flow is bound to a specific instance of a connection that it will use to “execute” that action. This is why when moving flows across environments, users are required to rebind every operation to a connection.

For example, I’ve been working with COVID-19 triage solutions. These contain lots of flows within them, connecting to multiple different sources, and doing different things. Every time we’ve performed a release (even if it’s just a simple update), we’ve needed to manually go through each flow, (re)authenticate them, and turn them on. If you forget one, then everything can come crashing down & not work! But there’s been no other way to do it. To represent this visually, we have the following diagram:

For each & every Power Automate flow, the connection line gets ‘broken’ when it’s deployed, and needs to be re-made.

Until now, that is. Today, Microsoft has announced the Public Preview for ‘Connection References’. Now when something is put into Preview, I usually caveat the usage of it by saying things like ‘it might go away, or not be released for a while’. But I’m going to be quietly confident about this particular piece of functionality, as I really don’t think it’s going to be pulled!

So what exactly are these? Well, in (mostly) simple terms, Connection References provide an ‘in-between’ or ‘abstraction’ layer for the connections that use them. Let’s show this visually as well:

We still need to re-authenticate the Connection Reference once we deploy things. But let’s now see how we can save ourselves a massive headache, and LOTS of time:

Oooo…now this is looking better. Instead of having to update three Power Automate flows, we only have to update the SINGLE Connection Reference that’s sitting in the middle. Now multiply that by however many flows you have (eg sending emails out, etc), and start calculating how much time you’ll now be able to spend on coffee breaks, rather than doing this manually one at a time…

We can create Connection References directly from within the solution:

We then give it a name & description, choose which connector we’re going to be using, and either select an existing connection or set a new one up:

Once we’re finished, we click ‘Create’ at the bottom. Voila – we can now see it within our solution!

Note: Interestingly enough I couldn’t actually see this within the solution after I created it, even with the component selector set to show ‘All’. How I actually got them to display was changing the component selector to ‘Connection Reference’, and they then showed up. I’m thinking that this is due to it being new today/in the process of rolling out, and am expecting it to display without any issues in the near future

Let’s take a look at a Power Automate flow itself now to see how it’s referenced. When we open an item with a connector, we can now see the following:

We’re able to select the Connection Reference that we’re wanting to use. Simple, yet so powerful.

When importing a solution containing a Connection Reference, we will be prompted during the import process to set the actual connection that should be used with it:

If you don’t have any connections set up already in the environment, you’ll be able to create a new one from the dropdown.
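
As a quick sanity check after an import, you can also query the connection references in the target environment to see which connection each one is bound to. This is a hedged sketch – the table is connectionreference, and the column names below are my assumption, so do verify them against your environment:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
TOKEN = "<oauth-access-token>"                 # obtain via your usual OAuth flow

headers = {"Authorization": f"Bearer {TOKEN}", "OData-Version": "4.0"}

# Assumed column names - verify against your environment's metadata
params = {"$select": "connectionreferencedisplayname,connectionreferencelogicalname,connectionid"}

resp = requests.get(f"{ORG_URL}/api/data/v9.2/connectionreferences", headers=headers, params=params)
resp.raise_for_status()

for ref in resp.json()["value"]:
    bound = ref.get("connectionid") or "NOT BOUND"
    print(f"{ref['connectionreferencedisplayname']} ({ref['connectionreferencelogicalname']}): {bound}")
```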

Some things to note around this:

  • During the preview phase, Microsoft has specified that a single Connection Reference can only be used by up to 16 flows. This limitation will be removed once it goes GA
  • Existing flows will not be automatically upgraded. What you can do though is export the unmanaged solution, re-import it to the same environment, and then they will be automatically created for you. The flow/s can then be edited to update them to the correct connection reference record
  • The connection name and connection reference name are not currently synchronised, so they can be different. It’s therefore best to keep the naming consistent, and not set different names for connections and their associated connection references.

In summary – this is an awesome step forward with Power Automate functionality. I’m already tasking some of the developers on the team to re-do existing solutions to use it for ease of use. How do you think it’ll best benefit you? Drop a comment below!

Handling ‘Out of Hours’

Let’s face it – we can be quite spoiled at times. As customers, we can sometimes expect companies to be available 24/7 to service our requests, needs, issues, etc. That would be wonderful, wouldn’t it! Imagine that you have a mobile phone issue at 2am – you could call up your provider, and have it handled (or a new handset sent out) immediately. That would be quite nice!

Unfortunately the real world doesn’t (always) quite work like that. Of course there are companies that operate on a multi-national or even global scale, and there’s always customer service available (Amazon – I’m thinking of you right now!).

Previously I’ve gone into how we can set operating hours for a company, so that the ability to contact a customer support agent is only shown during these times. Take a look at Handling Company Hours for a refresher on this.

But sometimes not showing the ability to contact support could potentially be counter-productive. Customers may think that our website isn’t working properly, and try to reach us through other means. This could quite well frustrate them.

Due to this, we have a nice little piece of functionality that’s now come out in Omnichannel. It’s small, simple, but yet quite brilliant in my humble opinion. This is the ability to have a chat widget available, but let customers know that it’s currently out of company hours.

To activate this, we need to open the Chat record in the Omnichannel Administration Hub, and go to the Design tab:

Quite helpfully, the section is labelled ‘Offline’! How much better could we get.

We do need to understand that (at the time of writing this post) it’s currently in Preview, with all of the usual caveats around how that works.

We have several items available here:

  • Show widget during offline hours. This is what actually activates the setting – leaving this to false won’t do anything for us!
  • Theme colour. This allows us to set the specific theme to be used during ‘offline’ hours. It’s actually really helpful, as it gives the customer a very visual indication that it’s out of hours
  • Title. The title of the chat widget, which will be displayed to the user
  • Subtitle. This allows us to place a subtitle as well, for the user to be able to see

So what does this then look like? Well, let’s take a look:

Personally I think that being able to set a theme colour for offline access gives it that little edge. Customers will become aware of this (subconsciously) when visiting the website, and come to the point of not even trying to start a chat when they see that it’s out of hours.

One MAJOR thing to bear in mind. We’re only going to be given the option to set this when we have a value set for Operating Hours. Without this being set, we won’t be shown this option. Go try it for yourself and see!

There’s not really much else to this, to be honest. But I’m liking it. I know that from a personal perspective I’ve been on various websites, and have no idea if the support chat is actually working or not. With this in place, I’m able to see that it is available for use at the correct time, and not have to wonder about it.

Have you ever thought about implementing something like this? Have you actually done so? I’d be really interested to hear from you about how you went about it – please drop a comment below!

Lookup fields & Power Automate

This is an interesting post, for several reasons. Firstly, it’s the first one in 3 weeks – I was off on holiday, and decided to take an (almost) absolute break from all things digital, which included this blog. It was actually quite refreshing, though now coming back & starting to write again does seem a bit daunting, I’ll admit!

Thankfully, whilst wondering what exactly to start with, a scenario came up that I was working on. It seemed quite simple at first, but then actually got somewhat complicated. I therefore thought it would be helpful to others if I wrote about it, so here it is.

The scenario was as follows. We had records being auto-created in the system, and needed to create child records for them. This, as I’m sure you’ll agree, is really quite simple to do with Power Automate. We also needed to set lookup values on the child record, that were already populated on the parent record (for reference purposes).

So for example, the parent record has a lookup to Country (being a separate entity), and the child record also has a lookup to Country. These need to be the same.

Being both lookup fields, I figured that I’d be able to take the value from the parent record, and simply plop it into the corresponding field on the child record in Power Automate:

So I did that – and immediately hit an error. Not just any error, but the fabled ‘Resource not found for the segment’ error!

Obviously, I did what anyone would do at first – I put it into Google & Twitter, and took a look at what came up.

The ‘problem’ was coming from using the ‘CDS Current Environment’ connector, which is the latest version available (the old one is no longer available to use). It’s really great for a lot of things, but unfortunately not so great in a few areas. See, in the old CDS Connector, you could just drop the lookup field value into the field you were wanting to populate. Power Automate had no issues with that, & it would run just fine.

However in the ‘new’ CDS Connector, you can’t just do that. Instead, you need to use an OData reference (which I haven’t done much of before, to tell the truth). So based on the blogs I had come across, I went to work to try to get this working.

Part of the challenge was that there didn’t seem to be a unified consensus in how to do it. I came across the following variations:

  • /entityname(Lookup Field Value)
  • /entityname/(Lookup Field Value)
  • /pluralentityname(Lookup Field Value)
  • /pluralentityname/(Lookup Field Value)

Somewhat confusing, as I’m sure you’d agree. Nevertheless, I ploughed through all of the different possibilities. But nothing was working – every single time, I still got the ‘segment not found’ error message. This, as you can imagine, was extremely frustrating!

Thankfully, one of my good friends was around & able to help out. Namely, Tricia Sinclair came to the rescue!

We took a look at the code I was using, and she took a look at some of her own use cases (where it had worked for her). I was starting to think down the path of needing a capital letter in the entity name (some systems can be REALLY finicky around things like that), but thankfully that wasn’t it.

Instead, it was the following. See, this was a custom entity. It turns out that for a custom entity (& heck, for all I know system entities as well) the syntax needed is ‘publisherprefix_pluralentityname(lookupfieldvalue)’. Now that’s not something that I had come across ANYWHERE at all!

Looking at it, I guess it makes sense. After all, it would technically be possible to have multiple entities with the same name, though with different publishers. As a result, the system needs to know WHICH exact entity is needed for the Power Automate flow, so it uses this syntax. Somewhat complicated (and hey – it worked without all of this in the OLD CDS Connector), but we got it to work!
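
The same convention applies if you create the record directly through the Dataverse Web API, which is a handy way of double-checking the plural (entity set) name you should be using in the flow. Here’s a sketch with made-up names (a ‘new_’ publisher prefix, a Country lookup on a custom child table); the left-hand side of the bind is the lookup’s navigation property, which usually matches the field’s schema name:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"          # placeholder
TOKEN = "<oauth-access-token>"                        # placeholder
COUNTRY_ID = "00000000-0000-0000-0000-000000000003"   # GUID taken from the parent record's lookup

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# Creating a child record and pointing its Country lookup at an existing record.
# Note the right-hand side: publisher prefix + PLURAL entity set name (new_countries),
# which is the same format the 'CDS Current Environment' connector expects.
body = {
    "new_name": "Child record",
    "new_Country@odata.bind": f"/new_countries({COUNTRY_ID})",   # assumed schema name for the lookup
}

resp = requests.post(f"{ORG_URL}/api/data/v9.2/new_childrecords", headers=headers, json=body)
resp.raise_for_status()
print("Created:", resp.headers.get("OData-EntityId"))
```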

Testing it out, everything worked smoothly. The Power Automate flows fired off without any issues, the data got created & populated, and everyone was happy.

So there you go. Another interesting little twist in syntax needed, which hopefully will NOT change in the (near) future!

Have you come across anything like this? I’d love to hear – drop a comment below around it!

Canvas Apps, Collections & Dropdown Fields

This post is based around some recent work that I’ve been doing, which includes canvas apps. For those of you who aren’t familiar with canvas apps, imagine if PowerPoint & Excel had a baby! Though I’m expecting most people who are reading this to already know all about them 🙂

So enough with the waffle, let’s get on with things…let me paint the scenario for you.

The app is aimed to be used by a contact centre. Part of their function is to capture address information. So far this has been done absolutely manually. The issue with this is that data can be typed incorrectly, or in the wrong fields. We’re also needing to enhance the data with geographic-specific information (for reporting purposes). This information isn’t known by either the callers, or by the contact centre agents (for those who are curious, it’s the unique property reference number, which is unique to every address in the UK).

Thankfully, we’ve been given a source from the client which we can look this up against. In essence, we pass a postcode to it, and values are returned (in a JSON format). This includes the data that we’re looking for. Brilliant, so far.

When we got to thinking about things, there are several ways in which we could implement this:

  • Capture the data as we are already doing, & use Power Automate to get the relevant additional information

or

  • Automate this within the canvas app itself, and even give the customer service agents a bespoke address picker!

Deciding to go with the second option (it was a no-brainer, really), we moved ahead with this. We had the details that we needed in order to hit the address lookup API. One of the developers on the team created the Custom Connector, and got it working. We tested it out, and amazingly we got information back!

The next step was to see how we could do this within the canvas app itself. Now I’m going to admit here that although I’ve HEARD great things about Collections, I had never used them myself. In fact not only had I NOT used them before, I had NO idea how they worked! That was to change VERY quickly though…

Within a few hours, I had learned enough about collections to get how they worked, and pull data into them. It was actually really simple – I used the ClearCollect command to create a collection that was fed by the API query, which then created the data into a collection table for me to use. I was very impressed!

The code to return the postcode data. We had to do some manipulation due to the API constraints

OK – so I had my data in the collection now:

What were my next steps? Well, I was wanting to achieve the following:

  • Give the customer service agents an ‘address picker’ to use. They’d enter the customer postcode, & then be presented with a list of addresses that they could pick the correct one from
  • Automatically populate the customer address fields on the form from the selected address

Well, the first item (the ‘address picker’) was simple enough. Using a dropdown field, I pointed it at the collection data. This worked great, but the dropdown was only allowing me to select a single column from the collection to display. This meant that I could only select ONE column of data to return:

I can only select a single column!

1 column from the collection

OK, I thought – should be simple enough to handle. Let’s go and concatenate column values in the dropdown, to present the interface I’m looking for:

Now that’s more like it! Much easier for the customer service agents to use. OK – onto the next stage. Let’s go & set the fields to point to the collection, match to the value that’s selected in the dropdown, and populate. Should be simple to do, right?

Well…um, no, it’s not simple to do. In fact, it’s actually impossible to do. I was expecting to point to the dropdown selected value, & have the columns returned (from the collection). I could then select which column to use for a specific field. This, however, was not the case:

You have to love the ‘.’ (or ‘dot’) notation used in canvas app code. It shows you what values are available, and saves having to do lots of typing. In this case, however, it also showed me that there was only ONE column of data to select from to display in the field. This was the ‘Result’ column.

This got me very confused. I tried going back to basics, and stripping out the concatenation in the dropdown. Wonderfully I was then presented with all of the different collection columns to use:

So let’s sum up things so far:

  • If I want to present the best option to the customer service agents (using concatenation), I can’t select different parts of the data for auto-population into fields
  • If I want to be able to auto-populate field values from the collection, I can’t use concatenation (& therefore can’t present user-friendly data to the customer service agents).

Note: Leaving aside wanting to show the house number & street, one of the main reasons for wanting to concatenate was to handle buildings that have flats (aka apartments) in them. This is stored in a different column in the collection. It would therefore be difficult to show both of these to the customer service agents

In essence, the behaviour of the dropdown field seemed to be that I couldn’t just change the displayed values without it ‘losing’ connection to the rest of the data. There was no ID that I could use to match on, or display what I wanted to.

This seemed to be a massive Catch-22. I tried various things, but couldn’t see a way out of this. I started to try to create a second collection, & concatenate fields from the first collection. This seemed like a good idea, though (with being totally new to it), I got lost. I tried various things; I even ended up managing to collect the entire data from the collection into a new column for EACH ROW!!

Thankfully, the community helped me out, in the forms of Peter Bryant & Clarissa Gillingham (I had posted about my issues on Twitter – the hashtag #poweraddicts is really great!).

With the help provided, I managed to work out the CORRECT syntax to use for the ‘AddColumns’ command. This now being in hand, I was successfully able to create a second collection & add concatenated field values to it:
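
To make the shape of that fix concrete, here’s a rough analogy in Python (this is not canvas-app code, and the column names are made up): every row keeps all of its original columns and simply gains one extra, concatenated display column, so the picker can show the friendly text while the selected row still carries everything needed to populate the form.

```python
# Hypothetical address rows, as they might sit in the first collection
addresses = [
    {"uprn": "100012345678", "flat": "",       "number": "1", "street": "High Street", "town": "Anytown"},
    {"uprn": "100012345679", "flat": "Flat A", "number": "1", "street": "High Street", "town": "Anytown"},
]

# Equivalent of building the second collection: keep every existing column
# and add one concatenated "display" column for the dropdown to show
display_rows = [
    {**row, "display": " ".join(part for part in (row["flat"], row["number"], row["street"]) if part)}
    for row in addresses
]

# The picker shows row["display"]; because the selected row still has all of
# its columns, the form fields can be populated from it directly
selected = display_rows[1]
print(selected["display"], "->", selected["uprn"], selected["town"])
```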

Now for the moments of truth. Would the dropdown show this new column, & could I point the form fields to auto-populate specific columns?


The answer….was YES! It was working! I felt SO relieved. Let’s take a peek:

This was brilliant! We’re also populating other data in the background, but that doesn’t need to be visible to the customer service agents.

So in summary, I learned about collections, & how to use them. I also learned about the limitations of dropdown controls (when referencing them from other places), but came up with a way around it. Finally I achieved the result that I was aiming for. Very pleasing all round!

Have you come across something like this in an implementation? How did you manage to handle it (if you did)? Drop a comment below – I’d love to hear all about it!

Power Automate & Lookup Fields

Recently I’ve been expanding my knowledge of Power Automate, and how it works. It really is a truly amazing tool, though there can be some quirks to things! There are so many connectors to use, though I haven’t really used that many of them to date.

Truthfully, most of my work in Power Automate is around CDS & Office 365. Occasionally I’ll dip into another system, but for the most part that keeps me busy enough. It’s not to say I don’t want to explore further, but finding the time can be quite difficult!

One of the great abilities that Power Automate has is to be able to update a record. With focusing on CDS entities for the moment, we would use the inbuilt action for this:

We’d run a query to get a specific record – this would give us the record ID (or GUID, depending on your preference). With this, we’d use the Update Record action & pass in the record GUID. After all, we need to know which record we’re going to update! So for example:

What we can then do is set values for the record. So we can pass in dynamic content, use expressions, etc. These can be from records that are part of our Power Automate query chain, or from elsewhere.

For example, I can say that when a contact’s postcode changes (or zip code for USA), go away, look up the new city, and update it (Note: I haven’t shown the postcode lookup part below):

So this is all really brilliant. Different fields have different behaviours, of course, and we need to respect that. Otherwise the Power Automate flow won’t run, and will error. This is, of course, the digital equivalent of not trying to force a square brick into a round hole!

What we can also do is clear a field value. If for example we’re wanting to remove a value from a field, we can use the NULL expression on the field. When the Power Automate flow runs, it’ll clear whichever value the field is currently holding:

Now, one of the field types available within CDS is the lookup field. I’m not going to go into what this is, as we should already know it! We can, of course, set lookup field values to populate the field, which works as expected.

However (& thanks for bearing with me so far), what happens if we want to clear a lookup field value?

Say for example that we have a task, that’s assigned out to someone. If they reject the task, we want to be able to remove them from the task record. We wouldn’t delete the task, as we still need it (& now would need to assign it to someone else). We need a way to do this.

I can hear what you’re thinking right now – mentioned above is the use of NULL, so we’d use this! Um…well, you’d think so. You can try that, but we’ve found that doesn’t always work. Additionally, that doesn’t actually seem to remove the underlying relationship that’s been put in place.

Update: Thanks to Lin Zaw Winn, who dropped me a line to let me know further information around this. The standard CDS connector (the first one that was available) allowed this to work, but the updated CDS connector (Current Environment) doesn’t allow it. Unfortunately the different connectors aren’t at parity, which is a pity!

So, there’s another way to clear lookup field values. This involves the Unrelate action that’s also available. The steps for this are as follows:

  1. Get the related record (lookup the record type, pass in the GUID for it)
  2. Use the Unrelate action to remove the connection

This will then remove the relationship, which actually results in clearing the lookup field value. In practice (for our scenario), this would look like:

Let’s take a bit of a further look at the options available here:

  • The Relationship field is the relationship between the two entities (eg here it’s Contact & Task). Thankfully you don’t need to manually type this – it’s easily selected from a dropdown list.
  • The URL field is the linked record itself

Note: It’s VERY important to have the Entity Name & URL values in the right order. I’d suggest looking up the connected record first (ie what the lookup field is pointing to), and using that as the Entity Name value. You’d then select the record where the lookup is saved on as the URL value.
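
For reference, the Unrelate action maps to the Web API’s ‘disassociate’ operation, which is a DELETE against the relationship’s $ref. Here’s a hedged sketch using a standard, documented example (clearing the primary contact lookup on an account); for a custom lookup like the task’s volunteer, you’d swap in your own entity set and navigation property names:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"          # placeholder
TOKEN = "<oauth-access-token>"                        # placeholder
ACCOUNT_ID = "00000000-0000-0000-0000-000000000004"   # placeholder account GUID

headers = {"Authorization": f"Bearer {TOKEN}", "OData-Version": "4.0"}

# Deleting the $ref removes the relationship, which clears the lookup field -
# the same outcome as the Unrelate action in the flow.
resp = requests.delete(
    f"{ORG_URL}/api/data/v9.2/accounts({ACCOUNT_ID})/primarycontactid/$ref",
    headers=headers,
)
resp.raise_for_status()
print("Lookup cleared")
```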

What I’d usually suggest as best practice is to have a condition before this takes place. As mentioned earlier, removing the lookup would happen on a record update. This is because you wouldn’t be removing a field value if you’re creating the record!

But you’re not always going to want it removed. In the scenario that I’ve been dealing with, we’re only wanting to remove the volunteer if they’ve rejected the assigned task. So our Power Automate flow is set out like this:

  • When Task record is updated
    • Filtering on the field for ‘Task Accepted’, as we could have other things being updated on the Task record that we don’t want to trigger this particular process
  • Condition to check the ‘Task Accepted’ field value
    • When it’s something other than ‘Rejected’, cancel the flow
    • When it’s ‘Rejected’, run the Unrelate process set out above, and stop flow

You can obviously build out other functionality within it as you so desire.

So with this in mind, how do you think you could benefit from this? Drop a comment below – I’d love to hear!

Updating User Settings with Power Automate

Here’s a scenario that could be all too familiar to us. We’re on-boarding users (to either Dynamics 365 or a Power Platform app), & they’re new to the environment that it’s deployed to. So they’re set up, and all ready to go. Suddenly they start asking why records created (or modified) by colleagues show up as having the wrong time on them.


Does this sound familiar? I’m sure it does to quite a few people out there! See, there’s no way to set a default system-wide time zone in Dynamics 365 (or Power Platform). At least not that I’ve come across – if you know of one, please comment below with instructions as to how to do this!

As a result, users are given the default timezone, and need to change it. This is easily done through the Personalization settings area in the app. Users click here, and then select their appropriate time-zone. Brilliant…or so you’d think.

See, when it’s one or two users, it’s generally OK to tell them to do that. However, when it’s 200 or 2000 users, you’re going to get push-back. The last thing you want is for a large number of them to start contacting you to work out how to do it (read the instructions, perhaps?).


I’ve had this scenario over the last week, where the client actually told us that they didn’t want us to tell users to update it manually. They wanted a better solution.

Well, there is a solution out there to update users. It’s the ‘User Settings Utility’ app that’s in the XrmToolBox (https://www.xrmtoolbox.com/plugins/MsCrmTools.UserSettingsUtility/). Really neat & nifty, and does just what it says on the box. Simple enough to select users (or all of them at once), select the time-zone you’re wanting to apply to them, and click a button. Hey presto – it’s been updated!

Hmm. But what if you didn’t want to have to do this manually? Or (and this is what I was dealing with), there were a decent number of users being added to the app every few days, & I didn’t want to have to do this as a manual task.

So I started digging into how the time-zone setting was actually stored. It turns out that there’s an entity called ‘User Settings’, which is associated with a User record. Oh, and if you’re going to want to take a look at this entity to see what it contains, it’s NOT available through the front end. You can’t go into the entity list and just display it (though if you’ve found a way to do this through the Power Platform NATIVELY, drop me a line, please?).

Anyhow, back to things. There’s a value for ‘TimeZoneCode’, which maps to a specific time-zone. Aha, I thought! Right – now what’s the best way that I could work out to do this automatically? Checking in with some contacts in the tech community (thanks BlackOps etc!), Power Automate was suggested, so I started to see about how I could go about it…

So, I created a Power Automate Flow (haha…I got the name right there!). On creation of a new user record, it would programmatically go away and update the value to the one for the time-zone that I wanted it to be set as. This actually worked really well.
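
For anyone wanting to see what the flow is actually doing (or to script it outside Power Automate), it boils down to updating the timezonecode column on the user’s User Settings record. A minimal sketch follows; the GUID and token are placeholders, and do verify the time zone code you need (85 has been GMT in the environments I’ve worked with) against the full list:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"        # placeholder
TOKEN = "<oauth-access-token>"                      # placeholder
USER_ID = "00000000-0000-0000-0000-000000000005"    # systemuser GUID of the new user

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# User Settings shares its key with the user record (systemuserid),
# and its entity set is 'usersettingscollection'.
body = {"timezonecode": 85}   # 85 = GMT (UK) in my environments - verify for yours

resp = requests.patch(
    f"{ORG_URL}/api/data/v9.2/usersettingscollection({USER_ID})",
    headers=headers,
    json=body,
)
resp.raise_for_status()
print("Time zone updated")
```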

The only drawback is that through the user interface, it’s not actually shown as being updated, though it has been. Or sometimes it changes, but doesn’t reflect it accurately. This is somewhat annoying, and caused me quite some confusion between checking the front end to see if things were working, & confirming through the back end (& opening records up) to see that it was. I still have NO idea why this was happening.

Before changing my settings
After changing my time zone to USA (EST)

For my specific scenario, all of the users are in the UK, so I set it to update every user on creation to the UK time-zone. Obviously if you have users in different time-zones, you’d want to set this differently. This shouldn’t be an issue though, as you can expand the Power Automate Flow and add logic conditions/branches to be able to do this.

Now I think that this is pretty cool, and I couldn’t find anything out there for this. I’ve therefore decided to release this in a small solution, for others to be able to use. Part of this is the entire list of time-zones with their specific codes, so that you can update to whichever one you need to.

I hope that this helps solve a small but annoying problem (at least it did for me). Please do provide feedback if you want to!

Thoughts around the Connection entity

I decided to write this post due to currently looking at the Connections entity. This is for a current project with a very specific purpose. When this came up, my thoughts went back to a previous project some years back when we also looked to use the Connections entity. I therefore thought that it would be good to recap & share my experience.

What are Connections?

Now, the Connections entity truly is a wonderful piece of work. It’s one of the core features that doesn’t actually get much time or effort devoted to it! However, it underpins a lot of the way that Dynamics 365 has been built to work over time.

The best way to summarise Connections is:

Connections are a very easy way to connect records without needing to create a custom relationship in the system. Connections can be used between records from the same entity, or from different entities.

See, you are able to connect one record to another record within the system. This could be account to account, account to contact, or contact to a custom entity. There are practically no limits, apart from the extent of your mind! All of this is done by leveraging the functionality that Connections brings to the table.

Note that I’m not talking about lookup fields here, which are also great, but work differently, and require creating a relationship between entities (or even within the same entity).

Just a quick reminder here that custom entities need to be enabled for connections – it doesn’t happen as standard when creating them. You can either do this when creating it, or you can edit the settings for it later:

How to use Connections

In order to connect one record to another, you need to open the first record & click the Connect button on the toolbar:

You’ll then be presented with the New Connection screen, where you’ll select the record that you want. Click the ‘All Records’ item at the top & then ‘Change View’ to select the actual entity that you’re wanting to look for:

You then select the record that you’re wanting, and save. Hey presto, the two records are now connected! To see the connected records, look at the associated ‘Connections’ setting from either record:

OK, so this is really all brilliant. For the absolute majority of situations, it works, and works well. There’s nothing better for it. There are a few small issues, such as the fact that you can’t use Business Process Flows or Business Rules for custom logic (you need to use JavaScript instead), but for the most part they work well.

Edge case scenarios & issues

However, there are some edge case scenarios that I’ve come up against, which is the whole purpose for writing this blog post.

What happens if you’re trying to use Connections to establish a hierarchy of records, eg where one record is a parent of another record? Well, you could use a lookup field instead, but if you wanted to define specific attributes for the actual relationship, that wouldn’t work.

Here’s the scenario. You’re needing to capture the relationship between different people, along with certain attributes (eg if they’re a legal guardian, or a trustee, or have power of attorney, etc). You’d think that Connections would work brilliantly for this. After all, you can modify the actual Connections entity to add custom fields onto it. So for example, you could have something like the following:

Note: I’m not referencing Connection Roles, as you can only have a single connection role per connection. In the scenarios I’m handling, I’m needing to have multiple attributes per connection.

So you create the connection between the two records, and you set the attributes that you require. All good. What’s also good to remember is that Connections are bi-directional. You can view them from either ‘side’ of the connection. Eg:

Record 1

Record 2

That’s actually really helpful & useful in the normal scheme of things. You can easily see connections from either side.

But there’s a catch, or even (in our case above), an issue. If we open up each of the two Connection records, we’ll see the following data.

Joe Bloggs connecting to Helen Sommers:

Helen Sommers connecting to Joe Bloggs:

Can you spot the issue? Of course you can! On BOTH of the connection records, the custom fields that we set have the same values. We originally connected Joe Bloggs to Helen Sommers as the Legal Guardian, Power of Attorney & Trustee. Well, if we open up the connection record from Helen Sommers, we’re seeing the same values set, just in the opposite direction!

This is actually due to how Connections work. When you create a connection from Record A to Record B, the system automatically creates a mirror Connection record from Record B to Record A. When it does this, it copies all of the values that you’ve set over to this mirror record.

So when you look at the data, you can’t actually see how the structure should work. It’s an issue. Especially if you’re passing the data to other system/s that may need to evaluate it. They just can’t understand this properly, and you’ll get some VERY unwanted results out of this.

Now, there is actually a field within Connections that shows which record is the ‘master’ (ie the one you actually created), and which one is the ‘mirror’ that the system created:

However even with this in place, we’ve found issues when using it:

  • If you’re relying on people looking at the record to see the information, they’re going to make mistakes (ie not checking this value). Given that the values are also displayed on the mirror record, this is very prone to user error, and isn’t a good way to do things
  • If passing information to another system (ie the record & the values), you need to program it to only allow it to pass records with this flag set correctly (see the sketch below). If the other system is writing back data, it also needs to be configured to write back to the same record.
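
To illustrate that second point, here’s a hedged sketch of the kind of filter an integration would need, only taking the ‘master’ side of each connection (the ismaster column); the attribute names below are what I’d expect on the Connection table, but do check them in your environment:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder
TOKEN = "<oauth-access-token>"                 # placeholder

headers = {"Authorization": f"Bearer {TOKEN}", "OData-Version": "4.0"}

# Only pull the connection records that users actually created, ignoring the
# mirror records that the system generates automatically
params = {
    "$filter": "ismaster eq true",
    "$select": "_record1id_value,_record2id_value,ismaster",
}

resp = requests.get(f"{ORG_URL}/api/data/v9.2/connections", headers=headers, params=params)
resp.raise_for_status()

for conn in resp.json()["value"]:
    print(conn["_record1id_value"], "->", conn["_record2id_value"])
```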

Summary

With all of this in mind (& especially considering that users may create connections from the ‘wrong direction’, which is quite possible to happen), it’s important to think of the best way to architect systems for regulatory purposes. Financial, legal & other statutory requirements need a system that can handle them properly & accordingly, and not leave room for error.

Therefore, if you’re looking to handle these sorts of scenarios, I’d recommend looking at implementing a custom entity for those specific connections.

Another benefit of this is to separate out these connections from the general Connections entity. That way, you’ll also be able to handle security appropriately, which is usually applicable in these sorts of situations. It will allow you to easily grant only a subset of users access (read and/or write) to this data, rather than trying to apply it to Connections (which is going to be a major headache!).