Error in Customer Insights – Data

Not a long blog post, but something that may come in handy for some people!

I was recently playing around with Customer Insights – both the Data & Journeys sides of things – for a Proof of Concept I was creating for a customer. It's definitely interesting to see how Microsoft have been evolving the product over the last year or so (which was the last time I played around with it).

One of the components that we were specifically interested in playing around with is the 'Discovery' part of Customer Insights – Data. As shown in the screenshot below, this is where you're able to use natural language to query your data, and get results back using AI. This means that you don't have to understand any specific query language (SQL, R, M etc), but rather can just 'converse' with it as you would with another person.

You'll perhaps note that there's NO mention of Copilot here, though Microsoft may at some point decide to brand this as Copilot functionality as well?

The team had loaded in the data – we had a fair number of rows (multiple millions of them!) – and gone through the unification process, enrichment process, etc. All of this was set up & working properly.

However, when I tried to go to the ‘Discovery’ tab in my own browser, I was getting an extremely strange error:

As you can see, it’s incredibly informative…NOT!!! I mean, what does ‘Value cannot be null. (Parameter ‘key’)’ actually mean to the average person?
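As an aside, that message is the standard .NET ArgumentNullException text – it's what you get when a null value is passed in as a dictionary key (or similar). Here's a purely illustrative sketch (this is NOT the actual Customer Insights code, just a demonstration of where the wording comes from):

```csharp
using System;
using System.Collections.Generic;

class Example
{
    static void Main()
    {
        var labels = new Dictionary<string, string> { { "en", "English" } };

        string userLanguage = null; // imagine a user/language setting that never got populated

        // Throws ArgumentNullException: "Value cannot be null. (Parameter 'key')"
        Console.WriteLine(labels[userLanguage]);
    }
}
```

In other words, somewhere in the service, a lookup was being done with a value that simply wasn't there. Keep that in mind for what comes next!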

At first, I thought it was something to do with the underlying data, so went back to check that. However the data seemed fine. Furthermore, other people on the team were able to access Discovery in their own browsers without any issues.

Having no other option (turning it off & on again didn’t work), I raised a support ticket with Microsoft. This was responded to in a timely fashion, and I found myself working with Rohan, the Microsoft Support Representative.

In my initial ticket submission, I had included details of what was going on, what I had clicked on, the fact that others in the team didn't have the problem, the Organisation ID, URLs – you name it!

Rohan jumped on a call with me, and it turned out to be the shortest support session I have EVER had. He asked me to change the system language to another language (I had been using English, so decided to change it to German). Once the language change had been applied, we navigated back to the 'Discovery' tab (all in German, I may add), and when the screen loaded, there was no error! Holding our breath, we then changed the language back to English (more complicated than I had imagined, having to navigate in a different language).

Once this was done, and everything was back in familiar English, the ‘Discovery’ tab then loaded without issues (again!), and I was able to go ahead and start running queries in it using natural language. It was great!

In fact, it’s actually taken me longer to type out the above than the length of the support call – it was indeed that quick! Obviously lots of praise to Rohan (he did mention he had seen this once before, which is why he knew how to fix the issue).

The bigger question in my mind is what exactly was happening/going wrong underneath – I have asked Microsoft about this, but haven't yet had a response. My guess is that something in the user/language settings hadn't been populated properly, which resulted in that error message. Updating/changing the language forced it to populate properly, and it then worked.

Have you ever seen something like this, where changing a system setting (such as language) helped resolve an issue? I'd love to hear more about it – drop a comment below!

Power BI & Dataverse Solutions

With the recent announcement that Power BI reports can be included in Power Platform solutions, LOTS of people were celebrating. Finally there would be the ability not only to include Power BI reports within solutions, but to then automate their deployment (aka ALM) as well! Celebrations all round….well, for the most part.

See, although the documentation (see Power Platform solutions can now include Power BI reports and datasets – Power Platform Release Plan | Microsoft Learn) states that Power BI reports & datasets can now be included in solutions, it doesn’t actually quite work like that.

What happens is that when Power BI reports and datasets (depending on what you're wanting to do) are included in a solution, though they do appear in the solution explorer window, they're actually just a sort of shortcut to where the components actually live (ie within Power BI itself). Exporting the solution then brings the components into the exported solution file. This can be seen quite clearly when extracting the file on your computer:

As we can see from the image above, we now have the Power BI components within it.

Note: If you were hoping to just go into it & see the Power BI report nicely, unfortunately you’re going to be disappointed. Instead, it’s exported as a ‘.pbipkg’ file, which doesn’t seem possible to open with Power BI Desktop at all!

But it’s there, and supposed to work. So let’s go ahead & import it into the destination environment. After all, this is the whole point of solutions – being able to move components between places!

Note: For the purpose of this blog post, I’m using manual ALM (ie manually exporting & importing the solution). However, the same will be true for automated ALM (eg using Azure DevOps).
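As a side note for anyone scripting this, the export/import round-trip can be driven through the Dataverse SDK. Below is a minimal sketch (the connection strings & solution name are hypothetical – it's just to illustrate the mechanics, not a production-ready pipeline):

```csharp
using System;
using System.IO;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.PowerPlatform.Dataverse.Client;

class SolutionMover
{
    static void Main()
    {
        // Hypothetical connection strings for the source & target environments
        var source = new ServiceClient("AuthType=OAuth;Url=https://contosodev.crm.dynamics.com;...");
        var target = new ServiceClient("AuthType=OAuth;Url=https://contosouat.crm.dynamics.com;...");

        // Export the solution (including any Power BI components) as managed
        var export = (ExportSolutionResponse)source.Execute(new ExportSolutionRequest
        {
            SolutionName = "MySolution", // hypothetical unique name
            Managed = true
        });
        File.WriteAllBytes("MySolution_managed.zip", export.ExportSolutionFile);

        // Import into the target environment. As we'll see further down, the account
        // running this needs rights to create Power BI workspaces!
        target.Execute(new ImportSolutionRequest
        {
            CustomizationFile = File.ReadAllBytes("MySolution_managed.zip")
        });
        Console.WriteLine("Solution imported.");
    }
}
```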

Now this can be easier said than actually done. See, it’s quite possible that you could experience an error when importing the solution into the target environment, such as the following:

The error message ('This solution contains Power BI components, so it couldn't be imported here') seems to be helpful – well, to a point. We know that there are Power BI components in the solution – after all, that's the whole point of it – but why are we not able to import it?!

Usually at this point I'd go to download the log file, and try to pinpoint the exact cause of the error. When presented with this specific error though, the log file doesn't really seem to be of much help, despite trawling through each & every line in it. All it does is confirm that there has indeed been an import error, and that it seems to be due to the Power BI components in the solution.

Just to double-check this, I removed the Power BI components, exported the solution, and then imported it into a different environment. This worked absolutely fine without any errors! So it has indeed got something to do with the Power BI components – but WHAT exactly is happening?

Well, the cause of this goes back to how Power BI components in Power Platform solutions actually work. As mentioned above, the Power BI items (report, dataset etc) are actually stored within Power BI itself. Yes, they're included in the solution when we export it, but when importing them, they don't actually get saved into Dataverse.

This is the absolutely KEY thing to know and understand. When importing a solution with Power BI components, they come in as part of the solution, but are published to Power BI. Not only are they published to Power BI, a Power BI workspace is CREATED for them to live in (which will be specific per environment – a single Power BI workspace will not be shared between multiple Power Platform environments):

What this means in reality is that when the solution is imported, the Power BI workspace is created. However, it's not created by the system itself – underneath everything, the creation of the Power BI workspace is being driven by the USER that's importing the solution. Now, if that user account does NOT have permissions to create Power BI workspaces…well then, it's going to error out, which is EXACTLY what is happening here!

So, it's absolutely vital that if you are including Power BI components in a solution, you ensure that however you're importing it, the user account has privileges to create Power BI workspaces (as well as to publish reports to an existing workspace). Without this in place, you're going to get some very confusing errors happening!

It’s also important to note that even if the solution is managed, it is still possible (with the appropriate user permissions) to edit the Power BI report & dataset. Including it in a managed solution does not lock it.

Also, I’d like to thank Laura GB for her inspiration on this topic – with my limited Power BI knowledge, I usually turn to her for advice & help with Power BI.

Have you been considering including Power BI components in your solutions, or already been doing so? Have you run into this error/issue before? Drop a note below – I’d love to hear how you managed to work out the issue!

Active or inactive, that is the question?!?!

Catchy title, right? Well I was wondering what exactly I should use for this blog post, and as you’ll see as we go through things, this is probably quite a good paraphrase to use.

So, where to start? Well, with a customer, of course! Now, this customer has been running live with a custom Dynamics 365 solution for a little while. Importantly for this story, there have not been ANY releases in quite a few months. This is of course good to bear in mind, given that we can all, um, occasionally find that a release could cause an issue, somewhere, sometimes…

Part of the capabilities that they're using is bringing in Leads, and qualifying them appropriately. As part of this process, there are various custom attributes (aka columns) that have been added to the Lead table, along with corresponding columns added to the Contact table. There's also some custom logic that, when a lead is qualified, copies the values from the Lead record to the Contact record, updating it (essentially extending the standard capabilities of the system).

This has all been working well to date, and the customer team has been very happy with their system. Until it stopped working, last week. Which was strange, as nothing seemed to have changed at all?

When trying to qualify leads in the system, they were getting the following error message:

Cryptic, right? This seemed a little more interesting as well, given that when only basic information was entered into a Lead record (eg First Name, Last Name, Phone Number), it didn't matter how many leads existed with the same information – they qualified without a problem.

However, using any custom columns that had been added to the table caused this error to occur.

The first thing that I did was to check that there had been no updates released to Production. This was confirmed as being the case. I then also checked that there had been no OTHER solutions released to Production (as this could have impacted on it). Thankfully there hadn't – the system looked to be in as fine a shape as it had been for a while.

OK – on to the next step. What updates have been released by Microsoft? Well, given that we were able to pinpoint the date that the functionality had stopped working, we went to find the corresponding Learn article about the release (Update 22102 – Release Notes | Microsoft Learn). Don't worry about clicking through to read it – there's essentially not much in it, and there's nothing at all around the Lead table or its functionality!

Continuing to dig around, I really wasn't sure what was causing this, but obviously had to work it out & figure out a fix! It was quite a dilemma.

This is where the amazing Microsoft community came into play. I noticed a post by Jeroen Scheper on one of the channels that I'm on. It turned out that he was having the same issues, so we started to collaborate on it. This both reassured me (that it wasn't just me), but also increased the confusion, as we couldn't work out what was going on underneath to cause this!

Raising it with Microsoft (we both actually raised support incidents), I had an amazing support call almost immediately. Demonstrating the problem, I was told that it was due to Duplicate Detection rules.

Now I’ll admit that this confused me somewhat. See, I had already checked the Duplicate Detection rules, but nothing had been changed, and no new rules had been implemented.

Getting the support agent to walk me through things, they told me that I had to unpublish the rules, modify a setting on them, and then re-publish the rules. This was the setting (on each one) that had to be updated:
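Incidentally, if you have a lot of rules to work through, that unpublish → update → republish cycle can also be scripted via the SDK. A rough sketch is below – note that I'm assuming the setting in question maps to the excludeinactiverecords column on the duplicaterule table (do verify this against your own environment):

```csharp
using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk;

class DuplicateRuleFixer
{
    static void Fix(ServiceClient service, Guid ruleId)
    {
        // 1. Unpublish the duplicate detection rule
        service.Execute(new UnpublishDuplicateRuleRequest { DuplicateRuleId = ruleId });

        // 2. Update the setting (assumed here to be the excludeinactiverecords column)
        var rule = new Entity("duplicaterule", ruleId);
        rule["excludeinactiverecords"] = true;
        service.Update(rule);

        // 3. Re-publish the rule
        service.Execute(new PublishDuplicateRuleRequest { DuplicateRuleId = ruleId });
    }
}
```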

This again caused me to be confused. Why was the system having issues with inactive records? Surely qualified leads are active records, just qualified (& then locked down as a result)?

Well, it turns out that my perspective of how this works is actually incorrect. As we (hopefully) all know, whilst all records have a Status value (eg Active, Inactive), records also carry a Status Reason value, which gives more granular detail behind the Status.

In fact, the 'State Code' choice value in Dataverse is restricted (we can't modify it), and seems to have some quite interesting functionality running behind it. Depending on which table is accessed, there are different options available within it.
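If you're curious to see these options for yourself, they can be pulled out of the table metadata – a quick sketch using the SDK (connection details omitted):

```csharp
using System;
using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

class StateCodeInspector
{
    static void ListStateCodes(ServiceClient service, string tableLogicalName)
    {
        // Retrieve the metadata for the statecode column of the given table
        var response = (RetrieveAttributeResponse)service.Execute(new RetrieveAttributeRequest
        {
            EntityLogicalName = tableLogicalName, // eg "lead", "contact", "task"
            LogicalName = "statecode",
            RetrieveAsIfPublished = true
        });

        // List each State Code option value & its label
        var stateAttribute = (StateAttributeMetadata)response.AttributeMetadata;
        foreach (var option in stateAttribute.OptionSet.Options)
            Console.WriteLine($"{option.Value}: {option.Label.UserLocalizedLabel?.Label}");
    }
}
```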

For example, the Lead table shows Open, Qualified & Disqualified; the Contact table shows Active & Inactive; and the Task table shows Open, Completed & Canceled.

Anyhow – it turns out that when a Lead record is qualified or disqualified, though it's not shown in the user interface, the record is actually being deactivated behind the scenes!

More information on this can be found at Qualify and convert leads to opportunity | Microsoft Learn.

So, this was the underlying reason behind the error message. Obviously Microsoft had updated something, which then caused this to fail. I don’t know how many different customers may have been (or still be?) experiencing the issue, but I think that the error message at least could be a little clearer? Perhaps including a link to the relevant Microsoft documentation page, for a start.

Well, thankfully this was put to bed, and I was quite relieved (as was the customer). And this is how I came up with the title of this blog post!

Have you ever had something similar happen to you? Drop a comment below – I’d love to hear!

Managed Solutions, & replacing a field

Well, to start with, I'm sure that I'm going to get pulled up by some people for my use of the word 'field' in the title. After all, officially it's now a 'column'! But I (still) can't let go of calling them what I've called them for over a decade, so field it is.

Now to the actual topic of this blog post, which is centred around Managed Solutions. Leaving aside the whole debate about whether we should be using managed or unmanaged solutions (& when/where to do each), there is one definitive benefit of using a managed solution.

See, unmanaged solutions are additive in nature. Work is done in the development environment, then deployed. Further work is done (additional items added, etc), and deployed, and they then appear in the downstream environments. However, if you delete an item in the development environment, it’s not removed when the solution is deployed downstream.

Managed solutions, on the other hand, are both additive & detractive. As with unmanaged solutions, items added in the development environment are also added downstream when deployed. However, if an item is removed from the solution in the development environment, it will also be removed when the solution is deployed downstream. It's one of the useful ways to ensure that you don't end up with random unused items just lying around in Production (which then have a habit of popping up in the Advanced Find window, for example). So it's really quite handy, for a lot of reasons, to go down this route.

Well, I found myself going down this route recently, but with slightly unexpected results, I’ll freely admit…

The scenario was that we had deployed a managed solution to the UAT (test) environment on a client project. Then the client changed their mind (shock & horror!!) about a specific item, and we needed to change it from a text item to a lookup item. Obviously (as per best practise, of course) this would need to be done in the development environment, and then released downstream. Given that this is a managed solution, I'd expect this to work without any issues. Well, it didn't…

We made the change in the development environment (deleting the old item, & 're-creating' it as a lookup with the same system name), exported the solution as managed, and then went to import it into the UAT environment. It took the solution file, thought about it for a while (it's somewhat of a large solution), & then errored:

Exception type: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault] Message: Attribute mdm_field is a String, but a Lookup type was specified.

Now I was somewhat confused by this message occurring. It's not the first time I've seen it over the years, but in my previous experience it's come up when handling unmanaged solutions. It happens when you delete an item in the development environment, re-create it as a different item type (with the same underlying system name), and then deploy it as unmanaged. The solution import in the second environment fails due to the difference in type (as it sees the same name). This, of course, is to be expected.

But here we’ve been using managed solutions for deployment, and as mentioned above, they’re detractive as well. The expected behaviour (at least from my side of things) would be that the system would note that the item type has changed, remove the old item, & import the new item. In my mind, that’s logical, but apparently not?

See, even managed solutions have their limitations, of which this is one. Having checked with several other people who I reached out to around this, I've discovered that it can't work in the way that I was expecting it to. Instead, a specific process has to be followed:

  1. In the development environment, remove the item, & export the solution as managed
  2. In the downstream environment(s), deploy this (interim) managed solution. This will remove the item from the environments
  3. In the development environment, re-create the item with the different system type. Then export it as managed
  4. In the downstream environments, deploy this solution. This will then add the item (with the new system type) into the environment.
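One extra wrinkle worth calling out here: as I understand it, a managed solution only removes components when it's imported as an UPGRADE (rather than a straight update). Via the SDK, that's the 'holding solution' pattern – a rough sketch below (file paths & solution name are made up):

```csharp
using System;
using System.IO;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.PowerPlatform.Dataverse.Client;

class TwoPassDeployer
{
    static void ImportAsUpgrade(ServiceClient target, string zipPath, string uniqueName)
    {
        // Stage the managed solution as a holding solution...
        target.Execute(new ImportSolutionRequest
        {
            CustomizationFile = File.ReadAllBytes(zipPath),
            HoldingSolution = true
        });

        // ...then delete & promote, which removes any components no longer in the solution
        target.Execute(new DeleteAndPromoteRequest { UniqueName = uniqueName });
    }

    static void Main()
    {
        var target = new ServiceClient("AuthType=OAuth;Url=https://contosouat.crm.dynamics.com;...");

        // Pass 1: the interim solution WITHOUT the item – removes it downstream
        ImportAsUpgrade(target, "MySolution_interim_managed.zip", "MySolution");

        // Pass 2: the final solution WITH the re-created (lookup) item
        ImportAsUpgrade(target, "MySolution_final_managed.zip", "MySolution");
    }
}
```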

This means that development & deployment teams (if separate ones) need to co-ordinate around this, to ensure it’s done in the right way. It could also be developed/exported in succession, and then imported in succession as well (either manually, or through an Azure DevOps Pipeline, for example).

This worked wonderfully for us, and to be honest, I was quite relieved after several hours of frustration with things. Even better, it was a Friday, so meant that the week could end well!

Have you ever come across this, and been frustrated as well? Have you got a similar story with something else that happened to you around solutions? Drop a comment below – I’d love to hear!

‘Ghost’ lookup value following deployment

This is something that stumped me fairly recently. It also had me puzzling over what to use as the title for this post! Let me share what happened.

I'm working on a project that's quite critical (COVID-19 related). It's a project where we've built an additional wrapper around Dynamics 365, to provide specific functionality for the pandemic. It's being rolled out (the same solution) to multiple clients, and is only using the functionality from Power Platform. No custom code at all.

Now, before going into the specifics around it, let's take a moment to revisit what a lookup field is, and what it does. Essentially a lookup field connects two tables together (wow – that felt strange not to use the word 'entity'!). On the form, it sits on the 'many' side of a 1:N relationship.

So for example, we can have a lookup from Account to Contact, to set the primary contact for the account. The user navigates to the field, searches for the record they’re wanting to associate, and saves it.

Underneath, there's a relationship that's automatically created between the two tables, showing the way that the relationship will go (ie 1:N or N:1). This is created on both sides (more on that another time, around dependencies), and most people will never need to modify it.

When I first started with this particular project, I got the solution, and deployed it into the Dev environment (for the project that I was on). On testing it out, I found something very interesting. We’re using the Case (Incident) table, and there are various lookup fields on it. One of these was already populated with a value. Hmm – that’s interesting, I thought. It was a new deployment, and we hadn’t set any static data up yet at all. So how could it already be populated?

How is this being set, when I’ve not entered it into the system as a record…

Furthermore, I was unable to save the Case record. When I tried to, I was getting an interesting error:

On drilling down into the error log (which admittedly is getting better in the details it shows, thankfully!), it turned out to be because I didn't have access to the referenced record (in the lookup field). It just didn't exist.

So the lookup field value was coming in with a hard-coded GUID (record identifier). But how was this being done, especially if there weren’t any records (of that type) in the system at all?

From my experience of things, I could think of two ways in which to populate a lookup field with a hard-coded value:

  • Through a ‘real-time’ Power Automate flow, on create of the record. It’s possible to set a GUID value in the flow, and then it would be set
  • Through custom code, running on the form. Again, it's possible to hard-code a GUID there, and then set the field (there's a rough sketch of this idea below)
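Just to illustrate that second option, here's a sketch of the idea using the SDK (server-side rather than form script, & with entirely made-up names – the point is the hard-coded GUID):

```csharp
using System;
using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk;

class CaseCreator
{
    static void Main()
    {
        var service = new ServiceClient("AuthType=OAuth;Url=https://contosodev.crm.dynamics.com;...");

        var incident = new Entity("incident");
        incident["title"] = "Example case";

        // Hard-coding a GUID like this is exactly what breaks across environments:
        // the GUID below only exists in the environment it was copied from
        incident["contoso_countryid"] = new EntityReference(
            "contoso_country", new Guid("11111111-2222-3333-4444-555555555555"));

        // In any other environment, this save fails, as the referenced record doesn't exist
        service.Create(incident);
    }
}
```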

However on checking both options, neither of them was happening. No Power Automate flows were touching the Case record, and there was no custom code at all on the Case.

It was then, digging through the other parts of the solution, that I saw various Business Rules. For those unfamiliar with these, I’ll quote from the official Microsoft documentation around them:

By combining conditions and actions, you can do any of the following with business rules:

  • Set column values
  • Clear column values
  • Set column requirement levels
  • Show or hide columns
  • Enable or disable columns
  • Validate data and show error messages
  • Create business recommendations based on business intelligence.

I’ve used Business Rules (somewhat extensively) before. However on going into the one for the Case table, I found that something was happening that I wasn’t aware could happen! It’s actually possible to set a lookup field value through it:

I spy a lookup option

Even though we’ve deployed the solution from the original development environment to a different environment, this is still set. But there are no records that are available:

I had never thought that it would be possible – to set a static value (eg a number, or some text), fine. But to set referential data? Wow.

Obviously this can be quite helpful. The bit where it's NOT helpful though is when deploying the solution to another environment (as was the situation here). It doesn't help to re-create the record that it's referring to with the same record name, as the business rule references the underlying GUID (which you can't re-create). This really does take solution deployment into a whole new perspective, where you need to be careful around these sorts of things as well.

So, something new that I've learned (I do try to learn something new each day), and specifically around an area I thought I knew quite well. It did take some time, but I'm glad that I (finally) found the root cause of it, and identified what was causing the behaviour.

Have you ever had something like this happen, where you’re searching & searching for the cause of it? Drop a line below – I’d love to hear!

Omnichannel Agent Presence Not Loading

Recently, with some of the system updates that have come out for Omnichannel, there's been an interesting issue observed. This is essentially when the Agent Presence (which signifies the availability of the agent) doesn't load within the system.

This is of course a problem. Without agents being able to set their status, it's not possible to have conversation sessions come through to them. As a result, they're not able to do their work!

It’s an interesting issue, and one that I’ve discussed with several other people who are deeply involved with Omnichannel for Customer Service. We’re not quite sure why it’s happening, but there seems to be some different things going on.

It can range from either the presence icon not showing up at all, to it showing up, but not being able to be selected/changed.

As there are already some stellar resources out there around this, I therefore would like to link to these, rather than just repeat information!

The first thing to try is the fix that Tricia Sinclair suggests at Omni Channel Engagement Hub: Fix Missing Presence Icon – Everything D365 (triciasinclair.com). Here Tricia points out how to use the new App Profile Manager to put a fix in. There’s even a really helpful video that she’s done to walk us through the steps!

The second thing to try is the fix that Victor Sanchez suggests at Omnichannel Error in Wave 2 | Victor Sanchez (victorsolaya.com). This involves changing the URL for the Channel Provider.

Have you come across this issue, and not been able to solve it? Drop a comment below, and I’ll do my best to try to help out!

Marketing & an unusual error

I'll be the first to admit that I have limited experience of Dynamics 365 for Marketing. In fact, I think that it would be stretching the description to say that I have even 'limited experience'! I've seen it once or twice, and have attended a few presentations on it, but apart from that, nada.

I do remember what it used to be like in its previous incarnation, but even then I didn’t really touch it. Customer Service (& Sales) are my forte, and I generally stick within those walls. Marketing traditionally was its own individual application, and only more recently has been rolled into the wider Dynamics 365 application suite. Even so, it still sometimes works in a somewhat interesting way, different from the rest of the system.

Inevitably I’ve had to actually do something with it for a client project, which has brought me to putting up this post. We had created a few marketing forms, surfaced them correctly, etc. It was great, and working well.

Then we realised that we needed to capture some additional information, in this case a list of Countries. There’s no standard entity for it within Dynamics 365, so we created our own, and loaded a list of countries (& associated data) into it. Fine – that was working without issues, including in the places that we needed to surface it.

Then we came to needing to surface the Country value on a marketing form, through a lookup. Simple, you'd have thought? Well, not so much. We went to create the field, and got presented with the following error as we did so:

The error says: ‘The role marketing services user does not have access to the entities you’ve chosen…’

In essence, the system was telling us that we weren't able to access the entity. Though Country is a custom entity, we were logged in as users with the System Administrator role (which automatically has access to ALL entities). This left us puzzling over what to do.

The error message, thankfully, was quite clear. It was referring to a specific security role missing privileges. In this case, it was the ‘Marketing Services User’. I therefore went to check the permissions for it, and sure enough, it didn’t have permissions on the Country entity that I had created!

Now usually if a security role is missing permissions, what we do is create a custom security role (usually copying the existing role), and add the permissions to it. Best practise is NOT to edit the default security roles. The (main) reason behind this is that Microsoft could update the security role in a later update/release, which could impact on us. We therefore use custom roles to avoid this happening (& yes, I've seen it happen & have an impact in practise!).

The fly in the soup here (lovely phrase, I know) is that we couldn't do that in this case. It seems that Dynamics 365 for Marketing relies on this specific underlying security role. Even if we had implemented a custom role, we didn't have any way of telling the system to actually use our custom role, rather than the default one that it's currently using. Quite frustrating, I tell you!

So in the end we decided to give the default security role the necessary permissions, and see what happened:
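(We did this through the UI, but for completeness, granting a role read access to a custom entity can also be done via the SDK. A rough sketch below – the entity name & prefix are made up, and you'd need the ID of the role in the relevant business unit:)

```csharp
using System;
using System.Linq;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk.Query;

class RolePermissioner
{
    static void GrantRead(ServiceClient service, Guid roleId)
    {
        // Find the read privilege for the (hypothetical) contoso_country entity –
        // privilege names follow the pattern prv{Action}{entityname}
        var query = new QueryExpression("privilege") { ColumnSet = new ColumnSet("privilegeid") };
        query.Criteria.AddCondition("name", ConditionOperator.Equal, "prvReadcontoso_country");
        var privilege = service.RetrieveMultiple(query).Entities.Single();

        // Grant it to the role, organisation-wide
        service.Execute(new AddPrivilegesRoleRequest
        {
            RoleId = roleId,
            Privileges = new[]
            {
                new RolePrivilege { PrivilegeId = privilege.Id, Depth = PrivilegeDepth.Global }
            }
        });
    }
}
```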

Having granted the security permissions to the role, & saved it, we then attempted to create the marketing form field. This time, we were successful! No errors occurred, thankfully:

So in summary, I still have no idea why this has happened. I've taken a look around, but can't find anything obvious as to how/why it actually works like this. I guess that I'd need to dig 'under the hood' somewhat to see what's actually going on, and how to deal with it appropriately. For the moment, the solution is in place, and is working.

We’ve also been very careful (as mentioned above) to add just the specific custom entity to the default security role. We haven’t touched anything else within it – all other security permissions are done (as per best practise) with custom security roles, which are then allocated appropriately to users &/or teams. Hopefully this will be fine in the long-term, though we’ll definitely be keeping our eyes on it to make sure!

Have you ever come across something like this? How did you decide to go about solving it? Drop a comment below – I’d love to hear!

Update: Thanks to the amazing Carl Cookson, it turns out that this is due to an update from Microsoft in how Marketing works. See https://docs.microsoft.com/en-gb/dynamics365/marketing/marketing-fields for more information around it. Essentially Marketing uses this role to sync to the Azure staged Marketing service, so the role needs to have the appropriate permissions.

Lookup fields & Power Automate

This is an interesting post, for several reasons. Firstly, it’s the first one in 3 weeks – I was off on holiday, and decided to take an (almost) absolute break from all things digital, which included this blog. It was actually quite refreshing, though now coming back & starting to write again does seem a bit daunting, I’ll admit!

Thankfully, whilst wondering what exactly to start with, a scenario came up that I was working on. It seemed quite simple at first, but then actually got somewhat complicated. I therefore thought it would be helpful to others if I wrote about it, so here it is.

The scenario was as follows. We had records being auto-created in the system, and needed to create child records for them. This, as I’m sure you’ll agree, is really quite simple to do with Power Automate. We also needed to set lookup values on the child record, that were already populated on the parent record (for reference purposes).

So for example, the parent record has a lookup to Country (being a separate entity), and the child record also has a lookup to Country. These need to be the same.

Being both lookup fields, I figured that I’d be able to take the value from the parent record, and simply plop it into the corresponding field on the child record in Power Automate:

So I did that – and immediately hit an error. Not just any error, but the fabled ‘Resource not found for the segment’ error!

Obviously, I did what anyone would do at first – I put it into Google & Twitter, and took a look at what came up.

The ‘problem’ was coming from using the ‘CDS Current Environment’ connector, which is the latest version available (the old one is no longer available to use). It’s really great for a lot of things, but unfortunately not so great in a few areas. See, in the old CDS Connector, you could just drop the lookup field value into the field you were wanting to populate. Power Automate had no issues with that, & it would run just fine.

However in the ‘new’ CDS Connector, you can’t just do that. Instead, you need to use an OData reference (which I haven’t done much of before, to tell the truth). So based on the blogs I had come across, I went to work to try to get this working.

Part of the challenge was that there didn't seem to be a unified consensus on how to do it. I came across the following variations:

  • /entityname(Lookup Field Value)
  • /entityname/(Lookup Field Value)
  • /pluralentityname(Lookup Field Value)
  • /pluralentityname/(Lookup Field Value)

Somewhat confusing, as I'm sure you'd agree. Nevertheless, I ploughed through all of the different possibilities. But nothing was working – every single time, I still got the 'segment not found' error message. This, as you can imagine, was extremely frustrating!

Thankfully, one of my good friends was around & able to help out. Namely, Tricia Sinclair came to the rescue!

We took a look at the code I was using, and she took a look at some of her own use cases (where it had worked for her). I was starting to think down the path of needing a capital letter in the entity name (some systems can be REALLY finicky around things like that), but thankfully it wasn't that.

Instead, it was the following. See, this was a custom entity. It turns out that for a custom entity (& heck, for all I know system entities as well) the syntax needed is ‘publisherprefix_pluralentityname(lookupfieldvalue)’. Now that’s not something that I had come across ANYWHERE at all!

Looking at it, I guess it makes sense. After all, it would technically be possible to have multiple entities with the same name, though with different publishers. As a result, the system needs to know WHICH exact entity the Power Automate flow is referring to, so it uses this syntax. Somewhat complicated (and hey – it worked without all of this in the OLD CDS Connector), but we got it to work!
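To make that concrete, here's a tiny sketch (with made-up names) of how the reference value that goes into the connector's lookup field is built up:

```csharp
using System;

class LookupReferenceBuilder
{
    static void Main()
    {
        string publisherPrefix = "contoso";    // hypothetical publisher prefix
        string pluralEntityName = "countries"; // the plural (entity set) name
        Guid countryId = new Guid("2a4b6c8d-1234-5678-9abc-def012345678"); // value from the parent record

        // e.g. "contoso_countries(2a4b6c8d-1234-5678-9abc-def012345678)"
        string lookupReference = $"{publisherPrefix}_{pluralEntityName}({countryId})";
        Console.WriteLine(lookupReference);
    }
}
```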

Testing it out, everything worked smoothly. The Power Automates fired off without any issues, the data got created & populated, and everyone was happy.

So there you go. Another interesting little twist in syntax needed, which hopefully will NOT change in the (near) future!

Have you come across anything like this? I’d love to hear – drop a comment below around it!

Omnichannel Install/Update Errors

I’ve had an interesting time over the last week or so. Several people have contacted me about trying to either install Omnichannel, or upgrading to the latest version. These differ based on what the user was trying to do.

When trying to install into a new environment, the error says ‘To add this channel, you must have an active subscription to Dynamics 365 for Customer Service Chat or Digital Marketing’. This is especially strange as a trial environment (for testing purposes) doesn’t actually require these licenses. It only requires a Customer Service Enterprise license.

When trying to upgrade an existing environment, there's a different error. This one says 'We are unable to check for upgrade as you don't have the required permissions. You need to be either a global administrator or a Dynamics 365 service administrator to check for upgrade. Transaction Id: 0cc1f6be-32f1-476c-8071-acc4d8475e63'. However the user has the Global Administrator role (which obviously also includes the Dynamics 365 Administrator role as well)!


Now I love being able to share my knowledge & help others. That’s one of the main reasons why I started this blog and why I share information that I feel is helpful & useful to the wider community. So I was more than happy to try to help the people who had reached out to me, and jumped on a screenshare session with them (using Microsoft Teams, I may add!).

They were indeed getting the errors mentioned. Nothing that I could suggest helped to rectify them. To try to diagnose & compare, I jumped into my own environment. To my absolute surprise, I was greeted by the same issues!


I knew that it hadn’t been occurring several weeks back, as I had carried out some maintenance work in my own tenant & everything was working fine. I double-checked everything on my end, and it all seemed to be set up correctly.

I therefore decided to go ahead and log a ticket with Microsoft Support. I had a sneaky feeling that it was something, somewhere, to do with the Wave 1 2020 release upgrade. This had happened 2 weeks back (since I had last been into the Omnichannel setup), and I was figuring that something could have gone wrong.

This feeling was boosted by hearing that someone else who was having the same issues had also logged a ticket with Microsoft Support, and they had resolved the issue for the affected tenant. In doing so, they had mentioned that the back-end hadn’t been configured correctly, and got it fixed.

My support agent was a lovely guy called Tomasz, based in Portugal. Emails initially exchanged, we then jumped onto a Teams screenshare session so that I could demonstrate the issues from my side. He was very helpful, and immediately got to work. Within 12 hours I had received an update from him on the situation. They had identified the problem, and were working on a fix.

I had mentioned to him that I knew it wasn’t isolated to my tenant, or even region, but that other people across the globe were also experiencing this. I suggested that whatever fix would be found should be rolled out on a global scale (if applicable).

The crux of the problem seemed to be that with the Wave 1 2020 Release, there had been a change in the architecture of the Omnichannel total solution. Everything still appeared the same through the interface, but under the hood there had been some changes (I have no idea of what actually had changed though).

For new instances (whether Trial or Production), the solution was installing with the new architecture. However all existing systems (whether Trial or Production) had the old architecture, and the Wave 1 2020 Release wasn’t upgrading it to the new one. It simply failed, giving the different error messages.

The fix that was needed was actually quite simple, and only took a few minutes. I had to spin up a new trial of Customer Service within my tenant (which would expire within 30 days). Doing this re-installed the Customer Service solution, & included the new Omnichannel architecture. As a result, after waiting around 5 minutes I was able to open the Omnichannel Administrative Settings, and upgrade my existing Omnichannel deployment. I was also able to deploy to another new environment without any issues. The problem had been solved!


Overall, this support ticket was an example of how support should be/work. I’ve had times before when it’s unfortunately not gone like this, which makes me value this all the more so.

So, lessons to learn from this. Well, if there’s an issue with deploying Omnichannel to a new environment, or upgrading an existing deployment, fire up a trial of Customer Service, and that should fix it. Brill.

I do wonder how this managed to creep in. Obviously one of the main parts of deploying any new major solution is thorough testing. Perhaps it could be that due to the size of the actual Omnichannel solution, something was overlooked somewhere? It would be good if this sort of situation could be avoided for future releases, with functionality built in to automatically upgrade the Omnichannel solution if it has an old architecture.

Update: I've actually had feedback from the Omnichannel team around this. Essentially there's something different about Trial environments, and this issue only affected them. Production environments (ie with paid-for licenses) wouldn't have experienced the issue. I don't know why they're different, but somewhere they are!

How to handle error AADSTS65001 when trying to configure Omnichannel for Dynamics 365

I’ve been contacted by several people over the last few days who have been experiencing an error when trying to get Omnichannel configured. It looks something like:

The actual text of the error is: AADSTS65001: The user or administrator has not consented to use the application with ID ’18cc9627-776c-4142-b8f5-9cd83517e3bb’ named ‘Omnichannel for Customer Service’. Send an interactive authorization request for this user and resource. Trace ID: 36dd2358-2d41-463c-a2f6-013038636400 Correlation ID: 9ac093cd-e525-4bb4-b277-3ac8e7478b6b Timestamp: 2020-01-07 12:14:15Z

No matter what people tried, they still got it. I went through the process of setting up a completely new environment – lo and behold, I got the same issue! (the screenshot above is actually from my system). Incidentally this is why it’s so important to be able to replicate an issue, so that you can confirm what’s actually causing it to happen.

Reaching out to some very helpful people at Microsoft, I (thankfully) got a quick response from them.

Essentially, there are some issues with Azure Active Directory (AAD) consent flows for applications at the moment (it’s not specific to Omnichannel). There’s a fix that’s being worked on, but no idea when it will be finished and rolled out.

They were nice enough to share with me how to address it, which is what I’m now sharing here! To fix this issue, carry out the following steps to manually grant permissions to the application:

1. In your Azure Portal, search for 'Azure Active Directory' in the search bar, click on it when it comes up, and navigate to Enterprise Applications in the left hand bar. Alternatively you can use https://ms.portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/ to get straight there.

Click it to open, and you’ll see a list of the enterprise applications that use AAD.

2. You’ll then want to search for the application that has the issue (in this case, Omnichannel)

You’ll want to double-check that the ‘Application ID’ is the SAME as the Application ID that you’re getting in the error message, especially if there are multiple results coming up in the search list!

Once you’re sure that it’s the correct application, click it to open it.

3. You’ll see a section in the left bar called ‘Security’, and under this should be an entry for ‘Permissions’. When this opens, you’ll see a button in the main window called ‘Grant admin consent for User‘.

4. Click this – it’ll cause a window to pop up, where you’ll grant permissions for the application. Once granted, the window will automatically close.

You can then go back to the place where you were experiencing the error, and it should work!
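One final aside: if you're comfortable with the Azure CLI, I believe the same admin consent can be granted in one line with 'az ad app permission admin-consent --id <application ID>' (using the Application ID from the error message). I've only gone through the portal route above myself, so do double-check this against your own setup!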