Active or inactive, that is the question?!?!

Catchy title, right? Well I was wondering what exactly I should use for this blog post, and as you’ll see as we go through things, this is probably quite a good paraphrase to use.

So, where to start? Well, with a customer, of course! Now, this customer has been running live with a custom Dynamics 365 solution for a little while. Importantly for this story, there have not been ANY releases in quite a few months. This is of course good to bear in mind, given that we can all, um, occasionally find that a release could cause an issue, somewhere, sometimes…

Part of the capabilities that they’re using is bringing in Leads, and qualifying them appropriately. As part of this process, there are various custom attributes (aka columns) that have been added to the Lead table, along with corresponding columns added to the Contact table. There’s also some custom logic that, when a lead is qualified, copies the values from the Lead to the corresponding Contact record, updating it (essentially extending the standard qualification capabilities of the system).
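As a rough illustration of what ‘qualifying a lead’ means at the platform level, here’s a minimal sketch of calling the standard QualifyLead action through the Dataverse Web API (Python with the requests library; the organisation URL, token and lead id are placeholders, and the customer’s custom Lead-to-Contact copy logic isn’t reproduced here – that lives in their own plugin/flow):

```python
import requests

# Placeholder values – you'd normally acquire the token via MSAL/Azure AD.
ORG_URL = "https://yourorg.crm.dynamics.com"   # hypothetical environment URL
TOKEN = "<bearer token>"
LEAD_ID = "00000000-0000-0000-0000-000000000000"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

# QualifyLead is the standard action that runs when a lead is qualified in the UI.
# Status 3 is the out-of-the-box 'Qualified' status reason on the Lead table.
body = {
    "CreateAccount": False,
    "CreateContact": True,      # a Contact is created/linked as part of qualification
    "CreateOpportunity": False,
    "Status": 3,
}

resp = requests.post(
    f"{ORG_URL}/api/data/v9.2/leads({LEAD_ID})/Microsoft.Dynamics.CRM.QualifyLead",
    headers=headers,
    json=body,
)
resp.raise_for_status()
print(resp.json())   # returns the records created by the qualification
```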

This has all been working well to date, and the customer team has been very happy with their system. Until it stopped working last week. Which was strange, as nothing seemed to have changed at all!

When trying to qualify leads in the system, they were getting the following error message:

Cryptic, right? This seemed a little more interesting as well, given that when only inputting basic information into a Lead record (eg First Name, Last Name, Phone Number), it didn’t matter how many leads existed with the same information – they qualified without a problem.

However, using any custom columns that had been added to the table caused this error to occur.

The first thing that I did was to check that there had been no updates released to Production. This was confirmed as being the case. I then also checked that there had been no OTHER solutions released to Production (as this could have impacted it). Thankfully there hadn’t – the system looked to be in as fine a shape as it had been for a while.

OK – on to the next step. What updates have been released by Microsoft? Well, since we were able to pinpoint the date that the functionality had stopped working, we went to find the corresponding Learn article about the release (Update 22102 – Release Notes | Microsoft Learn). Don’t worry about clicking through to read it – there’s essentially not much in it, and nothing at all around the Lead table or its functionality!

Continuing to dig around, I really wasn’t sure of what was causing this, but obviously had to work out & implement a fix! It was quite a dilemma.

This is where the amazing Microsoft community came into play. I noticed a post by Jeroen Scheper on one of the channels that I’m on. It turns out that he was having the same issues, so we started to collaborate on it. This both reassured me (that it wasn’t just me) and increased the confusion, as we couldn’t work out what was going on underneath to cause this!

Raising it with Microsoft (we both actually raised support incidents), I had an amazing support call almost immediately. Demonstrating the problem, I was told that it was due to Duplicate Detection rules.

Now I’ll admit that this confused me somewhat. See, I had already checked the Duplicate Detection rules, but nothing had been changed, and no new rules had been implemented.

Getting the support agent to walk me through things, they told me that I had to unpublish the rules, modify a setting on them, and then re-publish the rules. This was the setting (on each one) that had to be updated:
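For anyone wanting to script this rather than click through each rule, here’s a rough sketch of what that unpublish → update → republish cycle could look like against the Dataverse Web API. I’m assuming the relevant column on the duplicaterule table is excludeinactiverecords (the ‘Exclude inactive matching records’ flag) and that statecode 1 means published – treat the names as illustrative rather than gospel:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",          # token acquisition omitted
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}
API = f"{ORG_URL}/api/data/v9.2"

# Get all published duplicate detection rules (statecode 1 = published – assumption).
rules = requests.get(
    f"{API}/duplicaterules?$select=name&$filter=statecode eq 1",
    headers=HEADERS,
).json()["value"]

for rule in rules:
    rule_id = rule["duplicateruleid"]

    # 1. Unpublish the rule.
    requests.post(
        f"{API}/duplicaterules({rule_id})/Microsoft.Dynamics.CRM.UnpublishDuplicateRule",
        headers=HEADERS,
    ).raise_for_status()

    # 2. Update the 'Exclude inactive matching records' setting (column name assumed).
    requests.patch(
        f"{API}/duplicaterules({rule_id})",
        headers=HEADERS,
        json={"excludeinactiverecords": True},
    ).raise_for_status()

    # 3. Publish the rule again.
    requests.post(
        f"{API}/duplicaterules({rule_id})/Microsoft.Dynamics.CRM.PublishDuplicateRule",
        headers=HEADERS,
    ).raise_for_status()
```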

This again caused me to be confused. Why was the system having issues with inactive records? Surely qualified leads are active records, but just qualified (& then being locked down as a result)?

Well, it turns out that my perspective of how this works was actually incorrect. As we (hopefully) all know, whilst all records have a Status value (eg Active, Inactive), there are some records that also have a Status Reason value.

In fact, the ‘State Code’ choice in Dataverse is restricted (we can’t modify its options), and seems to have some quite interesting functionality running behind it. Depending on which table is accessed, there are different options available within it.

For example, the Lead table shows:

Whereas the Contact table shows:

And the Task table shows:
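If you want to compare these for yourself, the state options are visible through the metadata endpoints of the Dataverse Web API. A minimal sketch (Python, placeholder URL/token) that prints the State Code options for the Lead, Contact and Task tables:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",          # token acquisition omitted
    "Accept": "application/json",
}

for table in ["lead", "contact", "task"]:
    # Read the statecode attribute metadata, expanding its option set.
    url = (
        f"{ORG_URL}/api/data/v9.2/EntityDefinitions(LogicalName='{table}')"
        "/Attributes(LogicalName='statecode')"
        "/Microsoft.Dynamics.CRM.StateAttributeMetadata?$expand=OptionSet"
    )
    options = requests.get(url, headers=HEADERS).json()["OptionSet"]["Options"]

    labels = [o["Label"]["UserLocalizedLabel"]["Label"] for o in options]
    print(f"{table}: {labels}")
    # e.g. lead shows Open/Qualified/Disqualified, contact shows Active/Inactive,
    # and task shows Open/Completed/Canceled (per the standard tables).
```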

Anyhow – it turns out that when a Lead record is qualified or disqualified, though it’s not shown in the user interface, behind the scenes the record is actually being deactivated!

More information on this can be found at Qualify and convert leads to opportunity | Microsoft Learn.

So, this was the underlying reason behind the error message. Obviously Microsoft had updated something, which then caused this to fail. I don’t know how many different customers may have been (or still be?) experiencing the issue, but I think that the error message at least could be a little clearer? Perhaps including a link to the relevant Microsoft documentation page, for a start.

Well, thankfully this was put to bed, and I was quite relieved (as was the customer). And this is how I decided to come up with the title of this blog post!

Have you ever had something similar happen to you? Drop a comment below – I’d love to hear!

Power Platform Capacity Monitoring

If I look back at customer engagements over the last few years around Power Platform, whether it was a new capability or an existing one, there was ONE thing that stood out above all. This was the ability to track capacity usage over time, and to be honest, most organisations weren’t really doing very well at it.

For those who are unaware, there are actually three different types of capacity present within Power Platform environments. These are:

  • Data
  • File
  • Log

Each one is used for a specific purpose – broadly speaking, File holds all attachments that are uploaded directly into Dataverse, Log is used for auditing purposes, and Data holds everything else (hence the name)!

Now this data is shown within the Power Platform Admin Centre, under the ‘Resources/Capacity’ section. An example of this is:

There’s also a nice little breakdown of capacity allocation through licenses etc, which essentially shows where the available capacity has come from:

If we drill down a bit further, we can open up a specific environment, and see not only the overall usage per capacity type, but also which tables are consuming the most amount of data:

All of this is well & good so far, for someone wanting to take a look at what is currently happening. But this is a manual action – it is possible to manually export the data, but again, this isn’t automated.

It’s also not possible (at least not at this point in time) to query the underlying records that hold these values. So we’re a little stuck. If an organisation wanted to see historical data usage, and/or predict data trends (such as ‘how much capacity would we need to have in 6 months if we continued our scaling’), there’s no way to do this. At least not automatically – someone would need to store the values down manually, then report on it. A hassle, to say the least.
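To illustrate the kind of trend prediction I mean: if you did record the monthly consumption figures somewhere (manually, or via the tool described further below), even a very simple linear extrapolation gives an indication of where capacity is heading. A throwaway sketch with made-up numbers:

```python
import numpy as np

# Hypothetical 'Data' capacity consumption in GB, captured once per month.
months = np.array([0, 1, 2, 3, 4, 5])
usage_gb = np.array([12.0, 13.1, 14.5, 15.2, 16.8, 18.1])

# Fit a straight line (usage = slope * month + intercept).
slope, intercept = np.polyfit(months, usage_gb, 1)

# Project 6 months beyond the last data point.
future_month = months[-1] + 6
projected = slope * future_month + intercept
print(f"Growing ~{slope:.2f} GB/month; projected usage in 6 months: {projected:.1f} GB")
```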

Now when it comes to looking overall at Power Platform, the Centre of Excellence Starter Toolkit is really quite amazing. The Microsoft PowerCAT team continue to iterate existing functionality within it, as well as bring in new functionality.

At this point in time, however, it doesn’t have any capacity monitoring in it. Well, it sort of does – we can implement notifications to alert us when capacity reaches a certain value. But this doesn’t solve the challenge as laid out above.

So with this in mind, I set out to create a solution to handle it. I’ve always wanted to create some sort of tool for giving back to the community & helping others, and I saw this as my chance to do so (I’m in awe of the various XrmToolBox tool creators, for the record).

So, I’m releasing a capacity monitoring tool. I’m using GitHub as the host, and the repo can be accessed at https://github.com/thecrmninja/Power-Platform-Capacity-Monitoring (it was also a learning experience in how to use GitHub as a source repository, as I’d not done that before!).

Model-Driven App:

Reporting Dashboard:

This is just the first version – I have various ideas about how to iterate on it, and tweak functionality. Each release will include release notes & important information to be aware of (such as the security needed to run it). Also importantly, thanks to the amazing Matt Collins-Jones for reviewing some of my work around this.

The tool is aimed at IT/Power Platform admins who are already familiar with the Microsoft CoE toolkit solution, and have appropriate access to it.

If you find any issues, please raise an appropriate GitHub Issue item, and I’ll look into it. Also, if you have any ideas that you think could be worthwhile, please feel free to suggest them!

Finally, I’d be interested in hearing how you think this could support you or your organisation – feel free to drop a comment below!

Wave 2 2022 – Omnichannel

I know it’s taken me a week (or two) to get round to this, but I’ve had other things on the go (such as starting my new job, for instance). However it wouldn’t be this time of year without doing a summary of new features for the Wave 2 2022 release.

As with previous posts in this area, I’ll be focusing on the Customer Service side of things, and also more precisely with a focus on the Omnichannel capabilities. However, whilst previously I’ve tended to focus just on the Omnichannel items, Customer Service is now being much more tied together with the Omnichannel offering, so it makes sense to broaden things out a bit.

So let’s start taking a look at the wonders that will (hopefully!) be in store for us within a few months:

Customer Service Workspace – enhanced layout

Public Preview – August 2022. GA – October 2022

I’ve previously taken a look at some of the capabilities of the Customer Service Workspace (see Omnichannel vs Customer Service Workspace), and how they compare to Omnichannel. With Microsoft now rolling out the ability to have multi-session capabilities within it, it’s sometimes a hard decision for organisations to decide which one to use (there are some key differences though).

With the upcoming release, there are going to be new layouts for the site map (navigation menu), sessions & tabs. Some of the key changes coming are:

  • Sessions and child tabs are displayed horizontally
  • Improved handling of overflowing tabs and sessions
  • Tab bar is visible only if multiple tabs are present in a session
  • Improved site map that’s accessed from the hamburger icon with support for grouping and areas
  • Improved accessibility with 400% zoom mode
  • Increased predictability of session closure in multisession apps
  • In-app notifications aligned with the multisession navigation

These look to be quite good (I definitely wouldn’t have thought of all of them!), and I can’t wait to try them out for myself.

Single sign-on capabilities

GA – October 2022

One of the things that can be quite frustrating for customers is that if they’re interacting through live chat capabilities, and then switch over to a Power Virtual Agent, they need to re-authenticate. This is of course not quite optimal for a seamless customer experience.

Microsoft are therefore enabling single sign-on capabilities. What this means in practice is the following:

  • Authentication contexts are shared between Power Virtual Agents and Omnichannel live chat sessions. If a user authenticates in one of them, then they become authenticated across all of the capabilities. There’s no need to authenticate per communication type
  • Customers can start with an unauthenticated conversation, and then authenticate at a later point in the conversation. This will then continue as authenticated across the different channels that they’re communicating through

Voice channel – expansion of availability

GA – October 2022

The voice channel (which I still need to do a write up on!) is really amazing, allowing customers to call in directly via phone etc. It’s been rolled out already to several regions, but customers in other regions have been asking for it.

Microsoft has now confirmed that the voice channel will be available in the following countries:

  • United Kingdom
  • Canada
  • India
  • Switzerland

This is a great move – it still doesn’t mean that every country has the voice channel available, so I expect that Microsoft will keep on adding more countries for this (I know that there’s a decent number of back-end systems needed, which is why it’s taking this long to get in place).

Voicemails

GA – January 2023

This one is getting me really excited. Obviously, being able to connect to a customer service agent is important. But what if the agent isn’t around? We could of course send an email, but if we’re already connected through a specific method of contact, ideally we’d like to continue with that method.

Especially when it comes to actually calling into an organisation, it can be quite frustrating to not reach the person we’re trying to get hold of, and then need to send an email.

Voicemail capabilities, coming in early 2023, will mean that customers will be able to leave voicemails for customer service agents to pick up. The agents will be able to set up welcome messages, as well as manage & playback voicemails that have been left.

This is really cool – I’m wondering if there are going to be AI capabilities included in this in the future, so as to automatically transcribe voicemails for the agents, for instance. I don’t think that it would take a LOT more technical capability – we already have Azure Cognitive Services that audio can be fed through for a written transcription to be produced.
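Just to illustrate that speculation (this is NOT an announced feature – purely how one could wire it up today), feeding a voicemail recording through the Azure Speech SDK for a transcription is only a few lines. The key, region and file name below are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders – you'd use your own Azure Speech resource key/region.
speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="uksouth")
audio_config = speechsdk.audio.AudioConfig(filename="voicemail.wav")  # hypothetical recording

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Single-shot recognition – fine for a short voicemail clip.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Transcription:", result.text)
else:
    print("No speech recognised:", result.reason)
```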

Customer Callbacks

GA – January 2023

One of the frustrations that I think is shared universally is when contacting an organisation, and being told that you’re in a queue. Not only are you in a queue, but there may be dozens/hundreds/thousands of people ahead of you…and the number doesn’t seem to be decreasing at a rapid rate.

Some organisations offer the ability to ‘reserve’ your spot in the queue, and will call you back when you’re next. To date, this hasn’t been a feature of Omnichannel.

However, coming in early 2023, this feature will be rolling out! It will give customers the ability to keep their queue position, and to choose if they’d like a callback to happen when they’re at the front of the queue. Note that this would require a phone number to be provided, for the customer service agent to use to contact the customer.

I think that this is a nice feature, but will be curious to see how it plays out ‘in the real world’. I know that when my local doctor’s surgery implemented this, it was supposed to be great, but in practice actually didn’t work well.

I’ll be looking deeper into the different functionalities when they land, and will share them here. If there’s anything you think would be helpful to focus on, drop a comment & let me know!

Environment types, capabilities & backups

Interesting title to start a blog post with, right? I can’t tell you how much I tried to work out what to call this, but then I figured that I’d just put at a high level what I’m going to be talking about!

So let’s start at the beginning. Environments within Dataverse. An environment is essentially a container for all sorts of different components, such as data models, apps, code, etc.

Examples of what an environment can contain

Within the Power Platform, there are different types of environments. As a quick recap, currently we have the following:

  • Default. Every Power Platform tenant has a default environment. We of course shouldn’t be using this for any proper development!
  • Production. Used for any Line of Business application
  • Sandbox. A sandbox environment is any non-production Dataverse environment. Isolated from production, a sandbox environment is the place to safely develop and test application changes with low risk.
  • Trial. Used to take out a trial
  • Trial (Subscription Based). Used to take out a trial when there’s subscription licensing in place
  • Developer. Personal environment, limited to one user. Previously called the Community plan.
  • Teams. Used when an app is created within Teams, to use a Dataverse for Teams environment. Doesn’t have full Dataverse capabilities, and has various limitations
  • Support. Only able to be created by Microsoft support during a support case. Is essentially a clone of an existing environment, used for diagnosis purposes.

Now, sandbox & production environments are automatically backed up – backups occur continuously, using Azure SQL Databases underneath. It’s also possible to create a manual backup instance of an environment as well, which usually takes a few seconds to carry out (restoring a backup, on the other hand, takes quite a bit longer…).

When restoring an environment, it’s not possible to restore to a production environment (though the backup could be from a production environment). It’s only possible to restore the backup to a sandbox environment – you’d then need to promote the environment from sandbox to production.

Let’s move away from backups for a moment. When we create an environment, we have the ability to select that the environment should be enabled for Dynamics 365:

This is actually a REALLY IMPORTANT CONSIDERATION! At this point in time, it’s not possible to take an existing Power Platform Dataverse environment and later enable it for Dynamics 365. What this means is that if an organisation starts with just Power Apps, and then wants to expand into using Dynamics 365, IT’S NOT POSSIBLE TO DO THIS NATIVELY. Even Microsoft Support can’t do anything around this – you’d need to create a new environment, enable it for Dynamics 365, and then restore a backup to it.

It’s something that a lot of us would like to be in place, but we’re not sure if it’ll ever come about. This is a tweet of mine from 2019 that Charles Lamanna responded to (I was SO thrilled that he actually responded to me!!):

https://twitter.com/clamanna/status/1176629306484637696

However, it’s still not in place. As a result, we recommend to all clients that when they deploy a Dataverse environment, they toggle the switch above (Note: A Dynamics 365 license is NOT needed to toggle this). Once this has been toggled (without deploying any of the Dynamics 365 apps), the Dynamics 365 apps and functionality can be installed/deployed at a later point in time.

There are actually various capabilities, such as the Data Export Service (yes, I know it’s now been deprecated), that relied on having the environment enabled as a Dynamics 365 environment in order to work. We found this out the hard way at a client, and had to do an overnight environment re-build to get the capabilities in place.

But there’s one other thing to consider around the differences between a native Dataverse environment, and an environment which has been enabled for Dynamics 365. This is around backups.

Now, backups are of course very important (thankfully they now occur automatically, as mentioned above – I remember my on-premise days, when I needed to run these manually!). But there are also some important differences in backup behaviour when it comes to environment types. See, it turns out that environments aren’t actually equal in backup behaviour. This is what actually happens:

  • Sandbox environments (all types) – backups retained for 7 days
  • Dataverse production environment (not enabled for Dynamics 365) – backups retained for 7 days
  • Dataverse production environment (enabled for Dynamics 365) – backups retained for 28 days

See that? Having Dynamics 365 enabled for an environment gives you FOUR TIMES as much backup retention time! That’s incredible!

Dataverse Environment enabled for Dynamics 365 – 28 days of backups available!

So not only are you able to upgrade to Dynamics 365 applications at a later date, you also have more peace of mind (hopefully you don’t need to use it though!) around keeping backups for longer.

This is really cool – I hope it helps you plan your environment implementation strategy! Have you ever come up against issues when using environments, or the type/s of environment? Drop a comment below – I’d love to hear!

Sharing multiple knowledge articles

Most of us are aware of the knowledge article functionality within Dynamics 365. For those who aren’t familiar with it, knowledge articles can empower users within any organisation with access to existing information.

Types of knowledge articles can include solutions to common issues, product or feature documentation, answers to frequently asked questions (FAQs), product briefs, and more. Being able to have access to this means that customer service agents can easily answer queries, without needing to spend (lots of) time investigating what’s happening, and find resolutions.

Note: At this point in time, the Knowledge Article functionality is still a restricted table within Dataverse. It requires either a Dynamics 365 plan, Customer Engagement plan, or Customer Service Enterprise plan

It’s great to be able to share information around within an organisation. There is native functionality for this, with the ability to share a knowledge article record directly with other users by clicking the ‘Email a link’ button:

Note: This is performed by going to the Knowledge Article table, opening up a record, and carrying out the functionality from there. It cannot be done through access to Knowledge Articles on the Case form.

This will create an email (in Outlook) with a URL to the specific record:

This is of course very helpful, but is only internally facing. It’s not possible to send this to a customer who’s having an issue, as the customer wouldn’t be able to access the URL!

It’s also not particularly useful if we think about how customer service agents work, as they’d need to be moving through different areas of Dynamics 365.

Thankfully knowledge management is built into the customer service capability within the system. So for example, when we open a case record, we have the ability to search for knowledge articles directly in here:

This of course works much better from a customer service agent perspective – they have all of the functionality that they need in just one area.
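As an aside, the same knowledge article data can also be pulled programmatically from the knowledgearticle table, which can be handy if you’re building a custom search experience. A rough Web API sketch (placeholder URL/token, and I’m assuming statecode 3 is the ‘Published’ state – do verify in your environment):

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",          # token acquisition omitted
    "Accept": "application/json",
}

keyword = "password reset"                      # hypothetical search term

# Query published knowledge articles whose title contains the keyword.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/knowledgearticles",
    headers=HEADERS,
    params={
        "$select": "title,articlepublicnumber",
        "$filter": f"statecode eq 3 and contains(title,'{keyword}')",  # statecode 3 assumed = Published
        "$top": "10",
    },
)
resp.raise_for_status()
for article in resp.json()["value"]:
    print(article["articlepublicnumber"], "-", article["title"])
```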

So how can we share information directly with customers? Copying and pasting the information into a chat or email interaction seems quite manual and bothersome.

But there’s no need to do this manually, thankfully! Again, we have in-built functionality to handle this:

Clicking the little email icon on a knowledge article creates an email within Dynamics 365 (so you’ll need to have email enabled for users, to make this work!) with the information copied into it:

OK – this is much easier. We can send customers the exact information, so that they then have it to hand.

But here’s a new scenario – what if we wanted to send MULTIPLE knowledge articles to a customer at once? We could of course click the email icon each time, but that results in a separate email being created for each one, which means the customer will be receiving multiple emails. Not the most ideal scenario, surely?

Well, thanks to my amazing colleague Ryan Hunter-Stott, there is actually a way to do this! In fact, technically you could say that there are two approaches, but holistically it’s the same thing – it involves an email.

So, you can either:

  • Select to email the knowledge article to the customer, or
  • Create a blank new email

Within the email message, we have the following option to insert a knowledge article:

Clicking this brings up an interface to be able to search for knowledge articles. Clicking the envelope icon will then insert the information in the email:

Now it’s not possible to select multiple knowledge articles in this window. BUT, it IS possible to click the button to open it up again, and insert a second one. And then a third one!

This can then be sent out to the customer, with all of the information contained in it!

It’s a nice little touch, and I think it definitely beats copying & pasting information into an email manually!

Have you ever thought about this scenario? Did you find this functionality, or end up doing it in a different way? Drop a comment below – I’d love to hear!

Staying up to date with release information

Microsoft releasing new functionality can be an interesting experience, to say the least. As a cloud platform (SaaS – Software as a Service), functionality is released all the time. A user could log off on Friday for the weekend, and come back on Monday morning to find that something has changed slightly, or a new button is present in the interface. Over time, most of us have come to accept this.

However this is for the ‘smaller’ functionality parts within the system, whether that’s Dynamics 365, or Power Platform related. There are of course two MAIN release announcements each year. These are the Wave 1 (Spring) and Wave 2 (Autumn) release windows, with information about what is included in each one announced publicly. This information usually starts to be available around 4-6 weeks before the release starts to hit.

Now that’s not to say that everything within a Wave release is released in a ‘Big Bang’ moment. Far from it actually, based on my experience. Microsoft will announce what is coming as part of the Wave release, along with projected timeframes as to when it will be available. Obviously, just because something has been announced for Day X doesn’t mean that it actually arrives then, at least some of the time.

But there’s an inherent time-sink to being on top of all of this information. Firstly, people need to download the Wave release information (there’s one for Dynamics 365, and a second one for Power Platform), wade through all of the information, and somehow then remember it. Let’s just say that this can be challenging for a lot of people…

But what if there was somewhere where we could track this? Well, there hasn’t been – at least, not until now.

Microsoft have created & made available the ‘Dynamics 365 & Power Platform Release Planner’, which can be found at https://experience.dynamics.com/releaseplans:

So just as a start, this is already MUCH better than the downloadable PDF documents for wave release information (admittedly the information is also available online as a Microsoft document, but still it’s lacking in certain areas).

But there’s more to this functionality than simply presenting a list of areas. Let’s take a look into some of these.

To begin with, there’s the sitemap on the left hand side. This allows us to select a specific area of interest, whether it’s Dynamics 365 or Power Platform (amusingly this reminds me a little of a model-driven app!).

Once in an area, we can then select between Planned features, Coming Soon features, and Try Now features by using the options in the menu bar. This is a nice little piece of functionality, in my opinion, allowing us to see what falls under each ‘category’:

By default, the items are displayed in a list format. However, we’re also able to toggle the view from the menu bar to a release date format, which shows us all items grouped by release month:

There’s also some filtering functionality, allowing us to narrow down the results even further:

Opening a line item (regardless of whether it’s being displayed as a list, or arranged by date) will give further information around the specific item. It also includes a lovely little timeline widget, showing the release dates information, as well as where it’s actually up to currently (which I think is great to have it as a visual reference!):

In here, links are included to documentation around the release overview, as well as specific documentation around the selected functionality item.

Now if this was all that there was, I think that truthfully I would be quite satisfied. It’s a much more modern interface, and really looks nice. I know that various colleagues of mine would be quite satisfied as well.

But….it doesn’t stop there. There’s something else, which is really the cherry on top of the cake icing! So what is it? Well, it’s the ability to create a PERSONALISED release plan information overview.

So on each item of functionality, there’s a button called ‘+ To my plan’:

Note: You do need to be signed into the portal to have this option available to you

Clicking this will add it to a personalised release plan, which you can access from the left-side menu. Here, all of the items that you’ve selected will show up. This is really cool, I think, as it allows you to see the overall picture, but also then focus on just the areas that you’re interested in:

It’s still got all of the functionality available for filtering, date/item sorting, etc. It’s also possible to toggle back to the ‘main’ view of all release information.

So in summary, I think that this is really cool. Admittedly (as it says on the site), it’s in BETA currently. I’m hoping that it’ll stick around, and come out of Beta pretty soon! Regardless, I’m definitely starting to make use of this already in tracking the upcoming features that I’m interested in.

‘Swarming’ for Customer Service

You might be wondering as to what I mean by ‘swarming’ in the title for this post. Don’t worry – it’ll become clear pretty soon! But first of all, let’s understand the story behind this new functionality.

Where to begin? Well, let’s take a look within an organisation. It doesn’t really matter what sort of organisation it is, as most organisations will have similar scenarios overall. So, what are we actually talking about?

Customer Service is, of course, a very important function of any organisation. Customers who have purchased products may need support, or perhaps are having issues, and need them to be resolved. Customer service agents are there to handle the customer queries, and look to resolve them as soon as possible.

However, it’s possible that the customer service agents don’t actually know how to resolve the customer query/issue themselves. They can, of course, use the Knowledge Base, but that requires knowledge articles to be created & maintained.

Now within the organisation, there will be SMEs (Subject Matter Experts). These are the people who know the matter in precise detail, often being the people who have created the product and/or process to begin with. But these people aren’t usually carrying out the customer service function.

So what this means is that the customer service agents need to try to work out who might actually know the answer/s, be able to help resolve the customer issue, etc. This can take time, be laborious, and perhaps not even be possible (depending on the organisation).

Hmm. So, what if the system might be able to actually SUGGEST the right people for a problem or issue? Even better, what if the system could support them being involved directly with the record/s, regardless of whether they’re a user within Dynamics 365 or not?

Enter the swarming capability onto the Dynamics 365 scene….

The aim of swarming is to bring together the necessary experts within Dynamics 365. Now, having said that, not all users will actually be interacting directly within Dynamics 365. What happens is that a specific Teams chat is created, so that users outside of the system can see the necessary information, and give input on the situation.

This builds on the existing functionality of being able to use Teams chats directly within Dynamics 365, but takes it to a whole new level, by having the system automatically suggest relevant people within the organisation, and bring them into the swarm chat!

There are some necessary steps to configure to enable this to happen.

Firstly, Teams needs to be enabled within Dynamics 365:

Once we start to turn things on, we can then see the following. This allows us to specify the types of records that we can use swarming on. This is great, as we may be building out custom functionality using other tables, and can enable swarming on these as well.

Once Teams chat has been enabled, we can then start setting up the swarming capabilities:

As part of the setup, we have:

  • The ability to set the general message that users will see when they create a swarm
  • Activating the case form that’s used for swarming (as this will include the functionality for swarming on the case form)
  • A Power Automate flow that will be used for sending notifications & invites within Teams for suggested (internal) users
  • Creating swarm condition rules, which allows us to bring in specific conditions around skills etc

So, how does this work in practice, once the system has been initially configured?

Users can go to the relevant record, such as a case record. They’re able to select ‘Create swarm’ from the menu bar:

This then allows the user to provide a summary of what the swarm is for, the scenario, as well as selecting the skills needed for the swarm. Dynamics 365 can also suggest skills that it thinks would be helpful as well:

Users from across the organisation are matched, according to the skills identified:

Notifications are sent to them within Teams, requesting their help with the matter:

When they accept the invitation, they’re then brought into the swarm:

In fact, the members of the swarm aren’t actually accessing the swarm information within Dynamics 365. Instead, they’re seeing & interacting with the swarm within Teams itself!

Once the swarm is active, information can be shared, and a solution found. The swarm can then be successfully closed down:

This is truly amazing. Obviously collaboration on issues is important, especially when considering that we’re trying to resolve customer issues as quickly as possible! I’m also really excited about this, as I was part of the initial group that Microsoft reached out to for feedback on its capabilities.

To now be able to collaborate with users who sit outside of Dynamics 365, but have them access the necessary information to help resolve things, is just mind-blowing. So many scenarios that come to mind as to how this can really empower organisations!

Can you think of a way in which this could change things in your own organisation, or at a client? Drop a comment below – I’d love to hear more!

Searching tables within the Modern Advanced Find

Well for a start, I know that the title of this blog post is somewhat of a mouthful. It’s definitely longer than my usual titles! However I felt it important to do so, given the functionality that I’m actually going to talk about…

So here goes!

As part of the Wave 1 2022 release, both for Power Platform as well as Dynamics 365, we have the new ‘Modern Advanced Find’ capability. This replaces the (legacy) Advanced Find interface, which has been around since almost the beginning of Microsoft CRM…that’s quite a few years!

So within a model-driven app (as this covers both Power Apps as well as Dynamics 365), the classic Advanced Find was a good friend. Though using the legacy interface (& sometimes being VERY slow to load initially), we could create powerful queries through it. Being able to specify conditions and span multiple tables (whilst needing to understand the data model), we were able to show & filter data as we needed to.
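To give a flavour of the sort of query Advanced Find builds for us under the hood, here’s a rough FetchXML equivalent of ‘active accounts with their primary contact’, executed through the Web API (Python, placeholder URL/token):

```python
import urllib.parse

import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",          # token acquisition omitted
    "Accept": "application/json",
}

# Roughly what Advanced Find generates: a condition plus a linked table.
fetch_xml = """
<fetch top="10">
  <entity name="account">
    <attribute name="name" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
    <link-entity name="contact" from="contactid" to="primarycontactid" link-type="outer">
      <attribute name="fullname" />
    </link-entity>
  </entity>
</fetch>
"""

resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/accounts?fetchXml={urllib.parse.quote(fetch_xml)}",
    headers=HEADERS,
)
resp.raise_for_status()
for row in resp.json()["value"]:
    print(row.get("name"))
```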

When loading the Advanced Find interface, we could select from any of the tables within the system, with a LONG list presented to us for this purpose:

Now, just because we could see all tables (system & custom) within the list didn’t mean we could view all data within the tables. Oh no – the security roles applied to users limited what we could do.

In fact, users having security roles with NO permissions on certain tables would NOT see those tables appearing in the Advanced Find interface. Even when users had permissions on tables, but these permissions were limited (such as only being able to view our own records), the data results would be filtered based on our security role access to the records within the table.

OK – all good so far. Well, in general – there have been various complaints over the years about the Advanced Find functionality. So finally, Microsoft updated it to the ‘Modern Advanced Find’.

This needs to be enabled by a system administrator in the environment settings:

So in order to access the Modern Advanced Find, we need to do the following:

  1. Click in the search box at the top of the screen
  2. At the bottom, click the ‘Search for rows in a table using advanced filters’ (that’s a mouthful as well, isn’t it!)

After clicking this, we then get presented with the following interface:

Once we select a table (we can only select one table, as this will be the primary table used), we then switch screens to set the filters that we want to use:

Now here’s where things got a little strange. On the filter screen, we can select related tables to the primary table (ie connected through a relationship), and we get EVERY table that’s available for this. So if we’re starting with the Accounts table, we can then select from the following:

So in this list, I can see tables such as Emails, Invoices, and various others as well. In fact, it’s actually a very extensive list (limited, of course, to all tables that have a relationship in place with the Accounts table, and which the user has access to through their security role).

But if I look back at the initial list of tables, I’m MUCH more limited in my choice:

This, to me, was quite confusing. After all, what if I wanted to start the search from a different table – one that isn’t shown in this initial list?

So I started doing some digging. Initially, I thought that these tables are the ones defined in the sitemap (ie the app navigation). This could mean that I’d need to somehow create a section that shows all tables within it, just to be able to have them searchable.

Thankfully, it turns out that this isn’t actually the case. What’s happening is that with the new Modern Advanced Find, tables need to be directly associated to the APP, to be able to show up and use for search purposes.

Actually, there’s some more granularity around this. The tables available to search on (as the primary table) need to meet ALL of the following criteria:

  1. Table is part of the model-driven app
  2. Table is enabled for unified interface
  3. Table is valid for advanced find (set on the table settings)
  4. User has read access to the table (handled through security roles)

So essentially, the ability to search tables within an app is now limited to the tables that have been associated to the app itself! This could be very helpful in various scenarios, when users can be quite confused with seeing the entire list of tables.

To do this, we’d edit the app, and add the table/s to the list of tables available through the app designer (note – we don’t have to include them in the sitemap, if we don’t want to display them in the app navigation):

So this now makes sense, and I think it’s a good step forward.

Also thanks to my colleague Bill (who’s an AMAZING Customer Success Manager!) for his collaboration on this.

What are your thoughts on the Modern Advanced Find? Are you finding it better for functionality? Is there something that you feel is missing, or that you’d like to see in it? Drop a comment below – I’d love to hear!

Calculated columns not working with data migration

Interesting title, isn’t it? I thought to do something that might grab people’s attention, and this was the best that I could come up with! So, let’s get into the scenario, the issue experienced, and how we managed to resolve it.

The scenario on this project was as follows. We’ve been implementing a customer service solution for a sales company that manufactures multiple products, under multiple brands. Currently there are multiple systems used for order entry, which at some point will be moved to a single system.

However for the moment, they want to be able to carry out holistic customer service across all brands – enabling all customer service agents to have access to the same data, and customers to be serviced in the same way, regardless of brand, etc.


As a result, Dynamics 365 Customer Service was the ticket, and has many standard capabilities that address the needs of the customer.

Now, whilst sales (aka orders) will not be handled within Dynamics 365 itself, we didn’t want the customer service agents to have to look up order information in the ordering systems. Instead, we wanted to be able to bring the sales/order information into Dynamics 365 for reference (at some point it’s likely that the customer will actually use Dynamics 365 capabilities for sales as well).

In order to do this, we’ve had some amazing data architects bringing the data together into Azure Data Factory (ADF) from the multiple order systems, and then pushing the data into Dynamics 365 (users have a read-only view of it).

With bringing in the data, we were looking to capitalise on the native functionality of Dynamics 365, namely the ability for columns to be automatically calculated. An example of this would be bringing in the order line amount, the tax amount, and then having the total order line amount automatically calculated. This is standard system functionality, and when working in Dynamics 365, has many different uses across the system.

Now, it’s important to note here that as we’re not actually handling orders within Dynamics 365, we’re also not holding a ‘proper’ product list within Dynamics 365 itself. However, orders need to show product information on them (bit useless otherwise!), so we’re using the capability of ‘write-in products’.

Note: If you haven’t come across write-in products before, it’s actually a really great item. Essentially, it allows products to be entered for opportunities, quotes, orders etc (wherever products are used), but for when the product/s aren’t in the system product catalogue. Write-in products allow you to simply type the name of a product or service, & then type in the price. This is very useful if, for instance, a product isn’t yet available in the product catalogue, but you still want to be able to quote it. In our scenario, we’re using write-in products to avoid the need to manage the product catalogue itself. It’s also helpful for when you don’t want to use price lists, as all products need to be associated to a price list.

So we start off the data migration, and it’s looking good. No issues being reported by the integration…

But, then users go in to the UAT system to check through things, and find that when looking at orders, the totals aren’t being calculated:

Order line not calculating
Order not calculating either!

Hmm. That’s strange. So we started to look at what could have caused this problem…

  • Is the environment in ‘admin mode’? If an environment is in admin mode, then auto-calculations won’t work at all. Well, the environment wasn’t in admin mode, so it wasn’t that
  • Is there a plugin not firing correctly? Well, this is native Microsoft standard functionality within the platform, so unlikely, but we double-checked to make sure. No, there wasn’t anything causing issues in that dimension
  • Does it work for users, when it’s created manually within the system? Yes, it DOES work when users enter an order/order line with a product. Hmm…this was getting VERY confusing

For clarification, we didn’t want to auto-calculate the information within ADF, and then push it into the relevant Dynamics 365 columns. We wanted to be able to rely on the system working in the way that it should!

Finally, we found out why the calculated columns weren’t working. There’s actually a system setting that governs how this works:
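For those who like to check (or set) this kind of thing programmatically rather than through the UI, pricing-related auto-calculation is governed by a flag on the organization record. A rough sketch below, assuming the column is oobpricecalculationenabled (the ‘use system pricing calculation’ option) – please verify both the column name and that it’s the setting relevant to your scenario before changing anything:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",          # token acquisition omitted
    "Accept": "application/json",
    "Content-Type": "application/json",
}
API = f"{ORG_URL}/api/data/v9.2"

# There is a single organization record per environment.
org = requests.get(
    f"{API}/organizations?$select=oobpricecalculationenabled",
    headers=HEADERS,
).json()["value"][0]

print("System pricing calculation enabled:", org["oobpricecalculationenabled"])

# Turn the setting on if it isn't already (column name assumed – verify first!).
if not org["oobpricecalculationenabled"]:
    requests.patch(
        f"{API}/organizations({org['organizationid']})",
        headers=HEADERS,
        json={"oobpricecalculationenabled": True},
    ).raise_for_status()
```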

With this set, the auto calculations are now working in the system:

So, thankfully we managed to get this working, and everything went smoothly from that point.

Have you ever been caught out by something similar? I’d love to hear – please drop a comment below!

Security Roles & Assigning Records

Let’s face it, and call a spade a spade (or a shovel, depending on where in the world you happen to be). Security roles are very important within Dataverse, to control what users can (& can’t!) do within the system. Setting them up can be quite time-consuming, and troubleshooting them can sometimes be a bit of a nightmare.

Obviously we need to ensure that users can carry out the actions that they’re supposed to do, and stop them doing any actions that they’re not supposed to do. This, believe it or not, is generally common sense (which can be lacking at times, I’ll admit).

Depending on the size of the organisation, and of course the project, the number of security roles can range from a few, to a LOT!

Testing out security can take quite a bit of time, to ensure that testing covers all necessary functionality. It’s a very granular approach, and can often feel like opening a door, to then find another closed door behind the first one. Error messages appear, a resolution is implemented, then another appears, etc…

Most of us aren’t new to this, and understand that it’s vitally important to work through these. We’ve seen lots of different errors over our lifetime of projects, and can usually identify (quickly) what’s going on, and what we need to resolve.

Last week, however, I had something new occur, that I’ve never seen before. I therefore thought it might be good to talk about it, so that if it happens to others, they’ll know how to handle it!

The scenario is as follows:

  • The client is using Leads to capture initial information (we’re not using Opportunities, but that’s a whole other story)
  • Different teams of users have varying access requirements to the Leads table. Some need to be able to view, some need to be able to create/edit, and others aren’t allowed to view it at all
  • The lead process is driven by both region (where the lead is located), as well as products (which products the lead is interested in)

Now, initially we had some issues with different teams not having the right level of access, but we managed to handle those. Typically we’d see an error message along the following lines:

We’d then use this to narrow down the necessary permissions, adjust the security role, re-test, and continue (sometimes onto the next error message, but hey, that’s par for the course!).

However, just as we thought we had figured out all of the security roles, we had a small sub-set of users report an error that I had NEVER seen before.

The scenario was as follows:

  • The users were able to access Lead records. All good there.
  • The users were able to edit Lead records. All good there.
  • The users were trying to assign records (ie change the record owner) to another user. This generally worked, but when trying to assign the record to certain users, they got the following error:

Now this was a strange error. After all, the users were able to open/edit the lead record, and on checking the permissions in the security role, everything seemed to be set up alright.

The next step was to go look at the error log. In general, error logs can be a massive help (well, most of the time), assuming that the person looking at it can interpret what it means. The error log gave us the following:

As an aside, the most amusing thing about this particular error log, in my opinion, was that the HelpLink URL provided actually didn’t work! Ah well…

So on taking a look, we see that the user is missing the Read privilege (on what we’re assuming is the Lead table). This didn’t make sense – we then went back to DOUBLE-check, and indeed the user who was trying to carry out the action had read privileges on the table. It also didn’t make sense, as the user was able to open the lead record itself (disclaimer – I’ve not yet tried a security role where the user has create/write access to a table, but no read access… I’m wondering what would happen in such a scenario).

Then we had a lightbulb moment.


In truth, we should have probably figured this out before, which I’ll freely admit. See, if we take a look at the original error that the user was getting, they were getting this when trying to assign the record to another user. We had also seen that the error was only happening when the record was being assigned to certain users (ie it wasn’t happening for all users). And finally, after all, the error message title itself says ‘Assignee does not hold the required read permissions’.

So what was the issue? Well, it was actually quite simple (in hindsight!). The error was occurring when the record was being attempted to be assigned to a user that did not have any permissions to the Lead table!

What was the resolution? Well, to simply grant (read) access to the Lead table, and ensure that all necessary users had this granted to them! Thankfully a quick resolution (once we had worked out what was going on), and users were able to continue testing out the rest of the system.
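For completeness, here’s roughly what an assign operation looks like if done through the Web API – it’s simply an update of the owner lookup. The key point from this whole saga is that the user being assigned TO (not just the one doing the assigning) needs at least Read on the table. Placeholder URL, token and ids throughout:

```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
HEADERS = {
    "Authorization": "Bearer <token>",          # token acquisition omitted
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

LEAD_ID = "00000000-0000-0000-0000-000000000001"       # hypothetical lead
NEW_OWNER_ID = "00000000-0000-0000-0000-000000000002"  # hypothetical assignee (systemuser)

# Assigning a record is simply updating its ownerid lookup.
# This fails with the 'Assignee does not hold the required read permissions' error
# if NEW_OWNER_ID's security roles don't grant Read on the lead table.
resp = requests.patch(
    f"{ORG_URL}/api/data/v9.2/leads({LEAD_ID})",
    headers=HEADERS,
    json={"ownerid@odata.bind": f"/systemusers({NEW_OWNER_ID})"},
)
resp.raise_for_status()
print("Lead reassigned successfully")
```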

Has something like this ever happened to you? Drop a comment below – I’d love to hear the details!