PL-200 Microsoft Power Platform Functional Consultant

Well, the last week has been quite busy, on many fronts! One of those is having a few new exams come out in beta. I’ve already taken the PL-400 (see PL-400: Microsoft Power Platform Developer Exam for my review of it). Last Friday, the new PL-200 exam was released as well, so I scheduled it for the earliest slot I could get.

Now the PL-200 is scheduled to replace the MB-200 exam at the end of this year (2020), assuming it comes out of beta by then, of course. I remember sitting my MB-200, though I didn’t write it up at the time. Compared to some of the other exams I’ve taken, it was hefty. I’ll freely admit that I didn’t pass it on the first go – it took me three tries! The PL-200 will be required as a pre-requisite for attaining the Microsoft Certified: Power Platform Functional Consultant Associate badge.

So I’ve been expecting this new PL-200 to be quite similar, but with more of a Power Platform focus. It’s still heavy on Dynamics 365, and I wasn’t expecting that part to change. The existing MB-2xx series are also staying in place (for the moment, anyhow).

According to the official description for the exam:

Candidates for this exam perform discovery, capture requirements, engage subject matter experts and stakeholders, translate requirements, and configure Power Platform solutions and apps. They create application enhancements, custom user experiences, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates implement the design provided by and in collaboration with a solution architect and the standards, branding, and artifacts established by User Experience Designers. They design integrations to provide seamless integration with third party applications and services.

Candidates actively collaborate with quality assurance team members to ensure that solutions meet functional and non-functional requirements. They identify, generate, and deliver artifacts for packaging and deployment to DevOps engineers, and provide operations and maintenance training to Power Platform administrators.

The official Microsoft Learn page for the exam is at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-200, and I’d highly recommend going to check it out. I didn’t use it that much myself, as I felt I was on reasonable ground with my existing knowledge. It covers most of what’s needed, but (at least in my exam) there were some sneaky extras that I was NOT really expecting. Hopefully I managed to get them (mostly) right!

Once again, I sat the exam through the proctored option (ie from home). The experience went smoothly for once – sign-in was fine, there were no problems with my headset during check-in, and the exam loaded & ran without any issues at all.

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). I’ve tried to group things together as best as possible for the different subject areas.

  • Environments
    • Different types of environments, what each one is used for, how to set/switch them between the different types
    • How to handle security/restrict access as necessary
  • Field types. All of the available field types, the benefits of each, and when each type should be used
  • Data storage types. Differences between Office documents (eg Excel), CDS, SQL Server, Azure SQL. When to use each one best
  • Charts. How they’re set up, how they can be shared with other users.
  • System views. What these are, who can access them, how to set them up
  • Entity forms. The different types of forms available, how to set them up, limitations of each. When each one should be used for a given scenario
  • Model apps. Site map. What this is, how it’s used. Implementing/customising it, the different controls available & what each one does
  • Entity editable grids
    • What these are, how they can be used, how to enable & set them up
    • Limitations that they have within the system
  • Entity/record ownership. The different types of ownerships available, benefits of each, when each should be used for a given scenario
  • Data management
    • Data importing from different sources, different methods to import data
    • What is data mapping for import, and how it’s used
  • Duplicate detection. What it is, what it does, how it works. How to implement & configure it
  • Microsoft Word templates. How they can interact with Dynamics 365, how to set them up/adjust them, what they can be used for
  • Canvas Apps
    • Expression/function types, what they are, how they’re used
    • Handling data (eg collections)
    • Offline usage & data storage
    • Controls that can be used, navigating around, loading/saving data.
  • Power Virtual Agent/Chatbots.
    • Setting them up, deploying them onto websites, deploying them into Teams
    • Configuring topics, routing, handling unknown questions
    • Bot model data, including being able to access it across multiple chatbots
    • Reporting on their usage, & how customer engagements have been processed
  • Power App portals
    • Registering users, registration code process
    • Validating/confirming user accounts
    • Forms security, displaying/hiding forms & data
  • AI capabilities. AI models available. Pre-built models vs custom training, capabilities (eg text scanning), and when to use each one.
  • Omnichannel
    • What it is, when it’s used
    • How to implement & deploy it, and how to configure routing so that customers can be sent through to it
  • Automation
    • Workflows, Power Automate, Business Process Flows
    • What each one is, benefits/use cases for each one, when to use each for specific scenarios
  • Power Automate
    • What are triggers, & how do they work
    • What are actions, and how do they work
    • What are connectors, and how do they work
    • Prebuilt vs custom connectors, capabilities, and when to use each one
    • How to set up each type & configure them
    • Instant vs Scheduled vs Triggered
    • Security – how to enable/disable their use by users
  • Business Process Flows
    • What they are, how they’re used, limitations that they have
    • How to handle security for them
  • Business rules
    • What they are, how they’re used, how to set up/configure
    • How to use them in different parts of the system (eg forms, apps, etc)
    • Actions vs Conditions vs Recommendations
  • UI Flows (RPA)
    • What these are, how they are used
    • Requirements in order to use them
    • Desktop vs Cloud
    • Implementation, customisation, configuration & deployment
    • Limitations of them
    • Data extraction from runs
  • Security & Compliance
    • Security roles, security teams, security groups
    • What each one is, how it’s used
    • System auditing, what it is, how it’s used, how to implement & configure
    • How to access & run user audit log reports
  • Power BI. Setting up & sharing dashboards, setting up & configuring alerts, security options/roles & how they work with data
  • Dynamics 365 integrations. What other systems can integrate directly with Dynamics 365, & any limitations that they may have

The main surprise for me was around the UI flows, and the various questions I had on them. I’ve not played around with them (yet!), but they are really cool!

If you’re going to take this, I’d love to hear how your experience of it went. Drop a comment below for me to see!

PL-400: Microsoft Power Platform Developer Exam

I’ve been continuing with taking new exams as they come out. Having recently taken the MB-400 exam (see MB-400 Power Apps & Dynamics 365 Developer Exam), I was slightly surprised to see the announcement that it was going to be replaced!

Admittedly, I was also surprised (in a good way) that I passed the MB-400, not being a developer! It’s been quite amusing to tell people that I’m a certified Microsoft Dynamics Developer. It definitely puts a certain look on their faces, which always cracks me up.

Then again, the general approach seems to be to move all of the ‘traditional’ Dynamics 365 exams to the new Power Platform (PL) format. This obviously includes re-doing the exams to be more Power Platform centric, covering the different parts of the platform rather than just the ‘first party apps’. It’s going to be interesting to see how this landscape extends & matures over time.

The learning path came out in the summer, and is located at https://docs.microsoft.com/en-us/learn/certifications/exams/pl-400. It’s actually quite good. There’s quite a lot that overlaps with the MB-400 exam material, as well as the information that’s recently been covered by Julian Sharp & Joe Griffin.

The official description of the exam is:

Candidates for this exam design, develop, secure, and troubleshoot Power Platform solutions. Candidates implement components of a solution, including application enhancements, custom user experience, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates must have strong applied knowledge of Power Platform services, including in-depth understanding of capabilities, boundaries, and constraints. Candidates should have a basic understanding of DevOps practices for Power Platform.

Candidates should have development experience that includes Power Platform services, JavaScript, JSON, TypeScript, C#, HTML, .NET, Microsoft Azure, Microsoft 365, RESTful web services, ASP.NET, and Microsoft Power BI.

So the PL-400 was announced on the Wednesday of Ignite this year (at least in my timezone). Waking up to hear of the announcement, I went right ahead to book it! Unfortunately, there seemed to be some issues with the Pearson Vue booking system. It took around 12 hours to be sorted out, & I then managed to get it booked Wednesday evening, to take it Thursday.

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!). It’s also in beta at the moment, which means that things can obviously change.

There were a few glitches during the actual exam. One or two questions with answers that didn’t make sense (eg line 30 does X, but the code sample finished at line 18), and question numbers that seemed to jump back & forth (first time it’s happened to me). I guess that I’ve gotten used to at least ONE glitch happening somewhere, so this was par for the course.

I’ve tried to group things together as best I can (from my recollection), to make it easier to revise.

  • Model Apps.
    • Charts. How they work, what drives them, what they need in order to actually work, configuring them
    • Visualisation components for forms. What they are, examples of them, what each one does, when to use each one
    • Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
    • Entity alternate keys. What these are, when they should be used, how to set them up & configure them
    • Business Process Flows. What these are, how they can be used across different scenarios, limitations of them
    • Business Rules. What these are, how they can be used across different scenarios, limitations of them
  • Canvas apps
    • Different code types, expressions, how to use them & when to use them
    • Network connectivity, & how to handle this correctly within the app for data capture (this was an interesting one, which I’ve actually been looking at for a client project!)
    • Power Apps solution checker. How to run it, how to handle issues identified in it
  • Power Automate
    • Connectors – what these are, how to use them, security around them, querying/returning results in the correct way
    • Triggers. What is a trigger, how do they work, when to use/not use them
    • Actions. What these are, how they can be used, examples of them
    • Conditions. What these are, how to use them, types of conditions/expressions/data
    • Timeouts. How to use them, when to use them, how to configure
  • Power Virtual Agents. How to set them up, how to configure them, how to deploy them, how to connect them to other systems
  • Power App Portals. Different types, how to set them up, how to configure them, how they can work with underlying data & users
  • Solutions
    • Managed, unmanaged, differences between them, how to use each one.
    • Deploying solutions. Different methods that can be used to do it, best practice for each, when to use each one
    • Package Deployer & how to use it correctly
  • Security.
    • All of the different security types within Dynamics 365/Power Platform. Roles/Teams/Environment/Field level. How to set up, configure, use in the right way.
    • Hierarchy security
    • Wider platform security. How to use Azure Active Directory for authentication methods, what to know around this, how to set it up correctly to interact with CDS/Dynamics 365
    • What authentication methods are allowed, when/how they can be used, how to configure them
  • ‘Development type stuff’
    • APIs. The different APIs that can be used, methods that are valid with each one, the Organisation service
    • Discovery URLs. What these are, which ones are able to be used, how they’d be used/queried
    • Plugins. How to set up, how to register, how to deploy. Steps needed for each
    • Plugin debugging/troubleshooting. Synchronous vs asynchronous
    • Component types. Actions/conditions/expressions/data operations. What these are, when each is used
    • Custom ribbon buttons. What these are, different tools able to be used to create/set them up, troubleshooting them
    • JavaScript web resources. How to use these correctly, how to set them up on entities/forms/fields
    • Power Apps Component Framework (PCF). What it is, how to develop components, how to use them in the right way
  • System Design
    • Entity relationship types. What they are, what each one does, how they work, when to use them appropriately. Tools that can be used to display them for system design purposes
    • Storage considerations across different types, including CDS & Azure options
  • Azure items
    • Azure Consumption API. How to monitor, how to handle, how to change/update
    • Azure Event Grid. What it is, the different ways in which it can be used, when each source should be used
  • Dynamics 365 for Finance. Native functionality included in it

The biggest surprise for me, thinking back over it, was the inclusion of Dynamics 365 for Finance. Generally the world is split into ‘front of house’ (being Dynamics 365/Power Platform) and ‘back of house’ (Dynamics 365 for Finance & Supply Chain Management). The two don’t really overlap, though they’re supposed to be coming closer together over time. Given that this is happening, I guess it’s only natural that exam questions spanning both will come up!

Overall it was quite a good exam. Some of the more ‘code-style’ questions were somewhat out of my comfort zone, and I’ll freely admit to guessing some of the answers around them! Time will tell, as they say, to see how I’ve done in it.

I hope that this is helpful for anyone who’s thinking of taking it – good luck, and please do drop a comment below to let me know how you found it!

Lookup fields & Power Automate

This is an interesting post, for several reasons. Firstly, it’s the first one in 3 weeks – I was off on holiday, and decided to take an (almost) absolute break from all things digital, which included this blog. It was actually quite refreshing, though now coming back & starting to write again does seem a bit daunting, I’ll admit!

Thankfully, whilst I was wondering what exactly to start with, a scenario came up that I was working on. It seemed quite simple at first, but then actually got somewhat complicated. I therefore thought it would be helpful to others if I wrote about it, so here it is.

The scenario was as follows. We had records being auto-created in the system, and needed to create child records for them. This, as I’m sure you’ll agree, is really quite simple to do with Power Automate. We also needed to set lookup values on the child record that were already populated on the parent record (for reference purposes).

So for example, the parent record has a lookup to Country (being a separate entity), and the child record also has a lookup to Country. These need to be the same.

Being both lookup fields, I figured that I’d be able to take the value from the parent record, and simply plop it into the corresponding field on the child record in Power Automate:

So I did that – and immediately hit an error. Not just any error, but the fabled ‘Resource not found for the segment’ error!

Obviously, I did what anyone would do at first – I put it into Google & Twitter, and took a look at what came up.

The ‘problem’ was coming from using the ‘CDS Current Environment’ connector, which is the latest version available (the old one is no longer available to use). It’s really great for a lot of things, but unfortunately not so great in a few areas. See, in the old CDS Connector, you could just drop the lookup field value into the field you were wanting to populate. Power Automate had no issues with that, & it would run just fine.

However in the ‘new’ CDS Connector, you can’t just do that. Instead, you need to use an OData reference (which I haven’t done much of before, to tell the truth). So based on the blogs I had come across, I went to work to try to get this working.

Part of the challenge was that there didn’t seem to be a unified consensus in how to do it. I came across the following variations:

  • /entityname(Lookup Field Value)
  • /entityname/(Lookup Field Value)
  • /pluralentityname(Lookup Field Value)
  • /pluralentityname/(Lookup Field Value)

Somewhat confusing, as I’m sure you’d agree. Nevertheless, I ploughed through all of the different possibilities. But nothing was working – every single time, I still got the ‘segment not found’ error message. This, as you can imagine, was extremely frustrating!

Thankfully, one of my good friends was around & able to help out. Namely, Tricia Sinclair came to the rescue!

We took a look at the code I was using, and she took a look at some of her own use cases (where it had worked for her). I was starting to think down the path of needing a capital letter in the entity name (some systems can be REALLY finicky around things like that), but thankfully it wasn’t.

Instead, it was the following. See, this was a custom entity. It turns out that for a custom entity (& heck, for all I know system entities as well) the syntax needed is ‘publisherprefix_pluralentityname(lookupfieldvalue)’. Now that’s not something that I had come across ANYWHERE at all!
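To illustrate (with hypothetical names – say a custom Country entity with the ‘new’ publisher prefix), the value being set on the child record’s lookup field ends up looking something like this:

    new_countries(@{triggerOutputs()?['body/_new_countryid_value']})

That’s the plural entity name, prefixed with the publisher prefix, followed by the GUID of the parent record’s lookup value in brackets (the _new_countryid_value part being how the connector exposes lookup GUIDs on the trigger outputs).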

Looking at it, I guess it makes sense. After all, it would technically be possible to have multiple entities with the same name, though with different publishers. As a result, the system needs to know WHICH exact entity is needed for the Power Automate flow, so it uses this. Somewhat complicated (and hey – it worked without all of this in the OLD CDS Connector), but we got it to work!

Testing it out, everything worked smoothly. The Power Automate flows fired off without any issues, the data got created & populated, and everyone was happy.

So there you go. Another interesting little twist in syntax needed, which hopefully will NOT change in the (near) future!

Have you come across anything like this? I’d love to hear – drop a comment below around it!

PL-100: Microsoft Power Platform App Maker Exam

As many people are aware, Microsoft is changing the certification landscape somewhat. With the emergence of the Power Platform, there’s a need to test skills other than the traditional Dynamics 365 ones.

To this end, a new series (the PL-XXX) has been created. The first (main) one of these exams is the PL-100, which is the entry level exam.

You can take a look at the exam requirements & learning paths by going to https://docs.microsoft.com/en-us/learn/certifications/exams/pl-100.

Now, when I say ‘entry level’, I’m not referring to basics. This isn’t a Fundamentals exam – for that, you’ll be wanting to take a look at the PL-900 exam (which came out a while ago). To put it into perspective, the PL-200 (which is aimed to launch in September 2020) will replace the MB-200 exam!

So, the exam went live (in Beta) just over a week ago (July 17th). I’ve been waiting for this for a while, as I’ve really been wanting to see how the new exams are structured. Taking it in Beta means I’m going to have to wait (a little while) for my results to come through, but it gives me the opportunity to see the new landscape upfront.

I booked it as soon as it was available, for Wednesday July 22nd. Nicely (as mentioned above), there were already learning paths in place, so I eagerly went through them (again) in preparation. I was feeling quite prepared, but then….

See, I had signed up to attend the Power Platform Virtual Happy Hour (PPPVHH) on the same day as I had booked the exam for. Incidentally, if you haven’t come across this before, take a look. It’s hosted every month, and has some AMAZING speakers. Clarissa Gillingham presented on the ‘Infinity Form’, and it was a joy to behold. But I’m digressing.

After the event had finished, some of us remained chatting in the virtual room. Amongst them was none other than Chris Huntingford, who we all love and adore! I mentioned that I had to sign off soon to get ready for the exam. No sooner had I mentioned this than Chris said to me something along the lines of ‘BRO….WATCH OUT!! It’s REALLY HARD!!’.

I might mention here that I have a slight (friendly) rivalry with Chris, in seeing who can take newly released exams first. I had figured that he’d be so busy with everything going on that I’d get this one before he did. Little did I know that he had ALREADY taken it.

Here I was, about to go sit down for the exam, and he got me TOTALLY freaked out. I’m not sure how much of it he did on purpose, but I’m sure that when I get him into a corner, I’ll find out…one day!

Anyhow – I sat the exam, took most of the time available (pretty sure I hit the 2 hour mark), and found it quite good overall. One or two things seemed to be totally random/in the wrong place, but otherwise it was fine. Definitely much better than the MB-600 (MB-600 Solution Architect Exam), and I felt much more comfortable than I did with the MB-400 (MB-400 Power Apps & Dynamics 365 Developer Exam).

It really is very clearly aimed at app developers (both model & canvas), as well as other Power Platform skills. According to the exam description:

The app maker builds solutions to simplify, automate, and transform tasks and processes for themselves and their team where they have deep expertise in the solution domain. They are skilled in key technical business analyst tasks such as data modeling, basic UX design, requirements analysis, process analysis, etc.

The app maker creates and enforces business processes, structures digital collection of information, improves efficiency of repeatable tasks, and automates business processes.

The app maker uses the maker tools of Power Platform to solve business problems. They may have experience with Visual Basic for Applications, Excel pivot tables, Teams, and other tools. They should have a basic understanding of data models, user interface, and processes. The app maker is aware of the capabilities and limitations of available tools and understands how to apply them.

The app maker is self-directed, and solution focused. They may not have formal IT training but are comfortable using technology to solve business problems with a personal growth mindset. They understand the operational need and have a vision of the desired outcome. They approach problems with phased and iterative strategies.

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!).

  • Canvas App Test Studio. What it does, how to carry out tests in it, how to set up Test Suites, etc
  • Developing Power Automate Flows. Different types of connectors, different types of steps/actions. How to deploy properly between environments using solutions
  • Field Level Security. What it is, what it does, what it can/can’t be used for
  • Canvas Apps:
    • Publishing rights
    • Access rights
    • Versioning
    • Editing vs using
    • Sharing & security
    • Saving changes, & deploying them to users
    • Collections. What they are, what they do, how they work
    • Galleries. What they are, what they can do, how to configure them in different ways
    • Navigation around screens. How to set this up, how to pass information from one screen to the next
    • New vs Display vs Edit forms. What each one is, how each one is used
    • Charts. Which ones are available, how they’re configured
    • Using AI features, such as text/data recognition. What’s able to be used, how are they configured, what the benefits of each are
    • Versioning. How to handle this, what the benefits are
    • Accessibility for less-abled users. What options are available to facilitate this, how are they configured
  • Data Security. Different types of security available (roles/teams/access teams/business units) etc. Configuring security roles with different levels of permissions
  • Power BI Security. Showing/hiding information for specific users/teams, and how to configure this
  • Solution publishers. How these are set up, what you can modify after they’ve been set, considerations between default & other solutions
  • Business Logic. Differences between Business Process Flows, Business Rules & Power Automate. What each one can/can’t do, and is best suited for
  • Creating environments. Where to do this, how to do this, what steps are needed
  • Connecting to data sources. Different types of data connections, what each one is suited for
  • Model Apps:
    • Forms & Views. What these are, how to set up & configure them
    • Navigation, Sitemap etc.
  • Business Rules. How they work, what the different levels of scope are, how they affect functionality
  • Automation. Workflows vs Power Automate Flows. The different types (eg On Demand, Instant, Scheduled)
  • Arrays. What they are, what they do, how they work
  • DLP (Data Loss Prevention). How this works, how to set it up, different options available
  • Data field types. What each one is, how each one is used & able to be configured
  • Calculated/Rollup vs Autonumber. What each is, when to use each one

That’s quite a lot of stuff, with an emphasis on canvas app functionality & solutions. It definitely is important to ensure that you’re really on top of these. Thankfully not too much mention of Power BI (at least not in my exam), and for that I’m quite grateful!

I do have to say that in one respect, I found something quite amusing. See, on the same day as I took the exam, Microsoft Inspire was taking place. One of the major announcements was the ‘rename/rebrand’ of CDS to Dataflex (Pro). I therefore kept laughing when questions would refer to CDS again & again! Obviously I’m expecting this to change in the exam (at some point?).

In summary, I think that this is a good start for the new range of exams, and look forward to the other ones in the series coming out!

Have you taken this? What was your experience like? Drop a comment below – I’d love to hear!

Canvas Apps, Collections & Dropdown Fields

This post is based around some recent work that I’ve been doing, which includes canvas apps. For those of you who aren’t familiar with canvas apps, imagine if PowerPoint & Excel had a baby! Though I’m expecting most people who are reading this to already know all about them 🙂

So enough with the waffle, let’s get on with things…let me paint the scenario for you.

The app is aimed to be used by a contact centre. Part of their function is to capture address information. So far this has been done absolutely manually. The issue with this is that data can be typed incorrectly, or in the wrong fields. We’re also needing to enhance the data with geographic-specific information (for reporting purposes). This information isn’t known by either the callers, or by the contact centre agents (for those who are curious, it’s the unique property reference number, which is unique to every address in the UK).

Thankfully, we’ve been given a source from the client which we can look this up against. In essence, we pass a postcode to it, and values are returned (in a JSON format). This includes the data that we’re looking for. Brilliant, so far.

When we got to thinking about things, there are several ways in which we could implement this:

  • Capture the data as we are already doing, & use Power Automate to get the relevant additional information

or

  • Automate this within the canvas app itself, and even give the customer service agents a bespoke address picker!

Deciding to go with the second option (it was a no-brainer, really), we moved ahead with this. We had the details that we needed in order to hit the address lookup API. One of the developers on the team created the Custom Connector, and got it working. We tested it out, and amazingly we got information back!

The next step was to see how we could do this within the canvas app itself. Now I’m going to admit here that although I’ve HEARD great things about Collections, I had never used them myself. In fact not only had I NOT used them before, I had NO idea how they worked! That was to change VERY quickly though…

Within a few hours, I had learned enough about collections to get how they worked, and to pull data into them. It was actually really simple – I used the ClearCollect command to create a collection that was fed by the API query, which loaded the returned data into a collection table for me to use. I was very impressed!

The code to return the postcode data. We had to do some manipulation due to the API constraints
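As a rough sketch of the approach (the connector & control names here are made up, as the real ones are client-specific), it looks something like this:

    // Clear any previous results, then (re)populate the collection
    // with whatever the postcode lookup API returns
    ClearCollect(
        colAddresses,
        AddressLookup.GetAddresses( txtPostcode.Text )
    )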

OK – so I had my data in the collection now:

What were my next steps? Well, I was wanting to achieve the following:

  • Give the customer service agents an ‘address picker’ to use. They’d enter the customer postcode, & then be presented with a list of addresses that they could pick the correct one from
  • Automatically populate the customer address fields on the form from the selected address

Well, the first item (the ‘address picker’) was simple enough. Using a dropdown field, I pointed it at the collection data. This worked great, but the dropdown was only allowing me to select a single column from the collection to display. This meant that I could only show ONE column of data:

I can only select a single column!

1 column from the collection

OK, I thought – should be simple enough to handle. Let’s go and concatenate column values in the dropdown, to present the interface I’m looking for:
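A sketch of the sort of thing I mean (assuming the collection is called colAddresses, with Premise & Street columns – the real names were different):

    // Items property of the dropdown – builds a one-column table
    // of concatenated display text for the agents to pick from
    Distinct(
        colAddresses,
        Premise & ", " & Street
    )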

Now that’s more like it! Much easier for the customer service agents to use. OK – onto the next stage. Let’s go & set the fields to point to the collection, match to the value that’s selected in the dropdown, and populate. Should be simple to do, right?

Well…um, no, it’s not simple to do. In fact, it’s actually impossible to do. I was expecting to point to the dropdown selected value, & have the columns returned (from the collection). I could then select which column to use for a specific field. This, however, was not the case:

You have to love the ‘.’ (or ‘dot’) notation used in canvas app code. It shows you what values are available, and saves having to do lots of typing. In this case, however, it also showed me that there was only ONE column of data available to display in the field. This was the ‘Result’ column.

This got me very confused. I tried going back to basics, and stripping out the concatenation in the dropdown. Wonderfully I was then presented with all of the different collection columns to use:

So let’s sum up things so far:

  • If I want to present the best option to the customer service agents (using concatenation), I can’t select different parts of the data for auto-population into fields
  • If I want to be able to auto-populate field values from the collection, I can’t use concatenation (& therefore can’t present user-friendly data to the customer service agents).

Note: Leaving aside wanting to show the house number & street, one of the main reasons for wanting to concatenate was to handle buildings that have flats (aka apartments) in them. This is stored in a different column in the collection. It would therefore be difficult to show both of these to the customer service agents

In essence, the behaviour of the dropdown field seemed to be that I couldn’t just change the displayed values without it ‘losing’ connection to the rest of the data. There was no ID that I could use to match on, or display what I wanted to.

This seemed to be a massive Catch-22. I tried various things, but couldn’t see a way out of it. I started trying to create a second collection, & concatenate fields from the first collection into it. This seemed like a good idea, though (being totally new to it) I got lost, and even ended up managing to collect the entire contents of the collection into a new column on EACH ROW!!

Thankfully, the community helped me out, in the form of Peter Bryant & Clarissa Gillingham (I had posted about my issues on Twitter – the hashtag #poweraddicts is really great!).

With the help provided, I managed to work out the CORRECT syntax to use for the ‘AddColumns’ command. This now being in hand, I was successfully able to create a second collection & add concatenated field values to it:
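The general shape of it was something like this (again, the collection & column names are illustrative rather than the real ones):

    // Build a second collection from the first, keeping all of the
    // original columns & adding a concatenated display column
    ClearCollect(
        colAddressesDisplay,
        AddColumns(
            colAddresses,
            "DisplayAddress",
            Premise & ", " & Street
        )
    )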

Now for the moments of truth. Would the dropdown show this new column, & could I point the form fields to auto-populate specific columns?

Not me, but exactly how I was feeling!

The answer….was YES! It was working! I felt SO relieved. Let’s take a peek:

This was brilliant! We’re also populating other data in the background, but that doesn’t need to be visible to the customer service agents.
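With the dropdown pointed at the second collection (& displaying the concatenated column), each form field could then pull back whichever column it needed, along the lines of (control & column names illustrative again):

    // Default property of eg the street field on the form – the
    // selected dropdown row still carries ALL of the collection columns
    drpAddress.Selected.Street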

So in summary, I learned about collections, & how to use them. I also learned about the limitations of dropdown controls (when referencing them from other places), but came up with a way around it. Finally I achieved the result that I was aiming for. Very pleasing all round!

Have you come across something like this in an implementation? How did you manage to handle it (if you did)? Drop a comment below – I’d love to hear all about it!

MB-400 Power Apps & Dynamics 365 Developer Exam

I haven’t usually been putting up posts around the exams that I take. A few months back I did decide to write one on the MB-600 exam (MB-600 Solution Architect Exam), which just took off! It was quite amazing (& pleasing) how many people were looking at it, & asking me questions around the exam.

As a result, I’ve decided to continue this, and am therefore now writing this post on the MB-400 exam.

There are several different ‘ranges’ of exams within the Dynamics 365/Power Platform space. These are aimed at different types of roles, or specific specialisations within a role. A good example of this is the MB-2xx range. It covers the functional side of things, and is split across the different ‘main’ areas of Dynamics 365.

The MB-400 (the only one in the range at the moment) is aimed at developers. According to the official description for the exam:

Candidates for this exam are Developers who work with Microsoft Power Apps model-driven apps in Dynamics 365 to design, develop, secure, and extend a Dynamics 365 implementation. Candidates implement components of a solution that include application enhancements, custom user experience, system integrations, data conversions, custom process automation, and custom visualizations.

Candidates must have strong applied knowledge of Power Apps model-driven apps in Dynamics 365, including in-depth understanding of customization, configuration, integration, and extensibility, as well as boundaries and constraints. Candidates should have a basic understanding of DevOps practices for Power Apps model-driven apps in Dynamics 365. Candidates must expose, store, and report on data.

Candidates should have development experience that includes JavaScript, TypeScript, C#, HTML, .NET, Microsoft Azure, Office 365, RESTful Web Services, ASP.NET, and Power BI.

As anyone who knows me will attest, I am NOT a developer. However I decided (for several reasons) to give this one a go, and see what would happen! I knew I’d be pushing myself out of my comfort zone, there would be things I wouldn’t understand/know at ALL, but hey – I was curious to see what would happen! Even more challenging, I decided to book & take it within a 24 hour period!

Now as this has been out for a little while (& isn’t in Beta), there’s thankfully some good resources on Microsoft Learn about it. Take a look at https://docs.microsoft.com/en-us/learn/certifications/exams/mb-400, where there are several learning paths that can be followed.

A big shout out as well to Julian Sharp & Joe Griffin, who recently ran a multi-week course around it. The official Microsoft learning paths are great of course, but seem to miss out quite a bit of what you actually need to know for this exam. The course that they ran covered a lot more. Hopefully there will be more courses like this run in the future!

When passing it (& assuming that you’ve passed the MB-200 as well), you get a lovely shiny badge!

Microsoft Certified: Power Apps + Dynamics 365 Developer Associate
I’m SO proud of this!

Once again, I sat the exam through the proctored option (ie from home). The experience went somewhat better than previous times. Amusingly I got told off by the proctor during the exam for ‘looking down at the keyboard’, rather than looking at the screen! I explained that I was using a different computer, & kept clicking the wrong mouse button on it (leaving aside that I was exhausted when doing it!).

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!).

  • Model driven apps:
    • User experience
    • Show/hide fields
    • Change field labels
  • Canvas apps – functionality, online/offline capabilities, field types (including searching/filtering data)
  • Plugin debugging
  • Configuring security for system connections (security types)
  • D365 Web API – how it’s used, types of calls made from/to it
  • Azure API – making calls to/from it
  • Code for importing data (debugging, variables)
  • Advanced Find
  • Types of calls (synchronous, asynchronous)
  • Data modelling
  • Creating & deploying solutions through different methods
  • Publisher versioning
  • Identifying code variables, and saying what would happen in given scenarios
  • Power Apps Component Framework (PCF) – how to use, how to package components, how to deploy
  • PCF components & classes
  • JavaScript – code examples, what happens when a given scenario happens
  • JavaScript functions
  • Dynamics 365 Ribbon – what it is, what you can do with it, different types of functionality & ways to do things with it
  • Security & Permissions, including roles, teams, field level security, business units
  • Workflows, Power Automate Flows (how they’re set up, different functionality within them, how to do things with them given a specific scenario)
  • Business Rules (what they can/can’t do, different scopes, etc)
  • Field types (eg option-sets, calculated fields, roll-up fields, multi-select, etc)
  • Importing solutions – requirements for this, versioning, deployment between environments
  • Compatibility with Microsoft Teams

Now many of these (as I said above) are outside of my comfort zone. In fact, I’d say that even after absolutely cramming for a whole day, I still felt that I was guessing the answer for at least 30% of the questions. Admittedly though, as Julian Sharp says, a ‘gut feeling’ answer is usually right most of the time, coming from what the subconscious has absorbed during revision.

I was REALLY happy that I got a passing mark for this, & admittedly was VERY relieved as well. So now another lovely shiny badge in my collection, and I’m now going to go and update it on LinkedIn as well!

If you have any questions on this, feel free to drop them below, and I’ll try to help out as best as I can!

Omnichannel & LogMeIn

Overview

Many people in the IT scene will know of LogMeIn (https://www.logmein.com/), or LMI for short. For as long as I can remember (which means going back almost 2 decades!) they’ve been one of the main remote access solutions. With their product range, it was possible to leave your computer at home, travel abroad, and easily log into it from practically any computer anywhere.

It’s also a great product for IT professionals. Being able to deliver customer support through remote sessions, manage identity solutions, etc. The number of products over the years has grown, and been quite pleasing to watch:

Of course, LogMeIn Free (a great starter product for personal usage) was removed some years back, which to this day I still believe is a great pity. Obviously the company decided to focus on the more enterprise side of things, which I can understand as a business.

So, why am I now writing about them? Quite simple, actually. LogMeIn are one of the providers that are working with Microsoft to provide Co-Browse solutions for Omnichannel! It’s a very new piece of functionality that’s been launched in the Dynamics 365 product, and there aren’t many providers out there that have integration points to it.

What is Co-Browse?

It’s important to understand what co-browsing is, and some useful stats:

“Co-browsing” refers to the ability to have a service provider & customer jointly navigate an application in real time through the web.

Co-browsing: The Gateway to Happy Customers & Better Financial Results, 2015

So co-browsing is useful. But just how useful can it actually be? Well, apparently it can be VITAL:

Co-browse has the potential to bridge the gap between human & AI-driven customer interaction, & to enable organisations to differentiate their customer service.

By 2022, co-browsing will be used in 2% of customer service interactions, up from 0.1% in 2017 (2000% growth).

Gartner 2017: How Co-browsing Can Differentiate Your Customer Service

LogMeIn has had their Rescue offering available on the general market for a while as a standalone product (alongside the rest of their offerings). They’ve now built it out into a new standalone product called Rescue Live Guide, and provided an integration into Omnichannel for Dynamics 365. Customers obviously need to have licenses for the product, but with these, agents now have the ability to co-browse during support sessions. Not only can they see what’s going on, but they can also interact with the customer’s browser itself, providing an even better support experience.

So, let’s go ahead and take a look at how to set it up, the experience itself, and my thoughts on things.

Setup

When I first started testing out the LogMeIn offering, I had to go through a manual install process. This was due to the product just being released (in May 2020), but wasn’t actually that difficult to carry out.

However, they were in the process of switching over to an automatic installation through AppSource, as most of the other apps have. It’s great to be able to see that this has gone live, and is now available for users – it really does make the install that much easier!

Clicking ‘Get It Now’ takes you through the usual route of installing a solution from AppSource: selecting the environment, confirming the installation, etc. After around 5 minutes, I can now see the following:

Once it’s installed, we’ll need to set it as the co-browse provider for the channel that we’re wanting it for. To do this, open the chat record, go to Conversation Options, and select it there:

We’ll also need to put in two records for the LogMeIn co-browse configuration:

Finally, there’s a script block that needs to be added to the webpage where the chat widget is located. This enables the LogMeIn co-browsing ability from the customer side. It can be added right under the chat widget code itself; in the fullness of time, this may be able to be auto-generated as part of the chat widget code, but it’s not at the moment (this is dependent on Microsoft being able to offer it):

Right – setup all done, but before we see it in action, let’s take a quick look at the Rescue Live Guide admin console side of things.

Rescue Live Guide Admin Console

Although the functionality is within Omnichannel for Dynamics 365, administering agent licenses and groups takes place within the Rescue Live Guide admin console at https://console.logmeinrescue.com/admin. As companies will need to have Rescue Live Guide licenses, they would usually be familiar with this.

There’s the ability to create new users or groups, and manage them as well:

It’s also possible to set the names that are used for the agent & customer. These can be either the actual name of the agent, or instead potentially a job role/title:

I’m not going to go further into the admin functionality here – documentation can be found on the Rescue Live Guide site around this. Let’s instead take a look at the experience within Omnichannel, which after all is what we’re here to see!

Agent Experience

So how does this actually work, in practice? Well, from the customer side, they start a chat like they would usually do. When the agent responds, they’re given an option for ‘Live Guide’:

When the agent clicks on this, two things happen:

  1. Firstly, there’s a URL that’s posted in the chat. This contains a link for the customer to click, with an auto-generated ID number
  2. The agent is taken to the LogMeIn Rescue site page in a new tab.

Note: At the moment, the agent will have to sign in manually. LogMeIn have told me that their roadmap includes Single Sign On, so that after the initial setup they’ll be signed in automatically, and not have to perform this step in the future.

Once logged in, the agent will see that the session is ready, & waiting for the customer to connect to it. Once the customer has clicked the URL provided in the chat, it will open the Rescue Live Guide session, and authorise the agent to co-browse with them. They’ll then see the following prompt. This tells them that the session is connected to the agent, and that they can begin:

Once the customer has accepted to start browsing together with the agent, they get some small extra items appearing on their screen:

  • They can see that there is indeed a shared browsing session happening
  • They can also see where the agent’s mouse cursor is pointing to (by default, without the agent actually doing anything)

It’s important to note that the co-browse session is taking place within the specific browser tab that is open. Therefore if the user navigates away, the session is paused until they navigate back to it.

On the agent’s side, they can view the customer’s browser. They can only see what’s happening in the actual tab that’s open for the co-browse session (see below for some more information around this though). It’s quite similar to the customer’s side, though it has some LogMeIn features available. Well, obviously it’s similar – the agent is seeing the customer’s browser window!

They can of course still access the Omnichannel chat itself, and send information through that as well if they wish to.

Just as the customer can see the agent’s mouse position, the agent can see the customer’s mouse position. There are also gesture indicators so that each person can see what the other clicks etc as well, which can be really helpful when walking through a process.

The functionality currently available to the agents covers scrolling (within the page), highlighting, drawing and ‘virtual tabs’. As shown in the image above, the agent is able to highlight text/images, which will then be displayed as being highlighted to the customer. Agents are also able to enter text into text fields, click on buttons, and interact with the native webpage functionality.

Note: The Rescue Live Guide admin centre provides granular controls around these, so that customers can allow agents certain rights, rather than allow them to do everything.

The agent is also able to ‘draw’ on the webpage to be able to point something out, highlight a part of the page, etc.

Note: These annotations will disappear once the customer or agent starts scrolling up/down the page again.

As I’ve mentioned above, the session takes place within a single browser tab. If the user navigates away (to a different tab), the session is paused. The agent isn’t able to see any other tabs. So what happens if we do indeed need to open a new tab for something?

Well, there’s a really nice feature that the agent is able to use for this. It’s sort of a ‘virtual tab’ within the browser tab. Sounds interesting!

The customer is able to see this, and can navigate between the tabs. They’re now also able to open a new virtual tab themselves (which is an update to the functionality – originally they weren’t able to, and had to request the agent to do it).

Customer view of the support session

If the customer wants to pause or stop the session, the user simply has to click the ‘Stop’ button in the bottom left. They’ll then be presented with the following screen:

Whilst the session is paused, the customer can continue to use their machine as normal, but the agent won’t be able to see what’s going on. Only if the customer allows the session to resume by clicking ‘Continue Browsing’ will the agent be able to see the customer’s browser once again.

Alternatively, the agent can end the support session themselves, and the customer will be notified about this.

Security

I’m not going to dwell too much on security, as there’s a great document available at https://logmeincdn.azureedge.net/legal/gdpr-v2/Rescue_Live_Guide_SPOC_2020.pdf which goes into quite some detail.

Suffice it to say that LogMeIn have been a market leader for many years in this sector, and I’m happy that sessions through their products are adequately encrypted & protected.

Other functionality

Apart from the above, which is obviously the core of the product, there’s other functionality that’s possible to enable through the LogMeIn Rescue console:

  • Session recordings. It’s possible to record these for playback, which is then available from the LogMeIn portal. All recordings are carried out from the agent’s viewpoint, not the customer’s – there is therefore no risk of sensitive information from the customer’s side being seen
  • Data masking. It’s possible to use data masking to hide sensitive information. At the moment the setup for this is a very manual process, so I’m not going to go into how to set it up here. Having played with it a little, it’s really quite useful. Agents can’t see sensitive information on their screen, and if a customer needs to enter/update information, the session pauses whilst this is being done. However I understand that part of the LogMeIn roadmap for the near future is to make the setup process much more user friendly. When this is released & available, I’m planning to do a post on this
  • Reporting happens through the LogMeIn portal (see my thoughts below on this). It looks nice, and can be downloaded as a CSV file. Again, the functionality of this is going to be expanded in the near future.
Reporting in the LogMeIn website console

My thoughts

Having gone through testing out the product, I think that LogMeIn has brought a really great product of theirs into the Omnichannel experience. I used to use their products regularly (I ran an IT MSP some years back, in which we used LogMeIn products as well), and always found that they behaved well.

Now having the ability for agents to not only see, but also interact with the customer browsing experience really does take things to the next level. Audio and/or video support is great of course, but sometimes being able to see what the customer is seeing in their browser results in a much quicker resolution. This of course results in happy customers, which is what we’re striving to achieve!

As I’ve said above, I’ve used LogMeIn over the years, and always found their products to be pretty much amazing. With Rescue Live Guide, there are several differentiators that the solution brings to market:

  • For the standalone solution of Rescue Live Guide, dedicated web resources aren’t needed. It’s an easy solution to set up, and for the customer to engage with – all it requires is a URL to be provided to them to get the session going. Obviously, as mentioned above, there is some slight coding needed for the Omnichannel integration, but this is really minor. Any company having Omnichannel installed/configured will already have power users/admins familiar with what’s needed for this, so it’s a very small additional step
  • It’s possible to co-browse on any website that the customer wants to, not just a single specific website. Once the co-browse session is active, the customer can change to any other website, as long as they do so within the co-browse session tab. Most other co-browse solutions out there can’t do this, so this is a really strong point in favour of this solution.
  • The data masking is really cool, and for most customers, will be a ‘must have’ rather than ‘nice to have’. I’m looking forward to when the setup for this is updated to be more business-user friendly, and will then do a separate blog post around it, together with a video!

A few things that I think would be nice to have:

  • The agent is already able to draw on a webpage during the co-browse session, and select different colours for this. It would be great if the agent could also type text in to display on the screen (not in a specific field) in colour. Sometimes being able to see an example written in front of you (without it going into the actual field) can be quite handy.
  • Being able to transfer the co-browse session to another agent. This could be either another Omnichannel agent, or a separate specialist team. It is of course possible to transfer the chat session to another Omnichannel agent, but then they’d have to start the whole co-browse session again (with a new PIN, etc)
  • Reporting (for the most part) all occurs in LogMeIn at the moment, as Dynamics 365 only has very limited reporting on this natively. However I understand that this is due to change at some point this year, with the ability to report properly on it within Dynamics 365 itself.

At the point when new items do get released, I’ll be aiming to do a review of them, and add to the knowledge around the product.

So, with all of that, how do you think this could best help you & your customers? Please comment below – I’d love to hear!

Canvas Apps, Patch command, & Business Rules

Recently I’ve been doing a LOT of work with canvas apps. As I think I’ve mentioned before (at least once or twice!) my background is the traditional ‘model’ style app. As a result, it’s been quite a steep curve to skill up, but I think I’m handling it alright. I’m (slowly) getting used to the way that canvas apps work, the ability to put different controls on the screens, and have them reference each other.

Heck, I’m even starting to play with more advanced navigation concepts, based on some REALLY great ideas that I’ve seen (Clarissa, I can’t say how grateful I am to you for all of your assistance & guidance!).

Gradient Adventure

Amongst all of this incredible & wonderous journey, I’ve also been learning some code. Yup – you heard me correctly! I’ve always said that I’m not a developer – I respect them greatly, but I don’t develop code.

True, I’ve picked up some SQL here & there, and will freely admit that running SQL queries against the Dynamics 365 database is SO much more powerful than running an Advanced Find. Of course, it’s necessary to know the joins, conditions & such. Redgate’s SQL Helper has been amazing along the way. With moving to cloud systems, things got a little more….complicated. XrmToolBox has the SQL4CDS tool, which I’ve used several times, but I was really excited by the recent announcement/release of being able to (properly) run SQL commands against the CDS database from SQL Server Management Studio….

Anyhow, I’m digressing. So, I’ve been needing to learn canvas app style code. It’s like Excel formulas, though (slightly) different at times. Things don’t always make sense (to me, at least) – I STILL haven’t figured out why some expressions need to be in a certain order. After all, according to mathematical principles it doesn’t matter whether you write A>B or B<A. I’m still going to need to wrap my mind around all of this.


So, one of the commands that I’m using quite frequently is the Patch command. If you’re really interested, you can check this out in detail at https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-patch.

In short, Patch allows you to set values on a record from sources other than a form – and to save values to fields that aren’t available on the canvas form control at all (due to its limitations). I’ve referred to this previously at https://thecrm.ninja/canvas-app-record-set-regarding-field/. The scenario that I talk about there is just one of the things that can be done in this way. Since that post, we’ve come a long way, and are now doing most things with Patch statements (due to the scenario requirements).
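
As a minimal sketch of both of those uses (the data source & column names here are hypothetical, so adjust for your own environment):

    // Update an existing record, pulling values from controls
    // that aren't part of any form:
    Patch(
        Accounts,
        LookUp(Accounts, 'Account Name' = "Contoso"),
        { 'Main Phone': txtPhone.Text }
    );

    // Create a brand new record by patching against Defaults():
    Patch(
        Accounts,
        Defaults(Accounts),
        { 'Account Name': txtNewName.Text }
    )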

So that’s all well & good. However, there IS actually a reason for me writing this blog post….crazy, right? And it’s not to waffle on and on about patch statements. It’s about a very specific scenario that we hadn’t come across to date, but that came up last week.

Now, obviously you’re VERY interested in hearing all about it, and learning from it for your own situations. I mean, otherwise you wouldn’t have stuck with me through this article for so long. So, let me set out what happened.

As mentioned above, we’re mostly using Patch statements throughout this specific app. That’s….quite a lot of Patch statements (especially as we also have IF statements governing which one is used, as we couldn’t use an IF inside the Patch statement itself – but I digress…). I’d say we’re pretty familiar with it by now.
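
To illustrate the sort of pattern I mean, here’s a rough sketch (names hypothetical) of an IF governing which Patch runs:

    // Branching happens around the Patch statements, not inside one:
    If(
        chkExisting.Value,
        Patch(Tasks, LookUp(Tasks, Task = varTaskId), { Subject: txtSubject.Text }),
        Patch(Tasks, Defaults(Tasks), { Subject: txtSubject.Text })
    )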

However, even with being familiar with it, we suddenly had a problem. One of the forms that we were saving down suddenly stopped saving. Records weren’t being saved, which obviously is a problem!

Bear in mind here that we hadn’t touched the code for this specific action for a few weeks. Nothing had changed in our code, and nothing had changed from a platform perspective (ie Microsoft hadn’t changed any of the underlying functionality).

Going into the statement, we immediately started testing it out, and saw something interesting. We were getting an error that a required field could not be NULL:

This was quite puzzling – although in a model-driven app we can set fields as required, and users can’t save the record until they populate them, this isn’t true in a canvas app (well, when using Patch, at least). See, it’s technically possible to use a Patch statement to create/update a record without passing in values for required fields. It’s a sort of workaround (& can actually be used to benefit in some scenarios). So this happening all of a sudden was quite strange to us.
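
For instance, something like this (again, hypothetical names) will happily create a record even when other fields are marked as required on the form:

    // No value is passed for the form-required field – Patch
    // doesn't enforce the form-level 'Required' setting:
    Patch(
        Tasks,
        Defaults(Tasks),
        { Subject: "Follow-up call" }
    )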

It was even stranger as we hadn’t been using the field on the form at all. The field that was being referred to was being used for a totally different process, in a different team, & not surfaced into the canvas app at all. This really was causing us to scratch our heads, and try to think (more) out of the box. It didn’t seem to be the code (we could set a value in code, but didn’t want to as it wasn’t relevant), yet we weren’t able to ignore it. Really frustrating!

With all of this in mind, I decided to go back to absolute basics after a few hours of troubleshooting. The field that seemed to be causing all of these issues was a relatively new addition, so I checked all of the details around it:

  • Was the field type correct for what it should be? Yes
  • Was it set as required on the CDS field definition? No (not that I thought this would help, but still checked)
  • Was the field on the entity form? Yes
  • Was the field set as required on the entity form? No (again, I didn’t think I’d get any joy from this)
    • Hold on… on the form designer it’s not set as Required. But when I open the form, and put some values in, suddenly it IS required.

Aha! OK – I’m now starting to see some light shining on this. I headed over to Business Rules to check out what might be there. Lo & behold, there was a business rule that set the field as required (when certain conditions were met). An example of this would be:
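
(As a purely hypothetical illustration: a rule along the lines of IF ‘Task Type’ equals ‘Callback’, THEN set ‘Callback Time’ as Business Required.)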

Now this field hadn’t been in place when the code was developed (as mentioned above) – it had come in since. I was very curious as to whether a Business Rule could force canvas apps to set the value, and so did some testing.

Disabling the business rule removed the error from the patch statement. Re-enabling it caused the issue again. OK – so we’ve found what’s been causing this, and could put in an adequate solution to handle it.

So in short, if you’re setting a field as required through a Business Rule, you’re going to need to address it in any canvas app as well (at least, any canvas app that’s saving data down to the same entity that the rule runs against). Why it actually happens when just setting the field as Required on the form doesn’t, I can’t say for certain – my best guess is that a Business Rule scoped to the entity is enforced server-side, whereas the Required setting on a form is only enforced client-side.
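
As a rough sketch of the sort of handling required (all names hypothetical) – whenever the Business Rule’s conditions would make the field required, the Patch simply has to supply a value for it too:

    // The Business Rule makes 'Callback Time' required under certain
    // conditions, so the Patch now passes a value for it as well:
    Patch(
        Tasks,
        LookUp(Tasks, Task = varTaskId),
        {
            Subject: txtSubject.Text,
            'Callback Time': dpCallback.SelectedDate
        }
    )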

But it’s a good concept to keep in the back of your mind, I believe. Especially if there are multiple people working on developing a single entity, as otherwise you could find yourself in exactly the same scenario that we did!

Have you come across anything like this, or a different piece of strange behaviour? Comment below – I’d love to hear about it!

Power Automate & Lookup Fields

Recently I’ve been expanding my knowledge of Power Automate, and how it works. It really is a truly amazing tool, though there can be some quirks to things! There are so many connectors to use, though I haven’t really used that many of them to date.

Truthfully, most of my work in Power Automate is around CDS & Office 365. Occasionally I’ll dip into another system, but for the most part that keeps me busy enough. It’s not to say I don’t want to explore further, but finding the time can be quite difficult!

One of the great abilities that Power Automate has is to be able to update a record. Focusing on CDS entities for the moment, we’d use the inbuilt action for this:

We’d run a query to get a specific record – this would give us the record ID (or GUID, depending on your preference). With this, we’d use the Update Record action & pass in the record GUID. After all, we need to know which record we’re going to update! So for example:

What we can then do is set values for the record. So we can pass in dynamic content, use expressions, etc. These can be from records that are part of our Power Automate query chain, or from elsewhere.

For example, I can say that when a contact’s postcode changes (or zip code in the USA), go away, look up the new city, and update it (Note: I haven’t shown the postcode lookup part below):
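
As a rough sketch of how that flow hangs together (treating the actual postcode-to-city lookup as a black box, and with the exact column names depending on your environment): a ‘When a record is updated’ trigger on Contacts, filtered to the postcode column (eg address1_postalcode); a step that fetches the matching city for the new postcode; and finally an ‘Update a record’ action against the same Contact, setting the city column (eg address1_city) from the lookup’s output.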

So this is all really brilliant. Different fields have different behaviours, of course, and we need to respect that. Otherwise the Power Automate flow won’t run, and will error. This is, of course, the digital equivalent of trying to force a square brick into a round hole!

What we can also do is clear a field value. If, for example, we want to remove a value from a field, we can use the null expression on the field. When the Power Automate flow runs, it’ll clear whichever value the field is currently holding:
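
A minimal sketch of this: in the Update a record action, select the field to clear, switch to the expression editor, and enter

    null

as an expression (rather than typing the word into the field as plain text – that would just save the literal string).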

Now, one of the field types available within CDS is the lookup field. I’m not going to go into what this is, as we should already know it! We can, of course, set lookup field values to populate the field, which works as expected.

However (& thanks for bearing with me so far), what happens if we want to clear a lookup field value?

Say, for example, that we have a task that’s assigned out to someone. If they reject the task, we want to be able to remove them from the task record. We wouldn’t delete the task, as we still need it (& would now need to assign it to someone else). We need a way to do this.

I can hear what you’re thinking right now – NULL is mentioned above, so we’d use that! Um…well, you’d think so. You can try it, but we’ve found that it doesn’t always work. Additionally, it doesn’t actually seem to remove the underlying relationship that’s been put in place.

Update: Thanks to Lin Zaw Winn, who dropped me a line with further information around this. The standard CDS connector (the first one that was available) allowed this to work, but the updated CDS connector (Current Environment) doesn’t. Unfortunately the different connectors aren’t at parity, which is a pity!

So, there’s another way to clear lookup field values. This involves the Unrelate action that’s also available. The steps for this are as follows:

  1. Get the related record (lookup the record type, pass in the GUID for it)
  2. Use the Unrelate action to remove the connection

This will then remove the relationship, which actually results in the lookup field value being cleared. In practice (for our scenario), this would look like:
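
As a sketch of those two steps for our scenario (names hypothetical): first a ‘Get a record’ action fetching the Contact that the Task’s lookup field points at (using the GUID held in that lookup), then the Unrelate action with Entity Name set to Contacts, Relationship set to the Contact–Task relationship, and URL set to the Task record that we want the lookup cleared on.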

Let’s take a bit of a further look at the options available here:

  • The Relationship field is the relationship between the two entities (eg here it’s Contact & Task). Thankfully you don’t need to type this manually – it’s easily selected from a dropdown list.
  • The URL field is the linked record itself.

Note: It’s VERY important to have the Entity Name & URL values in the right order. I’d suggest looking up the connected record first (ie what the lookup field is pointing to), and using that as the Entity Name value. You’d then use the record that the lookup field sits on as the URL value.

What I’d usually suggest as best practice is to have a condition before this takes place. As mentioned earlier, removing the lookup would happen on a record update (after all, you wouldn’t be removing a field value from a record that’s only just being created!).

But you’re not always going to want it removed. In the scenario that I’ve been dealing with, we’re only wanting to remove the volunteer if they’ve rejected the assigned task. So our Power Automate flow is set out like this:

  • When Task record is updated
    • Filtering on the field for ‘Task Accepted’ (see the illustration after this list), as we could have other things being updated on the Task record that we don’t want to trigger this particular process
  • Condition to check the ‘Task Accepted’ field value
    • When it’s something other than ‘Rejected’, cancel the flow
    • When it’s ‘Rejected’, run the Unrelate process set out above, and stop the flow
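
As a rough illustration of that filtering (the column name here is hypothetical): the update trigger’s ‘Filtering attributes’ option would contain just the logical name of the ‘Task Accepted’ column (eg new_taskaccepted), so that other updates to the Task record don’t fire the flow at all.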

You can obviously build out other functionality within it as you so desire.

So with this in mind, how do you think you could benefit from this? Drop a comment below – I’d love to hear!

DateTime fields, XrmToolBox, & Dynamics 365 behaviour

Recently we’ve been rapidly producing & deploying solutions, due to the current pandemic. One of the apps that I’ve been working on required quite a few fields for data capture. Well, truthfully, most apps require quite a few fields, but I thought I’d talk about this one in particular, due to something that I discovered.

Now, we all know how to create fields in the Power Platform maker experience. It’s really quite simple – you select that you want to add a new field, put in the details/type of field, & save. Et voila – you have yourself a nice new field! You can then go on to add it to forms, views, etc. We all know how it’s done:

What I’ve found myself doing recently, though, is not creating fields through the Maker interface (make.powerapps.com), especially when there are lots of fields to create. Instead, I’ve been using the XrmToolBox to do this. There’s a very helpful tool within it called Attribute Editor, which takes an Excel spreadsheet of field definitions, and creates the relevant fields through the Dynamics 365 API.
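
As a rough illustration of the kind of spreadsheet involved (the exact template columns depend on the tool version, so treat these headings & names as hypothetical):

    Entity     Display Name     Schema Name           Type
    contact    Callback Time    new_callbacktime      DateTime
    contact    Review Date      new_reviewdate        DateTime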

One of the reasons for doing things this way was that it allows me to get on with other things whilst the fields are being created. Although it doesn’t happen in the blink of an eye (especially when there are a lot of fields to create), I can leave it whizzing along, and do something else. This, of course, makes me feel VERY productive!

Right – back to what I was saying. So I had a lot of fields to create, and many of them needed to be datetime fields. Actually, all I needed was the time component, but unfortunately Dynamics 365 DOESN’T allow you to just show the time. It’s either Date, or Date & Time, but there’s no option for JUST Time. A flaw, in my opinion, for what it’s worth….

So I created the Excel template, started the process, and went on to do something else. I of course made sure to specify that the field type should be ‘DateTime’.

Coming back to it when it had finished, I started to place fields on forms, and noticed something strange. All of the datetime fields that I had created through this were Date ONLY. This was…puzzling! Going to check the fields themselves, they were set as Date ONLY, not Date & Time!

I went back to check my upload spreadsheet, and it was set correctly there. I even tried uploading another field, but still the same issue was occurring.

Now, with the way that Dynamics 365/Power Platform works, once you’ve created a field & saved it, you can’t change the field type. When it’s created it’s saved down to the underlying database structure as the specified field type, and that’s it. No way to change it…or at least not through the front end!

With this in mind, I fired up another of the XrmToolBox tools, namely Attribute Manager. What this handy tool does is allow you to change the field type, behind the scenes. Well, it doesn’t ACTUALLY change it directly – it clones the field, deletes the original, then clones it back. There are some caveats to it working properly (eg that the field isn’t being used in a view somewhere), but it’s really helpful.

Note: It only works for custom created fields, not the default OOB fields!

Depending on the field type that you’re wanting to change it to, you can select different options. However for DateTime, there’s only one option. OK – I was going to see what happened.

Well, I ran the update, but nothing changed. It was still ‘Date’ only within the interface, which was being incredibly annoying. It wasn’t as if I couldn’t just delete & recreate them all (well, technically I could, of course) – but I had dozens & dozens of these to do, and quite frankly didn’t want to spend all of that time doing it.

Thankfully (with the help of one of my colleagues, who’s an experienced & devoted developer – thanks Sid!), we found the solution.

See, I had been doing everything within the ‘new’ interface. This is the one that Microsoft keeps pushing everyone towards, as they don’t want people to really be using the Classic interface anymore. That’s all very well & good, but the ‘new’ interface isn’t at parity (for some things).

Reverting back to the Classic interface (note that the option below is only available when working within a solution!), we discovered some hidden behaviour.

We located the entity that we needed, and the field itself, and opened it in Classic. With the screen that’s presented (I do miss this in some ways – I remember the days when I almost lived permanently in here!) we AMAZINGLY have the following option:

We can CHANGE THE TYPE!! Now, this is just with the field that we’ve selected. To be frank, I have no idea at this point about any other field types, and would need to explore that separately. But for the moment, my problem has been solved! (well, to the point that I have ‘time’ values available – I’d still like to see JUST time values being an option).

So with this in mind, I merrily waded through the dozens of fields in the Classic UI, changing them all as needed. It wasn’t just a few minutes of work, but it was definitely much less time than deleting & manually recreating each one!

So, really quite helpful. The only other thought I had was that it would be nice if the various tools within the XrmToolBox could do this as well. However, the fact that they don’t seems to come down to a limitation of the API. Having gone to check the different field types & how they’re set programmatically (https://docs.microsoft.com/en-us/powerapps/maker/common-data-service/types-of-fields), I’ve noticed the following:

There really doesn’t seem to be any way to specify the different sub-type, which is a shame!

Have you ever had a similar situation with fields? Drop a comment below – I’d love to hear about it.