Developer environments – new capabilities to create them for users

Developer environments are awesome. There – I’ve said it for the record. Formerly known as the ‘Community Plan’, developer environments are there for users to be able to play with things, get up to speed, test out new functionality, etc. They’re free to use – even with premium capabilities & connectors, users do not need premium licensing in place (caveat – if it’s enabled as a Managed Environment, it will require premium licensing).

Originally, users were only able to create a single developer environment. However, earlier this year Microsoft lifted this restriction – users are now able to create up to THREE developer environments for their own usage (which makes it even easier now for users to get used to ALM capabilities, and try it out for themselves).

Now, the ability for users to create developer environments is controlled at the tenant level, and it’s either On or Off. It requires a global tenant admin to modify this setting, but it’s not possible to say ‘User Group A will not be able to create developer environments for themselves, but User Group B will be able to’.

Organisations have differing viewpoints on whether they should allow their users the ability to create developer environments or not. I know this well, as usually I’m part of conversations with them when they’re debating this.

One of the main challenges that comes when organisations don’t allow users to create their own developer environments has been that historically, it’s not been possible for someone else to create the environment on their behalf. If we think of ‘traditional IT’, if we’re not able to do something due to locked down permissions, we can usually ask ‘IT’ to do it for us, and grant us access. This has not been the case with developer environments though – well, not until recently.

Something that I do from time to time is chat with the Microsoft Product Engineering groups, to provide feedback & (try to!) help iterate products forward. One of the conversations I had in the summer was with the team responsible for developer environments. I was able to share experiences & conversations that I had been having with large-scale enterprise organisations, and (very politely!) asked if they could look to open up the ability to do something around this.

Around a month ago or so, the first iteration of this dropped – in the Power Platform Admin Centre interface, it was now possible to specify the user for whom an environment was to be created!

This was an amazing start to things, and would definitely go some way to unblocking Power Platform IT teams to enable their users, in circumstances where their organisations had decided to turn off the ability for users to create their own developer environments.

However, this still needed to be done manually. Unless looking into an RPA process (which, let's face it, would be clunky & undesirable), it meant that someone with appropriate privileges would need to go & actually create the environment, and associate it to the user.

However, this has now taken another MASSIVE step forward – I’m delighted to announce that this capability has been implemented in the Power Platform CLI, and is live RIGHT NOW (you’ll need to upgrade to the latest version – it’s present from version 1.28.3 onwards).

So, with this in place, it’s now possible to use PowerShell commands to create developer environments on behalf of users, and assign them to those users. Organisations usually already have PowerShell scripts to handle new joiners, and will therefore be able to integrate this capability into them, to automatically set up developer environments for new joiners (a sketch of this is below). Alternatively, existing users could look to raise internal requests, and have them automated through the use of PowerShell (along with appropriate approval processes, of course!).
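
As a rough sketch of how this could slot into a joiner script – the pac CLI itself is the real dependency here, so do confirm the exact parameter names against ‘pac admin create –help’ for your version; the profile name and UPNs below are placeholders:

    # One-time: authenticate the pac CLI with a suitably privileged admin account
    pac auth create --name TenantAdmin

    # Create a developer environment on behalf of each new joiner
    $newJoiners = @('jane.doe@contoso.com', 'john.smith@contoso.com')
    foreach ($upn in $newJoiners) {
        pac admin create --name "Dev - $upn" --type Developer --user $upn
    }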

So this is really nice to see. However, I think it can still go one step further (at least!), and I’m trying to use my network to raise this with the right people.

See, we have the Power Platform for Admins connector within Power Platform already. One of the actions available in it is the ability to create Power Platform environments.

However, if we look at the action (& the advanced settings within this action), there’s no ability to specify the user for whom the environment should be created.

Interestingly enough, the API version listed by default is actually several years old. By doing some digging around, I can see that there are multiple later API versions, so I’m not sure why it’s using an older one by default.

What would be really amazing is to have these capabilities surfaced directly within Power Platform, using this connector. Then we could look to have everything handled directly within Power Platform. Given that the CoE toolkit already includes an Environment Request feature, I would see this as building on top & enabling it even further. Obviously organisations wouldn’t need the CoE toolkit itself, as they could look to build out something custom to handle this.

What are your thoughts on this – how do you see these features enabling your organisation? If your organisation HAS locked down the ability for users to provision developer environments, are you able to share some insights as to why? I’d love to hear more – drop a comment below!

Developer Environment Deletion!

Strong title for a blog post, right? Well, I did want to catch your attention! So what exactly are we talking about here?

For the last few years, it’s been possible for users to sign up for a ‘Developer’ plan, which gives them a full-capability Power Platform environment for free (though with some limitations). This used to be called the ‘Community’ plan, and is an amazing resource for everyone, whether they’re a professional or citizen developer, to have their own personal ‘sandpit’ to play in, and try things out.

Let’s wind back a few months in time now – earlier this year, Microsoft announced that users would be able to create THREE of these Developer environments, rather than having just a single one! This was mind-blowing news, and something that has been extremely welcomed. If you’re wanting to see more on the announcement, Phil Topness has a great video on it at Dataverse Environments For Everyone – New Developer Plan – Power CAT Live – YouTube.

Incidentally, I’m curious as to how much storage space Microsoft has in the background to handle these. After all, each environment takes up a minimum of 1GB of space (& can grow to 2GB). That means that each user could have 6GB of storage being used… which, when multiplied out, gives a VERY large number!

Microsoft has now announced that these developer environments, however, need to be utilised – i.e. if they’ve been created, but aren’t being used, Microsoft is going to delete them! Now, from a certain perspective, this is actually quite good – after all, there are all of the storage considerations for environments that have been created, but aren’t being used. However, from a different perspective, this could be a problem. What about if you’re doing something occasionally in an environment, but not too often? What about if you decide to go on a ‘Round the World’ cruise for several months?

So let’s look at the definition for this. Microsoft states that an environment is considered to be inactive when it hasn’t been used for 90 days. At that point in time, it is disabled, and the administrator or environment owner is notified. If there is no action taken within the next 30 days, then the developer environment is automatically deleted.
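
Incidentally, if you’d rather not wait for those notifications, you can keep your own eye on developer environments with the Power Platform admin PowerShell module. A quick sketch – Get-AdminPowerAppEnvironment is a real cmdlet, though the exact property names used for filtering here are assumptions, so inspect a returned object (Format-List *) to confirm them:

    # Requires: Install-Module Microsoft.PowerApps.Administration.PowerShell
    Add-PowerAppsAccount

    # List developer environments, so ones at risk of clean-up can be reviewed
    Get-AdminPowerAppEnvironment |
        Where-Object { $_.EnvironmentType -eq 'Developer' } |   # property name may differ
        Select-Object DisplayName, CreatedTime, CreatedBy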

Now, how does Microsoft define ‘Activity’? It goes something like this:

  • User activity: Launch an app, execute a flow (whether automatic or not), chat with a Power Virtual Agents bot
  • Maker activity: Create, read, update, or delete an app, flow (desktop and cloud flows), Power Virtual Agents bot, custom connector
  • Admin activity: Environment operations such as copy, delete, back up, recover, reset

The above is all user driven – ie a user needs to interact with something within the environment. However, it’s also important to note how automation is viewed:

  • Activity includes automated behaviors such as scheduled flow runs. For example, if there’s no user, maker, or admin activity in an environment, but it contains a cloud flow that runs daily, then the environment is considered active.

It’s also important to note that at this point in time, the above only applies to Developer environments. Other types of environment (Production, Sandbox etc) don’t have any auto-deletion policies called out for them – well, at least not yet (if something does pop up around these, I’ll definitely look to talk about them too!).

So to answer our question above about what happens with a (developer) environment that is only being used infrequently – the way to stop it being auto-deleted is to put some automation in place. This doesn’t need to be heavyweight – it can be something simple & easy, just to ensure that the environment registers activity happening within it.

In my view, it would be nice to have some granularity & control over this as well – allowing organisations to set their own deletion policies. We have this in place for things like audit log retention – it would be nice to have it here too.

Recognition as Microsoft Partner for Business Application Solutions

It’s been a little while since I’ve previously blogged around developing customer solutions and the Microsoft Specialisations. Since I spoke about it last year (Apps & Microsoft Partner Specialisations) the landscape has moved on a little, and I thought that it would be good to take a look again at it.

Currently in the Business Applications space, there’s a single specialisation. This is the ‘Microsoft Low Code Application Development Advanced Specialisation’, which is covered in detail at the Microsoft page for it (Microsoft Low Code Application Development Advanced Specialization).

In essence, this specialisation is aimed at partners who are developing Power Apps (yes, this is specifically aimed at Power Apps), and has been around for a year or so.

In order for Microsoft to track the qualifying metrics against this specialisation, it’s very important to carry out the PAL (Partner Attach Link) process. The details of how to do this are in my earlier post, which includes some of my thoughts at the time around how a partner should best implement the procedure.

Since then, my blog post has gained a good amount of traction, and several Microsoft partners have engaged with me directly to understand this better, and to implement the process into their project playbook. I’m really delighted at having been able to help others understand the process, and the reasoning behind it.

Now that’s all good for a partner who is staying in place at a customer. However there are multiple scenarios that can differ from this. Examples of this are:

  1. Multiple partners developing a single application together
  2. One partner handing over the application to a second partner for further development
  3. One partner implementing a solution, with a second partner providing support

Now, there’s really a single answer to all of the above scenarios, but it’s a matter of how to go about implementing this properly. Let me explain.

Originally, all developers would register PAL, and app usage would then be tracked & associated appropriately to the partner, off the back of the developers having been the creators of the apps.

This has now changed a little bit. Microsoft now recognises PAL through both the Owner of the app, as well as any Co-Owners of the app. This is a little more subtle, so let’s explain it in some detail.

It is possible, of course, to change the owner of an app. More common, however, is the practice of adding co-owner/s to an app (I always recommend this as best practice actually, to remove key-person risks).

Note: Changing the actual owner of an app requires the use of a PowerShell command – a sketch of this is below.
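
For reference, a minimal sketch of that owner change, using the Power Platform admin PowerShell module (the cmdlet applies to canvas apps; all GUIDs below are placeholders):

    # Requires: Install-Module Microsoft.PowerApps.Administration.PowerShell
    Add-PowerAppsAccount

    # -AppOwner takes the Azure AD object id of the new owner;
    # the previous owner is moved down to a 'Can View' role
    Set-AdminPowerAppOwner `
        -EnvironmentName 'Default-00000000-0000-0000-0000-000000000000' `
        -AppName '11111111-1111-1111-1111-111111111111' `
        -AppOwner '22222222-2222-2222-2222-222222222222'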

So what happens now is that Microsoft will track the owners/co-owners of any app that’s deployed, and PAL association will flow through this. But there are a couple of caveats which it’s important to be very aware of!

  1. All owners/co-owners must have registered PAL with their user accounts (if using a service principal/service account as an owner, there’s a way of doing this using PowerShell – see the sketch after this list)
  2. Microsoft will recognise the LATEST owner/co-owner association with the app as the partner organisation that will receive PAL recognition
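
On that first point, PAL registration happens per-credential via the Az PowerShell module, signed in as the identity in question. A sketch for the service principal case (the tenant ID and Partner ID are placeholders):

    # Requires: Install-Module Az.ManagementPartner
    # Sign in AS the service principal (application id as username, secret as password)
    $cred = Get-Credential
    Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant '00000000-0000-0000-0000-000000000000'

    # Link the signed-in identity to the partner's Microsoft Partner ID
    New-AzManagementPartner -PartnerId '1234567'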

Now if a customer adds co-owners to an app, this shouldn’t be an issue (as none of the users would have registered PAL). But if there are multiple partners in place, ONLY THE LATEST ONE WILL BE RECOGNISED.

Therefore to take the three scenarios above, let’s see how this would apply.

  1. Multiple partners developing a single app. Recognition would not work for all partners involved, just the latest one to associate with the app
  2. Partner 1 handing over app to Partner 2. Recognition would stop for Partner 1, and would then start for Partner 2
  3. Partner 1 implementing solution, Partner 2 providing support. Care would need to be taken that the appropriate partner is associated as owner/co-owner to the app, for PAL recognition.

It’s also important for both partners & customers to understand this, in the wider context of being careful about app ownership, and the recognition that it brings from Microsoft for partners delivering solutions. If a partner went into a customer and suddenly started taking ownership of apps that it’s not involved in, I don’t think that Microsoft would be very approving of it.

Now, all of the above is in relation to Power Apps specifically, as I’ve noted. However, the PAL article was updated last week (located at Link a partner ID to your Power Platform and Dynamics Customer Insights accounts with your Azure credentials | Microsoft Docs), and interestingly it now also talks about PAL tracking for other products – note the differences between each item.

Reading between the lines here, I think that we’re going to be seeing more advanced specialisations coming out at some point. Either that, or else partner status will include these as well – I can’t think of any other reason why PAL would need to be tracked for them! I’m also wondering if other capabilities (eg Power Virtual Agents, Power Pages, etc) will be added at some point as well…

Have you had any challenges with the PAL process? Is there anything more you’d like to find out about it? Drop a comment below, and I’ll do my best to respond!

Security Roles & Assigning Records

Let’s face it, and call a spade a spade (or a shovel, depending on where in the world you happen to be). Security roles are very important within Dataverse, to control what users can (& can’t!) do within the system. Setting them up can be quite time-consuming, and troubleshooting them can sometimes be a bit of a nightmare.

Obviously we need to ensure that users can carry out the actions that they’re supposed to do, and stop them doing any actions that they’re not supposed to do. This, believe it or not, is generally common sense (which can be lacking at times, I’ll admit).

Depending on the size of the organisation, and of course the project, the number of security roles can range from a few, to a LOT!

Testing out security can take quite a bit of time, to ensure that testing covers all necessary functionality. It’s a very granular approach, and can often feel like opening a door, to then find another closed door behind the first one. Error messages appear, a resolution is implemented, then another appears, etc…

Most of us aren’t new to this, and understand that it’s vitally important to work through these. We’ve seen lots of different errors over our lifetime of projects, and can usually identify (quickly) what’s going on, and what we need to resolve.

Last week, however, I had something new occur, that I’ve never seen before. I therefore thought it might be good to talk about it, so that if it happens to others, they’ll know how to handle it!

The scenario is as follows:

  • The client is using Leads to capture initial information (we’re not using Opportunities, but that’s a whole other story)
  • Different teams of users have varying access requirements to the Leads table. Some need to be able to view, some need to be able to create/edit, and others aren’t allowed to view it at all
  • The lead process is driven by both region (where the lead is located), as well as products (which products the lead is interested in)

Now, initially we had some issues with different teams not having the right level of access, but we managed to handle those. Typically we’d see an error message calling out the specific missing privilege.

We’d then use this to narrow down the necessary permissions, adjust the security role, re-test, and continue (sometimes onto the next error message, but hey, that’s par for the course!).

However, just as we thought we had figured out all of the security roles, we had a small sub-set of users report an error that I had NEVER seen before.

The scenario was as follows:

  • The users were able to access Lead records. All good there.
  • The users were able to edit Lead records. All good there.
  • The users were trying to assign records (ie change the record owner) to another user. This generally worked, but when trying to assign the record to certain users, they got an error titled ‘Assignee does not hold the required read permissions’

Now this was a strange error. After all, the users were able to open/edit the lead record, and on checking the permissions in the security role, everything seemed to be set up alright.

The next step was to go look at the error log. In general, error logs can be a massive help (well, most of the time), assuming that the person looking at it can interpret what it means. In this case, the error log called out that the user was missing the Read privilege.

As an aside, the most amusing thing about this particular error log, in my opinion, was that the HelpLink URL provided actually didn’t work! Ah well…

This was presumably referring to the Lead table – which didn’t make sense. We went back to DOUBLE-check, and indeed the user who was trying to carry out the action had read privileges on the table. It made even less sense given that the user was able to open the lead record itself (disclaimer – I’ve not yet tried a security role where the user has create/write access to a table, but no read access… I’m wondering what would happen in such a scenario).

Then we had a lightbulb moment.


In truth, we should have probably figured this out before, which I’ll freely admit. See, if we take a look at the original error that the user was getting, they were getting this when trying to assign the record to another user. We had also seen that the error was only happening when the record was being assigned to certain users (ie it wasn’t happening for all users). And finally, after all, the error message title itself says ‘Assignee does not hold the required read permissions’.

So what was the issue? Well, it was actually quite simple (in hindsight!). The error was occurring when the record was being attempted to be assigned to a user that did not have any permissions to the Lead table!
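
Incidentally, this makes complete sense once you consider what an assign actually is under the hood – simply an update of the record’s ownerid, which Dataverse validates against the new owner’s privileges. A sketch of the equivalent Web API call (token acquisition is omitted, and the org URL & GUIDs are placeholders):

    # Assigning a lead = PATCHing its ownerid via the Dataverse Web API
    $headers = @{ Authorization = "Bearer $token"; 'Content-Type' = 'application/json' }
    $body = @{ 'ownerid@odata.bind' = '/systemusers(00000000-0000-0000-0000-000000000001)' } | ConvertTo-Json

    Invoke-RestMethod -Method Patch `
        -Uri 'https://yourorg.crm.dynamics.com/api/data/v9.2/leads(00000000-0000-0000-0000-000000000002)' `
        -Headers $headers -Body $body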

What was the resolution? Well, to simply grant (read) access to the Lead table, and ensure that all necessary users had this granted to them! Thankfully a quick resolution (once we had worked out what was going on), and users were able to continue testing out the rest of the system.

Has something like this ever happened to you? Drop a comment below – I’d love to hear the details!

1E Tachyon – Real-time endpoint management review

As a few people may know, some years back I ran an MSP (Managed Service Provider). In essence, small companies would outsource their IT needs to us, whether it was hardware, networking, software licensing or support. It generally proved to be cost-effective to do this, rather than each company needing a full internal IT support desk.

It was quite exciting, and I built up relationships with various vendors & providers. Attending exhibitions & conferences was always great, especially with the free SWAG that could be collected! Although I moved away from working in the MSP space a while back, I do still keep my eye out, attend some exhibitions, etc. It’s always great to see what new offerings are on the market, especially as I’m able to include suggestions for clients in holistic solution projects.

Which brings me on to 1E…

Let me start off by saying that I hadn’t previously come across the Tachyon offering from 1E, but having now watched a webinar around their v8 release, I’m decently impressed.

Having worked in the IT industry (in a number of different capacities across the years), I know that it’s vital for any organisation to understand what’s going on. That’s not just being system or application focused – it also includes the employees & staff.

Especially in today’s world, where so many different things can be happening, it’s absolutely vital that employees are considered key to practically all decisions that an organisation makes.

Of course, there are a wide number of challenges that today’s ‘work from home’ workforce has, which traditionally were never around. From coping with needing to use home broadband connections, to not having IT support easily available for issues, there are modern challenges that not only need to be faced, but need to be worked on pro-actively.

The Tachyon strapline is that it’s not enough to observe a problem (ie be reactive to it), it’s absolutely key to be able to be pro-active with issues, & try to help before they actually cause problems.

With the monitoring capabilities that Tachyon V8 has, it’s clear to see what a differentiator it’s likely to be in the current marketplace.

Some of the key capabilities are:

  • Being able to send announcements out to the workforce, and tailor these for specific groups
  • Giving employees the ability to interact with announcements, providing feedback & other information needed
  • Monitoring employee wellness, such as identifying if employees are working more hours than they should be, and then checking if everything is alright with them
  • Interfacing with existing ITSM software suites, to ensure that IT has the relevant support systems in place

The aim is really to be the platform to manage the digital employee experience, to enable and empower organisations holistically across everything that they’re doing.

Take rolling out new software, for example:

  • IT can engage with users to get their feedback on the proposed system. With this, they can ensure that hardware will be compatible, as well as identify prospective ‘Early Adopters’ as well as ‘Detractors’
  • Entire campaigns can be constructed to target each group with relevant information
  • Initial rollout phases can be offered to those employees who are most interested in the software. This can then be used to gauge effectiveness, and identify any issues
  • Automated installation options for users, directly managed from within Tachyon 8
  • Satisfaction surveys to get feedback on how the process went, find out any issues or bugs, and work out how satisfied users have been with the overall process

With easy-to-use management dashboards, information is presented clearly, and IT has the ability to drill down into the information gathered.

But the product doesn’t offer just the above. There are many other capabilities, such as monitoring network devices to ensure that they’re working properly, and interacting correctly with the network. There are also options to carry out SaaS monitoring, where existing SaaS systems can be checked to ensure that they’re up and running, and not giving any issues.

All in all, Tachyon V8 looks to be a really amazing product, being able to give organisations the ability to focus on seeing what’s going on, understanding the metrics being gathered, and being able to then move forward to action changes on them.

Apps & Microsoft Partner Specialisations

We all know how much we love Microsoft certifications as users. Studying for exams, passing the certifications, earning those wonderful badges. It’s great on a personal level, and can show that someone has (usually) been actively researching the material, and knows what it’s about.

In the consulting space, Microsoft partners also can qualify for different specialisations. There are Silver Partners, Gold Partners, as well as other qualifications that partners can achieve as well. It’s something that partners strive towards, as they can get various benefits based on the level that they’re at, and the specialisation/s that they hold.

Examples of Silver & Gold level Partner competencies

Some of the specialisations depend on the people that they employ. So, for example, to gain Silver status they’d need 3 people who have passed the PL-200 exam (I’m not going to list specific details around each level here, as they change semi-frequently, and Microsoft has a good amount of information around it on their website).

One of the things in recent times that Microsoft has started actively tracking is something called MAU. This stands for ‘Monthly Active Users’. Ideally partners should be creating & deploying solutions that will attract more users within their customer organisations, so this should be a number that grows over time. In fact, partners are actually rated based on their customers’ MAU figures, so it’s something that’s actually quite important for partners to keep on top of!

Example of MAU chart

So why am I bringing apps (all types!) into this, and talking about it? Well, off the back of a conversation I had with our Microsoft PTS (Partner Technical Specialist) recently, I thought it would be helpful to expand on something.

In the ‘Low Code Application Development’ specialisation, for example, there’s a section around performance; note that this also appears on other advanced specialisations.

We were talking to our PTS around how exactly, as a partner, we can ‘register’ the app so that Microsoft knows that we built it, and how the MAU is measured etc. It really was quite fascinating to delve deeper into this, to gain a better understanding of such things.

Firstly, the process by which a partner registers that they’re a partner who is working in the customer organisation is by a process called PAL (Partner Admin Link). If you’re curious about the process, and wanting to know more, take a look at https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/link-partner-id. There’s a good amount of information there, as well as the process for it.

Now, how does this work exactly? Well, the creator of the app needs to carry out the registration. This essentially says that the creator works for a partner, and associates the partner ID. Microsoft can see who the creator of the app is, and automatically connects all of the apps that they’ve created to the partner that they’re associating as. This is really the only step that needs to be done (a sketch of it is below).
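
For completeness, the registration itself is a one-off per creator, done with the Az PowerShell module while signed in to the customer tenant – a minimal sketch (the tenant and Partner ID values are placeholders):

    # Requires: Install-Module Az.ManagementPartner
    # Sign in to the customer tenant as the app creator
    Connect-AzAccount -Tenant 'customer.onmicrosoft.com'

    # Link this account to the partner's Microsoft Partner ID, then confirm it
    New-AzManagementPartner -PartnerId '1234567'
    Get-AzManagementPartner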

But how does this actually work, behind the scenes? Well, this is the interesting part!

See, every app has an ID that’s automatically generated when the app is first created/saved. This is an inherent part of the process, and isn’t something that we, as app creators, have any control over.

It’s possible to view this by going into the Maker portal, finding the app, and clicking ‘Details’.

This will then show a variety of information about the app, including the App ID.
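
As an aside, the same App IDs can be pulled out in bulk with the admin PowerShell module, which is handy when auditing PAL coverage across many apps – a sketch (the environment name is a placeholder):

    # Requires: Install-Module Microsoft.PowerApps.Administration.PowerShell
    Add-PowerAppsAccount

    # List every app in an environment with its App ID (the AppName property is the GUID)
    Get-AdminPowerApp -EnvironmentName 'Default-00000000-0000-0000-0000-000000000000' |
        Select-Object DisplayName, AppName, Owner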

Now, what is interesting is that the underlying app code doesn’t actually show/use this App ID. It has a different GUID that’s used within it to reference the app, so somewhere behind the scenes Microsoft has something that maps one to the other.

Notepad++ – what I use to view code!

So we’ve now seen how each app has its own ID. Now for the interesting part (well, I guess that’ll depend on your experience with such things). When an app is deployed through different environments (within the same tenant), the App ID remains the same! So if I have a canvas app in DEV, and then deploy it to UAT, Staging, Pre-Prod, Prod, etc, the App ID will always remain the same.

This is how Microsoft tracks the app through the different environments. This is quite important to note, because the creator of an app may not actually have access to the Production environment (or others), for example. So when we’re needing to register that we’re the owners of an app, we can do so in the Development environment, and it will be counted for us across all environments.

Note: This will not work cross-tenant. Even when deploying across tenants, the App ID will remain the same; however, Microsoft does NOT track it across tenants – it needs to be registered by the partner in the tenant that it will be used in (regardless of which actual environment). For customers who have a multi-tenant deployment approach, the necessary conversations would therefore need to be had as to the best way to handle this.

Now above we’ve mentioned MAU, and that Microsoft tracks this to assess if the partner (who’s created the app) is meeting necessary requirements or not.

Given that developers may not have access to the Production environment, and that some customers like to ‘soft launch’ new apps, measuring MAU at a late point isn’t going to be beneficial to the partner. After all, if you only register PAL when you have 100 users on the app, and the customer only has 120 users overall, the MAU isn’t going to be very significant.

Based on my conversations with Microsoft people around this, it would seem that the best practice for the process would therefore be as follows:

  1. App creator registers PAL in the client tenant (this needs to be done once per app creator – all apps created by the same user will automatically be tracked)
  2. App creator creates the app (by saving it). App ID is automatically generated
  3. Microsoft can start tracking the necessary statistics
  4. ALM process for deploying the app (& any other components) to take place as usual. Nothing else needs to be done

I hope that this information & guide is useful for people working at Microsoft Partners, and can help them understand how this process works.

If you have any questions or comments, please feel free to drop a comment below – I’d love to hear from you around this!

Reconnecting to previous chat session

We’ve all been there. We’re in the middle of a chat session with a support agent, or talking to a salesperson, etc. Suddenly things go wrong – our browser hangs, the internet loses connection, or something else…

Alternatively, I do know of situations where kids have pulled out the internet cables during ‘playtime’ – it really does happen!

Immediately we’re frustrated. Not only have we not finished what we were trying to achieve, but we’re going to need to start all over again. Perhaps the agent took notes & logged them against our contact record, but the likelihood is that it hasn’t happened. It’s going to take time to get through to an agent again, then we have to explain the whole situation from the absolute beginning. It’s heartrending, and can cause our day to absolutely go down the tubes!

Well, what if we could just re-connect to the chat session with all our data saved? Better still, what if we could go back and continue chatting with the specific agent that we had been communicating with? Sounds amazing, but wishful, right?

Well, we now have this ability within Omnichannel, to be able to enable our customers even further. There are actually two ways in which we can offer this:

  • Reconnecting with a link (URL). If the agent is concerned that the chat session may be interrupted, they can provide a URL at the start of the session. If the customer becomes disconnected from the session for whatever reason, they can click the link, and it’ll take them right back to it. This works for both authenticated & unauthenticated users
  • Reconnecting through a prompt. For authenticated chat users, if the session drops they can be presented with a prompt. This will allow them to choose whether to connect to the previous session, or start a new session.

Let’s take a look at it, and how it works.

In the Omnichannel Administration Centre, we need to go to the specific Chat record that we’re wanting to set this up for. On opening the record, we’re presented with the reconnect settings (we do need to scroll down the screen a bit).

Note that this is in Preview currently, so just be a bit careful with it!

There are several options available. We don’t need to use each one, but let’s understand what each one does:

  • Turn on reconnect to previous chat. This is the option to enable if we’re wanting to offer this. Without it set, it’s not going to work!
  • Reconnect time limit. How long we’ll offer the option to the customer to reconnect for. See the note below around this
  • Reconnect to previous agent for. How long we’ll allow the customer to connect back to the same agent. This needs to be equal to or less than the ‘Reconnect time limit’ value that we’ve set. During this period of time, the agent’s capacity is blocked, unless the agent uses the ‘Close’ button on their interface to end the conversation (which then releases the agent’s availability)
  • Portal URL. As mentioned higher up, the agent can provide a URL for the customer to auto-reconnect if the session drops. This value is the URL that the chat widget is deployed to
  • Redirection URL. If the connection drops, and the re-connection timeout occurs, we can redirect the customer to a specific web-page. If this isn’t set, the customer will see the option to start a new chat conversation

Note: The ‘Reconnect time limit’ value is auto-set by the system to the value specified in the work-stream that’s associated with the chat widget. It’s not possible to manually change this in the chat widget itself. Instead, the work-stream ‘Auto-close after inactivity’ value would need to be changed.

Note: It’s also important that the customer hasn’t closed THEIR chat window! All of this relies on the customer chat still being there. If the customer has closed their window/browser, they won’t be offered this option.

Have you ever needed to offer customer capability along these lines? How did you go about it? Drop a comment below – I’d love to hear!

Keeping belief in oneself

Although I usually post around technical matters & such, occasionally I digress into personal reflection. After all, this is my personal blog, & I feel it’s sometimes good/relevant to share certain personal things. Today’s post is along those lines, though it does relate to a technical matter.

Let’s set the scene. As many of you know (either from knowing me personally, or from reading my blog posts), I’m from the ‘model-driven app’ background. Canvas apps are really cool, but I wouldn’t say that I’m a very advanced creator of them. I’m learning the whole time about them (well, when I have a free minute here & there). There are many people in the community who are extremely more advanced than I am, and I love being able to learn from them.

I’m also considered to be in ‘Delivery’. This is the fancy word for those who run/are involved in projects, rather than selling concepts to clients. I’d run a mile if someone tried to put me in a Sales role (though I do admire the power suits that Sales have, occasionally). I’ve done a bit of Pre-Sales (where I’m helping out from a technical perspective), but haven’t been heavily involved. It’s actually something that I’m trying to work on, with being a tech evangelist. After all, if people already know/rave about the tech, how can you evangelise about it to them?!


So last week I get a call from our Sales team. They’re really nice, and know their stuff. However they’re not ‘techies’. They had a situation – we’d been talking to a client about a potential project, and the client told us to pitch for it. Brilliant, right? Well…

The client told us that we had 4 days until the pitch deadline. Not only were we needing to pitch with the usual presentation pack (however would Sales operate without PowerPoint…?), we also had to do a live demo. Not for a completed product, but rather a Proof of Concept (PoC).

The only person available was… yes, you guessed it… me. There wasn’t anyone else around with the necessary knowledge/skills to create the PoC in the time-frame needed. I’ll freely admit that I was absolutely slammed with existing projects, but wanted to be able to help out.

However, things then got ‘better’. And by ‘better’, I mean ‘interesting’. I got told who else was pitching to the client. Obviously I’m not going to mention any specific details here, but I knew who they were. More importantly, I figured that I had a very good idea of who from their side would be creating the tech, & doing the pitch.

Now as I’m not mentioning any identifiable details, I’m feeling free to say this. They’re not at my level of tech skills. They’re nowhere NEAR my level of tech skills. This is NOT because I’m better than they are. Totally the opposite – they’re SO far ahead of me with their knowledge of things, I can barely see the dust that they kick up in a race.

Knowing this, I knew that I couldn’t build a model-driven app (though it would have worked perfectly for the scenario/s we were given). I HAD to do a canvas app. But even with doing that, it wasn’t going to be anywhere near as good as what the other side would be able to put on.

The phrase ‘gibbering in fear’ does come to mind with my reaction to finding all of this out. I did feel slightly like a deer caught in the headlights. I wanted to do well, both for myself & my company, but I honestly had no idea how we could stack up.


Thankfully, my company has an extremely open culture, and I was therefore able to talk to my manager about it. He understood where I was coming from, but encouraged me to go for it & do what I would be able to create.

My wife also encouraged me to go for it. Well, actually her words were ‘it’s not sexy when a husband says that he can’t do it, so man up and go for it!’. Ha…after that I couldn’t very well NOT do it.

So I applied myself, and with some VERY late nights (I did have other projects on, as I mentioned above), managed to get something in place. Not only did I create it, I think it looked really good. There was some really nice (canvas app) functionality, and it all came together pretty well.

Everything was in place in time (including some last minute tweaks). I even decided to spice up the demo a bit, and borrowed some dinosaurs from the kids to use for personas. We were using live camera feeds for part of the demo, and suddenly the demo was joined by ‘Rexy’, the ‘Customer Service Representative’ T-Rex! They were quite amused by it (thankfully!), and our team thought it was absolutely hilarious.


I have no idea how the other partner pitched to the client, or what the decision will be from the client. It’s way too early for that.

What I do know is that sometimes we can lose track of ourselves. I’m not going to go into the subject of ‘Imposter Syndrome’ (check out Em D’Arcy if you want to read up about that). Rather, it’s that having others around to encourage us, even though they may be more skilled, can really make the difference.

In life, we can often face challenges. How we handle them, and how we decide to move forward, can define who we are. When dealing with technology items such as the Power Platform, where there’s constant change, it can sometimes feel very daunting, but we still need to push ahead.

Yesterday I was listening to Lisa Crosbie talking about her journey into technology (and canvas apps). As she put it – ‘there is no comfort zone here – you need to find a place to feel comfortable with this level of discomfort, and ride it to be successful’. It’s really so true. It’s not just needing to push ourselves in the traditional way, but to keep up our own confidence in our skills & abilities. With this, we can continue to drive forward, keep on learning, and continue our journey of greatness!

I’m really glad that I was able to do this, and hope that I can keep this with me. By doing so, I’ll be able to continue along my own journey.

Have you ever had a time when a challenge seemed insurmountable? How did you cope with it? Drop a comment below – I’d love to hear!

Canvas Apps & renaming field labels

Today I want to share with you something that I’ve realised. Changing field labels can have unintended consequences!

Let’s cast our minds back to the days of ‘traditional’ Microsoft CRM, or as it’s so lovingly referred to nowadays, ‘model-driven’ apps. What you had were a number of entities (eg Accounts, Contacts, etc), all of which contained fields. Fields could be different types (text, integer, boolean etc), and have varying properties on them. You could set them to be required (or not), searchable (or not), and have so much fun.

At the heart of a field is the name that it has. Well, technically there are two names. One is the actual database name. Once a field was created & saved, this was effectively written in stone. The only way to handle a situation where you spelled this incorrectly was to delete it, and then recreate it. Even then, it could still be floating around in the back-end database in its original form.

The second name is the Display name (or Display label). This was the text used on the entity form itself, & could be changed as desired. This was actually really useful – many a time a business unit would say something like ‘we don’t want the field to show as Zip/Postal Code; we want it to say Postcode’. Well, that was easy enough to address – simply go ahead, load up customisations, & change the display name property for the field. Everyone was impressed & happy, and could get on with their work.

There were of course times that Business Unit A would say ‘I want ‘ABC’ as the display name’, and then Business Unit B would say ‘Ah, but I want ‘XYZ’ as the display name!’. To handle this, it was very possible to customise the label on the form itself, which would then override the display name value. This, of course, would only be valid for that specific form, so it was then imperative to have different forms for the different business units.

Now, in the good old days we used to create SQL queries against the database, SSRS reports, etc. In order to do this, we needed to know the actual underlying (database) field name. We could of course open up customisations & start trawling through, but there are better methods for doing this. One of these is Level Up by Natraj Yegnaraman. This can be found at https://github.com/rajyraman/Levelup-for-Dynamics-CRM, and is an extension which can be run on Chrome, Edge (Chromium), & Firefox.

Using this amazing tool, it was possible to merely load an entity form up in an existing system, and then TADA! At the click of a button (well, two clicks actually), the underlying database name was revealed. This was an absolute lifesaver, so many times.

So there we’ve been, toddling along for many years like this. It worked, and worked well. All was good.

Then came along canvas apps. Now I’m not a canvas app guru by any means – I’m quite new to them, and still trying to wrap my head around the ‘special’ way in which they operate. Thankfully there are quite a few gurus in the community who have given me help in one way or another to learn how to carry out various functions, and I think that I may JUST be starting to get the hang of it.

With the current COVID-19 situation, I’ve been working on a series of apps for work, to help local authorities. One of these is a canvas app for call centres, to record information easily & quickly. We chose to go down the canvas route due to being able to have a clean layout, as well as being able to display information for the operators to read. This would have been much more difficult in a traditional model-driven app, especially as such things as dialogues have been deprecated.

One of the functions that I’ve had to learn to do this has been the ‘Patch’ function (see https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-patch for more information on this). The following is an example along the lines of one of the Patch statements that I was using.
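
The original screenshot hasn’t survived, so here’s a reconstruction along those lines – the control & field names are illustrative rather than the actual ones from the project:

    Patch(
        Tasks,
        Defaults(Tasks),
        {
            Subject: txtCallSummary.Text,
            Description: txtCallNotes.Text,
            Regarding: ContactForm.LastSubmit
        }
    )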

This was working remarkably well – it was creating the task record, and setting all of the different values that I needed. For those who are curious as to why I was using a Patch statement, rather than submitting the form, it was due to needing to set the ‘Regarding’ field, which has some very special behaviour!

Then someone on the team said ‘Hold on – we’re only storing one address. Let’s change the field display names to remove the Address 1 part, so that we don’t confuse users’. OK – I didn’t INITIALLY see any issue with this. I bet that you can see what’s coming…

Yes – you’re right. The Patch statement isn’t referring to the field database name. It’s referring to the field display name! The reason for this is that this is the syntax that canvas apps use – there doesn’t seem to be a way to refer to the actual underlying field database name.

Of course, I only actually discovered this when I ran through the canvas app again. And indeed, it was whilst demonstrating it to other people! Oh joys – what a wonderful time for it to happen.

So, I then had to figure out what had happened – thankfully that didn’t take too much time. What DID take time was going through every single place in the canvas app that had code referring to the specific fields, and updating them to the new (correct) values.
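
Again, the screenshot is lost, but the updated references ended up looking something like this (illustrative names once more) – note that the formula now has to use the NEW display names:

    Patch(
        Contacts,
        Defaults(Contacts),
        {
            'First Name': txtFirstName.Text,
            City: txtCity.Text,                     // was 'Address 1: City'
            'ZIP/Postal Code': txtPostcode.Text     // was 'Address 1: ZIP/Postal Code'
        }
    )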

So, the vitally important lesson to learn here is to be VERY careful when changing field display names, especially if you have one (or more) canvas apps referencing them. The last thing that you want is a major headache in having to go back through every place that refers to them, and changing/updating the values.

The only workaround that I’d suggest is that if you’re wanting to change how fields display in the canvas app itself, change the ‘Text’ value for the field’s label control within the app, rather than the display name itself.
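
To illustrate – on the field’s data card, the label control’s Text property defaults to the Dataverse display name, and can simply be overridden. This is display-only, so any Patch references elsewhere in the app keep working:

    // Label control inside the field's data card – the default:
    Text = Parent.DisplayName

    // Overridden with a custom caption instead:
    Text = "Postcode"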

That way, HOPEFULLY, nothing will break moving forward.

I hope that you’ve found this useful. If you have a different way in which you’ve handled this situation, feel free to leave a comment below!

Canvas App record set Regarding field

For the last few days, I’ve been working on an app. Not just any app – it’s a canvas app! (It actually happens to be a COVID-19 related app, for local authorities to use to contact vulnerable people & check they’re OK etc).

Now, my background isn’t canvas apps – it’s the model-driven app approach. I’ve been doing this for years – after all, my experience goes back to Microsoft CRM 3.0! So that’s all really nice & easy for me (even with some of the more modern ‘tweaks’ that have been brought in). Canvas apps, on the other hand, are very different from what I’m used to, and are taking quite a bit of getting used to.

See, the following example is easy in a traditional model-driven app:

Create a contact, save various attributes to the contact record. Then create a task, and set the Task Regarding field to the contact that you’ve just created

Looking at that, my mind says ‘easy-peasy’! I create the fields required for the contact entity (& task entity as well, if needed). I then add them to the entity form/s (creating or modifying the form view/s as well). Finally, I create a Business Process Flow for users on the contact entity, and append the task creation to it. Simple, and done – not much time needed to be spent.

But when needing to do this as a canvas app, things change around QUITE a bit. I can’t create that business process flow, and I have multiple screens to have all of the information on.

Now, if I could add the ‘Regarding’ field to the edit form grid, and apply formatting to it, I could hopefully then just submit & save the form. However, that unfortunately doesn’t work. I can add the field, but when I do so, an error is displayed.

So that doesn’t work. Hmmm – how then should I go around doing it?

I did (obviously!!) take a look online. Here I came across this wonderful article all about polymorphic lookups (https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/working-with-references). Having read, & re-read through it, I was STILL not understanding what exactly I should be doing from it!

So I was stumped. Thankfully we have an amazing community, and on reaching out to someone within it (thanks Eric!), I was helped out. I therefore thought to write this post up, so that it can help others as well.

There are two parts to this, for my specific scenario:

  • Saving the contact record down. This is a matter of using (in my case) the command ‘SubmitForm(ContactForm)’ on my contact form screen. I can then also set a variable if I want to, to refer to the Contact record GUID (hey – I’m trying to be cool here & show that I can!)
  • Finding a different way to save my task record. I accomplished this using the Patch statement – thankfully it wasn’t too difficult to grasp how this worked.

So, how did I go about using the Patch statement? Well, the function is referenced here – https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-patch. With Eric’s help, I soon started to see how to do it.

What I did was add the following line in my Patch statement when I was wanting to save the task: ‘Regarding: ContactForm.LastSubmit’ (‘ContactForm’ being the name of the form for the contact information). What this did was write into the record the GUID of the contact record that I last saved.
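
Pieced together, the pattern looks like this – the screen, control & variable names are illustrative, with ‘ContactForm’ being the contact form as above:

    // OnSelect of the 'Save' button on the contact screen
    SubmitForm(ContactForm);

    // OnSuccess of ContactForm – stash the newly created record, then move on
    Set(varContact, ContactForm.LastSubmit);
    Navigate(TaskScreen);

    // OnSelect of the 'Save' button on the task screen
    Patch(
        Tasks,
        Defaults(Tasks),
        {
            Subject: txtSubject.Text,
            Regarding: varContact   // or ContactForm.LastSubmit directly
        }
    )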

As shown, an alternative to referring to ‘LastSubmit’ directly is to use a variable instead, and set that as the Regarding value.

Thankfully this all worked. I’m now able to create Task records and set their Regarding field value to the Contact that I set up before them – which is the exact thing that I was trying to do!

I hope that this has been helpful – leave a note in the comments if you’ve found another way of doing this.