Chatting to Michelle about her long-time interest in anime/manga, the wonders of #PowerAutomate cloud flows, & what might happen as a result of an unexpected loop…
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
Let’s face it, and call a spade a spade (or a shovel, depending on where in the world you happen to be). Security roles are very important within Dataverse, to control what users can (& can’t!) do within the system. Setting them up can be quite time-consuming, and troubleshooting them can sometimes be a bit of a nightmare.
Obviously we need to ensure that users can carry out the actions that they’re supposed to do, and stop them doing any actions that they’re not supposed to do. This, believe it or not, is generally common sense (which can be lacking at times, I’ll admit).
Depending on the size of the organisation, and of course the project, the number of security roles can range from a few, to a LOT!
Testing out security can take quite a bit of time, to ensure that testing covers all necessary functionality. It’s a very granular approach, and can often feel like opening a door, to then find another closed door behind the first one. Error messages appear, a resolution is implemented, then another appears, etc…
Most of us aren’t new to this, and understand that it’s vitally important to work through these. We’ve seen lots of different errors over our lifetime of projects, and can usually identify (quickly) what’s going on, and what we need to resolve.
Last week, however, I had something new occur, that I’ve never seen before. I therefore thought it might be good to talk about it, so that if it happens to others, they’ll know how to handle it!
The scenario is as follows:
The client is using Leads to capture initial information (we’re not using Opportunities, but that’s a whole other story)
Different teams of users have varying access requirements to the Leads table. Some need to be able to view, some need to be able to create/edit, and others aren’t allowed to view it at all
The lead process is driven by both region (where the lead is located), as well as products (which products the lead is interested in)
Now, initially we had some issues with different teams not having the right level of access, but we managed to handle those. Typically we’d see an error message along the following lines:
We’d then use this to narrow down the necessary permissions, adjust the security role, re-test, and continue (sometimes onto the next error message, but hey, that’s par for the course!).
However, just as we thought we had figured out all of the security roles, we had a small sub-set of users report an error that I had NEVER seen before.
The scenario was as follows:
The users were able to access Lead records. All good there.
The users were able to edit Lead records. All good there.
The users were trying to assign records (ie change the record owner) to another user. This generally worked, but when trying to assign the record to certain users, they got the following error:
Now this was a strange error. After all, the users were able to open/edit the lead record, and on checking the permissions in the security role, everything seemed to be set up alright.
The next step was to go look at the error log. In general, error logs can be a massive help (well, most of the time), assuming that the person looking at it can interpret what it means. The error log gave us the following:
As an aside, the most amusing thing about this particular error log, in my opinion, was that the HelpLink URL provided actually didn’t work! Ah well…
So on taking a look, we see that the user is missing the Read privilege (on what we’re assuming is the Lead table). This didn’t make sense – we went back to DOUBLE-check, and indeed the user who was trying to carry out the action had read privileges on the table. It also didn’t make sense because the user was able to open the lead record itself (disclaimer – I’ve not yet tried a security role where the user has create/write access to a table, but no read access… I’m wondering what would happen in such a scenario).
Then we had a lightbulb moment.
In truth, we should have probably figured this out before, which I’ll freely admit. See, if we take a look at the original error that the user was getting, they were getting this when trying to assign the record to another user. We had also seen that the error was only happening when the record was being assigned to certain users (ie it wasn’t happening for all users). And finally, after all, the error message title itself says ‘Assignee does not hold the required read permissions’.
So what was the issue? Well, it was actually quite simple (in hindsight!). The error occurred when the record was being assigned to a user who didn’t have any permissions on the Lead table!
What was the resolution? Well, to simply grant (read) access to the Lead table, and ensure that all necessary users had this granted to them! Thankfully a quick resolution (once we had worked out what was going on), and users were able to continue testing out the rest of the system.
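To illustrate what was going on under the hood, here’s a hedged sketch (in Python) of the two Dataverse Web API calls involved: the PATCH that assigns a record to a new owner, and the RetrieveUserPrivileges function that can confirm whether the prospective owner actually holds Read privilege. The org URL and IDs are placeholders, and authentication (an Azure AD bearer token) is omitted.

```python
# Sketch of the Dataverse Web API requests involved in the assign scenario.
# The org URL and record/user IDs are placeholders; real calls also need an
# Authorization header with an Azure AD bearer token.

BASE = "https://yourorg.crm.dynamics.com/api/data/v9.2"  # placeholder org URL


def assign_request(table: str, record_id: str, new_owner_id: str):
    """Build the PATCH request that reassigns a record to another user.
    Dataverse checks that the *assignee* holds Read privilege on the table –
    exactly the check that was failing in the scenario above."""
    url = f"{BASE}/{table}({record_id})"
    body = {"ownerid@odata.bind": f"/systemusers({new_owner_id})"}
    return url, body


def assignee_privileges_request(user_id: str):
    """Build the GET request that lists a user's effective privileges –
    handy for confirming the prospective owner can actually read Leads."""
    return (f"{BASE}/systemusers({user_id})"
            "/Microsoft.Dynamics.CRM.RetrieveUserPrivileges()")
```

Checking the prospective assignee’s privileges first would have surfaced the missing Read privilege straight away, rather than leaving us to decode the error log.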
Has something like this ever happened to you? Drop a comment below – I’d love to hear the details!
It’s been a while since I’ve taken an exam. Admittedly, this is for two reasons. Firstly, the renewal process for certifications now (as updated last year) is not to retake the exam, but rather to re-qualify through Microsoft Learn. The second reason is that I’ve been waiting for some new exams to come out (OK – there’s the DA-100, which is still on my list of things to do…).
Well, there’s a new exam on the block. In fact, it’s a different type of exam – this is a ‘Speciality’ exam, rather than one focusing on a specific type of application. It’s the first of its kind, though there are likely to be more to follow in the future.
It’s the MB-260, which is all around Customer Data. That’s right – it’s not about how to do sales, or customer service, or something else. It’s about taking the (holistic) approach to ALL of the data that we can hold on customers, and do something with it.
Candidates for this exam implement solutions that provide insights into customer profiles and that track engagement activities to help improve customer experiences and increase customer retention.
Candidates should have firsthand experience with Dynamics 365 Customer Insights and one or more additional Dynamics 365 apps, Power Query, Microsoft Dataverse, Common Data Model, and Microsoft Power Platform. They should also have direct experience with practices related to privacy, compliance, consent, security, responsible AI, and data retention policy.
Candidates need experience with processes related to KPIs, data retention, validation, visualization, preparation, matching, fragmentation, segmentation, and enhancement. They should have a general understanding of Azure Machine Learning, Azure Synapse Analytics, and Azure Data Factory.
Note that there’s quite a bit of Azure in there – it’s not just about Power Platform, Dataverse, or Dynamics 365. People who handle reporting on customer data should have various Azure skills as well.
There’s also a new type of badge that will be available:
At the time of writing, there are no official Microsoft Learning paths available to use to study. I do expect this to change in the near future, and will update this article when they’re out. However the objectives/sub-objectives are available to view from the main exam page, and I’d highly recommend going ahead & taking a good look at these.
As in my previous exam posts, I’m going to stress that it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.
Overall, I had 51 questions, which was towards the higher end of the range I’ve experienced in my exams over the last year or so. There was only a single case study though.
Some of the naming conventions weren’t updated to the latest methods, which I would have expected by now. I still had a few references to ‘entities’ and ‘fields’ come up, though for the most part ‘tables’ and ‘columns’ were used. I guess it’s a matter of time to get everything up to speed with it.
Differences between Audience Insights and Engagement Insights
What are the benefits of each
When would you use each one
What types of users will benefit from each type
How to create customer insights
Environments
Types of environments
How to create a new environment
What options are available when creating an environment
What is possible to copy from an existing environment
Relationships
Different types of relationships
What is each one used for
Limitations of different relationship types
Business level measures vs customer level measures
What each one is, and what they’re used for
Power Query
How to use
How to configure
How to load data
Data mapping
Different types available to use
Scenarios each type should be used for
Limitations of each type
How to set it up
Segments
What are segments, how are they set up, how are they used
What are quick segments, how are they set up, how are they used
What are segment overlaps, how are they set up, how are they used
What are segment differentiators, how are they set up, how are they used
Measures
What are measures, how are they set up, how are they used
Data refresh
Automated vs manual options
Limitations of each type
Availability of each type
How to set up each type
How to apply each type
Data Unification
What is this
How it can be used
How to set it up
Limitations of it
Process validation
Changing existing models
AI for Audience Insights
What is this
What can it be used for
How to use it
Factors that can affect outcomes
Security
Using Azure Key Vault
Capabilities of this
How to set it up
How to use it
Dynamics 365
Capabilities for interacting with Dynamics 365
How to set it up
How to display data, and where it can be displayed
What actions users are able to carry out within Dynamics 365
Wow. It’s a lot of stuff. If you’re not already hands-on with the skills needed, I’d highly recommend getting a decent amount of experience before taking the exam!
I can’t tell you if I’ve passed it or not…YET! Results aren’t going to be out for several months, and to be honest, I’m not quite sure how well I’ve actually done.
So, if you’re aiming to take it – I wish you the very best of luck, and let me know your experience!
Speaking to Antti around whiskey collecting (something that a lot of the #community seems to enjoy?), and the importance of PROPER requirements gathering during projects (especially around ‘low code’!)
If you’d like to come appear on the show, please sign up at http://bit.ly/2NqP5PV – I’d love to have you on it!
Click here to take a look at the other videos that are available to watch.
Over the holiday period, I’ve been playing around with solution deployments. OK – don’t judge me too much…I also took the necessary time off to relax & get time off work!
But with some spare time in the evenings, I decided to look a bit deeper into the world of DevOps (more specifically, Azure DevOps), and how it works. I’ll admit that I did have some ulterior motives around it (for a project that I’m working on), but it was good to be able to get some time to do this.
So why am I writing this post? Well, there’s a variety of great material out there already around DevOps, such as https://benediktbergmann.eu/ by Benedikt (check out his Twitter here), who’s really great at this. I chat to him from time to time around DevOps, to be able to understand it better.
However, I ran into some quite interesting behaviour (which I STILL have no idea why it’s the case, but more on this later), and thought that I would document it.
Right – let’s start off with manual deployments. As we know, manual deployments are done through the user interface. A user (with necessary permissions) would do the following:
Go into the DEV environment, and export the solution (regardless of whether this is managed or unmanaged)
Go into the target environment, and import the solution
Pretty simple, right?
Now, from a DevOps point of view, the process is similar, though not quite the same. Let’s see how it works:
Run a Build pipeline, which will export the solution from the DEV environment, and put it into the repository
Run a Release pipeline, which will get the solution from the repository, and deploy it to the necessary environment/s
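Under the covers, both the manual route and the pipeline tasks end up calling the same Dataverse Web API actions. As a hedged sketch (placeholder org URL, authentication omitted), the export and import steps boil down to two POSTs:

```python
# Sketch of the Dataverse Web API actions behind solution export/import.
# Placeholder org URL; real calls need an Azure AD bearer token, and the
# ExportSolution response returns the solution zip as a base64 string.

BASE = "https://yourorg.crm.dynamics.com/api/data/v9.2"  # placeholder


def export_solution_request(name: str, managed: bool):
    """POST URL & body for the ExportSolution action (the Build pipeline step)."""
    return f"{BASE}/ExportSolution", {"SolutionName": name, "Managed": managed}


def import_solution_request(zip_base64: str, job_id: str):
    """POST URL & body for the ImportSolution action (the Release pipeline step)."""
    return f"{BASE}/ImportSolution", {
        "CustomizationFile": zip_base64,   # base64-encoded solution zip
        "OverwriteUnmanagedCustomizations": False,
        "PublishWorkflows": True,
        "ImportJobId": job_id,
    }
```

The pipeline tasks wrap these calls (plus auth and polling) for you – which is exactly why the default import behaviour matters, as we’ll see below.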
All of that runs (usually) quite smoothly, which is great.
Now, let’s talk for a minute about managed solutions. I’m not going to get into the (heated) discussion around managed vs unmanaged solutions. There’s enough that’s been written, said, and debated on the topic to date, and I’m sure it will continue. Obviously we all know that the Microsoft best-practice approach is to use managed solutions in all non-DEV environments.
Anyway – why am I bringing this up? Well, there’s one key difference in behaviour when deploying a managed solution vs an unmanaged solution (for a newer solution version), and this is to do with removing functionality from the solution in the DEV environment:
When deploying an unmanaged solution, it’s possible to remove items from the solution in the DEV environment, but when deploying to other environments, those items will still remain, even though they’re not present in the solution. Unmanaged solution deployments are additive only, and will not remove any components
When deploying a managed solution, any items removed from the solution in the DEV environment, and then deploying the solution to other environments will cause those items to be removed from there as well. Managed deployments are both additive & subtractive (ie if a component isn’t present in the solution, it will remove it when the solution is deployed)
Now most of us know this already, which is great. It’s a very useful way to handle matters, and can assist with handling a variety of scenarios.
So, let’s go back to my first question – why am I writing this post? Well… it’s because of the different behaviour in manual vs automated deployments, which I discovered. Let’s take a look.
When deploying manually, we get the following options:
The default behaviour (outlined above) is to UPGRADE the solution. This will apply the solution with both additive & subtractive behaviour. This is what we’re generally used to, and essentially the behaviour that we’d expect with a managed solution.
Now, when running a release pipeline from Azure DevOps, we’d expect this to work in the same way. After all, systems should be built to all work in the same way, right?
Well, no, that’s not actually what happens. See, when an Azure DevOps release pipeline runs, the default behaviour is NOT to import the solution (we’re talking managed solutions here) as an upgrade. Instead (by default), it imports it as an UPDATE!!!
This is what was really confusing me. I had removed functionality in DEV, ran the build pipeline, then ran the release pipeline. However the functionality (which I had removed from DEV) was still present in UAT! It took me a while to find out what was actually happening underneath…
So how can we handle this? Well, apart from suggesting to Microsoft that they should (perhaps) make everything work in the SAME way, there’s a way to handle it within the release pipeline. For this, it’s necessary to do two things:
Firstly, on the ‘Import Solution’ task, we need to set it to import as a holding solution.
Secondly, we then need to use the ‘Apply Solution Upgrade’ task in the release pipeline
What this will do is then upgrade the existing solution in the target environment with the holding solution that’s just been deployed.
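In Web API terms (a hedged sketch – the pipeline tasks wrap these actions for you, and the org URL is a placeholder), ‘import as a holding solution’ is the ImportSolution action with HoldingSolution set to true, and ‘Apply Solution Upgrade’ corresponds to the DeleteAndPromote action:

```python
# Sketch of the two Web API actions behind the two pipeline tasks above.
# Placeholder org URL; authentication omitted.

BASE = "https://yourorg.crm.dynamics.com/api/data/v9.2"  # placeholder


def import_as_holding_request(zip_base64: str, job_id: str):
    """ImportSolution with HoldingSolution=True – stages the new version
    alongside the existing one instead of updating it in place."""
    return f"{BASE}/ImportSolution", {
        "CustomizationFile": zip_base64,  # base64-encoded solution zip
        "OverwriteUnmanagedCustomizations": False,
        "PublishWorkflows": True,
        "ImportJobId": job_id,
        "HoldingSolution": True,
    }


def apply_upgrade_request(unique_name: str):
    """DeleteAndPromote – replaces the old version with the holding
    solution, removing components no longer present (a true upgrade)."""
    return f"{BASE}/DeleteAndPromote", {"UniqueName": unique_name}
```

The subtractive behaviour only kicks in at the DeleteAndPromote step – which is why an import-as-update on its own leaves the removed components in place.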
Note: You will need to change the solution version to a higher solution number, in order for this to work properly. I’m going to write more about this another time, but it is important to know!
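Since the upgrade path only works when the incoming version number is higher, it’s worth automating the bump before export. A trivial helper (assuming the usual four-part Dataverse version format):

```python
def bump_revision(version: str) -> str:
    """Increment the last segment of a dotted solution version,
    e.g. '1.0.0.5' -> '1.0.0.6', so the import is treated as an upgrade."""
    parts = version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)
    return ".".join(parts)


print(bump_revision("1.0.0.5"))  # prints "1.0.0.6"
```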
So in my view, this is a bit annoying, and perhaps Microsoft will change the default behaviour within DevOps at some point. But for the moment, it’s necessary to do.
Has this (or something similar) tripped you up in the past? How did you figure it out? Drop a comment below – I’d love to hear!
As a few people may know, some years back I ran an MSP (Managed Service Provider). In essence, small companies would outsource their IT needs to us, whether it was hardware, networking, software licensing or support. It generally proved to be cost-effective to do this, rather than each company needing a full internal IT support desk.
It was quite exciting, and I built up relationships with various vendors & providers. Attending exhibitions & conferences was always great, especially with the free SWAG that could be collected! Although having moved away from working in the MSP space a while back, I do still keep my eye out, attend some exhibitions, etc. It’s always great to see what new offerings are on the market, especially as I’m able to include suggestions for clients in holistic solution projects.
Which brings me on to 1E…
Let me start off by saying that I haven’t previously come across the Tachyon offering from 1E before, but having now watched a webinar around their v8 release, I’m decently impressed.
Having worked in the IT industry (in a number of different capacities across the years) I know that it’s vital for any organisation to understand what’s going on. That’s not just being system or application focused, that’s also including the employees & staff.
Especially in today’s world, where so many different things can be happening, it’s absolutely vital that employees are considered key to practically all decisions that an organisation makes.
Of course, there are a wide number of challenges that today’s ‘work from home’ workforce faces, which traditionally were never around. From coping with needing to use home broadband connections, to not having IT support easily available for issues, there are modern challenges that not only need to be faced, but need to be worked on pro-actively.
The Tachyon strapline is that it’s not enough to observe a problem (ie be reactive to it), it’s absolutely key to be able to be pro-active with issues, & try to help before they actually cause problems.
With the monitoring capabilities that Tachyon V8 has, it’s clear to see what a differentiator it’s likely to be in the current marketplace.
Some of the key capabilities are:
Being able to send announcements out to the workforce, and tailor these for specific groups
Giving employees the ability to interact with announcements, providing feedback & other information needed
Monitoring employee wellness, such as identifying if employees are working more hours than they should be, and then checking if everything is alright with them
Interfacing with existing ITSM software suites, to ensure that IT has the relevant support systems in place
The aim is really to be the platform to manage the digital employee experience, to enable and empower organisations holistically across everything that they’re doing.
Take rolling out new software, for example:
IT can engage with users to get their feedback on the proposed system. With this, they can ensure that hardware will be compatible, as well as identify prospective ‘Early Adopters’ as well as ‘Detractors’
Entire campaigns can be constructed to target each group with relevant information
Initial rollout phases can be offered to those employees who are most interested in the software. This can then be used to gauge effectiveness, and identify any issues
Automated installation options for users, directly managed from within Tachyon 8
Satisfaction surveys to get feedback on how the process went, find out any issues or bugs, and work out how satisfied users have been with the overall process
With easy to use management dashboards, information is presented clearly & allows IT the capabilities to drill down into the information gathered.
But the product doesn’t offer just the above. There are many other capabilities, such as monitoring network devices to ensure that they’re working properly, and interacting correctly with the network. There are also options to carry out SaaS monitoring, where existing SaaS systems can be checked to ensure that they’re up and running, and not giving any issues.
All in all, Tachyon V8 looks to be a really amazing product, giving organisations the ability to focus on seeing what’s going on, understanding the metrics being gathered, and then moving forward to action changes on them.