AI, Microsoft Copilot, and Copyright

Firstly, a Happy New Year to everyone – I’m sure that you have amazing plans for 2024, and I wish you the best of luck with them!

Over the last few months, I’ve been keeping a close eye on Copilot (since it was announced as generally available), and on various happenings around the wider AI scene. It seems almost impossible to find someone who is NOT aware of the OpenAI happenings towards the end of 2023, and so many more conversations now include mention of ChatGPT, Azure OpenAI, etc.

But there’s one item I’d like to pick up on & discuss which, given the news events of last week, is extremely pertinent: copyright. It’s a very important topic to understand, but in order to understand it properly, we first need to understand how AI offerings are built in the first instance.

All of the various AI offerings rely on LLMs. This stands for ‘Large Language Model’, and these are deep learning models that are pre-trained on vast amounts of data – and by vast, I mean that the number of data points is truly staggering:

Incidentally, users should consider the use case that they’re using AI for, and look to use the optimum model for it. As an example – if wanting to find out the best routine for making a cup of coffee, it is unlikely that a GPT-4 model would be needed; a suitable model could be one with fewer parameters. This is in part due to the amount of resources needed to run queries on more advanced models.
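To make that idea a little more concrete, below is a minimal sketch (in TypeScript) of routing a prompt to either a lightweight or a more capable model deployment in Azure OpenAI. The endpoint, deployment names, and api-version value are purely illustrative placeholders – this isn’t how Copilot itself works under the hood, just the general principle of matching the model to the task.

```typescript
// Minimal sketch: route a prompt to a smaller or a more capable Azure OpenAI
// deployment depending on how demanding the task is. The endpoint, deployment
// names, API key, and api-version are placeholders for illustration only.

const endpoint = "https://my-resource.openai.azure.com"; // hypothetical resource
const apiKey = "<your-api-key>";

// Hypothetical deployment names – one lightweight model, one more capable one.
const deployments = { simple: "gpt-35-turbo", complex: "gpt-4" };

async function ask(prompt: string, complex = false): Promise<string> {
  const deployment = complex ? deployments.complex : deployments.simple;
  const url = `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=2024-02-01`;

  const response = await fetch(url, {
    method: "POST",
    headers: { "api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}

// A coffee-making question hardly needs the largest model available.
ask("What's a good routine for making a cup of coffee?").then(console.log);
```

The point isn’t the specific code – it’s that a trivial question rarely justifies the cost & resources of the largest model available.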

When using an AI offering, these datasets are used to present answers back to users. Some AI models are not limited to just their LLM datasets, and are able to actively trawl & access websites as well for more up-to-date information.
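To illustrate that ‘grounding’ idea in the simplest possible way (and this is deliberately naive – real offerings use search indexes, citations, freshness checks and content filtering), a sketch might look like the following; the function name is hypothetical:

```typescript
// Deliberately naive illustration of grounding a question with live web content:
// fetch a page, crudely strip out the HTML tags, and combine the remaining text
// with the user's question before sending it to whichever model you're using.

async function buildWebGroundedPrompt(question: string, url: string): Promise<string> {
  const html = await (await fetch(url)).text();

  // Very rough HTML-to-text conversion, purely for illustration.
  const pageText = html.replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").slice(0, 4000);

  return `Using only the following page content, answer the question.\n\n` +
    `Page content: ${pageText}\n\nQuestion: ${question}`;
}
```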

But there are two inherent problems with the way that this can work:

Data

When users interact with chatbots or other AI capabilities, they’re inputting data into them. This data could be used by the AI capability to further train its models, ingesting the data provided to it. Given that the data could be sensitive or proprietary, this can be problematic. Not all AI organisations use data just for processing, as various users have discovered.

Copyright

Given that responses back to users (whether in text format, image format, or other formats) are based on the datasets from the underlying LLMs, users could potentially be provided with information that is actually copyrighted, and which they’re not permitted to use. This is very problematic, as it can result in users passing off material as their own, whilst it actually belongs to someone else.

Microsoft’s approach

Microsoft’s approach to AI capabilities has been made extremely clear. You may be familiar with slides similar to the following:

What this means is the following:

  • Microsoft will not use any customer data to train AI models & capabilities for any standard AI offering
  • Any custom Copilots created by Microsoft customers will remain their own – Microsoft will not use data or capabilities from customer-created collateral within the Microsoft AI offerings. This means that a bespoke Copilot will only offer its functionality to the customer that created it – other organisations, even within the same sector & creating Copilots themselves, will not benefit from this

Microsoft has also confirmed publicly that any information generated through the usage of Copilot or Azure OpenAI can be used without concern about copyright claims (Microsoft announces new Copilot Copyright Commitment for customers – Microsoft On the Issues). In fact, Microsoft has even gone so far as to say that it will assume responsibility for any potential legal risks involved.

If a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters,

Brad Smith, Microsoft Chief Legal Officer

This is quite important – customers are able to use Copilot & Azure OpenAI capabilities, and be assured that they will not have to be concerned about copyright issues or challenges. There are of course some conditions around this, in the way that prompts & interactions need to be handled (see Customer Copyright Commitment Required Mitigations | Microsoft Learn for further information on this).

Microsoft has this called out specifically within their Universal License Terms, available to view in full at Microsoft Product Terms.

Recent news events

With the announcement last week that the New York Times has filed suit against OpenAI & Microsoft, this is a very timely topic to look at (New York Times Sues OpenAI and Microsoft Over Use of Copyrighted Work – The New York Times (nytimes.com)).

The implications of such a lawsuit will affect how AI capabilities can be created & used on an ongoing basis. Copyright is of course very important to respect, and it will be quite interesting to see how this plays out. Having taken a look at some of the material included in the lawsuit, there are most definitely similarities between the New York Times material and the AI-generated output.

So, in my opinion, this is going to be a very interesting space moving forward, and I look forward to seeing how it goes, and any effects that it has on the usage of AI within organisations.

‘Swarming’ for Customer Service

You might be wondering as to what I mean by ‘swarming’ in the title for this post. Don’t worry – it’ll become clear pretty soon! But first of all, let’s understand the story behind this new functionality.

Where to begin? Well, let’s take a look within an organisation. It doesn’t really matter what sort of organisation it is, as most organisations will have similar scenarios overall. So, what are we actually talking about?

Customer Service is, of course, a very important function of any organisation. Customers who have purchased products may need support, or perhaps are having issues that need to be resolved. Customer service agents are there to handle customer queries, and look to resolve them as soon as possible.

However, it’s possible that the customer service agents don’t actually know how to resolve the customer query/issue themselves. They can, of course, use the Knowledge Base, but that requires knowledge articles to be created & maintained.

Now within the organisation, there will be SMEs (Subject Matter Experts). These are the people who know the matter in precise detail, often being the people who created the product and/or process to begin with. But these people aren’t usually carrying out the customer service function.

So what this means is that the customer service agents need to try to work out who might actually know the answer/s, be able to help resolve the customer issue, etc. This can take time, be laborious, and perhaps not even be possible (depending on the organisation).

Hmm. So, what if the system might be able to actually SUGGEST the right people for a problem or issue? Even better, what if the system could support them being involved directly with the record/s, regardless of whether they’re a user within Dynamics 365 or not?

Enter the swarming capability onto the Dynamics 365 scene….

The aim of swarming is to bring together the necessary experts within Dynamics 365. Now, having said that, not all users will actually be interacting directly within Dynamics 365. What happens is that a specific Teams chat is created, so that users outside of the system can see the necessary information, and give input on the situation.

This builds on the existing functionality of being able to use Teams chats directly within Dynamics 365, but takes it to a whole new level, by having the system automatically suggest relevant people within the organisation, and bring them into the swarm chat!

There are some configuration steps needed to enable this to happen.

Firstly, Teams needs to be enabled within Dynamics 365:

Once we start to turn things on, we can then see the following. This allows us to specify the types of records that we can use swarming on. This is great, as we may be building out custom functionality using other tables, and can enable swarming on these as well.

Once Teams chat has been enabled, we can then start setting up the swarming capabilities:

As part of the setup, we have:

  • The ability to set the general message that users will see when they create a swarm
  • Activating the case form that’s used for swarming (as this will include the functionality for swarming on the case form)
  • A Power Automate flow that will be used for sending notifications & invites within Teams for suggested (internal) users
  • Creating swarm condition rules, which allows us to bring in specific conditions around skills, etc.

So, how does this work in practice, once the system has been initially configured?

Users can go to the relevant record, such as a case record. They’re able to select ‘Create swarm’ from the menu bar:

This then allows the user to provide a summary of what the swarm is for and the scenario, as well as to select the skills needed for the swarm. Dynamics 365 can also suggest skills that it thinks would be helpful:

Users from across the organisation are matched, according to the skills identified:

Notifications are sent to them within Teams, requesting their help with the matter:

When they accept the invitation, they’re then brought into the swarm:

In fact, the members of the swarm aren’t actually accessing the swarm information within Dynamics 365. Instead, they’re seeing & interacting with the swarm within Teams itself!

Once the swarm is active, information can be shared, and a solution found. The swarm can then be successfully closed down:

This is truly amazing. Obviously collaboration on issues is important, especially when considering that we’re trying to resolve customer issues as quickly as possible! I’m also really excited about this, as I was part of the initial group that Microsoft reached out to for feedback on its capabilities.

To now be able to collaborate with users who sit outside of Dynamics 365, but have them access the necessary information to help resolve things, is just mind-blowing. So many scenarios come to mind as to how this can really empower organisations!

Can you think of a way in which this could change things in your own organisation, or at a client? Drop a comment below – I’d love to hear more!

AI Translation for Omnichannel

How to start off this post? I’ve been trying to work out how exactly I can express my excitement around this new feature for Omnichannel. Included in the Wave 2 2020 release, it’s just AMAZING. That, however, doesn’t do it true justice. So let’s see how I can describe it properly to give it due respect.

Previously I’ve mentioned the ability to use skills within Omnichannel (see https://thecrm.ninja/omnichannel-for-dynamics-365-queues-users-skills/). This can be used to indicate, for example, agents who can communicate in a certain language. That’s useful of course, but what happens when you don’t have anyone who can speak the language that the customer wants to use? It’s a problem, and one that’s really not easily solved. At least, not until now.

So, what exactly does this new translation feature do? Simple – it translates from one language to another. OK, it’s actually a little more awesome than just that. Having delved into it quite a bit over the last week or so, there are (in my view) three main benefits (with a bonus one as well!):

  1. It translates incoming text from the customer (through chat) from the language that they’re using to the language that the agent is using
  2. It translates outgoing text from the agent (through chat) from the language that the agent is using to the language that the customer is using
  3. It translates text between agents from one language to the other & vice versa (eg on an internal consult)

Now for the bonus. It doesn’t just translate text from one language to another. It follows the languages being used! So if the customer switches mid-conversation to a different language, the system picks it up. Not only is the new incoming language translated into the agent’s language, but the replies from the agent are shown in the (new) language being used by the customer. It’ll automatically show text in the ‘last used’ language, which is really quite incredible (at least in my opinion).
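To show roughly what’s going on behind a feature like this (and to be clear, this is my own sketch, not how Omnichannel actually implements it), here’s how per-message language detection via the Azure Translator REST API could drive that ‘last used language’ behaviour. The key, region, and function names are placeholders:

```typescript
// Sketch only – NOT the actual Omnichannel implementation. It shows how
// per-message language detection via the Azure Translator REST API could drive
// a "reply in the last language the customer used" behaviour.

const translatorKey = "<your-translator-key>";
const translatorRegion = "<your-resource-region>";
const translatorUrl = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0";

let lastCustomerLanguage = "en"; // updated every time the customer sends a message

async function translate(text: string, to: string): Promise<{ text: string; detected: string }> {
  const response = await fetch(`${translatorUrl}&to=${to}`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": translatorKey,
      "Ocp-Apim-Subscription-Region": translatorRegion,
      "Content-Type": "application/json",
    },
    body: JSON.stringify([{ Text: text }]),
  });

  const [result] = await response.json();
  return {
    text: result.translations[0].text,
    detected: result.detectedLanguage?.language ?? to,
  };
}

// Incoming customer message: translate into the agent's language, and remember
// whichever language the customer actually wrote in.
async function onCustomerMessage(text: string, agentLanguage: string): Promise<string> {
  const { text: forAgent, detected } = await translate(text, agentLanguage);
  lastCustomerLanguage = detected;
  return forAgent;
}

// Outgoing agent message: translate into the customer's last-used language.
const onAgentMessage = (text: string) => translate(text, lastCustomerLanguage);
```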

There’s no fiddling around with agents needing to select the language, or anything else. It’s a simple click to turn it on, and then another click to turn it off. I’m going to go through the setup of it below, as there are a few fiddly bits that did confuse me for a bit.

It’s also possible to use different translation tools. At the time of writing this post, it’s possible to use Bing, Google or Azure translation models. I’m sure that there will be other options available in the future as well to use, which really opens up possibilities for clients with differing digital estates.
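The nice design consequence of this is that the translation provider behaves like a swappable dependency. Here’s a hypothetical sketch of that idea (the interface and class names are mine, not anything from the product):

```typescript
// Illustration of the design idea: the translation provider is effectively a
// swappable dependency, so clients can plug in whichever service suits their
// digital estate. Names here are hypothetical.

interface TranslationProvider {
  translate(text: string, toLanguage: string): Promise<string>;
}

class AzureTranslatorProvider implements TranslationProvider {
  constructor(private apiKey: string, private region: string) {}

  async translate(text: string, toLanguage: string): Promise<string> {
    // The actual REST call would go here (see the earlier sketch) – this stub
    // just echoes the text back.
    console.log(`Would translate to '${toLanguage}' using region '${this.region}'`);
    return text;
  }
}

// A Google- or Bing-backed provider would implement the same interface, and the
// rest of the solution wouldn't need to change.
```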

Translation happens in real time, so there’s no waiting around for it to actually get on with it. It’s displayed immediately on the screen for the agent to see.

Setup for translation

I found the general guides to be alright, but they weren’t too clear on a few items. I’m therefore sharing below how I went about it, in order to get things working properly. Please be aware that this isn’t in the order specified in the documentation, but in retrospect it means less switching between screens:

  1. Ensure that you have the latest updates to your Omnichannel environment (this is always a good idea, regardless of anything else!)
  2. Go to https://github.com/microsoft/Dynamics365-Apps-Samples/tree/master/customer-service/omnichannel/real-time-translation & download the ‘webResourceV2.js’ file there (if you’re unfamiliar with how to do this, click to open the file, click the ‘Raw’ button, and then save the page – ensure it’s got the ‘.js’ extension when you save it!)
  3. Ensure you have an API key to enter into the web resource file! This is what tripped me up at first. You can use any text editor (I use Notepad++) to open it up. How you get the API key will depend on the provider. For example, to set up a free account in Azure, take a look at https://docs.microsoft.com/en-us/azure/cognitive-services/translator/translator-how-to-signup. There are also some additional things that you can configure in the web resource file, but I’m not going to go into that here (I’ve included a quick way to sanity-check the key after these steps)
  4. Go to your solutions (this can either be through the Classic interface, or through http://make.powerapps.com). You can either create a new solution to hold the web resource file, or alternatively if you have existing solutions that you’d deploy, you can add the web resource file to that. Either:
    1. In the classic interface, navigate to Web Resources, click to create a new web resource, and upload the file (ensure you select the type to be ‘Script (JScript)’), or
    2. In the modern interface, click the ‘New’ button, select ‘Web Resource’ from the ‘Other’ section, and then follow the steps above

Once it’s saved, it’ll give you a URL. Copy that, and publish the solution.

  5. Go to the Omnichannel Administration Hub, find ‘Real Time Translation’ under Settings, and set this to Yes. You can also select a default input language from the selection. Also enter the URL that you copied above. Save it
  6. You’re all done!
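One extra tip relating to step 3: if you want to sanity-check that your translation API key actually works before pasting it into the web resource, a quick throwaway call against the Azure Translator REST API (sketched below in TypeScript; the key and region values are placeholders) will confirm it:

```typescript
// Quick sanity check that an Azure Translator key works, before pasting it into
// the web resource. The key and region values are placeholders.

const key = "<your-translator-key>";
const region = "<your-resource-region>";

fetch("https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=fr", {
  method: "POST",
  headers: {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
  },
  body: JSON.stringify([{ Text: "Hello" }]),
})
  .then((response) => response.json())
  .then((result) => console.log(JSON.stringify(result, null, 2)));
// A valid key returns a French translation; an invalid key returns an auth error.
```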

Agent Experience

Depending on how you’ve configured your web resource, auto translation will either be on by default, or off. If it’s not on by default, the agent can simply click within their chat window to select it to be active:

Once active, it’ll then start to translate everything, in both directions. Below are side-by-side screens of the customer & agent experiences. You’ll note that the customer is seeing the initial agent response in English, as the agent was the first in the conversation.

From the agent side of things, both the original text and the translation are shown. The customer is only shown the language that they’re actually using.

If the agent isn’t sure what language the customer is using (as it’s being auto-translated for them), they can hover over the text, and it’ll show the details for it:

If the agent consults with, or transfers the session to, another agent, the second agent will see the conversation in the language that they themselves are using (with the original text as well). This makes it possible to pass a customer to a specialist to assist them, even if they don’t speak the same language! It’s really cool to see this in action.

Even more wonderfully, this is stored right down to the transcript level:

This is really opening up major new concepts that Omnichannel can be used for, which will be supported entirely by this feature. As I said at the beginning of this post, I’m absolutely excited for it, and we’re already envisioning how this will be able to empower our clients even more.

Do you have any questions around this? Can you think of any scenarios that this could solve for you? Drop a comment below – I’d love to hear!

AI-900: Microsoft Azure AI Fundamentals

One of my recent decisions has been to explore the Azure space. There are several reasons behind this. CDS, as we (hopefully!) know, sits on top of Azure, and it’s useful to know the broader digital estate available on the platform.

I’ve also been looking into some of the Cognitive Services functions that are available within Power Platform. These all live in Azure, and are surfaced into Power Apps etc. It’s therefore good to know what can be done outside of the ‘Power Platform bubble’, and the options there.

Incidentally, a year ago I even built a canvas app that allowed you to take a picture of a motorbike tyre. Using AI Builder functionality, it then analysed whether the tyre tread was legal or not! That was a really cool proof of concept.

So a good place to start, I thought, would be with the AI-900. This covers the fundamentals of the AI offerings that are in Azure. I had forgotten though that with fundamentals exams, there’s only 60 minutes available! Seeing the timer ticking down from that gave me a little surprise, though I managed to get through it (& pass!) in good time.

The official description of the exam is:

Candidates for this exam should have foundational knowledge of machine learning (ML) and artificial intelligence (AI) concepts and related Microsoft Azure services.

This exam is an opportunity to demonstrate knowledge of common ML and AI workloads and how to implement them on Azure.

This exam is intended for candidates with both technical and non-technical backgrounds. Data science and software engineering experience are not required; however, some general programming knowledge or experience would be beneficial.

The official page for the exam is at https://docs.microsoft.com/en-us/learn/certifications/exams/ai-900, where it gives quite a good overview of things. Go take a look at it, and also take a look at the associated learning paths.

Once again, I sat the exam through the proctored option (i.e. from home). Honestly, I think that my experience this time has probably been the best so far. I went through the usual system checks for signing in. The proctor came along, and within 30 seconds they had released the exam!

So, as before, it’s not permitted to share any of the exam questions. This is in the rules/acceptance for taking the exam. I’ve therefore put together an overview of the sorts of questions that came up during my exam. (Note: exams are composed from question banks, so there could be many things that weren’t included in my exam, but could be included for someone else!) I’ve tried to group things together as best as possible for the different subject areas.

  • Image recognition types
    • What each one is, what it’s used for
    • When to use for a specific scenario
  • Facial recognition
    • Different types available
    • What each one is, what it’s used for, when to use for a specific scenario
    • Limitations & issues that can occur when using it
  • Text:
    • Different recognition types
    • What each one is, what it’s used for, when to use for a specific scenario
    • Analytics. How this works, how to set up & use
    • Translation. Different options available, how they work, when to use for a specific scenario
    • Sentiment analysis. How it works, limitations, what’s needed to train a model
  • QnA Maker
    • What this does, how to set it up, how to train it
    • Generating material with it
    • Use with chatbots
  • Machine Learning
    • What this actually is, and what it does
    • How it works
    • Different types that are available, how they work, how to train a model
    • Classification options
  • Machine Learning Designer
    • How to use & set up
    • Different types of data/options used within it
    • Training & evaluation models. The steps needed for this, how to set it up correctly
    • Types of modules available
    • Validation sets
  • Chatbots
    • What they are
    • How/where they can be used
    • Limitations
    • Integration with other systems
  • Charts
    • Different charts that are available for use
    • Reading them correctly
    • Model types shown on them
    • Metrics!
  • Microsoft AI Principles
    • The different principles that are published
    • What each one means/refers to

Overall, it was quite good. The Microsoft AI Principles were new to me, and I had to guess at those (I went to look them up afterwards!). Other than that, some bits I breezed through, other parts I took careful stock of.

This is definitely an area that I’m going to continue exploring, and will be writing up further exams that I take in it. I’m curious what your experience of it has been – please drop a comment below to let me know!