Omnichannel & Sentiment Analysis (II)

I’ve previously touched on sentiment analysis within Omnichannel in several articles (https://thecrm.ninja/omnichannel-sentiment-analysis/ and https://thecrm.ninja/omnichannel-supervisor-tools/). It’s a really great feature that allows agents to quickly & easily see how the customer is feeling during an interaction, and allows supervisors to see at a glance how interactions are going overall.

With all of that, I thought it would be helpful to take a further look into how sentiment analysis actually works, so that we can understand it a little better.

Now, the actual nuts & bolts for sentiment analysis are provided by Azure Cognitive Services. There’s a wide range of tools available through this, but we have no need to go into Azure to configure anything. It’s a simple setting within Omnichannel to get it working, rather than needing to fiddle around with many different things.

However, what’s actually going on during a conversation, and how is the sentiment analysis worked out/calculated? We see the pretty little face icons (with the different colours), but how are these actually being set?

Well, there are two ways in which algorithms are used to calculate the sentiment that’s shown:

  • Natural language processing (NLP)
  • Machine learning (ML) algorithms

With these two methods, it’s possible to not only see what the current interactions are showing, but also to enhance the model to understand sentiment better.
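
To make this a little more concrete: we never call these services ourselves (Omnichannel wires it all up for us), but the same sentiment capability is exposed directly through Azure. Here’s a minimal sketch in Python, using the azure-ai-textanalytics SDK, of what an equivalent direct call looks like; the endpoint and key are placeholders for your own Cognitive Services resource:

```python
# pip install azure-ai-textanalytics
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholders - substitute your own Cognitive Services resource details
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"

client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

messages = [
    "I've been waiting 40 minutes and still nothing has happened!",
    "Thanks, that's fixed my problem perfectly.",
]

# One result per document, each with an overall label (positive / neutral /
# negative) and a confidence score for each of the three classes
for doc in client.analyze_sentiment(documents=messages):
    print(doc.sentiment, doc.confidence_scores)
```

Those per-class confidence scores are essentially the raw material behind the face icons that agents & supervisors see.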

Note: In a session that I presented recently, one of the attendees asked if it’s possible to train the model, to result in a custom algorithm. Unfortunately this isn’t possible; the machine learning that takes place is Azure’s general model, rather than one trained for a single company or customer.

The following diagram shows the sentiments that are used. They’re nicely colour-coded, for ease of reference as well:

When a customer interacts through Omnichannel, the sentiment shown is based on the last 6 messages received from the customer. As a result, the sentiment shown can very well fluctuate during the conversation, based on how it’s going.
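
Microsoft hasn’t published the exact aggregation logic, but a simple way to picture a rolling ‘last 6 messages’ window is the sketch below; the numeric scores, the averaging, and the thresholds are all purely hypothetical, just to show how the displayed sentiment slides as new messages arrive:

```python
from collections import deque

WINDOW_SIZE = 6                       # the last 6 customer messages
recent_scores = deque(maxlen=WINDOW_SIZE)

def label_for(average: float) -> str:
    # Hypothetical thresholds, purely for demonstration
    if average > 0.6:
        return "Very positive"
    if average > 0.2:
        return "Positive"
    if average > -0.2:
        return "Neutral"
    if average > -0.6:
        return "Negative"
    return "Very negative"

def on_customer_message(score: float) -> str:
    recent_scores.append(score)       # the oldest message drops off automatically
    return label_for(sum(recent_scores) / len(recent_scores))

# Watch the displayed sentiment shift as the conversation turns sour
for s in [0.8, 0.7, -0.5, -0.9, -0.8, -0.7, -0.6]:
    print(on_customer_message(s))
```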

Obviously, customers aren’t just going to use English to communicate. Companies are based around the world, and will use their native/local language when providing support. Omnichannel handles this without an issue, utilising the Azure Text Translator API behind the scenes. If you’re interested in seeing which languages are supported, head to https://docs.microsoft.com/en-us/azure/cognitive-services/translator/language-support for the latest list.
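
For the curious, here’s a minimal sketch of what a detect-and-translate call to the Translator v3 REST API looks like; the key and region values are placeholders for your own Translator resource:

```python
# pip install requests
import requests, uuid

# Placeholders - substitute your own Translator resource key and region
key, region = "<your-key>", "<your-region>"

def translate_to_english(text: str) -> dict:
    # With no 'from' parameter, the Translator v3 API detects the source
    # language automatically, then translates into English
    resp = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "to": "en"},
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Ocp-Apim-Subscription-Region": region,
            "Content-Type": "application/json",
            "X-ClientTraceId": str(uuid.uuid4()),
        },
        json=[{"text": text}],
    )
    return resp.json()[0]

result = translate_to_english("Je suis très mécontent du service")
print(result["detectedLanguage"]["language"])   # e.g. 'fr'
print(result["translations"][0]["text"])        # the English text that gets scored
```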

There are some interesting things to know around how this actually works:

  • When a language other than English is used, the Text Translator API translates the text to English, and it’s then analysed & scored for sentiment
  • If a language isn’t supported by the Text Translator API, it won’t be scored
  • If profanity (e.g. a swear word) is detected, the sentiment will automatically be shown as Negative or Very Negative, regardless of the rest of the last 6 lines of conversation (there’s a sketch of this overall flow just below)
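
Putting those three rules together, the overall flow is roughly as follows. To be clear, this is my own illustrative sketch rather than Microsoft’s actual implementation; the word list, language set, and both helper functions are hypothetical stand-ins:

```python
PROFANITY = {"damn", "bloody"}                   # hypothetical word list
TRANSLATOR_LANGUAGES = {"en", "fr", "de", "es"}  # stand-in for the real supported set

def translate(text: str) -> str:
    # Stand-in for the Translator call sketched earlier
    return text

def run_sentiment_model(text: str) -> str:
    # Stand-in for the normal NLP/ML sentiment scoring
    return "Neutral"

def score_line(text: str, detected_lang: str) -> str | None:
    if detected_lang not in TRANSLATOR_LANGUAGES:
        return None                              # unsupported language: not scored
    english = text if detected_lang == "en" else translate(text)
    if any(word in PROFANITY for word in english.lower().split()):
        return "Negative"                        # profanity overrides the last-6 window
    return run_sentiment_model(english)

print(score_line("This is damn slow", "en"))     # Negative
print(score_line("Alles ist in Ordnung", "de"))  # Neutral
print(score_line("Sawubona", "zu"))              # None (not scored)
```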

Some people have expressed concern to me about how accurate the Azure translation actually is, but to date I haven’t seen any major issues resulting from it. As with the other Azure services, Microsoft is continually refining & improving it. That being said, there are several languages with very nuanced terms; I’d like to think that these would be supported without issues.

There is, however, somewhat of an interesting behaviour when starting off the analysis at the beginning of the conversation:

  • If the initial language is detected as English, it’s assumed that all of the subsequent conversation will be in English. As a result, if the customer switches away from English, the system won’t recognise this, and a Neutral sentiment score will be shown
  • If the initial conversation is not in English, then the system will check every conversation line & re-detect the language as necessary.
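
Expressed as code, the observed behaviour is something like the sketch below; detect_language is a hypothetical stand-in for the real detection step:

```python
def detect_language(line: str) -> str:
    # Hypothetical stand-in for the real detection call
    return "fr" if any(ch in "éèêàùç" for ch in line) else "en"

def languages_for(lines: list[str]) -> list[str]:
    # Observed behaviour: an English first line locks the whole conversation
    # to English, so a later switch of language goes unrecognised (and just
    # surfaces as a Neutral sentiment)
    if detect_language(lines[0]) == "en":
        return ["en"] * len(lines)
    # A non-English start means every line gets re-detected individually
    return [detect_language(line) for line in lines]

print(languages_for(["Hello", "Je suis mécontent"]))   # ['en', 'en']
print(languages_for(["Où est ma commande?", "Hello"])) # ['fr', 'en']
```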

This seems somewhat strange to me, as I’d have thought that the system would automatically check the language for each conversation line. I can think of plenty of scenarios where different languages are used in a single conversation, even if it does start with English being used. I’d like to think that this will be updated at some point, to make the experience better.
