Are Sentiment Analysis Algorithms Accurate?


Reliably automating sentiment analysis online with artificial intelligence could totally change reputation management. Media monitoring providers know it, and are working to crack the code.

From a public relations professional’s perspective, if Google knows what I want to search, Amazon knows what I want to buy and Netflix knows what I want to watch, shouldn’t media monitoring platforms be able to automatically find the relevant news and online conversations I need to see?

Why do I have to spend so much time tweaking my Boolean filters and risk either choking them down too tight and missing good stories, or opening them up so wide I have to manually separate the wheat from the chaff?

It’s a fair question, and one that I answered in depth in the Media Monitoring Buyer’s Guide I released yesterday, which examines the capabilities of Burrelles, Critical Mention, Cision, Intrado, Meltwater, Muck Rack, PublicRelay, Signal, Talkwalker and TVEyes.

But recommendation engines provide a much narrower band of intelligence than what it would take to understand the entire world of natural language. Recommendation engines analyze specific patterns for a concrete outcome, like autocompleting a search term or making a product or viewing recommendation based on past behavior.

Recommendation engines also deal with structured data and produce concrete recommendations. News and social media, by contrast, are unstructured, drawn from a broad domain of sources with no consistent standards, and complicated by slang, sarcasm and emotion.

So while recommendation engines are no doubt impressive, they are no match for human beings when it comes to natural language understanding, which is a step above natural language processing.

The goal of traditional and social media monitoring for public relations is tracking message penetration. In other words, you have a message and you have targets. Rather than just counting keyword mentions in articles, the more important goal of media monitoring is determining whether your targets are repeating your messages.

Tracking message penetration is less about aggregating articles than it is about understanding and identifying the concepts and ideas being shared digitally. Unfortunately, this requires a level of common-sense reasoning that automated solutions can’t deliver. It requires artificial general intelligence (AGI), and what we have today are neural networks that can solve narrow AI problems, such as product recommendations.

AI can be used to compare apples to apples, but it can’t be used effectively just yet to compare apples to oranges, and this is what’s required to analyze concepts and ideas, which is the problem a media monitoring service is built to solve. 

These aren’t just my opinions, by the way; they are the opinions of the scientists and academics I interviewed for the report, who are leading the charge in natural language processing.

Debunking Fake News

I spoke with Jure Leskovec, the chief scientist at Pinterest — who is also a professor of computer science at Stanford — about why AI can’t solve the fake news problem. What he said is that in order to debunk fake news, “basically what you have to do is build a machine that knows all the truth in the world so that then you can say what is truthful and what is not.”

Essentially, he told me, “you have to build a machine that knows everything that’s the truth, because only then, when you understand everything that’s true, can you say whether something is fake.” And at this stage in the game, this is still a bridge too far for AI. 

In his book AI Superpowers, Kai-Fu Lee, the former president of Google China, says that building machines that can think as well as humans would require multi-domain learning, domain-independent learning, natural language processing, common-sense reasoning, planning and learning from a small number of examples. “Taking the next step to emotionally intelligent robots may require self-awareness, humor, love, empathy and an appreciation for beauty.” These, he argues, are the key hurdles that separate what AI does today, spotting correlations in data and making predictions, from what he calls artificial general intelligence.

So how in the world can you rely on artificial intelligence to determine relevance or analyze the sentiment of traditional and social media if you can’t build machines that understand natural language? 

At the same time, there are some impressive AI features available in media monitoring platforms, and if you reduce the number of results with Boolean filters before introducing AI algorithms, there is a lot you can do to parse and interpret the data. But if accuracy is mission critical, you’re going to need to allocate significant human resources to manually analyze the data and make sure the machines get it right.
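To make the workflow concrete, here is a minimal sketch of the approach described above: narrow the stream with a crude Boolean-style keyword filter first, then apply an automated sentiment scorer. The article text, the `Acme Corp` brand, and the toy word lexicon are all hypothetical, and the scorer is deliberately simplistic to show where such systems stumble, for instance on sarcasm.

```python
# Hypothetical monitoring results; the sarcastic second item is the hard case.
articles = [
    "Acme Corp posts record profits and praise from analysts",
    "Great, another Acme Corp outage. Just what customers wanted.",
    "Local bakery wins award",
]

def boolean_filter(text, must_include=("Acme Corp",)):
    """A crude AND filter: keep only items mentioning every required term."""
    return all(term.lower() in text.lower() for term in must_include)

# Toy sentiment lexicons (illustrative only).
POSITIVE = {"record", "profits", "praise", "great", "award", "wins"}
NEGATIVE = {"outage", "layoffs", "bankruptcy"}

def toy_sentiment(text):
    """Lexicon score: positive word count minus negative word count."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

for article in articles:
    if not boolean_filter(article):
        continue  # Boolean pre-filtering trims the stream before scoring.
    score = toy_sentiment(article)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(label, "|", article)
```

Note what happens to the sarcastic article: “Great” cancels out “outage,” so the scorer calls an obviously negative story neutral. That miss is exactly why accuracy-critical programs still route results through human analysts.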

A lot of organizations might think automated sentiment analysis is good enough, but when a crisis with severe economic and health consequences like the coronavirus pandemic hits, the ability to leverage media monitoring for business intelligence can be the difference between layoffs, bankruptcy and survival.
