Conversations occurring via social media channels present a valuable listening opportunity for organizations. Customers reward and punish brands, products and services by sharing their experiences on Yelp, Twitter, Charity Navigator and Facebook. But with so many voices and so few ears, is offloading the analysis of those conversations to computers feasible or practical?
Toyota’s recent decision to halt manufacturing for eight of its models underscores the difficulties and complexities involved in surfacing meaningful business intelligence from customer data. According to the National Highway Traffic Safety Administration, which conducted six separate investigations, there were no defects other than unsecured floor mats. Despite numerous customer complaints filed with the regulator, the agency was unable to connect the dots and identify a real problem, partly because the various complaints had been tagged with different keywords.
These are the same types of shortcomings that cause US national security agencies to miss terror plots. Yet the business community is rife with vendors promising listening solutions that can gauge customer sentiment from social media conversations, automating the process of listening and data mining. But in a world where CAPTCHA codes are required to distinguish people from computers, is it realistic for organizations to rely on computers to analyze the sentiment of these conversations online?
I spoke with Rob Key of Converseon about this topic at the PRSA International Conference in San Diego, and according to him, we’re still a good 10 years away from accurate sentiment analysis. In his description of the seven layers of data analysis, Rob says human analysis is key because computers struggle with sarcasm, neologisms, images, and implicit versus explicit information. He went on to suggest that any sentiment analysis vendor promising 90% accuracy should be disqualified from the group of listening platforms you may be considering.
Public relations measurement specialist Katie Paine, who has been featured on my podcast “On the Record… Online,” prescribes a listening method based on the concept of cost per message communicated (CPMC), a metric derived from outputs, outtakes and outcomes. But even those outputs must be ranked by people to assess positive versus neutral versus negative messages, a process that seems to me excessively error-prone.
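As a rough sketch of the arithmetic behind CPMC, assuming the metric is simply total program cost divided by the number of key messages communicated (the figures and the counting rule here are hypothetical, not Katie Paine’s published formula):

```python
# Minimal sketch, assuming CPMC = total program cost / messages communicated.
# All numbers below are hypothetical examples.
def cpmc(total_cost: float, messages_communicated: int) -> float:
    """Cost per message communicated, in the same currency as total_cost."""
    if messages_communicated <= 0:
        raise ValueError("need at least one communicated message")
    return total_cost / messages_communicated

# Hypothetical campaign: $25,000 spend; human coders identify 500
# key-message appearances across outputs, outtakes and outcomes.
print(cpmc(25_000, 500))  # 50.0, i.e. $50 per message communicated
```

The denominator is the hard part: as noted above, deciding which mentions count as the message being communicated, and whether they skew positive, neutral or negative, still requires human judgment.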
While these models may be a significant improvement over measuring advertising equivalency, sentiment analysis has, perhaps, just as many blind spots. Mark Weiner of PRIME Research, whom I have also interviewed for my podcast, says ad equivalency may be meaningless as an absolute measure, but as a relative measure of progress it does have value for some organizations.
In our new book on business-to-business social media communications, Paul Gillin and I will address metrics and return on investment through listening, and sentiment analysis is one issue I’m skeptical of. Here are the questions we’ll be asking:
1. How are practitioners determining return on investment?
2. What are the strengths and weaknesses of sentiment analysis?
3. How should objectives be mapped to measurement programs?
4. How frequently should the program be tweaked?
5. What are the best return on investment methodologies?
6. How can organizations rely on technology to listen, particularly when the number of voices significantly outweighs the number of ears?
Are there other questions we should be asking too? If you have specific ideas or case studies that you would like to share with Paul and me for the book, please leave them here. As I mentioned, we are specifically interested in business-to-business applications of social media, of which listening is, of course, integral.