February 12-13, 2019
As marketers and market researchers, we struggle to make sense of the vast amounts of data available to us. In particular, analyzing large volumes of text data in a meaningful and useful way poses many challenges. Text data from surveys, social media, call centers, reviews and so on is plentiful but very time consuming to process with traditional, manual techniques.
We needn’t worry though because AI is going to save us, right? Well yes and no.
At Digital Taxonomy, we’ve been looking at this problem in detail for nearly two years.
We have experimented with lots of approaches: text analytics, NLP, pattern matching, machine learning, deep learning and good old-fashioned manual human coding.
We’ve been busy!
Along the way, we’ve learned that each of these techniques has its own strengths and weaknesses. We’ve also learned that treating these techniques as mutually exclusive is a mistake. The secret to cracking this problem lies in melding them, allowing each to play to its strengths and complement the others.
With the aid of several client case studies, we will illustrate how taking a blended approach can make light work of quantifying large amounts of text data. We will show the contribution of each technique on its own and how they complement each other when combined.
We will show how this blended, automated approach can drastically reduce the amount of manual labor required while producing a higher-quality result than a purely automated approach.
We’ll conclude with a brief description of an ambitious new project we are working on called MetaCode and additional thoughts on how we see this technology evolving over the coming years, what this will mean for our industry and what we can all do to tap into the power of this new technology.
Tuesday | 10:00-10:30 | Room 2