
7 Rules for Surviving the AI Hype Machine


People are losing it over AI. From analyst predictions to industry events to earnings calls, artificial intelligence and machine learning are all the rage. And dollars are flowing in response: the number of AI start-ups getting funding roughly quadrupled from 2012 to 2016 (from 160 deals to 658, according to CB Insights), and a recent Forrester study projects a 300% increase in corporate AI investment from 2016 to 2017. Yet we’ve seen this hype before, especially those of us who were on the front lines of AI back in the early days (or at least the mid-’90s!).

Yes, AI is super-exciting. Yes, machine learning (usually considered a subset of AI, though the terms are often conflated) promises to revolutionize the way we monitor, model, pattern-match, and interpret all the big and small data swirling around us. But no, it’s not all going to happen overnight. And (especially) no, many businesses and consumers still don’t have a clue about the best ways to select, apply, and monetize AI for practical, everyday tasks. The following rules should help in this regard, but are mainly envisioned as a (time-tested) sanity check for surviving the latest edition of the AI hype machine.

Rule #1: Scope Matters. Creating a general-purpose thinking machine is hard. Creating an intelligent agent (or bot) that automates a single or small set of everyday, repetitive, “standard” tasks is a lot more tractable. Just as the key to early AI – and the Knowledge Management movement that followed – was finding narrow but high value applications like streamlining problem resolution in call centers or processing loan applications, the same type of “think global, act local” approach applied to today’s AI is equally important. For the same reason, starting with small-ish data vs. super-large big data sets can make sense when applying analytical techniques to many non-scientific business applications (more on this next).

Rule #2: Machine Learning is Not Magic. And it’s not easy (for most) to get right the first time, either. Experimentation with a tool like RStudio is key, and there are many algorithms to choose from (Bayes, decision trees, regressions, and the once-again-popular neural network models, a.k.a. “deep learning”). Of course, training deep learning models can be both an art and a science. The good news, though, is that an excellent recent article by Andrew Beam shows that you don’t need Google-scale data to use deep learning. I saw this in some of my graduate work when I was training neural nets to do simple pattern recognition, and it’s great to see this type of small-data approach continuing to stand the test of time!
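To make the small-data point concrete, here is a minimal sketch (plain Python, no libraries) of the kind of simple pattern recognition a single perceptron — the most basic neural-network building block — can learn from just four training examples, the logical AND pattern. This is an illustration of the idea, not a recipe; a real project would reach for a library such as scikit-learn or TensorFlow.

```python
# A minimal perceptron trained on a tiny hand-made dataset,
# illustrating that useful pattern recognition does not require
# Google-scale data. Illustrative sketch only.

def train_perceptron(samples, labels, lr=0.1, epochs=50):
    """Learn weights and a bias with the classic perceptron update rule."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = y - pred  # 0 when correct; +/-1 when wrong
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

def predict(weights, bias, x):
    """Threshold the weighted sum to produce a 0/1 prediction."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Tiny "small data" training set: the logical AND pattern.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # -> [0, 0, 0, 1]
```

Four examples are enough here because the pattern is linearly separable; the broader point from Beam’s article is that even multi-layer networks can do useful work on modest datasets when the problem is scoped well (see Rule #1).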

“You don’t need Google-scale data to use deep learning.”

Rule #3: Data is King. Getting close to customers, understanding their journey, tailoring their experience, and selecting just the right offer are all outcomes enabled by insights powered by big and small data. Generating those insights in a timeframe and at a cost that make them readily available to front-line teams (and to consumers themselves) is where advanced analytics and techniques like deep learning need to go. But as mentioned above, what you get out is very much a function of what (data) you put in. Where will your training data come from? How will you prepare it? Who will test the performance? These questions are as important as which tool or algorithm you’ll use.
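The data-preparation and testing questions above can be sketched in code. The following is an illustrative, library-free Python example — the helper names `normalize` and `holdout_split` are invented for this sketch, not any particular API. Features are scaled as one common preparation step, and a slice of the data is held out so performance is measured on records the model never saw during training.

```python
import random

def normalize(rows):
    """Scale each feature column to [0, 1] -- one common preparation step."""
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    spans = [max(c) - mn or 1 for c, mn in zip(cols, mins)]
    return [tuple((v - mn) / sp for v, mn, sp in zip(r, mins, spans))
            for r in rows]

def holdout_split(rows, labels, test_frac=0.25, seed=0):
    """Reserve a held-out test set so performance is judged on unseen data."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([rows[i] for i in train], [labels[i] for i in train],
            [rows[i] for i in test], [labels[i] for i in test])

# Toy records standing in for real customer data.
X = [(1, 200), (2, 180), (8, 40), (9, 30), (3, 150), (7, 60), (2, 170), (8, 50)]
y = [0, 0, 1, 1, 0, 1, 0, 1]

Xn = normalize(X)
X_tr, y_tr, X_te, y_te = holdout_split(Xn, y)
print(len(X_tr), len(X_te))  # -> 6 2
```

The point of the sketch: "who will test the performance" has a mechanical answer (a held-out set) and an organizational one (someone other than the person who trained the model should own that evaluation).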

Rule #4: People Matter. Even as AI systems become more skilled at complex decision-making and take over some “back of house” functions previously performed by humans, we are a long way from creating virtual human beings, a point that Om Malik made in his excellent piece for The New Yorker last summer. That is why some of the best, most impactful use cases will continue to augment rather than replace human workers: think of the AI voice-analysis and feedback system from Boston-based Cogito, which gives real-time guidance to employees as they engage customers on the phone, or “davis,” the AI-powered virtual assistant for IT ops from APM pioneer Dynatrace.

Rule #5: Consumers Don’t Care About Your Technology. Data nerds want to know what flavor of machine learning you are using. If you are selling to them or other techies, then skip to Rule #6. For everyone else, take note that it’s more important to focus on the “why” than the “how” when selling the value of your AI initiative to internal or external stakeholders. Why is the problem interesting? Why is it hard to solve with traditional (non-AI) approaches? Why is this a repeatable, scalable solution rather than a one-off? Even more important: what unique value is AI providing to your initiative or app, and how will you show ROI going forward?

Rule #6: Embedding AI Drives Adoption. Back in the day, the old joke among AI researchers was that as soon as something in AI became successful, it wasn’t called AI anymore. Today, many successful AI-powered apps and services have AI “in them,” but the technology is not apparent to the end user. And that’s really the point: embedding AI drives adoption. Fortunately, there is a growing number of tools for adding AI, machine learning, and other intelligent capabilities. These include open-source development frameworks and engines like Apache Lucene (search/NLP), Apache Mahout (machine learning), Eclipse BIRT (developed by Actuate, now part of OpenText) for embedded analytics and visualization, and RapidMiner for machine learning; embedded-analytics specialists like Izenda and Sisense; developer platforms like IBM Watson APIs (conversation, speech, vision) and Microsoft Cognitive Services (decision rules, search, vision); and even custom hardware like Nvidia’s Jetson TX2 card.

Rule #7: Focus on Improving Everyday Work. Much of my research and writing over the past few years has focused on turning small- and big-data insights into everyday value. For marketers there are established use cases for data-driven marketing (see some of them in the piece I did for DMN a few years back), and there’s also a helpful framework from the folks at TopRight Partners for considering which marketing processes are most likely to be disrupted by AI. And for those looking at the bigger picture, there’s a very cool study (and poster!) from McKinsey on the overall potential of automation in the workplace, “Where machines could replace humans,” that is worth checking out.
