Last week I attended the KeyBanc Capital Markets Emerging Technology Summit. As a participant in their Mosaic industry leaders program, my role at the conference was to participate in one-on-one and group meetings with their institutional investor clients as a subject matter expert on machine learning and artificial intelligence.
Having spent a couple of packed days answering questions, from various angles, about the state of the AI market and those serving it, I thought I’d reflect on some of the key themes that arose from their questions, along with my take on each.
The rise of enterprise machine learning
Over the course of the past five years or so, enterprises have worked hard to deploy machine learning. Much of this work began in those parts of an organization with experience applying statistical analyses to core business challenges. These groups began experimenting with applying machine learning techniques in a few high-value areas, like upselling/cross-selling, site selection, lead scoring, or churn reduction.
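A use case like churn reduction is typically framed as a supervised classification problem. The sketch below, using scikit-learn on entirely synthetic data, shows the shape of such a model; the feature names (monthly spend, support tickets, tenure) are illustrative assumptions, not drawn from any real deployment.

```python
# Minimal churn-model sketch using scikit-learn (assumed available).
# All data is synthetic; column names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic features: monthly spend, support tickets, tenure in months.
X = np.column_stack([
    rng.normal(50, 15, n),    # monthly_spend
    rng.poisson(2, n),        # support_tickets
    rng.integers(1, 60, n),   # tenure_months
])

# Toy labeling rule: customers with many tickets and short tenure churn more.
churn_logit = 0.6 * X[:, 1] - 0.05 * X[:, 2]
churn_prob = 1.0 / (1.0 + np.exp(-churn_logit))
y = (rng.random(n) < churn_prob).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```

In practice the hard part is rarely the model itself but the pipelines feeding it — which is exactly the tooling investment described above.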
In the past couple of years, building on their early successes, these organizations focused on making ML more easily and broadly used in the enterprise. This involved building or expanding their data science teams and supporting them with end-to-end data pipelines and shared machine learning tools, platforms, and practices.
Enter conversational AI
More recently, early adopter enterprises have begun to explore “real AI.” These efforts have evolved very differently and from different parts of the organization. They often begin as C-suite initiatives and find a home under the stewardship of a chief digital officer or a vice president responsible for customer experience. Most commonly, the starting point for these efforts involves applying conversational AI to the challenge of customer sales and service via chatbots, personal assistant devices, or plain old email and telephone.
Unlike tech companies, which are also very active in the conversational space but shun pre-built platforms, enterprises tend to use commercially available platforms to support their chatbot and voice assistant efforts. This seems to be a rather fragmented space, with all the large IT players in the mix but also a good many specialty vendors. With most of these projects still in their infancy, the host organizations are learning as they go along and building the tools required to overcome the gaps inherent in today’s commercial offerings.
AI and the cloud
This came up in nearly every discussion. In a nutshell, applications and data are quickly moving to the cloud, and thus ML and AI will move to the cloud as well. The big cloud vendors (AWS, Microsoft, Google, and IBM) all have broadly similar offerings at each layer of the data stack. Thus, in these early days, most enterprises are choosing a cloud first based on broad criteria and making do with that cloud's data stack. This scenario tends to advantage AWS and, to a lesser extent, Microsoft. Highly technical buyers and enterprises making a decision based primarily on the data stack often go with Google. IBM shops and those looking for complete solutions often turn to that vendor.
Google is betting big on TensorFlow, seeding the market with it and trying to build the best cloud upon which to run it and the models it produces. The company is making a similar bet with Kubernetes. In both cases, the market recognizes the threat and is responding in ways that impede Google's ability to translate open source success into cloud dominance. Examples include AWS throwing its weight behind the Open Neural Network Exchange project (ONNX), which promises interoperability among deep learning frameworks, and EKS, Amazon's managed Kubernetes service.
AI chips and hardware
Another topic that came up often is the state of the AI acceleration market. Nvidia, with its GPUs, is the clear leader here, and it's naturally got everyone and their brother gunning for it. The company has created a pretty big moat with CUDA and its ecosystem, and all the major deep learning frameworks are built to take advantage of it. My thinking, though, is that the significance of this moat is reduced in a hyperscale (i.e., cloud-first) world. None of the cloud giants want to be beholden to a single source, and they're all building their own chips (e.g., Google's TPU) to reduce their dependency on Nvidia, lower costs, and increase performance. Nvidia knows this and has been running like hell to drive up performance per dollar.
Based on the hyperscale assumption, I’m somewhat bearish on the prospects of independent chip vendors like Graphcore, Cerebras, and Wave Computing as long-term, sustainable, independent companies. That said, if they can get their products to market quickly enough and carve out a niche, plenty of interesting exit opportunities remain.
Intel remains a wildcard in this space. It owns the broader server and datacenter CPU market, has great enterprise and hyperscale relationships, and generally brings tremendous scale and resources to the fight. It was slow off the starting blocks, though, and faces the classic innovator's dilemma at every turn. Yet it recognizes the threat to its business and is running hard (and acquiring) to catch up. It's looking to edge out Nvidia with projects like nGraph, which aspires to compile deep neural networks written in any framework to run efficiently on any hardware backend, including current and future CPUs, GPUs, and accelerators.
Winners and losers
Even when it wasn’t directly articulated, underlying all the investor questions was an ultimate desire to suss out the winners and losers in the shift to AI. While I wasn’t dispensing any investment advice, if you buy into an AI-everywhere and cloud-first vision of how this all plays out, there are some clear winners and losers.
Cloud vendors that can differentiate themselves and execute well enough to retain scale will win big in ML and AI. For startups in the ML/AI tools space, the path to a long-term, sustainable, independent company is a difficult one, as they're squeezed on one side by open source and on the other by the cloud vendors (who increasingly fund that open source themselves). But the market is still young, so, again, there's an opportunity to differentiate, create outsized customer value, and still win.
Even bigger potential winners are application providers — for example, SaaS vendors who are early to AI and build it deeply into their products. This will allow them to create outsized value by better serving their customers and, in the process, building superior proprietary data sources, a virtuous cycle indeed.