Generally, emerging technologies that prove valuable enough to become popular tend to decentralize at the earliest opportunity: the print bureau gave way to the home printer, the processing lab to the smartphone camera, and the mainframe to the personal computer. The phase before this consumerization is marked by business services gatekeeping the new technology and meting it out to consumer demand in small and increasingly profitable measures as hardware costs fall, until those costs fall far enough to diffuse the technology and kill the 'gatekeeper' business model.
The explosion of data storage and processing needs over the last twenty years has not only prevented this decentralization in the business information services sector, but has, according to some sources, practically eradicated the in-house data center [1] in favor of the cloud.
Now, under pressure from the latest iteration of the AI hype machine, the explosive interest in machine learning and intelligent process automation once again raises the question of where the market for a hot new technology will ultimately settle: in external provisioning, or in-house.
In the case of machine learning, the core commercial model differs markedly from general cloud provisioning: the data is still voluminous, but the enabling technologies are free, relatively lightweight, and practically immune to the forces of market capture.
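To illustrate that point, here is a minimal sketch of what 'free and relatively lightweight' means in practice: an entirely open-source stack can train a working model on an ordinary laptop CPU in seconds. The choice of scikit-learn, the bundled digits dataset, and the random-forest model are illustrative assumptions on our part, not tools named in this article.

```python
# Illustrative only: the enabling ML technologies are open source and
# run on commodity hardware, with no paid licence or specialized chip required.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small bundled dataset (8x8 handwritten digit images).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train an off-the-shelf classifier on a laptop-class CPU.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split.
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

The heavy costs arrive only when the data volumes and model sizes grow, which is exactly where the in-house versus external-provisioning question becomes interesting.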
However, the critical differentiator is the current need for expensive computing resources for model development and updating, a factor that seems likely to change as lighter-weight machine learning frameworks such as Apple's Core ML evolve, new players enter the AI ASIC market, and Nvidia's proposed acquisition of Arm holds out the prospect of democratizing access to GPU-accelerated AI.
In this article, we'll take a look at some of the current indicators that favor an in-house or platform-based approach to enterprise AI development and deployment, with an eye to emerging trends.