Nvidia recently reported blockbuster revenue, and Elon Musk has predicted that human-level artificial intelligence will arrive next year. Major tech companies are scrambling to acquire AI chips. By all appearances, the AI boom is just beginning, and now is a prime time to get in.
However, there could be major letdowns ahead regarding both AI's capabilities and the profits it will yield for investors.
AI advancements are decelerating, and there are fewer practical applications for even the most advanced models than initially expected. Building and operating AI is extremely costly. While new, competing AI models are frequently emerging, it takes considerable time for them to significantly influence everyday work.
These issues raise questions about whether AI will end up commoditized, about its ability to generate revenue and profits, and about whether a new economy is really being born. They also suggest that AI spending is running ahead of reality, much as spending on fiber-optic cable in the late 1990s ran ahead of demand before the first dot-com bubble burst.
The pace of progress in artificial intelligence is slowing.
Most of the visible improvements in today's large language models, such as OpenAI's ChatGPT and Google's Gemini, including their writing and analysis abilities, have come from feeding them ever-larger amounts of data.
These models work by digesting huge volumes of text, and it's undeniable that so far, adding more data has yielded more capable models. But a major obstacle to continuing down this path is that companies have already trained their AIs on roughly the entire internet and are running out of additional data to acquire. There isn't another ten internets' worth of human-generated content for today's AIs to absorb.
To train next generation AIs, engineers are turning to “synthetic data,” which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models, says Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016.
Marcus says AIs such as ChatGPT improved rapidly in their early days, but advances over the past 14-and-a-half months have been marginal. In his view, the core capabilities of these systems have either plateaued or are improving more slowly than before.
Further evidence of the slowdown comes from research showing that performance gaps between AI models are shrinking. The top proprietary models are converging toward similar scores on capability benchmarks, and even free, open-source models such as those from Meta and Mistral are catching up.
AI may become a commodity.
When a technology matures, knowledge of how to build it becomes widespread. In the absence of breakthrough innovations, which grow increasingly rare, no company holds a lasting performance advantage. Meanwhile, companies hunt for efficiencies, and the focus shifts from leading on innovation to cutting costs. The most recent example of this transition is electric vehicles; now it appears to be happening in AI.
The potential commoditization of AI is one reason Anshu Sharma, CEO of data and AI-privacy startup Skyflow and a former vice president at Salesforce, believes the outlook for AI startups such as OpenAI and Anthropic could be bleak. He remains hopeful that big companies like Microsoft and Google can attract enough users to justify their AI investments, but doing so will require huge spending over a long period, something even well-funded AI startups, with their comparatively limited resources, may struggle to match.
This trend is already underway. Several AI startups have run into trouble: the co-founder and other staff of Inflection AI joined Microsoft in March, and the CEO of Stability AI, maker of the widely used image-generation tool Stable Diffusion, departed abruptly the same month. Many other AI startups, even well-funded ones, are reportedly exploring sales of their businesses.
Running today's AIs remains extremely expensive.
A frequently cited statistic in discussions about an AI bubble is a calculation by Sequoia, a venture capital firm in Silicon Valley, which indicates that in 2023, the industry spent $50 billion on Nvidia chips for AI training but generated only $3 billion in revenue.
The alarming part is the size of that gap, but the real determinant of the industry's staying power is the ongoing cost of running AI systems.
Precise figures are hard to come by and estimates vary widely, but the essential point is this: for a popular service built on generative AI, the cost of operation far exceeds the already enormous cost of initial training. That's because the AI must compute a fresh answer for every query, which takes far more resources than retrieving a conventional search result. For companies that depend heavily on advertising revenue, such as Google, which now adds AI-generated summaries to billions of search results, analysts predict that serving AI-generated answers will eat into profit margins.
In their latest financial reports, Google, Microsoft, and other companies noted an increase in revenue from cloud services, attributing it partially to these services supporting the AI operations of other businesses. However, the longevity of this revenue stream hinges on whether other companies and startups derive sufficient value from AI to justify ongoing investment in training and maintenance costs. This leads us to the crucial issue of adoption.
Applications remain narrow, and uptake is slow.
A recent study by Microsoft and LinkedIn found that 75% of knowledge workers now use AI on the job. And according to a survey by Ramp, a corporate expense-management firm, about one-third of companies pay for at least one AI tool, up from 21% a year earlier.
That still leaves a significant gap between the large pool of people experimenting with AI and the smaller group who depend on it enough to pay for it, for example, the $30 a month Microsoft charges for its AI Copilot subscription.
OpenAI doesn't disclose its annual revenue, but the Financial Times reported in December that it was at least $2 billion, and the company believes it can potentially double that figure by 2025.
Substantial as that estimate is, it falls short of justifying OpenAI's valuation of nearly $90 billion. After OpenAI's recent demonstration of its voice-powered capabilities, mobile subscriptions surged 22% in a single day, according to analytics firm Appfigures. That shows the company is adept at generating interest and attention, but it remains unclear how many of those users will stick around.
According to Peter Cappelli, a management professor at the University of Pennsylvania's Wharton School, the evidence suggests AI is not the productivity booster it has been made out to be. These systems can help people do their jobs, but they can't replace them outright, so they are unlikely to yield significant payroll savings for companies. Cappelli compares this to the slow rollout of self-driving trucks, noting that driving is only one part of a truck driver's job.
Consider the many obstacles to deploying AI in the workplace: AI systems still generate false information, so using them effectively requires expertise, and open-ended chatbots are not easy to optimize, demanding substantial training and adjustment time from workers.
One of the most significant hurdles to rapid AI adoption will be the challenge of shifting people's mindsets and behaviors. This pattern of resistance to change is consistently observed in the introduction of all new technologies.
This doesn't mean today's AI won't eventually transform various professions and industries. The concern is the current level of investment, from startups and major corporations alike, which appears premised on AI improving rapidly and being adopted at a breakneck pace, with enormous effects on our lives and the economy. A growing body of evidence suggests that may not happen as anticipated.