December 24, 2024

How your gaming PC will unleash a new wave of AI innovation

With data becoming the foundation of our economy, we must find ways to diversify the compute resources that harness its potential and reduce our dependence on just a handful of models and manufacturers. As competition for GPUs surges, that diversification will prove one of the best ways to support the AI revolution, ensuring we sustain innovation rather than rely on a few players in control of the market. Already, we’re seeing novel approaches that will allow your gaming PC to power new AI models.

Market concentration in the chip industry has led to shortages of the GPUs used to train these models, raising concern among CTOs and government policymakers alike. And while Big Tech has the resources and clout to secure what’s available, smaller businesses often delay projects as they struggle to power their models. That’s because GPUs are hard to contract if companies aren’t buying large quantities, and they’re expensive: training even the most basic large language models (LLMs) costs upwards of $600,000 to $1 million – an insurmountable price tag for many. Steering towards a more diversified landscape of chip use is not merely a pragmatic response to current challenges; it’s a proactive stance to future-proof our technological evolution and ensure the enduring vitality of the AI ecosystem.

Misguided solution

The popular solution to the supply crunch seems to be boosting output of ultra-advanced chips – particularly Nvidia’s powerful A100s and H100s – and having other tech giants manufacture similar components. While that’s good news for the biggest AI companies, it does little to reduce market concentration or decrease prices. And importantly, it fails to make AI acceleration hardware more accessible to smaller players. Massive orders for top-of-the-line GPUs decrease availability for other organizations aspiring to gain a foothold in AI training. They also allow the big tech firms to hold pricing power and dull the incentives that could otherwise drive crucial innovations in the space. And with ever-more-powerful GPUs being built, an attitude is emerging in which a company’s ability to secure the biggest, baddest and newest chips becomes a competitive advantage.

That thinking is misguided – or at least underexplored – as existing technologies and novel techniques offer a way to diversify the use of chips and allow startups to secure compute. Over the coming three to five years, we will see AI companies start working with a wider range of GPUs – from the highly advanced to the less powerful – a shift that will free up the market and unleash a new wave of innovation. This strategic pivot holds the promise of liberating the market from the grip of high-end exclusivity, heralding a more inclusive, dynamic, and resilient AI ecosystem primed for sustained growth and creativity.

Maturing space

The maturation of the AI space will drive much of this change, as we see more language models tailored to specific niches rather than one-size-fits-all LLMs such as those behind ChatGPT and Claude. This diversification not only addresses the unique demands of various industries and applications, but also marks a departure from the homogeneity that has characterized the AI landscape thus far. Developers will increasingly fine-tune their models with less powerful chips, motivating them to seek out consumer-grade GPUs that offer efficiency and accessibility. This departure from a reliance on high-end components democratizes access to computational resources, and spurs innovation by challenging the industry-wide assumption that only the most advanced chips can facilitate groundbreaking advances in AI.

To some extent, this is already taking place, as developers use efficient techniques like low-rank adaptation (LoRA) that reduce the number of trainable parameters in language models. They’re also parallelizing workloads, deploying clusters of, say, 100,000 less powerful chips to do the job of 10,000 H100s. These solutions could spark a wave of innovation away from the “bigger is better” arms race in the chip market – one marked by a focus on efficiency, collaboration, and inventive problem-solving.
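To make that concrete, here is a minimal sketch of the LoRA idea in PyTorch: the pretrained weight matrix is frozen, and only a small pair of low-rank matrices receives gradients. The class name, rank, and scaling values below are illustrative assumptions, not taken from any particular library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update (illustrative sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # Only these two small matrices are trained: rank * (in_features + out_features) values
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus a scaled low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# A 4096x4096 projection holds ~16.8M weights; the rank-8 adapter trains only ~65K of them.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable}")  # 65,536
```

Because only the adapter matrices receive gradients, optimizer state and memory requirements shrink dramatically, which is what lets this style of fine-tuning fit on far less powerful hardware.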

Meanwhile, existing technologies, including Kubernetes and open-source cloud infrastructure, will provide access to these less powerful chips. Individuals and organizations that own GPUs will be able to sell or rent their capacity on these networks, as we’re already starting to see with some projects. The intersection of technology and community-driven initiatives presents an opportunity to break down barriers, both economic and technological, fostering an environment where computational power is not confined to a select few but distributed widely across a diverse array of contributors.
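As a rough illustration of how work could be scheduled onto contributed hardware, the sketch below uses the official Kubernetes Python client to launch a pod that requests a single GPU. The pod name and container image are hypothetical placeholders; the nvidia.com/gpu resource key is how clusters running Nvidia's device plugin expose GPUs to workloads.

```python
from kubernetes import client, config

# Assumes a kubeconfig pointing at a cluster whose nodes expose GPUs
# through Nvidia's device plugin.
config.load_kube_config()
api = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="lora-finetune"),  # hypothetical pod name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="registry.example.com/lora-trainer:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # one GPU of whatever grade the node offers
                ),
            )
        ],
    ),
)

api.create_namespaced_pod(namespace="default", body=pod)
```

The scheduler does not care whether the GPU behind that request is a data-center accelerator or a consumer card someone has contributed, which is precisely what makes this kind of pooling attractive.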

Second wave

In the not-too-distant future, this market could expand even further, with owners of consumer-grade GPUs making idle capacity available to AI companies – especially as large players supercharge consumer GPUs that let AI run on everyday PCs and laptops at nearly one-third the cost of high-end models. Gaming GPUs could also accelerate innovation cycles: they are refreshed on a yearly cadence, which would allow AI training to adopt new architectural advancements more quickly than specialized enterprise hardware, which evolves more slowly.

Since many everyday devices have GPUs, this opens up a world of opportunity for people to monetize unused compute. Think blockchain miners pointing their GPUs at cloud markets when their projects move to proof of stake, or students doing the same with gaming PCs when they aren’t playing. In addition, smaller and more efficient AI models can run on personal devices: Gemini Nano already runs offline on Google’s Pixel 8 devices, making locally hosted AI models on mobile a reality.
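To give a sense of how accessible this has become, the sketch below loads a compact open model in half precision with Hugging Face's transformers library and generates text on whatever local GPU is available; roughly 6 GB of consumer VRAM is typically enough for a model of this size. The model ID is just one example of a small open model, chosen here for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # example of a compact (~2.7B-parameter) open model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision roughly halves memory requirements
    device_map="auto",          # places the model on the local GPU if one is present
)

prompt = "Explain why smaller AI models can run on consumer hardware:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```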

These developments could provide new sources of revenue for providers and additional GPU supply for startups. To be sure, none of this replaces the need for top-quality GPUs. But it will reduce market concentration, making businesses less dependent on any single company – or country – that produces the chips they need. We will have a mature, complex market in which GPUs of varying speeds and quality play critical roles in an array of AI projects. That will usher in a second wave of AI innovation that benefits everyone. As the AI landscape evolves, the fusion of consumer-grade GPUs with AI capabilities is set to unleash unprecedented opportunities for innovation and collaboration across industries, and to have profound economic impacts by distributing capability across a broader segment of society.

This article was produced as part of TechRadar Pro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
