As artificial intelligence (AI) grows more powerful, the infrastructure required to run it will reach its limits, and those limits could open the door for decentralized physical infrastructure networks (DePINs), said Trevor Harries-Jones, director at the Render Network Foundation.
Speaking with TheStreet Roundtable host Jackson Hinkle, Harries-Jones said decentralized GPU networks are not aiming to replace traditional data centers, but rather to complement them by solving some of AI’s most pressing scaling challenges.
DePIN: A Complement to Centralized Infrastructure
In simple terms, a DePIN lets individuals worldwide share real-world infrastructure in return for rewards, removing both dependence on and control by any single centralized company.
One such project is the Render Network, a decentralized GPU rendering platform designed to democratize the digital creation process and free creators from the constraints of centralized entities.
Hinkle pointed to recent examples from the centralized AI world, including OpenAI’s release of the video-generation app Sora, whose usage had to be capped because of GPU constraints. He asked whether decentralized models could eventually overtake centralized data centers.
Harries-Jones pushed back on the idea of an outright replacement.
“I don’t think it’s a question of replacing. I actually think it’s a question of utilization of both.”
Centralized GPU clusters remain critical for training large AI models, which benefit from massive memory pools and tightly integrated hardware. However, Harries-Jones noted that training represents only a fraction of the total computational workload in AI.
Harries-Jones explained that inference, the process of running trained AI models to produce outputs, accounts for almost 80% of GPU work. That split is where decentralized networks like Render become relevant. While early versions of AI models are resource-intensive, Harries-Jones said they quickly become more efficient as engineers optimize and compress them. Over time, models that once required massive infrastructure can run on far simpler devices, such as smartphones.
"So we tend to see this on all models that come out. They start being really heavy and unrefined, and over a very short period, they get refined so that they can run on decentralized, simple devices."
From a cost perspective, this shift makes decentralized GPU networks increasingly attractive, Harries-Jones argued. Instead of relying solely on expensive, high-end data centers, inference workloads can be distributed across idle GPUs globally.
"It's going to be cheaper to run them on decentralized idle consumer nodes than on centralized nodes."
Bullish Outlook for the DePIN Sector
Harries-Jones framed DePINs as a way to alleviate growing AI bottlenecks across both compute and energy infrastructure. When centralized power systems face strain, decentralized compute can relieve the pressure by tapping underutilized resources globally.
“So I'm very bullish on the sector as a whole.”
Harries-Jones underlined that global GPU demand significantly outstrips supply: “There aren’t enough GPUs in the world today.” The key, he argued, is to put idle GPUs everywhere to work rather than compete for undersupplied high-end hardware.
According to Harries-Jones, the future of AI infrastructure is neither exclusively centralized networks nor exclusively DePINs, but rather a flexible integration of both to meet the explosive demand for AI compute.