Is anyone else watching what Qubic is doing with distributed compute and AI training? Seems underreported in AI circles

March 29, 2026

I follow AI infrastructure pretty closely, and Qubic keeps coming up in my research in a way I find interesting but haven't seen much discussion of in AI-focused communities.

Quick background for people who haven't heard of it: Qubic uses what they call Useful Proof of Work - instead of hardware solving random hash puzzles, the compute runs neural network training tasks for their Aigarth AI project. The same hardware contributes to AI training while securing the network.
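To make the idea concrete, here's a toy sketch of what "useful proof of work" means in principle. This is purely my own illustration, not Qubic's actual protocol - `train_step`, the difficulty check, and the hash commitment are all made-up placeholders for whatever the real system does:

```python
import hashlib
import random

def train_step(weights, sample):
    """Toy 'useful work': one step of random weight perturbation.
    Stands in for a real neural-network training task."""
    return [w + random.uniform(-0.01, 0.01) * sample for w in weights]

def solution_hash(weights, nonce):
    """Hash the training result so the network can verify the work was done."""
    data = ",".join(f"{w:.6f}" for w in weights) + str(nonce)
    return hashlib.sha256(data.encode()).hexdigest()

def mine_usefully(weights, difficulty_prefix="00"):
    """Classic PoW searches for a lucky hash and throws the work away;
    'useful' PoW interleaves a training task, so every iteration also
    advances the model, and the hash commits to that computation."""
    nonce = 0
    while True:
        weights = train_step(weights, sample=1.0)
        digest = solution_hash(weights, nonce)
        if digest.startswith(difficulty_prefix):
            return weights, nonce, digest
        nonce += 1

weights, nonce, digest = mine_usefully([0.0, 0.5, 1.0])
print(digest)
```

The point of the sketch is just the structural difference: in ordinary mining the loop body is wasted hashing, whereas here the loop body is a training step and the hash puzzle rides on top of it.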

The network was independently verified at 15.52 million transactions per second by CertiK on live mainnet. For context, that's faster than Visa's theoretical peak throughput. The architecture runs on bare metal without a virtual machine layer, which is apparently what enables that throughput.

They're also apparently launching a DOGE mining integration imminently (around April 1), where their infrastructure will run Dogecoin mining simultaneously with everything else - ASIC hardware handles DOGE's Scrypt mining in parallel with their CPU/GPU hardware running the other workloads.
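Mechanically, "same infrastructure, two workloads" is just concurrent scheduling of independent jobs. A toy sketch of that shape (again my own illustration - `scrypt_mine` and `train_batch` are placeholder names, not real APIs, and real ASICs and GPUs are separate devices, not Python threads):

```python
import threading
import time

results = {"doge_shares": 0, "train_steps": 0}

def scrypt_mine(stop):
    # Placeholder for the ASIC Scrypt hashing loop (DOGE side).
    while not stop.is_set():
        results["doge_shares"] += 1
        time.sleep(0.001)

def train_batch(stop):
    # Placeholder for the CPU/GPU neural-net training loop (Aigarth side).
    while not stop.is_set():
        results["train_steps"] += 1
        time.sleep(0.001)

stop = threading.Event()
workers = [threading.Thread(target=scrypt_mine, args=(stop,)),
           threading.Thread(target=train_batch, args=(stop,))]
for t in workers:
    t.start()
time.sleep(0.1)   # let both workloads run concurrently for a moment
stop.set()
for t in workers:
    t.join()
print(results)
```

The design point is that the two loops don't contend for the same silicon - Scrypt runs on ASICs and training on CPU/GPU - so running both should cost little beyond power and coordination.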

For comparison, people often bring up Bittensor, but from what I can see Bittensor is more about competing AIs and subnets rewarding each other rather than actually using the distributed compute to train models from scratch with raw hardware power. Qubic seems different in that the mining itself is the training.

Big companies are pouring billions into building massive data centers and training ever-bigger LLMs, but I don't think true AGI is gonna come just from scaling up these trained models, no matter how much money they throw at it.

My interest is specifically in the distributed AI compute angle. Is the model of mining-funded distributed AI training something that gets serious discussion in AI research circles? Or is it considered a fundamentally different category from serious AI infrastructure?

submitted by /u/srodland01