FAQs
Straightforward answers to general queries.
Q: What is opGPU’s mission, and what are we working towards?
opGPU is a decentralized GPU network designed to provide unlimited computational power for AI and machine learning (ML) applications. Our mission is to democratize access to computing resources by leveraging the power of 1 million+ GPUs sourced from independent data centers, crypto miners, and blockchain projects like Filecoin and Render. We aim to make computing more scalable, accessible, and efficient for the next generation of AI, Web3, and decentralized applications.
Q: How big is the GPU shortage, and how is opGPU solving it?
Current cloud providers offer around 10-15 exaFLOPS of GPU compute capacity. However, as AI/ML model training and inferencing workloads surge, demand for GPU compute in the cloud could reach 20-25 exaFLOPS—leaving a 5-10 exaFLOPS gap in capacity. This growing shortage means that cloud GPU resources need to expand 2-3x over the next few years.
At opGPU, we tackle this challenge by tapping into underutilized GPU sources outside of traditional cloud providers, including:
Independent Data Centers: Thousands of data centers with utilization rates as low as 12-18% in the U.S. alone.
Crypto Miners: With the Ethereum shift to Proof-of-Stake, many miners have excess GPUs that can now be repurposed in our decentralized network.
Consumer GPUs: 90% of the global GPU supply is in the hands of consumers, often lying dormant in homes or small-scale farms.
Together, these resources could provide an additional 200 exaFLOPS of GPU capacity, bridging the gap and meeting the growing demand for compute.
Q: How is opGPU different from AWS?
opGPU takes a fundamentally different approach to cloud computing. While AWS operates on a centralized, permission-driven model, opGPU utilizes a decentralized, distributed system that gives users greater control and flexibility. This model is cost-efficient, faster, and eliminates the need for complex permission structures, allowing users to access GPU resources quickly and easily.
Q: How and why is opGPU cheaper and faster than other providers like AWS?
opGPU is significantly more affordable and faster than traditional cloud services. By utilizing underused GPU resources from independent data centers, crypto miners, and consumers, we’re able to offer compute for up to 90% less than traditional cloud providers.
Speed is another key advantage: Traditional cloud providers like AWS can take weeks to provision GPU resources, often requiring detailed KYC information, long-term contracts, and waiting lists for hardware. opGPU removes these barriers, allowing users to access GPU clusters in less than 90 seconds, making us 10-20x more efficient than AWS.
Q: What is a DePIN, and how does opGPU fit in?
A DePIN (Decentralized Physical Infrastructure Network) combines blockchain, IoT, and the Web3 ecosystem to create and manage physical infrastructure. opGPU is the first GPU-based DePIN, optimized for machine learning but adaptable for any GPU use case. Our network offers decentralized GPU compute where traditional models fall short, providing access to resources that are secure, flexible, and scalable.
Q: What type of GPUs does opGPU offer?
We offer a diverse selection of GPUs, including:
NVIDIA RTX Series
AMD Radeon Series
In addition, we provide CPUs from Intel and AMD, as well as the Apple M2 chip with its Neural Engine.
Our minimum hardware requirements are:
12 GB+ RAM
500 GB+ Free Disk Space
Internet Speed: Download 500+ Mbps, Upload 250+ Mbps, with <30ms ping
Check our pricing page for the full list of supported hardware and contact support if your hardware isn’t listed.
Why is opGPU needed for machine learning?
Built on Ray.io, the same framework used by OpenAI to train GPT-3, opGPU is tailored for distributed machine learning (ML). Whether it's for reinforcement learning, deep learning, hyperparameter tuning, or model serving, opGPU provides a global GPU network optimized for ML workloads. We support the major ML frameworks and platforms, including Anyscale, PyTorch, TensorFlow, and Predibase, allowing for seamless workload distribution across our extensive GPU grid.
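As a rough illustration of what that looks like in practice, here is a minimal Ray sketch that fans a GPU task out across a cluster. The cluster address, task body, and GPU counts are placeholders, not opGPU-specific values.

```python
import ray

# Connect to an existing Ray cluster; the address is a placeholder for
# whatever endpoint your opGPU cluster exposes.
ray.init(address="ray://<cluster-head-address>:10001")

@ray.remote(num_gpus=1)
def train_shard(shard_id: int) -> str:
    # Stand-in for a real training step that runs on one GPU worker.
    return f"shard {shard_id} done"

# Fan the work out across the cluster's GPUs and collect the results.
results = ray.get([train_shard.remote(i) for i in range(8)])
print(results)
```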
Who are opGPU’s target customers?
Our target customers include anyone looking to develop or deploy an ML model or AI application. As no-code platforms like Predibase and model creation tools like Hugging Face explode in popularity, we see a massive potential customer base for opGPU in the coming years.
How do you manage availability and allocation across the global network of GPUs?
opGPU connects a global network of clients and suppliers, using our smart algorithm to efficiently match available GPU resources with user needs. Our system monitors device availability in real-time, allowing us to deploy fully-configured GPU clusters in under 90 seconds. The result is a highly efficient, responsive, and reliable network for AI and ML workloads.
What is the connectivity requirement for suppliers?
While the minimum connectivity requirement for suppliers is 250 Mbps, we recommend 1 Gbps download and upload speeds for optimal performance and to remain competitive in the network. We expect average data traffic to be around 5 GB/hour.
Can I customize cluster creation?
Yes, opGPU offers unmatched flexibility in cluster creation. You can select:
Cluster type (use case-specific)
Sustainability options (e.g., Green GPUs powered by clean energy)
Geographic location
Security compliance (SOC2, HIPAA, end-to-end encryption)
Connectivity tier
Clusters can be deployed with zero setup through our intuitive out-of-the-box configuration.
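For illustration only, here is a hypothetical cluster-creation request showing how those options might be expressed programmatically; the endpoint, authentication, and field names are assumptions rather than the documented opGPU API.

```python
import requests

# Hypothetical cluster-creation payload; field names are illustrative only.
payload = {
    "cluster_type": "ml-training",     # use case-specific cluster type
    "gpu_count": 16,
    "sustainability": "green",         # Green GPUs powered by clean energy
    "region": "eu-west",               # geographic location
    "compliance": ["SOC2", "HIPAA"],   # security compliance requirements
    "connectivity_tier": "premium",    # connectivity tier
}

# Placeholder endpoint and token; substitute the real API details.
response = requests.post(
    "https://api.example.com/v1/clusters",
    json=payload,
    headers={"Authorization": "Bearer <API_KEY>"},
    timeout=30,
)
print(response.status_code, response.json())
```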
Explain the pricing model. Do pricing tiers differ based on GPU model/performance?
opGPU’s pricing model is dynamic and based on supply and demand. Factors like GPU specs, internet speed, security certifications, and hardware types (enterprise-grade vs. consumer-grade) impact the final cost. High-performance, SOC2-compliant GPUs with faster connectivity are priced higher than standard consumer-grade GPUs.
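As a simplified sketch of how a supply-and-demand price could combine those factors, consider the illustrative calculation below; the base rate and multipliers are made up for the example and are not opGPU's actual pricing formula.

```python
def estimate_hourly_price(base_rate: float, utilization: float,
                          soc2_compliant: bool, enterprise_grade: bool,
                          bandwidth_gbps: float) -> float:
    """Illustrative dynamic price: base rate scaled by demand and hardware tier."""
    # Demand pressure: higher network utilization pushes the price up.
    demand_multiplier = 1.0 + utilization  # utilization in [0, 1]
    # Premium factors for compliance, hardware class, and connectivity.
    compliance_multiplier = 1.2 if soc2_compliant else 1.0
    hardware_multiplier = 1.5 if enterprise_grade else 1.0
    connectivity_multiplier = 1.1 if bandwidth_gbps >= 1.0 else 1.0
    return (base_rate * demand_multiplier * compliance_multiplier
            * hardware_multiplier * connectivity_multiplier)

# Example: consumer-grade GPU at 40% network utilization on a 1 Gbps link.
print(round(estimate_hourly_price(0.30, 0.40, False, False, 1.0), 3))
```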
What’s the maximum number of GPUs allowed in a single cluster?
The only limitation on cluster size is the total available GPU supply in the network. Whether you need 10 or 10,000 GPUs, we’ve got you covered.
How long does it take to create a cluster of GPUs?
It takes less than 90 seconds to create a GPU cluster with opGPU.
Can I adjust the number of GPUs in my cluster as my requirements change?
Yes, clusters are fully adjustable. You can scale up or down manually or use auto-scaling features to meet your workload needs in real-time.
What is the minimum and maximum duration for GPU cluster rentals?
Rent GPU clusters for as little as 1 hour or for as long as you need, with no time limit.
Does the Docker container launch with the --privileged flag?
No. For enhanced security, our Docker containers are not launched with the --privileged flag.
Why do we mount the Docker socket while starting containers?
We mount the Docker socket to orchestrate and manage GPU compute on each worker node. This ensures the stability and integrity of the environment and allows us to effectively manage container states across the system.
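For context, here is a minimal sketch, using the Docker SDK for Python, of how a worker container can be given GPU access and the Docker socket without the --privileged flag; the image name and mount paths are illustrative, not opGPU's actual worker configuration.

```python
import docker

client = docker.from_env()

# Start a worker container with GPU access and the Docker socket mounted,
# without --privileged (privileged defaults to False). The image name is a
# placeholder for whatever worker image the node runs.
container = client.containers.run(
    "example/opgpu-worker:latest",
    detach=True,
    privileged=False,
    volumes={"/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"}},
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
)
print(container.id)
```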
What are the staking tiers and benefits?
Bronze: stake 10k $OGPN for 5% rev share and early queue access
Silver: stake 25k $OGPN for 10% rev share and priority execution
Gold: stake 50k $OGPN for 15% rev share and access to premium GPUs
Platinum: stake 100k $OGPN for 20% rev share (the maximum), plus DAO voting
🌐 Nodes & Providers
How can I become a GPU provider?
You can register your hardware (e.g., RTX 3090, A100) via the node onboarding portal. You’ll need to:
Stake a small amount of $OGPN
Run the node software
Pass basic uptime & performance checks
How do providers get paid?
GPU nodes are paid directly from the user’s job payment. Everything is settled on-chain, with no middlemen. Higher-performing nodes get better jobs and better rates.
What happens if a node goes offline or misbehaves?
Bad nodes are penalized via the slashing mechanism — they lose their staked $OGPN and get temporarily banned from job selection.
🔐 Trust & Transparency
Is opGPU secure and audited?
Yes. All smart contracts are either audited or undergoing audits. Payments, refunds, staking, and job validation are all verifiable and recorded on-chain.
Can I see the logs or receipts of my jobs?
Every job comes with an on-chain receipt including:
Input/output hashes
Runtime duration
Logs
Node address
Payment details
These receipts are stored via IPFS/Filecoin and linked in your dashboard.
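To make that concrete, here is a rough sketch of how a client could recompute the hashes a receipt records and what the receipt payload might look like; the field names and values are illustrative, not the exact on-chain schema.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    # Hash job artifacts so they can be checked against the receipt.
    return hashlib.sha256(data).hexdigest()

job_input = b"serialized model + dataset reference"   # stand-in for the real input
job_output = b"trained weights / inference results"   # stand-in for the real output

# Illustrative receipt layout; the real on-chain schema may differ.
receipt = {
    "input_hash": sha256_hex(job_input),
    "output_hash": sha256_hex(job_output),
    "runtime_seconds": 3724,
    "log_cid": "<ipfs-cid>",       # placeholder IPFS content identifier
    "node_address": "0x...",       # placeholder node address
    "payment": {"amount": "12.5", "token": "OGPN"},
}
print(json.dumps(receipt, indent=2))
```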
📈 Roadmap & Future
What’s next for opGPU?
Some key milestones:
✅ Q1 2025: Cloudverse Launch + $OGPN staking
🔜 Q2: Fiat payments, multi-chain rollout, job templates
🔜 Q3: GPU NFTs, decentralized scheduler
🔜 Q4: DAO governance, rep system for nodes
The full roadmap is available here.
How can I get involved?
Use the platform and provide feedback
Stake $OGPN and earn rev share
Join the DAO and vote on key decisions
Become a GPU provider
Contribute on GitHub or build tooling