Accelerate your AI/ML, deep learning, high-performance computing, and data analytics tasks with DigitalOcean GPU Droplets. Scale on demand, manage costs, and deliver actionable insights with ease. Plus, gain access to the DigitalOcean product stack, including our GenAI Platform and Kubernetes Service.
Zero to GPU in just 2 clicks. Get a GPU Droplet running in under a minute.
Save up to 75% vs. hyperscalers* for the same on-demand GPUs, with a bill you can actually understand.
The same easy-to-use platform that has served your cloud needs for over 10 years.
HIPAA-eligible and SOC 2 compliant products backed by enterprise-grade SLAs and the 24/7 Support Team you trust to keep you online.
*Up to 75% cheaper than AWS for on-demand H100s and H200s with 8 GPUs each. As of April 2025.
Use cases: Large model training, fine-tuning, inference, and high-performance computing
Key benefit: High memory bandwidth and capacity to efficiently handle larger models and datasets
Up to 1.3X the performance of AMD MI250X for AI use cases
Use cases: Training LLMs, inference, and high-performance computing
Key benefit: Fast training speed for LLMs
Up to 4X faster training over NVIDIA A100 for GPT-3 (175B) models
Use cases: Inference, graphical processing, rendering, 3D modeling, video, content creation, and media & gaming
Key benefit: Versatile, cost-efficient capabilities for content creation, 3D modeling, rendering, video, and inference workflows
Up to 1.7X higher performance than NVIDIA RTX A4000
Use cases: Inference, graphical processing, rendering, virtual workstations, compute, and media & gaming
Key benefit: Versatile, cost-efficient capabilities for content creation, 3D modeling, rendering, video, and inference workflows (with 2X more memory than the RTX 4000 Ada Generation)
Up to 10X higher performance than NVIDIA RTX A6000
Use cases: Generative AI, cost-effective inference & training, 3D graphics, rendering, virtual workstations, and streaming & video content
Key benefit: Versatile, cost-efficient capabilities for inference, graphics, digital twins, and real-time 4K streaming
Up to 1.7X the performance of NVIDIA A100 for AI use cases
Benchmarks available at nvidia.com and amd.com.
Looking for more help on which GPU Droplet to choose? Review How to Choose the Right GPU Droplet for Your AI/ML Workload and How to Choose a Cloud GPU for your Projects.
GPU Droplets are currently available in our NYC2, TOR1, and ATL1 data centers, with more data centers coming soon. All GPU models offer 10 Gbps of public and 25 Gbps of private network bandwidth.
GPU Model | GPU Memory | Droplet Memory | Droplet vCPUs | Local Storage: Boot Disk | Local Storage: Scratch Disk | Architecture |
---|---|---|---|---|---|---|
NVIDIA H100 | 80 GB | 240 GiB | 20 | 720 GiB NVMe | 5 TiB NVMe | Hopper |
NVIDIA H100x8 | 640 GB | 1,920 GiB | 160 | 2 TiB NVMe | 40 TiB NVMe | Hopper |
NVIDIA RTX 4000 Ada Generation | 20 GB | 32 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |
NVIDIA RTX 6000 Ada Generation | 48 GB | 64 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |
NVIDIA L40S | 48 GB | 64 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |
AMD Instinct™ MI300X | 192 GB | 240 GiB | 20 | 720 GiB NVMe | 5 TiB NVMe | CDNA 3™ |
AMD Instinct™ MI300Xx8 | 1,536 GB | 1,920 GiB | 160 | 1.875 TiB NVMe | 40 TiB NVMe | CDNA 3™ |
NVIDIA H200 is designed for generative AI and high-performance computing workloads, offering better energy efficiency, lower total cost of ownership, and nearly double the memory capacity of the NVIDIA H100. Learn more about how you can power your projects with NVIDIA H200s and DigitalOcean.
Don't need a full GPU Droplet? The GenAI Platform offers a serverless inference API and an agent development toolkit, backed by some of the world's most powerful LLMs. Add inferencing to your app within days, not weeks. And only pay for what you use.
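As a rough sketch of what a serverless inference call looks like, the snippet below builds a chat-completions-style request body. The endpoint URL and model slug shown here are assumptions for illustration only; copy the real values from your GenAI Platform dashboard.

```python
import json

# Assumed endpoint URL -- replace with the one from your dashboard.
ENDPOINT = "https://inference.do-ai.run/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "llama3.1-8b-instruct") -> dict:
    """Build the JSON body for a single-turn chat completion.

    The model slug is hypothetical; pick one from the platform's
    model catalog.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

body = build_chat_request("Summarize what a GPU Droplet is in one sentence.")
print(json.dumps(body, indent=2))
# Send this body to ENDPOINT with an "Authorization: Bearer <your-key>"
# header using any HTTP client; you pay only for the tokens you use.
```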
I just need some GPUs… I need a cost-effective, reliable Kubernetes solution that is easy for everyone on the team to access. And that's DO for us.
Richard Li
Amorphous Data, Founder and CEO
DigitalOcean GPU Droplets are virtual machines powered by GPUs for AI/ML workloads. With GPU Droplets, you can run training and inference on AI/ML models, process large data sets and complex neural networks for deep learning use cases, and serve additional use cases like high-performance computing (HPC).
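Because GPU Droplets are ordinary Droplets under the hood, they can also be created programmatically through DigitalOcean's `POST /v2/droplets` API. The sketch below assembles such a request body; the `size` and `image` slugs are illustrative guesses, not published values (check `doctl compute size list` for the real slugs).

```python
import json

# The /v2/droplets endpoint and its top-level fields (name, region, size,
# image, ssh_keys) come from DigitalOcean's public API reference.
API_URL = "https://api.digitalocean.com/v2/droplets"

def build_gpu_droplet_request(name: str, region: str = "nyc2") -> dict:
    """Assemble the JSON body for creating a single-GPU H100 Droplet."""
    return {
        "name": name,
        "region": region,            # GPU regions per this page: nyc2, tor1, atl1
        "size": "gpu-h100x1-80gb",   # hypothetical slug for a 1x H100 80 GB Droplet
        "image": "gpu-h100x1-base",  # hypothetical GPU-ready base image
        "ssh_keys": [],              # add your SSH key fingerprints here
    }

payload = build_gpu_droplet_request("ml-train-01")
print(json.dumps(payload, indent=2))
# To actually create the Droplet, POST this body to API_URL with an
# "Authorization: Bearer <token>" header.
```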
GPU Droplets are available in our New York, Atlanta, and Toronto data centers.
DigitalOcean provides a 99.5% uptime SLA for GPU Droplets.
When you power off your GPU Droplet, you are still billed for it, because your disk space, CPU, RAM, and IP address remain reserved while it is powered off. You continue to accrue charges until you destroy the Droplet.
GPU Droplets are billed per second with a 5-minute minimum billing period.
No. You are only charged per second for the time you use your GPU Droplet. Unlike standard Droplets, there is no 672-hour monthly billing cap: if your usage exceeds 672 hours in a month, you are still billed per second for the full time used.
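The per-second billing rules above can be sketched as a small cost estimator. The hourly rate used here is a placeholder, not a published price; only the 5-minute (300-second) minimum comes from this page.

```python
def gpu_droplet_cost(seconds_used: float, hourly_rate: float) -> float:
    """Estimate the on-demand cost of a GPU Droplet.

    Billing is per second with a 5-minute minimum billing period,
    so sessions shorter than 300 seconds are rounded up to 300.
    """
    billable = max(seconds_used, 300)  # enforce the 5-minute minimum
    return billable * (hourly_rate / 3600)

# Example with a placeholder rate of $3.60/hour:
print(round(gpu_droplet_cost(90, 3.60), 2))    # 90 s billed as 300 s -> 0.3
print(round(gpu_droplet_cost(7200, 3.60), 2))  # two full hours -> 7.2
```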
What is a Cloud GPU?
Scaling GenAI with GPU Droplets and DigitalOcean Networking
Droplet Features
Getting Started with 1-Click Models on GPU Droplets - A Guide to Llama 3.1 with Hugging Face
Stable Diffusion Made Easy: Get Started on DigitalOcean GPU Droplets
Choosing the Right Offering for your AI ML Workload
Choosing the Right GPU Droplet for Your AI/ML Workload
What is GPU Virtualization?
Sign up and get $200 in credit for your first 60 days with DigitalOcean.*
*This promotional offer applies to new accounts only.