GPU DEDICATED SERVERS

Compute. Transcode. Render. Mine.

Perfect for streaming to YouTube.

GPU Configurations starting at $69/mo.

Yes, Get my Server

Get instant access to computing power, graphics rendering, video transcoding, desktop virtualization, and crypto mining.

ENTERPRISE GRADE GPU SERVERS
HPE ENTERPRISE SERVERS

Your GPU configuration is installed on Hewlett Packard Enterprise servers, stress-tested for 100% compatibility and stability.

CHOOSE YOUR DATA CENTER

Get a GPU dedicated server, deployed in one of our New York or Bucharest data centers.

LOW LATENCY NETWORK

Your server is connected to a custom-built, low-latency global network.

SUPPORT

Get access to instant support, from real humans, available around the clock via phone or live chat.

NVIDIA TESLA T4



The T4 introduces Tensor Core technology with multi-precision computing, making it up to 40 times faster than a CPU and up to 3.5 times faster than its Pascal predecessor, the Tesla P4.


Get access to 8.1 TFLOPS of single precision performance from a single T4 GPU.


Transcode up to 38 full HD video streams simultaneously with a single Tesla T4 GPU paired with our HP BL460c blade server.

*Results may vary, based on server configuration.
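
Under the hood, this kind of transcoding is driven by NVENC, the dedicated hardware encoder on the T4. As a rough illustration only, here is a minimal sketch of one GPU-accelerated transcode job, assuming an ffmpeg build with NVENC support is installed on the server (file names are placeholders):

  import subprocess

  # One GPU-accelerated transcode; a server handling many streams would
  # run several of these jobs in parallel.
  subprocess.run(
      [
          "ffmpeg",
          "-hwaccel", "cuda",        # decode on the GPU
          "-i", "input_1080p.mp4",   # placeholder source file or stream URL
          "-c:v", "h264_nvenc",      # encode with the NVENC hardware encoder
          "-preset", "fast",
          "-b:v", "4M",
          "-c:a", "copy",            # pass the audio through untouched
          "output.mp4",
      ],
      check=True,
  )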

  • TURING TU104
  • 320 TURING TENSOR CORES
  • 2560 CUDA CORES
  • 16GB GDDR6
  • 8.1 TFLOPS SINGLE PRECISION
  • 65 FP16 TFLOPS
  • 130 INT8 TOPS
  • 260 INT4 TOPS
  • 320 GB/s Max Bandwidth

Compatible: VMware ESXi, Citrix XenServer, KVM, Linux, Windows.

The Coral USB Accelerator


You can now add an Edge TPU coprocessor to any Linux-based system with the Coral USB Accelerator, designed by Google. The small ASIC provides high-performance ML inferencing at a low power cost. For example, it can run MobileNet v2 models at 100+ fps while drawing very little power (500mA at 5V).


Specifications

  • 32-bit ARM Cortex-M0+ @ 32 MHz
  • Edge TPU ASIC (for TensorFlow Lite models)
  • USB 3.1 (5 Gbps) transfer speed

Compatible with Linux machines running Debian 6.0 or higher, or any derivative (such as Ubuntu 10.0+), as well as the Raspberry Pi (2/3 Model B/B+).
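
If you want to try the accelerator out, a minimal inference sketch looks like the following. It assumes the tflite_runtime Python package and the Edge TPU runtime (libedgetpu.so.1) are installed, and the model path is a placeholder for a model compiled for the Edge TPU:

  import numpy as np
  from tflite_runtime.interpreter import Interpreter, load_delegate

  # Load a TensorFlow Lite model compiled for the Edge TPU (placeholder path).
  interpreter = Interpreter(
      model_path="mobilenet_v2_edgetpu.tflite",
      experimental_delegates=[load_delegate("libedgetpu.so.1")],
  )
  interpreter.allocate_tensors()

  # Feed a dummy input of the expected shape and run one inference.
  input_details = interpreter.get_input_details()[0]
  dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
  interpreter.set_tensor(input_details["index"], dummy)
  interpreter.invoke()

  output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
  print("Top class:", int(np.argmax(output)))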

NVIDIA GeForce RTX 2080 / RTX 2080 Ti


NVIDIA’s new Turing chip architecture delivers up to six times the performance of previous-generation GPUs, with breakthrough technologies and next-generation, ultra-fast GDDR6 memory.


RTX 2080 Specifications

  • 8GB GDDR6
  • 2944 CUDA Cores
  • 448 GB/s Max Bandwidth
  • NVIDIA GPU Boost 4.0

RTX 2080 Ti Specifications

  • 11GB GDDR6
  • 4352 CUDA Cores
  • 616 GB/s Max Bandwidth
  • NVIDIA GPU Boost 4.0

Compatible with Linux, CUDA/OpenCL, KVM.
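
Once the server is provisioned, a quick way to confirm the RTX card is visible to a CUDA workload is a short check like the one below, a sketch that assumes a CUDA-enabled PyTorch build is installed:

  import torch

  if torch.cuda.is_available():
      print("GPU:", torch.cuda.get_device_name(0))
      x = torch.randn(4096, 4096, device="cuda")
      y = x @ x                      # run a matrix multiply on the GPU
      torch.cuda.synchronize()
      print("Matmul OK, result shape:", tuple(y.shape))
  else:
      print("No CUDA device detected")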

NVIDIA GeForce GTX 1080 / 1070 Ti


NVIDIA’s previous-generation Pascal chip architecture is great for mining, graphics rendering, and compute, delivering excellent performance at a budget-friendly price.


  • 8GB GDDR5X
  • 2560 CUDA Cores
  • 320 GB/s Max Bandwidth
  • NVIDIA GPU Boost 3.0

Compatible with Linux, CUDA/OpenCL, KVM.

NVIDIA TESLA P4/P40/P100


An optimal chip for machine learning and video transcoding can be found in the NVIDIA Tesla P4 and P100 GPUs. NVIDIA’s Pascal chip architecture has proven to be faster and more power efficient than its Maxwell predecessor.

Transcode up to 20 simultaneous video streams with a single Tesla P4 paired with our HP BL460c blade server. *Results may vary, based on server configuration and video resolution of each stream.

A more powerful version of the Tesla P4 is the Tesla P40, with more than twice the processing power of the Tesla P4.

The Tesla P100 GPU is best suited for deep learning and remote graphics. With 18.7 TeraFLOPS of inference performance, a single Tesla P100 can replace over 25 CPU servers. *Results may vary based on server configuration.


  • Pascal GP100, GP102, or GP104 chip
  • Up to 3584 CUDA cores
  • Up to 16GB CoWoS HBM2
  • Enterprise grade hardware

Compatible: VMware ESXi, Citrix XenServer, KVM, Linux, Windows.

NVIDIA TITAN V


The Titan V is the first GPU to break the 100-teraflop barrier of deep learning performance. NVIDIA’s Volta chip is up to 3x faster than its Pascal predecessor.

Your deep learning project can now become a reality with little upfront investment. Get maximum per-machine deep learning performance by replacing up to 30 single-CPU servers with just one Titan V configuration.

Use the Titan V for high-performance computing, from predicting the weather to discovering new energy sources, and get results up to 1.5x faster than with NVIDIA’s Pascal predecessor.
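
The Tensor Cores are typically exercised through mixed-precision (FP16) math. As a minimal sketch, assuming a CUDA-enabled PyTorch build, an FP16 matrix multiply eligible for the Volta Tensor Cores looks like this:

  import torch

  a = torch.randn(8192, 8192, device="cuda")
  b = torch.randn(8192, 8192, device="cuda")

  # autocast runs the matmul in FP16 so it can be scheduled on Tensor Cores.
  with torch.autocast(device_type="cuda", dtype=torch.float16):
      c = a @ b

  torch.cuda.synchronize()
  print("Output dtype:", c.dtype)    # expected: torch.float16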


  • NVIDIA Volta Chip
  • 5120 CUDA cores
  • 640 Tensor Cores
  • 12 GB CoWoS Stacked HBM2
  • 653 GB/s max bandwidth

Compatible: VMware ESXi, Citrix XenServer, KVM, Linux, Windows.

Why Primcast?

Add a GPU to HP enterprise hardware designed specifically for GPU add-ons, eliminating incompatibility issues and hardware underperformance. Your services are deployed on our global low-latency network, backed by a 99.9% uptime SLA and supported by GPU server experts around the clock.

Start Today

Create your Primcast GPU dedicated server account today.

Get Started