GPU Comparison
H100 80GB
Memory: 80 GB HBM3 @ 3.35 TBps
Interconnect: NVLink Full Mesh @ 900 GBps
NVIDIA RTX Virtual Workstation (vWS) support: No
Best used for:
Large models, ML Training, Inference, HPC, BERT, DLRM
A100 80GB
Memory: 80 GB HBM2e @ 1.9 TBps
Interconnect: NVLink Full Mesh @ 600 GBps
NVIDIA RTX Virtual Workstation (vWS) support: No
Best used for:
Large models, ML Training, Inference, HPC, BERT, DLRM
A100 40GB
Memory: 40 GB HBM2 @ 1.6 TBps
Interconnect: NVLink Full Mesh @ 600 GBps
NVIDIA RTX Virtual Workstation (vWS) support: Yes
Best used for:
ML Training, Inference, HPC
L4
Memory: 24 GB GDDR6 @ 300 GBps
Interconnect: N/A
NVIDIA RTX Virtual Workstation (vWS) support: Yes
Best used for:
ML Inference, Training, Remote Visualization Workstations, Video Transcoding, HPC
T4
Memory: 16 GB GDDR6 @ 320 GBps
Interconnect: N/A
NVIDIA RTX Virtual Workstation (vWS) support: Yes
Best used for:
ML Inference, Training, Remote Visualization Workstations, Video Transcoding
V100
Memory: 16 GB HBM2 @ 900 GBps
Interconnect: NVLink Ring @ 300 GBps
NVIDIA RTX Virtual Workstation (vWS) support: No
Best used for:
ML Training, Inference, HPC
P4
Memory: 8 GB GDDR5 @ 192 GBps
Interconnect: N/A
NVIDIA RTX Virtual Workstation (vWS) support: Yes
Best used for:
Remote Visualization, ML Inference, Video Transcoding
P100
Memory: 16 GB HBM2 @ 732 GBps
Interconnect: N/A
NVIDIA RTX Virtual Workstation (vWS) support: Yes
Best used for:
ML Training, Inference, HPC, Remote Visualization
Source: https://cloud.google.com/compute/docs/gpus
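The specs above can be encoded for programmatic filtering, for example when choosing a GPU by memory bandwidth or vWS support. This is a minimal sketch; the dictionary layout, field names, and helper functions are illustrative assumptions, not part of any Google Cloud API.

```python
# Illustrative encoding of the GPU comparison above.
# Field names (memory_gb, bandwidth_gbps, interconnect_gbps, vws) are
# assumptions for this sketch; bandwidths are in GB/s, None = no NVLink.
GPUS = {
    "H100 80GB": {"memory_gb": 80, "bandwidth_gbps": 3350, "interconnect_gbps": 900,  "vws": False},
    "A100 80GB": {"memory_gb": 80, "bandwidth_gbps": 1900, "interconnect_gbps": 600,  "vws": False},
    "A100 40GB": {"memory_gb": 40, "bandwidth_gbps": 1600, "interconnect_gbps": 600,  "vws": True},
    "L4":        {"memory_gb": 24, "bandwidth_gbps": 300,  "interconnect_gbps": None, "vws": True},
    "T4":        {"memory_gb": 16, "bandwidth_gbps": 320,  "interconnect_gbps": None, "vws": True},
    "V100":      {"memory_gb": 16, "bandwidth_gbps": 900,  "interconnect_gbps": 300,  "vws": False},
    "P4":        {"memory_gb": 8,  "bandwidth_gbps": 192,  "interconnect_gbps": None, "vws": True},
    "P100":      {"memory_gb": 16, "bandwidth_gbps": 732,  "interconnect_gbps": None, "vws": True},
}

def vws_capable(gpus):
    """Names of GPUs that support NVIDIA RTX Virtual Workstation (vWS)."""
    return [name for name, spec in gpus.items() if spec["vws"]]

def fastest_memory(gpus):
    """Name of the GPU with the highest memory bandwidth."""
    return max(gpus, key=lambda name: gpus[name]["bandwidth_gbps"])
```

For example, `fastest_memory(GPUS)` returns `"H100 80GB"`, and `vws_capable(GPUS)` lists the A100 40GB, L4, T4, P4, and P100.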