Choose Supported EC2 Instance Machine Types
Use this page to find the Amazon EC2 machine types that Cloud Center supports for MATLAB® Parallel Server™ clusters.
On the Create Cluster page, under Machine Configuration, select a Worker Machine Type from the list. You can also edit the instance type on existing clusters. Amazon EC2 provides instance types with various combinations of CPU, GPU, memory, network performance, and storage. Choose an instance that suits your application.
Tip
For deep learning, choose an instance with NVIDIA® GPUs such as the P3, G4dn, or G5 instances. P3 instances have GPUs with high performance for general computation. G4dn and G5 instances have GPUs with high single-precision performance for deep learning, image processing, computer vision, and automated driving simulations.
For clusters, Cloud Center supports only the instance types belonging to the instance classes in the table below. For example, the instance type m5.8xlarge is supported because it belongs to the instance class m5. To check support for NVIDIA GPU architectures by MATLAB release, consult the GPU Architecture and Compute Capability column and compare it with the information in GPU Computing Requirements (Parallel Computing Toolbox).
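To confirm at run time that a worker's GPU meets the compute capability your MATLAB release requires, you can query the device with `gpuDevice` (Parallel Computing Toolbox). The minimum value below is a placeholder; take the actual minimum for your release from GPU Computing Requirements.

```matlab
% Run on a cluster worker (for example, inside spmd or parfevalOnAll).
% minCC is a hypothetical minimum -- substitute the value that
% GPU Computing Requirements lists for your MATLAB release.
minCC = 5.0;
gpu = gpuDevice;                         % query the worker's GPU
cc  = str2double(gpu.ComputeCapability); % e.g., '7.0' -> 7.0
if cc < minCC
    warning('GPU %s (compute capability %.1f) is below the required %.1f.', ...
        gpu.Name, cc, minCC);
end
```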
Instance Class | Available Instance Sizes (Number of CPU Cores) | Memory Density (GB per Core) | Max Network Performance (Gbps) | GPU Model | GPU Architecture and Compute Capability |
---|---|---|---|---|---|
General Purpose | |||||
m6a | 2,4,8,16,24,32,48,64 | 8.0 | 50 | ||
m6i | 2,4,8,16,24,32,48,64 | 8.0 | 50 | ||
m6id | 2,4,8,16,24,32,48,64 | 8.0 | 50 | ||
m6in | 2,4,8,16,24,32,48,64 | 8.0 | 200 | ||
m6idn | 2,4,8,16,24,32,48,64 | 8.0 | 200 | ||
m5 | 2,4,8,16,24,32,48 | 8.0 | 25 | ||
m5a | 2,4,8,16,24,32,48 | 8.0 | 20 | ||
m5ad | 2,4,8,16,24,32,48 | 8.0 | 20 | ||
m5d | 2,4,8,16,24,32,48 | 8.0 | 25 | ||
m5dn | 2,4,8,16,24,32,48 | 8.0 | 100 | ||
m5n | 2,4,8,16,24,32,48 | 8.0 | 100 | ||
m5zn | 2,4,6,12,24 | 8.0 | 100 | ||
Accelerated Computing (GPUs) | |||||
g5 | 2,4,8,16,24,32,48,96 | 8.0 | 100 | NVIDIA A10G | Ampere (cc8.6) |
g4dn | 2,4,8,16,24,32 | 8.0 | 50 | NVIDIA T4 | Turing (cc7.5) |
g3 | 8,16,32 | 15.25 | 25 | NVIDIA Tesla M60 | Maxwell (cc5.2) |
p3 | 4,16,32 | 15.25 | 25 | NVIDIA V100 Tensor | Volta (cc7.0) |
p3dn | 48 | 16.0 | 100 | NVIDIA V100 Tensor | Volta (cc7.0) |
p2 | 2,16,32 | 30.5 | 25 | NVIDIA Tesla K80 | Kepler (cc3.7) |
Memory Optimized | |||||
r6a | 2,4,8,16,24,32,48,64,96 | 16.0 | 50 | ||
r6i | 2,4,8,16,24,32,48,64 | 16.0 | 50 | ||
r6id | 2,4,8,16,24,32,48,64 | 16.0 | 50 | ||
r6in | 2,4,8,16,24,32,48,64 | 16.0 | 200 | ||
r6idn | 2,4,8,16,24,32,48,64 | 16.0 | 200 | ||
r5 | 2,4,8,16,24,32,48 | 16.0 | 25 | ||
r5a | 2,4,8,16,24,32,48 | 16.0 | 20 | ||
r5ad | 2,4,8,16,24,32,48 | 16.0 | 20 | ||
r5b | 2,4,8,16,24,32,48 | 16.0 | 25 | ||
r5d | 2,4,8,16,24,32,48 | 16.0 | 25 | ||
r5dn | 2,4,8,16,24,32,48 | 16.0 | 100 | ||
r5n | 2,4,8,16,24,32,48 | 16.0 | 100 | ||
r4 | 2,4,8,16,32 | 15.25 | 25 | ||
r3 | 2,4,8,16 | 15.25 | 10 | ||
x1e | 2,4,8,16,32,64 | 61.0 | 25 | ||
Compute Optimized | |||||
c6a | 2,4,8,16,24,32,48,64,96 | 4.0 | 50 | ||
c6i | 2,4,8,16,24,32,48,64 | 4.0 | 50 | ||
c6id | 2,4,8,16,24,32,48,64 | 4.0 | 50 | ||
c6in | 2,4,8,16,24,32,48,64 | 4.0 | 200 | ||
c5 | 2,4,8,18,24,36,48 | 4.0 | 25 | ||
c5a | 2,4,8,16,24,32,48 | 4.0 | 20 | ||
c5ad | 2,4,8,16,24,32,48 | 4.0 | 20 | ||
c5d | 2,4,8,18,24,36,48 | 4.0 | 25 | ||
c5n | 2,4,8,18,36 | 4.0 | 100 | ||
c4 | 2,4,8,18 | 3.75 | 10 | ||
c3 | 1,16 | 3.75 | 10 | ||
Storage Optimized | |||||
i3 | 2,4,8,16,32 | 15.25 | 25 |
For details on other cluster settings, see Create a Cloud Cluster.
For more details on instance types, including newly added compute optimized instances and regional availability, see the Amazon Web Services website: Amazon EC2 Instance Types. Note that Amazon Web Services describes instances in terms of vCPUs, where a vCPU is a virtual (logical) core. Usually, each physical core has two vCPUs. For example, an m5.8xlarge instance has 16 physical CPU cores, which corresponds to 32 vCPUs.
Note
- For clusters, Cloud Center supports only Linux On-demand instances.
- c5d.xlarge is the default headnode instance type.
- m5.8xlarge is the default worker instance type.
- Not all instance types are available in all regions.
Cloud Center currently supports clusters in the following regions:

- US East (N. Virginia)
- EU West (Ireland)
- AP Northeast (Tokyo)
For clusters, Cloud Center supports reserved instances in addition to On-demand. Cloud Center does not support dedicated or spot instances.
Cloud Center supports at most one worker per physical core. Although Amazon Web Services machines can have many virtual cores, Cloud Center restricts use to one worker per physical core for optimal performance. Each physical core has two virtual cores that share a floating-point unit (FPU). Most MATLAB computations use this unit because they are double-precision floating point. Restricting to one worker per physical core ensures that each worker has exclusive access to an FPU and optimizes performance.
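Because most EC2 instance types expose two vCPUs per physical core, and Cloud Center runs at most one worker per physical core, the worker count per machine follows directly from the vCPU count. A minimal sketch, using m5.8xlarge as an example:

```matlab
% Two vCPUs per physical core on most EC2 instance types, and at most
% one MATLAB worker per physical core.
vCPUs = 32;                          % for example, m5.8xlarge
maxWorkersPerMachine = vCPUs / 2;    % 16 workers: one per physical core
```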
To use reserved instances for clusters with Cloud Center, you need to purchase reserved instances with the following Cloud Center supported attributes:

- Instance type: one of the machine types supported by Cloud Center for clusters; to identify supported instances, consult the table above.
- Platform description: Linux.
- Tenancy: default.
- Region: one of the regions supported by Cloud Center for clusters (US East (N. Virginia), EU West (Ireland), AP Northeast (Tokyo)).
- Availability Zone: an Availability Zone within the selected region; it must match the Availability Zone of the selected subnet.
For pricing and billing information, see the Amazon website: Amazon EC2 Pricing.
AWS Resource Limits
The maximum number of instances that you can start in Cloud Center depends on your AWS On-demand instance limits. On-demand instance limits determine the maximum number of virtual central processing units (vCPUs) that you can use. In most cases, a physical core corresponds to 2 vCPUs. For example, an m5.8xlarge instance has 16 physical CPU cores, which corresponds to 32 vCPUs. To determine how many vCPUs you need, use the vCPU limits calculator, which you can find in your AWS EC2 console by selecting Limits > Calculate vCPU limit. For more information on On-demand instance limits, see On-Demand Instances.
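As a rough sizing aid, you can estimate the vCPU quota a cluster needs from its headnode and worker instance sizes. The sizes below are assumptions that match the defaults noted above (c5d.xlarge headnode, m5.8xlarge workers); substitute your own configuration.

```matlab
% Rough estimate of the On-demand vCPU quota for one headnode plus
% N worker machines. Instance sizes here are illustrative assumptions.
vCPUsPerCore      = 2;    % typical EC2 hyper-threading
headnodeCores     = 2;    % c5d.xlarge has 2 physical cores
workerCores       = 16;   % m5.8xlarge has 16 physical cores
numWorkerMachines = 4;
totalvCPUs = vCPUsPerCore * (headnodeCores + workerCores * numWorkerMachines);
fprintf('Request an On-demand vCPU limit of at least %d.\n', totalvCPUs);
```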
For more information on AWS EC2 limits for any type of resource, see Amazon EC2 Service Quotas.