GPU and Accelerator Model and Manufacturer Support w/ free-tier, auto-recovery, dedicated-hosts, and cpu-manufacturer (#128)
* support model filters for GPUs & accelerators, manufacturer filters for GPUs, accelerators, and CPUs, and new freeTier, dedicatedHosts, and autoRecovery filters
* upgrade go to 1.18
* update readme
* parallelized filter prep and processing
* fix license test
* fix inference model
     --allow-list string                          List of allowed instance types to select from w/ regex syntax (Example: m[3-5]\.*)
+    --auto-recovery                              EC2 Auto-Recovery supported
 -z, --availability-zones strings                 Availability zones or zone ids to check EC2 capacity offered in specific AZs
     --baremetal                                  Bare Metal instance types (.metal instances)
 -b, --burst-support                              Burstable instance types
 -a, --cpu-architecture string                    CPU architecture [x86_64/amd64, x86_64_mac, i386, or arm64]
+    --cpu-manufacturer string                    CPU manufacturer [amd, intel, aws]
     --current-generation                         Current generation instance types (explicitly set this to false to not return current generation instance types)
+    --dedicated-hosts                            Dedicated Hosts supported
     --deny-list string                           List of instance types which should be excluded w/ regex syntax (Example: m[1-2]\.*)
     --disk-encryption                            EBS or local instance storage where encryption is supported or required
     --disk-type string                           Disk Type: [hdd or ssd]
@@ -187,14 +190,19 @@ Filter Flags:
     --efa-support                                Instance types that support Elastic Fabric Adapters (EFA)
 -e, --ena-support                                Instance types where ENA is supported or required
 -f, --fpga-support                               FPGA instance types
+    --free-tier                                  Free Tier supported
+    --gpu-manufacturer string                    GPU Manufacturer name (Example: NVIDIA)
     --gpu-memory-total string                    Number of GPUs' total memory (Example: 4 GiB) (sets --gpu-memory-total-min and -max to the same value)
     --gpu-memory-total-max string                Maximum Number of GPUs' total memory (Example: 4 GiB) If --gpu-memory-total-min is not specified, the lower bound will be 0
     --gpu-memory-total-min string                Minimum Number of GPUs' total memory (Example: 4 GiB) If --gpu-memory-total-max is not specified, the upper bound will be infinity
+    --gpu-model string                           GPU Model name (Example: K520)
 -g, --gpus int                                   Total Number of GPUs (Example: 4) (sets --gpus-min and -max to the same value)
     --gpus-max int                               Maximum Total Number of GPUs (Example: 4) If --gpus-min is not specified, the lower bound will be 0
     --gpus-min int                               Minimum Total Number of GPUs (Example: 4) If --gpus-max is not specified, the upper bound will be infinity
     --hibernation-support                        Hibernation supported
     --hypervisor string                          Hypervisor: [xen or nitro]
+    --inference-accelerator-manufacturer string  Inference Accelerator Manufacturer name (Example: AWS)
+    --inference-accelerator-model string         Inference Accelerator Model name (Example: Inferentia)
     --inference-accelerators int                 Total Number of inference accelerators (Example: 4) (sets --inference-accelerators-min and -max to the same value)
     --inference-accelerators-max int             Maximum Total Number of inference accelerators (Example: 4) If --inference-accelerators-min is not specified, the lower bound will be 0
     --inference-accelerators-min int             Minimum Total Number of inference accelerators (Example: 4) If --inference-accelerators-max is not specified, the upper bound will be infinity
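A rough sketch of how the flags added in this change might be combined on the command line. This assumes the CLI binary is installed as `ec2-instance-selector` and that AWS credentials and a region are already configured; the flag names and example values (NVIDIA, K520, AWS, Inferentia) are taken directly from the help text above.

```shell
# Instance types with an NVIDIA K520 GPU (new --gpu-manufacturer / --gpu-model filters):
ec2-instance-selector --gpu-manufacturer NVIDIA --gpu-model K520

# Free Tier eligible, Intel-based instance types that support EC2 Auto-Recovery:
ec2-instance-selector --free-tier --cpu-manufacturer intel --auto-recovery

# AWS Inferentia inference accelerators on instance types that support Dedicated Hosts:
ec2-instance-selector \
    --inference-accelerator-manufacturer AWS \
    --inference-accelerator-model Inferentia \
    --dedicated-hosts
```

Each invocation prints the matching instance type names, so the filters can be composed freely and the output piped into other tooling.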