# Configuration
The configurations described in this article all apply on the application (workload) side.
## Pod configs: Annotations
| Argument | Type | Description | Example |
|---|---|---|---|
| `nvidia.com/use-gpuuuid` | String | If set, devices allocated to this pod must be among the UUIDs listed in this string. | "GPU-AAA,GPU-BBB" |
| `nvidia.com/nouse-gpuuuid` | String | If set, devices allocated to this pod must NOT be among the UUIDs listed in this string. | "GPU-AAA,GPU-BBB" |
| `nvidia.com/use-gputype` | String | If set, devices allocated to this pod must be one of the GPU types listed in this string. | "Tesla V100-PCIE-32GB, NVIDIA A10" |
| `nvidia.com/nouse-gputype` | String | If set, devices allocated to this pod must NOT be one of the GPU types listed in this string. | "Tesla V100-PCIE-32GB, NVIDIA A10" |
| `hami.io/node-scheduler-policy` | String | GPU node scheduling policy: "binpack" packs pods onto GPU nodes that are already in use; "spread" distributes pods across different GPU nodes. | "binpack" or "spread" |
| `hami.io/gpu-scheduler-policy` | String | GPU scheduling policy: "binpack" packs pods onto the same GPU card; "spread" distributes pods across different GPU cards. | "binpack" or "spread" |
| `nvidia.com/vgpu-mode` | String | The type of vGPU instance this pod wishes to use. | "hami-core" or "mig" |
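As a sketch, the annotations above are set under `metadata.annotations` of a pod spec. The pod name, image, and the `nvidia.com/gpu` resource request below are illustrative; only the annotation keys come from the table:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example            # illustrative name
  annotations:
    # Restrict scheduling to these GPU types only
    nvidia.com/use-gputype: "Tesla V100-PCIE-32GB,NVIDIA A10"
    # Never place this pod on these specific devices
    nvidia.com/nouse-gpuuuid: "GPU-AAA,GPU-BBB"
    # Prefer nodes whose GPUs are already in use
    hami.io/node-scheduler-policy: "binpack"
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1
```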
## Container configs: Env
| Argument | Type | Description | Default |
|---|---|---|---|
| `GPU_CORE_UTILIZATION_POLICY` | String | Defines the GPU core utilization policy: "default" applies the default utilization policy; "force" limits core utilization below "nvidia.com/gpucores"; "disable" ignores the utilization limit set by "nvidia.com/gpucores" during job execution. | "default" |
| `CUDA_DISABLE_CONTROL` | Boolean | If "true", HAMi-core will not be used inside the container, so there is no resource isolation or limitation (for debugging purposes only). | false |
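Because these are container-level settings, they go under the container's `env` field rather than pod annotations. A minimal sketch (the container name and image are illustrative):

```yaml
spec:
  containers:
    - name: cuda                                   # illustrative name
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # illustrative image
      env:
        # Hard-enforce the core limit from "nvidia.com/gpucores"
        - name: GPU_CORE_UTILIZATION_POLICY
          value: "force"
        # Leave isolation enabled; set "true" only when debugging
        - name: CUDA_DISABLE_CONTROL
          value: "false"
```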