Configuration

The configurations described in this article are all applied on the application (workload) side, in the pod or container spec.


Pod configs: Annotations

| Argument | Type | Description | Example |
|----------|------|-------------|---------|
| nvidia.com/use-gpuuuid | String | If set, devices allocated to this pod must be among the UUIDs listed in this string. | "GPU-AAA,GPU-BBB" |
| nvidia.com/nouse-gpuuuid | String | If set, devices allocated to this pod must NOT be among the UUIDs listed in this string. | "GPU-AAA,GPU-BBB" |
| nvidia.com/nouse-gputype | String | If set, devices allocated to this pod must NOT be one of the types listed in this string. | "Tesla V100-PCIE-32GB, NVIDIA A10" |
| nvidia.com/use-gputype | String | If set, devices allocated to this pod must be one of the types listed in this string. | "Tesla V100-PCIE-32GB, NVIDIA A10" |
| hami.io/node-scheduler-policy | String | GPU node scheduling policy: "binpack" packs the pod onto nodes whose GPUs are already in use; "spread" distributes pods across different GPU nodes. | "binpack" or "spread" |
| hami.io/gpu-scheduler-policy | String | GPU scheduling policy: "binpack" packs tasks onto the same GPU card; "spread" distributes tasks across different GPU cards. | "binpack" or "spread" |
| nvidia.com/vgpu-mode | String | The type of vGPU instance this pod wishes to use. | "hami-core" or "mig" |
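
For illustration, here is a minimal Pod sketch combining several of these annotations. The container image, the sleep command, and the `nvidia.com/gpu` resource request are assumptions for this example, not values prescribed by this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-example
  annotations:
    # Only allow allocation from these specific GPUs.
    nvidia.com/use-gpuuuid: "GPU-AAA,GPU-BBB"
    # Prefer packing onto GPU nodes that are already in use.
    hami.io/node-scheduler-policy: "binpack"
    # Request a hami-core vGPU instance rather than a MIG instance.
    nvidia.com/vgpu-mode: "hami-core"
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:12.4.0-base-ubuntu22.04  # example image (assumption)
      command: ["sleep", "infinity"]              # keep the pod running for the example
      resources:
        limits:
          nvidia.com/gpu: 1  # assumed vGPU resource name for this sketch
```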

Container configs: Env

| Argument | Type | Description | Default |
|----------|------|-------------|---------|
| GPU_CORE_UTILIZATION_POLICY | String | Defines the GPU core utilization policy:<br>- "default": the default utilization policy.<br>- "force": limits core utilization below "nvidia.com/gpucores".<br>- "disable": ignores the utilization limit set by "nvidia.com/gpucores" during job execution. | "default" |
| CUDA_DISABLE_CONTROL | Boolean | If "true", HAMi-core will not be used inside the container, so there is no resource isolation or limitation (for debugging purposes). | false |
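
As a sketch, these variables are set per container via the `env` field of the Pod spec. The image, the sleep command, and the resource limit values below are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-env-example
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:12.4.0-base-ubuntu22.04  # example image (assumption)
      command: ["sleep", "infinity"]              # keep the pod running for the example
      env:
        # Strictly enforce the core limit set by "nvidia.com/gpucores".
        - name: GPU_CORE_UTILIZATION_POLICY
          value: "force"
        # Keep HAMi-core enabled; set "true" only when debugging.
        - name: CUDA_DISABLE_CONTROL
          value: "false"
      resources:
        limits:
          nvidia.com/gpu: 1        # assumed vGPU resource name for this sketch
          nvidia.com/gpucores: 50  # assumed per-container core percentage limit
```

Note that "force" only has an effect when a "nvidia.com/gpucores" limit is actually set on the container, as in the sketch above.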