# Configure resource quotas for Pipeline components
# Overview
Configure the resource quotas related to Pipeline components.
# Use cases
- Adjust the resource quotas of Pipeline components
- Set default resource quotas for the init containers and containers created by TaskRuns
# Prerequisites
- The Tekton Operator component must be installed
- The TektonConfig resource must already exist in the environment (it is created automatically)
- You should first read the document on adjusting optional configuration items for subcomponents
# Resource configuration guidelines
Before configuring resource quotas:
- Assess the cluster's available resources and capacity
- Consider workload characteristics and performance requirements
- Start with conservative values and adjust them based on monitoring data
- Test the configuration in a non-production environment first
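When comparing candidate values against cluster capacity, it helps to convert Kubernetes quantity strings (such as "500m" or "512Mi") into plain numbers. The following is a minimal sketch in Python; the helper names `parse_cpu`, `parse_memory`, and `fits` are illustrative and not part of kubectl or any Kubernetes client library:

```python
# Minimal converters for Kubernetes quantity strings (illustrative helpers,
# not part of kubectl or any Kubernetes library).

# Binary and decimal memory suffixes; two-letter suffixes are checked first.
_MEM_SUFFIXES = [
    ("Ki", 1024), ("Mi", 1024**2), ("Gi", 1024**3), ("Ti", 1024**4),
    ("K", 1000), ("M", 1000**2), ("G", 1000**3), ("T", 1000**4),
]

def parse_cpu(quantity: str) -> float:
    """Convert a CPU quantity such as '500m' or '1' into cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """Convert a memory quantity such as '512Mi' or '1Gi' into bytes."""
    for suffix, factor in _MEM_SUFFIXES:
        if quantity.endswith(suffix):
            return int(quantity[:-len(suffix)]) * factor
    return int(quantity)  # plain byte count

def fits(request: str, limit: str, parse) -> bool:
    """Return True if a request does not exceed the corresponding limit."""
    return parse(request) <= parse(limit)

print(parse_cpu("500m"))                      # 0.5
print(parse_memory("512Mi"))                  # 536870912
print(fits("128Mi", "512Mi", parse_memory))   # True
```

A request larger than its limit would be rejected by the API server, so comparing the parsed values up front catches such mistakes before you edit the TektonConfig resource.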
# Steps
# Step 1
Edit the TektonConfig resource:
$ kubectl edit tektonconfigs.operator.tekton.dev config

# Step 2
WARNING
Modifying the configuration may trigger a rolling update of the component Pods, causing temporary service unavailability. Perform this change at an appropriate time.
An example of modifying the configuration under spec.pipeline.options (the config-defaults ConfigMap and the component Deployments):
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      disabled: false
      configMaps:
        config-defaults:
          data:
            # Add default container resource quotas
            # Adjust the values below according to your cluster's resource capacity and workload requirements
            default-container-resource-requirements: |
              place-scripts: # updates resource requirements of the 'place-scripts' container
                requests:
                  memory: "<MEMORY_REQUEST>" # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>" # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>" # e.g., "512Mi"
                  cpu: "<CPU_LIMIT>" # e.g., "500m"
              prepare: # updates resource requirements of the 'prepare' container
                requests:
                  memory: "<MEMORY_REQUEST>" # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>" # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>" # e.g., "256Mi"
                  cpu: "<CPU_LIMIT>" # e.g., "500m"
              working-dir-initializer: # updates resource requirements of the 'working-dir-initializer' container
                requests:
                  memory: "<MEMORY_REQUEST>" # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>" # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>" # e.g., "512Mi"
                  cpu: "<CPU_LIMIT>" # e.g., "500m"
              prefix-scripts: # updates resource requirements of containers whose names start with 'scripts-'
                requests:
                  memory: "<MEMORY_REQUEST>" # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>" # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>" # e.g., "512Mi"
                  cpu: "<CPU_LIMIT>" # e.g., "500m"
              prefix-sidecar-scripts: # updates resource requirements of containers whose names start with 'sidecar-scripts-'
                requests:
                  memory: "<MEMORY_REQUEST>" # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>" # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>" # e.g., "512Mi"
                  cpu: "<CPU_LIMIT>" # e.g., "500m"
              sidecar-tekton-log-results: # updates resource requirements of the 'sidecar-tekton-log-results' container
                requests:
                  memory: "<MEMORY_REQUEST>" # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>" # e.g., "100m"
                limits:
                  memory: "<MEMORY_LIMIT>" # e.g., "256Mi"
                  cpu: "<CPU_LIMIT>" # e.g., "250m"
      deployments:
        # Adjust the resource values below according to your cluster's capacity and performance requirements
        tekton-pipelines-controller:
          spec:
            replicas: <REPLICA_COUNT> # e.g., 1
            template:
              spec:
                containers:
                - name: tekton-pipelines-controller
                  resources:
                    requests:
                      cpu: <CPU_REQUEST> # e.g., "500m"
                      memory: <MEMORY_REQUEST> # e.g., "512Mi"
                    limits:
                      cpu: <CPU_LIMIT> # e.g., "1"
                      memory: <MEMORY_LIMIT> # e.g., "1Gi"
        tekton-pipelines-remote-resolvers:
          spec:
            replicas: <REPLICA_COUNT> # e.g., 1
            template:
              spec:
                containers:
                - name: controller
                  resources:
                    requests:
                      cpu: <CPU_REQUEST> # e.g., "200m"
                      memory: <MEMORY_REQUEST> # e.g., "256Mi"
                    limits:
                      cpu: <CPU_LIMIT> # e.g., "500m"
                      memory: <MEMORY_LIMIT> # e.g., "512Mi"
        tekton-pipelines-webhook:
          spec:
            replicas: <REPLICA_COUNT> # e.g., 1
            template:
              spec:
                containers:
                - name: webhook
                  resources:
                    requests:
                      cpu: <CPU_REQUEST> # e.g., "500m"
                      memory: <MEMORY_REQUEST> # e.g., "256Mi"
                    limits:
                      cpu: <CPU_LIMIT> # e.g., "1"
                      memory: <MEMORY_LIMIT> # e.g., "500Mi"
        tekton-events-controller:
          spec:
            replicas: <REPLICA_COUNT> # e.g., 1
            template:
              spec:
                containers:
                - name: tekton-events-controller
                  resources:
                    requests:
                      cpu: <CPU_REQUEST> # e.g., "100m"
                      memory: <MEMORY_REQUEST> # e.g., "100Mi"
                    limits:
                      cpu: <CPU_LIMIT> # e.g., "200m"
                      memory: <MEMORY_LIMIT> # e.g., "256Mi"

# Step 3
Submit the configuration and wait for the Pods to be updated.
$ kubectl get pods -n tekton-pipelines -w
NAME                                                 READY   STATUS    RESTARTS   AGE
tekton-pipelines-controller-648d87488b-fq9bc         1/1     Running   0          2m21s
tekton-pipelines-remote-resolvers-79554f5959-cbm6x   1/1     Running   0          2m21s
tekton-pipelines-webhook-5cd9847998-864zf            1/1     Running   0          2m20s
tekton-events-controller-5c97b7554c-m59m6            1/1     Running   0          2m21s

# Results
As shown below, the resource quota configuration for the Pipeline components has taken effect.
$ kubectl get deployments.apps -n tekton-pipelines tekton-pipelines-controller tekton-pipelines-remote-resolvers tekton-pipelines-webhook tekton-events-controller -o yaml | grep 'resources:' -A 6
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
--
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
--
        resources:
          limits:
            cpu: "2"
            memory: 500Mi
          requests:
            cpu: "1"
            memory: 256Mi
--
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 100Mi

# Verify the Pod resource quota configuration
# Create a TaskRun
$ cat <<'EOF' | kubectl create -f -
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: hello
  namespace: default
spec:
  taskSpec:
    steps:
    - name: hello
      image: alpine
      command: ["echo", "hello"]
EOF

# Wait for the TaskRun to complete
$ kubectl get taskruns.tekton.dev -n default hello
NAME    SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
hello   True        Succeeded   2m41s       2m28s

# View the Pod resource quota configuration
$ kubectl get pods -n default hello-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  namespace: default
spec:
  containers:
  - image: alpine
    name: step-hello
    resources: {}
  initContainers:
  - name: prepare
    resources:
      limits:
        cpu: 100m
        memory: 256Mi
      requests:
        cpu: 50m
        memory: 64Mi

As shown above, the resource quota of the prepare init container in the Pod matches the resource quota configured in the config-defaults ConfigMap.
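The manual check above can also be scripted. The sketch below extracts the prepare init container's resources from a Pod object and compares them with the expected values. The pod dict is hardcoded from the example output here purely for illustration (in practice it could be loaded from the output of kubectl get pod -o json), and init_container_resources is a hypothetical helper, not part of any Kubernetes library:

```python
# Verify that an init container's resources match the expected defaults.
# The pod dict below is hardcoded from the example output above; in practice
# it could be parsed from `kubectl get pod -o json`.

pod = {
    "spec": {
        "initContainers": [
            {
                "name": "prepare",
                "resources": {
                    "limits": {"cpu": "100m", "memory": "256Mi"},
                    "requests": {"cpu": "50m", "memory": "64Mi"},
                },
            }
        ]
    }
}

expected = {
    "limits": {"cpu": "100m", "memory": "256Mi"},
    "requests": {"cpu": "50m", "memory": "64Mi"},
}

def init_container_resources(pod: dict, name: str) -> dict:
    """Return the resources of the named init container, or {} if absent."""
    for container in pod["spec"].get("initContainers", []):
        if container["name"] == name:
            return container.get("resources", {})
    return {}

actual = init_container_resources(pod, "prepare")
print(actual == expected)  # True
```

A check like this can run in CI after a configuration change, failing fast if the defaults in config-defaults did not propagate to TaskRun Pods.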