Planning Infra Nodes for Logging Storage
This guide explains the planning considerations for running Logging storage plugins on dedicated Kubernetes infra nodes.
TOC

- Objectives
- Where to Configure Placement
- Before You Configure Placement
- Check Local PVs and nodeAffinity
- Historical Kafka and ZooKeeper Nodes
- Troubleshooting

Objectives
- Isolate resources: Prevent contention with business workloads.
- Enforce stability: Reduce evictions and scheduling conflicts.
- Simplify management: Centralize infra components with consistent scheduling rules.
Where to Configure Placement
- For Alauda Container Platform Log Storage for Elasticsearch, configure placement through `spec.valuesOverride.ait/chart-alauda-log-center.global.nodeSelector` and `spec.valuesOverride.ait/chart-alauda-log-center.global.tolerations` in Installation.
- For Alauda Container Platform Log Storage for ClickHouse, configure placement through Advanced Configuration in the console or through `spec.config.components.nodeSelector` and `spec.config.components.tolerations` in Installation.
Patching the generated StatefulSets, Deployments, or ClickHouseInstallation resources is not the supported way to place Logging storage workloads on infra nodes; use the configuration entry points above instead.
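As a sketch, an Elasticsearch plugin Installation override using the paths above might look like the following. The `node-role.kubernetes.io/infra` label and taint keys are illustrative assumptions; substitute whatever label and taint your infra node group actually uses.

```yaml
spec:
  valuesOverride:
    ait/chart-alauda-log-center:
      global:
        # Illustrative infra node label; use your cluster's actual label key
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        # Illustrative infra taint; must match the taint applied to the infra nodes
        tolerations:
          - key: node-role.kubernetes.io/infra
            operator: Exists
            effect: NoSchedule
```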
Before You Configure Placement
- Plan the infra nodes according to Cluster Node Planning.
- Confirm whether your storage uses LocalVolume or other PVs with `spec.nodeAffinity`.
- Make sure the selected infra nodes can satisfy both the scheduling rules and the storage placement constraints.
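To verify that the planned infra nodes exist and carry the expected label before configuring placement, you can list them with kubectl. The label key below is an assumption; use your cluster's actual infra node label.

```shell
# List nodes carrying the (assumed) infra label, with their taints
kubectl get nodes -l node-role.kubernetes.io/infra= -o wide
kubectl describe nodes -l node-role.kubernetes.io/infra= | grep -A1 Taints
```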
Check Local PVs and nodeAffinity
If your components use local storage (for example, TopoLVM or local PVs), confirm whether the PVs have `spec.nodeAffinity`. If so, either:
- Add all nodes referenced by `pv.spec.nodeAffinity` to the infra node group, or
- Redeploy components using a storage class without node affinity (for example Ceph/RBD).
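One way to check is to inspect the PVs directly; `<pv-name>` below is a placeholder for the PV bound to the component's PVC.

```shell
# Find the PVs bound to the component's PVCs, then inspect their nodeAffinity
kubectl get pv
kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'
```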
Example (Elasticsearch):
If the PV shows:
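A representative stanza for a hostname-pinned local PV looks like this (field names are from the Kubernetes PersistentVolume API; only the relevant fields are shown):

```yaml
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - 192.168.135.243
```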
Then Elasticsearch data is pinned to node 192.168.135.243. Ensure that node is part of the infra node group, or migrate storage.
The same principle applies to any Logging storage component that uses node-bound local storage.
Historical Kafka and ZooKeeper Nodes
For historical reasons, the Kafka and ZooKeeper nodes must also be labeled and tainted as infra nodes:
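A minimal sketch of the labeling and tainting, assuming the same illustrative `node-role.kubernetes.io/infra` key as the scheduling rules; `<node-name>` is a placeholder for each Kafka or ZooKeeper node.

```shell
# Illustrative label and taint keys; they must match what the plugin's
# nodeSelector and tolerations are configured to use
kubectl label node <node-name> node-role.kubernetes.io/infra=
kubectl taint node <node-name> node-role.kubernetes.io/infra=:NoSchedule
```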
Troubleshooting
Common issues and fixes:
Example error:
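A typical scheduler event for a Pod stuck in Pending looks like the following; the exact wording, node counts, and taint key vary by Kubernetes version and cluster configuration.

```text
Warning  FailedScheduling  default-scheduler
0/5 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/infra: },
2 node(s) didn't match Pod's node affinity/selector.
```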
Fix: add matching tolerations to the plugin configuration, and confirm that the selected infra nodes also satisfy the required storage placement constraints.