Release Notes
Features and Enhancements
Support for Kubernetes 1.34
ACP 4.3 adds support for Kubernetes 1.34 for platform-managed cluster scenarios.
For upgrades to ACP 4.3, the workload-cluster compatible versions are 1.34, 1.33, 1.32, and 1.31. This compatible-version requirement determines whether the global cluster can be upgraded and is separate from the third-party cluster management range.
For more information, see Kubernetes Support Matrix.
CVO-Based Cluster Upgrade Workflow
ACP 4.3 introduces a Cluster Version Operator (CVO)-based upgrade workflow for both global and workload clusters.
Key capabilities include:
- Preparing upgrade artifacts and the upgrade controller with `bash upgrade.sh`
- Running preflight checks before execution
- Requesting upgrades from the Web Console or by updating `ClusterVersionShadow.spec.desiredUpdate`
- Inspecting conditions, preflight results, stages, and history from `cvsh.status`
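As an illustration of the declarative path, a request via the resource might look like the following. Only `spec.desiredUpdate` is named in these notes; the API group, metadata, and the layout of the version field are assumptions, not confirmed API shapes.

```yaml
# Hypothetical sketch: requesting an upgrade by setting the desired update
# on the ClusterVersionShadow resource (cvsh). apiVersion and the
# desiredUpdate field layout are illustrative assumptions.
apiVersion: platform.example.io/v1   # assumed group/version
kind: ClusterVersionShadow
metadata:
  name: version
spec:
  desiredUpdate:
    version: "4.3.0"                 # assumed field; target platform version
```

Progress can then be inspected from `cvsh.status` (conditions, preflight results, stages, and history).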
ACP CLI also introduces upgrade-oriented administrator commands such as `ac adm upgrade` and `ac adm upgrade status`, along with the `--to-latest`, `--to`, and `--allow-explicit-upgrade` flags, for requesting and troubleshooting workload cluster upgrades from the current context.
For operational guidance, see Upgrade.
Standalone Cluster Plugin Upgrade
ACP 4.3 adds standalone upgrade support for cluster plugins that use the Aligned or Agnostic life cycle.
The Cluster Plugins page now shows the plugin life cycle, and eligible plugins can be upgraded independently from the list page or details page. Core plugins continue to follow cluster upgrades.
MicroOS-Based Global Clusters on Huawei DCS
ACP 4.3 allows administrators to create the global cluster on Huawei DCS with MicroOS-based immutable infrastructure. This extends the immutable operating model from workload clusters to platform installation scenarios on DCS.
For more information, see About Immutable Infrastructure.
Huawei Cloud Stack Support in Immutable Infrastructure
ACP 4.3 adds Immutable Infrastructure support for Huawei Cloud Stack (HCS). The HCS provider documentation now covers provider overview, installation, cluster creation, node management, cluster upgrades, and provider APIs in the Immutable Infrastructure documentation set.
For more information, see About Immutable Infrastructure.
VMware vSphere Support in the 4.3 Cycle
ACP 4.3 introduces initial Immutable Infrastructure support for VMware vSphere. The provider work is now tracked in the Immutable Infrastructure documentation set, while the public installation details and finalized plugin naming are still being published.
For more information, see About Immutable Infrastructure.
New Web Console Preview Entry
ACP Core now provides the top-navigation anchor required by the next-generation Web Console experience. When Alauda Container Platform Web Console Base is installed on the global cluster, users in the Container Platform and Administrator views can open the new console through a Preview Next-Gen Console entry in a separate browser tab.
The experience is designed for gradual migration and works with the Web Console Base plugin on the global cluster and the Web Console Collector plugin on workload clusters.
Containerd 2.0 Baseline
ACP 4.3 upgrades the platform runtime baseline to containerd 2.0. Review runtime-dependent operational procedures before upgrading environments that rely on customized containerd configuration.
Expanded Third-Party Cluster Management Range
For third-party clusters, ACP 4.3 now accepts Kubernetes versions in the range `>=1.19.0 <1.35.0`.
This management range is separate from the compatible Kubernetes versions used to determine whether the global cluster can be upgraded.
Product documentation continues to publish only the Kubernetes versions that have passed product validation for third-party cluster support and the default Extend baseline.
Product validation for the Extend baseline covers the following capability areas:
- Installing and using Operators
- Installing and using Cluster Plugins
- ClickHouse-based logging
- VictoriaMetrics-based monitoring
This does not mean that all specific Operators or Cluster Plugins are covered by product validation.
For specific Operators or Cluster Plugins outside this baseline, refer to the relevant product documentation or contact technical support.
For more information, see Kubernetes Support Matrix and Import Standard Kubernetes Cluster.
Expanded Monitoring Plugin Configuration
ACP 4.3 expands the configuration options for the monitoring plugins, making it easier to adapt monitoring deployments to infra-node placement and different storage layouts.
For ACP Monitoring with VictoriaMetrics, administrators can now:
- Configure plugin-level node selectors and tolerations for workload placement on dedicated infra nodes
- Configure the data storage directory for VictoriaMetrics when **Storage Type** is `LocalVolume`
- Deploy VictoriaMetrics with more than three nodes, as the previous three-node limit has been removed
For ACP Monitoring with Prometheus, administrators can now configure plugin-level node selectors and tolerations, so monitoring workloads can be scheduled to dedicated infra nodes through plugin configuration.
If you previously used patch resources or override-based customizations to define node selectors or tolerations separately, you need to update the plugin configuration after upgrading to ACP 4.3. After the updated plugin configuration takes effect, you must remove the related patch resources or override settings.
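A minimal sketch of the kind of plugin-level placement configuration described above. The `nodeSelector` and `tolerations` keys follow standard Kubernetes scheduling conventions, but where they sit in the plugin spec is an assumption; consult the plugin's Installation documentation for the authoritative fields.

```yaml
# Hypothetical plugin configuration fragment: pins monitoring workloads
# to dedicated infra nodes. The label and taint keys shown here are
# illustrative examples, not documented defaults.
nodeSelector:
  node-role.kubernetes.io/infra: ""
tolerations:
  - key: node-role.kubernetes.io/infra
    operator: Exists
    effect: NoSchedule
```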
For operational guidance, see Installation and Planning Infra Nodes for Monitoring.
StatefulSet Cross-Cluster Application Disaster Recovery Solution
This release introduces cross-cluster disaster recovery capabilities for stateful applications. Built on an Active-Passive dual-center architecture, it combines Alauda Build of VolSync asynchronous data sync and GitOps configuration distribution to achieve minute-level RTO failover.
Key Highlights:
- The primary cluster handles all read/write traffic; the standby cluster maintains a warm data replica via periodic rsync snapshots (RPO > 0).
- Supports three operational scenarios: planned migration, emergency failover, and failback.
- The standby cluster runs with `replicas=0` by default; storage and compute resources remain in cold standby, handling no business traffic.
- Suitable for workloads without strict zero-data-loss requirements (RPO = 0). For financial or transactional core applications, use native database replication instead.
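The periodic rsync replication can be sketched with an upstream VolSync `ReplicationSource`, assuming the Alauda Build of VolSync follows the upstream API; the PVC name, schedule, destination address, and Secret name are illustrative.

```yaml
# Sketch of a VolSync ReplicationSource using the rsync mover to push
# periodic snapshots of a primary-cluster PVC to the standby cluster.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: app-data-source
spec:
  sourcePVC: app-data              # PVC on the primary cluster to replicate
  trigger:
    schedule: "*/10 * * * *"       # periodic sync interval; drives the RPO
  rsync:
    copyMethod: Snapshot           # sync from a point-in-time snapshot
    address: standby.example.com   # assumed rsync destination on the standby cluster
    sshKeys: app-data-ssh-keys     # assumed Secret holding SSH keys for the mover
```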
For more details, see: Cross-Cluster Application Disaster Recovery for Stateful Applications
Alauda Container Platform Registry - Image Management Enhancements
This release introduces `ac images` and `ac adm prune images` commands, enabling full lifecycle management of Registry images from the command line.
- `ac get images`: List images in the registry. Results are scoped to namespaces the current user has permissions for, with support for namespace filtering and multiple output formats (`table`, `json`, `yaml`, `wide`).
- `ac delete images`: Delete one or more image tags by registry path. Built-in namespace permission checks; runs in dry-run mode by default to preview the impact, and requires `--confirm` to perform actual deletion.
- `ac adm prune images`: Admin command to prune image manifests that are not referenced by any cluster Pod. Flexible pruning policies include retention duration, retention count, allowlist, and `--all` scope. Optionally triggers Registry GC after pruning. Also supports scheduled cleanup via CronJob.
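One possible shape for the scheduled cleanup mentioned above is a standard Kubernetes CronJob. Only the `ac adm prune images` command and its `--all` scope come from these notes; the image name, schedule, and RBAC wiring are assumptions, not a documented manifest.

```yaml
# Hypothetical CronJob running the admin prune command on a schedule.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: registry-image-prune
spec:
  schedule: "0 3 * * 0"            # weekly, Sunday 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: prune
              image: example.com/acp-cli:latest   # assumed image containing the ac CLI
              command: ["ac", "adm", "prune", "images", "--all"]
```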
For more details, see: Cluster Image Registry Cleanup: Administrator Guide for Manual and Scheduled Tasks
Alauda Container Platform Project Application Essential (Alpha)
This release introduces the Alauda Container Platform Project Application Essential plugin, built on the brand-new Next-Gen Console frontend framework. Deployed on the global cluster, it delivers cross-cluster application orchestration and full lifecycle management from a project-centric perspective, fully respecting user permissions.
Key Highlights:
- Cross-cluster orchestration: Unified deployment of applications to multiple member clusters within a single project.
- Full lifecycle management: Supports
create,update,scale,rollback,delete, with real-time sync of application status across clusters. - Project isolation: All operations scoped to the project boundary, ensuring natural isolation between projects.
- Permission-aware: Strictly enforces RBAC permissions, displaying only resources the user is authorized to access.
Underlay and Egress Gateway Enhancements
ACP 4.3 expands core CNI networking capabilities around underlay access and egress gateway operations.
Key enhancements include:
- Better high-availability and fast-switching design for egress gateway workloads, reducing service impact during node maintenance or failover.
- Resource protection guidance and platform support for egress gateway Pods, helping reduce the risk of node resource contention under traffic spikes or replica growth.
- Support for configuring taints for egress gateway workloads, allowing better placement isolation on dedicated nodes.
- Support for managing VLAN sub-interfaces for underlay NICs.
- Added YAML editing support for subnet resources.
- Added support for node selector settings for centralized gateways.
- Added subnet CRD support for centralized gateway scenarios.
These enhancements make ACP more adaptable to complex enterprise networking environments and simplify migration from earlier exposure models to underlay-based designs.
Gateway API Enhancements
ACP 4.3 strengthens Gateway API as a key Layer 7 load balancing capability in the platform.
Key enhancements include:
- Support for host-network-based gateway deployment scenarios.
- Support for exposing services through MetalLB + Envoy Gateway proxy + underlay, so business traffic can avoid the management network.
- Support for custom VIP addresses for Gateway API, helping keep service exposure addresses stable across rebuilds or lifecycle changes.
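A stable VIP can be pinned on a Gateway via the upstream Gateway API `spec.addresses` field, as sketched below; whether ACP uses this exact mechanism for its custom VIP support is an assumption, and the address and gateway class are examples.

```yaml
# Sketch of a Gateway with a static VIP using upstream Gateway API fields.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: business-gw
spec:
  gatewayClassName: envoy-gateway   # assumed class backed by Envoy Gateway
  addresses:
    - type: IPAddress
      value: 192.0.2.10             # stable VIP, e.g. announced by MetalLB
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```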
Stateful Application Disaster Recovery with PVC-Based Protection
ACP 4.3 introduces stronger disaster recovery capabilities for stateful workloads, including PVC-based disaster recovery and support for VolSync-based backup and restore workflows for storage-backed applications such as MinIO.
This enhancement improves cross-cluster recovery readiness for stateful applications and provides a more practical protection path for storage-heavy production environments.
Ceph Storage Management Enhancements
ACP 4.3 improves storage operations and Ceph-based workload support.
Key enhancements include:
- Added support for placing disks into different Ceph pools from the UI.
- Improved operational support for Ceph disk replacement scenarios.
These changes improve day-2 storage operations and make Ceph-based environments easier to manage in production.
Virtualization Platform Enhancements
ACP 4.3 delivers several important virtualization-related improvements.
Key enhancements include:
- Improved VM creation and display workflows.
- Added support for Astra Linux in virtualization-related scenarios.
- Added support for multi-NIC and NIC hot-plug capabilities for virtual machines.
These enhancements improve virtualization usability and expand guest workload compatibility in enterprise environments.
Deprecated and Removed Features
Decommissioning of Operations Statistics
The metering and billing plugins are now generally available and fully cover the capabilities previously provided by the Operations Statistics feature. Therefore, the top-level Operations Statistics entry under Platform Management will be removed.
- For newly deployed platforms, Operations Statistics components are no longer installed. If you need metering or billing capabilities, use the Cost Management plugin.
- For upgraded platforms, metering collection by Operations Statistics stops after the upgrade, while historical data remains available. If you need data cleanup or migration, submit a support request.
Fixed Issues
- Fixed an issue where the olm-registry pod would continuously restart, preventing the OperatorHub from functioning properly. This was caused by the `seccompProfile: RuntimeDefault` security configuration added during CIS compliance hardening, which blocked the `clone` syscall required by CGO operations. The seccomp profile has been adjusted to allow necessary syscalls while maintaining security compliance. Fixed in ACP 4.3.0.
- Fixed a performance issue where permission validation during native application creation became extremely slow (10+ seconds) when the cluster had 60+ operators installed. Fixed in ACP 4.3.0.
- Fixed an issue where, when the etcd backup feature provided by Alauda Container Platform Cluster Enhancer was configured to back up etcd to S3 storage, the plugin failed to retrieve the Secret object referenced in `secretRef`. The root cause was that the plugin lacked the necessary RBAC permissions to read Secrets, so the S3 authentication information could not be retrieved. Fixed in ACP 4.3.0.
- Fixed two issues with the alert rule `cpaas-certificates-rule` when using Alauda Container Platform Monitoring for VictoriaMetrics with multiple clusters sharing the same storage: alert notifications did not differentiate between clusters when triggered, and the rule monitored customer secrets instead of only platform certificates.
- Fixed an issue where the metis component's storage limit was configured too small, causing the metis container to restart after exceeding the limit.
- Fixed the issue where pushing container images with a large number of data layers (over 100) to the built-in image repository failed.
- Fixed an issue where imagePullSecret was not automatically injected when workloads used custom ServiceAccounts, resulting in image pull failures.
- Fixed an issue where Pods could not pull images after image-registry `imagePullSecret` auto-rotation created a new Secret and deleted the old one: legacy Pods that still referenced the old Secret failed to pull images if they started only after that Secret had expired.
- Fixed an exception triggered during namespace creation. When the namespace creation page loaded slowly, the default selected cluster information could be unavailable on initial page access, triggering errors in other page interfaces and preventing project quotas from displaying correctly.
- Adjusted text in the real-time logging component: "Logging has ended" is now "End of logs".
- Fixed an issue where line breaks were inconsistent between Windows and Mac when editing ConfigMaps.
Known Issues
- When using `violet push` to upload a chart package, the push operation may complete successfully, but the package does not appear in the public-charts repository.
  Workaround: Push the chart package again.
- Application creation fails when the submitted YAML contains the `defaultMode` field.
  Affected path: Alauda Container Platform → Application Management → Application List → Create from YAML. Submitting YAML containing the `defaultMode` field (typically used for ConfigMap/Secret volume mount permissions) triggers validation errors and causes deployment failure.
  Workaround: Manually remove all `defaultMode` declarations before creating the application.
- When a pre-delete or post-delete hook is set in a Helm chart and the hook fails during template application deletion (chart uninstall), the application cannot be deleted.
  Workaround: Investigate and resolve the cause of the hook execution failure first.