Release Notes
4.2.0
Features and Enhancements
Support for Kubernetes 1.33
ACP now supports Kubernetes 1.33, delivering the latest upstream features, performance improvements, and security enhancements from the Kubernetes community.
ACP CLI (ac)
The new ACP CLI (ac) enables you to develop, build, deploy, and run applications on ACP with a seamless command-line experience.
Key capabilities include:
- kubectl-compatible commands
- Integrated authentication with ACP platform environments
- Unified session management across multiple environments
- ACP-specific extensions for platform access and cross-environment workflows
For full feature details, see:
ACP CLI (ac)
Hosted Control Plane (HCP)
Released:
- Alauda Container Platform Kubeadm Provider
- Alauda Container Platform Hosted Control Plane
- Alauda Container Platform SSH Infrastructure Provider
Lifecycle: Agnostic (released asynchronously with ACP)
Hosted Control Plane decouples the control plane from worker nodes by hosting each cluster's control plane as containerized components within a management cluster. This architecture reduces resource usage, speeds up cluster creation and upgrades, and provides improved scalability for large multi-cluster environments.
For more information, see:
About Hosted Control Plane
Enhanced User Permission Management
We've optimized RBAC management to improve usability and maintainability.
Enhanced Pod Security Policies with Kyverno
We've strengthened workload security capabilities through the Kyverno policy engine:
- Ready-to-Use Security Templates: 8 validated security policy templates built into the console, covering the Pod Security Standards levels (Privileged, Baseline, Restricted) plus additional profiles such as Nonroot
- One-Click Configuration: Quickly create policies from templates in the business view without manual YAML writing, effective immediately in specified namespaces
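As an illustrative sketch only (the policy and namespace names below are placeholders, and the console templates generate equivalent resources for you), a Kyverno policy that enforces the Baseline Pod Security Standard in a specific namespace looks like this:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-baseline            # placeholder policy name
spec:
  validationFailureAction: Enforce  # block non-compliant Pods instead of only auditing
  background: true
  rules:
    - name: baseline-pod-security
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - my-namespace      # placeholder namespace
      validate:
        podSecurity:
          level: baseline           # Pod Security Standards level to enforce
          version: latest
```

Switching `level` to `restricted` enforces the strictest profile; `validationFailureAction: Audit` reports violations without blocking workloads.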
Next-Generation Gateway API powered by Envoy Gateway
This release introduces a new Gateway API implementation based on Envoy Gateway. It provides a unified L7 traffic entry, stays aligned with the community Gateway API specification, and lays the foundation for richer traffic policies and ecosystem integrations.
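A minimal Gateway API configuration under this implementation might look like the following sketch. The GatewayClass name `envoy-gateway`, the hostname, and all resource names are assumptions for illustration; consult the product documentation for the actual class name in your installation:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway               # placeholder name
spec:
  gatewayClassName: envoy-gateway  # assumed class name; check your installation
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route                 # placeholder name
spec:
  parentRefs:
    - name: demo-gateway           # attach the route to the Gateway above
  hostnames:
    - demo.example.com             # placeholder hostname
  rules:
    - backendRefs:
        - name: demo-service       # placeholder backend Service
          port: 8080
```

Because these are standard Gateway API resources, routes written this way remain portable across conformant implementations.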
Domain-Based Rules for Egress Firewall
Egress Firewall now supports allow/deny rules based on domain names instead of only IP addresses. This enables fine-grained outbound access control for public SaaS services and external resources whose IP addresses change frequently.
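Conceptually, a domain-based rule replaces a CIDR match with a domain match. The sketch below is hypothetical: the real API group, kind, and field names are defined by the platform and may differ, so treat this only as an illustration of the rule shape:

```yaml
# Hypothetical sketch only: the actual CRD and field names
# are platform-specific and may differ.
apiVersion: example.io/v1        # placeholder API group
kind: EgressFirewall
metadata:
  name: allow-saas               # placeholder name
  namespace: my-namespace        # placeholder namespace
spec:
  rules:
    - action: Allow
      domain: api.example.com    # match by domain instead of IP/CIDR
    - action: Deny
      domain: "*"                # deny all other outbound traffic
```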
New Endpoint Health Checker for Faster Failover
A new Endpoint Health Checker detects failures such as node crashes and network partitions more quickly and removes unhealthy backends promptly. This significantly shortens traffic failover time and reduces the risk of service interruption.
New Local Storage Operator for Easier Ceph/TopoLVM Management
The newly introduced Local Storage (Alauda Build of Local Storage) Operator greatly simplifies deployment and disk management for Ceph and TopoLVM. During deployment, you can list all available disks across the cluster, including model, capacity, and other key attributes, and select which disks to bring under management.
For disk binding, Ceph and TopoLVM now prefer using device IDs rather than mount paths, preventing storage issues caused by device name changes after node reboot or device re-detection.
Other Key Changes
Lifecycle Change for Logging Plugins
The lifecycle status for logging-related plugins has been changed from Aligned to Agnostic (released asynchronously with ACP).
Affected plugins:
- Alauda Container Platform Log Essentials (new in this release)
- Alauda Container Platform Log Storage for ClickHouse
- Alauda Container Platform Log Storage for Elasticsearch
- Alauda Container Platform Log Collector
For more information, see:
About Logging Service
Enhanced Default Security Level for Namespaces
Starting from v4.2.0, the default PSA policy for newly created namespaces (via web console or CLI) is changed from Baseline to Restricted.
- Baseline: Prohibits known privilege escalations, provides moderate security
- Restricted: Follows Pod security best practices with strictest requirements
WARNING
The Restricted policy enforces very strict configuration requirements for Pods. If your business requires capabilities such as privileged mode, running as the root user, mounting host paths, or using the host network, these workloads will fail to run in namespaces that default to the Restricted policy.
Impact Analysis:
- This change only affects newly created namespaces
- Workloads requiring privileged capabilities (e.g., root user, hostPath mounts) will not run directly
Recommended Solutions:
- Modify your application configuration to meet the security requirements of the Restricted policy
- Manually set the namespace policy back to Baseline if necessary
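Both solutions map to standard Kubernetes mechanisms. The sketch below shows a securityContext that satisfies the Restricted profile, and a namespace relabeled back to Baseline via the Pod Security Admission label (resource names and the image are placeholders):

```yaml
# Option 1: make the workload Restricted-compliant
apiVersion: v1
kind: Pod
metadata:
  name: demo                     # placeholder name
spec:
  securityContext:
    runAsNonRoot: true           # Restricted forbids running as root
    seccompProfile:
      type: RuntimeDefault       # Restricted requires a seccomp profile
  containers:
    - name: app                  # placeholder name
      image: registry.example.com/app:latest   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]          # Restricted requires dropping all capabilities
---
# Option 2: relax the namespace back to Baseline
apiVersion: v1
kind: Namespace
metadata:
  name: legacy-apps              # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
```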
MinIO in Maintenance Mode
MinIO (Alauda Build of MinIO) has entered maintenance mode. Only security fixes will be provided going forward, and no new features are planned. Existing MinIO clusters can continue to run; Ceph Object is the recommended solution for new object storage requirements.
Calico in Maintenance Mode
The Calico (Alauda Container Platform Networking for Calico) CNI plugin has entered maintenance mode. We will only address security-related issues, and it is no longer the default recommended network option. Existing Calico clusters remain supported, while new clusters should use kube-ovn as the standard CNI.
Ingress Nginx Switched to Operator
Ingress Nginx (Alauda Build of Ingress Nginx) has been migrated from a cluster plugin to an Operator-based deployment and management model. Existing Ingress resources continue to work after the upgrade, and subsequent operations should be performed through the Operator. Although the upstream community version of Ingress Nginx is no longer updated, we will continue to provide bug fixes and security patches for this distribution.
Deprecated and Removed Features
Kubernetes Version Upgrade Policy Update
Starting from ACP 4.2, upgrading the Kubernetes version is no longer optional. When performing a cluster upgrade, the Kubernetes version must be upgraded together with other platform components.
This change ensures version consistency across the cluster and reduces future maintenance windows.
ALB Deprecated Starting from v4.2.0
The ALB (Alauda Container Platform Ingress Gateway) is marked as deprecated as of v4.2.0. New clusters and new users should adopt the Envoy Gateway–based Gateway API as the primary option. Existing clusters using ALB will keep working after the upgrade, but we strongly recommend planning and executing a migration to the Gateway API for long-term support and feature evolution.
Flannel Fully Removed
The Flannel (Alauda Container Platform Networking for Flannel) CNI plugin has been completely removed from the platform. Clusters still using Flannel must migrate to kube-ovn before upgrading to this release or any later version. Please plan and complete the migration in advance to avoid service disruption caused by switching the CNI.
Fixed Issues
- Previously, the status field of an upmachinepool resource stored the associated machine resources without any ordering. This caused the resource to be updated on every reconcile loop, resulting in excessively large audit logs. This issue has now been fixed.
- Previously, on platforms with a large number of clusters, the project quota of a single cluster could not be updated after quotas were set using the batch project quota feature. This issue has been fixed.
- Previously, when updating the password of the LDAP bind account, submitting the configuration returned a network validation error, causing the change to fail. This issue has been fixed.
- Previously, when creating a cluster-level Instance in OperatorHub, the web console automatically injected a metadata.namespace field, which caused a 404 error. This issue has now been fixed.
- Previously, when a user account was automatically disabled by the system due to prolonged inactivity, it would be disabled again immediately after an administrator manually reactivated it. This issue has been fixed.
- Previously, after uninstalling an Operator, the Operator status was incorrectly displayed as Absent, even though the Operator was actually Ready. Users had to manually re-upload the Operator using violet upload. This issue has now been resolved, and the Operator correctly appears as Ready after uninstallation.
- In some cases, installing a new Operator version after uploading it via violet upload would fail unexpectedly. This intermittent issue has been fixed.
- When an Operator or Cluster Plugin included multiple frontend extensions, the left-side navigation of these extensions could become unresponsive. The temporary workaround required users to add the annotation cpaas.io/auto-sync: "false" to the extension's ConfigMap. This issue has now been permanently fixed, and the workaround annotation is no longer required.
- Previously, if a cluster contained nodes with an empty Display Name, users were unable to filter nodes by typing in the node selector dropdown on the node details page. This issue has been resolved.
- Previously, temporary files were not deleted after log archiving, preventing disk space from being reclaimed. This issue has been fixed.
- Uploading multiple packages from a folder using violet upload previously failed when disk space became insufficient. violet now cleans up already-uploaded packages promptly, preventing these errors.
- Fixed an issue where modifying the Pod Security Policy when importing a namespace into a project did not take effect.
- Fixed an issue where the monitoring dashboards for workloads (e.g., Applications, Deployments) in Workload clusters failed to display when the global cluster was upgraded while the Workload clusters remained un-upgraded.
- Fixed an issue causing the KubeVirt Operator deployment to fail when upgrading on Kubernetes versions prior to 1.30.
- Fixed an inconsistency where Secrets created through the web console only stored the Username and Password, and lacked the complete authentication field (auth) compared to those created via kubectl create. This issue previously caused authentication failures for build tools (e.g., buildah) that rely on the complete auth data.
Known Issues
- When using violet push to upload a chart package, the push operation may complete successfully, but the package does not appear in the public-charts repository.
Workaround: Push the chart package again.
- Although the installation page provides form fields for configuring labels and annotations for the global cluster, these configurations are not applied in practice.
- If a Custom Application includes an alert resource whose metrics expression uses customized metrics, deploying the application to a namespace whose name differs from the original will cause the deployment to fail. This applies whether the application is exported as a chart or as application YAML, imported as a chart into the platform, or created directly from the YAML.
To resolve this, manually update the metrics expression in the alert resource within the chart or YAML file, changing the namespace label value to match the target deployment namespace.
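For instance, if the exported alert expression pins the original namespace in a label matcher, edit that label value to the target namespace before importing. The metric and namespace names below are hypothetical:

```yaml
# Exported alert rule (hypothetical metric; original namespace "ns-a"):
expr: sum(rate(app_requests_total{namespace="ns-a"}[5m])) > 100
# Before deploying into namespace "ns-b", change the namespace label value:
#   {namespace="ns-b"}
```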
- The default pool .mgr created by ceph-mgr uses the default CRUSH rule, which may fail to properly select OSDs in a stretched cluster. To resolve this, the .mgr pool must be created using CephBlockPool. However, due to timing uncertainties, ceph-mgr might attempt to create the .mgr pool before the Rook Operator completes its setup, leading to conflicts.
If encountering this issue, restart the rook-ceph-mgr Pod to trigger reinitialization.
If unresolved, manually clean up the conflicting .mgr pool and redeploy the cluster to ensure proper creation order.
- Application creation failure triggered by the defaultMode field in YAML.
Affected Path: Alauda Container Platform → Application Management → Application List → Create from YAML. Submitting YAML containing the defaultMode field (typically used for ConfigMap/Secret volume mount permissions) triggers validation errors and causes deployment failure.
Workaround: Manually remove all defaultMode declarations before application creation.
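For example, in a volume definition like the one below, the commented-out line is the field that must be removed before submitting (the volume and ConfigMap names are placeholders):

```yaml
volumes:
  - name: config                 # placeholder volume name
    configMap:
      name: app-config           # placeholder ConfigMap name
      # defaultMode: 420         # remove this field before creating the application
```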
- When a Helm chart defines pre-delete or post-delete hooks, deleting a template application uninstalls the chart; if a hook fails to execute, the application cannot be deleted. In this case, investigate and resolve the cause of the hook failure first, then retry the deletion.
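For reference, a chart resource becomes a deletion hook through the standard Helm hook annotations shown below; if such a Job fails, Helm cannot complete the uninstall, which blocks application deletion. All names and the image are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup                          # placeholder name
  annotations:
    "helm.sh/hook": pre-delete           # run before the release is uninstalled
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cleanup                  # placeholder name
          image: registry.example.com/cleanup:latest   # placeholder image
```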