Manage PAC Component

For Administrators Only

This guide is for cluster administrators only. It covers PAC component deployment, configuration, and maintenance tasks that require cluster administrator permissions.

This guide explains how to deploy, update, and uninstall the Pipelines-as-Code (PAC) component on Kubernetes platforms. Regular users who only need to use PAC, rather than manage it, should refer to the user-facing documentation instead.

Prerequisites

Before managing PAC, ensure you have:

  • Kubernetes cluster (version 1.24 or higher)
  • Tekton Operator installed and running
  • Cluster administrator permissions
  • kubectl installed and configured to access your cluster
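You can sanity-check these prerequisites from the command line. A sketch, assuming the Tekton Operator runs in the tekton-operator namespace (adjust for your installation):

```shell
# Check client version, operator pods, and permission to create the PAC CR
OPERATOR_NS="tekton-operator"  # assumption: adjust to where your operator runs
kubectl version --client
kubectl get pods -n "$OPERATOR_NS"
kubectl auth can-i create openshiftpipelinesascodes.operator.tekton.dev
```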

Deploy PAC Component

NOTE

Platform Support: Although the resource name contains "OpenShift", PAC can be deployed on Kubernetes platforms through the Tekton Operator patch that adds PAC controller support.

PAC is deployed by directly creating the OpenShiftPipelinesAsCode CR. The operator will automatically create and manage all necessary resources for PAC.

Step 1: Create the OpenShiftPipelinesAsCode CR

Create a YAML file named pac.yaml:

apiVersion: operator.tekton.dev/v1alpha1
kind: OpenShiftPipelinesAsCode
metadata:
  name: pipelines-as-code
spec:
  settings:
    application-name: Pipelines as Code CI
    hub-url: http://tekton-hub-api.tekton-pipelines:8000/v1
    remote-tasks: "true"
    secret-auto-create: "true"
  targetNamespace: tekton-pipelines  # Default namespace, you can customize this

Important:

  • The resource name must be pipelines-as-code, otherwise the operator will not deploy the PAC component.
  • The targetNamespace field specifies where PAC components will be deployed. The default is tekton-pipelines, but you can use any namespace name.

Create the namespace if it doesn't exist:

kubectl create namespace tekton-pipelines  # Or your custom namespace name
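Note that kubectl create namespace fails if the namespace already exists. An idempotent variant, useful in scripts (a sketch):

```shell
NS="tekton-pipelines"  # or your custom namespace name
# Render the namespace manifest client-side and apply it, so reruns are harmless
kubectl create namespace "$NS" --dry-run=client -o yaml | kubectl apply -f -
```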

Step 2: Apply the Configuration

Apply the CR to your cluster:

kubectl apply -f pac.yaml

Example output:

openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code created

Step 3: Verify Deployment

Check the OpenShiftPipelinesAsCode CR status:

kubectl get openshiftpipelinesascodes.operator.tekton.dev

The output should show the CR with Ready status:

NAME                VERSION   READY   REASON
pipelines-as-code   0.x.x     True    Ready

Check the TektonInstallerSet status (replace <pac-namespace> with your actual PAC namespace, default is tekton-pipelines):

kubectl get tektoninstallersets -n <pac-namespace> | grep pipelinesascode

Example output:

NAME                              READY   REASON
pipelinesascode-installer-set     True    Ready

Understanding TektonInstallerSet

TektonInstallerSet is an internal operator resource used by the Tekton Operator to manage PAC component installation and lifecycle. It acts as a template that the operator uses to create and manage all PAC-related resources (Deployments, Services, ConfigMaps, RBAC, etc.).

Important:

  • You should never create, modify, or delete TektonInstallerSet directly
  • The operator automatically creates and manages it when you create the OpenShiftPipelinesAsCode CR
  • You can check its status for troubleshooting, but all changes should be made through the OpenShiftPipelinesAsCode CR
  • When you delete the OpenShiftPipelinesAsCode CR, the operator automatically deletes the TektonInstallerSet and all related resources


Verify the PAC pods are running:

kubectl get pods -n <pac-namespace> | grep pipelines-as-code

Example output:

NAME                                      READY   STATUS    RESTARTS   AGE
pipelines-as-code-controller-xxxxx        1/1     Running   0          5m
pipelines-as-code-watcher-xxxxx           1/1     Running   0          5m
pipelines-as-code-webhook-xxxxx           1/1     Running   0          5m

Note: Throughout this document, <pac-namespace> or tekton-pipelines refers to the namespace where PAC is deployed. Replace it with your actual namespace name if different.

You should see three pods in Running state:

  • pipelines-as-code-controller-*
  • pipelines-as-code-watcher-*
  • pipelines-as-code-webhook-*
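Instead of polling manually, you can block until all PAC pods report Ready; a sketch using kubectl wait:

```shell
PAC_NAMESPACE="tekton-pipelines"  # replace with your PAC namespace
# Waits up to 2 minutes for every PAC pod to become Ready
kubectl wait pod -n "$PAC_NAMESPACE" \
  -l app.kubernetes.io/part-of=pipelines-as-code \
  --for=condition=Ready --timeout=120s
```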

Configure Access

The PAC component must be exposed so that webhook events from Git providers can reach it. The PAC controller URL is what you enter when configuring webhooks in your Git provider.

Important: Before configuring repositories, you must expose the PAC controller using one of the methods below. The tkn pac create repo command can automatically detect the controller URL if it's exposed via Ingress, but you need to set it up first.

You can use one of the following methods to expose the PAC controller:

Using Ingress (HTTP)

Domain Name and DNS Configuration

If you have a domain name:

  • Configure the host field with your domain name (e.g., pac.example.com)
  • Ensure the domain name can be resolved via DNS to your Ingress Controller's IP address
  • Configure a DNS A record: pac.example.com → <Ingress-Controller-IP>

If you don't have a domain name:

  • Leave the host field empty or remove the host line from the Ingress configuration
  • Access PAC using IP address and port: http://<Ingress-Controller-IP>:<port>
  • Git providers (GitLab, GitHub) can still send webhooks to the IP address

Create an Ingress resource (an IngressClass must already exist in the cluster; replace <pac-namespace> with your PAC namespace, default is tekton-pipelines):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pipelines-as-code
  namespace: <pac-namespace>  # Default: tekton-pipelines
spec:
  rules:
  - host: pac.example.com
    http:
      paths:
      - backend:
          service:
            name: pipelines-as-code-controller
            port:
              number: 8080
        path: /
        pathType: Prefix

Apply the Ingress:

kubectl apply -f ingress.yaml

Example output:

ingress.networking.k8s.io/pipelines-as-code created

Alternative: Ingress without domain name (host-less)

If you don't have a domain name, you can create an Ingress without the host field:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pipelines-as-code
  namespace: <pac-namespace>  # Default: tekton-pipelines
spec:
  rules:
  - http:  # No host field specified
      paths:
      - backend:
          service:
            name: pipelines-as-code-controller
            port:
              number: 8080
        path: /
        pathType: Prefix

Then access PAC using the Ingress Controller's IP address:

# Get Ingress Controller IP
kubectl get ingress pipelines-as-code -n <pac-namespace>

# Access PAC
# http://<INGRESS-IP>/
# Example: http://192.168.1.100/

Using Ingress (HTTPS)

HTTPS Configuration Requirements

TLS Certificate Requirements:

  • You need a valid TLS certificate for your domain name
  • The certificate's Common Name (CN) or Subject Alternative Name (SAN) must match your domain
  • For production, use certificates from trusted CA (Let's Encrypt, DigiCert, etc.)

Domain Name Configuration:

  • HTTPS Ingress requires a domain name configured in the host field
  • Ensure DNS resolution: pac.example.com → <Ingress-Controller-IP>
  • Access via IP address with HTTPS may cause certificate validation errors

First, create a TLS Secret (replace <pac-namespace> with your PAC namespace, default is tekton-pipelines):

apiVersion: v1
kind: Secret
metadata:
  name: pipelines-as-code-tls
  namespace: <pac-namespace>  # Default: tekton-pipelines
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-tls-certificate>
  tls.key: <base64-encoded-tls-key>
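Alternatively, kubectl can base64-encode the certificate files for you. A sketch, assuming your certificate and key are in local files tls.crt and tls.key:

```shell
PAC_NAMESPACE="tekton-pipelines"  # replace with your PAC namespace
# Creates the same kubernetes.io/tls Secret from local files
kubectl create secret tls pipelines-as-code-tls \
  --cert=tls.crt --key=tls.key \
  -n "$PAC_NAMESPACE"
```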

Then create an HTTPS Ingress (replace <pac-namespace> with your PAC namespace, default is tekton-pipelines):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pipelines-as-code
  namespace: <pac-namespace>  # Default: tekton-pipelines
spec:
  rules:
  - host: pac.example.com
    http:
      paths:
      - backend:
          service:
            name: pipelines-as-code-controller
            port:
              number: 8080
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - pac.example.com
    secretName: pipelines-as-code-tls

Using NodePort

Create a NodePort Service (replace <pac-namespace> with your PAC namespace, default is tekton-pipelines):

apiVersion: v1
kind: Service
metadata:
  name: pipelines-as-code-controller-nodeport
  namespace: <pac-namespace>  # Default: tekton-pipelines
spec:
  ports:
    - name: http-listener
      port: 8080
      protocol: TCP
      targetPort: 8082  # PAC controller listens on port 8082
      nodePort: 30080  # Optional: specify a fixed NodePort
  selector:
    app.kubernetes.io/part-of: pipelines-as-code
    app.kubernetes.io/component: controller
  type: NodePort

Important:

  • The targetPort must be 8082, which is the port the PAC controller pod listens on for webhook events
  • The port (8080) is the Service port (used for internal cluster communication)
  • The nodePort (30080) is the external port accessible from outside the cluster
  • For Ingress, the Service port is 8080, which routes to the controller's port 8082 internally
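To sanity-check this routing without any external exposure, you can port-forward to the controller Service and probe it locally; a sketch (the response body depends on your PAC version):

```shell
PAC_NAMESPACE="tekton-pipelines"  # replace with your PAC namespace
# Forward local port 8080 to Service port 8080 (which targets 8082 in the pod)
kubectl port-forward -n "$PAC_NAMESPACE" svc/pipelines-as-code-controller 8080:8080 &
PF_PID=$!
sleep 2
curl -si http://localhost:8080/ | head -n 1   # expect an HTTP status line
kill "$PF_PID"
```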

Get the NodePort (replace <pac-namespace> with your PAC namespace, default is tekton-pipelines):

kubectl get service -n <pac-namespace> pipelines-as-code-controller-nodeport -o jsonpath='{.spec.ports[?(@.name=="http-listener")].nodePort}'

Example output:

30080

Access PAC at http://<node-ip>:<node-port>.

How to Get PAC Controller URL

After exposing the PAC controller, you can get the URL using the following methods:

If Using Ingress

Get the Ingress host (replace <pac-namespace> with your PAC namespace, default is tekton-pipelines):

kubectl get ingress pipelines-as-code -n <pac-namespace> -o jsonpath='{.spec.rules[0].host}'

Example output:

pac.example.com

The controller URL will be:

  • HTTP: http://<ingress-host>
  • HTTPS: https://<ingress-host>

If Using NodePort

Get the NodePort and node IP:

# Set your PAC namespace (default: tekton-pipelines)
PAC_NAMESPACE="tekton-pipelines"

# Get NodePort
NODEPORT=$(kubectl get service -n ${PAC_NAMESPACE} pipelines-as-code-controller-nodeport -o jsonpath='{.spec.ports[?(@.name=="http-listener")].nodePort}')

# Get Node IP (use the first node's internal IP)
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

echo "PAC Controller URL: http://${NODE_IP}:${NODEPORT}"

Example output:

PAC Controller URL: http://192.168.1.100:30080

Automatic Detection by tkn pac

When using tkn pac create repo, the CLI automatically detects the controller URL by:

  1. Checking for Ingress resources pointing to the PAC controller service
  2. Checking for LoadBalancer services
  3. Checking for NodePort services
  4. If none found, prompting you to manually enter the URL

If automatic detection fails, you can manually enter the controller URL when prompted.

For Regular Users

If you're a regular user and need to find the PAC controller URL:

  1. Try the query commands above (if you have cluster access)
  2. If the commands don't work or you don't have permission, contact your PAC administrator to get the URL
  3. The administrator can find it using the methods in this section

Note: The controller URL must be accessible from Git provider servers to receive webhook events.

Configuration Settings

You can customize PAC behavior through the settings field in the OpenShiftPipelinesAsCode CR:

  • application-name: Name displayed in Git provider UI (default: Pipelines as Code CI)
  • hub-url: Tekton Hub API URL (default: http://tekton-hub-api.tekton-pipelines:8000/v1, cluster-internal, default namespace)
  • remote-tasks: Enable remote task resolution (default: true)
  • secret-auto-create: Automatically create secrets (default: true)
  • error-detection-from-container-logs: Detect errors from container logs (default: false)
  • error-log-snippet: Show error log snippets (default: true)
  • custom-console-name: Display name for custom console links in Git provider UI (default: empty)
  • custom-console-url: Base URL for the custom console, e.g., OpenShift Console (default: empty)
  • custom-console-url-pr-details: URL template for the PR/MR details page; supports {{ namespace }} and {{ pr }} (default: empty)
  • custom-console-url-pr-tasklog: URL template for the PR/MR task log page; supports {{ namespace }}, {{ pr }}, and {{ task }} (default: empty)
  • custom-console-url-namespace: URL template for the namespace page; supports {{ namespace }} (default: empty)

Note on hub-url:

  • Default value points to the cluster-internal Tekton Hub service in the default namespace (tekton-pipelines)
  • Format: http://<service-name>.<namespace>:<port>/v1
  • If Tekton Hub is deployed in a different namespace, adjust the namespace in the URL accordingly
  • The namespace in the hub-url is the namespace where Tekton Hub is deployed, which may differ from your PAC namespace (specified by targetNamespace)
  • To use an external Hub (e.g., public Tekton Hub), set to https://api.hub.tekton.dev/v1
  • Only the PAC controller needs access to this URL
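To confirm that the hub-url is reachable from inside the cluster, you can run a one-off curl pod; a sketch (the curlimages/curl image is an assumption, use any image that ships curl):

```shell
HUB_URL="http://tekton-hub-api.tekton-pipelines:8000/v1"  # adjust the namespace if needed
# Prints the HTTP status code returned by the Hub API
kubectl run hub-check --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s -o /dev/null -w '%{http_code}\n' "$HUB_URL"
```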

Note on custom-console-name & custom-console-url:

  • These settings add custom console links to the Git provider UI: custom-console-name is the display name and custom-console-url is the console's base URL (e.g., a DevOps console).
  • The three custom-console-url-* templates control where individual links point: custom-console-url-pr-details for the PR/MR details page, custom-console-url-pr-tasklog for the task log page, and custom-console-url-namespace for the namespace page.
  • Together they let pipeline status links in the Git provider UI point to your own console instead of the default one.

For more information about PAC settings, see the Common Configuration Updates section.

Update PAC Component

Update Configuration

  1. Edit the OpenShiftPipelinesAsCode CR:

    kubectl edit openshiftpipelinesascodes.operator.tekton.dev pipelines-as-code
  2. Update the settings field as needed:

    spec:
      settings:
        application-name: "My Custom PAC"
        hub-url: http://tekton-hub-api.tekton-pipelines:8000/v1
        remote-tasks: "true"
        error-detection-from-container-logs: "true"
  3. Save and exit. The operator will automatically update the TektonInstallerSet and apply the changes.

Update Component Version

To update the PAC component version, upgrade the Tekton Operator:

  1. Update the Tekton Operator to the desired version
  2. The operator will automatically:
    • Delete the old TektonInstallerSet
    • Create a new TektonInstallerSet with the new PAC version
    • Update the OpenShiftPipelinesAsCode CR with the new version

Check the version after upgrade:

kubectl get openshiftpipelinesascodes.operator.tekton.dev pipelines-as-code -o jsonpath='{.status.version}'

Verify Updates

After updating the configuration:

  1. Check the OpenShiftPipelinesAsCode CR status:

    kubectl get openshiftpipelinesascodes.operator.tekton.dev pipelines-as-code

Example output:

NAME                VERSION           READY   REASON
pipelines-as-code   v0.36.0-2881c2a   True    

  2. Check if pods are restarting (replace <pac-namespace> with your PAC namespace):

    kubectl get pods -n <pac-namespace> | grep pipelines-as-code

Example output:

pipelines-as-code-controller-665b9588c8-q7ftg        1/1     Running   0          4d2h
pipelines-as-code-watcher-86d58cb688-xkzxg           1/1     Running   0          4d2h
pipelines-as-code-webhook-6f6b79c7b6-w8shs           1/1     Running   0          4d2h

  3. Check pod logs for any errors:

    kubectl logs -n <pac-namespace> -l app.kubernetes.io/part-of=pipelines-as-code --tail=50

Example output:

{"level":"info","ts":"2025-11-27T19:28:35.773Z","logger":"pipelinesascode","caller":"adapter/adapter.go:84","msg":"Starting Pipelines as Code version: nightly-20251126-"}
{"level":"info","ts":"2025-11-27T19:28:35.774Z","logger":"pipelinesascode","caller":"injection/health_check.go:43","msg":"Probes server listening on port 8080"}
...

  4. Verify the configuration is applied:

    kubectl get configmap -n <pac-namespace> pipelines-as-code -o yaml

Example output:

apiVersion: v1
kind: ConfigMap
metadata:
  name: pipelines-as-code
  namespace: <pac-namespace>
data:
  application-name: Pipelines as Code CI
  hub-url: https://api.hub.tekton.dev/v1
  remote-tasks: "true"
  secret-auto-create: "true"
  error-detection-from-container-logs: "true"
  error-log-snippet: "true"
  ...

Rollback Configuration

If you need to rollback a configuration change:

  1. Restore the previous configuration in the OpenShiftPipelinesAsCode CR:

    kubectl edit openshiftpipelinesascodes.operator.tekton.dev pipelines-as-code
  2. Or restore from a backup taken before the change.

To create a backup before making changes:

kubectl get openshiftpipelinesascodes.operator.tekton.dev pipelines-as-code -o yaml > pac-backup.yaml

Restore from the backup:

kubectl apply -f pac-backup.yaml

Common Configuration Updates

Change Application Name

spec:
  settings:
    application-name: "New Application Name"

Enable Error Detection

spec:
  settings:
    error-detection-from-container-logs: "true"
    error-detection-max-number-of-lines: "100"

Update Hub URL

If your Tekton Hub is deployed in a different namespace or you want to use an external Hub:

spec:
  settings:
    # For cluster-internal Hub in different namespace
    hub-url: "http://tekton-hub-api.<your-namespace>:8000/v1"
    
    # Or for external/public Hub
    hub-url: "https://api.hub.tekton.dev/v1"

To find your Tekton Hub service:

kubectl get svc -A | grep tekton-hub-api

Example output:

tekton-pipelines   tekton-hub-api   ClusterIP   10.96.123.45   <none>   8000/TCP   10m

Disable Remote Tasks

spec:
  settings:
    remote-tasks: "false"

Configure Custom Console Links

To integrate PAC with your custom console, configure the custom console links. This allows pipeline status links in Git providers (GitLab, GitHub, etc.) to point to your custom console.

Configuration Example

spec:
  settings:
    # Custom console display name
    custom-console-name: "My Console"
    
    # Base URL for the custom console (Console entry point)
    custom-console-url: "https://console.example.com"
    
    # URL template for PipelineRun details page
    # Format: /console-acp/workspace/{{ namespace }}~CLUSTER_NAME~{{ namespace }}/pipeline/pipelineRuns/detail/{{ pr }}
    # Replace CLUSTER_NAME with your actual cluster name (e.g., my-cluster, business-1)
    custom-console-url-pr-details: "https://console.example.com/console-acp/workspace/{{ namespace }}~my-cluster~{{ namespace }}/pipeline/pipelineRuns/detail/{{ pr }}"
    
    # URL template for TaskRun log page
    # Format: /console-acp/workspace/{{ namespace }}~CLUSTER_NAME~{{ namespace }}/pipeline/pipelineRuns/detail/{{ pr }}?tab=task_overview&id={{ task }}
    custom-console-url-pr-tasklog: "https://console.example.com/console-acp/workspace/{{ namespace }}~my-cluster~{{ namespace }}/pipeline/pipelineRuns/detail/{{ pr }}?tab=task_overview&id={{ task }}"
    
    # URL template for namespace pipeline list page
    # Format: /console-acp/workspace/{{ namespace }}~CLUSTER_NAME~{{ namespace }}/pipeline/pipelineRuns
    custom-console-url-namespace: "https://console.example.com/console-acp/workspace/{{ namespace }}~my-cluster~{{ namespace }}/pipeline/pipelineRuns"

How to Configure

Step 1: Identify your cluster name

Contact your cluster administrator to get the correct cluster name. Common examples:

  • my-cluster - generic cluster name
  • business-1 - for business cluster 1
  • dev-cluster - for development cluster

The cluster name appears in the URL path as ~CLUSTER_NAME~ (note the tilde characters).

Step 2: Set the console URL

Set custom-console-url to your console's entry point (without trailing slash):

  • https://console.example.com
  • https://devops.example.com

Step 3: Configure URL templates

Replace CLUSTER_NAME in the URL templates with your actual cluster name. PAC will automatically replace the following variables:

  • {{ namespace }}: The namespace where the PipelineRun is executed
  • {{ pr }}: The PipelineRun name
  • {{ task }}: The Task name in the PipelineRun
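The substitution is plain string replacement, so you can preview a template locally before putting it into the CR. A bash sketch with hypothetical values:

```shell
# Hypothetical template and values for illustration
TEMPLATE='https://console.example.com/console-acp/workspace/{{ namespace }}~my-cluster~{{ namespace }}/pipeline/pipelineRuns/detail/{{ pr }}'
NAMESPACE="my-project"
PR_NAME="my-app-build-abc123"

# Substitute each variable the way PAC does
URL=$(printf '%s' "$TEMPLATE" \
  | sed -e "s|{{ namespace }}|$NAMESPACE|g" -e "s|{{ pr }}|$PR_NAME|g")
echo "$URL"
# prints https://console.example.com/console-acp/workspace/my-project~my-cluster~my-project/pipeline/pipelineRuns/detail/my-app-build-abc123
```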

Example: Complete Configuration

For a cluster named my-cluster with console at https://console.example.com:

spec:
  settings:
    custom-console-name: "My Console"
    custom-console-url: "https://console.example.com"
    custom-console-url-pr-details: "https://console.example.com/console-acp/workspace/{{ namespace }}~my-cluster~{{ namespace }}/pipeline/pipelineRuns/detail/{{ pr }}"
    custom-console-url-pr-tasklog: "https://console.example.com/console-acp/workspace/{{ namespace }}~my-cluster~{{ namespace }}/pipeline/pipelineRuns/detail/{{ pr }}?tab=task_overview&id={{ task }}"
    custom-console-url-namespace: "https://console.example.com/console-acp/workspace/{{ namespace }}~my-cluster~{{ namespace }}/pipeline/pipelineRuns"

When PAC creates a PipelineRun named my-app-build-abc123 with a task build-task in namespace my-project:

  • PipelineRun Details:

    https://console.example.com/console-acp/workspace/my-project~my-cluster~my-project/pipeline/pipelineRuns/detail/my-app-build-abc123
  • TaskRun Logs:

    https://console.example.com/console-acp/workspace/my-project~my-cluster~my-project/pipeline/pipelineRuns/detail/my-app-build-abc123?tab=task_overview&id=build-task
  • Namespace Pipeline List:

    https://console.example.com/console-acp/workspace/my-project~my-cluster~my-project/pipeline/pipelineRuns

These links will appear in your Git provider's UI (GitLab merge requests, GitHub pull requests, etc.), allowing developers to quickly navigate to your custom console.

Uninstall PAC Component

Delete OpenShiftPipelinesAsCode CR

Step 1: Delete the CR

kubectl delete openshiftpipelinesascodes.operator.tekton.dev pipelines-as-code

Example output:

openshiftpipelinesascode.operator.tekton.dev "pipelines-as-code" deleted

The operator will automatically:

  • Delete the TektonInstallerSet (internal operator resource)
  • Remove all PAC-related resources (Deployments, Services, ConfigMaps, etc.)
  • Clean up RBAC resources

Note: TektonInstallerSet is an internal operator resource. You should not delete it manually. The operator manages its lifecycle automatically.

Step 2: Verify Uninstallation

Check that the OpenShiftPipelinesAsCode CR is deleted:

kubectl get openshiftpipelinesascodes.operator.tekton.dev

Verify TektonInstallerSet is deleted (replace <pac-namespace> with your PAC namespace):

kubectl get tektoninstallersets -n <pac-namespace> | grep pipelinesascode

Note: This is a read-only check to verify the internal operator resource has been cleaned up. Do not attempt to delete TektonInstallerSet manually.

Example output (should be empty if deleted successfully):

No resources found

Check that PAC pods are terminated:

kubectl get pods -n <pac-namespace> | grep pipelines-as-code

Example output (should be empty if deleted successfully):

No resources found

Clean Up Additional Resources

After uninstalling PAC, you may want to clean up additional resources:

Delete Repository CRs

If you have Repository CRs created for Git provider integration:

kubectl get repositories -A
kubectl delete repositories --all -n <namespace>

Example output:

repository.pipelinesascode.tekton.dev "my-repo" deleted
repository.pipelinesascode.tekton.dev "another-repo" deleted

Delete Secrets

Important: When you delete the OpenShiftPipelinesAsCode CR, the operator automatically cleans up secrets with the label app.kubernetes.io/part-of=pipelines-as-code in the PAC namespace. However, you need to manually delete Repository CR-related secrets that were created for Git provider authentication.

Secrets that are automatically cleaned up (in PAC namespace):

  • pipelines-as-code-secret - PAC controller's internal secret (with label app.kubernetes.io/part-of=pipelines-as-code)

Secrets that need manual cleanup (in Repository CR namespaces):

  • gitlab-secret / github-secret - Git provider access tokens
  • webhook-secret - Webhook validation secrets
  • git-auth-secret - Private repository access tokens
  • git-ssh-secret - SSH keys for repository access

To find and delete Repository CR-related secrets:

  1. List all secrets in the namespace where Repository CRs are located:

    kubectl get secrets -n <namespace>
  2. Identify secrets that you created specifically for PAC Repository CRs.

    Note: Secret names may vary depending on how they were created. Review the list carefully to identify which secrets are related to PAC.

  3. Before deleting, ensure:

    • All Repository CRs that use the secret have been deleted
    • The secret is not used by other applications or resources in the cluster
    • You have verified the secret is safe to remove
  4. Delete the secret only if you are certain it's no longer needed:

    kubectl delete secret <secret-name> -n <namespace>

Warning:

  • Secrets may be shared across multiple Repository CRs or other resources
  • Deleting a secret that is still in use will cause authentication failures
  • Always verify the secret is not referenced elsewhere before deletion

Example output:

secret "gitlab-secret" deleted
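Before deleting, you can also list which secrets any remaining Repository CRs still reference; a sketch (the jsonpath assumes the spec.git_provider.secret field of the PAC Repository CRD):

```shell
NS="my-project"  # hypothetical namespace containing Repository CRs
# Prints "<repository>: <secret-name>" for each Repository CR
kubectl get repositories -n "$NS" \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.git_provider.secret.name}{"\n"}{end}'
```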

Delete Ingress/Service

If you created Ingress or NodePort Service for PAC (replace <pac-namespace> with your PAC namespace):

kubectl delete ingress pipelines-as-code -n <pac-namespace>
kubectl delete service pipelines-as-code-controller-nodeport -n <pac-namespace>

Example output:

ingress.networking.k8s.io "pipelines-as-code" deleted
service "pipelines-as-code-controller-nodeport" deleted

Troubleshooting

PAC Pods Not Starting

Check pod logs (replace <pac-namespace> with your PAC namespace):

kubectl logs -n <pac-namespace> -l app.kubernetes.io/part-of=pipelines-as-code

Example output (illustrative log entries):

{"level":"info","ts":"2024-01-01T12:00:00Z","logger":"controller","msg":"Starting PAC controller"}
{"level":"info","ts":"2024-01-01T12:00:01Z","logger":"controller","msg":"PAC controller ready"}

OpenShiftPipelinesAsCode CR Not Ready

Check the CR status and events:

kubectl describe openshiftpipelinesascodes.operator.tekton.dev pipelines-as-code

Example output (abbreviated):

Name:         pipelines-as-code
Namespace:    
Status:       Ready
Version:      0.x.x
Events:
  Type    Reason   Age   From              Message
  ----    ------   ----  ----              -------
  Normal  Ready    5m    tekton-operator   PAC component deployed successfully

TektonInstallerSet Issues

Important: TektonInstallerSet is an internal operator resource. If you encounter issues with it, do not modify or delete it directly. Instead, troubleshoot through the OpenShiftPipelinesAsCode CR.

Check TektonInstallerSet status for troubleshooting (replace <pac-namespace> with your PAC namespace):

kubectl get tektoninstallersets -n <pac-namespace> -o yaml

If the TektonInstallerSet shows errors:

  1. Check the OpenShiftPipelinesAsCode CR status for underlying issues
  2. Review operator logs for detailed error messages
  3. If necessary, delete and recreate the OpenShiftPipelinesAsCode CR (the operator will automatically recreate the TektonInstallerSet)

Example output (abbreviated):

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonInstallerSet
metadata:
  name: pipelines-as-code-installer-set
  namespace: tekton-pipelines
status:
  conditions:
  - status: "True"
    type: Ready

CR Not Deleting

If the OpenShiftPipelinesAsCode CR is not deleting, check for finalizers:

kubectl get openshiftpipelinesascodes.operator.tekton.dev pipelines-as-code -o yaml | grep finalizers

Example output (if finalizers exist):

  finalizers:
  - tekton.dev/operator

If no finalizers are present, the output will be empty, indicating the CR can be deleted.

If finalizers are present, the operator may still be processing. Wait a few moments and try again.
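If the operator itself is no longer running (for example, it was uninstalled first), the finalizer may never be removed. As a last resort you can clear the finalizers manually; use this with care, because it skips the operator's normal cleanup:

```shell
CR_NAME="pipelines-as-code"
# Removes all finalizers so the API server can complete the deletion
kubectl patch openshiftpipelinesascodes.operator.tekton.dev "$CR_NAME" \
  --type=merge -p '{"metadata":{"finalizers":[]}}'
```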

Resources Not Removed

If some resources are not automatically removed:

  1. Check TektonInstallerSet status for troubleshooting (replace <pac-namespace> with your PAC namespace):

    kubectl get tektoninstallersets -n <pac-namespace> -o yaml

    Note: This is a read-only check. TektonInstallerSet is an internal operator resource. Do not delete it manually.

  2. Manually delete remaining resources:

    kubectl delete deployment -n <pac-namespace> -l app.kubernetes.io/part-of=pipelines-as-code
    kubectl delete service -n <pac-namespace> -l app.kubernetes.io/part-of=pipelines-as-code

Next Steps