Creating an Instance

You can create a Kafka instance to build high-throughput, low-latency real-time data pipelines, supporting the varied needs of business systems in scenarios such as stream processing and service decoupling.

Creating a Kafka Instance

Procedure

CLI
Web Console

To create a Kafka instance via the CLI:

cat << EOF | kubectl -n default create -f -
apiVersion: middleware.alauda.io/v1
kind: RdsKafka
metadata:
  name: my-cluster
spec:
  entityOperator:
    topicOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    userOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    tlsSidecar:
      resources:
        limits:
          cpu: 200m
          memory: 128Mi
        requests:
          cpu: 200m
          memory: 128Mi
  version: 3.8
  replicas: 3
  config:
    auto.create.topics.enable: "false"
    auto.leader.rebalance.enable: "true"
    background.threads: "10"
    compression.type: producer
    default.replication.factor: "3"
    delete.topic.enable: "true"
    log.retention.hours: "168"
    log.roll.hours: "168"
    log.segment.bytes: "1073741824"
    message.max.bytes: "1048588"
    min.insync.replicas: "1"
    num.io.threads: "8"
    num.network.threads: "3"
    num.recovery.threads.per.data.dir: "1"
    num.replica.fetchers: "1"
    unclean.leader.election.enable: "false"
  resources:
    limits:
      cpu: 2
      memory: 4Gi
    requests:
      cpu: 2
      memory: 4Gi
  storage:
    size: 1Gi
    # Replace with available storage class
    class: local-path
    deleteClaim: false
  kafka:
    listeners:
      plain: {}
      external:
        type: nodeport
        tls: false
  zookeeper:
    # Keep the replica count the same as the Kafka brokers
    replicas: 3
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
    # Keep the storage settings the same as the Kafka brokers
    storage:
      size: 1Gi
      # Replace with available storage class
      class: local-path
      deleteClaim: false
EOF
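Once the resource is accepted, the operator provisions the broker and ZooKeeper workloads, which can take a few minutes. As a quick sanity check you can list what was created; note that the name filtering below is an assumption based on the instance name `my-cluster` from the manifest above, and your operator's naming convention may differ:

```shell
# List pods belonging to the instance; the broker, ZooKeeper, and
# entity-operator pods should eventually reach the Running state.
kubectl get pods -n default | grep my-cluster

# The external listener is of type nodeport; locate the NodePort
# assigned to the externally exposed Service:
kubectl get svc -n default | grep my-cluster
```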

After the instance is created, you can check its status with the following command:

$ kubectl get rdskafka -n <namespace> -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,STATUS:.status.phase,MESSAGE:.status.reason,CreationTimestamp:.metadata.creationTimestamp
NAME         VERSION   STATUS   MESSAGE                                   CreationTimestamp
my-cluster   3.8       Active   <none>                                    2025-03-06T08:46:57Z
test38       3.8       Failed   Pod is unschedulable or is not starting   2025-03-06T08:46:36Z

The fields in the output are described below:

Field               Description
NAME                Instance name
VERSION             Only the following 4 versions are currently supported: 2.5.0, 2.7.0, 2.8.2, 3.8
STATUS              Current status of the instance; possible values:
                      • Creating: the instance is being created
                      • Updating: the instance is being updated
                      • Failed: the instance encountered an unrecoverable error
                      • Paused: the instance was manually paused
                      • Restarting: the instance is restarting
                      • Active: the instance is ready for use
MESSAGE             Reason for the instance's current status
CreationTimestamp   Timestamp at which the instance was created
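Rather than polling the status by hand, you can block until the instance reports Active. This is a sketch that assumes kubectl v1.23 or later (which added JSONPath support to `kubectl wait`) and the `.status.phase` field shown in the output above:

```shell
# Block for up to 10 minutes until the instance's status.phase becomes Active.
kubectl wait rdskafka/my-cluster -n default \
  --for=jsonpath='{.status.phase}'=Active \
  --timeout=10m
```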

Creating a Single-Node Kafka Instance

Important

It is recommended to set the number of broker nodes to 3 when creating an instance. If the broker count is lower than the recommended value of 3, some parameters must be adjusted at creation time.
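Concretely, with a single broker Kafka's internal topics (the consumer offsets topic and the transaction state log) cannot satisfy the default replication factor of 3, so every replication-related setting must be lowered to match the broker count. The manifest below does this with the following `config` entries:

```yaml
config:
  # None of these may exceed the broker count (1 in this case),
  # including the settings for Kafka's internal topics.
  default.replication.factor: "1"
  offsets.topic.replication.factor: "1"
  transaction.state.log.replication.factor: "1"
  transaction.state.log.min.isr: "1"
```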

Procedure

CLI
Web Console

To create a single-node Kafka instance via the CLI:

cat << EOF | kubectl -n default create -f -
apiVersion: middleware.alauda.io/v1
kind: RdsKafka
metadata:
  name: my-cluster
spec:
  entityOperator:
    topicOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    userOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    tlsSidecar:
      resources:
        limits:
          cpu: 200m
          memory: 128Mi
        requests:
          cpu: 200m
          memory: 128Mi
  version: 3.8
  replicas: 1
  config:
    auto.create.topics.enable: "false"
    auto.leader.rebalance.enable: "true"
    background.threads: "10"
    compression.type: producer
    delete.topic.enable: "true"
    log.retention.hours: "168"
    log.roll.hours: "168"
    log.segment.bytes: "1073741824"
    message.max.bytes: "1048588"
    min.insync.replicas: "1"
    num.io.threads: "8"
    num.network.threads: "3"
    num.recovery.threads.per.data.dir: "1"
    num.replica.fetchers: "1"
    unclean.leader.election.enable: "false"
    ## Make sure the following parameters are configured correctly
    default.replication.factor: "1"
    offsets.topic.replication.factor: "1"
    transaction.state.log.replication.factor: "1"
    transaction.state.log.min.isr: "1"
  resources:
    limits:
      cpu: 2
      memory: 4Gi
    requests:
      cpu: 2
      memory: 4Gi
  storage:
    size: 1Gi
    # Replace with available storage class
    class: local-path
    deleteClaim: false
  kafka:
    listeners:
      plain: {}
      external:
        type: nodeport
        tls: false
  zookeeper:
    # Keep the replica count the same as the Kafka brokers
    replicas: 1
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
    # Keep the storage settings the same as the Kafka brokers
    storage:
      size: 1Gi
      # Replace with available storage class
      class: local-path
      deleteClaim: false
EOF