Configure the contexts in /root/.kube/config so that kubectl can switch between clusters (see the reference for how to manage multiple clusters with contexts). Here the two clusters' contexts are named kubernetes-admin@cluster.local and default.
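For reference, a minimal sketch of what a merged kubeconfig with two such contexts might look like. The hub API server address 10.168.110.21:6443 comes from the output later in this walkthrough; the k3s server address and the empty user entries are illustrative assumptions:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: cluster.local              # kubeadm-based cluster
  cluster:
    server: https://10.168.110.21:6443
- name: default                    # k3s cluster (address assumed)
  cluster:
    server: https://k3smaster:6443
users:
- name: kubernetes-admin
  user: {}                         # client certificate data omitted
- name: default
  user: {}                         # client certificate data omitted
contexts:
- name: kubernetes-admin@cluster.local
  context:
    cluster: cluster.local
    user: kubernetes-admin
- name: default
  context:
    cluster: default
    user: default
current-context: kubernetes-admin@cluster.local
```

A context is just a named (cluster, user) pair, so `kubectl --context <name>` selects both the API server and the credentials in one flag.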
$ kubectl get node --context kubernetes-admin@cluster.local
NAME                     STATUS   ROLES           AGE   VERSION
dbscale-control-plan01   Ready    <none>          20h   v1.25.6
kube-control-plan01      Ready    control-plane   20h   v1.25.6
kube-node01              Ready    <none>          20h   v1.25.6
kube-node02              Ready    <none>          20h   v1.25.6
kube-node03              Ready    <none>          20h   v1.25.6

$ kubectl get node --context default
NAME        STATUS   ROLES                  AGE    VERSION
k3smaster   Ready    control-plane,master   131d   v1.25.3+k3s1
Install clusteradm:
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
Install OCM. First set an environment variable identifying the hub cluster's context, then initialize the hub:
export CTX_HUB_CLUSTER=kubernetes-admin@cluster.local
$ clusteradm init --wait --context ${CTX_HUB_CLUSTER}
CRD successfully registered.
Registration operator is now available.
ClusterManager registration is now available.
The multicluster hub control plane has been initialized successfully!

You can now register cluster(s) to the hub control plane. Log onto those cluster(s) and run the following command:

clusteradm join --hub-token eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5VcVp4YXkyanFyZTllMWt2Z21UWXZwbmRvNkdkSHhnM005X3lmVTRMRjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjc4OTUwNTYzLCJpYXQiOjE2Nzg5NDY5NjMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJvcGVuLWNsdXN0ZXItbWFuYWdlbWVudCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJjbHVzdGVyLWJvb3RzdHJhcCIsInVpZCI6ImE5OGE4ZWE1LWVlMzYtNDUzNS05OWY5LTA0OTFmZjFhZTc1ZCJ9fSwibmJmIjoxNjc4OTQ2OTYzLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6b3Blbi1jbHVzdGVyLW1hbmFnZW1lbnQ6Y2x1c3Rlci1ib290c3RyYXAifQ.VU_I_67dlIJ9WlMnFaJL-930OHAkm0tvYmWqzGRURYa31wOnPQMTzq-mUddNU_pvEuFBbqX__b9QGz5WEisHlGnPUcBjpyPGQihDFKaz-UciK_A03D9Rpy6VA1cE4vcDM0lr2uZv7edf09F_9LI9Oo7MajHWK0bdAF6UMkOmWHIlHVVIC9DMqSrzsyZrZf-4mv4ciyVQp3PgpZEgXfogi--_-qWWuTGZM-el5z29c3uJfPdFnxDopL3YFedWJ9dkepnasO4l1RWhwxYGTUFN5lIkKSMx-RFH8BkNfJfz8Vb4AO8HvUb9niWCRt1dn61UmLClsrjipvBCN8gtItUGjg --hub-apiserver https://10.168.110.21:6443 --wait --cluster-name <cluster_name>

Replace <cluster_name> with a cluster name of your choice. For example, cluster1.
Now join the other cluster (context default) to the hub as a managed cluster, replacing <cluster_name> with default:
$ clusteradm join --hub-token eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5VcVp4YXkyanFyZTllMWt2Z21UWXZwbmRvNkdkSHhnM005X3lmVTRMRjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjc4OTUwNTYzLCJpYXQiOjE2Nzg5NDY5NjMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJvcGVuLWNsdXN0ZXItbWFuYWdlbWVudCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJjbHVzdGVyLWJvb3RzdHJhcCIsInVpZCI6ImE5OGE4ZWE1LWVlMzYtNDUzNS05OWY5LTA0OTFmZjFhZTc1ZCJ9fSwibmJmIjoxNjc4OTQ2OTYzLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6b3Blbi1jbHVzdGVyLW1hbmFnZW1lbnQ6Y2x1c3Rlci1ib290c3RyYXAifQ.VU_I_67dlIJ9WlMnFaJL-930OHAkm0tvYmWqzGRURYa31wOnPQMTzq-mUddNU_pvEuFBbqX__b9QGz5WEisHlGnPUcBjpyPGQihDFKaz-UciK_A03D9Rpy6VA1cE4vcDM0lr2uZv7edf09F_9LI9Oo7MajHWK0bdAF6UMkOmWHIlHVVIC9DMqSrzsyZrZf-4mv4ciyVQp3PgpZEgXfogi--_-qWWuTGZM-el5z29c3uJfPdFnxDopL3YFedWJ9dkepnasO4l1RWhwxYGTUFN5lIkKSMx-RFH8BkNfJfz8Vb4AO8HvUb9niWCRt1dn61UmLClsrjipvBCN8gtItUGjg --hub-apiserver https://10.168.110.21:6443 --wait --cluster-name default
CRD successfully registered.
Registration operator is now available.
Klusterlet is now available.
Please log onto the hub cluster and run the following command:

clusteradm accept --clusters default
Back on the hub cluster, accept the join request to complete the connection:
$ clusteradm accept --clusters default
Starting approve csrs for the cluster default
CSR default-rzgxm approved
set hubAcceptsClient to true for managed cluster default

Your managed cluster default has joined the Hub successfully. Visit https://open-cluster-management.io/scenarios or https://github.com/open-cluster-management-io/OCM/tree/main/solutions for next steps.
Check what has been deployed on the hub cluster:
$ kubectl get ns --context ${CTX_HUB_CLUSTER}
NAME STATUS AGE
clusternet-system Active 3h37m
default Active 20h
kube-node-lease Active 20h
kube-public Active 20h
kube-system Active 20h
open-cluster-management Active 34m
open-cluster-management-agent Active 17m
open-cluster-management-agent-addon Active 17m
open-cluster-management-hub Active 31m

$ kubectl get clustermanager --context ${CTX_HUB_CLUSTER}
NAME AGE
cluster-manager 32m

$ kubectl -n open-cluster-management get pod --context ${CTX_HUB_CLUSTER}
NAME READY STATUS RESTARTS AGE
cluster-manager-79dcdf496f-c2lgx 1/1 Running 0 35m
klusterlet-6555776c99-2s84c 1/1 Running 0 19m
klusterlet-6555776c99-ksmxl 1/1 Running 0 19m
klusterlet-6555776c99-pxpgr 1/1 Running 0 19m

$ kubectl -n open-cluster-management-hub get pod --context ${CTX_HUB_CLUSTER}
NAME READY STATUS RESTARTS AGE
cluster-manager-placement-controller-6597644b5b-znhv2 1/1 Running 0 33m
cluster-manager-registration-controller-7d774d4866-mlhfm 1/1 Running 0 33m
cluster-manager-registration-webhook-f549cb5bd-j45pv 2/2 Running 0 33m
cluster-manager-work-webhook-64f95b566d-wmvhk 2/2 Running 0 33m
View the cluster-manager details:
kubectl get clustermanager cluster-manager -o yaml --context ${CTX_HUB_CLUSTER}
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
  creationTimestamp: "2023-03-16T05:57:14Z"
  finalizers:
  - operator.open-cluster-management.io/cluster-manager-cleanup
  generation: 4
  managedFields:
  - apiVersion: operator.open-cluster-management.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:deployOption:
          .: {}
          f:mode: {}
        f:placementImagePullSpec: {}
        f:registrationConfiguration:
          .: {}
          f:featureGates: {}
        f:registrationImagePullSpec: {}
        f:workImagePullSpec: {}
    manager: clusteradm
    operation: Update
    time: "2023-03-16T05:57:14Z"
  - apiVersion: operator.open-cluster-management.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"operator.open-cluster-management.io/cluster-manager-cleanup": {}
      f:spec:
        f:nodePlacement: {}
    manager: Go-http-client
    operation: Update
    time: "2023-03-16T06:09:23Z"
  - apiVersion: operator.open-cluster-management.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:conditions: {}
        f:generations: {}
        f:observedGeneration: {}
        f:relatedResources: {}
    manager: Go-http-client
    operation: Update
    subresource: status
    time: "2023-03-16T06:09:24Z"
  name: cluster-manager
  resourceVersion: "68256"
  uid: 0249a3ff-bc76-488d-8a8d-1960367ac8be
spec:
  deployOption:
    mode: Default
  nodePlacement: {}
  placementImagePullSpec: quay.io/open-cluster-management/placement:v0.10.0
  registrationConfiguration:
    featureGates:
    - feature: DefaultClusterSet
      mode: Enable
  registrationImagePullSpec: quay.io/open-cluster-management/registration:v0.10.0
  workImagePullSpec: quay.io/open-cluster-management/work:v0.10.0
status:
  conditions:
  - lastTransitionTime: "2023-03-16T06:03:32Z"
    message: Registration is managing credentials
    observedGeneration: 4
    reason: RegistrationFunctional
    status: "False"
    type: HubRegistrationDegraded
  - lastTransitionTime: "2023-03-16T06:00:57Z"
    message: Placement is scheduling placement decisions
    observedGeneration: 4
    reason: PlacementFunctional
    status: "False"
    type: HubPlacementDegraded
  - lastTransitionTime: "2023-03-16T05:57:18Z"
    message: Feature gates are all valid
    reason: FeatureGatesAllValid
    status: "True"
    type: ValidFeatureGates
  - lastTransitionTime: "2023-03-16T06:06:07Z"
    message: Components of cluster manager are up to date
    reason: ClusterManagerUpToDate
    status: "False"
    type: Progressing
  - lastTransitionTime: "2023-03-16T05:57:18Z"
    message: Components of cluster manager are applied
    reason: ClusterManagerApplied
    status: "True"
    type: Applied
  generations:
  - group: apps
    lastGeneration: 1
    name: cluster-manager-registration-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    lastGeneration: 1
    name: cluster-manager-registration-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    lastGeneration: 1
    name: cluster-manager-work-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    lastGeneration: 1
    name: cluster-manager-placement-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  observedGeneration: 4
  relatedResources:
  - group: apiextensions.k8s.io
    name: clustermanagementaddons.addon.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: managedclusters.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: managedclustersets.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: manifestworks.work.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: managedclusteraddons.addon.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: managedclustersetbindings.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: placements.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: addondeploymentconfigs.addon.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: placementdecisions.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: apiextensions.k8s.io
    name: addonplacementscores.cluster.open-cluster-management.io
    namespace: ""
    resource: customresourcedefinitions
    version: v1
  - group: ""
    name: open-cluster-management-hub
    namespace: ""
    resource: namespaces
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-registration:controller
    namespace: ""
    resource: clusterroles
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-registration:controller
    namespace: ""
    resource: clusterrolebindings
    version: v1
  - group: ""
    name: cluster-manager-registration-controller-sa
    namespace: open-cluster-management-hub
    resource: serviceaccounts
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-registration:webhook
    namespace: ""
    resource: clusterroles
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-registration:webhook
    namespace: ""
    resource: clusterrolebindings
    version: v1
  - group: ""
    name: cluster-manager-registration-webhook-sa
    namespace: open-cluster-management-hub
    resource: serviceaccounts
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-work:webhook
    namespace: ""
    resource: clusterroles
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-work:webhook
    namespace: ""
    resource: clusterrolebindings
    version: v1
  - group: ""
    name: cluster-manager-work-webhook-sa
    namespace: open-cluster-management-hub
    resource: serviceaccounts
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-placement:controller
    namespace: ""
    resource: clusterroles
    version: v1
  - group: rbac.authorization.k8s.io
    name: open-cluster-management:cluster-manager-placement:controller
    namespace: ""
    resource: clusterrolebindings
    version: v1
  - group: ""
    name: cluster-manager-placement-controller-sa
    namespace: open-cluster-management-hub
    resource: serviceaccounts
    version: v1
  - group: ""
    name: cluster-manager-registration-webhook
    namespace: open-cluster-management-hub
    resource: services
    version: v1
  - group: ""
    name: cluster-manager-work-webhook
    namespace: open-cluster-management-hub
    resource: services
    version: v1
  - group: apps
    name: cluster-manager-registration-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    name: cluster-manager-registration-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    name: cluster-manager-work-webhook
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: apps
    name: cluster-manager-placement-controller
    namespace: open-cluster-management-hub
    resource: deployments
    version: v1
  - group: admissionregistration.k8s.io
    name: managedclustervalidators.admission.cluster.open-cluster-management.io
    namespace: ""
    resource: validatingwebhookconfigurations
    version: v1
  - group: admissionregistration.k8s.io
    name: managedclustermutators.admission.cluster.open-cluster-management.io
    namespace: ""
    resource: mutatingwebhookconfigurations
    version: v1
  - group: admissionregistration.k8s.io
    name: managedclustersetbindingvalidators.admission.cluster.open-cluster-management.io
    namespace: ""
    resource: validatingwebhookconfigurations
    version: v1
  - group: admissionregistration.k8s.io
    name: managedclustersetbindingv1beta1validators.admission.cluster.open-cluster-management.io
    namespace: ""
    resource: validatingwebhookconfigurations
    version: v1
  - group: admissionregistration.k8s.io
    name: manifestworkvalidators.admission.work.open-cluster-management.io
    namespace: ""
    resource: validatingwebhookconfigurations
    version: v1
Once the OCM agent is running on your managed cluster, it sends a "handshake" to the hub cluster and waits for approval by a hub cluster administrator. This section walks through accepting the registration request from the perspective of an OCM hub administrator.

Wait for the CSR object, which the managed cluster's OCM agent creates on the hub cluster:
# or the previously chosen cluster name
kubectl get csr -w --context ${CTX_HUB_CLUSTER} | grep cluster1
A pending CSR request looks like this:
cluster1-tqcjj 33s kubernetes.io/kube-apiserver-client system:serviceaccount:open-cluster-management:cluster-bootstrap Pending
Accept the join request with the clusteradm tool:
clusteradm accept --clusters cluster1 --context ${CTX_HUB_CLUSTER}
After the accept command runs, the CSR from the managed cluster named "cluster1" is approved. In addition, it instructs the OCM hub control plane to automatically set up the related objects (such as a namespace named "cluster1" on the hub cluster) and the RBAC permissions.
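Under the hood, accepting a cluster amounts to flipping hubAcceptsClient on the hub-side ManagedCluster resource, which is what the "set hubAcceptsClient to true" log line earlier refers to. A rough sketch of that object (the leaseDurationSeconds value is illustrative):

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster1             # the name chosen at join time
spec:
  hubAcceptsClient: true     # set to true by `clusteradm accept`
  leaseDurationSeconds: 60   # how often the agent renews its lease (assumed default)
```

You could equally apply such a manifest with kubectl to pre-approve a cluster, instead of accepting it after the fact.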
Verify the installation of the OCM agent on the managed cluster by running the following commands (here CTX_MANAGED_CLUSTER is the managed cluster's context, in this setup default):

export CTX_MANAGED_CLUSTER=default
kubectl -n open-cluster-management-agent get pod --context ${CTX_MANAGED_CLUSTER}
NAME READY STATUS RESTARTS AGE
klusterlet-registration-agent-598fd79988-jxx7n 1/1 Running 0 19d
klusterlet-work-agent-7d47f4b5c5-dnkqw 1/1 Running 0 19d
After registering the managed cluster, test whether a Pod can be deployed from the hub cluster to the managed cluster. Create a manifest-work.yaml as in the following example:
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: mw-01
  namespace: ${MANAGED_CLUSTER_NAME}
spec:
  workload:
    manifests:
    - apiVersion: v1
      kind: Pod
      metadata:
        name: hello
        namespace: default
      spec:
        containers:
        - name: hello
          image: busybox
          command: ["sh", "-c", 'echo "Hello, Kubernetes!" && sleep 3600']
        restartPolicy: OnFailure
Apply the YAML file to the hub cluster (${MANAGED_CLUSTER_NAME} is the managed cluster's name, here default):
kubectl apply -f manifest-work.yaml --context ${CTX_HUB_CLUSTER}
Verify that the ManifestWork resource was applied on the hub:
kubectl -n ${MANAGED_CLUSTER_NAME} get manifestwork/mw-01 --context ${CTX_HUB_CLUSTER} -o yaml
Check the managed cluster to see that the hello Pod has been deployed from the hub cluster:
$ kubectl -n default get pod --context ${CTX_MANAGED_CLUSTER}
NAME READY STATUS RESTARTS AGE
hello 1/1 Running 0 108s
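The delivery result can also be read back from the hub side: the ManifestWork's status reports per-resource conditions. A rough sketch of the relevant status fragment you would see in the `kubectl get manifestwork -o yaml` output above (condition values are illustrative):

```yaml
status:
  conditions:
  - type: Applied        # all manifests were applied on the managed cluster
    status: "True"
  - type: Available      # the applied resources still exist there
    status: "True"
  resourceStatus:
    manifests:
    - resourceMeta:
        kind: Pod
        name: hello
        namespace: default
```

Watching these conditions is how a hub-side controller can tell whether a workload pushed via ManifestWork actually landed, without ever contacting the managed cluster directly.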
Reference: