
Ubuntu 22.04.5 LTS: installing cube studio on kubesphere

2025/5/13, source: https://blog.csdn.net/gs80140/article/details/147515946


Prerequisite: kubesphere v4.3.1 is already installed and working.

Reference tutorial: https://github.com/data-infra/cube-studio/wiki/%E5%9C%A8-kubesphere-%E4%B8%8A%E6%90%AD%E5%BB%BA-cube-studio

1. Install base dependencies

# Install base dependencies on Ubuntu
apt install -y socat conntrack ebtables ipset ipvsadm

2. Widen the NodePort service port range to roughly 10-60000

Edit the apiserver manifest:

vi /etc/kubernetes/manifests/kube-apiserver.yaml

Add the port-range flag to the apiserver command:

spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=1-65535      # add this line
    - --advertise-address=172.16.0.17

After the change, restart the machine with the reboot command.
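To confirm the flag actually landed in the manifest, a grep is enough. The snippet below simulates that check against a stand-in file under /tmp, since the real path (/etc/kubernetes/manifests/kube-apiserver.yaml) needs root on the control-plane node:

```shell
# Stand-in for /etc/kubernetes/manifests/kube-apiserver.yaml
cat > /tmp/kube-apiserver.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=1-65535
    - --advertise-address=172.16.0.17
EOF

# -- guards against grep treating the flag as its own option
if grep -q -- '--service-node-port-range=1-65535' /tmp/kube-apiserver.yaml; then
  echo "port range flag present"
fi
```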

3. If you use the containerd runtime, replace the docker commands in the pull script

If you are not sure whether the runtime is containerd, list the nodes with kubectl get nodes, then run kubectl describe node node1 | grep "Container Runtime". Example output: Container Runtime Version:     containerd://1.7.27

# Replace the pull commands in the image-pull script
cd install/kubernetes/
sed -i 's/^docker/crictl/g' pull_images.sh
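The substitution only rewrites lines that begin with docker, leaving everything else alone. The sketch below demonstrates it on a hypothetical two-line pull_images.sh (the real file lists cube-studio images):

```shell
# Hypothetical stand-in for pull_images.sh; real image names differ.
printf 'docker pull example.com/cube-studio/base:v1\ndocker pull mysql:8\n' > /tmp/pull_images.sh

# Rewrite only leading "docker" words into "crictl"
sed -i 's/^docker/crictl/g' /tmp/pull_images.sh

cat /tmp/pull_images.sh
```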

4. For an ipvs-mode k8s cluster deployed with kubekey

Check whether kube-proxy runs in ipvs mode with the following command:

kubectl get configmap -n kube-system kube-proxy -o yaml | grep mode

Output: mode: ipvs

(1) Near the end of install/kubernetes/start.sh and start-with-kubesphere.sh, comment out this line:

kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalIPs":["'"$1"'"]}}'

and uncomment this one:

kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"type":"NodePort"}}'

(2) In the config file install/kubernetes/cube/overlays/config/config.py, change the value of CONTAINER_CLI to nerdctl and the value of K8S_NETWORK_MODE to ipvs.
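If you prefer to script the edit, a pair of sed substitutions will do. The exact assignment syntax in config.py is an assumption here (single-quoted Python strings); adjust the patterns to match the real file:

```shell
# Hypothetical excerpt of config.py; the real file may format these differently.
printf "CONTAINER_CLI='docker'\nK8S_NETWORK_MODE='iptables'\n" > /tmp/config.py

# Rewrite both settings in place
sed -i "s/^CONTAINER_CLI=.*/CONTAINER_CLI='nerdctl'/; s/^K8S_NETWORK_MODE=.*/K8S_NETWORK_MODE='ipvs'/" /tmp/config.py

cat /tmp/config.py
```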

This requires installing nerdctl (here nerdctl-2.0.4-linux-amd64.tar.gz).

Download: https://github.com/containerd/nerdctl/releases

Extract the binary into the /usr/local/bin directory:

tar zxvf nerdctl-2.0.4-linux-amd64.tar.gz -C /usr/local/bin nerdctl
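Naming a member (nerdctl) after -C extracts only that file from the archive, which is why nothing else from the release tarball ends up in /usr/local/bin. The sketch below reproduces that behavior with a throwaway tarball and directories under /tmp:

```shell
# Build a fake release tarball containing the binary plus one extra file.
mkdir -p /tmp/stage /tmp/bin
printf '#!/bin/sh\necho nerdctl stub\n' > /tmp/stage/nerdctl
chmod +x /tmp/stage/nerdctl
printf 'extra\n' > /tmp/stage/extra-file
tar -czf /tmp/fake-nerdctl.tar.gz -C /tmp/stage nerdctl extra-file

# Extract only the named member, as in the install step.
tar -zxf /tmp/fake-nerdctl.tar.gz -C /tmp/bin nerdctl

ls /tmp/bin
```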

5. Copy the cluster's kubeconfig file (default location: ~/.kube/config) to install/kubernetes/config, then run the deployment command below, where xx.xx.xx.xx is the machine's internal IP (not the public IP).

Explanation (single-node setup):

# Run on a k8s worker node.
# If only k8s is deployed (no kubesphere):
sh start.sh xx.xx.xx.xx
# If both k8s and kubesphere are deployed:
sh start-with-kubesphere.sh xx.xx.xx.xx
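Both scripts take the internal IP as their first argument. A small sanity check like the sketch below (not part of the official cube-studio scripts) can catch a malformed argument before the deployment starts:

```shell
# Returns success only when the argument looks like a dotted-quad IPv4 address.
check_ip() {
  printf '%s' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

check_ip "10.33.34.166" && echo "internal IP looks valid"
check_ip "not-an-ip" || echo "rejected"
```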

Since my setup is single-node, I commented out the following lines in start.sh; there is no need to download kubectl again:

#mkdir -p ~/.kube && rm -rf ~/.kube/config && cp config ~/.kube/config
#ARCH=$(uname -m)
#if [ "$ARCH" = "x86_64" ]; then
#  wget https://cube-studio.oss-cn-hangzhou.aliyuncs.com/install/kubectl && chmod +x kubectl  && cp kubectl /usr/bin/ && mv kubectl /usr/local/bin/
#elif [ "$ARCH" = "aarch64" ]; then
#  wget -O kubectl https://cube-studio.oss-cn-hangzhou.aliyuncs.com/install/kubectl-arm64 && chmod +x kubectl  && cp kubectl /usr/bin/ && mv kubectl /usr/local/bin/
#fi

6. Reading the version with kubectl fails, because kubectl 1.28 removed the --short flag

Change

version=`kubectl version --short | awk '/Server Version:/ {print $3}'`

to

version=`kubectl version | awk '/Server Version:/ {print $3}'`
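The awk expression works either way because it keys on the "Server Version:" line rather than on the overall output shape. The sketch below runs the same pipeline against hard-coded sample output (approximating what kubectl 1.28 prints), so it works without a cluster:

```shell
# Sample of kubectl 1.28-style output; the Kustomize line is approximate.
sample='Client Version: v1.28.2
Kustomize Version: v5.0.4
Server Version: v1.28.2'

# $3 is the third whitespace-separated field: "Server", "Version:", "v1.28.2"
version=$(printf '%s\n' "$sample" | awk '/Server Version:/ {print $3}')
echo "$version"
```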

A note on step 5: the scripts do not handle permissions. They create a /data directory without considering non-root users, so run them as root. Because my setup is single-node I commented out the download lines above; for a multi-node deployment you can leave the scripts unchanged.

Since I have kubesphere installed, I ran the installation as follows.

cd /data1/cube-studio/install/kubernetes    # the directory where I downloaded cube-studio
sh start.sh 10.33.34.166

Even though kubesphere is deployed alongside k8s here, do not try start-with-kubesphere.sh: istio tends not to come up because the installation of some services is skipped. That said, start.sh runs into the same istio problem, described below.

After a successful deployment a confirmation message is displayed, and everything also looks healthy in the kubesphere console (screenshots omitted in this copy).

However, the logs then show:

2025-04-30T16:55:35.867847176+08:00 This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (pymysql.err.IntegrityError) (1452, 'Cannot add or update a child row: a foreign key constraint fails (`kubeflow`.`etl_pipeline`, CONSTRAINT `etl_pipeline_ibfk_1` FOREIGN KEY (`changed_by_fk`) REFERENCES `ab_user` (`id`))')
2025-04-30T16:55:35.867852596+08:00 [SQL: INSERT INTO etl_pipeline (created_on, changed_on, name, `describe`, project_id, workflow, dag_json, config, expand, created_by_fk, changed_by_fk) VALUES (%(created_on)s, %(changed_on)s, %(name)s, %(describe)s, %(project_id)s, %(workflow)s, %(dag_json)s, %(config)s, %(expand)s, %(created_by_fk)s, %(changed_by_fk)s)]
2025-04-30T16:55:35.867858045+08:00 [parameters: {'created_on': datetime.datetime(2025, 4, 30, 16, 55, 33, 497374), 'changed_on': datetime.datetime(2025, 4, 30, 16, 55, 33, 497402), 'name': 'dau', 'describe': 'dau计算', 'project_id': 1, 'workflow': 'airflow', 'dag_json': '{\n    "cos导入hdfs-1686184253953": {\n        "label": "数据导入",\n        "location": [\n            304,\n            96\n        ],\n        "color":  ... (7480 characters truncated) ... \n            "label": "数据导出"\n        },\n        "upstream": [\n            "hive出库至hdfs-1686184293917"\n        ],\n        "task_id": 7\n    }\n}', 'config': '{\n    "alert_user": "admin"\n}', 'expand': '[]', 'created_by_fk': 1, 'changed_by_fk': 1}]
2025-04-30T16:55:35.867863640+08:00 (Background on this error at: https://sqlalche.me/e/14/gkpj) (Background on this error at: https://sqlalche.me/e/14/7s2a)
2025-04-30T16:55:35.867869011+08:00 begin add notebook
2025-04-30T16:55:35.867874369+08:00 (the same IntegrityError, INSERT statement, and parameters then repeat verbatim)

Fix: if deleting /data/k8s/infra/mysql is acceptable, remove that directory, then restart mysql and kubeflow-dashboard.

I also hit a problem where prometheus failed to start, with this error:

openebs.io/local_openebs-localpv-provisioner-7bf6f464c-c6j58_77b994b8-de73-4b58-8d29-e0fc8d194a38  failed to provision volume with StorageClass "local": claim.Spec.Selector is not supported

Fix: run kubectl edit prometheus k8s -n monitoring and delete the selector block. Also, because the storage class is openebs.io/local, change ReadWriteMany to ReadWriteOnce, otherwise you get:

openebs.io/local_openebs-localpv-provisioner-7bf6f464c-c6j58_77b994b8-de73-4b58-8d29-e0fc8d194a38  failed to provision volume with StorageClass "local": Only support ReadWriteOnce access mode

Even then, the restart can still fail. Go to the storage view, delete the PVCs stuck waiting to bind, and the system will recreate them automatically.
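Instead of re-editing after every restart, the access mode can also be set declaratively in the Prometheus custom resource. A sketch, assuming the standard prometheus-operator field layout (the storage size is a placeholder):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: local
        accessModes:
          - ReadWriteOnce   # was ReadWriteMany; openebs.io/local only supports RWO
        resources:
          requests:
            storage: 20Gi   # placeholder size
```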

To sum up: I never got istio fully working. It looked like it started, but it was not reachable. A workaround is to manually expose the kubeflow-dashboard-frontend service, but that is not a real solution either: the in-app links cannot navigate, so the istio problem still has to be fixed.

Eventually I found that the gateway had failed to be created, with this error:

Error from server: error when creating "gateway.yaml": admission webhook "validation.istio.io" denied the request: configuration is invalid: port name must be set: number:80  protocol:"HTTP"
Error from server: error when creating "gateway.yaml": admission webhook "validation.istio.io" denied the request: configuration is invalid: port name must be set: number:8080  protocol:"HTTP"

Fix: edit gateway.yaml as follows, then redeploy it with kubectl apply -f gateway.yaml

port:
  number: 80
  name: http          # this line must be added
  protocol: HTTP

port:
  number: 8080
  name: http-8080     # the name is required
  protocol: HTTP
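Put together, a corrected Gateway might look like the sketch below. The resource name, namespace, hosts, and selector are assumptions, since only the port stanzas appear in the original error:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kubeflow-gateway      # assumed name
  namespace: infra            # assumed namespace
spec:
  selector:
    istio: ingressgateway     # matches the default istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http              # the missing name the webhook complained about
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 8080
      name: http-8080         # port names must be unique within the Gateway
      protocol: HTTP
    hosts:
    - "*"
```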
