Helm
一、Introduction to Helm
Helm plays a role similar to npm for Node.js: an application publisher can use Helm to package an application, manage its dependencies, manage its versions, and publish it to a software repository.
Helm itself is a command-line client tool. It is mainly used to create, package, and publish Kubernetes application Charts, and to create and manage local and remote Chart repositories.
```shell
helm create chart-ts-data-index-group
```
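For orientation, `helm create` scaffolds a standard chart directory. Under Helm 3 the layout looks roughly like this (exact contents vary slightly between Helm versions):

```text
chart-ts-data-index-group/
├── Chart.yaml          # chart metadata (name, version, appVersion)
├── values.yaml         # default configuration values
├── charts/             # subchart dependencies
└── templates/          # Kubernetes manifest templates
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── service.yaml
    └── ingress.yaml
```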
二、Chart Files
A chart is Helm's application packaging format: a collection of files that describes a related set of Kubernetes resources.
`.Release.Name` is the release name specified at `helm install` time; the names shown by `helm list` are exactly these `.Release.Name` values.
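A quick sketch of how `.Release.Name` and related built-in objects render inside a template (the release name `demo` and chart name `mychart` are hypothetical):

```yaml
# in templates/some-resource.yaml:
metadata:
  name: {{ .Release.Name }}-svc        # the name given to helm install
  namespace: {{ .Release.Namespace }}  # the namespace the release went into
  labels:
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}

# after `helm install demo ./mychart -n public`, this renders roughly as:
#   name: demo-svc
#   namespace: public
#   chart: mychart-0.1.0
```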
Chart.yaml
```yaml
apiVersion: v2
name: busdevops-tool
description: A Helm chart for Mysql Service
type: application
version: 0.1.0
appVersion: 1.16.0
maintainers:
  - name: liuyzh
    email: create17@126.com
icon: https://**/**.png
```
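With `apiVersion: v2`, chart dependencies are also declared directly in Chart.yaml (chart apiVersion v1 kept them in a separate requirements.yaml). A sketch with a hypothetical mysql dependency and a placeholder repository URL:

```yaml
apiVersion: v2
name: busdevops-tool
version: 0.1.0
dependencies:
  - name: mysql                                      # hypothetical subchart
    version: "8.x.x"
    repository: "https://charts.example.com/stable"  # placeholder repo URL
# fetch the declared dependencies into charts/ with:
#   helm dependency update ./busdevops-tool
```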
service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "busdevops-tool.fullname" . }}-datacount-svc
  labels:
    app: {{ template "busdevops-tool.name" . }}-datacount
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
    version: {{ .Chart.AppVersion }}
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 5103
      protocol: TCP
      name: tcp-{{ .Values.service.busdevopsdata.name }}
      # note: nodePort is only honored when type is NodePort or LoadBalancer,
      # not the ClusterIP declared above
      nodePort: {{ .Values.service.busdevopstool.nodePort }}
  selector:
    app: {{ template "busdevops-tool.name" . }}-console
    release: {{ .Release.Name }}
```
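The Service template above pulls several settings from values.yaml. A matching fragment might look like this (keys inferred from the template; the port numbers are illustrative):

```yaml
service:
  busdevopsdata:
    name: datacount      # used in the port name tcp-<name>
  busdevopstool:
    internalPort: 5103   # referenced as containerPort in deployment.yaml
    externalPort: 80     # referenced as $svcPort in ingress.yaml
    nodePort: 30080      # only honored when the Service type is NodePort
```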
Tip:

```yaml
# k8s service.yml
spec:
  ports:
    - protocol: TCP
      port: 80         # port the Service exposes inside the cluster
      targetPort: 9376 # port the program listens on inside the container
      nodePort:        # port reachable on k8s nodes running kube-proxy
  # hostPort (declared on a container's ports in the Pod spec, not on the
  # Service) exposes the port on the host the pod is scheduled to
```
deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "busdevops-tool.fullname" . }}
  labels:
    {{- include "busdevops-tool.labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      # must match the pod template labels below
      app: {{ template "busdevops-tool.name" . }}-console
      {{- include "busdevops-tool.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: {{ template "busdevops-tool.name" . }}-console
        {{- include "busdevops-tool.selectorLabels" . | nindent 8 }}
    spec:
      hostAliases:
        - ip: 0.0.0.0
          hostnames:
            - cdh-manager-1
        - ip: 0.0.0.0
          hostnames:
            - cdh-master-1
        - ip: 0.0.0.0
          hostnames:
            - cdh-worker-1
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node
                    operator: In
                    values:
                      # before deploying, label the target node, e.g.:
                      # kubectl label node agent-2 node=public
                      - public
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - {{ template "busdevops-tool.name" . }}-console
              topologyKey: kubernetes.io/hostname
      volumes:
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: zoneinfo-config
          hostPath:
            path: /usr/share/zoneinfo
      containers:
        - name: wechart
          image: "registry.cn-beijing.aliyuncs.com/hiacloud:v1.0.0"
          imagePullPolicy: IfNotPresent   # or Always
          volumeMounts:
            - name: tz-config
              mountPath: /etc/localtime
            - name: zoneinfo-config
              mountPath: /usr/share/zoneinfo
          ports:
            - containerPort: {{ .Values.service.busdevopstool.internalPort }}
          env:
            - name: test
              value: ceshi
          resources:
{{ toYaml .Values.resources.busdevopsdata | indent 12 }}
      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
```
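The templates above repeatedly call `include "busdevops-tool.fullname"` and friends; these are defined in templates/_helpers.tpl. A minimal sketch of what such helpers typically look like, based on the conventional `helm create` boilerplate rather than this chart's actual file:

```yaml
{{/* templates/_helpers.tpl */}}
{{- define "busdevops-tool.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{- define "busdevops-tool.fullname" -}}
{{- printf "%s-%s" .Release.Name (include "busdevops-tool.name" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{- define "busdevops-tool.selectorLabels" -}}
app.kubernetes.io/name: {{ include "busdevops-tool.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
```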
ingress.yaml
```yaml
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "busdevops-tool.fullname" . -}}
{{- $svcPort := .Values.service.busdevopstool.externalPort -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app: {{ template "busdevops-tool.name" . }}
    {{- include "busdevops-tool.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
```
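The ingress template iterates over `.Values.ingress.hosts` and `.Values.ingress.tls`. A values.yaml fragment that satisfies it might look like this (the hostname and secret name are placeholders):

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: tool.example.com
      paths:
        - /
  tls:
    - hosts:
        - tool.example.com
      secretName: tool-example-tls
```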
三、Chart Debugging and Deployment
1、Debugging a chart
Helm provides tools for debugging a chart: `helm lint` and `helm install --dry-run --debug`.
```shell
# Using the chart package wechart as an example:
helm lint wechart
helm install wechart --dry-run --debug
# note: with Helm 3 the release name comes first, e.g.
# helm install test-release wechart --dry-run --debug
```
2、Labeling the node to schedule onto
Node labels must be applied in advance.
The pods in this chart are configured by default to schedule onto machines labeled public, so make sure the target machine carries the `node=public` label.
```shell
# Show the labels of all k8s nodes
kubectl get node --show-labels

# If the target node does not have the node label yet, run:
kubectl label node agent-2 node=public

# Remove the node label from node agent-2
kubectl label node agent-2 node-
```
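As an aside, the deployment template also honors an optional `.Values.nodeSelector` block, which pins pods the same way as the nodeAffinity rule with less template code (the key/value here is assumed to match the label applied above):

```yaml
# values.yaml
nodeSelector:
  node: public
```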
3、Deploying a chart
Offline deployment of a chart package (Helm 3):
```shell
# Helm 3 install.
# busdevops-tool is the release name; public is the namespace;
# ./busdevops-tool is the chart directory
helm install busdevops-tool -n public ./busdevops-tool

# Uninstall (the namespace must be given here too)
helm uninstall busdevops-tool -n public
```
Offline deployment of a chart package (Helm 2):
```shell
# Helm 2 install.
helm install ./busdevops-tool --name busdevops-tool --namespace test-project

# Uninstall
helm del --purge <release-name>
```
Reference: https://www.cnblogs.com/benjamin77/p/9977238.html