
Prometheus Monitoring

Source: https://blog.csdn.net/Hai990218/article/details/142323921

Contents

Concepts

Deployment methods

1. Binary (tarball) installation

2. Deployment in a k8s cluster, as Pods


Concepts

Prometheus is an open-source system-monitoring and alerting toolkit. In Kubernetes, the distributed container-management system, it is the usual companion for monitoring. It is primarily a service-monitoring system but can monitor hosts as well, and it ships with a built-in time-series database that provides the data model, the collected metric series, storage, and a query interface.

Prometheus components:

PromQL: the query language that defines how metrics are selected and aggregated (see the example after this list).

node_exporter: deployed on the node hosts of the k8s cluster to collect node-level data (host metrics: disk, CPU, network, plus Pod usage). It must run on every node.

pushgateway: receives metrics pushed by jobs and holds them for Prometheus to scrape, after which PromQL queries slice and display the data.
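To get a feel for PromQL, the query below — a typical example rather than one from the original post — computes per-node CPU busyness from the counters node_exporter exposes:

# percentage of CPU time spent non-idle per instance, averaged over 5 minutes
100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100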

Workflow diagram: (the original post's diagram is not reproduced here)

Prometheus features:

1. A multi-dimensional data model: state changes of the monitored targets are recorded in sequence, with a sample stored for each data point (service metrics, application performance, network data, and so on).

2. A built-in time-series database: TSDB. Its characteristics:

    1. Very large storage volume.
    2. Mostly write operations.
    3. Writes are appended in time order.
    4. Strong performance under high concurrency.

3. The PromQL query language.

4. Data is pulled over HTTP.

5. Built-in service discovery.

6. The Grafana frontend displays metric data in a friendlier way.

Alertmanager: alert management. It is a standalone module with its own configuration; alert channels include email, DingTalk, and WeCom (enterprise WeChat).

Interview question: differences between Prometheus and Zabbix

1. How metrics are collected

Zabbix is split into a server and clients; an agent on each client sends data to the server. Communication is over TCP (IP + port).

Prometheus collects data from the clients itself: the server interacts with each client and pulls the monitoring metrics from it. Communication is over HTTP.

2. Data storage

Zabbix stores data in an external database: MySQL, PostgreSQL, or Oracle — all relational databases.

Prometheus has its built-in time-series database, TSDB, which stores only time-series values.

3. Query performance

Zabbix's query capability is relatively weak.

Prometheus's querying is more powerful and faster.

4. Alerting

Both have built-in alerting, but Prometheus cannot place phone-call alerts.

5. What they monitor

Zabbix mainly monitors devices (server state: CPU, memory, disk, network traffic, plus custom items for programs not deployed in containers). Zabbix has been around longer and is more mature; it suits scenarios with modest monitoring demands where only servers and devices need watching.

Prometheus is monitoring software tailored to k8s: it integrates better with container products and is more customizable, which makes it a fit for microservice scenarios.

Deployment methods

1. Binary (tarball) installation

Copy node_exporter-1.5.0.linux-amd64.tar.gz into /opt on all three node hosts.

Copy prometheus-2.45.0.linux-amd64.tar.gz and grafana-enterprise-7.5.11-1.x86_64.rpm into /opt on the master host.

On the master host:

tar -xf prometheus-2.45.0.linux-amd64.tar.gz

mv prometheus-2.45.0.linux-amd64 prometheus

cat > /usr/lib/systemd/system/prometheus.service <<'EOF'
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io
After=network.target

[Service]
Type=simple
ExecStart=/opt/prometheus/prometheus \
--config.file=/opt/prometheus/prometheus.yml \
--storage.tsdb.path=/opt/prometheus/data/ \
--storage.tsdb.retention=15d \
--web.enable-lifecycle
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

systemctl restart prometheus.service

netstat -antp | grep 9090    # check that Prometheus's port 9090 is up

cd prometheus/

vim prometheus.yml
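The post does not show the edit itself; a minimal sketch of the scrape job one would append under scrape_configs in prometheus.yml, with placeholder node addresses (only 192.168.233.10 is confirmed by this walkthrough — substitute your three hosts):

scrape_configs:
  # ... the default job scraping Prometheus itself stays as generated ...
  - job_name: nodes
    static_configs:
      - targets:
          - 192.168.233.10:9100    # placeholder addresses for the three nodes
          - 192.168.233.20:9100
          - 192.168.233.30:9100

After editing, ./promtool check config prometheus.yml (promtool ships in the same tarball) validates the file, and since the unit enables --web.enable-lifecycle, curl -X POST http://192.168.233.10:9090/-/reload applies it without a restart.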

On all three node hosts:

cd /opt/

tar -xf node_exporter-1.5.0.linux-amd64.tar.gz

mv node_exporter-1.5.0.linux-amd64 node_exporter-1.5.0

cd node_exporter-1.5.0/

mv node_exporter /usr/local/bin/

cat > /usr/lib/systemd/system/node_exporter.service <<'EOF'
[Unit]
Description=node_exporter
Documentation=https://prometheus.io/
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/node_exporter \
--collector.ntp \
--collector.mountstats \
--collector.systemd \
--collector.tcpstat
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

systemctl restart node_exporter.service

netstat -antp | grep 9100    # check that the port is up

systemctl restart prometheus.service

netstat -antp | grep 9090

Now visit 192.168.233.10:9090 in a browser.
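A quick sanity check is also possible from the shell, using endpoints Prometheus exposes (the IP is the master address in this walkthrough):

curl -s http://192.168.233.10:9090/-/healthy                                     # liveness check
curl -s http://192.168.233.10:9090/api/v1/targets | grep -o '"health":"[^"]*"'   # per-target scrape health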

On the master host:

rpm -ivh grafana-enterprise-7.5.11-1.x86_64.rpm

systemctl restart grafana-server.service

netstat -antp | grep 3000

Then go back to the browser and visit 192.168.233.10:3000.

Account: admin  Password: admin

Dashboard templates: Grafana dashboards | Grafana Labs

Add the data source (type Prometheus).

Import a template.
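Instead of clicking through the UI, Grafana can also provision the data source from a file; a minimal sketch, assuming Prometheus at the master address used above:

# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://192.168.233.10:9090
    isDefault: true

Grafana picks the file up on restart (systemctl restart grafana-server.service).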

2. Deployment in a k8s cluster, as Pods

Components:

node_exporter: the per-node data collector, deployed as a DaemonSet.

prometheus: the main monitoring program.

grafana: the graphical frontend.

alertmanager: the alerting module.

Steps:

kubectl create ns monitor-sa

cd /opt/

mkdir prometheus

cd prometheus/

1. Deploy the node_exporter data collector

vim node_exporter.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100
        resources:
          limits:
            cpu: "0.5"
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /

kubectl apply -f node_exporter.yaml

kubectl get pod -n monitor-sa -o wide

Check the collector in a browser: 192.168.233.31:9100/metrics
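Or from any shell that can reach the node:

curl -s http://192.168.233.31:9100/metrics | head    # first few exposed metrics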

kubectl create serviceaccount monitor -n monitor-sa

kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor-sa --clusterrole=cluster-admin  --serviceaccount=monitor-sa:monitor
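The binding above grants cluster-admin for simplicity; read access to the discovery objects is all Prometheus actually needs. A narrower ClusterRole sketch (the name monitor-readonly is made up here) that could replace cluster-admin in the binding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitor-readonly    # hypothetical name
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/metrics", "nodes/proxy", "services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]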

2. Deploy the Alertmanager alerting module

Copy prometheus-alertmanager-cfg.yaml to /opt/prometheus/.

kubectl apply -f  prometheus-alertmanager-cfg.yaml

vim alter-email.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.qq.com:25'
      smtp_from: '1332344799@qq.com'
      smtp_auth_username: '1332344799@qq.com'
      smtp_auth_password: 'wrhdyfylhfyriijc'
      smtp_require_tls: false
    route:
      group_by: [alertname]
      group_wait: 10s
      group_interval: 10s
      repeat_interval: 10m
      receiver: default-receiver
    receivers:
    - name: 'default-receiver'
      email_configs:
      - to: '1332344799@qq.com'
        send_resolved: true

kubectl apply -f alter-email.yaml

3. Deploy the Prometheus main program

vim prometheus-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: prometheus
    component: server

vim prometheus-alter.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: 'true'
  name: alertmanager
  namespace: monitor-sa
spec:
  ports:
  - name: alertmanager
    nodePort: 30066
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort

vim prometheus-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      serviceAccountName: monitor
      initContainers:
      - name: init-chmod
        image: busybox:latest
        command: ['sh', '-c', 'chmod -R 777 /prometheus;chmod -R 777 /etc']
        volumeMounts:
        - mountPath: /prometheus
          name: prometheus-storage-volume
        - mountPath: /etc/localtime
          name: timezone
      containers:
      - name: prometheus
        image: prom/prometheus:v2.45.0
        command:
        - prometheus
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        - --storage.tsdb.retention=720h
        - --web.enable-lifecycle
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus/
        - mountPath: /prometheus/
          name: prometheus-storage-volume
        - name: timezone
          mountPath: /etc/localtime
        - name: k8s-certs
          mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
      - name: alertmanager
        image: prom/alertmanager:v0.20.0
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        - "--log.level=debug"
        ports:
        - containerPort: 9093
          protocol: TCP
          name: alertmanager
        volumeMounts:
        - name: alertmanager-config
          mountPath: /etc/alertmanager
        - name: alertmanager-storage
          mountPath: /alertmanager
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
          defaultMode: 0777
      - name: prometheus-storage-volume
        hostPath:
          path: /data
          type: DirectoryOrCreate
      - name: k8s-certs
        secret:
          secretName: etcd-certs
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: alertmanager-config
        configMap:
          name: alertmanager
      - name: alertmanager-storage
        hostPath:
          path: /data/alertmanager
          type: DirectoryOrCreate
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai

kubectl apply -f prometheus-deployment.yaml

kubectl apply -f prometheus-svc.yaml

kubectl apply -f prometheus-alter.yaml


kubectl get pod -n monitor-sa

kubectl get svc -n monitor-sa
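The NodePort assigned to the prometheus Service is random unless pinned in the manifest; it can be read directly:

kubectl get svc prometheus -n monitor-sa -o jsonpath='{.spec.ports[0].nodePort}'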

4. Deploy the Grafana graphical frontend

vim grafana.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:7.5.11
        securityContext:
          runAsUser: 104
          runAsGroup: 107
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: false
        - mountPath: /var
          name: grafana-storage
        - mountPath: /var/lib/grafana
          name: graf-test
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
      - name: graf-test
        persistentVolumeClaim:
          claimName: grafana
---
apiVersion: v1
kind: Service
metadata:
  name: monitoring-grafana    # required field; the flattened original lost it, value assumed from the Deployment
  labels:
    name: monitoring-grafana
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort

kubectl apply -f grafana.yaml

kubectl get svc -n kube-system 

Now open Prometheus in the browser: 192.168.233.31:30369 (the NodePort assigned in this setup).

If the kube-proxy target shows up as down on the Targets page (illustrated with a screenshot in the original post), fix it as follows:

Handle the kube-proxy monitoring alert:

kubectl edit configmap kube-proxy -n kube-system

Find metricsBindAddress and change it to:

metricsBindAddress: "0.0.0.0:10249"
# kube-proxy's metrics port 10249 listens on 127.0.0.1 by default and must be switched to the node address

Restart kube-proxy by deleting its Pods (the DaemonSet recreates them):

kubectl get pods -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pods -n kube-system
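To confirm the change took effect once the Pods are back (node IP from this walkthrough):

curl -s http://192.168.233.31:10249/metrics | head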

Open Grafana: 192.168.233.31:31193 (the NodePort assigned to the monitoring-grafana Service).

Import the dashboard template.

Stress test:

vim ylcs.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-test
  labels:
    hpa: test
spec:
  replicas: 1
  selector:
    matchLabels:
      hpa: test
  template:
    metadata:
      labels:
        hpa: test
    spec:
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash", "-c", "yum install -y stress --nogpgcheck && sleep 3600"]
        volumeMounts:
        - name: yum
          mountPath: /etc/yum.repos.d/
      volumes:
      - name: yum
        hostPath:    # the volume source was truncated in the original; mounting the node's repo files is assumed
          path: /etc/yum.repos.d/

kubectl apply -f ylcs.yaml

Enter the container and generate load:
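The original shows these commands in a screenshot; roughly, exec into the Pod (its generated name will differ) and run stress:

kubectl exec -it $(kubectl get pod -l hpa=test -o jsonpath='{.items[0].metadata.name}') -- /bin/bash
# inside the container:
stress --cpu 4    # keep 4 CPU workers busy until the alert fires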

Your mailbox should then receive the alert email.
