Persistent storage for Kubernetes Event data

Submitted by 巧了我就是萌 on 2019-12-15 19:39:07

Event

Event is a Kubernetes resource object that records the notable occurrences in a cluster's life, which makes it valuable for troubleshooting. Keeping a large volume of events in etcd, however, puts real pressure on its performance and capacity, so by default only the most recent hour of events is retained. Since day-to-day troubleshooting of a Kubernetes environment often depends on the clues that events provide, an external tool is needed to persist them.
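
Strictly speaking, the one-hour window is enforced by the kube-apiserver rather than by etcd itself: the --event-ttl flag controls how long events live, and it defaults to 1h0m0s. If a somewhat longer window is enough, the flag can be raised at the cost of extra etcd load; a minimal sketch, assuming a kubeadm-style static pod manifest:

# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout assumed)
spec:
  containers:
  - command:
    - kube-apiserver
    - --event-ttl=24h   # default is 1h0m0s; longer TTLs mean more objects kept in etcd

For true long-term retention, though, events should be shipped out of the cluster, which is what the rest of this post sets up.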

Viewing Events

[root@master events]# kubectl get event 
LAST SEEN   TYPE      REASON      OBJECT                       MESSAGE
5m27s       Warning   Unhealthy   pod/nginx-8458d4c6b6-6t94d   Liveness probe failed: Get http://10.244.1.12:9020/ywpt/health: dial tcp 10.244.1.12:9020: connect: connection refused
45s         Warning   BackOff     pod/nginx-8458d4c6b6-6t94d   Back-off restarting failed container
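
Two other handy views (standard kubectl flags; substitute an event name from the listing above):

[root@master events]# kubectl get events --sort-by=.metadata.creationTimestamp
[root@master events]# kubectl get event <event-name> -o yaml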

How to collect events

Use the open-source project eventrouter for collection: eventrouter watches the API server and writes each event to a log file, a filebeat sidecar ships those lines to a Kafka topic, and Logstash consumes the topic and indexes the events into Elasticsearch.
Project URL: https://github.com/heptiolabs/eventrouter

[root@master events]# cat event-rabc.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eventrouter 
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: eventrouter 
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eventrouter 
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eventrouter
subjects:
- kind: ServiceAccount
  name: eventrouter
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: eventrouter-cm
  namespace: kube-system
data:
  config.json: |- 
    {
      "sink": "glog"
    }
---
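
With the glog sink configured above, eventrouter writes every event it observes to its log files as one JSON document per line; that log directory is what the filebeat sidecar below tails. Judging from the grok pattern attempted later in this post, each payload carries a top-level verb field, roughly of the form (illustrative, not captured from a live cluster):

{"verb":"ADDED","event":{"metadata":{...},"reason":"BackOff","message":"Back-off restarting failed container",...}}
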
[root@master events]# cat event-cm.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
data:
  filebeat.yml: |-
    filebeat.prospectors:
    - input_type: log
      paths:
        - "/data/log/eventrouter/*"
    output.kafka:   # ship the cluster events collected by filebeat to Kafka
      hosts: ["kafka1.bd.com:9092","kafka2.bd.com:9092","kafka3.bd.com:9092"]  # Kafka cluster brokers
      topic: "test_event"  # if auto topic creation is disabled on the Kafka cluster, create this topic manually
      codec.format:
        string: "%{[message]}"
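
The codec.format setting is what keeps the pipeline clean: filebeat publishes only the raw message field, i.e. the eventrouter JSON line itself, instead of wrapping it in filebeat's own JSON envelope with host and offset metadata.
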
[root@master events]# cat eventrouter.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventrouter
  namespace: kube-system
  labels:
    app: eventrouter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eventrouter
  template:
    metadata:
      labels:
        app: eventrouter
        tier: control-plane-addons
    spec:
      hostAliases:
      - ip: "192.168.1.1"
        hostnames:
        - "kafka1.bd.com"
      - ip: "192.168.1.2"
        hostnames:
        - "kafka2.bd.com"
      - ip: "192.168.1.3"
        hostnames:
        - "kafka3.bd.com"
      containers:
        - name: kube-eventrouter
          image: baiyongjie/eventrouter:v0.2
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "/eventrouter -v 3 -log_dir /data/log/eventrouter"
          volumeMounts:
          - name: eventrouter-cm
            mountPath: /etc/eventrouter
          - name: log-path
            mountPath: /data/log/eventrouter
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:6.3.2
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "filebeat -c /etc/filebeat/filebeat.yml"
          volumeMounts:
          - name: filebeat-config
            mountPath: /etc/filebeat/
          - name: log-path
            mountPath: /data/log/eventrouter
      serviceAccountName: eventrouter
      volumes:
        - name: eventrouter-cm
          configMap:
            name: eventrouter-cm
        - name: filebeat-config
          configMap:
            name: filebeat-config
        - name: log-path
          emptyDir: {}
[root@master events]# kubectl apply -f event-rabc.yaml
[root@master events]# kubectl apply -f event-cm.yml
[root@master events]# kubectl apply -f eventrouter.yaml
[root@master events]# kubectl get pods -n kube-system  |grep event
eventrouter-7bb898ff4b-qfv6f     2/2     Running   0          26s
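
Because -log_dir sends eventrouter's output to files rather than stdout, the quickest sanity check is to list the shared log directory from inside the pod (using the pod name from the output above):

[root@master events]# kubectl exec -n kube-system eventrouter-7bb898ff4b-qfv6f -c filebeat -- ls -l /data/log/eventrouter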

Check that the topic has been created on the Kafka cluster:

[root@master kafka_2.12-2.1.3]# source /etc/profile && ./bin/kafka-topics.sh --list --zookeeper kafka1.bd.com:2181   # --zookeeper takes the ZooKeeper address (default port 2181 assumed here), not a broker
test_event
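
To confirm that raw event JSON is actually arriving, consume a few messages directly with the standard console consumer:

[root@master kafka_2.12-2.1.3]# ./bin/kafka-console-consumer.sh --bootstrap-server kafka1.bd.com:9092 --topic test_event --from-beginning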

Filtering the collected Kubernetes events with Logstash

[root@master home]# cat /home/logstash-6.5.3-k8s-event/config/produce.conf
input {   
    kafka {
        bootstrap_servers => "kafka1.bd.com:9092,kafka2.bd.com:9092,kafka3.bd.com:9092"
        group_id => "test_event_group"
        topics => ["test_event"]
        consumer_threads => 5 
        decorate_events => true
    }
}
#filter {    # The original plan was to filter the events coming in from Kafka and keep only key fields, so the data shown in Kibana would be cleaner, but it turned out that message is not plain JSON (see the sketch below)
#    grok {
#        match => {"message" => "(?<test_event>(?:verb)(.*$)?)"}
#        remove_field => ["message"]
#    }
#    json {
#        source => "message"
#        target => "jsoncontent"
#        remove_field => ["temMsg"]
#    }
#}
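
The non-JSON prefix is almost certainly glog's doing: the glog sink prepends a header such as I1215 19:39:07.123456       1 main.go:123] to every line before the JSON payload. A sketch of a filter that strips everything up to the closing bracket of that header and parses the rest (untested against this exact log format):

filter {
    grok {
        match => { "message" => "\] (?<raw_event>\{.*\})$" }
    }
    json {
        source => "raw_event"
        target => "event"
        remove_field => ["message", "raw_event"]
    }
}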

output {
  elasticsearch {
    hosts => [ "192.168.1.1:9200","192.168.1.2:9200","192.168.1.3:9200" ]  # Elasticsearch cluster addresses
    index => "test_event-%{+YYYY.MM.dd}" 
  }
}
[root@master home]# (/home/logstash-6.5.3-k8s-event/bin/logstash -f /home/logstash-6.5.3-k8s-event/config/produce.conf) &
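
After logstash has run for a minute or two, confirm that the daily index is being created (standard Elasticsearch cat API, pointed at one of the hosts configured above):

[root@master home]# curl -s '192.168.1.1:9200/_cat/indices/test_event-*?v'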