Resend old logs from filebeat to logstash

Submitted by 与世无争的帅哥 on 2021-02-08 08:13:20

Question


Thanks in advance for your help. I would like to reload some logs to customize additional fields. I have noticed that the registry file in the Filebeat configuration keeps track of the files already harvested. However, if I remove the content of that file, I do not get the old logs back. I have also tried changing the timestamp of the source in the registry file, without success. What changes are needed to resend old logs from Filebeat to Logstash?

How can I get the logs back?
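For reference, an entry in the Filebeat 6.x registry file looks roughly like the sketch below (the inode, device, and timestamp values are illustrative, not taken from my machine):

[
  {
    "source": "/app/logs/WEB/WEB-rest-api/WEB-rest-api.log",
    "offset": 6771071,
    "timestamp": "2019-03-14T16:18:45.376-07:00",
    "ttl": -1,
    "type": "log",
    "meta": null,
    "FileStateOS": {
      "inode": 1054321,
      "device": 2049
    }
  }
]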

Update:

This is the last log line in the Tomcat container:

2019-03-11 06:22:48 [Thread-4            ] DEBUG:   ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor Connection cache monitor in thread: Thread-4 shutting down for pool: WEB

This is the event published by Filebeat:

2019-03-14T16:18:50.377-0700    DEBUG   [publish]       pipeline/processor.go:308       Publish event: {
  "@timestamp": "2019-03-14T23:18:45.376Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.6.0"
  },
  "host": {
    "name": "tomcat",
    "architecture": "x86_64",
    "os": {
      "codename": "Core",
      "platform": "centos",
      "version": "7 (Core)",
      "family": "redhat",
      "name": "CentOS Linux"
    },
    "id": "6aaed308aa5a419f880c5e45eea65414",
    "containerized": true
  },
  "source": "/app/logs/WEB/WEB-rest-api/WEB-rest-api.log",
  "log": {
    "file": {
      "path": "/app/logs/WEB/WEB-rest-api/WEB-rest-api.log"
    }
  },
  "message": "2019-03-11 06:22:48 [Thread-4            ] DEBUG:   ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor Connection cache monitor in thread: Thread-4 shutting down for pool: WEB",
  "beat": {
    "name": "tomcat",
    "hostname": "tomcat",
    "version": "6.6.0"
  },
  "offset": 6771071,
  "prospector": {
    "type": "log"
  },
  "input": {
    "type": "log"
  },
  "meta": {
    "cloud": {
      "instance_name": "tomcat",
      "machine_type": "Standard_D8s_v3",
      "region": "CanadaCentral",
      "provider": "az",
      "instance_id": "6452bcf4-7f5d-4fc3-9f8e-5ea57f00724b"
    }
  }
}

This is the log ingested by Logstash:

[2019-03-15T10:32:25,982][DEBUG][logstash.outputs.gelf    ] Sending GELF event {:event=>{"short_message"=>["2019-03-11 06:22:48 [Thread-4            ] DEBUG:   ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor Connection cache monitor in thread: Thread-4 shutting down for pool: WEB", " Connection cache monitor in thread: Thread-4 shutting down for pool: WEB"], "full_message"=>"2019-03-11 06:22:48 [Thread-4            ] DEBUG:   ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor Connection cache monitor in thread: Thread-4 shutting down for pool: WEB, Connection cache monitor in thread: Thread-4 shutting down for pool: WEB", "host"=>"{\"name\":\"tomcat\",\"os\":{\"name\":\"CentOS Linux\",\"version\":\"7 (Core)\",\"codename\":\"Core\"}}", "_source"=>"/app/logs/WEB/WEB-rest-api/WEB-rest-api.log", "_class"=>"ca.bc.gov.WEB.dbpool.WEBConnectionCacheMonitor, %{JAVACLASS}", "_tags"=>"beats_input_codec_plain_applied", "_beat_hostname"=>"tomcat", "_beat_name"=>"tomcat", "_meta_cloud"=>{}, "_log_file"=>{"path"=>"/app/logs/WEB/WEB-rest-api/WEB-rest-api.log"}, "level"=>6}}
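For context, that event is produced by the logstash-output-gelf plugin, as the log line above shows; a minimal output block looks roughly like this sketch (the Graylog host name is hypothetical):

output {
  gelf {
    host => "graylog.example.com"   # hypothetical Graylog server
    port => 12201                   # default GELF UDP port
  }
}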

Filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /apps/logs/WEB/web-api/web-api.log
    - /apps/logs/WEB/web-api/web-rest-api.log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  # Ignore files which were modified more than the defined timespan in the past.
  # Time strings like 2h (2 hours) and 5m (5 minutes) can be used.
  ignore_older: 0

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp pattern that has to be matched. The pattern below matches all lines starting with a date of the form YYYY-MM-DD
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched, based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash
  multiline.match: after
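  # As an illustration (sample lines, not from the real log), a stack trace such as
  #
  #   2019-03-11 06:22:48 [Thread-4] ERROR: ca.bc.gov.WEB.SomeClass something failed
  #   java.lang.NullPointerException
  #       at ca.bc.gov.WEB.SomeClass.someMethod(SomeClass.java:42)
  #
  # is shipped as a single event: the continuation lines do not start with a
  # date, so with negate: true and match: after they are appended to the
  # preceding line that does match.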


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["log1.cgi-dev.ca:9200"]

  # Enable ILM (beta) to use index lifecycle management instead of daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["log1.cgi-dev.ca:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]

  # Certificate for SSL client authentication
  ##ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  ##ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

However, the log shows up in neither Kibana nor Graylog. It is worth noting that INFO-level logs from the same class are visible in Kibana and Graylog, but the DEBUG-level ones are not.

Do you know what could be wrong?

Thanks a lot


Answer 1:


  1. Stop Filebeat and Logstash.
  2. Clear old data from Elasticsearch, if there is any.
  3. Delete the registry files registry and registry.old.
  4. Run Logstash.
  5. Run Filebeat using the command filebeat -e -once (a command sketch follows below).
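A minimal sketch of those steps, assuming a package install of Filebeat 6.x (registry under /var/lib/filebeat), Elasticsearch on localhost:9200, and the default filebeat-* index pattern; adjust paths, hosts, and the index to your environment:

# 1-2: stop the services and clear old data from Elasticsearch, if any
sudo systemctl stop filebeat logstash
curl -X DELETE 'http://localhost:9200/filebeat-*'

# 3: delete the registry files so every file is picked up from scratch
sudo rm -f /var/lib/filebeat/registry /var/lib/filebeat/registry.old

# 4-5: start Logstash, then run Filebeat in the foreground until all
# harvesters reach end of file, and exit
sudo systemctl start logstash
sudo filebeat -e -once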



Answer 2:


The registry keeps the inode and the byte offset. Removing the content doesn't change the inode. Try shutting down Filebeat and removing or resetting the byte offset in the registry; a sketch follows below.
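For example, with jq installed (a sketch; the registry path assumes a package install):

# with Filebeat stopped, reset every byte offset in the registry to 0 so
# each file is re-read from the beginning on the next start
sudo systemctl stop filebeat
sudo sh -c "jq 'map(.offset = 0)' /var/lib/filebeat/registry > /tmp/registry.new && mv /tmp/registry.new /var/lib/filebeat/registry"
sudo systemctl start filebeat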



Source: https://stackoverflow.com/questions/55170033/resend-old-logs-from-filebeat-to-logstash
