How to continuously feed sniffed packets to kafka?

Submitted by 这一生的挚爱 on 2019-12-01 21:02:12

With netcat

There is no need to write a server: you can use netcat and have your script read from standard input:

shell1> nc -l 8888 | ./producer.sh
shell2> sudo tshark -l | nc 127.1 8888

The -l option of tshark keeps it from buffering its output: it flushes after each packet, so the packets reach the pipe as they are captured.
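For reference, producer.sh could be as simple as a wrapper around the console producer that ships with Kafka. This is only a sketch; the broker address and topic name below are placeholders you would adapt to your cluster:

```shell
#!/bin/sh
# Hypothetical producer.sh: forward every line arriving on stdin to a
# Kafka topic via the bundled console producer. BROKER and TOPIC are
# assumptions, not values from the original question.
BROKER="${BROKER:-localhost:9092}"
TOPIC="${TOPIC:-sniffed-packets}"
exec kafka-console-producer.sh --broker-list "$BROKER" --topic "$TOPIC"
```

Because the console producer sends one message per input line, this pairs naturally with tshark's one-line-per-packet output.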


With a named pipe

You could also use a named pipe to transmit tshark output to your second process:

shell1> mkfifo /tmp/tsharkpipe
shell1> tail -f -c +0 /tmp/tsharkpipe | ./producer.sh
shell2> sudo tshark -l > /tmp/tsharkpipe
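The plumbing above can be tried out without root privileges or a running Kafka broker by substituting stand-ins for both ends: a printf writer in place of tshark and a cat reader in place of producer.sh. This is a sketch of the same named-pipe flow:

```shell
# Exercise the named-pipe flow with stand-ins for tshark and producer.sh.
pipe="$(mktemp -u)"            # unique path for the FIFO
mkfifo "$pipe"

# Reader side: stands in for `tail -f -c +0 /tmp/tsharkpipe | ./producer.sh`
cat "$pipe" > "$pipe.out" &
reader=$!

# Writer side: stands in for `sudo tshark -l > /tmp/tsharkpipe`
printf 'packet 1\npacket 2\n' > "$pipe"

wait "$reader"                 # reader exits once the writer closes the pipe
result="$(cat "$pipe.out")"
echo "$result"
rm -f "$pipe" "$pipe.out"
```

Opening the FIFO for reading blocks until a writer opens it, which is why the reader is started in the background first.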

If you use Node, you can combine child_process with kafka-node to do it. Something like this:

// kafka-node's legacy Client takes the ZooKeeper connection string.
var kafka = require('kafka-node');
var client = new kafka.Client('localhost:2181');
var producer = new kafka.Producer(client);

var spawn = require('child_process').spawn;
// -l keeps tshark from buffering, so data events arrive per packet.
var tshark = spawn('sudo', ['/usr/sbin/tshark', '-l']);

// Wait until the producer is connected before sending anything.
producer.on('ready', () => {
  tshark.stdout.on('data', (data) => {
    // data is a Buffer, so convert it to a string before splitting into lines.
    producer.send([
      {topic: 'spark-kafka', messages: data.toString().split('\n')}
    ], (err, result) => {
      if (err) console.error(err);
      else console.log('sent to kafka');
    });
  });
});

Another option would be to use Apache NiFi. With NiFi you can execute commands and pass the output to other blocks for further processing. Here you could have NiFi execute a tshark command on the local host and then pass the output to Kafka.

There is an example here which should demonstrate this type of approach in slightly more detail.
