geoip

How to install a PHP extension with Amazon AWS Elastic Beanstalk?

Submitted by 泄露秘密 on 2019-12-22 07:39:08
Question: We are using AWS Elastic Beanstalk for our PHP application on EC2 instances. Since we opted for load balancing, instances get replaced time and again. I am wondering: if we install a PHP extension, will it be lost when the instance changes, or will it be available on the new instance as well? Asking this because we have observed that every time an instance is replaced by Elastic Beanstalk, our application is redeployed. We need to install the GeoIP extension. How to install it without affecting it on…
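
The usual answer is to script the install so Elastic Beanstalk repeats it on every new instance, e.g. via an .ebextensions config file deployed with the application. A hypothetical sketch (package, extension, and file names assumed; verify them for your platform and PHP version):

    # .ebextensions/geoip.config (sketch)
    packages:
      yum:
        GeoIP: []          # C library the PECL extension links against
        GeoIP-devel: []
    commands:
      01_install_geoip:
        # pecl exits non-zero if the extension is already built, so keep
        # redeployments idempotent
        command: "pecl install geoip || true"
      02_enable_geoip:
        command: "echo 'extension=geoip.so' > /etc/php.d/geoip.ini"

Because the config ships inside the application bundle, it is re-run whenever the environment spins up a fresh instance, which is exactly the situation described above.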

Using MaxMind GeoIP in Spark, serialized

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-21 06:57:26
Question: I am trying to use the MaxMind GeoIP API for scala-spark, which is found at https://github.com/snowplow/scala-maxmind-iplookups. I load in the file using the standard call:

    val ipLookups = IpLookups(geoFile = Some("GeoLiteCity.dat"), memCache = false, lruCache = 20000)

I have a basic CSV file which I load in that contains times and IP addresses:

    val sweek1 = week1.map { line => IP(parse(line)) }.collect {
      case Some(ip) =>
        val ipadress = ipdetect(ip.ip)
        (ip.time, ipadress)
    }

The function ipdetect is basically…
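
For the serialization problem the title hints at, the usual fix is worth sketching: IpLookups wraps a file handle and is not serializable, so it cannot be captured in a closure Spark ships to executors; instead, build it once per partition with mapPartitions. A hypothetical sketch, assuming GeoLiteCity.dat is present on every worker and the IP(time, ip) shape implied above (field names follow the library's old 0.2.x API; adjust to your version):

    val sweek1Geo = week1
      .map(line => IP(parse(line)))          // as in the question
      .collect { case Some(ip) => ip }
      .mapPartitions { iter =>
        // Constructed on the executor for each partition, so the
        // non-serializable IpLookups instance never crosses the wire.
        val ipLookups = IpLookups(geoFile = Some("GeoLiteCity.dat"),
                                  memCache = false, lruCache = 20000)
        // performLookups returns a tuple whose first slot is
        // Option[IpLocation] in the old API; pull the city out of it.
        iter.map(ip => (ip.time, ipLookups.performLookups(ip.ip)._1.flatMap(_.city)))
      }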

How does the binary DAT from MaxMind work?

Submitted by 二次信任 on 2019-12-21 05:14:21
Question: MaxMind offers a binary DAT file format for downloading their GeoIP database: http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz Does anyone know how this has been packaged? Also, is there any kind of copy protection on the data? I'd like to offer up a set of data in a similar way. Anyone with any knowledge of this will receive my undying gratitude :-)

Answer 1: It's just a proprietary binary format, heavily optimized for IP address querying. It doesn't have any copy protection.
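
The DAT layout itself is undocumented, but the general technique it optimizes for can be reproduced without it: store numeric IP ranges sorted by start and binary-search them. A minimal, hypothetical Scala sketch of that idea (not MaxMind's actual format):

    case class IpRange(start: Long, end: Long, country: String)

    // ranges must be sorted by start and non-overlapping
    def lookup(ranges: Array[IpRange], ip: Long): Option[String] = {
      var lo = 0
      var hi = ranges.length - 1
      while (lo <= hi) {
        val mid = (lo + hi) >>> 1           // overflow-safe midpoint
        val r = ranges(mid)
        if (ip < r.start) hi = mid - 1
        else if (ip > r.end) lo = mid + 1
        else return Some(r.country)         // ip falls inside this range
      }
      None                                  // ip sits in a gap between ranges
    }

Packing such fixed-width records into a flat binary file gives the same O(log n) lookup without any copy protection, which matches the answer's description of the general approach.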

Get Cloudflare's HTTP_CF_IPCOUNTRY header with JavaScript?

Submitted by 眉间皱痕 on 2019-12-21 04:55:24
Question: There are many SO questions about how to get HTTP headers with JavaScript, but for some reason they don't show the HTTP_CF_IPCOUNTRY header. If I try it with PHP, echo $_SERVER["HTTP_CF_IPCOUNTRY"]; works, so CF is working just fine. Is it possible to get this header with JavaScript?

Answer 1: Assuming you are talking about client-side JavaScript: no, it isn't possible. The browser makes an HTTP request to the server. The server notices what IP address the request came from. The server looks up…
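
A common workaround, sketched here as one option rather than the canonical answer: since PHP can see the request header, have the server inject the value into the page for client-side scripts to read (the variable name is hypothetical):

    <?php
    // Cloudflare sets HTTP_CF_IPCOUNTRY only when IP Geolocation is enabled;
    // fall back to "XX" (Cloudflare's "unknown" code) if it is missing.
    $country = isset($_SERVER['HTTP_CF_IPCOUNTRY']) ? $_SERVER['HTTP_CF_IPCOUNTRY'] : 'XX';
    ?>
    <script>
      // Hypothetical global; any client-side script can now read the value.
      var visitorCountry = <?php echo json_encode($country); ?>;
      console.log(visitorCountry);
    </script>

An XHR to a tiny server endpoint that echoes the header works the same way; either way the server, not the browser, is the one that sees the request header.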

Log management system: ELK 6.2.3

Submitted by 你说的曾经没有我的故事 on 2019-12-21 04:00:54
https://www.jianshu.com/p/88f2cbedcc2a

Preface: When I had just graduated and started working, I liked writing my own scripts to scrape and analyze log data and display it in zabbix. But when developers wanted to read the logs, they still had to log in to the server and use tailf, grep, and some regexes, which was a real hassle. Having arrived in a new environment, I needed to build a log management system; I came across ELK, wished I had found it sooner, and am recording my process of learning to use ELK from scratch.

Log management system ELK. Contents: deployment architecture diagram; deployment versions; deployment addresses; service deployment; summary.

Deployment architecture diagram: elk.png

Things to know before deploying:
1. ELK is now also called ELFK, short for elasticsearch, logstash, filebeat, and kibana.
2. The ELK architecture resembles client/server: a collector on the client side gathers the logs, and a collector on the server side collects and analyzes them. The client-side collector used to be logstash, which is written in Java and fairly memory-hungry; to avoid burdening production machines, it was replaced with filebeat, written in Go. Filebeat ships logs into redis, which serves as a message queue; the server-side logstash pulls data from redis, parses it, and passes it to elasticsearch, and finally kibana displays it (a sketch of the filebeat side follows below).
3. This install is on an internal network, so security was not considered; the point comes up during the installation.
4. This install is based on Debian; on CentOS, note that you download different packages from the official site.
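
A minimal sketch of the client side of that pipeline, assuming filebeat 6.x and placeholder paths/hosts:

    # filebeat.yml (sketch) -- tail the logs and push them into redis,
    # which acts as the message queue in front of the server-side logstash.
    filebeat.prospectors:               # renamed to filebeat.inputs in 7.x
      - type: log
        paths:
          - /var/log/nginx/access.log   # path assumed; point at your logs
    output.redis:
      hosts: ["redis-host:6379"]        # placeholder; your redis queue address
      key: "filebeat"                   # list key the logstash redis input reads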

ELK log analysis system (original)

Submitted by 不羁的心 on 2019-12-21 03:59:49
1. Introduction

ELK is composed of three components: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, automatic search load balancing, and more.
Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
Kibana is an open-source, free tool that provides a friendly web interface for the log analysis produced by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

2. Workflow

Deploy logstash on each server whose logs need collecting (this article uses an nginx server at 192.168.5.148) to monitor, filter, and collect the logs; the filtered content is handed over in a specific format to the full-text search service Elasticsearch, where you can run custom searches and present the results through Kibana (a minimal config along these lines is sketched below).

3. ELK reference material

ELK official site: https://www.elastic.co/
ELK official documentation: https://www.elastic.co/guide/index.html
ELK Chinese handbook: http://kibana.logstash.es/content/elasticsearch/monitor/logging.html
Video tutorials…
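
A hypothetical minimal logstash.conf for that workflow (grok pattern and elasticsearch hosts assumed; adjust to your actual nginx log format):

    input {
      file { path => "/var/log/nginx/access.log" type => "nginx_access" }
    }
    filter {
      # assumes nginx writes the default "combined" format
      grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
    }
    output {
      elasticsearch { hosts => ["localhost:9200"] index => "nginx-%{+YYYY.MM.dd}" }
    }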

ELK nginx log collection: a city-level IP distribution map with Amap (高德地图)

Submitted by 戏子无情 on 2019-12-21 03:57:02
1. Topology and roles:

Agent: logstash, IP: 192.168.10.7
Redis queue: IP: 192.168.10.100
Indexer: logstash, IP: 192.168.10.205
ES + kibana: on 192.168.10.100 (in a larger logging environment they can be hosted separately)

Below is the nginx log format on one of the log servers:

    log_format backend '$http_x_forwarded_for [$time_local] '
                       '"$host" "$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent"'

1. The agent configuration on 192.168.10.7:

    [luohui@BJ-huasuan-h-web-07 ~]$ cat /home/luohui/logstash-5.0.0/etc/logstash-nginx.conf
    input {
      file {
        path => ["/home/data/logs/access.log"]
        type => "nginx_access"
      }
    }
    output {
      if [type] == "nginx_access" {
        redis {
          host => ["192.168.10.100:6379"]
          data_type …
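
A hypothetical sketch of the indexer side (192.168.10.205), which the excerpt does not reach: pull from the redis queue, enrich with logstash's geoip filter for the map, and ship to elasticsearch (the redis key and grok pattern are assumptions):

    input {
      redis {
        host => "192.168.10.100"
        data_type => "list"
        key => "nginx_access"     # must match the key set in the agent's redis output
      }
    }
    filter {
      # pull the leading client address out of the raw line (pattern simplified;
      # $http_x_forwarded_for may hold several addresses), then let the geoip
      # filter attach the location fields the city map is drawn from
      grok  { match => { "message" => "%{IP:clientip}" } }
      geoip { source => "clientip" }
    }
    output {
      elasticsearch { hosts => ["192.168.10.100:9200"] }
    }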

ELK 6.5.3 study notes, part 2: shipping and managing nginx logs with rsyslog

Submitted by 白昼怎懂夜的黑 on 2019-12-21 03:56:05
Reposted from http://www.eryajf.net/2362.html
Estimated reading time: 28 minutes

Contents:
1. Converting nginx logs to JSON.
2. Sender-side configuration.
3. Receiver-side configuration.
4. Configuring logstash.
5. Basic use of kibana.

There are several hosts whose nginx logs I want to monitor and analyze, so how do we get the logs of those remote hosts to come obediently over to the ELK host? That is a question worth considering, and here I use rsyslog to solve it.

This approach seems best suited to remote hosts with only a single log to ship, as here, where only the nginx access log is handled. If more logs need forwarding from remote hosts to the ELK cluster, another method is required; or rather, this is the only variant I know how to configure. Later, when I tested forwarding multiple logs with rsyslog, I found the configuration rules overly complex and the classification quite weak, so I stopped there: once there are multiple logs, just use filebeat.

Now, without further ado, straight to the point. The current approach can be understood from the diagram below:

Notes:
- The diagram shows two nginx hosts as an example; in practice more nginx hosts can forward their logs onward.
- Both sides use rsyslog as the bridge that relays the logs: the nginx hosts are the senders and the ELK host is the receiver (a sender-side sketch follows below).
- For the different nginx logs, separate logstash instances can be started, which then forward the logs to ES.
- ES, in turn, hands all the logs to kibana.
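
A hypothetical sketch of the sender side (an nginx host), assuming rsyslog v8 with the imfile module and a placeholder ELK address:

    # /etc/rsyslog.d/nginx.conf (sketch) -- tail the JSON access log
    # produced in step 1 and forward it to the ELK host over TCP.
    module(load="imfile")
    input(type="imfile"
          File="/var/log/nginx/access_json.log"   # path assumed
          Tag="nginx_access:")
    # relay only the tagged messages; @@ means TCP, a single @ would be UDP
    if $syslogtag == 'nginx_access:' then @@elk-host:514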

GeoIP table join with a table of IPs in MySQL

Submitted by 懵懂的女人 on 2019-12-18 16:51:08
Question: I am having an issue finding a fast way of joining tables that look like this:

    mysql> explain geo_ip;
    +--------------+------------------+------+-----+---------+-------+
    | Field        | Type             | Null | Key | Default | Extra |
    +--------------+------------------+------+-----+---------+-------+
    | ip_start     | varchar(32)      | NO   |     | ""      |       |
    | ip_end       | varchar(32)      | NO   |     | ""      |       |
    | ip_num_start | int(64) unsigned | NO   | PRI | 0       |       |
    | ip_num_end   | int(64) unsigned | NO   |     | 0       |       |
    | country_code | varchar(3)       | NO   …
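
A common pattern for this kind of range lookup, sketched under the assumption of a second table logins(ip_num) holding the addresses to resolve: rather than a BETWEEN join, which MySQL cannot drive efficiently from a range, fetch the single candidate row per address through the PRIMARY KEY on ip_num_start and verify the upper bound afterwards:

    SELECT l.ip_num,
           (SELECT g.country_code
              FROM geo_ip g
             WHERE g.ip_num_start <= l.ip_num
             ORDER BY g.ip_num_start DESC
             LIMIT 1) AS country_code   -- range with the greatest start <= ip
      FROM logins l;
    -- Note: if the ranges have gaps, also check that the matched row's
    -- ip_num_end >= l.ip_num before trusting the result.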