Druid

Druid: how to cache all historical node data in memory

Submitted by 流过昼夜 on 2019-12-06 11:21:25
Question: I have about 10GB of data stored on a historical node, but the memory consumption of that node is only about 2GB. When I launch a select query, the first run takes more than 30 seconds to return results; subsequent runs return in about a second (because of the broker's cache). My goal is to get the first run of any select query down to one second. To achieve that, I think a good start would be for the historical node to keep all its data in memory. Question: what are the configuration parameters in
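A hedged sketch of the historical-node settings involved (property names are from Apache Druid's runtime.properties and may vary by version; the sizes are illustrative placeholders, not recommendations):

```properties
# historical/runtime.properties (sketch; sizes are placeholders)
# Let the historical node cache and serve query results locally:
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=1000000000

# Segments are memory-mapped from the segment cache, so "all data in memory"
# mostly means leaving enough free RAM for the OS page cache on top of the
# JVM heap and direct memory:
druid.segmentCache.locations=[{"path":"/var/druid/segment-cache","maxSize":20000000000}]
druid.server.maxSize=20000000000
```

Because segments are memory-mapped files, the first slow query is usually the page cache warming up; raising the JVM heap alone will not keep the 10GB of segment data hot.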

Java advanced fundamentals: Druid

Submitted by 让人想犯罪 __ on 2019-12-06 10:57:28
Alibaba Druid properties file: driverClass=com.mysql.cj.jdbc.Driver url=jdbc:mysql://localhost:3306/ab_wzy?serverTimezone=UTC&character=utf8 user=root password=root # Druid connection pool parameters initialSize=5 minIdle=3 maxActive=10 maxWait=60000 timeBetweenEvictionRunsMillis=2000 // the Alibaba Druid connection pool import com.alibaba.druid.pool.DruidDataSource; import java.io.IOException; import java.io.InputStream; import java.sql.Connection; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; import java.util.Properties; public class JdbcUtils3 { // create the Alibaba connection pool object private static DruidDataSource ds; private static
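The excerpt breaks off mid-class; the JDK-only half of the pattern, loading the properties before handing them to the pool, can be sketched as follows (the file contents are inlined as a string here for self-containment; the Druid-specific calls, which need the com.alibaba.druid jar, are indicated in comments):

```java
import java.io.StringReader;
import java.util.Properties;

public class JdbcUtils3Sketch {
    public static void main(String[] args) throws Exception {
        // The druid.properties contents from the excerpt, inlined for the sketch;
        // in the real class they would come from getResourceAsStream("druid.properties").
        String raw = String.join("\n",
                "driverClass=com.mysql.cj.jdbc.Driver",
                "user=root",
                "password=root",
                "initialSize=5",
                "minIdle=3",
                "maxActive=10",
                "maxWait=60000");
        Properties p = new Properties();
        p.load(new StringReader(raw));

        // With the Druid jar on the classpath, a static initializer would then do:
        //   ds = new DruidDataSource();
        //   ds.setDriverClassName(p.getProperty("driverClass"));
        //   ds.setUsername(p.getProperty("user"));
        //   ds.setPassword(p.getProperty("password"));
        //   ds.setInitialSize(Integer.parseInt(p.getProperty("initialSize")));
        //   ds.setMaxActive(Integer.parseInt(p.getProperty("maxActive")));
        System.out.println(p.getProperty("driverClass"));
        System.out.println(Integer.parseInt(p.getProperty("maxActive")));
    }
}
```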

Persisting Druid monitoring logs

Submitted by 一世执手 on 2019-12-06 10:23:38
The main class implementing Druid monitoring-log persistence is com.alibaba.druid.pool.DruidDataSourceStatLoggerImpl. By default it writes through a logger at the INFO level. In logback it can be configured as: <logger name="com.alibaba.druid.pool.DruidDataSourceStatLoggerImpl" level="debug" additivity="false"> <appender-ref ref="STDOUT_SIMPLE"/> </logger> You can also set a parameter to log and reset the statistics periodically; in Spring Boot: # spring.datasource.druid.time-between-log-stats-millis: 300000 # output the statistics log every 5 minutes, clearing the counters after each dump. Official parameter documentation: https://github.com/alibaba/druid/wiki/%E5%AE%9A%E6%97%B6%E8%BE%93%E5%87%BA%E7%BB%9F%E8%AE%A1%E4%BF%A1%E6%81%AF%E5%88%B0%E6%97%A5%E5%BF%97%E4%B8%AD In addition, com.alibaba.druid.support.http.stat
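Uncommented, the Spring Boot setting above is a one-liner (the prefix assumes the druid-spring-boot-starter is on the classpath):

```properties
# application.properties (assumes druid-spring-boot-starter)
# Dump pool statistics to the log every 5 minutes, then reset the counters:
spring.datasource.druid.time-between-log-stats-millis=300000
```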

How do I input realtime data into Druid?

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-06 09:59:07
Question: I have an analytics server (for example, a click counter). I want to send its data to Druid using some API. How should I do that? Can I use Druid as a replacement for Google Analytics? Answer 1: As se7entyse7en said: you can ingest your data into Kafka and then use Druid's Kafka firehose to ingest it into Druid through real-time ingestion. After that you can interactively query Druid using its API. It must be said that firehoses can be set up only on Druid realtime nodes. Here is a tutorial on how to set up the
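The firehose/realtime-node setup described in the answer is the legacy path; in current Druid versions the equivalent is a Kafka supervisor spec submitted to the Overlord. A minimal sketch (data source name, topic, and broker address are placeholders, and the dataSchema is abbreviated):

```json
{
  "type": "kafka",
  "spec": {
    "dataSchema": {
      "dataSource": "clicks",
      "timestampSpec": { "column": "timestamp", "format": "auto" },
      "dimensionsSpec": { "dimensions": ["page", "user"] }
    },
    "ioConfig": {
      "topic": "clicks",
      "consumerProperties": { "bootstrap.servers": "localhost:9092" }
    }
  }
}
```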

Spring Boot + Quartz + database storage

Submitted by 守給你的承諾、 on 2019-12-06 09:09:48
Integrating Spring with Quartz. 1. The Quartz scheduling framework has built-in tables. Go to the Quartz website http://www.quartz-scheduler.org/ and click Downloads; after downloading, the \docs\dbTables directory contains scripts for creating the Quartz tables in common databases, e.g. "tables_mysql.sql". tables_mysql.sql and tables_mysql_innodb.sql differ only in the database engine they target. 2. Add the pom dependencies: <dependency> <groupId>org.quartz-scheduler</groupId> <artifactId>quartz-jobs</artifactId> <version>2.2.1</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-quartz</artifactId> </dependency> <!-- Quartz needs the C3P0 connection pool to persist data to the database --> <!-- Connection-pool technology by Quartz version --> <!-- before Quartz 2.0: DBCP --> <!-- Quartz 2.0 and later: C3P0
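With spring-boot-starter-quartz, switching Quartz to the JDBC job store is done in configuration; a sketch (the initialize-schema value is an assumption — keep it at never once the tables from tables_mysql.sql already exist):

```properties
# application.properties
spring.quartz.job-store-type=jdbc
spring.quartz.jdbc.initialize-schema=never
```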

configure Druid to connect to Zookeeper on port 5181

Submitted by 十年热恋 on 2019-12-06 07:54:25
I'm running a MapR cluster and want to do some time-series analysis with Druid. MapR uses a non-standard port for Zookeeper (port 5181 instead of the conventional 2181). When I start the Druid coordinator service, it attempts to connect on the conventional Zookeeper port and fails: 2015-03-03T17:46:49,614 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. 2015-03-03T17:46:49,617 WARN [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Session 0x0 for server null, unexpected error,
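Druid takes its Zookeeper connection string from the common runtime properties; a sketch for MapR's port 5181 (the host name is a placeholder for your Zookeeper quorum):

```properties
# common.runtime.properties
druid.zk.service.host=localhost:5181
```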

Integrating Druid with Spring Boot 2.1.7

Submitted by 走远了吗. on 2019-12-06 05:45:05
1. Maven dependency; only the key one is shown here, the others are omitted: <dependency> <groupId>com.alibaba</groupId> <artifactId>druid</artifactId> <version>1.1.17</version> </dependency> 2. YAML configuration: spring: application: name: Druids datasource: driver-class-name: com.mysql.cj.jdbc.Driver url: jdbc:mysql://127.0.0.1:3308/systems?useUnicode=true&characterEncoding=utf8&serverTimezone=Asia/Shanghai username: root password: 123456 initialSize: 50 maxActive: 50 validationQuery: select DATE_SUB(curdate(),INTERVAL 0 DAY) filters: stat,wall,log4j 3. Java configuration class: import org.springframework.context.annotation.Configuration; import com.alibaba
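Unflattened, the YAML from step 2 reads roughly as follows. Note that Druid-specific keys such as initialSize and filters are not standard spring.datasource properties: with the plain druid artifact from step 1, they must be bound by a custom @ConfigurationProperties class, which is what the truncated step 3 begins to set up.

```yaml
spring:
  application:
    name: Druids
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3308/systems?useUnicode=true&characterEncoding=utf8&serverTimezone=Asia/Shanghai
    username: root
    password: 123456
    initialSize: 50
    maxActive: 50
    validationQuery: select DATE_SUB(curdate(),INTERVAL 0 DAY)
    filters: stat,wall,log4j
```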

How to insert data into druid via tranquility

Submitted by ℡╲_俬逩灬. on 2019-12-06 05:37:06
Question: By following the tutorial at http://druid.io/docs/latest/tutorials/tutorial-loading-streaming-data.html, I was able to insert data into Druid via the Kafka console. The spec file examples/indexing/wikipedia.spec looks as follows: [ { "dataSchema" : { "dataSource" : "wikipedia", "parser" : { "type" : "string", "parseSpec" : { "format" : "json", "timestampSpec" : { "column" : "timestamp", "format" : "auto" }, "dimensionsSpec" : { "dimensions": ["page","language","user","unpatrolled",
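With Tranquility Server running, events matching a spec like the one above are sent over HTTP rather than through the Kafka console; the endpoint shown is the commonly documented default (/v1/post/<dataSource> on port 8200) and should be checked against your server.json. An event for the wikipedia schema could look like:

```json
{
  "timestamp": "2019-12-06T00:00:00Z",
  "page": "Main_Page",
  "language": "en",
  "user": "alice",
  "unpatrolled": "false"
}
```

POSTing this JSON to http://localhost:8200/v1/post/wikipedia would hand it to Tranquility for real-time indexing.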

61. Configuring a Druid data source in Spring Boot and viewing monitoring information

Submitted by 人走茶凉 on 2019-12-06 03:20:46
1. The result 2. Configuration steps. Reference: https://www.jianshu.com/p/898e6f7bab18 (1) Use the Druid data source (2) The DruidConfiguration configuration class: @Bean public ServletRegistrationBean startViewServlet(){ ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(new StatViewServlet(),"/druid/*"); // IP whitelist servletRegistrationBean.addInitParameter("allow","127.0.0.1"); // IP blacklist (when both are present, deny takes precedence over allow) servletRegistrationBean.addInitParameter("deny","127.0.0.1"); // console admin user servletRegistrationBean.addInitParameter("loginUsername","admin"); servletRegistrationBean.addInitParameter("loginPassword","123456"); // whether the statistics can be reset
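When the druid-spring-boot-starter is used instead of the plain druid artifact, the same StatViewServlet can be registered from properties rather than a @Bean; a sketch (the property prefix assumes the starter):

```properties
# application.properties (assumes druid-spring-boot-starter)
spring.datasource.druid.stat-view-servlet.enabled=true
spring.datasource.druid.stat-view-servlet.url-pattern=/druid/*
spring.datasource.druid.stat-view-servlet.login-username=admin
spring.datasource.druid.stat-view-servlet.login-password=123456
spring.datasource.druid.stat-view-servlet.allow=127.0.0.1
spring.datasource.druid.stat-view-servlet.reset-enable=false
```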