partition

Common Kafka commands

南笙酒味 posted on 2019-12-22 18:34:23
1) Download kafka_2.10-0.8.2.1.tgz and extract it (the Scala 2.10 build).
2) Edit the config files (optional). In config/zookeeper.properties, dataDir=/zookeeper creates the data directory at the drive root (D:\ here), while dataDir=zookeeper creates it under bin/windows.
3) Start ZooKeeper (window 1): zookeeper-server-start.sh ../config/zookeeper.properties
4) Test ZooKeeper (window 2): echo stat | nc 192.168.159.133 2181
5) Start Kafka (window 3): kafka-server-start.sh ../config/server.properties
6) Create a topic (window 4): kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic mytopic
7) Describe the topic (window 4): kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic
Topic:mytopic PartitionCount:3 ReplicationFactor:1

Oracle : Drop multiple partitions

假如想象 posted on 2019-12-22 13:58:29
Question: A table TMP has 5 partitions, namely P_1, P_2, ..., P_5. I need to drop some partitions of TMP; the partitions to drop are derived by another query. Ex: ALTER TABLE TMP DROP PARTITIONS (SELECT ... FROM ... //expression to get partition names). Let's say the SELECT statement returns P_1 & P_5. The part query of the ALTER statement above doesn't work. Is there any way to drop partitions with input from a SELECT statement? Answer 1: You can use dynamic SQL in an anonymous PL/SQL block; Begin for i in
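The answer's anonymous PL/SQL block builds one ALTER TABLE ... DROP PARTITION statement per name returned by the driving query and runs each with EXECUTE IMMEDIATE. The same generate-and-execute pattern can be sketched with Python's built-in sqlite3 module; SQLite has no partitions, so per-partition tables stand in for them, and all names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-ins for the five partitions of TMP (SQLite has no partitioning).
for p in ["P_1", "P_2", "P_3", "P_4", "P_5"]:
    cur.execute(f"CREATE TABLE TMP_{p} (id INTEGER)")

# Driving query that decides which partitions to drop (a literal list here).
cur.execute("CREATE TABLE to_drop (pname TEXT)")
cur.executemany("INSERT INTO to_drop VALUES (?)", [("P_1",), ("P_5",)])

# The dynamic-SQL pattern: fetch the names, then build and execute one DDL
# statement per name -- the equivalent of the PL/SQL
#   EXECUTE IMMEDIATE 'ALTER TABLE TMP DROP PARTITION ' || i.pname;
for (pname,) in cur.execute("SELECT pname FROM to_drop").fetchall():
    cur.execute(f"DROP TABLE TMP_{pname}")

remaining = [r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table' "
    "AND name LIKE 'TMP_%' ORDER BY name")]
print(remaining)  # ['TMP_P_2', 'TMP_P_3', 'TMP_P_4']
```

In real Oracle the loop body would concatenate the partition name into the ALTER statement exactly as in the answer; only the generate-then-execute shape carries over here.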

Window Functions in SQL Server

扶醉桌前 posted on 2019-12-21 09:12:57
A window is the set of rows that the OVER() clause defines, partitioned by the specified field, over the result set of a SELECT; in other words, a window is a collection of data rows. As shown in the figure below, the result set is windowed by the Province field.
Window functions are functions applied to a window. Ranking functions, analytic functions, and aggregate functions can all compute values over the rows of a window. You can use the OVER clause with a window function to compute aggregates such as moving averages, cumulative aggregates, running totals, or the top N per group. In window-based computation, each window can be seen as a group, or partition.
A window can move (slide); this is defined by the ORDER BY clause inside OVER(), which makes the window computation follow a specific order.
Note the execution order of the OVER() clause: it runs after the SELECT clause and after DISTINCT, but before ORDER BY (DISTINCT itself runs after SELECT).
Create the sample data with:
create table dbo.dt_test ( ID int, Code int )
go
--insert data
insert into dbo.dt_test(ID,Code) values(3,1),(3,2),(1,1),(1,2),(2,3),(1,2)
go
1. Computing an aggregate over the whole window: a window is defined by the OVER() clause. The entire query result set can be treated as one window, or you can use partition by
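The two cases described above, an empty OVER() that treats the whole result set as one window and PARTITION BY splitting it into per-group windows, can be tried on the same sample data with Python's built-in sqlite3 module (assuming a SQLite build with window-function support, 3.25+; SQL Server syntax for these two queries is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dt_test (ID INTEGER, Code INTEGER)")
cur.executemany("INSERT INTO dt_test VALUES (?, ?)",
                [(3, 1), (3, 2), (1, 1), (1, 2), (2, 3), (1, 2)])

# An empty OVER() makes the whole result set one window:
# every row carries the same grand total of Code.
totals = cur.execute(
    "SELECT ID, Code, SUM(Code) OVER () AS total FROM dt_test").fetchall()
print(totals[0][2])  # 11

# PARTITION BY ID splits the window per ID: each row carries its group's sum.
per_id = cur.execute(
    "SELECT ID, Code, SUM(Code) OVER (PARTITION BY ID) AS id_total "
    "FROM dt_test").fetchall()
print(sorted({(i, s) for i, _, s in per_id}))  # [(1, 5), (2, 3), (3, 3)]
```

Note that unlike GROUP BY, the window query keeps all six detail rows and only adds the aggregate as an extra column.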

How to scale tf.nn.embedding_lookup_sparse

╄→尐↘猪︶ㄣ posted on 2019-12-21 03:03:30
Question: I'm trying to build a very large sparse model (e.g. LR, if there is only one embedding layer). The input dimension can be as large as 100000000, and the samples are very sparse; the average number of non-zero values is around 100. Since the weights are very large, we have to partition and distribute them onto different servers. Here is the code: weights = tf.get_variable("weights", weights_shape, partitioner=tf.fixed_size_partitioner(num_shards, axis=0), initializer=tf.truncated_normal
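The tf.fixed_size_partitioner(num_shards, axis=0) in the excerpt cuts dimension 0 of the weights into num_shards contiguous pieces that can live on different parameter servers. The row-to-shard arithmetic that implies can be sketched in plain Python (a simplification for illustration, not TensorFlow's actual implementation):

```python
def shard_bounds(dim, num_shards):
    """Split `dim` rows into `num_shards` contiguous shards.
    The first `dim % num_shards` shards get one extra row, so the
    shards are as equal as possible, roughly what a fixed-size
    partitioner along axis 0 produces."""
    base, extra = divmod(dim, num_shards)
    bounds, start = [], 0
    for i in range(num_shards):
        size = base + (1 if i < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

def shard_of(row, bounds):
    """Return (shard index, local row offset) for a global embedding row."""
    for i, (lo, hi) in enumerate(bounds):
        if lo <= row < hi:
            return i, row - lo
    raise IndexError(row)

bounds = shard_bounds(100_000_000, 8)
print(bounds[0])                      # (0, 12500000)
print(shard_of(99_999_999, bounds))   # (7, 12499999)
```

A sparse lookup then only has to touch the handful of shards that hold the ~100 non-zero feature ids of each sample, which is what keeps the distributed lookup tractable.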

Manually Removing a 2008 R2 Child Domain and Its Domain Controller

我的未来我决定 posted on 2019-12-20 18:56:11
When the child domain itself is broken and cannot be demoted, remove it manually with ntdsutil.
1. Remove the child domain's domain controller:
ntdsutil: me cl
metadata cleanup: con
server connections: con to do xx.com
server connections: q
metadata cleanup: se op tar
select operation target: li si
select operation target: se si 0
select operation target: li do
select operation target: se do 1
select operation target: li se in si
select operation target: se se 1
select operation target: q
metadata cleanup: re se se
2. Remove the child domain:
ntdsutil: me cl
metadata cleanup: con
server connections: con to se xx.xx.com (this is the server, not the domain)
server connections: q
metadata cleanup: se op tar
select operation target: li si

A Taste of Kafka

橙三吉。 posted on 2019-12-20 11:56:26
Contents
1. Do you know these Kafka terms?
2. How does Kafka store data?
3. When Kafka falls over, how is high availability guaranteed?
4. How does Kafka keep data from being lost?
Another misty, rainy winter day; with a cup of warm tea, the feel of spring already drifts up in the steam. Tea mellows the mind, but the craft still has to be learned. Let's take a quick look at some of Kafka's principles and features.
1. Do you know these Kafka terms?
Kafka's dominance in message processing owes much to its excellent architecture. Let's look at the components and concepts in Kafka's world. Below are some dry definitions; if you already know them, check whether they are explained correctly.
Topic: as the name suggests, a label for one category of messages. Kafka classifies the message sets it handles by Topic; logically, a Topic is one collection of messages.
Partition: a data partition, or shard; this is the physical grouping on storage. Each Topic may map to multiple partitions. For example, if a Topic named Order needs to store 5 TB of messages on disk and is given 5 partitions, each partition holds 1 TB of data.
We keep saying Kafka is a distributed, highly reliable messaging system, and this is part of why: multiple partitions can be spread across different servers, storing the data on different machines' disks.
Broker: Kafka can be deployed as a distributed cluster. The cluster has multiple servers, each running one Kafka process, and that process is called a Broker.
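To make the Partition definition concrete: a producer must decide which of a topic's partitions each message goes to. Kafka's default partitioner hashes the message key (with murmur2) modulo the partition count; the sketch below uses CRC32 as a stand-in, so the exact partition numbers are illustrative and only the same-key-lands-on-same-partition property is the point:

```python
import zlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Map a message key to one of the topic's partitions.
    Kafka's default partitioner computes hash(key) % num_partitions
    with murmur2 (and round-robins keyless messages); CRC32 stands
    in for the hash here."""
    return zlib.crc32(key) % num_partitions

# A 5-partition "Order" topic: every message with the same key lands on
# the same partition, so the 5 TB spreads across brokers while per-key
# ordering is preserved within each partition.
parts = {k: choose_partition(k.encode(), 5)
         for k in ("order-1", "order-2", "order-3")}
print(parts)
```

This is also why ordering in Kafka is guaranteed only within a partition, not across the whole topic.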

Configuring Nagios on Ubuntu

风流意气都作罢 posted on 2019-12-20 09:07:23
References:
http://www.cnblogs.com/mchina/archive/2013/02/20/2883404.html
http://my.oschina.net/duangr/blog/183160
Note: this post is a record of my Nagios setup following the two references above; most of the theory is copied from them. See those posts for the details.
1. About Nagios
Nagios is an open-source computer-system and network monitoring tool. It monitors host state on Windows, Linux, and Unix; network devices such as switches and routers; printers; and more. When a system or service becomes abnormal, it immediately alerts operations staff by email or SMS, and sends a recovery notification once the state returns to normal.
Nagios was originally named NetSaint, developed and still maintained by Ethan Galstad. NAGIOS is an acronym: "Nagios Ain't Gonna Insist On Sainthood"; "Agios" is the Greek word for "saint". Nagios was developed for Linux but also works very well on Unix.
Main features:
Network service monitoring (SMTP, POP3, HTTP, NNTP, ICMP, SNMP, FTP, SSH)
Host resource monitoring (CPU load, disk usage, system logs), including Windows hosts (using

How to find median in sql

喜你入骨 posted on 2019-12-20 05:22:09
Question: I have the following SQL query, which gives me the total H_TIME grouped by month, week, and day. Instead, I want the median H_TIME for month, week, and day. How do I do that in Oracle SQL? SELECT DAY, MEDIAN(H_TIME) AS HANDLE_TIME FROM( select MONTH, WEEK, DAY, CASE WHEN C.JOINED IS NOT NULL THEN (NVL(C.TOTAL_TALK,0) + NVL(C.TOTAL_HOLD,0) + (NVL((C.DATETIME - C.START_DATETIME)*86400,0)) )/86400 ELSE 0 END AS H_TIME from TABLE1 C LEFT JOIN TABLE2 S ON S.ID = C.ID where c.direct = 'Inbound' ) where
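The shape of the query above, compute H_TIME per row in the inner query and then take MEDIAN(H_TIME) per group, can be sanity-checked in plain Python (the rows below are illustrative, not data from the question's tables):

```python
from statistics import median
from collections import defaultdict

# (day, h_time) pairs, standing in for the inner query's output.
rows = [("Mon", 2.0), ("Mon", 4.0), ("Mon", 10.0),
        ("Tue", 1.0), ("Tue", 3.0)]

groups = defaultdict(list)
for day, h_time in rows:
    groups[day].append(h_time)

# Equivalent of SELECT DAY, MEDIAN(H_TIME) ... GROUP BY DAY
handle_time = {day: median(times) for day, times in groups.items()}
print(handle_time)  # {'Mon': 4.0, 'Tue': 2.0}
```

Note how the Monday median (4.0) differs from both the total (16.0) and the mean, which is the behavioral change the question is after.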

Find groups of thousands which sum to a given number, in lexical order

允我心安 posted on 2019-12-20 03:44:16
Question: A large number can be comma-formatted into groups of three to read more easily, e.g. 1050 = 1,050 and 10200 = 10,200. The sum of each of these groups of three would be: 1050 = 1,050 gives 1 + 50 = 51; 10200 = 10,200 gives 10 + 200 = 210. I need to search for matches in the sum of the groups of three. Namely, if I am searching for 1234, then I am looking for numbers whose sum of threes = 1234. The smallest match is 235,999, since 235 + 999 = 1234. No other integer less than 235,999 gives a sum of threes
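The "sum of threes" above is just the sum of a number's base-1000 digits, which makes the 235,999 claim easy to verify by brute force (fine for checking small targets; an actual search in lexical order would enumerate more cleverly):

```python
def sum_of_thousands(n: int) -> int:
    """Sum the 3-digit groups of n, e.g. 235,999 -> 235 + 999 = 1234."""
    total = 0
    while n:
        n, group = divmod(n, 1000)  # peel off the lowest group of three
        total += group
    return total

def smallest_with_group_sum(target: int) -> int:
    """Smallest non-negative integer whose groups of three sum to target."""
    n = 0
    while sum_of_thousands(n) != target:
        n += 1
    return n

print(sum_of_thousands(1050))          # 51
print(sum_of_thousands(10200))         # 210
print(smallest_with_group_sum(1234))   # 235999
```

The brute-force result agrees with the argument in the question: any smaller six-digit number has a low group of at most 999, forcing the high group to be at least 235, and 235,999 is the smallest such combination.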