hbase

Hortonworks shc unresolved dependencies

只愿长相守 submitted at 2019-12-14 01:56:05

Question: I would like to use the Hortonworks HBase connector (shc, see the GitHub guide), but I don't know how to import it into my project. I have the following build.sbt:

```scala
name := "project"
version := "1.0"
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.11" % "2.2.0",
  "org.apache.spark" % "spark-sql_2.11" % "2.2.0",
  "org.scala-lang" % "scala-compiler" % "2.11.8",
  "com.hortonworks" % "shc" % "1.1.2-2.1-s_2.11-SNAPSHOT"
)
```

And it gives me the following unresolved dependencies:
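The shc artifacts are not published to Maven Central, which is the usual cause of this resolution failure; they are hosted in the Hortonworks public repository. A minimal sketch of the fix, assuming the commonly used Hortonworks releases repository URL and a released (non-SNAPSHOT) version — verify both against your environment:

```scala
// build.sbt — add the Hortonworks repository so sbt can resolve shc.
resolvers += "Hortonworks Releases" at "http://repo.hortonworks.com/content/repositories/releases/"

// SNAPSHOT versions are generally not in the releases repository; a released
// version string (this one is an assumption, check the repo) may resolve
// where the SNAPSHOT does not. The project's README uses the shc-core artifact.
libraryDependencies += "com.hortonworks" % "shc-core" % "1.1.1-2.1-s_2.11"
```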

HBase shell操作

不羁的心 submitted at 2019-12-14 01:51:36

I. Common HBase shell operations

1. Enter the HBase shell client:
   $ bin/hbase shell
2. View the help:
   hbase(main):001:0> help
3. List the tables in the current database:
   hbase(main):002:0> list
4. Create a table. Create a `user` table with two column families, `info` and `data`:
   hbase(main):010:0> create 'user', 'info', 'data'
   or:
   hbase(main):010:0> create 'user', {NAME => 'info', VERSIONS => '3'}, {NAME => 'data'}
5. Insert data. Insert into the `user` table a cell with row key rk0001 and qualifier `name` in column family `info`, with value `zhangsan`:
   hbase(main):011:0> put 'user', 'rk0001', 'info:name', 'zhangsan'
   Insert a cell with row key rk0001 and qualifier `gender` in column family `info`, with value `female`:
   hbase(main):012:0> put 'user', 'rk0001', 'info:gender', 'female'
   Insert a cell with row key rk0001 and qualifier `age` in column family `info`, with value 20
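The rows inserted above can be read back with the shell's `get` and `scan` commands; a short sketch using the same table (standard HBase shell syntax):

```
# fetch every cell for row key rk0001
hbase(main):013:0> get 'user', 'rk0001'

# fetch only the info column family for that row
hbase(main):014:0> get 'user', 'rk0001', 'info'

# scan the whole table
hbase(main):015:0> scan 'user'
```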

How to use hbase coprocessor to implement groupby?

本小妞迷上赌 submitted at 2019-12-14 01:28:50

Question: I recently learned about HBase coprocessors, and I used an endpoint to accumulate one column of an HBase table. For example, for an HBase table named "pendings" with column family "asset", I accumulate all the values of "asset:amount". The table has other columns, such as "asset:customer_name". What I want to do is accumulate the values of "asset:amount" grouped by "asset:customer_name". But I found there is no API for GROUP BY, or I did not find it. Do you know how to implement GROUP BY or how to use the API
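The stock aggregation endpoint exposes sum/min/max-style operations but no built-in GROUP BY, so a common fallback is to scan only the needed qualifiers and aggregate on the client (or inside a custom endpoint per region, merging partial maps on the client). A minimal Python sketch of just the aggregation step, where the row dictionaries are hypothetical stand-ins for scan results from the "pendings" table:

```python
from collections import defaultdict

def group_by_sum(rows, key_col, value_col):
    """Aggregate scan results client-side: sum value_col per distinct key_col."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[key_col]] += row[value_col]
    return dict(totals)

# Hypothetical rows, shaped like cells fetched from a scan of "pendings"
rows = [
    {"asset:customer_name": "alice", "asset:amount": 10},
    {"asset:customer_name": "bob",   "asset:amount": 5},
    {"asset:customer_name": "alice", "asset:amount": 7},
]
print(group_by_sum(rows, "asset:customer_name", "asset:amount"))
# {'alice': 17, 'bob': 5}
```

The same merge logic works for combining per-region partial results if the summing is pushed into a custom coprocessor endpoint.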


Apache Pig- ERROR 6007: “Unable to check name” message

我们两清 submitted at 2019-12-13 22:50:04

Question: Environment: Hadoop 1.0.3, HBase 0.94.1, Pig 0.11.1. I am running a Pig script from a Java program, and I get the following error sometimes, but not every time. The program loads a file from HDFS, does some transformations, and stores the result into HBase. My program is multi-threaded; I have already made PigServer thread-safe, and the "/user/root" directory exists in HDFS. Here is a snippet of the program and the exception I got. Please advise. pigServer = PigFactory.getServer(); URL

Hbase安装部署

社会主义新天地 submitted at 2019-12-13 22:07:34

Note: HBase depends heavily on ZooKeeper and Hadoop. Before installing HBase, make sure ZooKeeper and Hadoop have started successfully and are running normally.

Step 1: Download the HBase package. All CDH packages are available at http://archive.cloudera.com/cdh5/cdh/5/ and the matching HBase release is at http://archive.cloudera.com/cdh5/cdh/5/hbase-1.2.0-cdh5.14.0.tar.gz

Step 2: Upload and extract the package. Upload it to /export/softwares on the node01 server and extract it:

```shell
cd /export/softwares/
tar -zxvf hbase-1.2.0-cdh5.14.0-bin.tar.gz -C ../servers/
```

Step 3: Edit the configuration files on the first machine:

```shell
cd /export/servers/hbase-1.2.0-cdh5.14.0/conf
```

First file, hbase-env.sh: set JAVA_HOME and disable HBase's embedded ZooKeeper:

```shell
vim hbase-env.sh
export JAVA_HOME=/export/servers/jdk1.8.0_141
export HBASE_MANAGES_ZK=false
```

Second file, hbase-site
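The text breaks off at hbase-site.xml. For reference, a fully distributed setup with external ZooKeeper (matching HBASE_MANAGES_ZK=false above) typically needs roughly the following hbase-site.xml; the hostnames and ports here are assumptions extrapolated from the node01 naming above, so verify them against your cluster:

```xml
<configuration>
  <!-- Where HBase stores its data in HDFS; namenode host/port are assumptions -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node01:8020/hbase</value>
  </property>
  <!-- Run fully distributed rather than standalone -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- External ZooKeeper quorum; node names are assumptions -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node01:2181,node02:2181,node03:2181</value>
  </property>
</configuration>
```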

using Oracle Loader to import HBase data into Oracle table

自作多情 submitted at 2019-12-13 18:06:14

Question: I have data in an HBase table that I am trying to import into an Oracle or MySQL table. I heard there is an Oracle Loader that can serve this purpose. Has anyone tried importing Hadoop HBase data into an Oracle table? If so, could you please give me a reference link on how to do that?

Answer 1: I don't know how you are doing this. Last time I checked, there was no support in Sqoop for exporting from HBase to SQL databases. That is why you are facing this issue. You could probably try :
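Since Sqoop exports from HDFS files rather than from HBase directly, one common workaround is to dump the HBase table to delimited files on HDFS (e.g. via a scan in a MapReduce or Spark job) and then run `sqoop export` against those files. A minimal Python sketch of only the flattening step; the column names and rows below are hypothetical illustrations, not from the question:

```python
def rows_to_csv_lines(rows, columns, sep=","):
    """Flatten HBase-style (row_key, {qualifier: value}) pairs into
    delimited lines suitable as input files for `sqoop export`."""
    lines = []
    for row_key, cells in rows:
        fields = [row_key] + [str(cells.get(c, "")) for c in columns]
        lines.append(sep.join(fields))
    return lines

# Hypothetical scan output: (row_key, {qualifier: value})
rows = [
    ("r1", {"cf:name": "alice", "cf:amount": 10}),
    ("r2", {"cf:name": "bob"}),  # missing column becomes an empty field
]
print(rows_to_csv_lines(rows, ["cf:name", "cf:amount"]))
# ['r1,alice,10', 'r2,bob,']
```

The generated files would then be exported with something like `sqoop export --connect <jdbc-url> --table <target> --export-dir <hdfs-path>` (arguments shown schematically).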

Apache Phoenix illegal data exception

南笙酒味 submitted at 2019-12-13 16:21:57

Question: I am having problems writing data with HBase and reading it back with Phoenix. These are the steps to reproduce the problem. Create a table using Phoenix:

```sql
CREATE TABLE test (
  id varchar not null,
  t1.a unsigned_int,
  t1.b varchar
  CONSTRAINT pk PRIMARY KEY (id)
) COLUMN_ENCODED_BYTES = 0;
```

If I add data to the table using a Phoenix UPSERT:

```sql
upsert into test (id, t1.a, t1.b) values ('a1', 1, 'foo_a');
```

and then query the table, I get this:

```
select * from test;
+-----+----+--------+
| ID | A | B |
```
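A frequent cause of illegal-data errors in this scenario is writing cells through the raw HBase API with a byte encoding that differs from Phoenix's: Phoenix stores an UNSIGNED_INT column as a fixed 4-byte big-endian value, whereas a raw put of the string "1" stores the single ASCII byte. A small Python sketch contrasting the two encodings (the Phoenix layout stated here is for UNSIGNED_INT specifically; other Phoenix types use different encodings):

```python
import struct

def phoenix_unsigned_int(n):
    """Approximate Phoenix UNSIGNED_INT on-disk form: 4-byte big-endian unsigned."""
    return struct.pack(">I", n)

string_bytes = b"1"                      # what a raw HBase put of the string "1" stores
phoenix_bytes = phoenix_unsigned_int(1)  # what Phoenix expects to read back

print(string_bytes)   # b'1'
print(phoenix_bytes)  # b'\x00\x00\x00\x01'
```

Reading the one-byte string through a Phoenix UNSIGNED_INT column (or vice versa) is what typically triggers the illegal data exception.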

HBase region over region servers load not balanced

非 Y 不嫁゛ submitted at 2019-12-13 15:32:27

Question: I'm running a small HBase 0.94.7 cluster with two region servers, and I find that the request load across the region servers is very unbalanced. From the web UI I got:

Region1: numberOfOnlineRegions=1, usedHeapMB=26, maxHeapMB=3983
Region2: numberOfOnlineRegions=22, usedHeapMB=44, maxHeapMB=3983

Region2 also serves as the master. I checked that the load balancer is on, and I found log entries like this in the master log: INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced
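For context on that log line: the 0.94-era default balancer treats a set of servers as balanced when every server's region count lies within the average plus or minus a slop factor (hbase.regions.slop, default 0.2), and balancing may be evaluated per table, so a table whose single region cannot be spread further can still report "balanced". A rough Python sketch of that check; this is a simplification of the real DefaultLoadBalancer logic, not the exact code:

```python
import math

def is_balanced(region_counts, slop=0.2):
    """Simplified 0.94-style check: balanced if every server's region count
    lies within floor(avg*(1-slop)) .. ceil(avg*(1+slop))."""
    avg = sum(region_counts) / len(region_counts)
    lo = math.floor(avg * (1 - slop))
    hi = math.ceil(avg * (1 + slop))
    return all(lo <= c <= hi for c in region_counts)

print(is_balanced([1, 22]))  # unbalanced overall: False
print(is_balanced([1, 1]))   # a one-region-per-server table looks balanced: True
```

Manually redistributing with the shell's `move` command, or checking whether regions are stuck in transition, are common next steps when the balancer refuses to act.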

Hbase CopyTable inside Java

别说谁变了你拦得住时间么 submitted at 2019-12-13 15:25:58

Question: I want to copy one HBase table to another location with good performance. I would like to reuse the code from CopyTable.java on the hbase-server GitHub page. I have been looking at the HBase documentation, but it didn't help me much: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/CopyTable.html After reading this Stack Overflow post: Can a main() method of class be invoked in another class in java I think I can call it directly using its main class. Question: Do you think
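For comparison, the same job can be driven from the command line without any wrapper code; the equivalent CLI invocation (the table names here are hypothetical placeholders) is:

```
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --new.name=backup_table source_table
```

One caveat with invoking another class's main() in-process: if it finishes by calling System.exit() (as MapReduce driver mains commonly do), it will terminate the calling JVM as well, so launching it in a separate process, or reusing the job-construction code rather than main() itself, is usually safer.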