hbase

Failed to open the HBase web UI

Submitted by 大兔子大兔子 on 2019-12-11 09:56:38
After installing and configuring HBase, I could not open its web page. Check whether every service is actually running: here the Hadoop cluster was not started, and neither was the HMaster service. If Hadoop is not running, HMaster shuts itself down. Once both were started, the page loaded successfully. Note: HBase depends strongly on ZooKeeper and Hadoop; before installing HBase, make sure ZooKeeper and Hadoop have started successfully and their services are running normally. Source: CSDN Author: 修电脑的9527 Link: https://blog.csdn.net/weixin_44361667/article/details/103486568
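The reachability check described above can be sketched as a small, self-contained probe in plain Java. This is a hypothetical helper, not an HBase API: the host and port are assumptions for your install (16010 is the default HMaster web UI port on recent HBase releases; older 0.9x versions used 60010).

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical probe: returns true if something is listening on host:port.
// 16010 is the HMaster web UI port on recent HBase releases (60010 on older ones);
// adjust both values for your cluster.
public class UiProbe {
    public static boolean isUiUp(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isUiUp("localhost", 16010, 2000)
                ? "HMaster web UI reachable"
                : "UI down: check that HDFS, ZooKeeper and HMaster are all running");
    }
}
```

If the probe fails, check the startup order described above: start HDFS and ZooKeeper first, then HBase, and confirm HMaster appears in the `jps` output.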

Why does reading a broadcast variable in Spark Streaming throw an exception after days of running?

Submitted by 一曲冷凌霜 on 2019-12-11 09:53:48
Question: I am using Spark Streaming (Spark v1.6.0) along with HBase (HBase v1.1.2) in my project, and the HBase configuration is distributed among executors with a broadcast variable. The Spark Streaming application works at first, but after about two days of running an exception appears. val hBaseContext: HBaseContext = new HBaseContext(sc, HBaseCock.hBaseConfiguration()) private def _materialDStream(dStream: DStream[(String, Int)], columnName: String, batchSize: Int) = { hBaseContext.streamBulkIncrement[(String,

Unable to Connect to Hbase using Java

Submitted by 房东的猫 on 2019-12-11 09:32:19
Question: Hi, I have installed Ubuntu on my machine and installed hbase0.98-hadoop2. I then edited the hbase-env.sh and hbase-site.xml files. My HBase shell now works fine, but when I try to connect to HBase from Java code using the HBase Java APIs, I get errors. My code is: Configuration hc = HBaseConfiguration.create(); HTableDescriptor ht = new HTableDescriptor("User"); ht.addFamily( new HColumnDescriptor("Id")); ht.addFamily( new HColumnDescriptor("Name")); System.out.println( "connecting" );
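A frequent cause of this symptom is that the Java client cannot find the cluster's ZooKeeper quorum: the HBase shell reads conf/hbase-site.xml from the install directory, but a standalone Java program only sees what is on its own classpath. A minimal client-side hbase-site.xml might look like the following sketch (the host name and client port are placeholders for your setup):

```xml
<configuration>
  <!-- Where the client should look for ZooKeeper; replace with your quorum hosts -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

The same keys can instead be set programmatically on the Configuration object (e.g. hc.set("hbase.zookeeper.quorum", "localhost")) before opening the connection.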

The node /hbase-unsecure is not in ZooKeeper. Check the value configured in 'zookeeper.znode.parent'.

Submitted by 血红的双手。 on 2019-12-11 08:58:04
Question: I am getting this error while starting standalone HBase on my Ubuntu machine. Please help; I have spent a huge amount of time trying to get it running. :( What I have checked so far: /etc/hosts contains localhost 127.0.0.1 HBase: hbase-0.98.3-hadoop2-bin.tar.gz Hadoop: hadoop-2.6.0.tar.gz I already have the node /hbase-unsecure in my hbase-site.xml. When I try to run the command create 'usertable', 'resultfamily' it gives me the following exception: ERROR: The node /hbase-unsecure is not in ZooKeeper. It
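This error usually means the client is looking for a znode the server never created: zookeeper.znode.parent defaults to /hbase in stock Apache HBase, while /hbase-unsecure is the Hortonworks/Ambari convention, and the value must be identical on both sides. A hedged fragment for hbase-site.xml, assuming you want the /hbase-unsecure layout everywhere:

```xml
<property>
  <!-- Must match in the server's and every client's hbase-site.xml -->
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
```

After changing it, restart HBase and verify with the ZooKeeper CLI that the znode actually exists (ls / from zkCli.sh).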

Integration between Hive and HBase

Submitted by 时光怂恿深爱的人放手 on 2019-12-11 08:52:14
Question: I'm using Hive over HBase to do some BI. I have already configured Hive and HBase, but when I run the query "select count(*) from hbase_table_2" in Hive (hbase_table_2 is a Hive table that refers to a table in HBase), this exception occurs: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201212171838_0009_m_000000" java.io.IOException: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@7d858aa0 closed at org
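For reference, an HBase-backed Hive table of this kind is declared with the HBase storage handler; the HBase table, column family, and qualifier names below are invented placeholders. Note that the failing map tasks additionally need the HBase and ZooKeeper client jars plus hbase.zookeeper.quorum on their classpath, which is a common cause of connections being closed mid-job.

```sql
-- Hypothetical mapping: Hive table hbase_table_2 over an HBase table 't2'
CREATE EXTERNAL TABLE hbase_table_2 (key STRING, value INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:val")
TBLPROPERTIES ("hbase.table.name" = "t2");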

Migrating from MongoDB to HBase

Submitted by 我们两清 on 2019-12-11 08:38:17
Question: Hi, I am very new to the HBase database. I downloaded some Twitter data and stored it in MongoDB. Now I need to move that data into HBase to speed up Hadoop processing, but I am not able to create its schema. Here is my Twitter data in JSON format: { "_id" : ObjectId("512b71e6e4b02a4322d1c0b0"), "id" : NumberLong("306044618179506176"), "source" : "<a href=\"http://www.facebook.com/twitter\" rel=\"nofollow\">Facebook</a>", "user" : { "name" : "Dada Bhagwan", "location" : "India", "url" :
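A common way to map document data onto HBase's flat model is one row per tweet keyed by the tweet id, with nested JSON fields flattened into column qualifiers such as user.name. A minimal sketch of that flattening step in plain Java, with no HBase calls, just the qualifier derivation (the field names follow the JSON above; the class and method names are made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Flattens a nested map ({"user": {"name": "..."}}) into dotted qualifiers
// ("user.name" -> "..."), the shape you would then write as family:qualifier cells.
public class JsonFlattener {
    public static Map<String, Object> flatten(Map<String, Object> json) {
        Map<String, Object> out = new LinkedHashMap<>();
        flattenInto("", json, out);
        return out;
    }

    private static void flattenInto(String prefix, Map<String, Object> node,
                                    Map<String, Object> out) {
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> child = (Map<String, Object>) e.getValue();
                flattenInto(key, child, out);
            } else {
                out.put(key, e.getValue());
            }
        }
    }
}
```

Each flattened key would then become a qualifier inside one column family, with the tweet "id" serving as the row key.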

Janusgraph 0.3.2 + HBase 1.4.9 - Can't set graph.timestamps

Submitted by 核能气质少年 on 2019-12-11 08:28:58
Question: I am running JanusGraph 0.3.2 in a Docker container and trying to use an AWS EMR cluster running HBase 1.4.9 as the storage backend. I can run gremlin-server.sh, but if I try to save something, I get the stack trace pasted below. It looks to me like the locks are being created with different timestamp lengths, causing it to look like no lock exists. I tried adding the graph.timestamps setting to the config file, but still got the same error. Here is my configuration gremlin-server.yml host:
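For context, HBase stores cell timestamps in milliseconds, so a resolution mismatch between JanusGraph's lock timestamps and HBase's cells can look exactly like a missing lock. A hedged properties-file sketch (host is a placeholder; note that graph.timestamps is fixed when the graph is first initialized, so adding it to the file after the keyspace already exists may have no effect without reinitializing or using the ManagementSystem):

```properties
# Assumed backend settings; the hostname is a placeholder for your quorum
storage.backend=hbase
storage.hostname=<zookeeper-host>
# Match HBase's millisecond cell timestamps
graph.timestamps=MILLI
```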

Store documents (.pdf, .doc and .txt files) in MaprDB

Submitted by 会有一股神秘感。 on 2019-12-11 08:25:11
Question: I need to store documents such as .pdf, .doc and .txt files in MaprDB. I saw one example in HBase where files are stored in binary form and retrieved as files in Hue, but I am not sure how that could be implemented. Any idea how a document can be stored in MaprDB? Answer 1: First of all, I am not familiar with MaprDB, as I am using Cloudera, but I have experience storing many types of objects in HBase as byte arrays, as mentioned below. The most primitive way of storing in HBase or any other db is byte
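The byte-array approach from the answer can be sketched in plain Java: read the file into a byte[], which is exactly what an HBase/MaprDB cell value holds. The actual Put call is left as a comment because it requires the hbase-client jars, and the table, family, and qualifier names in that comment are invented for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Reads any document (.pdf, .doc, .txt, ...) into a byte array and writes it back.
// In HBase/MaprDB you would store the bytes as a single cell value, e.g.:
//   Put put = new Put(Bytes.toBytes(docId));
//   put.addColumn(Bytes.toBytes("files"), Bytes.toBytes("content"), bytes);
public class DocBytes {
    public static byte[] readDocument(Path path) throws IOException {
        return Files.readAllBytes(path);
    }

    public static void writeDocument(Path path, byte[] bytes) throws IOException {
        Files.write(path, bytes);
    }
}
```

Keep in mind that very large files strain HBase cell-size limits; a common pattern is to store big documents in the filesystem and keep only a pointer in the table.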

Store multiple versions in an HBase row with the same family:qualifier but different timestamps

Submitted by 牧云@^-^@ on 2019-12-11 08:24:36
Question: I want to store multiple versions of a row that have the same family:qualifier but different values and timestamps. Put put = new Put(Bytes.toBytes(key)); put.add(family, qualifier, timestamp0, value0); put.add(family, qualifier, timestamp1, value1); table.put(put); However, only the one with the higher timestamp is stored in the table. The issue is not caused by MaxVersions. Is there any way I can get HBase to store both versions? Answer 1: I wrote a test, and it works fine. Please check
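The semantics the answer relies on can be simulated without a cluster: under one family:qualifier, HBase keeps several timestamped cells ordered newest-first, a plain get returns only the newest, and older versions stay retrievable up to the family's VERSIONS setting (if both puts really collapse into one cell, check that the two timestamps differ and that the family was created with VERSIONS > 1, e.g. alter 't', NAME => 'f', VERSIONS => 3 in the shell). A toy model of that ordering in plain Java, with invented class and method names:

```java
import java.util.Collections;
import java.util.TreeMap;

// Toy model of HBase versioning: one qualifier holds several timestamped values,
// a plain read returns the newest, older versions remain retrievable.
public class VersionedCell {
    // timestamps ordered descending, mirroring HBase's internal cell ordering
    private final TreeMap<Long, String> versions =
            new TreeMap<>(Collections.reverseOrder());

    public void put(long timestamp, String value) {
        versions.put(timestamp, value);
    }

    public String get() {                 // newest version wins
        return versions.isEmpty() ? null : versions.firstEntry().getValue();
    }

    public String getAt(long timestamp) { // explicit-timestamp read
        return versions.get(timestamp);
    }

    public int versionCount() {
        return versions.size();
    }
}
```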

Change HBase size limit

Submitted by 自古美人都是妖i on 2019-12-11 08:15:06
Question: One of the rows in one of my tables exceeds the default size of 64MB. Now whenever I try to scan or delete that row, this error shows up: ERROR: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit. I've tried changing hbase.client.keyvalue.maxsize in hbase-site.xml to 256MB, and it has no effect. I've also tried, with no luck, to change it from the shell directly with CodedInputStream.setSizeLimit(268435456). How can I change