Hadoop + HBase Distributed Cluster Architecture: The Complete Edition

Anonymous (unverified), submitted 2019-12-02 21:56:30

This article is part of the Linux Operations Enterprise Architecture in Practice series.

Preface: this post is the result of stepping into countless pitfalls, repeatedly consulting references, and building the cluster step by step. I'm sharing my notes in the hope they save you the same trouble~~~

1. Getting to know Hadoop and HBase

1.1 Hadoop overview

  Hadoop is an open-source distributed computing framework written in Java and maintained by the Apache Software Foundation. It lets users develop distributed applications without knowing the low-level details of the distributed infrastructure, and harnesses the power of a whole cluster for high-speed computation and storage.

1.2 Hadoop core components

The base Hadoop framework is composed of the following modules:

  • Hadoop Common: the Java libraries and utilities required by the other Hadoop modules.
  • Hadoop YARN: a framework for cluster resource management and job scheduling.
  • Hadoop HDFS: a distributed file system that stores data on commodity machines with high aggregate bandwidth.
  • Hadoop MapReduce: a YARN-based programming model for parallel processing of large data sets.

The Hadoop ecosystem

Since around 2012, "Hadoop" has often referred not just to the base modules above, but to the wider ecosystem of related Apache projects, such as Apache Pig, Apache Hive, and Apache HBase.

1.3 How Hadoop processes a job

(1) Job submission

A user or application submits a job to Hadoop (the job client), specifying:

  • the locations of the input and output files in the distributed file system, and the jar file containing the map and reduce classes.

(2) Job scheduling

The Hadoop job client submits the jar and the job configuration to the JobTracker, which distributes the code and configuration to the worker nodes, schedules the tasks, and monitors them, reporting status and diagnostic information back to the client.

(3) Task execution

The TaskTrackers on each node execute the MapReduce tasks (map and reduce), and the output of the reduce phase is stored back in the distributed file system.
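The map -> shuffle -> reduce flow above can be mimicked on a single machine with a plain shell pipeline. This is only a toy sketch, not Hadoop itself: each word is emitted as a record (map), sort groups equal keys together (shuffle), and uniq -c aggregates each group into a count (reduce).

```shell
# Toy word count in the shape of a MapReduce job, run locally:
#   map: emit one word per line
#   shuffle: sort brings equal keys together
#   reduce: uniq -c aggregates each group into a count
printf '%s\n' hello hadoop hello hbase hello world \
  | sort \
  | uniq -c \
  | awk '{print $2"="$1}'
# prints (one per line): hadoop=1 hbase=1 hello=3 world=1
```

The same shape is what a real Hadoop streaming job expresses, except that the shuffle happens across the network between many mappers and reducers.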

1.4 Hadoop advantages

  • It scales out: capacity grows by adding machines to the cluster rather than by buying bigger CPUs.
  • Fault tolerance and high availability (FT/HA) do not rely on hardware; the Hadoop library detects and handles failures at the application layer.
  • Servers can be added to or removed from the cluster dynamically without interrupting service.
  • Besides being open source, Hadoop runs on any platform since it is Java-based.

1.5 HBase introduction

  HBase is a distributed, column-oriented, open-source database built on top of Hadoop: HDFS provides its underlying file storage, Hadoop MapReduce processes its massive data sets, and ZooKeeper provides coordination services.

1.6 HBase roles

Client

  • Talks to HBase over RPC and keeps a cache (e.g. of region locations) to speed up access to HBase.

Zookeeper

  • Guarantees that there is only one active Master in the cluster at any time.
  • Stores the addressing entry for all regions.
  • Monitors RegionServer liveness in real time and notifies the Master of failures.
  • Stores the HBase schema, including which tables exist and their column families.

Master

  • Assigns regions to RegionServers.
  • Balances load across RegionServers.
  • Detects failed RegionServers and reassigns their regions.
  • Handles table metadata operations (create, delete, alter).

RegionServer

  • Maintains the regions assigned to it and serves their read/write IO requests.
  • Splits regions that have grown too large.

HLog (WAL log)

  • The HLog is a Hadoop Sequence File. The Key of each record is an HLogKey holding the metadata of the written data: table name, region name, sequence number, and timestamp. The timestamp is the write time; sequence numbers start at 0 and increase monotonically.
  • The Value of each record is an HBase KeyValue object, i.e. the same KeyValue that is stored in an HFile.

Region

  • HBase automatically partitions a table horizontally into regions; each region holds a contiguous range of rows. As data grows, a region that becomes too large splits into two new regions.
  • The regions of a table are distributed across multiple RegionServers. The HRegion is the smallest unit of distributed storage and load balancing, hosted by the HRegion servers.

Memstore and storefile

  • A region is made up of one or more stores, one store per column family (CF).
  • A store consists of one memstore and zero or more storefiles. Writes go to the memstore first; when the memstore fills up, the RegionServer flushes it to a new storefile.
  • When the number of storefiles passes a threshold, a minor or major compaction merges them into fewer, larger storefiles.
  • When a region's storefiles grow beyond a threshold, the region splits, and the HMaster assigns the resulting regions to RegionServers to balance load.
  • Reads check the memstore first, then fall back to the storefiles.

2. Hadoop installation

2.1 Machine planning

The cluster consists of three machines:

Hostname   IP               Roles
hadoop01   192.168.10.101   DataNode, NodeManager, ResourceManager, NameNode
hadoop02   192.168.10.102   DataNode, NodeManager, SecondaryNameNode
hadoop03   192.168.10.103   DataNode, NodeManager

2.2 Environment preparation

2.2.1 Check the OS version

$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
$ uname -r
3.10.0-514.el7.x86_64


2.2.2 selinux

[along@hadoop01 ~]$ sestatus  SELinux status:                 disabled [root@hadoop01 ~]$ iptables -F [along@hadoop01 ~]$ systemctl status firewalld.service  ● firewalld.service - firewalld - dynamic firewall daemon    Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)    Active: inactive (dead)      Docs: man:firewalld(1) 

  

2.2.3 Create the deployment user (on all nodes)

$ id along
uid=1000(along) gid=1000(along) groups=1000(along)

  

2.2.4 Configure /etc/hosts (on all nodes)

$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.101 hadoop01
192.168.10.102 hadoop02
192.168.10.103 hadoop03
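Before going further, it is worth confirming every cluster hostname is actually present in the hosts file. A minimal sketch that greps the file rather than hitting DNS (on the cluster `hosts_file` is the real /etc/hosts; any staged copy works when trying it elsewhere):

```shell
# Check that each cluster hostname appears in the hosts file.
hosts_file=/etc/hosts          # point at a staged copy when testing elsewhere
for h in hadoop01 hadoop02 hadoop03; do
  if grep -qw "$h" "$hosts_file"; then
    echo "$h: ok"
  else
    echo "$h: MISSING from $hosts_file"
  fi
done
```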

  

2.2.5 Synchronize the clocks

$ yum -y install ntpdate
$ sudo ntpdate cn.pool.ntp.org

  

2.2.6 Configure passwordless SSH

(1) Generate a key pair

[along@hadoop01 ~]$ ssh-keygen

(2) Distribute the public key to all nodes

--- for the along user
[along@hadoop01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 127.0.0.1
[along@hadoop01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop01
[along@hadoop01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop02
[along@hadoop01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop03
--- repeat for the root user
[along@hadoop01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 127.0.0.1
[along@hadoop01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop01
[along@hadoop01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop02
[along@hadoop01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop03

(3) Verify passwordless login from the other nodes

[along@hadoop02 ~]$ ssh along@hadoop01
[along@hadoop02 ~]$ ssh along@hadoop02
[along@hadoop02 ~]$ ssh along@hadoop03

  

2.3 jdk

[root@hadoop01 ~]# tar -xvf jdk-8u201-linux-x64.tar.gz -C /usr/local [root@hadoop01 ~]# chown along.along -R /usr/local/jdk1.8.0_201/ [root@hadoop01 ~]# ln -s /usr/local/jdk1.8.0_201/ /usr/local/jdk [root@hadoop01 ~]# cat /etc/profile.d/jdk.sh export JAVA_HOME=/usr/local/jdk PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH [root@hadoop01 ~]# source /etc/profile.d/jdk.sh [along@hadoop01 ~]$ java -version java version "1.8.0_201" Java(TM) SE Runtime Environment (build 1.8.0_201-b09) Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode) 

  

2.4 hadoop

[root@hadoop01 ~]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.2.0/hadoop-3.2.0.tar.gz [root@hadoop01 ~]# tar -xvf hadoop-3.2.0.tar.gz -C /usr/local/ [root@hadoop01 ~]# chown along.along -R /usr/local/hadoop-3.2.0/ [root@hadoop01 ~]# ln -s /usr/local/hadoop-3.2.0/  /usr/local/hadoop 

  

3. Hadoop configuration

3.1 hadoop-env.sh: set the Hadoop environment

[along@hadoop01 ~]$ cd /usr/local/hadoop/etc/hadoop/
[along@hadoop01 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

  

3.2 core-site.xml HDFS

[along@hadoop01 hadoop]$ vim core-site.xml <configuration>     <!-- 指定HDFS默认(namenode)的通信地址 -->     <property>         <name>fs.defaultFS</name>         <value>hdfs://hadoop01:9000</value>     </property>     <!-- 指定hadoop运行时产生文件的存储路径 -->     <property>         <name>hadoop.tmp.dir</name>         <value>/data/hadoop/tmp</value>     </property> </configuration> [root@hadoop01 ~]# mkdir /data/hadoop 

  

3.3 hdfs-site.xml namenode

[along@hadoop01 hadoop]$ vim hdfs-site.xml  <configuration>     <!-- 设置namenode的http通讯地址 -->     <property>         <name>dfs.namenode.http-address</name>         <value>hadoop01:50070</value>     </property>      <!-- 设置secondarynamenode的http通讯地址 -->     <property>         <name>dfs.namenode.secondary.http-address</name>         <value>hadoop02:50090</value>     </property>      <!-- 设置namenode存放的路径 -->     <property>         <name>dfs.namenode.name.dir</name>         <value>/data/hadoop/name</value>     </property>      <!-- 设置hdfs副本数量 -->     <property>         <name>dfs.replication</name>         <value>2</value>     </property>     <!-- 设置datanode存放的路径 -->     <property>         <name>dfs.datanode.data.dir</name>         <value>/data/hadoop/datanode</value>     </property>      <property>         <name>dfs.permissions</name>         <value>false</value>     </property> </configuration> [root@hadoop01 ~]# mkdir /data/hadoop/name -p [root@hadoop01 ~]# mkdir /data/hadoop/datanode -p 

  

3.4 mapred-site.xml

[along@hadoop01 hadoop]$ vim mapred-site.xml <configuration>     <!-- 通知框架MR使用YARN -->     <property>         <name>mapreduce.framework.name</name>         <value>yarn</value>     </property>     <property>         <name>mapreduce.application.classpath</name>         <value>         /usr/local/hadoop/etc/hadoop,         /usr/local/hadoop/share/hadoop/common/*,         /usr/local/hadoop/share/hadoop/common/lib/*,         /usr/local/hadoop/share/hadoop/hdfs/*,         /usr/local/hadoop/share/hadoop/hdfs/lib/*,         /usr/local/hadoop/share/hadoop/mapreduce/*,         /usr/local/hadoop/share/hadoop/mapreduce/lib/*,         /usr/local/hadoop/share/hadoop/yarn/*,         /usr/local/hadoop/share/hadoop/yarn/lib/*         </value>     </property> </configuration> 

  

3.5 yarn-site.xml resourcemanager

[along@hadoop01 hadoop]$ vim yarn-site.xml <configuration>     <property>         <name>yarn.resourcemanager.hostname</name>         <value>hadoop01</value>     </property>      <property>         <description>The http address of the RM web application.</description>         <name>yarn.resourcemanager.webapp.address</name>         <value>${yarn.resourcemanager.hostname}:8088</value>     </property>      <property>         <description>The address of the applications manager interface in the RM.</description>         <name>yarn.resourcemanager.address</name>         <value>${yarn.resourcemanager.hostname}:8032</value>     </property>      <property>         <description>The address of the scheduler interface.</description>         <name>yarn.resourcemanager.scheduler.address</name>         <value>${yarn.resourcemanager.hostname}:8030</value>     </property>      <property>         <name>yarn.resourcemanager.resource-tracker.address</name>         <value>${yarn.resourcemanager.hostname}:8031</value>     </property>      <property>         <description>The address of the RM admin interface.</description>         <name>yarn.resourcemanager.admin.address</name>         <value>${yarn.resourcemanager.hostname}:8033</value>     </property> </configuration> 

  

3.6 masters & slaves

[along@hadoop01 hadoop]$ echo 'hadoop02' >> /usr/local/hadoop/etc/hadoop/masters [along@hadoop01 hadoop]$ echo 'hadoop03 hadoop01'  >> /usr/local/hadoop/etc/hadoop/slaves 
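Since a malformed worker list silently breaks cluster startup, it is worth sanity-checking the file's shape. A minimal local sketch (a temp file stands in for the real /usr/local/hadoop/etc/hadoop/slaves, or workers on Hadoop 3.x):

```shell
# Write a worker list one hostname per line and verify its shape.
workers=$(mktemp)                                  # stand-in for the real file
printf '%s\n' hadoop01 hadoop02 hadoop03 > "$workers"
cat "$workers"
# every line must be a single token (no 'host1 host2' on one line)
awk 'NF != 1 { bad = 1 } END { exit bad }' "$workers" && echo "format ok"
```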

  

3.7 Final adjustments before starting

3.7.1 Declare the service users in the start/stop scripts

Hadoop 3.x refuses to start a daemon unless the user that runs it is declared. Edit the scripts under /usr/local/hadoop/sbin, adding the following near the top:

(1) start-dfs.sh and stop-dfs.sh

[along@hadoop01 ~]$ vim /usr/local/hadoop/sbin/start-dfs.sh
[along@hadoop01 ~]$ vim /usr/local/hadoop/sbin/stop-dfs.sh
HDFS_DATANODE_USER=along
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=along
HDFS_SECONDARYNAMENODE_USER=along

(2) start-yarn.sh and stop-yarn.sh

[along@hadoop01 ~]$ vim /usr/local/hadoop/sbin/start-yarn.sh
[along@hadoop01 ~]$ vim /usr/local/hadoop/sbin/stop-yarn.sh
YARN_RESOURCEMANAGER_USER=along
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=along

  

3.7.2 Fix ownership

[root@hadoop01 ~]# chown -R along.along /usr/local/hadoop-3.2.0/
[root@hadoop01 ~]# chown -R along.along /data/hadoop/

  

3.7.3 Put Hadoop on the PATH

[root@hadoop01 ~]# vim /etc/profile.d/hadoop.sh
[root@hadoop01 ~]# cat /etc/profile.d/hadoop.sh
export HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

  

3.7.4

[root@hadoop01 ~]# vim /data/hadoop/rsync.sh #在集群内所有机器上都创建所需要的目录 for i in hadoop02 hadoop03     do           sudo rsync -a /data/hadoop $i:/data/ done   #复制hadoop配置到其他机器 for i in hadoop02 hadoop03     do           sudo rsync -a  /usr/local/hadoop-3.2.0/etc/hadoop $i:/usr/local/hadoop-3.2.0/etc/ done  [root@hadoop01 ~]# /data/hadoop/rsync.sh 

  

3.8 Start the Hadoop cluster

3.8.1 Format the namenode (first start only)

[along@hadoop01 ~]$ hdfs namenode -format
... ...
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop01/192.168.10.101
************************************************************/
[along@hadoop02 ~]$ hdfs namenode -format
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop02/192.168.10.102
************************************************************/
[along@hadoop03 ~]$ hdfs namenode -format
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop03/192.168.10.103
************************************************************/

  

3.8.2 Start the services

(1) Start the namenode and datanodes, then check with jps

[along@hadoop01 ~]$ start-dfs.sh
[along@hadoop02 ~]$ start-dfs.sh
[along@hadoop03 ~]$ start-dfs.sh
[along@hadoop01 ~]$ jps
4480 DataNode
4727 Jps
4367 NameNode
[along@hadoop02 ~]$ jps
4082 Jps
3958 SecondaryNameNode
3789 DataNode
[along@hadoop03 ~]$ jps
2689 Jps
2475 DataNode

(2) Start YARN

[along@hadoop01 ~]$ start-yarn.sh
[along@hadoop02 ~]$ start-yarn.sh
[along@hadoop03 ~]$ start-yarn.sh
[along@hadoop01 ~]$ jps
4480 DataNode
4950 NodeManager
5447 NameNode
5561 Jps
4842 ResourceManager
[along@hadoop02 ~]$ jps
3958 SecondaryNameNode
4503 Jps
3789 DataNode
4367 NodeManager
[along@hadoop03 ~]$ jps
12353 Jps
12226 NodeManager
2475 DataNode

  

3.9 Verify in a browser

(1) ResourceManager web UI: http://hadoop01:8088

Check that the Active Nodes count matches the number of NodeManagers in the cluster.

(2) NameNode web UI: http://hadoop01:50070/dfshealth.html#tab-datanode

All three datanodes should appear as live.

The Hadoop cluster is now up and running.

4. HBase installation and configuration

4.1 Download and install HBase

[root@hadoop01 ~]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/1.4.9/hbase-1.4.9-bin.tar.gz
[root@hadoop01 ~]# tar -xvf hbase-1.4.9-bin.tar.gz -C /usr/local/
[root@hadoop01 ~]# chown -R along.along /usr/local/hbase-1.4.9/
[root@hadoop01 ~]# ln -s /usr/local/hbase-1.4.9/ /usr/local/hbase

Note: at the time of writing, hbase-2.1 was the newest release; this deployment uses the stable hbase-1.4.9.

4.2 HBase configuration

4.2.1 hbase-env.sh: set the HBase environment

[root@hadoop01 ~]# cd /usr/local/hbase/conf/
[root@hadoop01 conf]# vim hbase-env.sh
export JAVA_HOME=/usr/local/jdk
export HBASE_CLASSPATH=/usr/local/hbase/conf

  

4.2.2 hbase-site.xml hbase

[root@hadoop01 conf]# vim hbase-site.xml <configuration> <property>     <name>hbase.rootdir</name>     <!-- hbase存放数据目录 -->     <value>hdfs://hadoop01:9000/hbase/hbase_db</value>     <!-- 端口要和Hadoop的fs.defaultFS端口一致--> </property> <property>     <name>hbase.cluster.distributed</name>     <!-- 是否分布式部署 -->     <value>true</value> </property> <property>     <name>hbase.zookeeper.quorum</name>     <!-- zookooper 服务启动的节点,只能为奇数个 -->     <value>hadoop01,hadoop02,hadoop03</value> </property> <property>     <!--zookooper配置、日志等的存储位置,必须为以存在 -->     <name>hbase.zookeeper.property.dataDir</name>     <value>/data/hbase/zookeeper</value> </property> <property>     <!--hbase master -->     <name>hbase.master</name>     <value>hadoop01</value> </property> <property>     <!--hbase web 端口 -->     <name>hbase.master.info.port</name>     <value>16666</value> </property> </configuration> 

Why an odd number of zookeeper nodes?

  • With 2 zookeepers, losing 1 leaves 1 of 2, which is not a majority, so the ensemble stops serving: 2 nodes tolerate 0 failures.
  • With 3 zookeepers, losing 1 leaves 2 of 3, which is still a majority: 3 nodes tolerate 1 failure.
  • In general: 2->0, 3->1, 4->1, 5->2, 6->2. An ensemble of 2n nodes tolerates no more failures than one of 2n-1 (both tolerate n-1), so the extra even node buys nothing; that is why zookeeper ensembles use odd sizes.
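The tolerance rule above is just integer arithmetic, floor((n-1)/2), which a one-line shell loop can confirm:

```shell
# Majority quorum: an ensemble of n nodes keeps serving while a majority
# survives, so it tolerates floor((n-1)/2) failures.
for n in 2 3 4 5 6 7; do
  echo "$n nodes -> tolerates $(( (n - 1) / 2 )) failure(s)"
done
# prints: 2 -> 0, 3 -> 1, 4 -> 1, 5 -> 2, 6 -> 2, 7 -> 3
```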

4.2.3 regionservers: list the regionserver nodes

[root@hadoop01 conf]# vim regionservers
hadoop01
hadoop02
hadoop03

  

5. Start the HBase cluster

5.1 Put HBase on the PATH

[root@hadoop01 ~]# vim /etc/profile.d/hbase.sh
export HBASE_HOME=/usr/local/hbase
PATH=$HBASE_HOME/bin:$PATH

  

5.2 Sync directories and configuration to the other nodes

[root@hadoop01 ~]# mkdir -p /data/hbase/zookeeper
[root@hadoop01 ~]# vim /data/hbase/rsync.sh
# create the required directories on every machine in the cluster
for i in hadoop02 hadoop03
do
    sudo rsync -a /data/hbase $i:/data/
    sudo scp -p /etc/profile.d/hbase.sh $i:/etc/profile.d/
done

# copy the hbase installation and configuration to the other machines
for i in hadoop02 hadoop03
do
    sudo rsync -a /usr/local/hbase-1.4.9 $i:/usr/local/
done
[root@hadoop01 conf]# chown -R along.along /data/hbase
[root@hadoop01 ~]# /data/hbase/rsync.sh
hbase.sh                100%   62     0.1KB/s   00:00
hbase.sh                100%   62     0.1KB/s   00:00

(The original script referenced /usr/local/hbase-2.1.3; the path is corrected here to the hbase-1.4.9 tree actually installed.)

  

5.3 Start HBase

[along@hadoop01 ~]$ start-hbase.sh
hadoop03: running zookeeper, logging to /usr/local/hbase/logs/hbase-along-zookeeper-hadoop03.out
hadoop01: running zookeeper, logging to /usr/local/hbase/logs/hbase-along-zookeeper-hadoop01.out
hadoop02: running zookeeper, logging to /usr/local/hbase/logs/hbase-along-zookeeper-hadoop02.out
... ...


  

5.4 Check HBase status in a browser

Browse to http://hadoop01:16666 (the hbase.master.info.port configured above).

6. HBase basic usage

6.1 Common hbase shell commands

Operation                      Command
Create a table                 create 'table_name','cf1','cf2',...
Add a record                   put 'table_name','row_key','cf:column','value'
Read a record                  get 'table_name','row_key'
Count the rows in a table      count 'table_name'
Delete a record                delete 'table_name','row_key','cf:column'
Drop a table                   ① disable 'table_name'  ② drop 'table_name'
Scan all records               scan 'table_name'
Scan one column of a table     scan 'table_name',{COLUMNS=>'cf:column'}
Update a record                put the same cell again; the new value overwrites the old

6.2 Basic operations

(1) Enter the hbase shell

[along@hadoop01 ~]$ hbase shell    # takes a few seconds to start
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase-1.4.9/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 1.4.9, rd625b212e46d01cb17db9ac2e9e927fdb201afa1, Wed Dec  5 11:54:10 PST 2018

hbase(main):001:0>

  

(2) Check cluster status

hbase(main):001:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load

  

(3) Check the HBase version

hbase(main):002:0> version
1.4.9, rd625b212e46d01cb17db9ac2e9e927fdb201afa1, Wed Dec  5 11:54:10 PST 2018

  

6.3 DDL operations

(1) Create a table 'demo' with two column families, 'id' and 'info'

hbase(main):001:0> create 'demo','id','info'
0 row(s) in 23.2010 seconds

=> Hbase::Table - demo

  

(2) List tables and get a table description

hbase(main):002:0> list
TABLE
demo
1 row(s) in 0.6380 seconds

=> ["demo"]
--- full description
hbase(main):003:0> describe 'demo'
Table demo is ENABLED
demo
COLUMN FAMILIES DESCRIPTION
{NAME => 'id', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'info', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2 row(s) in 0.3500 seconds

  

(3) Delete a column family: disable the table first, then alter

hbase(main):004:0> disable 'demo'
0 row(s) in 2.5930 seconds

hbase(main):006:0> alter 'demo',{NAME=>'info',METHOD=>'delete'}
Updating all regions with the new schema...
1/1 regions updated.
Done.
0 row(s) in 4.3410 seconds

hbase(main):007:0> describe 'demo'
Table demo is DISABLED
demo
COLUMN FAMILIES DESCRIPTION
{NAME => 'id', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
1 row(s) in 0.1510 seconds

  

(4) Drop the table: disable it, then drop

hbase(main):008:0> list
TABLE
demo
1 row(s) in 0.1010 seconds

=> ["demo"]
hbase(main):009:0> disable 'demo'
0 row(s) in 0.0480 seconds

hbase(main):010:0> is_disabled 'demo'   # check whether the table is disabled
true
0 row(s) in 0.0210 seconds

hbase(main):013:0> drop 'demo'
0 row(s) in 2.3270 seconds

hbase(main):014:0> list   # the table is gone
TABLE
0 row(s) in 0.0250 seconds

=> []
hbase(main):015:0> is_enabled 'demo'   # confirm the demo table no longer exists

ERROR: Unknown table demo!

  

6.4 DML operations

(1) Insert data: recreate the demo table and put some cells

hbase(main):024:0> create 'demo','id','info'
0 row(s) in 10.0720 seconds

=> Hbase::Table - demo
hbase(main):025:0> is_enabled 'demo'
true
0 row(s) in 0.1930 seconds

hbase(main):030:0> put 'demo','example','id:name','along'
0 row(s) in 0.0180 seconds

hbase(main):039:0> put 'demo','example','id:sex','male'
0 row(s) in 0.0860 seconds

hbase(main):040:0> put 'demo','example','id:age','24'
0 row(s) in 0.0120 seconds

hbase(main):041:0> put 'demo','example','id:company','taobao'
0 row(s) in 0.3840 seconds

hbase(main):042:0> put 'demo','taobao','info:addres','china'
0 row(s) in 0.1910 seconds

hbase(main):043:0> put 'demo','taobao','info:company','alibaba'
0 row(s) in 0.0300 seconds

hbase(main):044:0> put 'demo','taobao','info:boss','mayun'
0 row(s) in 0.1260 seconds

  

(2) Read rows and columns from demo

hbase(main):045:0> get 'demo','example'
COLUMN                     CELL
 id:age                    timestamp=1552030411620, value=24
 id:company                timestamp=1552030467196, value=taobao
 id:name                   timestamp=1552030380723, value=along
 id:sex                    timestamp=1552030392249, value=male
1 row(s) in 0.8850 seconds

hbase(main):046:0> get 'demo','taobao'
COLUMN                     CELL
 info:addres               timestamp=1552030496973, value=china
 info:boss                 timestamp=1552030532254, value=mayun
 info:company              timestamp=1552030520028, value=alibaba
1 row(s) in 0.2500 seconds

hbase(main):047:0> get 'demo','example','id'
COLUMN                     CELL
 id:age                    timestamp=1552030411620, value=24
 id:company                timestamp=1552030467196, value=taobao
 id:name                   timestamp=1552030380723, value=along
 id:sex                    timestamp=1552030392249, value=male
1 row(s) in 0.3150 seconds

hbase(main):048:0> get 'demo','example','info'
COLUMN                     CELL
0 row(s) in 0.0200 seconds

hbase(main):049:0> get 'demo','taobao','id'
COLUMN                     CELL
0 row(s) in 0.0410 seconds

hbase(main):053:0> get 'demo','taobao','info'
COLUMN                     CELL
 info:addres               timestamp=1552030496973, value=china
 info:boss                 timestamp=1552030532254, value=mayun
 info:company              timestamp=1552030520028, value=alibaba
1 row(s) in 0.0240 seconds

hbase(main):055:0> get 'demo','taobao','info:boss'
COLUMN                     CELL
 info:boss                 timestamp=1552030532254, value=mayun
1 row(s) in 0.1810 seconds

  

(3) Update a cell: put the same cell again to overwrite it

hbase(main):056:0> put 'demo','example','id:age','88'
0 row(s) in 0.1730 seconds

hbase(main):057:0> get 'demo','example','id:age'
COLUMN                     CELL
 id:age                    timestamp=1552030841823, value=88
1 row(s) in 0.1430 seconds

  

(4) Read a specific version of a cell by its timestamp

HBase keeps multiple versions of a cell, addressable by timestamp:

hbase(main):059:0> get 'demo','example',{COLUMN=>'id:age',TIMESTAMP=>1552030841823}
COLUMN                     CELL
 id:age                    timestamp=1552030841823, value=88
1 row(s) in 0.0200 seconds

hbase(main):060:0> get 'demo','example',{COLUMN=>'id:age',TIMESTAMP=>1552030411620}
COLUMN                     CELL
 id:age                    timestamp=1552030411620, value=24
1 row(s) in 0.0930 seconds

  

(5) Scan the whole table

hbase(main):061:0> scan 'demo'
ROW                        COLUMN+CELL
 example                   column=id:age, timestamp=1552030841823, value=88
 example                   column=id:company, timestamp=1552030467196, value=taobao
 example                   column=id:name, timestamp=1552030380723, value=along
 example                   column=id:sex, timestamp=1552030392249, value=male
 taobao                    column=info:addres, timestamp=1552030496973, value=china
 taobao                    column=info:boss, timestamp=1552030532254, value=mayun
 taobao                    column=info:company, timestamp=1552030520028, value=alibaba
2 row(s) in 0.3880 seconds

  

(6) Delete the 'id:age' cell of row 'example'

hbase(main):062:0> delete 'demo','example','id:age'
0 row(s) in 1.1360 seconds

hbase(main):063:0> get 'demo','example'
COLUMN                     CELL
 id:company                timestamp=1552030467196, value=taobao
 id:name                   timestamp=1552030380723, value=along
 id:sex                    timestamp=1552030392249, value=male

  

(7) Delete an entire row

hbase(main):070:0> deleteall 'demo','taobao'
0 row(s) in 1.8140 seconds

hbase(main):071:0> get 'demo','taobao'
COLUMN                     CELL
0 row(s) in 0.2200 seconds

  

(8) Use the 'id:age' cell of row 'example' as a counter

hbase(main):072:0> incr 'demo','example','id:age'
COUNTER VALUE = 1
0 row(s) in 3.2200 seconds

hbase(main):073:0> get 'demo','example','id:age'
COLUMN                     CELL
 id:age                    timestamp=1552031388997, value=\x00\x00\x00\x00\x00\x00\x00\x01
1 row(s) in 0.0280 seconds

hbase(main):074:0> incr 'demo','example','id:age'
COUNTER VALUE = 2
0 row(s) in 0.0340 seconds

hbase(main):075:0> incr 'demo','example','id:age'
COUNTER VALUE = 3
0 row(s) in 0.0420 seconds

hbase(main):076:0> get 'demo','example','id:age'
COLUMN                     CELL
 id:age                    timestamp=1552031429912, value=\x00\x00\x00\x00\x00\x00\x00\x03
1 row(s) in 0.0690 seconds

hbase(main):077:0> get_counter 'demo','example','id:age'   # read the current counter value
COUNTER VALUE = 3

  

(9) Truncate the table (delete all of its data)

hbase(main):078:0> truncate 'demo'
Truncating 'demo' table (it may take a while):
 - Disabling table...
 - Truncating table...
0 row(s) in 33.0820 seconds

Internally, HBase implements truncate as disable, drop, then create.
