1、HBase Overview
1.1、HBase Logical Structure
1.2、HBase Physical Storage Structure
1) Name Space: a namespace is similar to a database in a relational database; each namespace contains multiple tables. HBase ships with two built-in namespaces, hbase and default: hbase holds HBase's internal tables, while default is the namespace used for user tables when none is specified.
2) Table: similar to a table in a relational database, except that an HBase table only declares column families, not concrete columns. Fields can therefore be specified dynamically and on demand at write time, which makes HBase much better suited than a relational database to schemas that change frequently.
3) Row: each row in an HBase table consists of a RowKey and one or more Columns. Rows are stored in lexicographic order of the RowKey, and data can only be retrieved by RowKey, so RowKey design is critical.
4) Column: every column in HBase is identified by a Column Family and a Column Qualifier, e.g. info:name, info:age. Only the column family is declared at table-creation time; qualifiers need not be predefined.
5) Time Stamp: identifies different versions of a value. Every write is automatically stamped with the time at which it was written to HBase.
6) Cell: the unit uniquely identified by {rowkey, column family, column qualifier, timestamp}. All data in a cell is stored as raw bytes (see the shell session below).
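A minimal hbase shell session, assuming a running cluster, illustrates how these concepts fit together (the student table, rowkeys, and values here are made-up examples):

# create a table with one column family "info", keeping up to 3 versions per cell
create 'student', {NAME => 'info', VERSIONS => 3}
# write two cells for row "1001"; qualifiers need not be declared beforehand
put 'student', '1001', 'info:name', 'zhangsan'
put 'student', '1001', 'info:age', '18'
# overwrite "age"; the old value survives as an older version (distinguished by timestamp)
put 'student', '1001', 'info:age', '19'
# read back all stored versions of the cell {1001, info, age}
get 'student', '1001', {COLUMN => 'info:age', VERSIONS => 3}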
1) Region Server: manages Regions; its implementation class is HRegionServer.
2) Master: manages all Region Servers; its implementation class is HMaster. It assigns regions to each RegionServer, monitors each RegionServer's state, and handles load balancing and failover.
3) Zookeeper: HBase uses ZooKeeper for Master high availability, RegionServer monitoring, the entry point to metadata, and maintenance of cluster configuration.
4) HDFS: HDFS provides the underlying data storage for HBase and supplies high availability for that data. A minimal configuration wiring these components together is sketched below.
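As a sketch, a minimal hbase-site.xml for a fully distributed deployment (hostnames follow the hadoop102/103/104 naming used later in this article; the HDFS port must match your fs.defaultFS):

<!-- hbase-site.xml: minimal fully-distributed configuration (sketch) -->
<configuration>
  <!-- store HBase data in HDFS; host and port must match fs.defaultFS in core-site.xml -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop102:8020/hbase</value>
  </property>
  <!-- run in distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- the external ZooKeeper ensemble HBase relies on -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop102,hadoop103,hadoop104</value>
  </property>
</configuration>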
2、HBase Installation and Deployment
Required packages:
hadoop-3.1.3.tar.gz
apache-zookeeper-3.5.7-bin.tar.gz
hbase-2.0.5-bin.tar.gz
apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
See also: "Four HBase deployment modes and basic operations"; "Default port changes between Hadoop 2.x and Hadoop 3.x".
2022-02-03 17:21:01,104 ERROR [Thread-15] master.HMaster: Failed to become active master
java.net.ConnectException: Call From hadoop102/192.168.2.34 to hadoop102:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The fs.defaultFS setting makes HDFS a file abstraction over a cluster, so that its root is not the same as the local system's. You need to change the value in order to use the distributed file system.
Note: in this Hadoop 3.x installation the HDFS access port is 9820 rather than 8020; hbase.rootdir must use whatever port fs.defaultFS is configured with, which is why the connection above was refused.
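For reference, a sketch of the relevant core-site.xml entry (host and port are from this article's setup; check your own fs.defaultFS value and keep hbase.rootdir consistent with it):

<!-- core-site.xml: the NameNode address that hbase.rootdir must agree with -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop102:9820</value>
</property>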
# start individual daemons
[atguigu@hadoop102 hbase-2.0.5]$ bin/hbase-daemon.sh start master
[atguigu@hadoop102 hbase-2.0.5]$ bin/hbase-daemon.sh start regionserver
# start/stop the whole cluster
[atguigu@hadoop102 hbase-2.0.5]$ bin/start-hbase.sh
[atguigu@hadoop102 hbase-2.0.5]$ bin/stop-hbase.sh
2.2、Namespace operations
# enter the HBase shell
[atguigu@hadoop102 ~]$ hbase shell
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.0.5, rUnknown, Thu Jun 18 15:10:52 CST 2020
Took 0.0024 seconds
# list namespaces
hbase(main):025:0> list_namespace
NAMESPACE
default
hbase
2 row(s)
Took 0.0272 seconds
# create a namespace
hbase(main):031:0> create_namespace 'test'
Took 0.3161 seconds
# describe a namespace
hbase(main):032:0> describe_namespace 'test'
DESCRIPTION
{NAME => 'test'}
Took 0.0077 seconds
=> 1
# set namespace attributes
hbase(main):033:0> alter_namespace 'test', {METHOD => 'set', 'author' => 'jieky'}
Took 0.2959 seconds
hbase(main):034:0> describe_namespace 'test'
DESCRIPTION
{NAME => 'test', author => 'jieky'}
Took 0.0097 seconds
=> 1
hbase(main):035:0> alter_namespace 'test', {METHOD => 'set', 'like' => 'study'}
Took 0.2554 seconds
# unset a namespace attribute
hbase(main):039:0> alter_namespace 'test', {METHOD => 'unset', NAME => 'like'}
Took 0.2505 seconds
# drop the namespace
hbase(main):041:0> drop_namespace 'test'
Took 0.3004 seconds
2.3、HBase Shell table operations
See also: "HBase basics: namespaces, table creation, CRUD".
3、HBase Internals
3.1、RegionServer architecture
3.2、HBase read and write paths
3.3、HBase Java API operations
4、HBase Tuning
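One technique that comes up repeatedly in the tuning articles referenced below is pre-splitting: creating a table with region boundaries up front so writes spread across RegionServers instead of hot-spotting the single initial region. A minimal hbase shell sketch (the table name and split points are arbitrary examples):

# create a pre-split table: four regions with boundaries at the given rowkeys
create 'staff', 'info', SPLITS => ['1000', '2000', '3000']
# alternatively, let HBase generate 15 evenly spaced split points over hex-string rowkeys
create 'staff2', 'info', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}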
Further reading: "What is RPC?"; "Common HBase optimizations"; "HBase performance tuning"; "HBase tuning experience summary"; "HBase optimization (pre-splitting, RowKey design)".
5、Apache Phoenix
See also: "What is the relationship between OLTP and OLAP?"
Apache Phoenix enables OLTP and operational analytics in Hadoop for low-latency applications by combining the best of both worlds:
1) the power of standard SQL and JDBC APIs with full ACID transaction capabilities, and
2) the flexibility of late-bound, schema-on-read capabilities from the NoSQL world, by leveraging HBase as its backing store.
Apache Phoenix is fully integrated with other Hadoop products such as Spark, Hive, Pig, Flume, and MapReduce.
See also: "Mapping Phoenix schemas to HBase namespaces"; "HBase 2.0.5 + Phoenix integration in practice"; "Phoenix thick client vs. thin client"; "Building secondary indexes in Phoenix"; "Using HBase with Hive".
5.1、Phoenix Shell operations
Thin client
[atguigu@hadoop102 ~]$ queryserver.py start
[atguigu@hadoop102 ~]$ sqlline-thin.py http://hadoop102:8765
Thick client
By default, schemas cannot be created directly in Phoenix. The following property must be added both to hbase-site.xml under HBase's conf directory and to hbase-site.xml under Phoenix's bin directory.
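The property in question is Phoenix's standard namespace-mapping switch (both the server-side and client-side copies of hbase-site.xml need it, and HBase should be restarted afterwards):

<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>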
[atguigu@hadoop102 ~]$ sqlline.py hadoop102,hadoop103,hadoop104:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:hadoop102,hadoop103,hadoop104:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:hadoop102,hadoop103,hadoop104:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/module/apache-phoenix-5.0.0-HBase-2.0-bin/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
22/02/05 13:30:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 5.0)
Driver: PhoenixEmbeddedDriver (version 5.0)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
133/133 (100%) Done
Done
sqlline version 1.2.0
# Note: in Phoenix, schema names, table names, and column names are automatically converted to upper case;
# to keep them lower case, quote them with double quotes, e.g. "student".
0: jdbc:phoenix:hadoop102,hadoop103,hadoop104> create schema bigdata;
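As a short follow-up sketch of basic Phoenix SQL in the same session (the student table and its row are made-up examples):

-- switch to the schema created above
use bigdata;
-- Phoenix DDL: the primary key maps to the HBase rowkey
create table student(id varchar primary key, name varchar);
-- Phoenix has upsert rather than separate insert/update
upsert into student values('1001', 'zhangsan');
select * from student;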
5.2、Phoenix API operations
Thin client (pom.xml)
See also: "Fixing java.lang.ClassNotFoundException: org.apache.http.config.Lookup"; "NoClassDefFoundError: com/google/protobuf/GeneratedMessageV3".
See also: "Maven exclusions (excluding transitive dependencies)".
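The class-path errors above are typically resolved with a Maven exclusion. A sketch of the pattern (the thin-client coordinates match the Phoenix 5.0.0 release used in this article; which transitive artifact you exclude depends on the actual conflict on your class path):

<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-queryserver-client</artifactId>
    <version>5.0.0-HBase-2.0</version>
    <exclusions>
        <!-- example: drop a conflicting transitive jar in favor of the version
             already provided elsewhere on the class path -->
        <exclusion>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
        </exclusion>
    </exclusions>
</dependency>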
5.3、Phoenix encoding issues
If the data in an HBase table was written through Phoenix, numbers round-trip correctly, because Phoenix handles both encoding and decoding. If the data was written directly through HBase, the numbers were encoded by HBase, and when Phoenix reads them it decodes with its own scheme; because Phoenix and HBase encode numbers differently, the values come out wrong.
# 1) create a table in the hbase shell and insert a numeric value
hbase(main):001:0> create 'person', 'info'
# note: to store an actual number (rather than the string "123456"), encode it with Bytes.toBytes(123456)
hbase(main):002:0> put 'person', '1001', 'info:salary', Bytes.toBytes(123456)
# 2) create a mapping table in Phoenix and query it
0: jdbc:phoenix:hadoop102,hadoop103,hadoop104> create table "person"(id varchar primary key, "info"."salary" integer) column_encoded_bytes=0;
# the number comes back garbled
0: jdbc:phoenix:hadoop102,hadoop103,hadoop104> select * from "person";
# 3) workaround: declare the column with the unsigned type unsigned_long when creating the Phoenix table
#    (negative values written by HBase will still fail to parse in Phoenix)
0: jdbc:phoenix:hadoop102,hadoop103,hadoop104> create table "person"(id varchar primary key, "info"."salary" unsigned_long) column_encoded_bytes=0;
5.4、Integrating Hive with HBase
Case 1: a Hive-managed table backed by HBase. The Hive and HBase tables are created and dropped together, and data inserted on either side is visible to both (a usage sketch follows the DDL below).
create table hive_customer22(
  name        string,  -- maps to the HBase rowkey
  order_numb  string,  -- column family "order", qualifier "numb" (joined with "_" on the Hive side)
  order_date  string,  -- column family "order", qualifier "date"
  addr_city   string,  -- column family "addr", qualifier "city"
  addr_state  string   -- column family "addr", qualifier "state"
)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties("hbase.columns.mapping" = ":key,order:numb,order:date,addr:city,addr:state")
tblproperties("hbase.table.name" = "hbase_customer22");
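A usage sketch for the managed table (the sample row is made up, and INSERT ... VALUES assumes Hive 0.14+): write through Hive, then confirm the same data is visible from the hbase shell.

-- Hive side: write one row through the storage handler
insert into hive_customer22 values('1001', 'o-01', '2022-02-05', 'beijing', 'bj');

# hbase shell side: the row appears under rowkey '1001'
scan 'hbase_customer22'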
Case 2: create the HBase table first, then a Hive external table on top of it. Dropping the Hive table leaves the HBase table intact; dropping the HBase table breaks queries against the Hive table. Data inserted on either side is visible to both (a sketch of the reverse direction follows the DDL below).
# hbase shell: the HBase table must exist before the external Hive table is created
create 'hbase_customer', 'order', 'addr'

create external table hive_customer(
  name        string,  -- maps to the HBase rowkey
  order_numb  string,  -- column family "order", qualifier "numb"
  order_date  string,  -- column family "order", qualifier "date"
  addr_city   string,  -- column family "addr", qualifier "city"
  addr_state  string   -- column family "addr", qualifier "state"
)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties("hbase.columns.mapping" = ":key,order:numb,order:date,addr:city,addr:state")
tblproperties("hbase.table.name" = "hbase_customer");
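And the reverse direction (sample values are made up): write cells from the hbase shell, then read them back through the external Hive table.

# hbase shell: write columns of a new row directly into HBase
put 'hbase_customer', '1002', 'order:numb', 'o-02'
put 'hbase_customer', '1002', 'addr:city', 'shanghai'

-- Hive: the same row is visible through the external table
select * from hive_customer where name = '1002';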