As is well known, joins come in many varieties.

By **join type**:
inner, outer, left, right, semi, anti, and so on. The join type, determined by business logic, decides the *result* of a Spark job;

By **join mechanism**:
NLJ (Nested Loop Join), SMJ (Sort Merge Join), and HJ (Hash Join). The mechanism, determined by the data itself, decides the *efficiency* of a Spark job; a minimal sketch of how these mechanisms match rows follows below.
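To make the mechanisms concrete, here is a minimal sketch of NLJ and HJ on plain Scala collections (the `Rec` case class and both helper functions are hypothetical, purely illustrative; Spark's internal implementations are far more involved):

```scala
// Illustrative only: how NLJ and HJ match rows, using plain Scala collections.
case class Rec(id: Int, payload: String)

// Nested Loop Join: compares every pair of rows, O(M * N).
def nestedLoopJoin(left: Seq[Rec], right: Seq[Rec]): Seq[(Rec, Rec)] =
  for { l <- left; r <- right if l.id == r.id } yield (l, r)

// Hash Join: builds a hash table on the smaller table, then probes it
// with the other table, O(M + N).
def hashJoin(probe: Seq[Rec], build: Seq[Rec]): Seq[(Rec, Rec)] = {
  val hashTable = build.groupBy(_.id)                                  // build phase
  probe.flatMap(p => hashTable.getOrElse(p.id, Nil).map(b => (p, b)))  // probe phase
}

// SMJ (not shown) sorts both sides on the join key and merges them
// with two cursors, O(M log M + N log N).
```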
**Join Types**

The join types Spark supports. First, set up two sample tables:
```scala
import spark.implicits._
import org.apache.spark.sql.DataFrame

// Create the employees table
val seq = Seq((1, "Mike", 28, "Male"), (2, "Lily", 30, "Female"),
              (3, "Raymond", 26, "Male"), (5, "Dave", 36, "Male"))
val employees: DataFrame = seq.toDF("id", "name", "age", "gender")

// Create the salaries table
val seq2 = Seq((1, 26000), (2, 30000), (4, 25000), (3, 20000))
val salaries: DataFrame = seq2.toDF("id", "salary")

// Left table
salaries.show

// Right table
employees.show
```
**Inner join**

The inner join is the default join type, so the "inner" argument can be omitted. The left and right tables are matched on the join key; unmatched rows are discarded, and only the records that satisfy the join condition are kept.
```scala
// Inner join
val joinedDF: DataFrame = salaries.join(employees, salaries("id") === employees("id"), "inner")
joinedDF.show
```
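With the sample tables above, only ids 1, 2, and 3 appear on both sides, so the inner join should produce roughly the following (row order may vary):

```
+---+------+---+-------+---+------+
| id|salary| id|   name|age|gender|
+---+------+---+-------+---+------+
|  1| 26000|  1|   Mike| 28|  Male|
|  2| 30000|  2|   Lily| 30|Female|
|  3| 20000|  3|Raymond| 26|  Male|
+---+------+---+-------+---+------+
```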
**Outer join**

```scala
// Left outer join
val joinedDF: DataFrame = salaries.join(employees, salaries("id") === employees("id"), "left")
joinedDF.show
```
The "left"/"right" of an outer join indicates which table the unmatched rows are kept from. The "left" above requests a left outer join, so the unmatched salaries row, the third row of the output (id 4), comes from the left table, with the employee columns padded with null. The other outer variants only change the join-type string, as sketched below.
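A sketch of the remaining outer variants ("right" and "full" are both join-type strings Spark accepts):

```scala
// Right outer join: unmatched rows are kept from the right table (employees)
val rightDF: DataFrame = salaries.join(employees, salaries("id") === employees("id"), "right")

// Full outer join: unmatched rows from both sides are kept, padded with null
val fullDF: DataFrame = salaries.join(employees, salaries("id") === employees("id"), "full")
```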
**Semi join**

```scala
// Left semi join
val joinedDF: DataFrame = salaries.join(employees, salaries("id") === employees("id"), "leftsemi")
joinedDF.show
```

A semi join returns "half" of an inner join: a left semi join returns only the left table's columns, while a right semi join returns only the right table's. On the sample data the result looks as shown below.
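With the sample tables, the left semi join keeps the salaries rows whose id also exists in employees (1, 2, 3) and drops all employee columns (row order may vary):

```
+---+------+
| id|salary|
+---+------+
|  1| 26000|
|  2| 30000|
|  3| 20000|
+---+------+
```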
**Anti join**

```scala
// Left anti join
val joinedDF: DataFrame = salaries.join(employees, salaries("id") === employees("id"), "leftanti")
joinedDF.show
```
An anti join is the opposite of a semi join: it returns the rows that fail to match. Here, the left anti join returns the salaries rows that have no counterpart in employees.
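On the sample data, only id 4 is missing from employees, so the expected result is:

```
+---+------+
| id|salary|
+---+------+
|  4| 25000|
+---+------+
```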
**Join Mechanisms**

Call the table that drives the join the "driver table" and the table being joined the "base table". With these two terms in place, Spark's two join strategies compare as follows:
| Join strategy | Shuffle Join | Broadcast Join |
| --- | --- | --- |
| Mechanism | Routes every record by hashing the join key and taking the modulo of the parallelism, shuffling both tables | Packages the small table as a broadcast variable and sends it to every Executor |
| Advantage | Completes the join functionally whether the data volume is large or small, and whether or not memory is sufficient | High SQL execution performance, since only the smaller table is distributed |
| Applicable scenarios | Any scenario | The broadcast table is small |
| Precondition | None | The base table must be small enough (smaller than Executor memory) |
| Drawback | Performance bottleneck caused by shuffle IO | None |
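A broadcast join can be requested explicitly with the broadcast hint (a real Spark API; whether the optimizer chooses a broadcast join automatically also depends on spark.sql.autoBroadcastJoinThreshold):

```scala
import org.apache.spark.sql.functions.broadcast

// Hint Spark to broadcast the smaller table (employees) to every Executor,
// turning the join into a broadcast join and avoiding a shuffle of salaries.
val bcastDF: DataFrame = salaries.join(broadcast(employees), salaries("id") === employees("id"), "inner")
bcastDF.show
```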
**Join Mechanisms × Join Strategies**

Combining the 3 join mechanisms with the 2 join strategies yields 6 possible joins. Since Broadcast SMJ is strictly worse than Broadcast HJ (Broadcast SMJ < Broadcast HJ), that useless combination is dropped, leaving the following 5 join implementations.
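Which implementation Spark actually picks for a given query can be checked in the physical plan via explain() (a real API; the exact plan text varies across Spark versions):

```scala
// With the broadcast hint above, the physical plan should show a node
// like "BroadcastHashJoin" instead of "SortMergeJoin".
salaries.join(broadcast(employees), salaries("id") === employees("id")).explain()
```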