Spark SQL/Hive query takes forever with join

Date: 2023-08-22

This article describes how to handle a Spark SQL/Hive query that takes forever with a join; it should be a useful reference for anyone facing the same problem.

Problem description

So I'm doing something that should be simple, but apparently it's not in Spark SQL.

If I run the following query in MySQL, the query finishes in a fraction of a second:

    SELECT ua.address_id
    FROM user u
    INNER JOIN user_address ua ON ua.address_id = u.user_address_id
    WHERE u.user_id = 123;

However, running the same query in HiveContext under Spark (1.5.1) takes more than 13 seconds. Adding more joins makes the query run for a very very long time (over 10 minutes). I'm not sure what I'm doing wrong here and how I can speed things up.

The tables are MySQL tables that are loaded into the Hive Context as temporary tables. This is running in a single instance, with the database on a remote machine.

• The user table has about 4.8 million rows.
• The user_address table has 350,000 rows.
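
For reference, the setup described above (MySQL tables registered as temporary tables in a HiveContext) would look roughly like the sketch below in Spark 1.5. The host, database and credentials are placeholders, not values taken from the question:

    import org.apache.spark.sql.hive.HiveContext

    // Assumes an existing SparkContext `sc`; URL and credentials are placeholders.
    val sqlContext = new HiveContext(sc)

    val user = sqlContext.read.format("jdbc")
      .options(Map(
        "url"      -> "jdbc:mysql://<host>/<db>",
        "dbtable"  -> "user",
        "user"     -> "<user>",
        "password" -> "<password>"))
      .load()

    val user_address = sqlContext.read.format("jdbc")
      .options(Map(
        "url"      -> "jdbc:mysql://<host>/<db>",
        "dbtable"  -> "user_address",
        "user"     -> "<user>",
        "password" -> "<password>"))
      .load()

    // Register as temporary tables so the SQL above can be run via sqlContext.sql(...)
    user.registerTempTable("user")
    user_address.registerTempTable("user_address")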

The tables have foreign key fields, but no explicit FK relationships are defined in the database. I'm using InnoDB.

The execution plan in Spark:

    == Physical Plan ==
    TungstenProject [address_id#0L]
     SortMergeJoin [user_address_id#27L], [address_id#52L]
      TungstenSort [user_address_id#27L ASC], false, 0
       TungstenExchange hashpartitioning(user_address_id#27L)
        ConvertToUnsafe
         Filter (user_id#0L = 123)
          Scan JDBCRelation(jdbc:mysql://.user,[Lorg.apache.spark.Partition;@596f5dfc,{user=, password=, url=jdbc:mysql://, dbtable=user})[address_id#0L,user_address_id#27L]
      TungstenSort [address_id#52L ASC], false, 0
       TungstenExchange hashpartitioning(address_id#52L)
        ConvertToUnsafe
         Scan JDBCRelation(jdbc:mysql://.user_address,[Lorg.apache.spark.Partition;@2ce558f3,{user=, password=, url=jdbc:mysql://, dbtable=user_address})[address_id#52L]

Recommended answer

First of all, the type of query you perform is extremely inefficient. As for now (Spark 1.5.0*), to perform a join like this, both tables have to be shuffled / hash-partitioned each time the query is executed. It shouldn't be a problem in the case of the users table, where the user_id = 123 predicate is most likely pushed down, but it still requires a full shuffle on user_address.
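
This is exactly what the plan above shows: a TungstenExchange (shuffle) and TungstenSort feed the SortMergeJoin on both sides. Assuming the temporary tables are registered as in the question, the plan can be inspected directly:

    // Print the physical plan; the Exchange/Sort operators in front of the
    // SortMergeJoin are the per-query shuffle described above.
    sqlContext.sql(
      """SELECT ua.address_id
        |FROM user u
        |INNER JOIN user_address ua ON ua.address_id = u.user_address_id
        |WHERE u.user_id = 123""".stripMargin).explain()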

Moreover, if tables are only registered and not cached, then every execution of this query will fetch the whole user_address table from MySQL to Spark.
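
If the registered tables are queried repeatedly and fit in memory, one way to avoid re-fetching them from MySQL on every run is to cache them. A minimal sketch, using the table names from the question:

    // Cache the registered temporary tables so repeated queries read Spark's
    // in-memory columnar cache instead of going back to MySQL each time.
    sqlContext.cacheTable("user")
    sqlContext.cacheTable("user_address")

    // Equivalently, the DataFrames themselves can be cached:
    // user_address.cache()

    // Note: the cache is populated lazily, by the first query that touches the table.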

I'm not sure what I'm doing wrong here and how I can speed things up.

It is not exactly clear why you want to use Spark for this application, but the single-machine setup, small data, and type of queries suggest that Spark is not a good fit here.

Generally speaking, if application logic requires single-record access, then Spark SQL won't perform well. It is designed for analytical queries, not as an OLTP database replacement.

If a single table / data frame is much smaller, you could try broadcasting.

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.functions.broadcast
    import sqlContext.implicits._  // for the $"..." column syntax (sqlContext is your HiveContext)

    val user: DataFrame = ???
    val user_address: DataFrame = ???

    // Filter first so that only the matching user rows are broadcast
    val userFiltered = user.where(???)

    // Broadcast the small, filtered side so user_address does not have to be shuffled
    user_address.join(
      broadcast(userFiltered), $"address_id" === $"user_address_id")
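
The point of the hint is that, after the user_id filter, the user side is tiny, so it can be shipped to every executor and joined locally; user_address then no longer needs to be shuffled and sorted for a SortMergeJoin. This assumes the filtered side really is small enough to broadcast; with the hint in place, the physical plan should show a broadcast join instead of the exchange/sort/merge pipeline above.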
                  


* This should change in Spark 1.6.0, where SPARK-11410 should enable persistent table partitioning.

