Using pyspark:
from pyspark.sql import SparkSession
spark = SparkSession \
    .builder \
    .appName("spark play") \
    .getOrCreate()
df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:mysql://localhost:port") \
    .option("dbtable", "schema.tablename") \
    .option("user", "username") \
    .option("password", "password") \
    .load()
Rather than fetch "schema.tablename", I would prefer to grab the result set of a query.
Same as in 1.x, you can pass a valid subquery as the dbtable argument, for example:
...
.option("dbtable", "(SELECT foo, bar FROM schema.tablename) AS tmp")
...
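For completeness, here is a minimal sketch of how that subquery option slots into the original read. The connection details and credentials are placeholders taken from the question, and the MySQL JDBC driver is assumed to be available on the Spark classpath:

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("spark play") \
    .getOrCreate()

# The subquery must be wrapped in parentheses and given an alias ("AS tmp")
# so that Spark can treat it as if it were a table name.
query = "(SELECT foo, bar FROM schema.tablename) AS tmp"

df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:mysql://localhost:port") \
    .option("dbtable", query) \
    .option("user", "username") \
    .option("password", "password") \
    .load()

With this, only the columns and rows produced by the subquery are fetched from the database, rather than the entire schema.tablename table.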