I know that we can query or create a MySQL table from SparkSQL with the commands below.
// read an existing MySQL table into a DataFrame over JDBC, then export it as CSV
val data = sqlContext.read.jdbc(urlstring, tablename, properties)
data.write.format("com.databricks.spark.csv").save(result_location)

// create and populate a MySQL table from a DataFrame over JDBC
val dataframe = sqlContext.read.json("users.json")
dataframe.write.jdbc(urlstring, table, properties)
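For completeness, urlstring and properties in these snippets are the usual Spark JDBC connection settings; a minimal sketch follows, where the host, database name and credentials are placeholders to adapt:

import java.util.Properties

val urlstring = "jdbc:mysql://localhost:3306/mydb"   // placeholder host and database
val properties = new Properties()
properties.setProperty("user", "root")               // placeholder credentials
properties.setProperty("password", "secret")
properties.setProperty("driver", "com.mysql.jdbc.Driver")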
Likewise, is there any way to drop a table?
You can try a basic DROP operation with the JDBC driver:
// connection settings to fill in
val DB_URL: String = ???
val USER: String = ???
val PASS: String = ???

def dropTable(tableName: String): Unit = {
  import java.sql._
  var conn: Connection = null
  var stmt: Statement = null
  try {
    Class.forName("com.mysql.jdbc.Driver")
    println("Connecting to a selected database...")
    conn = DriverManager.getConnection(DB_URL, USER, PASS)
    println("Connected database successfully...")

    println("Deleting table in given database...")
    stmt = conn.createStatement()
    val sql: String = s"DROP TABLE $tableName"
    stmt.executeUpdate(sql)
    println(s"Table $tableName deleted in given database...")
  } catch {
    case e: Exception => println("exception caught: " + e)
  } finally {
    // close the statement and connection whether or not the DROP succeeded
    if (stmt != null) stmt.close()
    if (conn != null) conn.close()
  }
}
dropTable("test")
You can also do that within Spark using JdbcUtils, but the plain JDBC approach above is already quite straightforward.
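If you do want to go through Spark's own machinery, a rough sketch follows; note that JdbcUtils sits in an internal package (org.apache.spark.sql.execution.datasources.jdbc), its method signatures differ between Spark releases, and the code below assumes Spark 2.x:

import org.apache.spark.sql.execution.datasources.jdbc.{JDBCOptions, JdbcUtils}

// internal API, assumed Spark 2.x signatures; subject to change between releases
val options = new JDBCOptions(Map(
  "url"      -> urlstring,   // same JDBC URL as in the question
  "dbtable"  -> "test",
  "user"     -> USER,
  "password" -> PASS))

val conn = JdbcUtils.createConnectionFactory(options)()  // () => Connection in Spark 2.x
try {
  JdbcUtils.dropTable(conn, "test")  // Spark 2.4+ also expects the options as a third argument
} finally {
  conn.close()
}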