        SQLITE_ERROR: Connection is closed when connecting to a SQLite database from Spark via JDBC

        Date: 2023-08-20

                  This article covers the error "SQLITE_ERROR: SQL error or missing database (Connection is closed)" that can appear when connecting to a SQLite database from Spark via JDBC; the question and answer below may be a useful reference for anyone hitting the same problem.

                  Problem description

                  I am using Apache Spark 1.5.1 and trying to connect to a local SQLite database named clinton.db. Creating a data frame from a table of the database works fine but when I do some operations on the created object, I get the error below which says "SQL error or missing database (Connection is closed)". Funny thing is that I get the result of the operation nevertheless. Any idea what I can do to solve the problem, i.e., avoid the error?

                  Start command for spark-shell:

                  ../spark/bin/spark-shell --master local[8] --jars ../libraries/sqlite-jdbc-3.8.11.1.jar --classpath ../libraries/sqlite-jdbc-3.8.11.1.jar
                  

                  Reading from the database:

                  val emails = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlite:../data/clinton.sqlite", "dbtable" -> "Emails")).load()
                  

                  Simple count (fails):

                  emails.count
                  

                  The error:

                  15/09/30 09:06:39 WARN JDBCRDD: Exception closing statement
                  java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
                      at org.sqlite.core.DB.newSQLException(DB.java:890)
                      at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
                      at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
                      at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
                      at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
                      at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
                      at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
                      at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
                      at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
                      at org.apache.spark.scheduler.Task.run(Task.scala:90)
                      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
                      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                      at java.lang.Thread.run(Thread.java:745)

                  res1: Long = 7945

                  Answer

                  I got the same error today, and the important line is just before the exception:

                  15/11/30 12:13:02 INFO jdbc.JDBCRDD: closed connection

                  15/11/30 12:13:02 WARN jdbc.JDBCRDD: Exception closing statement
                  java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
                      at org.sqlite.core.DB.newSQLException(DB.java:890)
                      at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
                      at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)

                  So Spark successfully closes the JDBC connection, and then fails to close the JDBC statement.
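                  To make that sequence concrete, here is a minimal stand-alone sketch of the same failure mode against an in-memory SQLite database. The variable names and the in-memory URL are illustrative, and it assumes the sqlite-jdbc build from the question (3.8.11.1) is on the classpath (e.g. via the --jars flag from the start command above) and that it throws when a statement is closed after its connection, which is what the stack trace above shows.

                  import java.sql.DriverManager

                  // Illustrative reproduction, not part of the original question.
                  Class.forName("org.sqlite.JDBC")                 // make sure the driver is registered
                  val conn = DriverManager.getConnection("jdbc:sqlite::memory:")
                  val stmt = conn.createStatement()
                  val rs   = stmt.executeQuery("SELECT 1")         // the query itself succeeds
                  rs.close()
                  conn.close()                                     // connection closed first, like Spark's completion listener
                  stmt.close()                                     // expected to throw: [SQLITE_ERROR] ... (Connection is closed)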

                  Looking at the source code, close() is called twice:

                  Line 358 (org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, Spark 1.5.1)

                  context.addTaskCompletionListener{ context => close() }
                  

                  Line 469

                  override def hasNext: Boolean = {
                    if (!finished) {
                      if (!gotNext) {
                        nextValue = getNext()
                        if (finished) {
                          close()
                        }
                        gotNext = true
                      }
                    }
                    !finished
                  }
                  

                  If you look at the close() method (line 443)

                  def close() {
                    if (closed) return
                  

                  you can see that it checks the variable closed, but that value is never set to true.

                  If I see it correctly, this bug is still in the master. I have filed a bug report.

                  • Source: JDBCRDD.scala (line numbers differ slightly)
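
                  For what it's worth, here is a minimal sketch of the guard pattern the close() method appears to be aiming for: mark the iterator as closed at the end of close(), so that the later of the two calls (the task-completion listener or hasNext) becomes a no-op. The class and field names below are illustrative and follow the excerpt and stack trace above; this is not the actual Spark patch.

                  import java.sql.{Connection, Statement}

                  // Illustrative sketch of an idempotent close(). The decisive line is
                  // `closed = true`: without it, the guard `if (closed) return` never fires
                  // and a second call re-closes resources whose connection is already gone.
                  class CloseOnce(conn: Connection, stmt: Statement) {
                    private var closed = false

                    def close(): Unit = {
                      if (closed) return                 // second caller returns immediately
                      try {
                        if (stmt != null) stmt.close()   // statement first, while the connection is still open
                        if (conn != null) conn.close()
                      } finally {
                        closed = true                    // the step missing in the version discussed here
                      }
                    }
                  }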
