SQLITE_ERROR: Connection is closed when connecting to a SQLite database from Spark via JDBC

Date: 2023-08-20

This article describes how to handle the SQLITE_ERROR "Connection is closed" that appears when connecting to a SQLite database from Spark via JDBC; it may be a useful reference for anyone hitting the same problem.

Problem description


                  I am using Apache Spark 1.5.1 and trying to connect to a local SQLite database named clinton.db. Creating a data frame from a table of the database works fine but when I do some operations on the created object, I get the error below which says "SQL error or missing database (Connection is closed)". Funny thing is that I get the result of the operation nevertheless. Any idea what I can do to solve the problem, i.e., avoid the error?


                  Start command for spark-shell:

                  ../spark/bin/spark-shell --master local[8] --jars ../libraries/sqlite-jdbc-3.8.11.1.jar --classpath ../libraries/sqlite-jdbc-3.8.11.1.jar
                  

Reading from the database:

                  val emails = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlite:../data/clinton.sqlite", "dbtable" -> "Emails")).load()
                  

A simple count (this fails):

                  emails.count
                  

The error:

15/09/30 09:06:39 WARN JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
	at org.sqlite.core.DB.newSQLException(DB.java:890)
	at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
	at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
	at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
	at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
	at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
	at org.apache.spark.scheduler.Task.run(Task.scala:90)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
res1: Long = 7945

Recommended answer


                  I got the same error today, and the important line is just before the exception:

15/11/30 12:13:02 INFO jdbc.JDBCRDD: closed connection

15/11/30 12:13:02 WARN jdbc.JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
	at org.sqlite.core.DB.newSQLException(DB.java:890)
	at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
	at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)


So Spark successfully closes the JDBC connection, and then fails to close the JDBC statement.

Looking at the source code, close() is called twice:


                  Line 358 (org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, Spark 1.5.1)

                  context.addTaskCompletionListener{ context => close() }
                  

Line 469

                  override def hasNext: Boolean = {
                    if (!finished) {
                      if (!gotNext) {
                        nextValue = getNext()
                        if (finished) {
                          close()
                        }
                        gotNext = true
                      }
                    }
                    !finished
                  }
                  


                  If you look at the close() method (line 443)

                  def close() {
                    if (closed) return
                  


                  you can see that it checks the variable closed, but that value is never set to true.


If I read it correctly, this bug is still present in master. I have filed a bug report.
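The missing piece can be shown in isolation. Below is a minimal standalone sketch (a hypothetical Reader class, not the actual JDBCRDD code) of the guard as it should work: close() sets the flag once teardown completes, so the second call — whether from hasNext on iterator exhaustion or from the task-completion listener — becomes a no-op instead of touching an already-closed connection.

```scala
// Standalone sketch of an idempotent close(). In Spark 1.5.1's JDBCRDD the
// `closed = true` assignment at the end is missing, so teardown runs twice.
class Reader {
  private var closed = false
  var teardownCount = 0            // stands in for closing the statement/connection

  def close(): Unit = {
    if (closed) return             // the existing guard in JDBCRDD.close()
    teardownCount += 1             // real teardown would happen here
    closed = true                  // the assignment missing in Spark 1.5.1
  }
}

val r = new Reader
r.close()   // first call: hasNext closes when the iterator is exhausted
r.close()   // second call: the task-completion listener fires
assert(r.teardownCount == 1)       // teardown ran exactly once
```

With the flag set, both call sites remain in place but only the first one performs the actual cleanup, which is why the fix is a one-line change.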

• Source: JDBCRDD.scala (line numbers differ slightly)


