No suitable driver found for jdbc in Spark

Date: 2023-08-22

This article describes how to handle the "No suitable driver found for jdbc" error in Spark; it should be a useful reference for anyone hitting the same problem.

Problem Description

I am using

                  df.write.mode("append").jdbc("jdbc:mysql://ip:port/database", "table_name", properties)
                  

to insert into a table in MySQL.

Also, I have added Class.forName("com.mysql.jdbc.Driver") in my code.
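
Put together, the write path looks roughly like this. This is a minimal Scala sketch of the setup the question describes; the example DataFrame, the object name, and the user/password properties are illustrative assumptions, while the URL and table name are the placeholders from the question:

import java.util.Properties

import org.apache.spark.sql.SparkSession

object WriteToMySQL {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WriteToMySQL").getOrCreate()
    import spark.implicits._

    // Load the MySQL driver class explicitly, as described in the question.
    Class.forName("com.mysql.jdbc.Driver")

    // Hypothetical connection properties; user and password are placeholders.
    val properties = new Properties()
    properties.put("user", "my_user")
    properties.put("password", "my_password")

    // A stand-in DataFrame; in the question this is an existing df.
    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

    // Append the rows to an existing MySQL table over JDBC.
    df.write.mode("append").jdbc("jdbc:mysql://ip:port/database", "table_name", properties)

    spark.stop()
  }
}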

When I submit my Spark application:

spark-submit --class MY_MAIN_CLASS \
  --master yarn-client \
  --jars /path/to/mysql-connector-java-5.0.8-bin.jar \
  --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar \
  MY_APPLICATION.jar

This yarn-client mode works for me.

But when I use yarn-cluster mode:

spark-submit --class MY_MAIN_CLASS \
  --master yarn-cluster \
  --jars /path/to/mysql-connector-java-5.0.8-bin.jar \
  --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar \
  MY_APPLICATION.jar

It doesn't work. I also tried setting "--conf":

spark-submit --class MY_MAIN_CLASS \
  --master yarn-cluster \
  --jars /path/to/mysql-connector-java-5.0.8-bin.jar \
  --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar \
  --conf spark.executor.extraClassPath=/path/to/mysql-connector-java-5.0.8-bin.jar \
  MY_APPLICATION.jar

but still get the "No suitable driver found for jdbc" error.

Recommended Answer

There are 3 possible solutions:

1. You might want to assemble your application with your build manager (Maven, SBT) so that you do not need to add the dependencies on the spark-submit command line; see the build.sbt sketch below.
2. You can use the following option in your spark-submit CLI:

                  --jars $(echo ./lib/*.jar | tr ' ' ',')
                  

Explanation: assuming that you have all your jars in a lib directory in your project root, this will read all the libraries and add them to the application submit.
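
For solution 1, the idea is to bundle the MySQL driver into the application jar itself, so the cluster nodes never need to resolve it separately. Here is a minimal build.sbt sketch using the sbt-assembly plugin; the version numbers are illustrative assumptions, not values from the answer:

// build.sbt -- produce a fat jar that already contains the JDBC driver
name := "MY_APPLICATION"
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // "provided": the cluster supplies Spark itself, so keep it out of the fat jar.
  "org.apache.spark" %% "spark-sql" % "2.4.8" % "provided",
  // Bundling the driver removes the need for --jars at submit time.
  "mysql" % "mysql-connector-java" % "5.1.49"
)

// project/plugins.sbt: addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

Running sbt assembly then produces a single MY_APPLICATION.jar to pass to spark-submit.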

3. You can also try to configure these 2 variables, spark.driver.extraClassPath and spark.executor.extraClassPath, in the SPARK_HOME/conf/spark-defaults.conf file, and set their values to the path of the jar file. Ensure that the same path exists on the worker nodes.
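
For example, the two entries in spark-defaults.conf would look like this, using the jar path from the question (adjust it to wherever the connector actually lives, on every node):

# SPARK_HOME/conf/spark-defaults.conf
spark.driver.extraClassPath   /path/to/mysql-connector-java-5.0.8-bin.jar
spark.executor.extraClassPath /path/to/mysql-connector-java-5.0.8-bin.jar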

This concludes the article on the "No suitable driver found for jdbc" error in Spark; hopefully the recommended answer above helps.

