I want to create a storage plugin in Drill for Oracle JDBC. I copied ojdbc7.jar to the apache-drill-1.3.0/jars/3rdparty path and added drill.exec.sys.store.provider.local.path = "/mypath" to drill-override.conf.
When I try to create a new storage plugin with the configuration below:
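The override entry mentioned above lives in drill-override.conf under Drill's conf directory. A minimal sketch of that file in HOCON, with /mypath standing in for the actual store location (the surrounding keys are the usual Drill layout, not taken from the question):

```
# drill-override.conf (HOCON); /mypath is the path from the question
drill.exec: {
  sys.store.provider.local.path: "/mypath"
}
```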
{
"type": "jdbc",
"enabled": true,
"driver": "oracle.jdbc.OracleDriver",
"url":"jdbc:oracle:thin:user/pass@x.x.x.x:1521/orcll"
}
I get an "unable to create/update storage" error.
I am using Red Hat 7 and Drill version 1.3 in distributed mode.
I guess the problem is with the default schema name: it should be orcl instead of orcll.
Plugin:
{
"type": "jdbc",
"enabled": true,
"driver": "oracle.jdbc.OracleDriver",
"url": "jdbc:oracle:thin:user/pass@x.x.x.x:1521:orcl"
}
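The fix changes two things: the name (orcl instead of orcll) and the separator before it. In an Oracle thin JDBC URL, a colon before the final component selects a SID, while a slash selects a service name. A small illustrative check of the two forms (this helper is hypothetical, written for this answer, and not part of Drill or the Oracle driver):

```python
import re

def oracle_thin_connect_type(url):
    """Classify an Oracle thin JDBC URL as SID-style or service-name-style.

    SID form:          jdbc:oracle:thin:user/pass@host:port:SID
    Service-name form: jdbc:oracle:thin:user/pass@host:port/service
    """
    m = re.match(r"jdbc:oracle:thin:.*@[^:/]+:\d+([:/])(\w+)$", url)
    if not m:
        raise ValueError("unrecognized Oracle thin URL: " + url)
    sep, name = m.groups()
    # ':' introduces a SID, '/' introduces a service name
    return ("sid" if sep == ":" else "service", name)
```

So the original URL (`.../orcll`) asked the listener for a service named orcll, while the working one (`:orcl`) connects by SID.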