Loading Oracle tables into Hive with Spark/Scala


I am loading several Oracle tables into Hive. This appears to work fine, but two tables fail with an error - IllegalArgumentException: requirement failed: Decimal precision 136 exceeds max precision 38. I checked the source tables in Oracle, and no column is defined with anything close to precision 136.

Here is the Spark/Scala code used in spark-shell:

val df_oracle = spark.read
                     .format("jdbc")
                     .option("url", "jdbc:oracle:thin:@hostname:port:SID")
                     .option("user", userName)
                     .option("password", passWord)
                     .option("driver", "oracle.jdbc.driver.OracleDriver")
                     .option("dbtable", inputTable)
                     .load()

df_oracle.repartition(10).write.format("orc").mode("overwrite").saveAsTable(outputTable)

Here is the full error message -

    java.lang.IllegalArgumentException: requirement failed: Decimal precision 136 exceeds max precision 38
    at scala.Predef$.require(Predef.scala:224)
    at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113)
    at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:434)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$nullSafeConvert(JdbcUtils.scala:438)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:335)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:147)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

17/10/28 07:56:58 ERROR TaskSetManager: Task 0 in stage 36.0 failed 4 times; aborting job
17/10/28 07:56:58 ERROR FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 36.0 failed 4 times, most recent failure: Lost task 0.3 in stage 36.0 (TID 201, alphd1dx009.dlx.idc.ge.com, executor 1): java.lang.IllegalArgumentException: requirement failed: Decimal precision 136 exceeds max precision 38
    at scala.Predef$.require(Predef.scala:224)
    at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113)
    at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:434)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$nullSafeConvert(JdbcUtils.scala:438)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:335)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:147)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1961)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
    at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:484)
    at org.apache.spark.sql.execution.datasources.DataSource.writeAndRead(DataSource.scala:500)
    at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:263)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:404)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:358)
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:41)
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:28)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:28)
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:54)
    at $line34.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:56)
    at $line34.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:58)
    at $line34.$read$$iw$$iw$$iw$$iw.<init>(<console>:60)
    at $line34.$read$$iw$$iw$$iw.<init>(<console>:62)
    at $line34.$read$$iw$$iw.<init>(<console>:64)
    at $line34.$read$$iw.<init>(<console>:66)
    at $line34.$read.<init>(<console>:68)
    at $line34.$read$.<init>(<console>:72)
    at $line34.$read$.<clinit>(<console>)
    at $line34.$eval$.$print$lzycompute(<console>:7)
    at $line34.$eval$.$print(<console>:6)
    at $line34.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
    at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
    at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:825)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
    at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
    at org.apache.spark.repl.Main$.doMain(Main.scala:69)
    at org.apache.spark.repl.Main$.main(Main.scala:52)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:751)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalArgumentException: requirement failed: Decimal precision 136 exceeds max precision 38
    at scala.Predef$.require(Predef.scala:224)
    at org.apache.spark.sql.types.Decimal.set(Decimal.scala:113)
    at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:434)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3$$anonfun$9.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$nullSafeConvert(JdbcUtils.scala:438)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:337)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$3.apply(JdbcUtils.scala:335)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:147)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Please let me know what I am missing.

Thanks.

The Spark version is 2.1.1.2.6.2.0-205.


Which version of Spark are you using? - N3WOS
Could you run desc table_name; in Oracle sqlplus and post the output (for the "problematic" tables)? - MaxU - stand with Ukraine
Spark version 2.1.1.2.6.2.0-205 - user5319411
It looks like a bug in Spark 2.1.0 that has been fixed in Spark 2.3. This link may help: https://issues.apache.org/jira/browse/SPARK-20427 - N3WOS
Thanks N3WOS. Clearly this is resolved in 2.3+. My options now are either to request a Spark upgrade on the cluster, or to work on the columns individually and skip the one causing the problem. I checked the table definition and all the columns in Oracle are plain NUMBER, so the problem must be data with higher precision than Spark 2.1 can handle. Is there a way to find that column / data value so that I can skip it until Spark is upgraded? (A diagnostic sketch follows this comment thread.) - user5319411
Are you trying to load the table without that column? - N3WOS
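Regarding the question in the comments about locating the offending column and rows, one possible approach (a rough sketch, not from the original post) is to push a probe query down to Oracle that counts values whose integer part alone needs more than 38 digits. The 1e38 threshold only catches that specific kind of overflow (not precision lost to fractional digits), and the table and column names below are placeholders:

// Rough diagnostic sketch (placeholder table/column names): for each suspect
// Oracle NUMBER column, count the rows whose absolute value needs more than
// 38 integer digits, i.e. values Spark's DecimalType(38, s) can never hold.
val suspectColumns = Seq("number_col1_name", "number_col2_name")

suspectColumns.foreach { colName =>
  // Subquery alias without AS, which Oracle requires (see the answer comments).
  val probe = s"""
    (select cast(count(*) as number(12)) as oversized_rows
       from table_name
      where abs($colName) >= 1e38) probe_query
  """
  val oversized = spark.read
                       .format("jdbc")
                       .option("url", "jdbc:oracle:thin:@hostname:port:SID")
                       .option("user", userName)
                       .option("password", passWord)
                       .option("driver", "oracle.jdbc.driver.OracleDriver")
                       .option("dbtable", probe)
                       .load()
                       .first()
                       .get(0)
  println(s"$colName: $oversized row(s) exceed 38 digits")
}

Any column reporting a non-zero count is a candidate to cast, decode, or skip until the cluster is upgraded.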
3 Answers


Workaround 1:

Create a view in your Oracle database:

create or replace view schema_name.v_table_name
as
select
  cast(number_col1_name as number(20, 6)) as number_col1_name,   /* problematic column */
  col2,
  col3,
  ...
from table_name;

Use this view instead of the original table name.
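If it is useful, the corresponding Spark read is a quick sketch of the same JDBC call as in the question, with dbtable pointed at the view; schema_name.v_table_name below is just the placeholder view defined in the SQL above:

// Same JDBC read as in the question, but against the fixed-precision view.
// schema_name.v_table_name is the placeholder view created above.
val df_oracle = spark.read
                     .format("jdbc")
                     .option("url", "jdbc:oracle:thin:@hostname:port:SID")
                     .option("user", userName)
                     .option("password", passWord)
                     .option("driver", "oracle.jdbc.driver.OracleDriver")
                     .option("dbtable", "schema_name.v_table_name")
                     .load()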

Workaround 2: do the same on the fly, on the Spark side:

val query = """
(
select
  cast(number_col1_name as number(20, 6)) as number_col1_name,
  col2,
  col3,
  ...
from table_name
) v_table_name
"""

val df_oracle = spark.read
                     .format("jdbc")
                     .option("url", "jdbc:oracle:thin:@hostname:port:SID")
                     .option("user",userName)
                     .option("password",passWord)
                     .option("driver", "oracle.jdbc.driver.OracleDriver")
                     .option("dbtable", query)
                     .load()

Thanks MaxU, I prefer Workaround 2. It needs some development, but I can apply it as a generic approach to other tables hitting similar issues. Thanks! - user5319411
I tested it, and the workaround does not seem to work in my case. Since this is a data issue, a NUMBER column in Oracle contains some huge values, for example 999 followed by a long run of zeros (far more than 38 digits), so I may need a decode on the line you mentioned rather than a cast. - user5319411
Great answer, just note that in Oracle you do not use AS to alias the SELECT. So (SELECT * FROM table_name) v_table_name works. - kfkhalili
@kfkhalili, thanks for the comment! Are you sure Oracle does not understand the AS in (SELECT * FROM table_name) AS v_table_name? I cannot check right now... - MaxU - stand with Ukraine
@MaxU I just used your answer and had to remove the AS used to alias the table. Using AS for the column casts and renames works as expected. - kfkhalili
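Following up on the comment above about huge data values, a minimal sketch of that idea (not part of the original answer) is to guard the problematic column inside the pushed-down query so that oversized values become NULL before Spark reads them; a CASE expression plays the role of the DECODE mentioned in the comment. The names are placeholders, and the 1e14 threshold matches the 14 integer digits that NUMBER(20, 6) can hold:

// Hypothetical variant of Workaround 2 (placeholder names): values whose
// integer part cannot fit in NUMBER(20, 6), i.e. abs(value) >= 1e14, are
// replaced with NULL so the cast never overflows and Spark never sees them.
val query = """
(
select
  case
    when abs(number_col1_name) >= 1e14 then null
    else cast(number_col1_name as number(20, 6))
  end as number_col1_name,
  col2,
  col3
from table_name
) v_table_name
"""

val df_oracle = spark.read
                     .format("jdbc")
                     .option("url", "jdbc:oracle:thin:@hostname:port:SID")
                     .option("user", userName)
                     .option("password", passWord)
                     .option("driver", "oracle.jdbc.driver.OracleDriver")
                     .option("dbtable", query)
                     .load()

Note that the subquery alias is written without AS, matching the Oracle behaviour discussed in the comments above.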
