java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient


I installed Hadoop 2.7.1 and apache-hive-1.2.1 on Ubuntu 14.04.

  1. Why does this error occur?
  2. Do I need to install any metastore?
  3. When we type the hive command in the terminal, how are the XML files invoked internally, and what is the flow of these XML files?
  4. Is any other configuration required?

When I run the hive command in the Ubuntu 14.04 terminal, it throws the following exception.

 $ hive

    Logging initialized using configuration in jar:file:/usr/local/hive/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
    Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:520)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
        ... 8 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
        ... 14 more
    Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
    NestedThrowables:
    java.lang.reflect.InvocationTargetException
        at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:520)
        at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
        at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:624)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        ... 19 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
        at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
        at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
        at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:286)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
        at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
        at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
        ... 48 more
    Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
        ... 66 more
    Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
        at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
        at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
        ... 68 more
To avoid the above error, I created a hive-site.xml file with the following content:
 <configuration>
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/home/local/hive-metastore-dir/warehouse</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost:3306/hivedb?createDatabaseIfNotExist=true</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>user</value>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>password</value>
    </property>
 <configuration>

I also provided the environment variables in the ~/.bashrc file, but the error persists.

#HIVE home directory configuration
export HIVE_HOME=/usr/local/hive/apache-hive-1.2.1-bin
export PATH="$PATH:$HIVE_HOME/bin"

I also answered this programming question at this link: https://dev59.com/k2Eh5IYBdhLWcg3wMA0t#51499009 - Hamid
In your hive-site.xml file, the closing tag </configuration> is missing the slash / - moshfiqur
18 Answers


I have also faced this problem, but I restarted Hadoop and used the command "hadoop dfsadmin -safemode leave".

Now start Hive, and I think it will work fine.
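
For reference, a minimal sketch of that sequence (on recent Hadoop versions, "hdfs dfsadmin" is the non-deprecated spelling of the same command):

hdfs dfsadmin -safemode get    # check whether the NameNode is stuck in safe mode
hdfs dfsadmin -safemode leave  # force it out of safe mode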



1- Add the following lines to your startup file ~/.bashrc:

export HIVE_HOME=~/hive 
export PATH=$PATH:$HIVE_HOME/bin 
export CLASSPATH=$HADOOP_HOME/lib/* 
export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib/*:.

2- Create the file $HIVE_HOME/conf/hive-env.sh from its template:

cd $HIVE_HOME/conf
cp hive-env.sh.template hive-env.sh

3- Edit the hive-env.sh file and add the following line at the end of the file:

export HADOOP_HOME=~/hadoop   # point this at your Hadoop installation

4- Hadoop and Hive ship two different versions of the "guava" library. The fix is to use the same "guava" version in both Hadoop and Hive. Note: my system had guava-27, so that is the example I share here. It depends on your Hadoop's and Hive's guava versions, so check first (a quick way to compare them is sketched after the commands below).

cp ~/hadoop/share/hadoop/hdfs/lib/guava-27.0-jre.jar   ~/hive/lib/
rm ~/hive/lib/guava-19.0.jar
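
A quick way to compare which guava versions the two installations actually ship (paths assume the ~/hadoop and ~/hive layout used in these steps):

ls ~/hadoop/share/hadoop/hdfs/lib/guava-*.jar   # guava bundled with Hadoop
ls ~/hive/lib/guava-*.jar                       # guava bundled with Hive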

5- From hive's conf directory (~/hive/conf), initialize the metastore:

schematool -initSchema -dbType derby

6- We also need to create Hive's configuration file in the conf directory:

cp hive-default.xml.template  hive-site.xml

In the file hive-site.xml, at around line 3215, column 96, there are a few characters that must be deleted. The four characters to remove are: &#8;

7- Edit the file hive-site.xml as follows: replace every occurrence of ${system:java.io.tmpdir} with /tmp/hive_io, and every occurrence of ${system:user.name} with hadoop. Note: ${system:java.io.tmpdir} appears 3-4 times in hive-site.xml; make sure to change all of them.

8- Finally, you can type hive. (Don't forget that Hadoop must be running before you start Hive.) Note: make sure you start hive from the hive path.
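
If you prefer not to hand-edit every occurrence in step 7, a pair of sed one-liners (a sketch, assuming GNU sed's -i flag) can do the replacements; back up hive-site.xml first, since the file is large:

cp hive-site.xml hive-site.xml.bak
sed -i 's|${system:java\.io\.tmpdir}|/tmp/hive_io|g' hive-site.xml
sed -i 's|${system:user\.name}|hadoop|g' hive-site.xml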

I solved this problem by removing --deploy-mode cluster from my spark-submit invocation. By default, spark-submit runs in client mode, which has the following advantages:
1. It opens up a Netty HTTP server and distributes all jars to the worker nodes.
2. The driver program runs on the master node, which means dedicated resources for the driver process.

In cluster mode:
 1. The driver runs on a worker node.
 2. All the jars need to be placed in a common folder of the cluster so that they are accessible to all the worker nodes, or in a folder on each worker node.

Since no node in the cluster could access the hive jars, the hive metastore could not be reached.
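
As an illustration (the master URL, class, and jar names here are placeholders, not from the original post), the same job submitted without the cluster flag falls back to client mode:

# --deploy-mode defaults to client when the flag is omitted
spark-submit \
  --master spark://master-host:7077 \
  --class com.example.MyApp \
  my-app.jar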

In my case, I stopped my Docker Hive container and re-ran it, and it finally worked. Hope it is useful to someone.
Note: this can be caused by an instance still running in the background, so stopping the container stops all background instances.
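
A minimal sketch of that restart, assuming a container named hive (the actual name will differ; docker ps -a lists it):

docker ps -a        # find the Hive container's name or ID
docker stop hive
docker start hive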


Just open the hive terminal from inside the hive folder, after editing the (bashrc) and (hive-site.xml) files. Steps: open the hive folder where hive is installed, then open a terminal from inside that folder.


Hey Abhinav, could you elaborate on which part of the question this answer addresses? Please consider providing more context so that other users can understand why your proposed solution works. - Aiyion.Prime

I fixed this problem by creating hive-site.xml as a copy of hive-default.xml.template. You can create it with the following commands:
cd /usr/local/Cellar/hive/2.7.1/libexec/conf   # replace with your Hive version's path
cp hive-default.xml.template hive-site.xml

and change the values of the following properties in hive-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
   <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hiveuser</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>password</value>
   </property>
   <property>
      <name>datanucleus.fixedDatastore</name>
      <value>false</value>
   </property>
   <property>
      <name>hive.exec.local.scratchdir</name>
      <value>/tmp/hive</value>
      <description>Local scratch space for Hive jobs</description>
   </property>
   <property>
      <name>hive.downloaded.resources.dir</name>
      <value>/tmp/hive</value>
      <description>Temporary local directory for added resources in the remote file system.</description>
   </property>
   <property>
      <name>hive.querylog.location</name>
      <value>/tmp/hive</value>
      <description>Location of Hive run time structured log file</description>
   </property>
  <property>
    <name>hive.druid.metadata.db.type</name>
    <value>mysql</value>
    <description>
      Expects one of the pattern in [mysql, postgresql, derby].
      Type of the metadata database.
    </description>
  </property>
</configuration>

Next, I created a database named metastore in MySQL, and created the user, password, and grants with the following statements:

$ mysql
mysql> CREATE DATABASE metastore;
mysql> USE metastore;
mysql> CREATE USER 'hiveuser'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,ALTER,CREATE ON metastore.* TO 'hiveuser'@'localhost';
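
If the new grants do not seem to take effect for hiveuser right away, flushing the privilege tables is a common (and harmless) extra step:

mysql> FLUSH PRIVILEGES;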

Then I ran the schema script in MySQL with the following command:

mysql> source /usr/local/Cellar/hive/3.1.2_3/libexec/scripts/metastore/upgrade/mysql/hive-schema-3.1.0.mysql.sql

Also, don't forget to move the MySQL connector jar into the Hive package with the commands below. Download the MySQL connector and unpack it:

tar zxvf mysql-connector-java-5.1.35.tar.gz
sudo cp mysql-connector-java-5.1.35/mysql-connector-java-5.1.35-bin.jar /usr/local/Cellar/hive/2.7.1/libexec/lib/

That's it. Now I can successfully run commands like show tables in Hive. :)
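
To sanity-check a setup like this afterwards (assuming schematool and hive are on your PATH), you can ask schematool for the schema version and run a query non-interactively:

schematool -dbType mysql -info
hive -e "SHOW TABLES;"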



I got this error when running a Spark application written in Java. As the official documentation suggests, I solved it by setting the Spark-related dependencies to "provided" scope. I'm not sure of the exact reason, but it works :)

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>2.4.5</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.12</artifactId>
        <version>2.4.5</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.12</artifactId>
        <version>2.4.5</version>
        <scope>provided</scope>
    </dependency>
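
With the scope set to provided, Maven compiles against the Spark jars but leaves them out of the packaged artifact, so the Spark and Hive classes supplied by the cluster at runtime are the ones actually used. A minimal build-and-submit sequence (the artifact name is hypothetical) would be:

mvn clean package
spark-submit --class com.example.MyApp target/my-app-1.0.jar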
