Table created by an Oozie Hive action cannot be found from the Hive client, but can be found in HDFS


I am trying to run a Hive script through an Oozie Hive action. In my script.q I create a Hive table named 'test'. The Oozie job runs successfully, and I can find the table it created under the HDFS path /user/hive/warehouse, but I cannot find the 'test' table with the Hive client command "show tables".

I think there is something wrong with my metastore configuration, but I just cannot figure it out. Can anyone help?

oozie admin -oozie http://localhost:11000/oozie -status

System mode: NORMAL

oozie job -oozie http://localhost:11000/oozie -config C:\Hadoop\oozie-3.2.0-incubating\oozie-win-distro\examples\apps\hive\job.properties -run

Job ID: 0000001-130910094106919-oozie-hado-W

This is the result of the run.

Here is my oozie-site.xml


<?xml version="1.0"?>
<!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<configuration>

<!--
    Refer to the oozie-default.xml file for the complete list of
    Oozie configuration properties and their default values.
-->

<property>
    <name>oozie.service.ActionService.executor.ext.classes</name>
    <value>
        org.apache.oozie.action.email.EmailActionExecutor,
        org.apache.oozie.action.hadoop.HiveActionExecutor,
        org.apache.oozie.action.hadoop.ShellActionExecutor,
        org.apache.oozie.action.hadoop.SqoopActionExecutor
    </value>
</property>

<property>
    <name>oozie.service.SchemaService.wf.ext.schemas</name>
    <value>shell-action-0.1.xsd,email-action-0.1.xsd,hive-action-0.2.xsd,sqoop-action-0.2.xsd,ssh-action-0.1.xsd</value>
</property>

<property>
    <name>oozie.system.id</name>
    <value>oozie-${user.name}</value>
    <description>
        The Oozie system ID.
    </description>
</property>

<property>
    <name>oozie.systemmode</name>
    <value>NORMAL</value>
    <description>
        System mode for  Oozie at startup.
    </description>
</property>

<property>
    <name>oozie.service.AuthorizationService.security.enabled</name>
    <value>false</value>
    <description>
        Specifies whether security (user name/admin role) is enabled or not.
        If disabled any user can manage Oozie system and manage any job.
    </description>
</property>

<property>
    <name>oozie.service.PurgeService.older.than</name>
    <value>30</value>
    <description>
        Jobs older than this value, in days, will be purged by the PurgeService.
    </description>
</property>

<property>
    <name>oozie.service.PurgeService.purge.interval</name>
    <value>3600</value>
    <description>
        Interval at which the purge service will run, in seconds.
    </description>
</property>

<property>
    <name>oozie.service.CallableQueueService.queue.size</name>
    <value>10000</value>
    <description>Max callable queue size</description>
</property>

<property>
    <name>oozie.service.CallableQueueService.threads</name>
    <value>10</value>
    <description>Number of threads used for executing callables</description>
</property>

<property>
    <name>oozie.service.CallableQueueService.callable.concurrency</name>
    <value>3</value>
    <description>
        Maximum concurrency for a given callable type.
        Each command is a callable type (submit, start, run, signal, job, jobs, suspend,resume, etc).
        Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc).
        All commands that use action executors (action-start, action-end, action-kill and action-check) use
        the action type as the callable type.
    </description>
</property>

<property>
    <name>oozie.service.coord.normal.default.timeout</name>
    <value>120</value>
    <description>Default timeout for a coordinator action input check (in minutes) for normal job.
        -1 means infinite timeout</description>
</property>

<property>
    <name>oozie.db.schema.name</name>
    <value>oozie</value>
    <description>
        Oozie DataBase Name
    </description>
</property>

<property>
    <name>oozie.service.JPAService.create.db.schema</name>
    <value>true</value>
    <description>
        Creates Oozie DB.

        If set to true, it creates the DB schema if it does not exist; if the DB schema already exists, it is a NOP.
        If set to false, it does not create the DB schema; if the DB schema does not exist, start up fails.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
    <description>
        JDBC driver class.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true</value>
    <description>
        JDBC URL.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>sa</value>
    <description>
        DB user name.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>pwd</value>
    <description>
        DB user password.

        IMPORTANT: if the password is empty, leave a 1-space string; the service trims the value,
                   and if it is empty, Configuration assumes it is NULL.
    </description>
</property>

<property>
    <name>oozie.service.JPAService.pool.max.active.conn</name>
    <value>10</value>
    <description>
         Max number of connections.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.kerberos.enabled</name>
    <value>false</value>
    <description>
        Indicates if Oozie is configured to use Kerberos.
    </description>
</property>

<property>
    <name>local.realm</name>
    <value>LOCALHOST</value>
    <description>
        Kerberos Realm used by Oozie and Hadoop. Using 'local.realm' to be aligned with Hadoop configuration
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.keytab.file</name>
    <value>${user.home}/oozie.keytab</value>
    <description>
        Location of the Oozie user keytab file.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.kerberos.principal</name>
    <value>${user.name}/localhost@${local.realm}</value>
    <description>
        Kerberos principal for Oozie service.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>
    <value> </value>
    <description>
        Whitelisted job tracker for Oozie service.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>
    <value> </value>
    <description>
        Whitelisted NameNode for Oozie service.
    </description>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
    <value>*=hadoop-conf</value>
    <description>
        Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
        the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is
        used when there is no exact match for an authority. The HADOOP_CONF_DIR contains
        the relevant Hadoop *-site.xml files. If the path is relative, it is resolved within
        the Oozie configuration directory; the path can also be absolute (i.e. pointing
        to Hadoop client conf/ directories in the local filesystem).
    </description>
</property>

<property>
    <name>oozie.service.WorkflowAppService.system.libpath</name>
    <value>/user/${user.name}/share/lib</value>
    <description>
        System library path to use for workflow applications.
        This path is added to workflow application if their job properties sets
        the property 'oozie.use.system.libpath' to true.
    </description>
</property>

<property>
    <name>use.system.libpath.for.mapreduce.and.pig.jobs</name>
    <value>false</value>
    <description>
        If set to true, submissions of MapReduce and Pig jobs will include
        automatically the system library path, thus not requiring users to
        specify where the Pig JAR files are. Instead, the ones from the system
        library path are used.
    </description>
</property>

<property>
    <name>oozie.authentication.type</name>
    <value>simple</value>
    <description>
        Defines authentication used for Oozie HTTP endpoint.
        Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
    </description>
</property>

<property>
    <name>oozie.authentication.token.validity</name>
    <value>36000</value>
    <description>
        Indicates how long (in seconds) an authentication token is valid before it has
        to be renewed.
    </description>
</property>

<property>
    <name>oozie.authentication.signature.secret</name>
    <value>oozie</value>
    <description>
        The signature secret for signing the authentication tokens.
        If not set a random secret is generated at startup time.
        In order for authentication to work correctly across multiple hosts,
        the secret must be the same across all the hosts.
    </description>
</property>

<property>
  <name>oozie.authentication.cookie.domain</name>
  <value></value>
  <description>
    The domain to use for the HTTP cookie that stores the authentication token.
    In order for authentication to work correctly across multiple hosts,
    the domain must be correctly set.
  </description>
</property>

<property>
    <name>oozie.authentication.simple.anonymous.allowed</name>
    <value>true</value>
    <description>
        Indicates if anonymous requests are allowed.
        This setting is meaningful only when using 'simple' authentication.
    </description>
</property>

<property>
    <name>oozie.authentication.kerberos.principal</name>
    <value>HTTP/localhost@${local.realm}</value>
    <description>
        Indicates the Kerberos principal to be used for HTTP endpoint.
        The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification.
    </description>
</property>

<property>
    <name>oozie.authentication.kerberos.keytab</name>
    <value>${oozie.service.HadoopAccessorService.keytab.file}</value>
    <description>
        Location of the keytab file with the credentials for the principal.
        Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop.
    </description>
</property>

<property>
    <name>oozie.authentication.kerberos.name.rules</name>
    <value>DEFAULT</value>
    <description>
        The kerberos name rules are used to resolve kerberos principal names; refer to Hadoop's
        KerberosName for more details.
    </description>
</property>

<!-- Proxyuser Configuration -->

<!--

<property>
    <name>oozie.service.ProxyUserService.proxyuser.#USER#.hosts</name>
    <value>*</value>
    <description>
        List of hosts the '#USER#' user is allowed to perform 'doAs'
        operations.

        The '#USER#' must be replaced with the username of the user who is
        allowed to perform 'doAs' operations.

        The value can be the '*' wildcard or a list of hostnames.

        For multiple users copy this property and replace the user name
        in the property name.
    </description>
</property>

<property>
    <name>oozie.service.ProxyUserService.proxyuser.#USER#.groups</name>
    <value>*</value>
    <description>
        List of groups the '#USER#' user is allowed to impersonate users
        from to perform 'doAs' operations.

        The '#USER#' must be replaced with the username of the user who is
        allowed to perform 'doAs' operations.

        The value can be the '*' wildcard or a list of groups.

        For multiple users copy this property and replace the user name
        in the property name.
    </description>
</property>

-->

</configuration>

Here is my hive-site.xml


[hive-site.xml]

Here is my script.q


create table test(id int);

1 Answer


In your Oozie Hive action, you need to tell Oozie where your Hive metastore is.

This means you need to pass your hive-site.xml to the action as a parameter.
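For illustration, a minimal sketch of what such a workflow action could look like, using the hive-action 0.2 schema your oozie-site.xml already registers. The workflow/action names, the ${jobTracker}/${nameNode} properties and the assumption that hive-site.xml sits next to workflow.xml in the workflow application directory on HDFS are placeholders, not your actual setup:

<workflow-app name="hive-wf" xmlns="uri:oozie:workflow:0.2">
    <start to="hive-node"/>
    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <!-- Ship the Hive client configuration (metastore location, warehouse dir, ...)
                 with the action so it talks to the same metastore as your Hive CLI. -->
            <job-xml>hive-site.xml</job-xml>
            <script>script.q</script>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>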

In addition, you need to configure an external metastore for Hive for this to work. The default embedded Derby configuration will not do what you want, because each process that uses it (the Oozie launcher running on the cluster, your local Hive CLI) creates its own private Derby metastore database, so they never see each other's tables.
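As a rough example, the metastore-related part of such a hive-site.xml could look like the following. The host name, database name and credentials are made-up placeholders, and the MySQL JDBC driver jar must also be available to Hive and to the Oozie action:

<configuration>
    <!-- Point the metastore at an external MySQL database instead of the
         embedded Derby one; all connection values here are placeholders. -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://mysql-host:3306/hive_metastore?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hiveuser</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hivepassword</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
</configuration>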

So, in short:

1. Set up Hive with an external metastore database (for example MySQL).
2. Pass that hive-site.xml to the Oozie Hive action.

See here for details:

http://oozie.apache.org/docs/3.3.1/DG_HiveActionExtension.html

Thanks

