NameNode and DataNode are not listed when using the jps command

5

Environment: Ubuntu 14.04, Hadoop 2.6

After running start-all.sh and then jps, DataNode is not listed in the terminal output:

>jps
9529 ResourceManager
9652 NodeManager
9060 NameNode
10108 Jps
9384 SecondaryNameNode

Following this answer: Datanode process not running in Hadoop, I tried the top solution:
  • bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
  • rm -Rf /app/tmp/hadoop-your-username/*
  • bin/hadoop namenode -format (or hdfs in the 2.x series)
However, now I get this:
>jps
20369 ResourceManager
26032 Jps
20204 SecondaryNameNode
20710 NodeManager

As you can see, now even the NameNode is missing. Please help me.

DataNode log: https://gist.github.com/fifiteen82726/b561bbd9cdcb9bf36032

NameNode log: https://gist.github.com/fifiteen82726/02dcf095b5a23c1570b0

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

UPDATE

coda@ubuntu:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/30 01:07:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
Starting secondary namenodes [0.0.0.0]
coda@0.0.0.0's password: 
0.0.0.0: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
0.0.0.0: secondarynamenode running as process 20204. Stop it first.
15/04/30 01:07:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
resourcemanager running as process 20369. Stop it first.
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: nodemanager running as process 20710. Stop it first.
coda@ubuntu:/usr/local/hadoop/sbin$ jps
20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager

UPDATE

hadoop@ubuntu:/usr/local/hadoop/sbin$ $HADOOP_HOME ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/05/03 09:32:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
hadoop@localhost's password: 
localhost: datanode running as process 28584. Stop it first.
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
15/05/03 09:32:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-ubuntu.out
hadoop@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-ubuntu.out
hadoop@ubuntu:/usr/local/hadoop/sbin$ jps
6842 Jps
28584 DataNode

Actually, you shouldn't format your namenode more than once; your cluster is now unstable because of that. - Karthik
Sorry for the silly question, but how do I find the NameNode and DataNode logs? - rj487
You can find the Hadoop logs in the $HADOOP_HOME/logs folder. - Rajesh N
Could you post your mapred-site.xml file? - Yosser Abdellatif Goupil
Posted it ^^, thanks. - rj487
6 Answers

6

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"

This error may be caused by wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.
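A quick way to confirm this (using the path from the log line above) is to check who owns the directory and what its mode is:

ls -ld /usr/local/hadoop_store/hdfs/datanode

The user that runs the DataNode needs read, write, and execute access to this directory.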

FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.

This error may be caused by wrong permissions on the /usr/local/hadoop_store/hdfs/namenode folder, or by the folder not existing. To fix this, follow one of the options below:

Option I:

If you don't have the /usr/local/hadoop_store/hdfs folder, create it and give it permissions as follows:

sudo mkdir /usr/local/hadoop_store/hdfs
sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Change hadoopuser and hadoopgroup to your Hadoop username and Hadoop group name respectively. Now, try to start the Hadoop processes. If the problem persists, try Option II.

Option II:

Remove the contents of the /usr/local/hadoop_store/hdfs folder:

sudo rm -r /usr/local/hadoop_store/hdfs/*

Change the folder permissions:

sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Now, start the Hadoop processes. It should work.
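For Hadoop 2.x, a minimal start-and-verify sequence looks like this (assuming $HADOOP_HOME is /usr/local/hadoop, as in the question):

$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
jps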

NOTE: Post the new logs if the error persists.
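For reference, the /usr/local/hadoop_store/hdfs paths in the errors above come from hdfs-site.xml, not from mapred-site.xml. A typical single-node configuration looks like the following (shown as an assumption for illustration; the asker's actual hdfs-site.xml was not posted):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>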

UPDATE:

If you haven't created the hadoop user and group yet, do so as follows:

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop

Now, change the ownership of /usr/local/hadoop and /usr/local/hadoop_store:
sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop_store

Switch to the hadoop user:

su - hadoop

Enter your hadoop user's password. Your terminal prompt should now look like this:

hadoop@ubuntu:$

Now, type the following command:

$HADOOP_HOME/bin/start-all.sh

or

sh /usr/local/hadoop/bin/start-all.sh
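If everything comes up, jps on a single-node setup should list all five daemons plus Jps itself, roughly like this (PIDs will differ):

jps
4487 NameNode
4621 DataNode
4820 SecondaryNameNode
4978 ResourceManager
5101 NodeManager
5204 Jps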


What is the output of ls -l /usr/local? - Rajesh N
Did you perform these steps while installing Hadoop: sudo addgroup hadoopgroupname and sudo adduser --ingroup hadoopgroupname hadoopusername? The hadoopgroupname and hadoopusername supplied during installation become your Hadoop group name and username respectively. - Rajesh N
whoami shows my username: coda. I didn't run sudo adduser --ingroup hadoopgroupname hadoopusername when installing Hadoop. Is that why it failed? - rj487
Updated the answer. Please check. - Rajesh N
Let us continue this discussion in chat. - rj487

5
I faced a similar problem: jps did not show the datanode.
Removing the contents of the hdfs folder and changing the folder permissions worked for me:
sudo rm -r /usr/local/hadoop_store/hdfs/*
sudo chmod -R 755 /usr/local/hadoop_store/hdfs    
hadoop namenode -format
start-all.sh
jps

0

The solution is to first stop your namenode.

Go to /usr/local/hadoop and run:

bin/hdfs namenode -format

Then delete the hdfs and tmp directories under your home directory.
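The delete commands themselves are not shown in the answer; presumably something like:

rm -rf ~/hdfs ~/tmp

Then recreate them and set their permissions: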

mkdir ~/tmp
mkdir ~/hdfs
chmod 750 ~/hdfs

Go into the Hadoop directory and start Hadoop:

sbin/start-dfs.sh

It will show the datanode.


0
For this, you need to give your hdfs folder the right permissions. Then run the following commands:
  1. Create a group with the command: sudo addgroup hadoop
  2. Add your user to it: sudo usermod -a -G hadoop "ur_user" (you can check the current user with the whoami command)
  3. Now change the owner of the hadoop_store with: sudo chown -R "ur_user":"ur_group" /usr/local/hadoop_store
  4. Then reformat the namenode with: hdfs namenode -format

Then start all the services and you can see the result... Now type jps (it will work).
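As one runnable sequence, the four steps above might look like this (a sketch; "$(whoami)" stands in for ur_user, and hadoop for ur_group, following the accepted answer's naming):

sudo addgroup hadoop
sudo usermod -a -G hadoop "$(whoami)"
sudo chown -R "$(whoami)":hadoop /usr/local/hadoop_store
hdfs namenode -format

Note that new group membership takes effect only after logging out and back in (or running newgrp hadoop).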


0
Faced the same problem: the NameNode service was not shown by the jps command.

Solution: This is caused by a permission problem with the directory /usr/local/hadoop_store/hdfs. Just change the permissions, format the NameNode, and restart Hadoop:

$ sudo chmod -R 755 /usr/local/hadoop_store/hdfs
$ hadoop namenode -format
$ start-all.sh
$ jps


0
One thing to remember when setting up the permissions:

ssh-keygen -t rsa -P ""

The above command should be entered on the namenode only. Then add the generated public key to all the datanodes:

ssh-copy-id -i ~/.ssh/id_rsa.pub

After that, the ssh permissions will be set... No password will be needed when starting dfs afterwards...
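Concretely, for the single-node setup in this question, that would look something like the following (hadoop@localhost is an assumed user and host; adjust to your own):

ssh-keygen -t rsa -P ""
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost
ssh localhost   # should now log in without prompting for a password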
