Hadoop Installation and Configuration


System: Ubuntu 10.10
User: xcloud
Install path: /opt
Apache download mirror: http://mirror.bjtu.edu.cn/apache//
I. JDK
1. Extract and install
xcloud@xcloud:/opt$ sudo chmod +x jdk-6u30-linux-i586.bin
xcloud@xcloud:/opt$ sudo ./jdk-6u30-linux-i586.bin
xcloud@xcloud:/opt$ sudo ln -s jdk1.6.0_30 jdk6

xcloud@xcloud:/opt$ sudo chown -R xcloud jdk*
xcloud@xcloud:/opt$ sudo chmod -R 775 jdk*

2. Set environment variables
xcloud@xcloud:/opt$ sudo gedit /etc/environment

JAVA_HOME=/opt/jdk6
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/opt/jdk6/bin

xcloud@xcloud:/opt$ source /etc/environment
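Note: /etc/environment is read by PAM at login, so new sessions pick up the variables automatically; sourcing it as above only applies them to the current shell.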

3. Test
xcloud@xcloud:/opt$ java -version
java version "1.6.0_30"
Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
Java HotSpot(TM) Server VM (build 20.5-b03, mixed mode)

II. Eclipse
1. Install
xcloud@xcloud:/opt$ sudo tar zxvf eclipse-jee-indigo-SR1-linux-gtk.tar.gz
xcloud@xcloud:/opt$ sudo chown -R xcloud eclipse
xcloud@xcloud:/opt$ sudo chmod -R 775 eclipse
xcloud@xcloud:/opt$ cd eclipse/
xcloud@xcloud:/opt/eclipse$ ./eclipse &

III. Tomcat
1. Install
xcloud@xcloud:/opt$ sudo unzip apache-tomcat-6.0.35.zip
xcloud@xcloud:/opt$ sudo mv apache-tomcat-6.0.35 tomcat6

xcloud@xcloud:/opt$ sudo chown -R xcloud tomcat6
xcloud@xcloud:/opt$ sudo chmod -R 775 tomcat6
xcloud@xcloud:/opt$ sudo gedit /etc/environment

CATALINA_HOME=/opt/tomcat6
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/opt/jdk6/bin:/opt/tomcat6/bin

xcloud@xcloud:/opt$ source /etc/environment
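As a quick optional check that Tomcat works (this assumes the PATH change above is already in effect in the current shell), start it and fetch the default page:

xcloud@xcloud:/opt$ startup.sh
xcloud@xcloud:/opt$ wget -qO- http://localhost:8080 | head
xcloud@xcloud:/opt$ shutdown.sh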

IV. Hadoop
1. Extract
xcloud@xcloud:/opt/cloudera$ sudo tar zxvf hadoop-0.20.2-cdh3u2.tar.gz
xcloud@xcloud:/opt/cloudera$ sudo mv hadoop-0.20.2-cdh3u2 ..

xcloud@xcloud:/opt$ sudo mv hadoop-0.20.2-cdh3u2 hadoop

2. Edit the configuration files
xcloud@xcloud:/opt$ cd hadoop/conf
xcloud@xcloud:/opt/hadoop/conf$ sudo gedit hadoop-env.sh

Add the following line:
export JAVA_HOME=/opt/jdk6

xcloud@xcloud:/opt$ sudo chown -R xcloud hadoop
xcloud@xcloud:/opt$ sudo chmod -R 775 hadoop

3. Install SSH and set up passwordless SSH login
xcloud@xcloud:~$ sudo apt-get install openssh-server
xcloud@xcloud:~$ sudo apt-get install openssh-client
xcloud@xcloud:~$ ssh-keygen -t rsa
xcloud@xcloud:~$ cd /home/xcloud/.ssh/
xcloud@xcloud:~/.ssh$ cp id_rsa.pub authorized_keys
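To confirm that passwordless login works (the first connection may still ask you to accept the host key):

xcloud@xcloud:~/.ssh$ ssh localhost
xcloud@xcloud:~$ exit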

4. Hadoop configuration
Host configuration
gedit /etc/hosts
127.0.0.1    localhost
127.0.1.1    xcloud
10.45.46.106  xcloud
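Note: with xcloud mapped to both 127.0.1.1 and the LAN address, clients may resolve xcloud to 127.0.1.1 (this is exactly the address that appears in Problem 1 below), so connections from other machines fail; commenting out the 127.0.1.1 line is a common workaround on Ubuntu.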

gedit /etc/hostname
xcloud
conf/hadoop-env.sh:
export JAVA_HOME=/opt/jdk6
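Note that each XML snippet below belongs inside the file's top-level <configuration> element; a minimal file skeleton looks like:

<?xml version="1.0"?>
<configuration>
  <!-- the <property> elements shown below go here -->
</configuration>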
conf/core-site.xml:
<property> 
  <name>hadoop.tmp.dir</name> 
  <value>/opt/hadoop/dataRoot</value> 
  <description>A base for other temporary directories.</description> 
</property> 
 
<property> 
  <name>fs.default.name</name> 
  <value>hdfs://xcloud:9000/</value> 
  <description>The name of the default file system.  A URI whose 
  scheme and authority determine the FileSystem implementation.  The 
  uri's scheme determines the config property (fs.SCHEME.impl) naming 
  the FileSystem implementation class.  The uri's authority is used to 
  determine the host, port, etc. for a filesystem.</description> 
</property>
conf/hdfs-site.xml:
<property> 
  <name>dfs.name.dir</name> 
  <value>/opt/hadoop/dataRoot/name</value> 
</property> 
 
<property> 
  <name>dfs.data.dir</name> 
  <value>/opt/hadoop/dataRoot/data</value> 
</property> 
 
<property> 
  <name>dfs.permissions</name> 
  <value>false</value> 
</property> 
 
<property> 
  <name>dfs.replication</name> 
  <value>1</value> 
</property>
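(Here dfs.permissions=false disables HDFS permission checking, which is convenient on a single-user development box but not something to carry into production, and dfs.replication is 1 because there is only one datanode.)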
conf/mapred-site.xml:
<property> 
  <name>mapred.job.tracker</name> 
  <value>xcloud:9001</value> 
  <description>The host and port that the MapReduce job tracker runs 
  at.  If "local", then jobs are run in-process as a single map 
  and reduce task. 
  </description> 
</property> 
conf/masters:
xcloud
conf/slaves:
xcloud
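With xcloud listed as both master and slave, this is a single-node, pseudo-distributed setup: the namenode, jobtracker, datanode and tasktracker all run on the same host.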

/etc/environment:
HADOOP_HOME=/opt/hadoop
HBASE_HOME=/opt/hbase
ZOOKEEPER_HOME=/opt/zookeeper
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/opt/jdk6/bin:/opt/tomcat6/bin:/opt/hadoop/bin:/opt/hbase/bin:/opt/zookeeper/bin

5. Start and test
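Before the very first start, format the HDFS namenode; skipping this step produces the "NameNode is not formatted" error shown in Problem 0-1 below:

xcloud@xcloud:/opt/hadoop/bin$ hadoop namenode -format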
xcloud@xcloud:/opt/hadoop/bin$ start-all.sh
xcloud@xcloud:/opt/hadoop/bin$ netstat -an | grep 9000
xcloud@xcloud:/opt/hadoop/bin$ netstat -an | grep 9001
xcloud@xcloud:/opt/hadoop/bin$ jps
xcloud@xcloud:/opt/hadoop/bin$ hadoop dfs -ls
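On a healthy single-node setup, jps should report NameNode, SecondaryNameNode, DataNode, JobTracker and TaskTracker. As a further smoke test (a sketch; the exact examples jar name varies between releases), run one of the bundled MapReduce examples:

xcloud@xcloud:/opt/hadoop/bin$ hadoop jar /opt/hadoop/hadoop-examples-*.jar pi 2 100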

6. Eclipse environment
xcloud@xcloud:/opt/hadoop/contrib/eclipse-plugin$ cp * /opt/eclipse/plugins/
xcloud@xcloud:/opt/hadoop/contrib/eclipse-plugin$ cd /opt/eclipse/plugins/
xcloud@xcloud:/opt/eclipse/plugins$ mv hadoop-0.20.2-eclipse-plugin.jar hadoop.jar
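If Eclipse was already running, restart it so the new plugin is picked up; launching with the -clean flag forces the plugin registry to be rebuilt:

xcloud@xcloud:/opt/eclipse$ ./eclipse -clean &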

Start Eclipse and configure it.
After Eclipse opens, the Project Explorer should show a DFS Locations entry.
Window > Preferences > Hadoop Map/Reduce: set the installation directory to /opt/hadoop

Window > Show View > Other > MapReduce Tools: select Map/Reduce Locations
In the console area that opens, click the blue elephant icon on the right > New Hadoop location...
Location name: hadoop
Under Map/Reduce Master:
Host: xcloud
Port: 9001

Under DFS Master:
Port: 9000
User name: xcloud
7. Troubleshooting
Problem 1:
Cannot connect to the Map/Reduce location: hadoop
Call to xcloud/127.0.1.1:9001 failed on local exception: java.io.EOFException

Solution:
Download hadoop-0.20.3-dev-eclipse-plugin.jar and use it to replace the original hadoop-0.20.2 plugin jar:
http://code.google.com/p/hadoop-eclipse-plugin/downloads/detail?name=hadoop-0.20.3-dev-eclipse-plugin.jar&can=2&q=

Reference: http://heipark.iteye.com/blog/786302
Problem 2: the namenode fails to start
Analyze the log file: tail -f /opt/hadoop/logs/hadoop-xcloud-namenode-xcloud.log

Relevant log entry: 2011-12-22 09:23:45,885 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call addBlock(/opt/hadoop20/dataRoot/tmp/mapred/system/jobtracker.info, DFSClient_2118960177) from 127.0.0.1:54034: error: java.io.IOException: File /opt/hadoop20/dataRoot/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1

java.io.IOException: File /opt/hadoop20/dataRoot/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1

Solution:
1. Go through the configuration files again carefully.
2. Delete the old data directories.
3. Re-format the namenode (hadoop namenode -format).

Reference: http://www.hadoopor.com/thread-361-1-1.html
Problem 3:
xcloud@xcloud:/opt/hadoop/bin$ hadoop dfs -ls
Bad connection to FS. command aborted. exception: Call to xcloud/127.0.0.1:9000 failed on local exception: java.io.EOFException

Log message:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /opt/hadoop20/dataRoot/tmp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe mode will be turned off automatically in 8 seconds.

1. Check whether a firewall is blocking the ports (e.g. sudo ufw status).
2. Take the namenode out of safe mode manually: hadoop dfsadmin -safemode leave
3. It may also be that the cluster has only just started and the datanodes are still reporting in to the namenode, in which case waiting a few minutes is enough.
Problem 0:
2011-12-22 12:41:33,714 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Missing directory /opt/hadoop/dataRoot/name

Solution:
xcloud@xcloud:/opt/hadoop/dataRoot$ mkdir name

Problem 0-1:
2011-12-22 12:44:41,282 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
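Solution: format the namenode before the first start, as described in section 5:

xcloud@xcloud:/opt/hadoop/bin$ hadoop namenode -format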
