hadoop@master:/home/duanwf/Installpackage$ sudo tar zxvf hadoop-2.4.1.tar.gz -C /opt/
hadoop@master:~$ sudo vi /etc/profile
export HADOOP_DEV_HOME=/home/hadoop/hadoop-2.4.1/
export HADOOP_MAPARED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export PATH=$HADOOP_DEV_HOME/bin:$HADOOP_DEV_HOME/sbin:$PATH
hadoop@master:~$ source /etc/profile
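As a quick sanity check that the exports took effect after `source /etc/profile`, you can echo the key variables (a minimal sketch; the exports are repeated here only so the snippet is self-contained, and the values mirror the profile above):

```shell
# These exports mirror /etc/profile above, so the check runs standalone.
export HADOOP_DEV_HOME=/home/hadoop/hadoop-2.4.1/
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop

# An empty value here means the profile edit did not take effect
# in the current shell.
echo "HADOOP_DEV_HOME=$HADOOP_DEV_HOME"
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
```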
To verify that the Hadoop environment variables took effect, run the following command in a terminal:
hadoop@master:~$ hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
Before configuring, create the following directories in the local file system on master:
~/dfs/name
~/dfs/data
~/temp
hadoop@master:~$ mkdir ~/dfs
hadoop@master:~$ mkdir ~/temp
hadoop@master:~$ mkdir ~/dfs/name
hadoop@master:~$ mkdir ~/dfs/data
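The four `mkdir` calls above can be collapsed into one command with `mkdir -p`, which creates missing parent directories and does not fail if a directory already exists, so it is safe to rerun:

```shell
# Equivalent to the four mkdir commands above: -p creates missing
# parents (~/dfs) automatically and tolerates existing directories.
mkdir -p ~/dfs/name ~/dfs/data ~/temp
```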
Seven configuration files are involved here:
~/hadoop-2.4.1/etc/hadoop/hadoop-env.sh
~/hadoop-2.4.1/etc/hadoop/yarn-env.sh
~/hadoop-2.4.1/etc/hadoop/slaves
~/hadoop-2.4.1/etc/hadoop/core-site.xml
~/hadoop-2.4.1/etc/hadoop/hdfs-site.xml
~/hadoop-2.4.1/etc/hadoop/mapred-site.xml
~/hadoop-2.4.1/etc/hadoop/yarn-site.xml
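As an illustration of what one of these files looks like, here is a minimal core-site.xml sketch written via a heredoc. The hostname `master`, port 9000, and the `/home/hadoop/temp` path are assumptions that must match your own cluster; they are not prescribed by Hadoop itself:

```shell
# Write a minimal core-site.xml (illustrative values only; adjust to
# your cluster). fs.defaultFS must point at the NameNode host, and
# hadoop.tmp.dir here reuses the ~/temp directory created earlier.
mkdir -p "$HOME/hadoop-2.4.1/etc/hadoop"   # ensure the target dir exists for this sketch
cat > "$HOME/hadoop-2.4.1/etc/hadoop/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/temp</value>
  </property>
</configuration>
EOF
```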
hadoop@slave1:~$ scp -r hadoop@master:/home/hadoop/hadoop-2.4.1/ /home/hadoop/
Log in to slave2:
hadoop@slave2:~$ scp -r hadoop@master:/home/hadoop/hadoop-2.4.1/ /home/hadoop/
hadoop@master:~$ sudo chown -R hadoop:hadoop hadoop-2.4.1/
The same command must also be run on slave1 and slave2.
hadoop@master:~/hadoop-2.4.1$ jps
31711 SecondaryNameNode
31464 NameNode
31857 Jps
hadoop@slave1:~$ jps
5529 DataNode
5610 Jps
hadoop@slave2:~$ jps
8119 Jps
8035 DataNode
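The `jps` check above can be automated with a small helper that greps the `jps` output for the daemons expected on each role. This is a sketch: the function name is made up here, and the daemon names match the outputs shown above (NameNode and SecondaryNameNode on master, DataNode on the slaves):

```shell
# Return 0 if every expected daemon name appears in the given jps
# output, otherwise report the first missing daemon and return 1.
# Usage: daemons_present "<jps output>" Daemon1 [Daemon2 ...]
daemons_present() {
  local out="$1"; shift
  local d
  for d in "$@"; do
    # -w matches whole words, so "NameNode" does not
    # accidentally match "SecondaryNameNode".
    echo "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons running"
}

# Demonstration with the master's output captured above; on a live
# node you would pass "$(jps)" instead of this sample string.
sample="31711 SecondaryNameNode
31464 NameNode
31857 Jps"
daemons_present "$sample" NameNode SecondaryNameNode   # prints: all daemons running
# On a slave: daemons_present "$(jps)" DataNode
```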