Hadoop 2 NameNode Federation Experiment
Published: 2019-06-19



The Hadoop version used in this experiment is 2.5.2. The hardware environment is five virtual machines, all running CentOS 6.6. The VMs' IP addresses and hostnames are:
192.168.63.171    node1.zhch
192.168.63.172    node2.zhch
192.168.63.173    node3.zhch
192.168.63.174    node4.zhch
192.168.63.175    node5.zhch
Passwordless SSH, firewall, and JDK setup are not repeated here. Role assignment: node1 and node2 are NameNode nodes; node3, node4, and node5 are DataNode nodes.
The steps are basically the same as a regular Hadoop installation; the main difference is the hdfs-site.xml configuration file, and the rest of the configuration is essentially identical.
1. Configure Hadoop
## Unpack
[yyl@node1 program]$ tar -zxf hadoop-2.5.2.tar.gz
## Create directories
[yyl@node1 program]$ mkdir hadoop-2.5.2/name
[yyl@node1 program]$ mkdir hadoop-2.5.2/data
[yyl@node1 program]$ mkdir hadoop-2.5.2/tmp
## Configure hadoop-env.sh
[yyl@node1 program]$ cd hadoop-2.5.2/etc/hadoop/
[yyl@node1 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
## Configure yarn-env.sh
[yyl@node1 hadoop]$ vim yarn-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
## Configure slaves
[yyl@node1 hadoop]$ vim slaves
node3.zhch
node4.zhch
node5.zhch
## Configure core-site.xml
[yyl@node1 hadoop]$ vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1.zhch:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/yyl/program/hadoop-2.5.2/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
</configuration>
## Configure hdfs-site.xml
[yyl@node1 hadoop]$ vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/yyl/program/hadoop-2.5.2/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/yyl/program/hadoop-2.5.2/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>node1.zhch:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>node1.zhch:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>node2.zhch:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>node2.zhch:50070</value>
  </property>
</configuration>
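In a federated setup each nameservice is an independent namespace: the DataNodes serve both, but a client addresses one NameNode at a time (fs.defaultFS above sends unqualified paths to ns1 on node1.zhch). If you want a single client-side view over both namespaces, Hadoop's ViewFS mount table can be configured in core-site.xml. A hypothetical sketch, not part of this experiment; the mount-table name and the /user and /data mount points are illustrative:

```xml
<!-- Hypothetical ViewFS mount table: presents both federated namespaces
     as one client-side filesystem view. Names below are illustrative. -->
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://ClusterX</value>
</property>
<property>
  <name>fs.viewfs.mounttable.ClusterX.link./user</name>
  <value>hdfs://node1.zhch:9000/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.ClusterX.link./data</name>
  <value>hdfs://node2.zhch:9000/data</value>
</property>
```

With this in place, a client path like /user/... resolves to ns1 and /data/... to ns2, without the client needing to know which NameNode owns which directory.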
## Configure mapred-site.xml
[yyl@node1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[yyl@node1 hadoop]$ vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1.zhch:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node1.zhch:19888</value>
  </property>
</configuration>
## Configure yarn-site.xml
[yyl@node1 hadoop]$ vim yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>node1.zhch:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>node1.zhch:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>node1.zhch:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>node1.zhch:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>node1.zhch:8088</value>
  </property>
</configuration>
## Copy to every node
[yyl@node1 hadoop]$ cd /home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node2.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node3.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node4.zhch:/home/yyl/program/
[yyl@node1 program]$ scp -rp hadoop-2.5.2 yyl@node5.zhch:/home/yyl/program/
## Set the Hadoop environment variables on every node
[yyl@node1 ~]$ vim .bash_profile
export HADOOP_PREFIX=/home/yyl/program/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin

2. Format and Start the NameNodes

## Format namenode1
[yyl@node1 ~]$ hdfs namenode -format -clusterId c1
## Format namenode2 with the same clusterId, so both join one federated cluster
[yyl@node2 ~]$ hdfs namenode -format -clusterId c1
## Start the namenode on node1
[yyl@node1 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node1.zhch.out
[yyl@node1 ~]$ jps
1177 NameNode
1240 Jps
## Start the namenode on node2
[yyl@node2 ~]$ hadoop-daemon.sh start namenode
starting namenode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-namenode-node2.zhch.out
[yyl@node2 ~]$ jps
1508 Jps
1445 NameNode
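Both formats used the same -clusterId (c1); if the IDs differed, the two NameNodes would belong to separate clusters and the DataNodes would register with only one of them. The ID is persisted in the name directory's VERSION file. A sketch of checking it against a sample VERSION file; the other field values below are illustrative, and on a real node you would read /home/yyl/program/hadoop-2.5.2/name/current/VERSION instead:

```shell
# Create a sample VERSION file (illustrative values except clusterID):
cat > /tmp/VERSION.sample <<'EOF'
namespaceID=613465239
clusterID=c1
cTime=0
storageType=NAME_NODE
layoutVersion=-57
EOF
# Extract the clusterID; run the same pipeline against the real VERSION
# file on node1 and node2 and confirm the two values match:
cid=$(grep '^clusterID=' /tmp/VERSION.sample | cut -d= -f2)
echo "$cid"   # prints c1
```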

3. Check HDFS Federation

Open each NameNode's web UI in a browser; each should show its own namespace within the same cluster:

http://node1.zhch:50070/
http://node2.zhch:50070/
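Besides eyeballing the two web UIs, the cluster ID can be compared programmatically: each NameNode serves its JMX metrics as JSON at /jmx (e.g. curl -s 'http://node1.zhch:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'). A sketch that parses a trimmed, illustrative response; on a live cluster, substitute the curl output from node1.zhch and node2.zhch:

```shell
# Trimmed sample of a NameNodeInfo JMX response (illustrative):
resp='{"beans":[{"name":"Hadoop:service=NameNode,name=NameNodeInfo","ClusterId":"c1"}]}'
# Pull out the ClusterId field; both NameNodes should report the same value:
cid=$(printf '%s' "$resp" | sed -n 's/.*"ClusterId":"\([^"]*\)".*/\1/p')
echo "$cid"   # prints c1
```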
 
4. Start the DataNodes and YARN
[yyl@node1 ~]$ hadoop-daemons.sh start datanode
node4.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node4.zhch.out
node5.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node5.zhch.out
node3.zhch: starting datanode, logging to /home/yyl/program/hadoop-2.5.2/logs/hadoop-yyl-datanode-node3.zhch.out
[yyl@node1 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-resourcemanager-node1.zhch.out
node5.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node5.zhch.out
node3.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node3.zhch.out
node4.zhch: starting nodemanager, logging to /home/yyl/program/hadoop-2.5.2/logs/yarn-yyl-nodemanager-node4.zhch.out
[yyl@node1 ~]$ jps
1402 Jps
1177 NameNode
1333 ResourceManager
[yyl@node2 ~]$ jps
1445 NameNode
1539 Jps
[yyl@node3 ~]$ jps
1214 NodeManager
1166 DataNode
1256 Jps

Subsequent startups do not need to repeat the steps above; the cluster can be started directly with:
sh $HADOOP_PREFIX/sbin/start-dfs.sh
sh $HADOOP_PREFIX/sbin/start-yarn.sh
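As a quick post-startup sanity check, each host should show a fixed set of daemons in its jps output. A small helper sketch; the host-to-role mapping is taken from this post's node assignment, and on a live cluster you would compare its output against `ssh <host> jps`:

```shell
# Expected jps daemons per host, per this experiment's role assignment:
# node1 = NameNode + ResourceManager, node2 = NameNode,
# node3/4/5 = DataNode + NodeManager.
expected_daemons() {
  case "$1" in
    node1.zhch) echo "NameNode ResourceManager" ;;
    node2.zhch) echo "NameNode" ;;
    node3.zhch|node4.zhch|node5.zhch) echo "DataNode NodeManager" ;;
    *) echo "unknown host" ;;
  esac
}
expected_daemons node3.zhch   # prints: DataNode NodeManager
```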

Reposted from: https://my.oschina.net/zc741520/blog/496541
