In this post we will walk through deploying a fully distributed Hadoop cluster on CentOS 6.5.
First, the system environment in detail:

System environment: CentOS 6.5 (Hadoop is best deployed on Linux)
Hadoop version: Hadoop 2.2.0, the first stable release in the Hadoop 2.x line
Java environment: JDK 1.7, 64-bit (build 1.7.0_25-b15)
Deployment layout:

192.168.46.28  hp1 (master)
192.168.46.29  hp2 (slave)
192.168.46.30  hp3 (slave)
Deployment steps:

1. Set up passwordless SSH login
2. Configure the environment variables: Java (required), Maven, Ant
3. Configure the Hadoop environment variables
4. Configure the core-site.xml file
5. Configure the hdfs-site.xml file
6. Configure the mapred-site.xml file
7. Configure the yarn-site.xml file
8. Configure the slaves file
9. Distribute everything to the slaves
10. Format the NameNode on the master
11. Start the cluster: sbin/start-all.sh
12. Run the jps command and check the Java processes on the master and the slaves
13. Open the web pages and check the cluster status information
14. Run a MapReduce job to verify the cluster
1. First, set up SSH trust between the cluster nodes so that the Hadoop processes can communicate with each other without passwords.

Generate the key pair: ssh-keygen -t rsa -P ''
Copy the public key to establish trust: ssh-copy-id -i ~/.ssh/id_rsa.pub root@hp2
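A minimal sketch of the whole key exchange, assuming root is used on all three nodes and that hp1, hp2, and hp3 resolve via /etc/hosts or DNS; the lines above only show hp2, so pushing the key to every node (including the master itself) is an assumption made here so that sbin/start-all.sh can reach all of them:

# On hp1 (master): generate an RSA key pair with an empty passphrase
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# Push the public key to every node so root can log in without a password
for host in hp1 hp2 hp3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done

# Verify: each command should print the remote hostname without asking for a password
ssh root@hp2 hostname
ssh root@hp3 hostname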
2. Configure the environment variables for Java (required), Maven, Ant, Hadoop, and so on. The settings are as follows:
export PATH=.:$PATH
export JAVA_HOME="/usr/local/jdk"
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/root/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export CLASSPATH=.:$CLASSPATH:$HADOOP_HOME/lib
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export ANT_HOME=/usr/local/ant
export CLASSPATH=$CLASSPATH:$ANT_HOME/lib
export PATH=$PATH:$ANT_HOME/bin
export MAVEN_HOME="/usr/local/maven"
export CLASSPATH=$CLASSPATH:$MAVEN_HOME/lib
export PATH=$PATH:$MAVEN_HOME/bin
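The text above does not say which file these exports go into; a common choice (an assumption here) is to append them to /etc/profile or ~/.bashrc on every node and then reload the shell. A quick sanity check, assuming the JDK and Hadoop are unpacked at the paths shown above:

source /etc/profile      # reload the environment (adjust if ~/.bashrc was used instead)
echo $JAVA_HOME          # should print /usr/local/jdk
echo $HADOOP_HOME        # should print /root/hadoop
java -version            # should report a 64-bit 1.7.0_25 JVM
hadoop version           # should report Hadoop 2.2.0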
3. Configure the core-site.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.46.28:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop/tmp</value>
  </property>
</configuration>
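fs.default.name is the pre-2.x name of fs.defaultFS; Hadoop 2.2.0 still accepts it but logs a deprecation warning. A quick way to confirm that Hadoop is actually picking this file up from $HADOOP_CONF_DIR (a small extra check, not part of the original walkthrough):

hdfs getconf -confKey fs.default.name    # should print hdfs://192.168.46.28:9000
hdfs getconf -confKey hadoop.tmp.dir     # should print /root/hadoop/tmp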
4. Configure the hdfs-site.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/root/hadoop/nddir</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/root/hadoop/dddir</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
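The local paths above, plus the hadoop.tmp.dir from core-site.xml, do not exist by default; as noted further below, they have to be created by hand before the NameNode is formatted. A minimal sketch, run as root (all three nodes run a DataNode here, since the slaves file below also lists the master):

# NameNode metadata directory (needed on hp1)
mkdir -p /root/hadoop/nddir
# DataNode block directory and Hadoop tmp directory (needed on hp1, hp2, hp3)
mkdir -p /root/hadoop/dddir /root/hadoop/tmp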
5. Configure the mapred-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hp1:8021</value>
    <final>true</final>
    <description>The host and port that the MapReduce JobTracker runs at.</description>
  </property>
  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>
</configuration>
6. Configure the yarn-site.xml file:

<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hp1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hp1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hp1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hp1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hp1:8088</value>
  </property>
</configuration>
7. Configure the slaves file:

192.168.46.28
192.168.46.29
192.168.46.30
Once the configuration is done, note that the directories referenced in hdfs-site.xml, as well as Hadoop's HDFS tmp directory, have to be created by hand under the Hadoop root directory (see the sketch after the hdfs-site.xml section above). With everything in place, distribute the whole Hadoop directory to the slaves, format the NameNode, and start the cluster; a minimal command sketch follows. After that, running jps on the master and on the slaves should show the following.
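A sketch of the distribution, format, and startup commands, assuming Hadoop lives in /root/hadoop on every node as configured above (the environment variables from step 2 also have to be present on hp2 and hp3):

# Copy the whole Hadoop directory, including etc/hadoop/*, to both slaves
scp -r /root/hadoop root@hp2:/root/
scp -r /root/hadoop root@hp3:/root/

# Format the NameNode once, on the master (hp1)
hdfs namenode -format

# Start HDFS and YARN across the cluster from the master
/root/hadoop/sbin/start-all.sh

# Check the Java daemons on each node
jps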
The jps output on the master looks like this:

4335 SecondaryNameNode
4464 ResourceManager
4553 NodeManager
4102 NameNode
4206 DataNode
6042 Jps
The jps output on a slave looks like this:

1727 DataNode
1810 NodeManager
2316 Jps
If the Java processes shown by jps are correct, we can open the web interface and inspect the cluster status.
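With the configuration above, the ResourceManager web UI is reachable at http://hp1:8088 (the yarn.resourcemanager.webapp.address set earlier), and the NameNode status page at the Hadoop 2.2 default port, http://hp1:50070. To run a quick MapReduce job against the cluster (step 14 in the step list), a sketch using the examples jar that ships with Hadoop 2.2.0 follows; the exact jar path is an assumption based on the standard binary layout:

# Estimate pi with 2 map tasks and 10 samples each; a result near 3.14 means
# HDFS, YARN, and the shuffle handler are all working
hadoop jar /root/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 10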
At this point the Hadoop cluster has been deployed successfully. When installing, follow the steps above in this order and you should generally not run into problems.