
Cloudera CDH 5 Cluster Setup (via yum)


1 Cluster Environment

Master nodes

master001 ~~ master006

Slave nodes

slave001 ~~ slave064

2 Install the CDH 5 YUM Repository

rpm -Uvh http://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm

wget http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/cloudera-cdh5.repo

mv cloudera-cdh5.repo /etc/yum.repos.d/

3 ZooKeeper

3.1 Node Assignment

ZooKeeper Server:

master002, master003, master004, master005, master006

ZooKeeper Client:

master001, master002, master003, master004, master005, master006

3.2 Installation

ZooKeeper Client nodes:

yum install -y zookeeper

ZooKeeper Server nodes:

yum install -y zookeeper-server

3.3 Configuration

1. On each ZooKeeper node, edit the ZooKeeper configuration file

/etc/zookeeper/conf/zoo.cfg

maxClientCnxns=50

# The number of milliseconds of each tick
tickTime=2000

# The number of ticks that the initial
# synchronization phase can take
initLimit=10

# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5

# the directory where the snapshot is stored.
dataDir=/data/disk01/zookeeper/zk_data
dataLogDir=/data/disk01/zookeeper/zk_log

# the port at which the clients will connect
clientPort=2181

server.2=master002:2888:3888

server.3=master003:2888:3888

server.4=master004:2888:3888

server.5=master005:2888:3888

server.6=master006:2888:3888

2. Initialize the nodes

master002:

service zookeeper-server init --myid=2

master003:

service zookeeper-server init --myid=3

master004:

service zookeeper-server init --myid=4

master005:

service zookeeper-server init --myid=5

master006:

service zookeeper-server init --myid=6
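The five init commands above differ only in the host name and the myid digit (host master00N gets myid=N), so they can be generated from one loop. A minimal sketch that only prints the commands; actually executing them remotely (e.g. via ssh from an admin host) is assumed to be set up separately:

```shell
# Print the per-node init command for each ZooKeeper server.
# Swap `echo` for `ssh "$host"` to actually run them (passwordless
# SSH to the master nodes is assumed in that case).
for id in 2 3 4 5 6; do
  host=$(printf 'master%03d' "$id")
  echo "$host: service zookeeper-server init --myid=$id"
done
```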

3. Start ZooKeeper

service zookeeper-server start

3.4 Installation Paths

Program path

/usr/lib/zookeeper/

Configuration file path

/etc/zookeeper/conf

Log path

/var/log/zookeeper

3.5 Start | Stop | Status

ZooKeeper

service zookeeper-server start|stop|status

3.6 Common Commands

Check ZooKeeper node status

zookeeper-server status

Manually purge logs

/usr/lib/zookeeper/bin/zkCleanup.sh dataLogDir [snapDir] -n count

Automatic log purging

autopurge.purgeInterval specifies the purge frequency in hours; set it to an integer of 1 or greater. The default is 0, which disables automatic purging.

autopurge.snapRetainCount works together with the parameter above and specifies how many snapshots (and their corresponding logs) to retain. The default is 3.
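Taken together, the two settings go into zoo.cfg. For example, a daily purge that keeps the three most recent snapshots (the values here are illustrative, not from the original setup):

```
# purge every 24 hours
autopurge.purgeInterval=24
# keep the 3 most recent snapshots and their logs
autopurge.snapRetainCount=3
```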

3.7 Testing

https://github.com/phunt/zk-smoketest

3.8 References

ZooKeeper parameter configuration

http://my.oschina.net/u/128568/blog/194820

Common ZooKeeper administration and operations

http://nileader.blog.51cto.com/1381108/1032157

4 HDFS

4.1 Node Assignment (NameNode HA)

namenode, zkfc:

master002, master003

datanode:

slave001-slave064

journalnode:

master002, master003, master004

4.2 Installation

namenode:

yum install hadoop-hdfs-namenode

yum install hadoop-hdfs-zkfc

(yum install -y hadoop-hdfs-namenode hadoop-hdfs-zkfc hadoop-client)

datanode:

yum install hadoop-hdfs-datanode

(yum install -y hadoop-hdfs-datanode hadoop-client)

journalnode:

yum install hadoop-hdfs-journalnode

(yum install -y hadoop-hdfs-journalnode)

All nodes:

yum install hadoop-client

4.3 Configuration

1. Configuration files

/etc/hadoop/conf/core-site.xml

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bdcluster</value>
  </property>

  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>

  <property>
    <name>hadoop.proxyuser.httpfs.hosts</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.proxyuser.httpfs.groups</name>
    <value>*</value>
  </property>

</configuration>

/etc/hadoop/conf/hdfs-site.xml

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>dfs.nameservices</name>
    <value>bdcluster</value>
  </property>

  <property>
    <name>dfs.ha.namenodes.bdcluster</name>
    <value>nn002,nn003</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.bdcluster.nn002</name>
    <value>master002:8020</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.bdcluster.nn003</name>
    <value>master003:8020</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.bdcluster.nn002</name>
    <value>master002:50070</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.bdcluster.nn003</name>
    <value>master003:50070</value>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master002:8485;master003:8485;master004:8485/bdcluster</value>
  </property>

  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/disk01/hadoop/hdfs/journalnode</value>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.bdcluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/var/lib/hadoop-hdfs/.ssh/id_dsa</value>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master002:2181,master003:2181,master004:2181,master005:2181,master006:2181</value>
  </property>

  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/disk01/hadoop/hdfs/namenode</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/disk01/hadoop/hdfs/datanode,/data/disk02/hadoop/hdfs/datanode,/data/disk03/hadoop/hdfs/datanode,/data/disk04/hadoop/hdfs/datanode,/data/disk05/hadoop/hdfs/datanode,/data/disk06/hadoop/hdfs/datanode,/data/disk07/hadoop/hdfs/datanode</value>
  </property>

  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>3</value>
  </property>

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>

  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>

</configuration>


/etc/hadoop/conf/slaves

slave001

slave002

slave064
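Listing 64 hostnames by hand is error-prone; since they follow the slaveNNN pattern, the file can be generated instead. A sketch (it writes to /tmp/slaves for illustration; the real file is /etc/hadoop/conf/slaves):

```shell
# Generate slave001..slave064, one hostname per line.
for i in $(seq 1 64); do
  printf 'slave%03d\n' "$i"
done > /tmp/slaves
```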

2. Configure passwordless SSH login for the hdfs user (required by the sshfence fencing method configured above)

3. Create data directories

namenode

mkdir -p /data/disk01/hadoop/hdfs/namenode

chown -R hdfs:hdfs /data/disk01/hadoop/hdfs/

chown -R hdfs:hdfs /data/disk01/hadoop/hdfs/namenode

chmod 700 /data/disk01/hadoop/hdfs/namenode

?

datanode

mkdir -p /data/disk01/hadoop/hdfs/datanode
chmod 700 /data/disk01/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk01/hadoop/hdfs/

mkdir -p /data/disk02/hadoop/hdfs/datanode
chmod 700 /data/disk02/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk02/hadoop/hdfs/

mkdir -p /data/disk03/hadoop/hdfs/datanode
chmod 700 /data/disk03/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk03/hadoop/hdfs/

mkdir -p /data/disk04/hadoop/hdfs/datanode
chmod 700 /data/disk04/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk04/hadoop/hdfs/

mkdir -p /data/disk05/hadoop/hdfs/datanode
chmod 700 /data/disk05/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk05/hadoop/hdfs/

mkdir -p /data/disk06/hadoop/hdfs/datanode
chmod 700 /data/disk06/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk06/hadoop/hdfs/

mkdir -p /data/disk07/hadoop/hdfs/datanode
chmod 700 /data/disk07/hadoop/hdfs/datanode
chown -R hdfs:hdfs /data/disk07/hadoop/hdfs/
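The seven per-disk blocks above differ only in the disk number, so they can be driven from one loop. A sketch that prints the commands rather than running them; pipe its output to `sh` as root to actually apply it:

```shell
# Print the per-disk DataNode directory setup commands for disk01..disk07.
for i in $(seq 1 7); do
  n=$(printf '%02d' "$i")
  echo "mkdir -p /data/disk$n/hadoop/hdfs/datanode"
  echo "chmod 700 /data/disk$n/hadoop/hdfs/datanode"
  echo "chown -R hdfs:hdfs /data/disk$n/hadoop/hdfs/"
done
```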


journalnode

mkdir -p /data/disk01/hadoop/hdfs/journalnode

chown -R hdfs:hdfs /data/disk01/hadoop/hdfs/journalnode

4. Start the JournalNodes

service hadoop-hdfs-journalnode start

5. Format the NameNode (master002)

sudo -u hdfs hadoop namenode -format

6. Initialize the HA state in ZooKeeper (on NameNode master002)

hdfs zkfc -formatZK

7. Initialize the shared edits directory (master002)

hdfs namenode -initializeSharedEdits

8. Start the NameNodes

On the formatted NameNode (master002):

service hadoop-hdfs-namenode start

On the standby NameNode (master003):

sudo -u hdfs hdfs namenode -bootstrapStandby

service hadoop-hdfs-namenode start

9. Start the DataNodes

service hadoop-hdfs-datanode start

10. Start ZKFC (on the NameNode hosts)

service hadoop-hdfs-zkfc start

11. Initialize the HDFS directory layout

/usr/lib/hadoop/libexec/init-hdfs.sh

4.4 Installation Paths

Program path

/usr/lib/hadoop-hdfs

Configuration file path

/etc/hadoop/conf

Log path

/var/log/hadoop-hdfs

4.5 Start | Stop | Status

NameNode

service hadoop-hdfs-namenode start|stop|status

DataNode

service hadoop-hdfs-datanode start|stop|status

JournalNode

service hadoop-hdfs-journalnode start|stop|status

zkfc

service hadoop-hdfs-zkfc start|stop|status

4.6 Common Commands

Check cluster status

sudo -u hdfs hdfs dfsadmin -report

Check a file and its replicas

sudo -u hdfs hdfs fsck [filename] -files -blocks -locations -racks

5 YARN

5.1 Node Assignment

resourcemanager:

master004

nodemanager, mapreduce:

slave001-slave064

mapreduce-historyserver:

master006

5.2 Installation

resourcemanager:

yum -y install hadoop-yarn-resourcemanager

nodemanager:

yum -y install hadoop-yarn-nodemanager hadoop-mapreduce

mapreduce-historyserver:

yum -y install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver

All nodes:

yum -y install hadoop-client

5.3 Configuration

1. Configuration files

/etc/hadoop/conf/mapred-site.xml

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>1024</value>
  </property>

  <property>
    <name>mapred.child.java.opts</name>
    <value>-XX:-UseGCOverheadLimit -Xms1024m -Xmx2048m</value>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx2048m</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master006:10020</value>
    <description>MapReduce JobHistory Server IPC host:port</description>
  </property>

  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master006:19888</value>
    <description>MapReduce JobHistory Server Web UI host:port</description>
  </property>

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>

  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/user/history/done_intermediate</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/user/history/done</value>
  </property>

</configuration>

/etc/hadoop/conf/yarn-site.xml

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master004:8031</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master004:8032</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master004:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master004:8033</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master004:8088</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <description>List of directories to store localized files in.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/disk01/hadoop/yarn/local,/data/disk02/hadoop/yarn/local,/data/disk03/hadoop/yarn/local,/data/disk04/hadoop/yarn/local,/data/disk05/hadoop/yarn/local</value>
  </property>

  <property>
    <description>Where to store container logs.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data/disk01/hadoop/yarn/logs,/data/disk02/hadoop/yarn/logs,/data/disk03/hadoop/yarn/logs,/data/disk04/hadoop/yarn/logs,/data/disk05/hadoop/yarn/logs</value>
  </property>

  <!--property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/var/log/hadoop-yarn/apps</value>
  </property-->

  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
      $HADOOP_CONF_DIR,
      $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
      $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
      $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
      $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
    </value>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>

  <property>
    <description>The minimum allocation for every container request at the RM,
    in MBs. Memory requests lower than this won't take effect,
    and the specified value will get allocated at minimum.</description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>

  <property>
    <description>The maximum allocation for every container request at the RM,
    in MBs. Memory requests higher than this won't take effect,
    and will get capped to this value.</description>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>16384</value>
  </property>

  <property>
    <description>The minimum allocation for every container request at the RM,
    in terms of virtual CPU cores. Requests lower than this won't take effect,
    and the specified value will get allocated the minimum.</description>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>

  <property>
    <description>The maximum allocation for every container request at the RM,
    in terms of virtual CPU cores. Requests higher than this won't take effect,
    and will get capped to this value.</description>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>32</value>
  </property>

  <property>
    <description>Number of CPU cores that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>48</value>
  </property>

  <property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>120000</value>
  </property>

  <property>
    <description>Ratio between virtual memory to physical memory when
    setting memory limits for containers. Container allocations are
    expressed in terms of physical memory, and virtual memory usage
    is allowed to exceed this allocation by this ratio.
    </description>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>6</value>
  </property>

</configuration>
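As a rough sanity check on the values above: each NodeManager advertises 120000 MB and 48 vcores, and map tasks request 2048 MB each, so memory alone would admit about 58 concurrent map containers per node, meaning the 48-vcore limit binds first. The arithmetic, with the numbers copied from the config:

```shell
# Per-node container capacity implied by the yarn-site.xml values above.
node_mb=120000        # yarn.nodemanager.resource.memory-mb
map_mb=2048           # mapreduce.map.memory.mb
vcores=48             # yarn.nodemanager.resource.cpu-vcores

by_mem=$(( node_mb / map_mb ))
cap=$(( by_mem < vcores ? by_mem : vcores ))
echo "memory allows $by_mem map containers; vcores allow $vcores; effective cap $cap"
```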

2. Create local directories on the NodeManagers

mkdir -p /data/disk01/hadoop/yarn/local /data/disk02/hadoop/yarn/local /data/disk03/hadoop/yarn/local /data/disk04/hadoop/yarn/local /data/disk05/hadoop/yarn/local

mkdir -p /data/disk01/hadoop/yarn/logs /data/disk02/hadoop/yarn/logs /data/disk03/hadoop/yarn/logs /data/disk04/hadoop/yarn/logs /data/disk05/hadoop/yarn/logs

chown -R yarn:yarn /data/disk01/hadoop/yarn /data/disk02/hadoop/yarn /data/disk03/hadoop/yarn /data/disk04/hadoop/yarn /data/disk05/hadoop/yarn

chown -R yarn:yarn /data/disk01/hadoop/yarn/local /data/disk02/hadoop/yarn/local /data/disk03/hadoop/yarn/local /data/disk04/hadoop/yarn/local /data/disk05/hadoop/yarn/local

chown -R yarn:yarn /data/disk01/hadoop/yarn/logs /data/disk02/hadoop/yarn/logs /data/disk03/hadoop/yarn/logs /data/disk04/hadoop/yarn/logs /data/disk05/hadoop/yarn/logs

3. Create the history directory

sudo -u hdfs hadoop fs -mkdir /user/history

sudo -u hdfs hadoop fs -chmod -R 1777 /user/history

sudo -u hdfs hadoop fs -chown yarn /user/history

4. Start the services

resourcemanager:

sudo service hadoop-yarn-resourcemanager start

nodemanager:

sudo service hadoop-yarn-nodemanager start

mapreduce-historyserver:

sudo service hadoop-mapreduce-historyserver start

5.4 Installation Paths

Program path

/usr/lib/hadoop-yarn

Configuration file path

/etc/hadoop/conf

Log path

/var/log/hadoop-yarn

5.5 Start | Stop | Status

resourcemanager:

service hadoop-yarn-resourcemanager start|stop|status

nodemanager:

service hadoop-yarn-nodemanager start|stop|status

mapreduce-historyserver:

service hadoop-mapreduce-historyserver start|stop|status

5.6 Common Commands

Check node status

yarn node -list -all

ResourceManager administration

yarn rmadmin ...

6 HBase

6.1 Node Assignment

hbase-master

master004, master005, master006

hbase-regionserver

slave001 ~~ slave064

hbase-thrift

master004, master005, master006

hbase-rest

master004, master005, master006

6.2 Installation

hbase-master

yum install -y hbase hbase-master

hbase-regionserver

yum install -y hbase hbase-regionserver

hbase-thrift

yum install -y hbase-thrift

hbase-rest

yum install -y hbase-rest

6.3 Configuration

1. Configuration files

/etc/security/limits.conf

hdfs - nofile 32768

hbase - nofile 32768

/etc/hbase/conf/hbase-site.xml

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>hbase.rest.port</name>
    <value>60050</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master002,master003,master004,master005,master006</value>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.tmp.dir</name>
    <value>/tmp/hadoop/hbase</value>
  </property>

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://bdcluster/hbase/</value>
  </property>

</configuration>

/etc/hbase/conf/hbase-env.sh

# Set environment variables here.

# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.6 required.
# export JAVA_HOME=/usr/java/default/

# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HBASE_HEAPSIZE=1000

# Extra Java runtime options.
# Below are what we set by default.  May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.

# This enables basic gc logging to the .out file.
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps $HBASE_GC_OPTS"

export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M $HBASE_GC_OPTS"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# Uncomment one of the below three options to enable java garbage collection logging for the client processes.

# This enables basic gc logging to the .out file.
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps $HBASE_GC_OPTS"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# Uncomment below if you intend to use the EXPERIMENTAL off heap cache.
# export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize="
# Set hbase.offheapcache.percentage in hbase-site.xml to a nonzero value.

export HBASE_USE_GC_LOGFILE=true

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
#
# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10105"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Uncomment and adjust to keep all the Region Server pages mapped to be memory resident
# HBASE_REGIONSERVER_MLOCK=true
# HBASE_REGIONSERVER_UID="hbase"

# File naming hosts on which backup HMaster will run.  $HBASE_HOME/conf/backup-masters by default.
# export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false

# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
# RFA appender. Please refer to the log4j.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.

2. Start the services

hbase-master

service hbase-master start

hbase-regionserver

service hbase-regionserver start

hbase-thrift

service hbase-thrift start

hbase-rest

service hbase-rest start

6.4 Installation Paths

Program path

/usr/lib/hbase

Configuration file path

/etc/hbase/conf

Log path

/var/log/hbase

6.5 Start | Stop | Status

hbase-master:

service hbase-master start|stop|status

hbase-regionserver:

service hbase-regionserver start|stop|status

hbase-thrift:

service hbase-thrift start|stop|status

hbase-rest:

service hbase-rest start|stop|status

6.6 Common Commands

hbase shell

7 Spark

7.1 Node Assignment

master002 ~~ master006

7.2 Installation

yum install spark-core spark-master spark-worker spark-python

7.3 Configuration

1. /etc/spark/conf/spark-env.sh

export SPARK_HOME=/usr/lib/spark

2. Deploy Spark to HDFS

source /etc/spark/conf/spark-env.sh

hdfs dfs -mkdir -p /user/spark/share/lib

sudo -u hdfs hdfs dfs -put /usr/lib/spark/assembly/lib/spark-assembly_2.10-0.9.0-cdh5.0.0-hadoop2.3.0-cdh5.0.0.jar /user/spark/share/lib/spark-assembly.jar

7.4 Installation Paths

Program path

/usr/lib/spark

Configuration file path

/etc/spark/conf

Log path

/var/log/spark

Spark path in HDFS

/user/spark/share/lib/spark-assembly.jar

7.5 Example Program

source /etc/spark/conf/spark-env.sh

SPARK_JAR=hdfs://bdcluster/user/spark/share/lib/spark-assembly.jar APP_JAR=$SPARK_HOME/examples/lib/spark-examples_2.10-0.9.0-cdh5.0.0.jar $SPARK_HOME/bin/spark-class org.apache.spark.deploy.yarn.Client --jar $APP_JAR --class org.apache.spark.examples.SparkPi --args yarn-standalone --args 10
