Environment Preparation

  • JDK

  • Zookeeper

  • MySQL

  • Hive

  • Kafka

  • Solr

  • HBase

  • Atlas

Install Zookeeper 3.5.7

1) Upload the archive to the /opt/software folder and extract it

cd /opt/software/
tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /opt/module/

2) Configure zoo.cfg

cd conf/
mv zoo_sample.cfg zoo.cfg
vim zoo.cfg 
dataDir=./tmp/zookeeper
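
This guide runs a standalone node, so zoo.cfg needs no `server.N` entries; if you later grow it into an ensemble, each server additionally needs a `myid` file under `dataDir` whose content matches that host's `server.N` number. A minimal sketch, using an illustrative demo path rather than the real dataDir:

```shell
# Sketch: create dataDir and write the per-server myid file that
# Zookeeper requires in ensemble mode (path and ID are illustrative).
DATA_DIR=/tmp/zk-demo/tmp/zookeeper
MY_ID=1                      # must match this host's server.N entry in zoo.cfg
mkdir -p "$DATA_DIR"
echo "$MY_ID" > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"         # prints: 1
```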

3) Start Zookeeper

/opt/module/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start

Install Kafka

1) Upload the archive and extract it

tar -zxvf kafka_2.11-2.4.1.tgz -C /opt/module/

2) Enter the Kafka directory and create a logs folder

cd /opt/module/kafka_2.11-2.4.1/
mkdir logs

3) Modify the configuration file

cd config/
vim server.properties

Enter the following:

#Globally unique broker ID; must not repeat across brokers
broker.id=0
#Enable topic deletion
delete.topic.enable=true
#Number of threads handling network requests
num.network.threads=3
#Number of threads handling disk I/O
num.io.threads=8
#Send socket buffer size
socket.send.buffer.bytes=102400
#Receive socket buffer size
socket.receive.buffer.bytes=102400
#Maximum size of a socket request
socket.request.max.bytes=104857600
#Path where Kafka data logs are stored
log.dirs=/opt/module/kafka_2.11-2.4.1/logs
#Default number of partitions per topic on this broker
num.partitions=1
#Threads used to recover and clean up data under log.dirs
num.recovery.threads.per.data.dir=1
#Maximum time a segment file is retained before deletion
log.retention.hours=168
#Zookeeper cluster connection address
zookeeper.connect=hadoop101:2181/kafka_2.4
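
Every broker in a cluster needs a distinct `broker.id`. One hedged way to script this when rolling the config out to several machines, assuming hosts are named `hadoopNNN` (the output path here is a demo location, not the real config file):

```shell
# Sketch: derive a unique broker.id from a hostname like hadoop102.
# The hadoopNNN naming scheme is an assumption; adjust to your hosts.
KAFKA_HOST=hadoop102
BROKER_ID=${KAFKA_HOST##*[!0-9]}      # strips the non-digit prefix: 102
OUT=/tmp/server.properties.demo
printf 'broker.id=%s\n' "$BROKER_ID" > "$OUT"
cat "$OUT"                            # prints: broker.id=102
```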

4) Start Zookeeper first, then Kafka

/opt/module/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start
/opt/module/kafka_2.11-2.4.1/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.11-2.4.1/config/server.properties 

5) After startup, you can check the registration info in Zookeeper

/opt/module/apache-zookeeper-3.5.7-bin/bin/zkCli.sh 
ls /
[kafka_2.4, zookeeper]

Kafka registers under /kafka_2.4 rather than the root directory; you can inspect its contents further:

ls /kafka_2.4
[admin, brokers, cluster, config, consumers, controller, controller_epoch, isr_change_notification, latest_producer_id_block, log_dir_event_notification]

Install Solr 7.7.3

1) Upload solr-7.7.3.tgz and extract it to /opt/module/

tar -zxvf solr-7.7.3.tgz -C /opt/module/
cd solr-7.7.3/
vim bin/solr.in.sh
ZK_HOST="hadoop101:2181"
SOLR_PORT=8983
SOLR_ULIMIT_CHECKS=false

2) Adjust system limits

vim /etc/security/limits.conf
* hard nproc 65000
* soft nproc 65000
ulimit -u 65000
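
Each limits.conf entry follows the four-field format `<domain> <type> <item> <value>`. A small sketch that writes the two entries above to a demo file and pulls the fields back out, purely to illustrate the format (the real file is /etc/security/limits.conf, and a re-login is usually needed before the new limits apply):

```shell
# Sketch: demonstrate the limits.conf field layout on a demo file.
CONF=/tmp/limits.demo.conf
cat > "$CONF" <<'EOF'
* hard nproc 65000
* soft nproc 65000
EOF
# Print "<type> <value>" for every nproc entry.
awk '$3 == "nproc" { print $2, $4 }' "$CONF"   # prints: hard 65000 / soft 65000
```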

3) Start Solr

bin/solr start -force

4) Access the web UI on port 8983: http://IP:8983/solr/#/

Install HBase

1) Upload and extract hbase-2.2.4-bin.tar.gz

cd /opt/software/
tar -zxvf hbase-2.2.4-bin.tar.gz -C /opt/module/

2) Edit conf/regionservers: remove localhost and replace it with each node's hostname or IP

cd /opt/module/hbase-2.2.4/
vim conf/regionservers 
hadoop101

3) Edit conf/hbase-site.xml

cd conf/
vim hbase-site.xml 
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop101:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/root/zookeeper</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop101</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
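
The file above hard-codes hadoop101; when the same setup is repeated on other machines, the XML can be generated from variables instead of edited by hand. A sketch with a demo output path, showing only two of the properties (the rest follow the same pattern):

```shell
# Sketch: generate a minimal hbase-site.xml from variables so one
# script serves several hosts (paths and hostnames are illustrative).
NAMENODE=${NAMENODE:-hadoop101:8020}
ZK_QUORUM=${ZK_QUORUM:-hadoop101}
OUT=/tmp/hbase-site.demo.xml

cat > "$OUT" <<EOF
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://${NAMENODE}/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>${ZK_QUORUM}</value>
  </property>
</configuration>
EOF
grep '<value>' "$OUT"
```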

4) Edit hbase-env.sh to declare the JDK path

vim hbase-env.sh 
export JAVA_HOME=/opt/module/jdk1.8.0_211
export HBASE_MANAGES_ZK=false

5) Copy hdfs-site.xml into HBase's conf directory

cp /opt/module/hadoop-3.1.3/etc/hadoop/hdfs-site.xml /opt/module/hbase-2.2.4/conf/

6) Configure the HBase environment variables

vim /etc/profile
#HBASE_HOME
export HBASE_HOME=/opt/module/hbase-2.2.4
export PATH=$PATH:$HBASE_HOME/bin
source /etc/profile
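
Re-running the profile edit by hand risks duplicate entries; a hedged sketch of an idempotent append, using a demo file path instead of the real /etc/profile:

```shell
# Sketch: append the HBASE_HOME exports only if not already present
# (demo path; substitute /etc/profile on a real host).
PROFILE=/tmp/profile.demo
touch "$PROFILE"
if ! grep -q 'HBASE_HOME' "$PROFILE"; then
  {
    echo '#HBASE_HOME'
    echo 'export HBASE_HOME=/opt/module/hbase-2.2.4'
    echo 'export PATH=$PATH:$HBASE_HOME/bin'
  } >> "$PROFILE"
fi
grep 'HBASE_HOME' "$PROFILE"
```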

7) Start HBase

start-hbase.sh 

Install Atlas 2.2

1) Upload apache-atlas-2.2.0-server.tar.gz to the /opt/software directory on hadoop102

2) Extract apache-atlas-2.2.0-server.tar.gz to /opt/module/

tar -zxvf apache-atlas-2.2.0-server.tar.gz -C /opt/module/

3) Rename apache-atlas-2.2.0 to atlas

mv apache-atlas-2.2.0/ atlas
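
After the rename, Atlas still has to be pointed at the HBase, Solr, and Kafka instances installed above via conf/atlas-application.properties. A hedged sketch of the entries that typically matter in Atlas 2.x; verify each key against the conf files bundled with your distribution before relying on it (hosts follow this guide, and the /kafka_2.4 chroot matches the Zookeeper registration seen earlier):

```properties
# Graph storage in HBase (value is the HBase Zookeeper quorum)
atlas.graph.storage.backend=hbase2
atlas.graph.storage.hostname=hadoop101:2181

# Index/search in SolrCloud
atlas.graph.index.search.backend=solr
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=hadoop101:2181

# Notifications through the external Kafka installed above
atlas.notification.embedded=false
atlas.kafka.zookeeper.connect=hadoop101:2181/kafka_2.4
atlas.kafka.bootstrap.servers=hadoop101:9092

# Atlas web UI (this guide uploads Atlas to hadoop102)
atlas.rest.address=http://hadoop102:21000
```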