
Commit 6b01f97

🚧 Flink
1 parent 7f5687e commit 6b01f97

File tree

3 files changed: +104 -68 lines changed


markdown-file/Ansible-Install-And-Settings.md

Lines changed: 26 additions & 0 deletions
@@ -206,6 +206,32 @@ PLAY RECAP *********************************************************************
 - Run the command: `ansible-playbook /opt/jdk8-playbook.yml`


+#### Modify hosts
+
+
+- Create the playbook file: `vim /opt/hosts-playbook.yml`
+
+```
+- hosts: all
+  remote_user: root
+  tasks:
+    - name: update hosts
+      blockinfile:
+        path: /etc/hosts
+        block: |
+          192.168.0.223 linux01
+          192.168.0.223 linux02
+          192.168.0.223 linux03
+          192.168.0.223 linux04
+          192.168.0.223 linux05
+```
+
+
+- Run the command: `ansible-playbook /opt/hosts-playbook.yml`
+
+-------------------------------------------------------------------
+
+
 ## References


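Once the playbook has run, the block that `blockinfile` wrote can be spot-checked from the control machine with an ad-hoc Ansible command; a minimal sketch, assuming the target machines are already listed in the default inventory:

```
# Print /etc/hosts on every inventory host and look for the ANSIBLE MANAGED BLOCK markers
ansible all -m command -a "cat /etc/hosts"
```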
markdown-file/Hadoop-Install-And-Settings.md

Lines changed: 27 additions & 27 deletions
@@ -28,9 +28,9 @@
 - Set the hostname on each of the three machines

 ```
-hostnamectl --static set-hostname hadoop-master
-hostnamectl --static set-hostname hadoop-node1
-hostnamectl --static set-hostname hadoop-node2
+hostnamectl --static set-hostname linux01
+hostnamectl --static set-hostname linux02
+hostnamectl --static set-hostname linux03
 ```


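A quick way to confirm each machine picked up its new name before moving on:

```
# hostnamectl shows the static hostname that was just set
hostnamectl status
hostname
```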
@@ -39,13 +39,13 @@ hostnamectl --static set-hostname hadoop-node2
 ```
 Use exactly these entries and do not add anything extra, otherwise it may cause problems
 vim /etc/hosts
-172.16.0.17 hadoop-master
-172.16.0.43 hadoop-node1
-172.16.0.180 hadoop-node2
+172.16.0.17 linux01
+172.16.0.43 linux02
+172.16.0.180 linux03
 ```


-- Set up password-less login on hadoop-master:
+- Set up password-less login on linux01:

 ```
 Generate the key pair
@@ -64,13 +64,13 @@ ssh localhost
 - If you log in with a pem key, you can look at this: [SSH password-less login](SSH-login-without-password.md)

 ```
-ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@172.16.0.43, enter the root password of the hadoop-node1 machine when prompted; a message confirms success
-ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@172.16.0.180, enter the root password of the hadoop-node2 machine when prompted; a message confirms success
+ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@172.16.0.43, enter the root password of the linux02 machine when prompted; a message confirms success
+ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@172.16.0.180, enter the root password of the linux03 machine when prompted; a message confirms success


-Test from hadoop-master:
-ssh hadoop-node1
-ssh hadoop-node2
+Test from linux01:
+ssh linux02
+ssh linux03

 ```

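With the keys copied, password-less access to both workers can be verified in one pass from linux01; a small sketch, assuming the names resolve through the /etc/hosts entries above:

```
# Each command should print the worker's hostname without prompting for a password
for h in linux02 linux03; do ssh -o BatchMode=yes root@"$h" hostname; done
```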
@@ -88,7 +88,7 @@ mkdir -p /data/hadoop/hdfs/name /data/hadoop/hdfs/data /data/hadoop/hdfs/tmp
 ```

 - Download Hadoop: <http://apache.claz.org/hadoop/common/hadoop-2.6.5/>
-- Now install on the hadoop-master machine
+- Now install on the linux01 machine

 ```
 cd /usr/local && wget http://apache.claz.org/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
@@ -108,7 +108,7 @@ source /etc/profile
 ```


-## Modify the hadoop-master configuration
+## Modify the linux01 configuration


 ```
@@ -145,12 +145,12 @@ vim $HADOOP_HOME/etc/hadoop/core-site.xml, change it to:
 <!--
 <property>
 <name>fs.default.name</name>
-<value>hdfs://hadoop-master:9000</value>
+<value>hdfs://linux01:9000</value>
 </property>
 -->
 <property>
 <name>fs.defaultFS</name>
-<value>hdfs://hadoop-master:9000</value>
+<value>hdfs://linux01:9000</value>
 </property>
 <property>
 <name>hadoop.proxyuser.root.hosts</name>
@@ -225,7 +225,7 @@ vim $HADOOP_HOME/etc/hadoop/yarn-site.xml
 <configuration>
 <property>
 <name>yarn.resourcemanager.hostname</name>
-<value>hadoop-master</value>
+<value>linux01</value>
 </property>

 <property>
@@ -244,21 +244,21 @@ vim $HADOOP_HOME/etc/hadoop/yarn-site.xml
 vim $HADOOP_HOME/etc/hadoop/slaves

 Delete the default localhost entry and replace it with:
-hadoop-node1
-hadoop-node2
+linux02
+linux03

 ```


 ```
-scp -r /usr/local/hadoop-2.6.5 root@hadoop-node1:/usr/local/
+scp -r /usr/local/hadoop-2.6.5 root@linux02:/usr/local/

-scp -r /usr/local/hadoop-2.6.5 root@hadoop-node2:/usr/local/
+scp -r /usr/local/hadoop-2.6.5 root@linux03:/usr/local/

 ```


-## Run on the hadoop-master machine
+## Run on the linux01 machine

 ```
 Format HDFS
@@ -269,7 +269,7 @@ hdfs namenode -format
 - Output:

 ```
-[root@hadoop-master hadoop-2.6.5]# hdfs namenode -format
+[root@linux01 hadoop-2.6.5]# hdfs namenode -format
 18/12/17 17:47:17 INFO namenode.NameNode: STARTUP_MSG:
 /************************************************************
 STARTUP_MSG: Starting NameNode
@@ -424,10 +424,10 @@ tcp6 0 0 :::37481 :::* LISTEN

 ## Management web UIs

-- View the HDFS NameNode web UI: <http://hadoop-master:50070>
-- Access the YARN ResourceManager web UI: <http://hadoop-master:8088>
-- Access the NodeManager-1 web UI: <http://hadoop-node1:8042>
-- Access the NodeManager-2 web UI: <http://hadoop-node2:8042>
+- View the HDFS NameNode web UI: <http://linux01:50070>
+- Access the YARN ResourceManager web UI: <http://linux01:8088>
+- Access the NodeManager-1 web UI: <http://linux02:8042>
+- Access the NodeManager-2 web UI: <http://linux03:8042>


 -------------------------------------------------------------------
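Besides the web UIs, the cluster state can be checked from the command line; a minimal sketch, assuming the Hadoop environment variables set earlier are loaded:

```
# List the running Java daemons (NameNode/ResourceManager on linux01, DataNode/NodeManager on linux02/linux03)
jps
# Summarize HDFS capacity and the live DataNodes
hdfs dfsadmin -report
# List the NodeManagers registered with YARN
yarn node -list
```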

markdown-file/Wormhole-Install-And-Settings.md

Lines changed: 51 additions & 41 deletions
@@ -26,21 +26,32 @@
 ## Base environment

 - Official docs: <https://edp963.github.io/wormhole/deployment.html>
-- Three 4C8G servers, CentOS 7.4
-    - hostname: `linux-05`
-    - hostname: `linux-06`
-    - hostname: `linux-07`
+- 4 servers (8C32G), CentOS 7.5
+    - **For ease of testing, the firewall is disabled on all servers and all ports are open externally**
+    - **Password-less login is set up on all of them**
+    - hostname: `linux01`
+    - hostname: `linux02`
+    - hostname: `linux03`
+    - hostname: `linux04`
+    - hostname: `linux05`
+    - For batch-adding hosts entries with Ansible, see: [click here](Ansible-Install-And-Settings.md)
 - Required (do not pick versions at random; use exactly the ones described below):
     - In general I put all components under: `/usr/local`
-    - JDK (all three machines): `1.8.0_181`
-    - Hadoop cluster (HDFS, YARN) (all three machines): `2.6.5`
-    - Spark, single node (linux-05): `2.2.0`
-    - Flink, single node (linux-05): `1.5.1`
-    - Zookeeper, single node (linux-05): `3.4.13`
-    - Kafka, single node (linux-05): `0.10.2.2`
-    - MySQL, single node (linux-05): `5.7`
-    - wormhole, single node (linux-05): `0.6.0-beta`, 2018-12-06 build
-    - Installation guides for the components above: [click here](https://github.com/judasn/Linux-Tutorial)
+    - JDK (all servers): `1.8.0_181`
+        - For batch-installing the JDK, see: [click here](Ansible-Install-And-Settings.md)
+    - Hadoop cluster (HDFS, YARN) (linux01, linux02, linux03): `2.6.5`
+        - Installation: [click here](Hadoop-Install-And-Settings.md)
+    - Zookeeper, single node (linux04): `3.4.13`
+        - Installation: [click here](Zookeeper-Install.md)
+    - Kafka, single node (linux04): `0.10.2.2`
+        - Installation: [click here](Kafka-Install-And-Settings.md)
+    - MySQL, single node (linux04): `5.7`
+        - Installation: [click here](Mysql-Install-And-Settings.md)
+    - Spark, single node (linux05): `2.2.0`
+        - Installation: [click here](Spark-Install-And-Settings.md)
+    - Flink, single node (linux05): `1.5.1`
+        - Installation: [click here](Flink-Install-And-Settings.md)
+    - wormhole, single node (linux05): `0.6.0-beta`, 2018-12-06 build
 - Optional:
     - Elasticsearch (5.x supported) (optional; without it you cannot see wormhole's data-processing throughput and latency)
     - Grafana (4.x supported) (optional; without it you cannot see graphical charts of wormhole's data-processing throughput and latency)
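Since everything below assumes the firewall is off, it is worth double-checking on each machine before continuing; on CentOS 7 that looks roughly like:

```
# Both should report that firewalld is not running
systemctl is-active firewalld
firewall-cmd --state
```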
@@ -50,15 +61,16 @@
 ## Wormhole installation + configuration

 - Official docs: <https://edp963.github.io/wormhole/deployment.html>
-- Reference application.conf for the final environment
+- Extract the package: `cd /usr/local && tar -xvf wormhole-0.6.0-beta.tar.gz`
+- Edit the config file: `vim /usr/local/wormhole-0.6.0-beta/conf/application.conf`

 ```

 akka.http.server.request-timeout = 120s

 wormholeServer {
 cluster.id = "" #optional global uuid
-host = "linux-05"
+host = "linux05"
 port = 8989
 ui.default.language = "Chinese"
 token.timeout = 1
@@ -73,36 +85,36 @@ mysql = {
 driver = "com.mysql.jdbc.Driver"
 user = "root"
 password = "123456"
-url = "jdbc:mysql://localhost:3306/wormhole?useUnicode=true&characterEncoding=UTF-8&useSSL=false"
+url = "jdbc:mysql://linux04:3306/wormhole?useUnicode=true&characterEncoding=UTF-8&useSSL=false"
 numThreads = 4
 minConnections = 4
 maxConnections = 10
 connectionTimeout = 3000
 }
 }

-ldap = {
-enabled = false
-user = ""
-pwd = ""
-url = ""
-dc = ""
-read.timeout = 3000
-read.timeout = 5000
-connect = {
-timeout = 5000
-pool = true
-}
-}
+#ldap = {
+# enabled = false
+# user = ""
+# pwd = ""
+# url = ""
+# dc = ""
+# read.timeout = 3000
+# read.timeout = 5000
+# connect = {
+# timeout = 5000
+# pool = true
+# }
+#}

 spark = {
 wormholeServer.user = "root" #WormholeServer linux user
 wormholeServer.ssh.port = 22 #ssh port, please set WormholeServer linux user can password-less login itself remote
 spark.home = "/usr/local/spark"
 yarn.queue.name = "default" #WormholeServer submit spark streaming/job queue
-wormhole.hdfs.root.path = "hdfs://linux-05/wormhole" #WormholeServer hdfslog data default hdfs root path
-yarn.rm1.http.url = "linux-05:8088" #Yarn ActiveResourceManager address
-yarn.rm2.http.url = "linux-05:8088" #Yarn StandbyResourceManager address
+wormhole.hdfs.root.path = "hdfs://linux01/wormhole" #WormholeServer hdfslog data default hdfs root path
+yarn.rm1.http.url = "linux01:8088" #Yarn ActiveResourceManager address
+yarn.rm2.http.url = "linux01:8088" #Yarn StandbyResourceManager address
 }

 flink = {
@@ -111,20 +123,18 @@ flink = {
 feedback.state.count=100
 checkpoint.enable=false
 checkpoint.interval=60000
-stateBackend="hdfs://linux-05/flink-checkpoints"
+stateBackend="hdfs://linux01/flink-checkpoints"
 feedback.interval=30
 }

 zookeeper = {
-connection.url = "localhost:2181" #WormholeServer stream and flow interaction channel
+connection.url = "linux04:2181" #WormholeServer stream and flow interaction channel
 wormhole.root.path = "/wormhole" #zookeeper
 }

 kafka = {
-#brokers.url = "localhost:6667" #WormholeServer feedback data store
-brokers.url = "linux-05:9092"
-zookeeper.url = "localhost:2181"
-#topic.refactor = 3
+brokers.url = "linux04:9092"
+zookeeper.url = "linux04:2181"
 topic.refactor = 1
 using.cluster.suffix = false #if true, _${cluster.id} will be concatenated to consumer.feedback.topic
 consumer = {
@@ -156,7 +166,7 @@ kafka = {

 # choose monitor method among ES、MYSQL
 monitor ={
-database.type="ES"
+database.type="MYSQL"
 }

 #Wormhole feedback data store, if doesn't want to config, you will not see wormhole processing delay and throughput
@@ -205,7 +215,7 @@ maintenance = {
 - Path of the table-initialization script: <https://github.com/edp963/wormhole/blob/master/rider/conf/wormhole.sql>
 - This script has one issue: the initialization SQL and the patch SQL are mixed together, so copying and running it as-is will throw errors, but the failing parts do not matter
 - I ran the base SQL and the patch SQL separately, which makes it easier to tell what happened.
-- Once deployed, open in a browser: <http://linux-05:8989>
+- Once deployed, open in a browser: <http://linux01:8989>

 -------------------------------------------------------------------

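The application.conf above points its MySQL section at a `wormhole` database on linux04; if that database does not exist yet, it can be created before running wormhole.sql with something like the following (the root password and charset are assumptions):

```
# Create the database that the JDBC URL in application.conf expects, then run the wormhole.sql statements against it
mysql -h linux04 -uroot -p -e "CREATE DATABASE IF NOT EXISTS wormhole DEFAULT CHARACTER SET utf8;"
```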
@@ -313,7 +323,7 @@ maintenance = {
 ## Send test data to Kafka

 - `cd /usr/local/kafka/bin`
-- `./kafka-console-producer.sh --broker-list linux-05:9092 --topic source --property "parse.key=true" --property "key.separator=@@@"`
+- `./kafka-console-producer.sh --broker-list linux01:9092 --topic source --property "parse.key=true" --property "key.separator=@@@"`
 - Send a message in the UMS stream-message protocol format:

 ```
