## Base Environment

- Official deployment guide: <https://edp963.github.io/wormhole/deployment.html>
- 4 servers (8C32G), CentOS 7.5
    - **For ease of testing, the firewall is disabled on every server and all ports are open externally** (a setup sketch follows this list)
    - **Passwordless SSH login is configured between all of them**
    - hostname: `linux01`
    - hostname: `linux02`
    - hostname: `linux03`
    - hostname: `linux04`
    - hostname: `linux05`
    - To add the hosts entries in batch with Ansible, see: [click me](Ansible-Install-And-Settings.md)
- Required (do not pick versions at random; use exactly the ones listed below):
    - In general, I put all components under: `/usr/local`
    - JDK (all servers): `1.8.0_181`
        - To install the JDK in batch, see: [click me](Ansible-Install-And-Settings.md)
    - Hadoop cluster (HDFS, YARN) (linux01, linux02, linux03): `2.6.5`
        - Installation guide: [click me](Hadoop-Install-And-Settings.md)
    - Zookeeper, single node (linux04): `3.4.13`
        - Installation guide: [click me](Zookeeper-Install.md)
    - Kafka, single node (linux04): `0.10.2.2`
        - Installation guide: [click me](Kafka-Install-And-Settings.md)
    - MySQL, single node (linux04): `5.7`
        - Installation guide: [click me](Mysql-Install-And-Settings.md)
    - Spark, single node (linux05): `2.2.0`
        - Installation guide: [click me](Spark-Install-And-Settings.md)
    - Flink, single node (linux05): `1.5.1`
        - Installation guide: [click me](Flink-Install-And-Settings.md)
    - wormhole, single node (linux05): `0.6.0-beta` (the 2018-12-06 release)
- Optional:
    - Elasticsearch (5.x supported) (optional; without it you cannot view the throughput and latency of the data wormhole processes)
    - Grafana (4.x supported) (optional; without it you cannot view the graphical display of wormhole's processing throughput and latency)
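
A minimal sketch of how those two prerequisites can be set up by hand (run as root on every server; the exact commands here are my assumption, since the original walkthrough delegates this to the Ansible tutorial linked above):

```
# Test environment only: stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Passwordless login: generate a key pair once, then push the public key
# to every host in the cluster (you type each password one last time)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for h in linux01 linux02 linux03 linux04 linux05; do
    ssh-copy-id "root@${h}"
done
```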
## Wormhole Installation + Configuration

- Official deployment guide: <https://edp963.github.io/wormhole/deployment.html>
- Unpack: `cd /usr/local && tar -xvf wormhole-0.6.0-beta.tar.gz`
- Edit the config file: `vim /usr/local/wormhole-0.6.0-beta/conf/application.conf`

```
akka.http.server.request-timeout = 120s

wormholeServer {
cluster.id = "" #optional global uuid
host = "linux05"
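# host/port: where the wormhole web UI listens; this must be the machine
# wormhole itself runs on (linux05 in this walkthrough)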
port = 8989
ui.default.language = "Chinese"
token.timeout = 1
# ... (omitted: the rest of the wormholeServer block and the opening of the mysql block; keep the defaults) ...
driver = "com.mysql.jdbc.Driver"
user = "root"
password = "123456"
url = "jdbc:mysql://linux04:3306/wormhole?useUnicode=true&characterEncoding=UTF-8&useSSL=false"
numThreads = 4
minConnections = 4
maxConnections = 10
connectionTimeout = 3000
}
}

# ldap = {
# enabled = false
# user = ""
# pwd = ""
# url = ""
# dc = ""
# read.timeout = 3000
# read.timeout = 5000
# connect = {
# timeout = 5000
# pool = true
# }
# }

spark = {
99
111
wormholeServer.user = "root" #WormholeServer linux user
100
112
wormholeServer.ssh.port = 22 #ssh port, please set WormholeServer linux user can password-less login itself remote
101
113
spark.home = "/usr/local/spark"
102
114
yarn.queue.name = "default" #WormholeServer submit spark streaming/job queue
wormhole.hdfs.root.path = "hdfs://linux01/wormhole" #WormholeServer hdfslog data default hdfs root path
yarn.rm1.http.url = "linux01:8088" #Yarn ActiveResourceManager address
yarn.rm2.http.url = "linux01:8088" #Yarn StandbyResourceManager address
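# note: this cluster runs a single ResourceManager, so rm1 and rm2
# intentionally point to the same address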
}

flink = {
# ... (a few flink settings omitted; keep the defaults) ...
feedback.state.count=100
checkpoint.enable=false
checkpoint.interval=60000
stateBackend="hdfs://linux01/flink-checkpoints"
feedback.interval=30
}

zookeeper = {
connection.url = "linux04:2181" #WormholeServer stream and flow interaction channel
wormhole.root.path = "/wormhole" #zookeeper
}

kafka = {
brokers.url = "linux04:9092"
zookeeper.url = "linux04:2181"
topic.refactor = 1
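# topic.refactor is presumably the feedback topic's replication factor;
# it cannot exceed 1 here because Kafka runs as a single broker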
using.cluster.suffix = false #if true, _${cluster.id} will be concatenated to consumer.feedback.topic
consumer = {
# ... (consumer settings and the close of the kafka block omitted; keep the defaults) ...

# choose monitor method among ES、MYSQL
monitor ={
database.type="MYSQL"
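# Elasticsearch is optional and not installed in this walkthrough, so the
# feedback monitoring data goes to MySQL instead of ES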
}

#Wormhole feedback data store, if doesn't want to config, you will not see wormhole processing delay and throughput
# ... (remaining configuration omitted; keep the defaults) ...
```

- Path of the table-initialization SQL script: <https://github.com/edp963/wormhole/blob/master/rider/conf/wormhole.sql>
- A caveat with that script: the base initialization SQL and the patch SQL are mixed together, so running the whole file as-is will report errors, although the failing statements are harmless
- I simply ran the base SQL and the patch SQL separately, which makes it easier to tell what actually failed (see the sketch below).
- Once deployed, open it in a browser: <http://linux05:8989>
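
A sketch of how the metadata database can be initialized on linux04 (the credentials come from the application.conf above; `wormhole-base.sql` and `wormhole-patch.sql` are hypothetical names for the two halves you split out of the official script yourself):

```
# Create the database referenced by the JDBC url in application.conf
mysql -h linux04 -uroot -p123456 -e "CREATE DATABASE IF NOT EXISTS wormhole DEFAULT CHARACTER SET utf8;"

# Fetch the official script, split the base DDL from the patch statements
# by hand, then run the two parts separately
wget https://raw.githubusercontent.com/edp963/wormhole/master/rider/conf/wormhole.sql
mysql -h linux04 -uroot -p123456 wormhole < wormhole-base.sql
mysql -h linux04 -uroot -p123456 wormhole < wormhole-patch.sql
```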

-------------------------------------------------------------------

## Sending Test Data to Kafka

- `cd /usr/local/kafka/bin`
- `./kafka-console-producer.sh --broker-list linux04:9092 --topic source --property "parse.key=true" --property "key.separator=@@@"`
- Send a message that follows the UMS stream-message protocol format:

```
# ... (UMS example message omitted in this excerpt) ...
```