Commit a1f6723

Spark on Yarn initial commit

File tree

3 files changed: 61 additions & 0 deletions


.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
.project

Dockerfile

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
FROM sequenceiq/hadoop-docker:2.4.1
MAINTAINER SequenceIQ

RUN curl -s https://s3-eu-west-1.amazonaws.com/seq-spark/spark-v1.0.1-rc2.tar.gz | tar -xz -C /usr/local/
RUN cd /usr/local && ln -s spark-v1.0.1-rc2 spark
RUN $BOOTSTRAP && $HADOOP_PREFIX/bin/hadoop dfsadmin -safemode leave && $HADOOP_PREFIX/bin/hdfs dfs -put /usr/local/spark/assembly/target/scala-2.10 /spark

ENV YARN_CONF_DIR $HADOOP_PREFIX/etc/hadoop
ENV SPARK_JAR hdfs:///spark/spark-assembly-1.0.1-hadoop2.4.1.jar

CMD ["/etc/bootstrap.sh", "-d"]
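The third RUN step is the interesting one: it runs the base image's bootstrap script ($BOOTSTRAP, provided by sequenceiq/hadoop-docker), takes the NameNode out of safe mode, and uploads the Spark assembly into HDFS so YARN containers can fetch it at job launch. A minimal sketch for verifying the upload from a shell inside a running container, assuming $HADOOP_PREFIX is set by the base image as the Dockerfile itself relies on:

```
# list the Spark assembly that the build step put into HDFS
$HADOOP_PREFIX/bin/hdfs dfs -ls /spark
```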

README.md

Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
Apache Spark on Docker
==========

This repository contains a Dockerfile for building a Docker image with Apache Spark. The image is based on our previous Hadoop Docker image, available on the SequenceIQ GitHub page.
The base Hadoop Docker image is also available as an official Docker image (sequenceiq/hadoop-docker).
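The build pulls the base image automatically, but if you want to fetch it ahead of time, a minimal sketch (the tag matches the Dockerfile's FROM line):

```
docker pull sequenceiq/hadoop-docker:2.4.1
```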

## Building the image
```
docker build --rm -t sequenceiq/spark .
```

## Running the image
```
docker run -i -t -h sandbox sequenceiq/spark /etc/bootstrap.sh -bash
```
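The image's default CMD starts the cluster in daemon mode, so the container can also be run detached; a sketch, where the container name is illustrative:

```
# run detached; the default CMD is /etc/bootstrap.sh -d
docker run -d -h sandbox --name spark-sandbox sequenceiq/spark
# open a shell in the running container when needed
docker exec -it spark-sandbox bash
```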

## Versions
Hadoop 2.4.1 and Apache Spark v1.0.1-rc2

## Testing

You can run one of the stock examples:

```
cd /usr/local/spark
# run the Spark shell
./bin/spark-shell --master yarn-client --driver-memory 1g --executor-memory 1g --executor-cores 1

# execute the following command, which should return 1000
scala> sc.parallelize(1 to 1000).count()
```
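The same sanity check can be scripted by piping the expression into the shell, since the REPL consumes stdin and exits when it closes; a sketch, assuming the same options as above:

```
# non-interactive variant of the count check
echo "sc.parallelize(1 to 1000).count()" | ./bin/spark-shell --master yarn-client --driver-memory 1g --executor-memory 1g --executor-cores 1
```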

There are two deploy modes for launching Spark applications on YARN. In yarn-cluster mode, the Spark driver runs inside an application master process managed by YARN on the cluster, and the client can go away after initiating the application. In yarn-client mode, the driver runs in the client process, and the application master is used only for requesting resources from YARN.

Estimating Pi (yarn-cluster mode):
```
cd /usr/local/spark

# execute the following command, which should write "Pi is roughly 3.1418" into the logs
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --driver-memory 1g --executor-memory 1g --executor-cores 1 examples/target/scala-2.10/spark-examples_2.10-1.0.1.jar
```

Estimating Pi (yarn-client mode):
```
cd /usr/local/spark

# execute the following command, which should print "Pi is roughly 3.1418" to the screen
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 1g --executor-memory 1g --executor-cores 1 examples/target/scala-2.10/spark-examples_2.10-1.0.1.jar
```
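In yarn-cluster mode the driver's output lands in the YARN application logs rather than on your terminal. If YARN log aggregation is enabled, the logs can be fetched with the yarn CLI; a sketch, where the application ID is a placeholder for the one spark-submit prints:

```
# fetch the logs of the finished job and look for the result
# (the application ID below is a placeholder; use the one from the spark-submit output)
$HADOOP_PREFIX/bin/yarn logs -applicationId application_1400000000000_0001 | grep "Pi is roughly"
```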
