
Commit bcbd0e9 (parent eb0e8be): Update README.md


README.md

Lines changed: 12 additions & 11 deletions
Spark HBase is built using [Apache Maven](http://maven.apache.org/).

I. Clone and build Huawei-Spark/Spark-SQL-on-HBase
```
$ git clone https://github.com/Huawei-Spark/Spark-SQL-on-HBase spark-hbase
```

II. Go to the root of the source tree
```
$ cd spark-hbase
```

III. Build without testing
```
$ mvn -Phbase,hadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package install
```

IV. Build and run test suites against an HBase minicluster, from Maven.
```
$ mvn clean install
```
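After a successful build, the assembled jar should land under the project's `target/` directory. The helper below is a hypothetical sketch (the `find_built_jar` name and the `spark-sql-on-hbase*.jar` filename pattern are assumptions based on the project name, not confirmed build output) for locating it from a script:

```python
import glob
import os


def find_built_jar(target_dir):
    """Return the newest-sorted spark-sql-on-hbase jar under target_dir, or None.

    NOTE: the jar-name pattern is an assumption based on the project name;
    adjust it to match the artifact your Maven build actually produces.
    """
    pattern = os.path.join(target_dir, "spark-sql-on-hbase*.jar")
    matches = sorted(glob.glob(pattern))
    return matches[-1] if matches else None


if __name__ == "__main__":
    jar = find_built_jar("spark-hbase/target")
    print(jar if jar else "no jar found; run the Maven build first")
```

The returned path can then be appended to `SPARK_CLASSPATH` as shown in the Python Shell section.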

## Interactive Scala Shell

The easiest way to start using Spark HBase is through the Scala shell:
```
./bin/hbase-sql
```

## Python Shell

```
SPARK_CLASSPATH=$SPARK_CLASSPATH:/spark-hbase-root-dir/target/spark-sql-on-hbase
```
Then go to the spark-hbase installation directory and issue
```
./bin/pyspark-hbase
```
A successful startup message is as follows:

You are using Spark SQL on HBase!!!
HBaseSQLContext available as hsqlContext.

To run a Python script, the PYTHONPATH environment variable should be set to the "python" directory of the Spark-HBase installation. For example,
```
export PYTHONPATH=/root-of-Spark-HBase/python
```
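As an alternative to exporting PYTHONPATH before launch, a script can extend `sys.path` itself at startup. A minimal sketch, reusing the placeholder install root from the export above (replace it with your actual checkout path):

```python
import os
import sys

# Placeholder root taken from the README example; substitute your real path.
spark_hbase_root = "/root-of-Spark-HBase"
python_dir = os.path.join(spark_hbase_root, "python")

# In-process equivalent of `export PYTHONPATH=/root-of-Spark-HBase/python`:
if python_dir not in sys.path:
    sys.path.insert(0, python_dir)
```

Modules under that directory then become importable for the rest of the script.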

Note that the shell commands are not included in the Zip file of the Spark release. They are for developers' use only in this 1.0.0 version. Instead, users can run `$SPARK_HOME/bin/spark-shell --packages Huawei-Spark/Spark-SQL-on-HBase:1.0.0` for the SQL shell or `$SPARK_HOME/bin/pyspark --packages Huawei-Spark/Spark-SQL-on-HBase:1.0.0` for the Python shell.

Testing first requires [building Spark HBase](#building-spark).

Run all test suites from Maven:
```
mvn -Phbase,hadoop-2.4 test
```
Run a single test suite from Maven, for example:
```
mvn -Phbase,hadoop-2.4 test -DwildcardSuites=org.apache.spark.sql.hbase.BasicQueriesSuite
```
## IDE Setup

To import the current Spark HBase project for IntelliJ:

6. When you run the Scala tests, you may sometimes get an out-of-memory exception. You can increase the VM memory with a setting such as:

```
-XX:MaxPermSize=512m -Xmx3072m
```

You can also make these settings the default under "Defaults -> ScalaTest".
