
Commit 843a31a

liancheng authored and yhuai committed
[SPARK-12046][DOC] Fixes various ScalaDoc/JavaDoc issues
This PR backports PR apache#10039 to master.

Author: Cheng Lian <lian@databricks.com>

Closes apache#10063 from liancheng/spark-12046.doc-fix.master.

(cherry picked from commit 69dbe6b)
Signed-off-by: Yin Huai <yhuai@databricks.com>
1 parent 40769b4 commit 843a31a

25 files changed, +152 -133 lines changed

core/src/main/java/org/apache/spark/api/java/function/Function4.java

Lines changed: 1 addition & 1 deletion

@@ -23,5 +23,5 @@
  * A four-argument function that takes arguments of type T1, T2, T3 and T4 and returns an R.
  */
 public interface Function4<T1, T2, T3, T4, R> extends Serializable {
-  public R call(T1 v1, T2 v2, T3 v3, T4 v4) throws Exception;
+  R call(T1 v1, T2 v2, T3 v3, T4 v4) throws Exception;
 }

core/src/main/java/org/apache/spark/api/java/function/VoidFunction.java

Lines changed: 1 addition & 1 deletion

@@ -23,5 +23,5 @@
  * A function with no return value.
  */
 public interface VoidFunction<T> extends Serializable {
-  public void call(T t) throws Exception;
+  void call(T t) throws Exception;
 }

core/src/main/java/org/apache/spark/api/java/function/VoidFunction2.java

Lines changed: 1 addition & 1 deletion

@@ -23,5 +23,5 @@
  * A two-argument function that takes arguments of type T1 and T2 with no return value.
  */
 public interface VoidFunction2<T1, T2> extends Serializable {
-  public void call(T1 v1, T2 v2) throws Exception;
+  void call(T1 v1, T2 v2) throws Exception;
 }
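
Note (not part of the commit): Java interface methods are implicitly public, so the dropped `public` modifiers above were redundant and the edits change nothing at the bytecode level. As a minimal usage sketch of one of these functional interfaces from Scala, where `lines` is a hypothetical JavaRDD[String] built elsewhere:

import org.apache.spark.api.java.function.VoidFunction

// JavaRDD.foreach accepts a VoidFunction[T], so the interface can be
// implemented anonymously; `lines` is a hypothetical JavaRDD[String].
val printEach = new VoidFunction[String] {
  override def call(t: String): Unit = println(t)
}
lines.foreach(printEach)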

core/src/main/scala/org/apache/spark/api/java/JavaPairRDD.scala

Lines changed: 8 additions & 8 deletions

@@ -215,13 +215,13 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
   /**
    * Generic function to combine the elements for each key using a custom set of aggregation
    * functions. Turns a JavaPairRDD[(K, V)] into a result of type JavaPairRDD[(K, C)], for a
-   * "combined type" C * Note that V and C can be different -- for example, one might group an
+   * "combined type" C. Note that V and C can be different -- for example, one might group an
    * RDD of type (Int, Int) into an RDD of type (Int, List[Int]). Users provide three
    * functions:
    *
-   * - `createCombiner`, which turns a V into a C (e.g., creates a one-element list)
-   * - `mergeValue`, to merge a V into a C (e.g., adds it to the end of a list)
-   * - `mergeCombiners`, to combine two C's into a single one.
+   *  - `createCombiner`, which turns a V into a C (e.g., creates a one-element list)
+   *  - `mergeValue`, to merge a V into a C (e.g., adds it to the end of a list)
+   *  - `mergeCombiners`, to combine two C's into a single one.
    *
    * In addition, users can control the partitioning of the output RDD, the serializer that is use
    * for the shuffle, and whether to perform map-side aggregation (if a mapper can produce multiple

@@ -247,13 +247,13 @@ class JavaPairRDD[K, V](val rdd: RDD[(K, V)])
   /**
    * Generic function to combine the elements for each key using a custom set of aggregation
    * functions. Turns a JavaPairRDD[(K, V)] into a result of type JavaPairRDD[(K, C)], for a
-   * "combined type" C * Note that V and C can be different -- for example, one might group an
+   * "combined type" C. Note that V and C can be different -- for example, one might group an
    * RDD of type (Int, Int) into an RDD of type (Int, List[Int]). Users provide three
    * functions:
    *
-   * - `createCombiner`, which turns a V into a C (e.g., creates a one-element list)
-   * - `mergeValue`, to merge a V into a C (e.g., adds it to the end of a list)
-   * - `mergeCombiners`, to combine two C's into a single one.
+   *  - `createCombiner`, which turns a V into a C (e.g., creates a one-element list)
+   *  - `mergeValue`, to merge a V into a C (e.g., adds it to the end of a list)
+   *  - `mergeCombiners`, to combine two C's into a single one.
    *
    * In addition, users can control the partitioning of the output RDD. This method automatically
    * uses map-side aggregation in shuffling the RDD.

core/src/main/scala/org/apache/spark/memory/package.scala

Lines changed: 7 additions & 7 deletions

@@ -21,13 +21,13 @@ package org.apache.spark
  * This package implements Spark's memory management system. This system consists of two main
  * components, a JVM-wide memory manager and a per-task manager:
  *
- * - [[org.apache.spark.memory.MemoryManager]] manages Spark's overall memory usage within a JVM.
- *   This component implements the policies for dividing the available memory across tasks and for
- *   allocating memory between storage (memory used caching and data transfer) and execution (memory
- *   used by computations, such as shuffles, joins, sorts, and aggregations).
- * - [[org.apache.spark.memory.TaskMemoryManager]] manages the memory allocated by individual tasks.
- *   Tasks interact with TaskMemoryManager and never directly interact with the JVM-wide
- *   MemoryManager.
+ *  - [[org.apache.spark.memory.MemoryManager]] manages Spark's overall memory usage within a JVM.
+ *    This component implements the policies for dividing the available memory across tasks and for
+ *    allocating memory between storage (memory used caching and data transfer) and execution
+ *    (memory used by computations, such as shuffles, joins, sorts, and aggregations).
+ *  - [[org.apache.spark.memory.TaskMemoryManager]] manages the memory allocated by individual
+ *    tasks. Tasks interact with TaskMemoryManager and never directly interact with the JVM-wide
+ *    MemoryManager.
  *
  * Internally, each of these components have additional abstractions for memory bookkeeping:
  *
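
For context (not part of the diff): the storage/execution split described above is tunable through the unified memory manager's configuration keys. A sketch, with illustrative values rather than recommended defaults:

import org.apache.spark.SparkConf

// spark.memory.fraction: share of the heap managed by Spark for execution
// and storage combined; spark.memory.storageFraction: portion of that share
// reserved for storage. Values below are illustrative only.
val conf = new SparkConf()
  .set("spark.memory.fraction", "0.75")
  .set("spark.memory.storageFraction", "0.5")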

core/src/main/scala/org/apache/spark/rdd/CoGroupedRDD.scala

Lines changed: 1 addition & 1 deletion

@@ -70,7 +70,7 @@ private[spark] class CoGroupPartition(
  *
  * Note: This is an internal API. We recommend users use RDD.cogroup(...) instead of
  * instantiating this directly.
-
+ *
  * @param rdds parent RDDs.
  * @param part partitioner used to partition the shuffle output
  */
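
As the note recommends, user code should reach this functionality through RDD.cogroup rather than constructing CoGroupedRDD directly. A minimal sketch, assuming an existing SparkContext `sc` and made-up sample data:

// Cogroup two pair RDDs by key; the result pairs each key with the values
// found for it in each input: RDD[(String, (Iterable[Int], Iterable[Int]))].
val scores = sc.parallelize(Seq(("alice", 1), ("bob", 2)))
val ages = sc.parallelize(Seq(("alice", 30)))
val grouped = scores.cogroup(ages)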

core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala

Lines changed: 3 additions & 3 deletions

@@ -65,9 +65,9 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
    * Note that V and C can be different -- for example, one might group an RDD of type
    * (Int, Int) into an RDD of type (Int, Seq[Int]). Users provide three functions:
    *
-   * - `createCombiner`, which turns a V into a C (e.g., creates a one-element list)
-   * - `mergeValue`, to merge a V into a C (e.g., adds it to the end of a list)
-   * - `mergeCombiners`, to combine two C's into a single one.
+   *  - `createCombiner`, which turns a V into a C (e.g., creates a one-element list)
+   *  - `mergeValue`, to merge a V into a C (e.g., adds it to the end of a list)
+   *  - `mergeCombiners`, to combine two C's into a single one.
    *
    * In addition, users can control the partitioning of the output RDD, and whether to perform
    * map-side aggregation (if a mapper can produce multiple items with the same key).
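
The three functions in the list above map directly onto combineByKey's parameters, in both this Scala API and the JavaPairRDD equivalent shown earlier. A minimal sketch of the doc's grouping example, assuming an existing SparkContext `sc` and made-up data:

val pairs = sc.parallelize(Seq((1, 2), (1, 3), (2, 4)))
val grouped = pairs.combineByKey(
  (v: Int) => List(v),                         // createCombiner: V => C
  (c: List[Int], v: Int) => v :: c,            // mergeValue: fold a V into a C
  (c1: List[Int], c2: List[Int]) => c1 ::: c2  // mergeCombiners: merge two C's
)
grouped.collect()  // e.g. Array((1, List(3, 2)), (2, List(4)))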

core/src/main/scala/org/apache/spark/rdd/ShuffledRDD.scala

Lines changed: 1 addition & 1 deletion

@@ -86,7 +86,7 @@ class ShuffledRDD[K: ClassTag, V: ClassTag, C: ClassTag](
     Array.tabulate[Partition](part.numPartitions)(i => new ShuffledRDDPartition(i))
   }
 
-  override def getPreferredLocations(partition: Partition): Seq[String] = {
+  override protected def getPreferredLocations(partition: Partition): Seq[String] = {
     val tracker = SparkEnv.get.mapOutputTracker.asInstanceOf[MapOutputTrackerMaster]
     val dep = dependencies.head.asInstanceOf[ShuffleDependency[K, V, C]]
     tracker.getPreferredLocationsForShuffle(dep, partition.index)
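
The added `protected` aligns the override with RDD.getPreferredLocations, which is itself declared protected; external callers go through RDD.preferredLocations instead. A minimal sketch of a custom RDD overriding it under that visibility (the class and hostname are made up):

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical single-partition RDD that reports a fixed preferred host.
class PinnedRDD(sc: SparkContext) extends RDD[Int](sc, Nil) {
  override def compute(split: Partition, context: TaskContext): Iterator[Int] =
    Iterator(1, 2, 3)
  override protected def getPartitions: Array[Partition] =
    Array(new Partition { override def index: Int = 0 })
  override protected def getPreferredLocations(split: Partition): Seq[String] =
    Seq("host-a.example.com")  // placeholder hostname
}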

core/src/main/scala/org/apache/spark/scheduler/Task.scala

Lines changed: 3 additions & 2 deletions

@@ -33,8 +33,9 @@ import org.apache.spark.util.Utils
 
 /**
  * A unit of execution. We have two kinds of Task's in Spark:
- * - [[org.apache.spark.scheduler.ShuffleMapTask]]
- * - [[org.apache.spark.scheduler.ResultTask]]
+ *
+ *  - [[org.apache.spark.scheduler.ShuffleMapTask]]
+ *  - [[org.apache.spark.scheduler.ResultTask]]
  *
  * A Spark job consists of one or more stages. The very last stage in a job consists of multiple
  * ResultTasks, while earlier stages consist of ShuffleMapTasks. A ResultTask executes the task
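
For illustration (not part of the commit), any shuffle followed by an action yields both task kinds. Assuming an existing SparkContext `sc`:

// reduceByKey introduces a shuffle, so this job has two stages: the first
// runs ShuffleMapTasks that write shuffle output, and the last runs
// ResultTasks that return the counts to the driver.
val counts = sc.parallelize(Seq("a", "b", "a"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
  .collect()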

core/src/main/scala/org/apache/spark/serializer/SerializationDebugger.scala

Lines changed: 7 additions & 6 deletions

@@ -53,12 +53,13 @@ private[spark] object SerializationDebugger extends Logging {
   /**
    * Find the path leading to a not serializable object. This method is modeled after OpenJDK's
    * serialization mechanism, and handles the following cases:
-   * - primitives
-   * - arrays of primitives
-   * - arrays of non-primitive objects
-   * - Serializable objects
-   * - Externalizable objects
-   * - writeReplace
+   *
+   *  - primitives
+   *  - arrays of primitives
+   *  - arrays of non-primitive objects
+   *  - Serializable objects
+   *  - Externalizable objects
+   *  - writeReplace
    *
    * It does not yet handle writeObject override, but that shouldn't be too hard to do either.
    */
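
For context (not part of the diff), this object produces the serialization stack attached to task-serialization failures. A minimal sketch that would exercise it, assuming an existing SparkContext `sc` and a made-up class:

// ConnectionHolder is not Serializable, so capturing `holder` in the closure
// fails with org.apache.spark.SparkException: Task not serializable, and
// SerializationDebugger reports the path to the offending reference.
class ConnectionHolder { val socket = new java.net.Socket() }
val holder = new ConnectionHolder
sc.parallelize(1 to 10).map(i => (holder.hashCode, i)).collect()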
