
Commit 40d0396

tedyu authored and marmbrus committed
[DOC] Adjust coverage for partitionBy()
This is the related thread: http://search-hadoop.com/m/q3RTtO3ReeJ1iF02&subj=Re+partitioning+json+data+in+spark

Michael suggested fixing the doc. Please review.

Author: tedyu <yuzhihong@gmail.com>

Closes apache#10499 from ted-yu/master.
1 parent 573ac55 commit 40d0396

File tree

1 file changed

+1
-1
lines changed


sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala

Lines changed: 1 addition & 1 deletion

@@ -119,7 +119,7 @@ final class DataFrameWriter private[sql](df: DataFrame) {
     * Partitions the output by the given columns on the file system. If specified, the output is
     * laid out on the file system similar to Hive's partitioning scheme.
     *
-    * This is only applicable for Parquet at the moment.
+    * This was initially applicable for Parquet but in 1.5+ covers JSON, text, ORC and avro as well.
     *
     * @since 1.4.0
     */
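A minimal sketch of the behavior the updated doc describes: `partitionBy()` used with a non-Parquet source (JSON here), as supported in Spark 1.5+. The input path, output path, and `year` column are illustrative assumptions, not part of the commit.

```scala
import org.apache.spark.sql.SQLContext

// Assuming an existing SQLContext `sqlContext` (Spark 1.5-era API)
// and a JSON dataset with a `year` field:
val df = sqlContext.read.json("events.json")

df.write
  .format("json")        // per the updated doc, also works for text, ORC, avro
  .partitionBy("year")   // lays the output out Hive-style: .../year=2015/part-...
  .save("events_partitioned")
```

Each distinct value of the partition column becomes a subdirectory in the output path, which is the Hive-style layout the surrounding scaladoc refers to.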
