[SPARK-12318][SPARKR] Save mode in SparkR should be error by default
shivaram Please help review.
Author: Jeff Zhang <zjffdu@apache.org>
Closes apache#10290 from zjffdu/SPARK-12318.
(cherry picked from commit 2eb5af5)
Signed-off-by: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
docs/sparkr.md: 8 additions, 1 deletion
```diff
@@ -148,7 +148,7 @@ printSchema(people)
 </div>
 
 The data sources API can also be used to save out DataFrames into multiple file formats. For example we can save the DataFrame from the previous example
-to a Parquet file using `write.df`
+to a Parquet file using `write.df` (Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API)
 
 <div data-lang="r" markdown="1">
 {% highlight r %}
@@ -387,3 +387,10 @@ The following functions are masked by the SparkR package:
 Since part of SparkR is modeled on the `dplyr` package, certain functions in SparkR share the same names with those in `dplyr`. Depending on the load order of the two packages, some functions from the package loaded first are masked by those in the package loaded after. In such case, prefix such calls with the package name, for instance, `SparkR::cume_dist(x)` or `dplyr::cume_dist(x)`.
 
 You can inspect the search path in R with [`search()`](https://stat.ethz.ch/R-manual/R-devel/library/base/html/search.html)
+
+
+# Migration Guide
+
+## Upgrading From SparkR 1.6 to 1.7
+
+ - Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API.
```
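For context, a minimal sketch of what the changed default means for callers, assuming the Spark 1.6-era SparkR API (`sparkR.init`/`sparkRSQL.init`); the input and output paths below are placeholders:

```r
library(SparkR)

# Minimal sketch, assuming the Spark 1.6-era SparkR entry points;
# the JSON input and Parquet output paths are placeholders.
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)

df <- read.df(sqlContext, "examples/src/main/resources/people.json", source = "json")

# With the default mode now "error", this call fails if "people.parquet"
# already exists, instead of silently appending to it.
write.df(df, path = "people.parquet", source = "parquet")

# The old behavior must now be requested explicitly.
write.df(df, path = "people.parquet", source = "parquet", mode = "append")
```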