
Commit d49e3f1

Disable spark.reducer.maxReqSizeShuffleToMem
1 parent bcae03f commit d49e3f1

File tree

2 files changed: +1 -9 lines changed


core/src/main/scala/org/apache/spark/internal/config/package.scala

Lines changed: 1 addition & 1 deletion
@@ -326,7 +326,7 @@ package object config {
       .doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
         "above this threshold. This is to avoid a giant request takes too much memory.")
       .bytesConf(ByteUnit.BYTE)
-      .createWithDefaultString("200m")
+      .createWithDefault(Long.MaxValue)
 
   private[spark] val TASK_METRICS_TRACK_UPDATED_BLOCK_STATUSES =
     ConfigBuilder("spark.taskMetrics.trackUpdatedBlockStatuses")

docs/configuration.md

Lines changed: 0 additions & 8 deletions
@@ -528,14 +528,6 @@ Apart from these, the following properties are also available, and may be useful
     By allowing it to limit the number of fetch requests, this scenario can be mitigated.
   </td>
 </tr>
-<tr>
-  <td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
-  <td>200m</td>
-  <td>
-    The blocks of a shuffle request will be fetched to disk when size of the request is above
-    this threshold. This is to avoid a giant request takes too much memory.
-  </td>
-</tr>
 <tr>
   <td><code>spark.shuffle.compress</code></td>
   <td>true</td>
