@@ -30,6 +30,8 @@ command set:
   size that the CLI uses for multipart transfers of individual files.
 * ``max_bandwidth`` - The maximum bandwidth that will be consumed for uploading
   and downloading data to and from Amazon S3.
+* ``io_chunksize`` - The maximum size of read parts that can be queued in-memory
+  to be written for a download.
 
 For experimental ``s3`` configuration values, see the
 `Experimental Configuration Values <#experimental-configuration-values>`__
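For reference, these values live under the ``s3`` subsection of the CLI config file (``~/.aws/config``). A minimal sketch, with illustrative values:

```ini
# ~/.aws/config -- values shown are illustrative, not recommendations
[default]
s3 =
  max_bandwidth = 50MB/s
  io_chunksize = 256KB
```

The same keys can also be set from the command line, e.g. ``aws configure set default.s3.io_chunksize 256KB``.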
@@ -208,6 +210,26 @@ threads having to wait unnecessarily which can lead to excess resource
 consumption and connection timeouts.
 
 
+io_chunksize
+------------
+
+**Default** - ``256KB``
+
+When a GET request is made for a download, the response contains a file-like
+object that streams data fetched from S3. Chunks are read from the stream and
+queued in-memory for writes. ``io_chunksize`` configures the maximum size of
+elements in the IO queue. This value can be specified using the same semantics
+as ``multipart_threshold``; that is, either as a number of bytes expressed as
+an integer, or using a size suffix.
+
+Increasing this value may result in higher overall throughput by preventing
+blocking in cases where large objects are downloaded in environments where
+network speed exceeds disk write speed. It is recommended to configure
+``io_chunksize`` only if overall download throughput is constrained by disk
+writes. When network IO is the bottleneck, configure
+``max_concurrent_requests`` instead.
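The reader/writer pattern described above can be sketched in a few lines. This is an illustrative model, not the CLI's actual implementation; the ``download`` function and ``max_queued`` parameter are hypothetical, and simply show how the chunk size caps the size of each element buffered between a reading thread and a writing thread:

```python
# Illustrative sketch (not the AWS CLI's implementation) of the download
# pattern the docs describe: a reader thread pulls fixed-size chunks from
# a stream and queues them in memory for a writer.
import io
import queue
import threading

IO_CHUNKSIZE = 256 * 1024  # mirrors the documented 256KB default


def download(stream, sink, chunksize=IO_CHUNKSIZE, max_queued=16):
    """Copy `stream` to `sink` through a bounded in-memory queue.

    `chunksize` caps the size of each queued element, so peak memory used
    for buffered chunks is roughly chunksize * max_queued.
    """
    chunks = queue.Queue(maxsize=max_queued)

    def reader():
        while True:
            chunk = stream.read(chunksize)
            chunks.put(chunk)  # blocks when the writer falls behind
            if not chunk:      # empty bytes signals end of stream
                return

    t = threading.Thread(target=reader)
    t.start()
    written = 0
    while True:
        chunk = chunks.get()
        if not chunk:
            break
        sink.write(chunk)
        written += len(chunk)
    t.join()
    return written


# Usage: copy a 1 MiB in-memory "object" in 256KB chunks.
src = io.BytesIO(b"x" * (1024 * 1024))
dst = io.BytesIO()
assert download(src, dst) == 1024 * 1024
```

A larger ``chunksize`` means fewer, bigger writes queued at once; the bounded queue is what stalls a fast network read when the disk cannot keep up.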
+
+
 use_accelerate_endpoint
 -----------------------
 