Conversation


@kskalski kskalski commented Nov 6, 2024

Closes #404



@durch durch changed the base branch from master to 0.37.0-alpha.1 September 16, 2025 20:10
@durch durch requested a review from Copilot September 16, 2025 20:10
Contributor

Copilot AI left a comment


Pull Request Overview

This PR refactors the streaming upload functionality to process chunks sequentially rather than concurrently, addressing issue #404. The change simplifies the multipart upload logic by removing the concurrent chunk processing and instead handling one chunk at a time.

Key changes:

  • Replaces concurrent chunk processing with sequential processing, using try_join to parallelize only the current chunk upload and the next chunk read (see the sketch below)
  • Simplifies error handling by removing nested match statements for abort scenarios
  • Eliminates the intermediate collection and transformation of ETags by building the parts vector directly
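A minimal sketch of that sequential scheme, with placeholder read_chunk / upload_part closures standing in for the real I/O; nothing below is the PR's actual code:

use futures_util::try_join; // futures::try_join! works the same way

// Sketch only: overlap uploading the current chunk with reading the next one,
// so at most two chunks are held in memory at any time.
async fn upload_sequential<R, Fr, U, Fu>(
    mut read_chunk: R,
    upload_part: U,
) -> Result<Vec<(u32, String)>, Box<dyn std::error::Error>>
where
    R: FnMut() -> Fr,
    Fr: std::future::Future<Output = Result<Option<Vec<u8>>, Box<dyn std::error::Error>>>,
    U: Fn(u32, Vec<u8>) -> Fu,
    Fu: std::future::Future<Output = Result<String, Box<dyn std::error::Error>>>, // yields the part's ETag
{
    let mut parts = Vec::new();
    let mut part_number = 1u32;
    let mut current = read_chunk().await?;
    while let Some(chunk) = current {
        // Upload the current chunk and read the next one concurrently.
        let (etag, next) = try_join!(upload_part(part_number, chunk), read_chunk())?;
        parts.push((part_number, etag));
        part_number += 1;
        current = next;
    }
    Ok(parts)
}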

let etag = response_data.as_str()?;
etags.push(etag.to_string());
parts.push(Part {
etag: response.to_string()?,

Copilot AI Sep 16, 2025

The method call response.to_string()? appears incorrect. Based on the original code, which used response_data.as_str()?, this should likely be response.as_str()?.to_string(), first converting the body to a string slice and then into an owned String.

Suggested change
etag: response.to_string()?,
etag: response.as_str()?.to_string(),
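
For context on why the extra step is needed, a short sketch, assuming ResponseData::as_str() returns a Result-wrapped &str as the original code above suggests; the part_number field is an assumption for illustration:

// as_str() borrows the response body as &str, so an owned String for the
// Part requires the explicit .to_string() call.
let etag = response.as_str()?.to_string();
parts.push(Part { part_number, etag });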

@durch
Owner

durch commented Sep 16, 2025

@kskalski appreciate your patience, will be released in 0.37.*, very nice work, thank you!

@durch
Owner

durch commented Sep 16, 2025

Thank you for identifying this memory issue and providing a fix!

You're absolutely right that the current implementation reads ALL chunks into memory before starting uploads, which causes memory exhaustion for large files and connection timeouts.

Your sequential approach correctly fixes the memory issue. However, we're concerned about the performance impact of processing chunks one at a time, especially for large files or high-latency connections.

Better Solution We'll Implement: Bounded Parallelism

We'll implement a sliding window approach with bounded parallelism that:

  • Maintains a small number of chunks in flight simultaneously (e.g., 3-5)
  • As each chunk upload completes, reads and starts the next one
  • Keeps memory usage bounded while preserving upload performance

This can be done using futures::stream::FuturesUnordered with a buffer limit or a semaphore-based approach.
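
A minimal sketch of that sliding-window idea using futures_util's FuturesUnordered; the chunk reader and uploader are placeholder closures and the window size is an assumption:

use futures_util::stream::{FuturesUnordered, StreamExt};

// Keep at most MAX_IN_FLIGHT chunk uploads pending; as each one completes,
// read and enqueue the next chunk, so memory use stays bounded.
const MAX_IN_FLIGHT: usize = 4;

async fn upload_bounded<R, U, Fut>(
    mut read_chunk: R,
    upload_chunk: U,
) -> Result<(), Box<dyn std::error::Error>>
where
    R: FnMut() -> Option<Vec<u8>>, // placeholder: returns None at end of input
    U: Fn(Vec<u8>) -> Fut,
    Fut: std::future::Future<Output = Result<(), Box<dyn std::error::Error>>>,
{
    let mut in_flight = FuturesUnordered::new();
    loop {
        // Top up the window with freshly read chunks.
        while in_flight.len() < MAX_IN_FLIGHT {
            match read_chunk() {
                Some(chunk) => in_flight.push(upload_chunk(chunk)),
                None => break,
            }
        }
        // Wait for one upload to finish, or stop once everything is drained.
        match in_flight.next().await {
            Some(result) => result?,
            None => break,
        }
    }
    Ok(())
}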

Thank you again for identifying this issue and demonstrating the problem clearly. We'll build on your work to create a solution that addresses both the memory and performance concerns.

@durch durch closed this Sep 16, 2025
durch added a commit that referenced this pull request Sep 16, 2025
* Kick off 0.37.0 version, update dependencies, clippy lints

* fix(s3): exclude content-length for ListBuckets (#428)

* Switched futures to futures-util (#426)

Co-authored-by: Mohammed Jassim Alkarkhi <[email protected]>

* feat: Add builder pattern for PUT operations with custom headers

Implements a flexible builder pattern for S3 PUT operations that allows
setting custom headers, content type, and metadata on a per-request basis.
This replaces the approach from PR #425 with a more extensible design.

Key changes:
- Add PutObjectRequest builder for regular uploads with custom headers
- Add PutObjectStreamRequest builder for streaming uploads
- Support custom headers including Cache-Control, Content-Encoding, metadata
- Integrate with existing multipart upload flow for large files
- Use Result types for builder methods to handle invalid header values

The builder pattern provides a cleaner API that avoids method proliferation
while maintaining backward compatibility with existing PUT methods.
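
A purely hypothetical usage sketch of what such a builder could look like from the caller's side; the constructor and method names below (put_object_builder, with_content_type, with_header, execute) are assumptions for illustration, not the crate's confirmed API:

// Hypothetical builder usage; method names are assumptions.
let data = b"{\"hello\":\"world\"}";
let response = bucket
    .put_object_builder("/config.json", data.as_slice())?  // assumed constructor
    .with_content_type("application/json")?                // builder methods return Result
    .with_header("Cache-Control", "max-age=300")?          // custom header support
    .execute()                                              // assumed terminal call
    .await?;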

* Fix Bucket::get_object_range_to_writer() sync impl (#413)

* R2: EU jurisdiction endpoint (#409)

* Correct R2/Custom region comments

* feat(region): add Cloudflare R2 EU jurisdiction support

Add new R2Eu variant to Region enum to support Cloudflare R2's EU
jurisdiction endpoint with format "{account_id}.eu.r2.cloudflarestorage.com"
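
A short sketch of constructing the new variant, assuming it carries the account id the same way the existing R2 variant does (the field name is an assumption):

use awsregion::Region; // the aws-region crate

// Assumed shape of the new EU-jurisdiction variant.
let region = Region::R2Eu {
    account_id: "abc123".to_string(),
};
// The endpoint should resolve to "abc123.eu.r2.cloudflarestorage.com" per the format above.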

* Update aws-region/src/region.rs

Co-authored-by: Copilot <[email protected]>

---------

Co-authored-by: Drazen Urch <[email protected]>
Co-authored-by: Copilot <[email protected]>

* bucket: ensure Bucket::exists() honours 'dangerous' config (#415)

* bucket: ensure Bucket::exists() honours 'dangerous' config

* Update s3/src/bucket.rs

Co-authored-by: Copilot <[email protected]>

---------

Co-authored-by: Drazen Urch <[email protected]>
Co-authored-by: Copilot <[email protected]>

* refactor: restructure Makefiles to run fmt and clippy before tests

- Separated formatting, linting, and testing phases in build process
- fmt and clippy now run for all feature combinations first
- Tests run only after successful formatting and linting checks
- Improved CI workflow efficiency by catching style issues early

* fix: ensure Bucket::exists() honors dangerous SSL config (PR #415)

- Renamed list_buckets_ to _list_buckets for consistency
- Fixed error variant configuration for InvalidHeaderName
- Added test coverage for dangerous SSL config with exists() method

* fix: add RUST_S3_SKIP_LOCATION_CONSTRAINT env var for LocalStack compatibility

Fixes #411 by allowing users to skip sending location constraints in bucket
creation requests. LocalStack and some other S3-compatible services don't
support location constraints in the request body and return
InvalidLocationConstraint errors.

Users can now set RUST_S3_SKIP_LOCATION_CONSTRAINT=true (or 1) to skip
sending the location constraint when creating buckets.
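
A minimal sketch of opting in from application code; setting the variable in the shell before running works just as well:

// Skip the <LocationConstraint> body for S3-compatible services like LocalStack.
// (In edition 2024 set_var is unsafe; exporting the variable in the shell avoids that.)
std::env::set_var("RUST_S3_SKIP_LOCATION_CONSTRAINT", "true");
// ...then create the bucket as usual; the constraint is omitted from the request body.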

* feat: add AsyncRead implementation for ResponseDataStream

Implements AsyncRead trait for ResponseDataStream to enable streaming S3 objects
without loading them entirely into memory. This allows use of standard Rust I/O
utilities like tokio::io::copy() for more efficient and ergonomic file transfers.

- Add tokio::io::AsyncRead implementation for tokio runtime
- Add async_std::io::Read implementation for async-std runtime
- Add comprehensive tests for both implementations
- Add surf to with-async-std feature for proper async-std support

This addresses the issue raised in PR #410 and enables memory-efficient streaming
for large S3 objects.
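
A minimal usage sketch under the tokio runtime, assuming the stream comes from Bucket::get_object_stream and that the new AsyncRead impl satisfies tokio::io::copy's bounds; paths and names are placeholders:

use s3::Bucket;

// Stream an object straight to disk without buffering it fully in memory.
async fn download_to_file(bucket: &Bucket, key: &str) -> Result<(), Box<dyn std::error::Error>> {
    let mut stream = bucket.get_object_stream(key).await?;
    let mut file = tokio::fs::File::create("/tmp/object.bin").await?;
    // Works because ResponseDataStream now implements tokio::io::AsyncRead.
    tokio::io::copy(&mut stream, &mut file).await?;
    Ok(())
}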

* Prepare for PR #407 merge - update dependencies and fix formatting

* fix: implement bounded parallelism for multipart uploads to prevent memory exhaustion

- Adds dynamic memory-based concurrency control for multipart uploads
- Uses FuturesUnordered to maintain a sliding window of concurrent chunk uploads
- Calculates optimal concurrent chunks based on available system memory (2-10 chunks)
- Prevents memory exhaustion issue where all chunks were loaded before uploading (fixes #404)
- Adds optional sysinfo dependency, only enabled with async features
- Maintains backward compatibility and performance through bounded parallelism

This solution fixes the "Broken pipe" errors that occurred with large file uploads
by ensuring we only keep a limited number of chunks in memory at once, while still
maintaining good upload performance through parallel chunk uploads.
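
A sketch of the memory-based window sizing described here, using the sysinfo crate mentioned above; the exact formula and bounds in the released code may differ:

use sysinfo::System; // recent sysinfo; older versions also need the SystemExt trait

// Pick a concurrency window from available memory, clamped to the 2..=10 range
// mentioned in the commit message. The 25% memory budget is an assumption.
fn concurrent_chunks(chunk_size: u64) -> usize {
    let mut sys = System::new_all();
    sys.refresh_memory();
    let available = sys.available_memory(); // bytes in recent sysinfo releases
    ((available / 4 / chunk_size.max(1)) as usize).clamp(2, 10)
}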

* perf: increase max concurrent chunks from 10 to 100 for better performance

The previous limit of 10 concurrent chunks was too conservative and caused
significant performance degradation. Increasing to 100 allows better
parallelism while still maintaining memory-based limits.

* fix: correct delete_bucket_lifecycle to use DeleteBucketLifecycle command

The delete_bucket_lifecycle method was incorrectly using Command::DeleteBucket
instead of Command::DeleteBucketLifecycle, which could have caused entire buckets
to be deleted instead of just their lifecycle configuration.

Closes #414

* docs: clarify ETag handling in response_data for PUT operations

Added detailed comments explaining why the library extracts ETag from headers
and returns it as the response body when etag=true. This behavior is used for
PUT operations where S3 returns empty or non-useful response bodies, and the
calling code needs easy access to the ETag value.

Added TODO comments for future refactoring to properly handle response bodies
and ETags separately, though this would be a breaking change.

Closes #430

* fix: remove trailing slashes from custom endpoints to prevent signature mismatches

Custom endpoints with trailing slashes (like 'https://s3.gra.io.cloud.ovh.net/')
were causing 403 Signature Mismatch errors. The trailing slash was being included
in the host header, causing signature validation to fail.

This fix ensures that trailing slashes are stripped from custom endpoints,
making the behavior consistent with AWS CLI and other SDKs.
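
The normalization itself is a one-liner; a sketch, with the endpoint value as an example:

// Trailing slashes (one or many) are stripped before the endpoint is used
// to build the host header.
let endpoint = "https://s3.gra.io.cloud.ovh.net/";
let normalized = endpoint.trim_end_matches('/');
assert_eq!(normalized, "https://s3.gra.io.cloud.ovh.net");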

Added comprehensive tests to verify the fix handles various edge cases including
multiple trailing slashes and endpoints with ports.

Closes #429

* fix: preserve standard ports in presigned URLs for signature validation

When custom endpoints explicitly specify standard ports (80 for HTTP, 443 for HTTPS),
these ports must be preserved in presigned URLs. The URL crate's to_string() method
omits standard ports by design, but this causes signature validation failures when
servers expect the port in the Host header.

This fix rebuilds the presigned URL string with the original host (including port)
when a custom region explicitly specifies a standard port. Non-standard ports
continue to work as before since they are naturally preserved by URL parsing.
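
A small illustration of the underlying url-crate behaviour and the rebuild step, with a placeholder endpoint; the helper names in the actual fix may differ:

use url::Url;

// The url crate normalizes away default ports, so ":443" disappears on serialization.
let parsed = Url::parse("https://storage.example.com:443/bucket/key?X-Amz-Signature=abc")
    .expect("valid url");
assert_eq!(parsed.port(), None);
assert!(!parsed.to_string().contains(":443"));

// Rebuild the presigned URL with the original authority so the explicit port
// survives and matches the Host header the server signs against.
let original_host = "storage.example.com:443"; // taken from the custom region
let rebuilt = format!(
    "{}://{}{}?{}",
    parsed.scheme(),
    original_host,
    parsed.path(),
    parsed.query().unwrap_or("")
);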

Added comprehensive tests to verify correct behavior for both standard and
non-standard ports.

Closes #419

* Bump versions, fmt

---------

Co-authored-by: George <[email protected]>
Co-authored-by: Mohammed Jassim Alkarkhi <[email protected]>
Co-authored-by: Mohammed Jassim Alkarkhi <[email protected]>
Co-authored-by: Jasin Bushnaief <[email protected]>
Co-authored-by: Ruben De Smet <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: whitty <[email protected]>


Development

Successfully merging this pull request may close these issues.

Make put_object_stream for large files progressively upload several chunks at a time
