Merged
0.37.0 #432
Conversation
Brownian Motion (Brass) recommendation: Stop. Summary: PR introduces significant changes with potential over-engineering and complexity.
Co-authored-by: Mohammed Jassim Alkarkhi <[email protected]>
Implements a flexible builder pattern for S3 PUT operations that allows setting custom headers, content type, and metadata on a per-request basis. This replaces the approach from PR #425 with a more extensible design.

Key changes:
- Add PutObjectRequest builder for regular uploads with custom headers
- Add PutObjectStreamRequest builder for streaming uploads
- Support custom headers including Cache-Control, Content-Encoding, and metadata
- Integrate with the existing multipart upload flow for large files
- Use Result types for builder methods to handle invalid header values

The builder pattern provides a cleaner API that avoids method proliferation while maintaining backward compatibility with the existing PUT methods.
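To make the last point concrete, here is a minimal sketch of a Result-returning setter that validates header values up front. The struct and method below are illustrative only and are not the crate's actual types:

use std::collections::HashMap;
use http::header::{HeaderName, HeaderValue, CACHE_CONTROL};

// Illustrative sketch, not the crate's real builder: setters that touch HTTP
// headers validate their input and surface errors via Result instead of
// panicking at request time.
struct PutRequestSketch {
    headers: HashMap<HeaderName, HeaderValue>,
}

impl PutRequestSketch {
    fn with_cache_control(mut self, value: &str) -> Result<Self, http::Error> {
        // HeaderValue::from_str rejects invalid characters, so the caller sees
        // the problem while building the request, not while sending it.
        self.headers.insert(CACHE_CONTROL, HeaderValue::from_str(value)?);
        Ok(self)
    }
}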
* Correct R2/Custom region comments
* feat(region): add Cloudflare R2 EU jurisdiction support
Add new R2Eu variant to Region enum to support Cloudflare R2's EU
jurisdiction endpoint with format "{account_id}.eu.r2.cloudflarestorage.com"
* Update aws-region/src/region.rs
Co-authored-by: Copilot <[email protected]>
---------
Co-authored-by: Drazen Urch <[email protected]>
Co-authored-by: Copilot <[email protected]>
* bucket: ensure Bucket::exists() honours 'dangerous' config

* Update s3/src/bucket.rs

Co-authored-by: Copilot <[email protected]>

---------

Co-authored-by: Drazen Urch <[email protected]>
Co-authored-by: Copilot <[email protected]>
- Separated formatting, linting, and testing phases in the build process
- fmt and clippy now run for all feature combinations first
- Tests run only after successful formatting and linting checks
- Improved CI workflow efficiency by catching style issues early
- Renamed list_buckets_ to _list_buckets for consistency
- Fixed error variant configuration for InvalidHeaderName
- Added test coverage for dangerous SSL config with the exists() method
…atibility

Fixes #411 by allowing users to skip sending location constraints in bucket creation requests. LocalStack and some other S3-compatible services don't support location constraints in the request body and return InvalidLocationConstraint errors. Users can now set RUST_S3_SKIP_LOCATION_CONSTRAINT=true (or 1) to skip sending the location constraint when creating buckets.
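As a rough sketch (not the library's actual internals), the env-var check described above might look like this; setting the variable before creating buckets would then omit the constraint from the request body:

// Sketch only: accept "true" or "1", default to sending the constraint.
fn skip_location_constraint() -> bool {
    std::env::var("RUST_S3_SKIP_LOCATION_CONSTRAINT")
        .map(|v| v == "true" || v == "1")
        .unwrap_or(false)
}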
Implements the AsyncRead trait for ResponseDataStream to enable streaming S3 objects without loading them entirely into memory. This allows use of standard Rust I/O utilities like tokio::io::copy() for more efficient and ergonomic file transfers.

- Add tokio::io::AsyncRead implementation for the tokio runtime
- Add async_std::io::Read implementation for the async-std runtime
- Add comprehensive tests for both implementations
- Add surf to the with-async-std feature for proper async-std support

This addresses the issue raised in PR #410 and enables memory-efficient streaming for large S3 objects.
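For example, a download can be streamed straight to disk. This is a minimal sketch assuming the tokio runtime, that get_object_stream returns the ResponseDataStream described above, and that the stream is Unpin; the output file name is arbitrary:

use s3::Bucket;

async fn download_to_file(bucket: &Bucket, path: &str) -> Result<(), Box<dyn std::error::Error>> {
    // ResponseDataStream now implements tokio::io::AsyncRead, so it can be
    // handed to standard I/O utilities without buffering the whole object.
    let mut stream = bucket.get_object_stream(path).await?;
    let mut file = tokio::fs::File::create("download.bin").await?;
    tokio::io::copy(&mut stream, &mut file).await?;
    Ok(())
}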
…emory exhaustion

- Adds dynamic memory-based concurrency control for multipart uploads
- Uses FuturesUnordered to maintain a sliding window of concurrent chunk uploads
- Calculates optimal concurrent chunks based on available system memory (2-10 chunks)
- Prevents the memory exhaustion issue where all chunks were loaded before uploading (fixes #404)
- Adds an optional sysinfo dependency, only enabled with async features
- Maintains backward compatibility and performance through bounded parallelism

This fixes the "Broken pipe" errors that occurred with large file uploads by keeping only a limited number of chunks in memory at once, while still maintaining good upload performance through parallel chunk uploads.
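The sliding-window idea can be sketched in isolation. The helper below is illustrative only (not the crate's code): it keeps at most max_in_flight uploads alive at once, so when chunks are produced lazily only a bounded number of buffers exist at any moment:

use futures::stream::{FuturesUnordered, StreamExt};

async fn upload_chunks<I, F, Fut>(
    chunks: I,
    max_in_flight: usize,
    upload: F,
) -> Result<(), Box<dyn std::error::Error>>
where
    I: IntoIterator<Item = Vec<u8>>,
    F: Fn(Vec<u8>) -> Fut,
    Fut: std::future::Future<Output = Result<(), Box<dyn std::error::Error>>>,
{
    let mut in_flight = FuturesUnordered::new();
    for chunk in chunks {
        if in_flight.len() >= max_in_flight {
            // Finish one upload before starting the next, bounding how many
            // chunk buffers are alive at any moment.
            if let Some(done) = in_flight.next().await {
                done?;
            }
        }
        in_flight.push(upload(chunk));
    }
    // Drain the remaining uploads.
    while let Some(done) = in_flight.next().await {
        done?;
    }
    Ok(())
}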
…mance

The previous limit of 10 concurrent chunks was too conservative and caused significant performance degradation. Increasing it to 100 allows better parallelism while still maintaining the memory-based limits.
…mand

The delete_bucket_lifecycle method was incorrectly using Command::DeleteBucket instead of Command::DeleteBucketLifecycle, which could have caused entire buckets to be deleted instead of just their lifecycle configuration.

Closes #414
Added detailed comments explaining why the library extracts the ETag from headers and returns it as the response body when etag=true. This behavior is used for PUT operations, where S3 returns empty or non-useful response bodies and the calling code needs easy access to the ETag value. Added TODO comments for future refactoring to properly handle response bodies and ETags separately, though this would be a breaking change.

Closes #430
…re mismatches

Custom endpoints with trailing slashes (like 'https://s3.gra.io.cloud.ovh.net/') were causing 403 Signature Mismatch errors. The trailing slash was being included in the host header, causing signature validation to fail. This fix ensures that trailing slashes are stripped from custom endpoints, making the behavior consistent with the AWS CLI and other SDKs.

Added comprehensive tests to verify the fix handles various edge cases, including multiple trailing slashes and endpoints with ports.

Closes #429
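A tiny sketch of the normalization (illustrative, not the crate's internal helper); the second host below is a made-up example of an endpoint with a port:

// Illustrative helper: strip trailing slashes (including repeated ones) so
// the endpoint used for the host header matches what gets signed.
fn normalize_endpoint(endpoint: &str) -> &str {
    endpoint.trim_end_matches('/')
}

#[test]
fn strips_trailing_slashes() {
    assert_eq!(
        normalize_endpoint("https://s3.gra.io.cloud.ovh.net/"),
        "https://s3.gra.io.cloud.ovh.net"
    );
    // Multiple trailing slashes and endpoints with ports are handled too.
    assert_eq!(
        normalize_endpoint("https://s3.example.com:9000//"),
        "https://s3.example.com:9000"
    );
}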
When custom endpoints explicitly specify standard ports (80 for HTTP, 443 for HTTPS), these ports must be preserved in presigned URLs. The URL crate's to_string() method omits standard ports by design, but this causes signature validation failures when servers expect the port in the Host header.

This fix rebuilds the presigned URL string with the original host (including the port) when a custom region explicitly specifies a standard port. Non-standard ports continue to work as before, since they are naturally preserved by URL parsing.

Added comprehensive tests to verify correct behavior for both standard and non-standard ports.

Closes #419
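The underlying url-crate behaviour can be demonstrated in a few lines. This is a sketch of the problem and the rebuild step, not the library's exact code, and the host name is hypothetical:

use url::Url;

fn main() -> Result<(), url::ParseError> {
    // The url crate treats 443 as the default for https, so the explicit port
    // disappears from both port() and the serialized string.
    let parsed = Url::parse("https://minio.example.com:443/bucket/key?X-Amz-Signature=abc")?;
    assert_eq!(parsed.port(), None);
    assert!(!parsed.as_str().contains(":443"));

    // Rebuild the presigned URL from the original host string, which still
    // carries the explicit port that the server expects in the Host header.
    let original_host = "minio.example.com:443";
    let rebuilt = parsed
        .as_str()
        .replacen(parsed.host_str().unwrap(), original_host, 1);
    assert!(rebuilt.starts_with("https://minio.example.com:443/bucket/key"));
    Ok(())
}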
Brownian Motion (Brass) recommendation: Refactor. Summary: PR addresses performance and API improvements but risks over-complication.
Release 0.37.0 - Performance, Reliability, and API Improvements
This release brings significant improvements in performance, memory management, and reliability, along with several bug fixes and new features.
🚀 Performance Improvements
Multipart Upload Optimizations
- Dynamic, memory-based concurrency control keeps only a bounded number of multipart chunks in memory at once, preventing memory exhaustion and "Broken pipe" errors on large uploads (fixes #404)
- The concurrent chunk limit was raised from 10 to 100 for better parallelism while keeping the memory-based limits
🐛 Bug Fixes
Endpoint and URL Handling
- Trailing slashes are now stripped from custom endpoints, fixing 403 Signature Mismatch errors (closes #429)
- Explicitly specified standard ports (80/443) are preserved in presigned URLs (closes #419)
API Correctness
- delete_bucket_lifecycle now sends the DeleteBucketLifecycle command instead of DeleteBucket (closes #414)
- Bucket::exists() honours the 'dangerous' SSL configuration
✨ New Features
Builder Pattern for PUT Operations
Added a fluent builder API for PUT operations with custom headers:
bucket.put_object_builder("/my-file.txt", b"Hello, World!")
    .with_content_type("text/plain")
    .with_cache_control("public, max-age=3600")?
    .with_metadata("author", "john-doe")?
    .execute()
    .await?;
Region Support
- New Region::R2Eu variant for Cloudflare R2's EU jurisdiction endpoint ("{account_id}.eu.r2.cloudflarestorage.com")
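A hedged construction sketch: the R2Eu variant name comes from the notes above, while the account_id field name and the example value are assumptions based on the documented endpoint format:

use s3::Region;

// Sketch only: the field name `account_id` is assumed from the endpoint
// format "{account_id}.eu.r2.cloudflarestorage.com".
fn r2_eu_region() -> Region {
    Region::R2Eu {
        account_id: "my-account-id".to_string(), // hypothetical account id
    }
}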
📚 Documentation
- Documented why the ETag is extracted from headers and returned as the response body when etag=true, with TODOs for a future breaking change (closes #430)