
aws cli fails for s3 copy function when run within amazoncorretto container image #9403


Closed

donovat opened this issue Mar 26, 2025 · 4 comments

Labels: bug (This issue is a bug.) · p3 (This is a minor priority issue.) · s3

Comments

donovat commented Mar 26, 2025

Describe the bug

I am using amazoncorretto:21 and amazoncorretto:23 as base images to build a Docker container that also needs to contain the AWS CLI tool, in this case to use its s3 commands to both list and copy files from the running container into S3 storage. Note: this is local S3 storage, not hosted Amazon S3 storage. However, the same aws cli command works when built from a different base image.
Below is the Dockerfile being utilised, followed by the two aws commands used within the running image.

FROM amazoncorretto:23
RUN yum install -y rsync procps-ng shadow-utils which wget tar unzip

RUN BINARY=yq_linux_amd64 && \
  LATEST=$(wget -qO- https://api.github.com/repos/mikefarah/yq/releases/latest 2>/dev/null | \
  grep browser_download_url | grep "$BINARY\"\$" | awk '{print $NF}' | tr -d '"') && \
  wget -q $LATEST -O /usr/bin/yq && chmod +x /usr/bin/yq

ENV NEXTFLOW_HOME=/usr/local/nextflow
ENV NXF_HOME=${NEXTFLOW_HOME}/.nextflow
ENV NXF_ASSETS=/mnt/nextflow/pangenome/assets

WORKDIR ${NEXTFLOW_HOME}
RUN groupadd -g 1001 nextflow && useradd -u 1001 -g nextflow nextflow

RUN curl -s https://get.nextflow.io | bash
RUN mv nextflow /usr/local/bin \
  && chmod 755 /usr/local/bin/nextflow

RUN chown nextflow:nextflow -R ${NEXTFLOW_HOME} \
  && chmod ugo=rwX -R ${NEXTFLOW_HOME}

# Download and install AWS CLI v2
#RUN yum install -y unzip curl
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip \
    && ./aws/install \
    && rm -rf awscliv2.zip aws
RUN aws --version

# Import the CA certificate into the JVM's truststore to work around SSL certificate validation errors
COPY service-ca.crt .
ENV TRUSTSTORE_FILE=/usr/lib/jvm/jre/lib/security/cacerts
ENV TRUSTSTORE_PASS=changeit
RUN keytool -importcert -trustcacerts -alias server-cert -file service-ca.crt -keystore $TRUSTSTORE_FILE -storepass $TRUSTSTORE_PASS -noprompt -storetype JKS

CMD ["bash"]

I have also tried building without the Nextflow elements of the image, but they are needed in the final image. Also, version 24 is not yet compatible with Nextflow.

The aws cli commands are:
(works)
aws --endpoint-url=${S3_ENDPOINT_URL} s3 ls s3://${S3_BUCKET_NAME}/
(fails)
aws --endpoint-url=${S3_ENDPOINT_URL} s3 cp /mnt/nextflow/3496e5a3-9079-475b-9e6c-75b8bfac6e8d/logfiles/3496e5a3-9079-475b-9e6c-75b8bfac6e8d_nextflow.log s3://${S3_BUCKET_NAME}/logfiles/

The error message is:
upload failed: ../../../mnt/nextflow/3496e5a3-9079-475b-9e6c-75b8bfac6e8d/logfiles/3496e5a3-9079-475b-9e6c-75b8bfac6e8d_nextflow.log to s3://nextflow-s3-prod/logfiles/3496e5a3-9079-475b-9e6c-75b8bfac6e8d_nextflow.log An error occurred (InvalidDigest) when calling the PutObject operation: The Content-MD5 you specified is not valid.
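
For reference, Content-MD5 is the base64-encoded binary MD5 digest of the request body, so (assuming openssl is available in the container) the value the CLI would have sent for this file can be computed locally and compared against what the backend calculated:

# Base64 of the raw MD5 digest — this is the Content-MD5 value for the file
openssl md5 -binary /mnt/nextflow/3496e5a3-9079-475b-9e6c-75b8bfac6e8d/logfiles/3496e5a3-9079-475b-9e6c-75b8bfac6e8d_nextflow.log | base64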

Note: This issue was also raised here: corretto/corretto-docker#234

Regression Issue

  • Select this option if this issue appears to be a regression.

Expected Behavior

For the s3 copy function to work, as it does on many other base images.

Current Behavior

See above.

Reproduction Steps

See the Dockerfile above; build the image and run it with the environment values set.
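
A minimal sketch of those steps (the image tag is hypothetical; the endpoint is a placeholder, and the bucket name is taken from the error output above):

# Build the image from the Dockerfile above
docker build -t corretto-awscli-test .

# Run it with the S3 environment values set
docker run --rm -it \
  -e S3_ENDPOINT_URL=https://s3.local.example \
  -e S3_BUCKET_NAME=nextflow-s3-prod \
  corretto-awscli-test

Then run the two aws commands listed above inside the container.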

Possible Solution

No response

Additional Information/Context

No response

CLI version used

aws-cli/2.17.22 & later.

Environment details (OS name and version, etc.)

amazoncorretto:21 & amazoncorretto:23 Docker images as base.. pulled down from docker-hub.

@donovat donovat added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Mar 26, 2025
@adev-code adev-code self-assigned this Apr 8, 2025
@adev-code adev-code added s3 investigating This issue is being investigated and/or work is in progress to resolve the issue. p3 This is a minor priority issue and removed needs-triage This issue or PR still needs to be triaged. labels Apr 8, 2025
adev-code commented
Hello @donovat, I tried to replicate the above with amazoncorretto and MinIO as a short example and did not get the issue; I was able to upload to S3 via the cp command using a container. Could you please verify whether you still get the issue at this moment? Also, please verify whether the third-party or local S3 storage conforms with the new S3 default integrity protections (#9214)?
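
For anyone hitting this against an S3-compatible backend that has not yet implemented the new integrity protections, a commonly suggested workaround (an assumption here, not confirmed as the fix for this particular issue) is to configure the CLI to compute checksums only when an operation requires them:

# ~/.aws/config — opt out of the new default request/response checksum behaviour
[default]
request_checksum_calculation = when_required
response_checksum_validation = when_required

The same settings can also be supplied via the AWS_REQUEST_CHECKSUM_CALCULATION and AWS_RESPONSE_CHECKSUM_VALIDATION environment variables.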

@adev-code adev-code added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. and removed investigating This issue is being investigated and/or work is in progress to resolve the issue. labels Apr 9, 2025
donovat (Author) commented Apr 10, 2025

Hi @adev-code - I have paused investigation/testing, as I have been told that the infrastructure team who look after this particular OpenShift cluster are replacing the attached S3 hardware with different hardware (planned for the next few days). I hope to re-test next week and see whether the issue is still present. Having discussed it with the team, they believe the issue could be a hardware fault (not yet confirmed), as it seems to happen only when moving medium-to-large files.
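
Worth noting that the failing operation is a single PutObject, and aws s3 cp only switches to multipart uploads above the configured threshold (8 MB by default). One crude way to probe whether the failure is tied to single-request uploads (a diagnostic suggestion, not a confirmed fix) would be to lower that threshold so the same file is sent in parts instead:

# Force multipart uploads for files above 5 MB, avoiding a single PutObject
aws configure set default.s3.multipart_threshold 5MB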

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Apr 10, 2025
adev-code commented
Ok, thanks for letting me know. It looks like there have been no updates for weeks.

github-actions bot commented
This issue is now closed. Comments on closed issues are hard for our team to see.
If you need more assistance, please open a new issue that references this one.
