{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":23418517,"defaultBranch":"trunk","name":"hadoop","ownerLogin":"apache","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2014-08-28T07:00:08.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/47359?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1726858601.0","currentOid":""},"activityList":{"items":[{"before":"1d1e4ac20f4a898b90f23f9312e6116ab8d42eff","after":"68ae025c9549677b45461d680bfb0322e030efcf","ref":"refs/heads/gh-pages","pushedAt":"2024-09-21T16:08:30.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"deploy: 28538d628ecff740e1ca8ae2741addb0db8cfd71","shortMessageHtmlLink":"deploy: 28538d6"}},{"before":"6bcc2541235486d30be5b5438327673039d07951","after":"28538d628ecff740e1ca8ae2741addb0db8cfd71","ref":"refs/heads/trunk","pushedAt":"2024-09-21T15:56:51.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"ayushtkn","name":"Ayush Saxena","path":"/ayushtkn","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/25608848?s=80&v=4"},"commit":{"message":"HADOOP-19164. Hadoop CLI MiniCluster is broken (#7050). Contributed by Ayush Saxena.\n\nReviewed-by: Vinayakumar B ","shortMessageHtmlLink":"HADOOP-19164. Hadoop CLI MiniCluster is broken (#7050). Contributed b…"}},{"before":"66a716f5a9e7f69a26305e15386f59f621fad87e","after":"1d1e4ac20f4a898b90f23f9312e6116ab8d42eff","ref":"refs/heads/gh-pages","pushedAt":"2024-09-20T21:50:45.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"deploy: 6bcc2541235486d30be5b5438327673039d07951","shortMessageHtmlLink":"deploy: 6bcc254"}},{"before":"91d0249c100f94427a8a1feffa0b6df781ec3137","after":"c1b57df87ed0d186a95db0923eab8ac73db42e7c","ref":"refs/heads/branch-3.4.1","pushedAt":"2024-09-20T21:46:31.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"mukund-thakur","name":"Mukund Thakur","path":"/mukund-thakur","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/10720944?s=80&v=4"},"commit":{"message":"HADOOP-19279. ABFS: Disabling Apache Http Client as Default Http Client for ABFS Driver(#7055)\n\nAs part of work done under HADOOP-19120 [ABFS]: ApacheHttpClient adaptation as network library - ASF JIRA\r\nApache HTTP Client was introduced as an alternative Network Library that can be used with ABFS Driver. Earlier JDK Http Client was the only supported network library.\r\n\r\nApache HTTP Client was found to be more helpful in terms of controls and knobs it provides to manage the Network aspects of the driver better. Hence, the default Network Client was made to be used with the ABFS Driver.\r\n\r\nRecently while running scale workloads, we observed a regression where some unexpected wait time was observed while establishing connections. A possible fix has been identified and we are working on getting it fixed.\r\nThere was also a possible NPE scenario was identified on the new network client code.\r\n\r\nUntil we are done with the code fixes and revalidated the whole Apache client flow, we would like to make JDK Client as default client again. 
  The Apache client support will still be there, but it will be disabled behind a config.
  Contributed by: manika137

2024-09-20 21:43 UTC · branch-3.4 · push by mukund-thakur (Mukund Thakur)
  HADOOP-19279 (#7055): same change as above, backported to branch-3.4.

2024-09-20 21:38 UTC · trunk · PR merge by mukund-thakur (Mukund Thakur)
  HADOOP-19279 (#7055): same change as above, merged to trunk.
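The HADOOP-19279 entries above note that Apache HTTP Client support stays available behind a config but do not name the switch. A minimal sketch of opting back in to the Apache client, assuming the fs.azure.networking.library key and APACHE_HTTP_CLIENT value introduced under HADOOP-19120 — verify the exact key and values against the ABFS documentation for your release:

```java
import org.apache.hadoop.conf.Configuration;

public class AbfsNetworkLibraryExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed key/value from HADOOP-19120; check the ABFS docs before relying on them.
    // After HADOOP-19279 the JDK HTTP client is the default, so this opts back in to
    // the Apache HTTP Client implementation.
    conf.set("fs.azure.networking.library", "APACHE_HTTP_CLIENT");
    System.out.println(conf.get("fs.azure.networking.library"));
  }
}
```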
2024-09-20 18:56 UTC · branch deletion by dependabot[bot]
  refs/heads/dependabot/maven/hadoop-project/com.google.protobuf-protobuf-java-3.25.5

2024-09-19 16:08 UTC · branch creation by dependabot[bot]
  refs/heads/dependabot/maven/hadoop-project/com.google.protobuf-protobuf-java-3.25.5
  Bump com.google.protobuf:protobuf-java in /hadoop-project from 2.5.0 to 3.25.5
  (direct production dependency; release notes, changelog and commits linked from the
  protocolbuffers/protobuf repository). Signed-off-by: dependabot[bot]

2024-09-19 13:55 UTC · branch-3.4 · push by steveloughran (Steve Loughran)
  HADOOP-19272. S3A: AWS SDK 2.25.53 warnings logged by transfer manager (#7048)
  Disables all logging below ERROR level in the AWS SDK Transfer Manager. This is done in
  ClientManagerImpl construction, so it happens automatically during S3A filesystem initialization.
  ITests verify that it is possible to restore the warning log (which validates the test suite and
  will identify when an SDK update fixes this regression) and that constructing an S3A FS instance
  disables the logging. The log-manipulation code is lifted from Cloudstore, where it was used to
  dynamically enable logging; it uses reflection to load the Log4J binding, and all uses of the API
  catch and swallow exceptions, which is needed to avoid failures when running against different
  log backends. This is an emergency fix; a better design for the reflection-based code using the
  new DynMethods classes is possible, but this is based on working code, which is always good.
  Contributed by Steve Loughran
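The HADOOP-19272 entry describes quieting a logger via reflection so there is no compile-time dependency on a particular logging backend. A minimal sketch of that technique, not the actual Hadoop implementation; the logger name below is an assumption for illustration:

```java
// Minimal sketch of reflective Log4J 1.x/reload4j level manipulation; not the Hadoop code.
public final class QuietLogger {
  private QuietLogger() {}

  /** Set the named logger to ERROR; swallow all failures so other log backends are unaffected. */
  public static void setToErrorQuietly(String loggerName) {
    try {
      Class<?> logManager = Class.forName("org.apache.log4j.LogManager");
      Class<?> levelClass = Class.forName("org.apache.log4j.Level");
      Object logger = logManager.getMethod("getLogger", String.class).invoke(null, loggerName);
      Object error = levelClass.getField("ERROR").get(null);
      logger.getClass().getMethod("setLevel", levelClass).invoke(logger, error);
    } catch (Exception | LinkageError e) {
      // No Log4J binding on the classpath, or a different backend: ignore, as the fix described above does.
    }
  }

  public static void main(String[] args) {
    // Logger name is an assumption for illustration only.
    setToErrorQuietly("software.amazon.awssdk.transfer.s3");
  }
}
```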
2024-09-19 13:01 UTC · gh-pages · force-push by github-actions[bot]
  deploy: ee2e5ac

2024-09-19 12:50 UTC · trunk · PR merge by steveloughran (Steve Loughran)
  HADOOP-19272. S3A: AWS SDK 2.25.53 warnings logged by transfer manager (#7048): same change as
  the branch-3.4 entry above, merged to trunk.

2024-09-17 12:37 UTC · gh-pages · force-push by github-actions[bot]
  deploy: d1311e5

2024-09-17 12:24 UTC · trunk · PR merge by brumi1024 (Benjamin Teke)
  YARN-11709. NodeManager should be marked unhealthy on localizer config issues (#7043)
2024-09-17 10:37 UTC · gh-pages · force-push by github-actions[bot]
  deploy: 182feb1

2024-09-17 10:25 UTC · trunk · PR merge by adoroszlai (Doroszlai, Attila)
  HADOOP-19277. Files and directories mixed up in TreeScanResults#dump (#7047)

2024-09-16 20:33 UTC · branch-3.4.1 · push by mukund-thakur (Mukund Thakur)
  HADOOP-19271. NPE in AbfsManagedApacheHttpConnection.toString() when not connected (#7040)
  Contributed by: Pranav Saxena

2024-09-16 20:33 UTC · branch-3.4 · push by mukund-thakur (Mukund Thakur)
  HADOOP-19271 (#7040): same change as above, backported to branch-3.4.

2024-09-16 17:50 UTC · branch creation by adoroszlai (Doroszlai, Attila)
  refs/heads/docker-hadoop-runner-jdk17-u2204
  HADOOP-19276. Create hadoop-runner based on JDK 17
2024-09-16 17:33 UTC · gh-pages · force-push by github-actions[bot]
  deploy: 4d968ad

2024-09-16 17:21 UTC · trunk · PR merge by mukund-thakur (Mukund Thakur)
  HADOOP-19271 (#7040): same change as above, merged to trunk.

2024-09-16 13:28 UTC · branch-3.4.1 · push by asfgit
  Revert "HADOOP-19195. S3A: Upgrade aws sdk v2 to 2.25.53 (#6900)"
  This reverts commit fc86a52c884f15b2f2fb401bbf0baaa36a057651. The rollback is due to
  HADOOP-19272 (S3A: AWS SDK 2.25.53 warnings logged about the transfer manager not using the
  CRT client). Change-Id: I324f75d62daa02650ff9d199a2e0fc465a2ea28a

2024-09-16 11:17 UTC · branch-3.4 · PR merge by steveloughran (Steve Loughran)
  HADOOP-19221. S3A: Unable to recover from failure of multipart block upload attempt (#6938) (#7044)
  This is a major change which handles 400 error responses when uploading large files from the
  memory heap/buffer (or the staging committer) and the remote S3 store returns a 500 response for
  the upload of a block in a multipart upload. The SDK's own streaming code seems unable to fully
  replay the upload: it attempts to, but then blocks, and the S3 store returns a 400 response:
  "Your socket connection to the server was not read from or written to within the timeout period.
  Idle connections will be closed. (Service: S3, Status Code: 400...)"
  There is an option to control whether the S3A client itself attempts to retry on a 50x error
  other than 503 throttling events (which are independently processed as before):
    Option: fs.s3a.retry.http.5xx.errors
    Default: true
  500 errors are very rare from standard AWS S3, which has a five-nines SLA. They may be more
  common against S3 Express, which has lower guarantees. Third-party stores have unknown
  guarantees, and the exception may indicate a bad server configuration.
  Consider setting fs.s3a.retry.http.5xx.errors to false when working with such stores.
  Significant code changes: there is now a custom set of implementations of
  software.amazon.awssdk.http.ContentStreamProvider in the class
  org.apache.hadoop.fs.s3a.impl.UploadContentProviders. These restart on failures and do not copy
  buffers/byte buffers into new private byte arrays, so they avoid exacerbating memory problems.
  There are new IOStatistics for specific HTTP error codes; these are collected even when all
  recovery is performed within the SDK. S3ABlockOutputStream has major changes, including handling
  of Thread.interrupt() on the main thread, which now triggers and briefly awaits cancellation of
  any ongoing uploads. If the writing thread is interrupted in close(), it is mapped to an
  InterruptedIOException. Applications like Hive and Spark must catch these after cancelling a
  worker thread.
  Contributed by Steve Loughran
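A minimal sketch of disabling the retry option named in the HADOOP-19221 entry above for a third-party store, using the fs.s3a.retry.http.5xx.errors key exactly as given in the commit message:

```java
import org.apache.hadoop.conf.Configuration;

public class S3ARetryOptionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Disable S3A's own retries on 50x responses (other than 503 throttling),
    // as HADOOP-19221 suggests for third-party stores. The default is true.
    conf.setBoolean("fs.s3a.retry.http.5xx.errors", false);
    // The same setting can be placed in core-site.xml; any FileSystem created
    // from this Configuration (e.g. for an s3a:// URI) will pick it up.
    System.out.println(conf.getBoolean("fs.s3a.retry.http.5xx.errors", true));
  }
}
```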
2024-09-16 04:49 UTC · branch creation by adoroszlai (Doroszlai, Attila)
  refs/heads/docker-hadoop-runner-jdk11-u2204
  HADOOP-19209. Update and optimize hadoop-runner

2024-09-14 05:30 UTC · branch-3.4 · PR merge by ayushtkn (Ayush Saxena)
  HADOOP-19250. Fix test TestServiceInterruptHandling.testRegisterAndRaise (#7020)
  Contributed by Chenyu Zheng

2024-09-14 05:26 UTC · branch-3.4 · PR merge by ayushtkn (Ayush Saxena)
  HADOOP-19262. Upgrade wildfly-openssl:1.1.3.Final to 2.1.4.Final to support Java 17+ (#7032)
  Contributed by Saikat Roy

2024-09-13 19:13 UTC · gh-pages · force-push by github-actions[bot]
  deploy: ea6e0f7

2024-09-13 19:02 UTC · trunk · PR merge by steveloughran (Steve Loughran)
  HADOOP-19221. S3A: Unable to recover from failure of multipart block upload attempt (#6938):
  same change as the branch-3.4 entry above, merged to trunk.
2024-09-13 18:27 UTC · branch-3.4.1 · push by asfgit
  HADOOP-19201. S3A. Support external-id in assume role (#6876)
  The option fs.s3a.assumed.role.external.id sets the external id for calls of AssumeRole to the
  STS service. Contributed by Smith Cruise

2024-09-12 07:44 UTC · HDFS-17531 · force-push by Hexiaoqiao
  HDFS-17544. [ARR] The router client rpc protocol PB supports asynchrony (#6870).
  Contributed by Jian Zhang. Signed-off-by: He Xiaoqiao

2024-09-11 12:24 UTC · branch deletion by dependabot[bot]
  refs/heads/dependabot/npm_and_yarn/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/express-4.19.2

(Older activity is available on the next page of the feed.)
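As with the 5xx retry option earlier, the external-id option from the HADOOP-19201 entry above is an ordinary S3A configuration key; a minimal sketch, where the id value is a placeholder:

```java
import org.apache.hadoop.conf.Configuration;

public class AssumeRoleExternalIdExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // External id passed on AssumeRole calls to STS (HADOOP-19201).
    // The value below is a placeholder; use the id agreed with the role's owner.
    conf.set("fs.s3a.assumed.role.external.id", "example-external-id");
    System.out.println(conf.get("fs.s3a.assumed.role.external.id"));
  }
}
```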