Commit f84b8fa
charles-edouard.breteche authored and tekton-robot committed
added s3 bucket configuration doc/example
replaced duplicated content by a link; fix link
1 parent 3b24043 commit f84b8fa

File tree

2 files changed: +43 -14 lines changed

docs/developers/README.md (+4 -10)
@@ -16,16 +16,10 @@ on path `/pvc` by PipelineRun.
 adds a step to copy from PVC directory path:
 `/pvc/previous_task/resource_name`.
 
-Another alternatives is to use a GCS storage bucket to share the artifacts. This
-can be configured using a ConfigMap with the name `config-artifact-bucket` with
-the following attributes:
-
-- location: the address of the bucket (for example gs://mybucket)
-- bucket.service.account.secret.name: the name of the secret that will contain
-  the credentials for the service account with access to the bucket
-- bucket.service.account.secret.key: the key in the secret with the required
-  service account json. The bucket is recommended to be configured with a
-  retention policy after which files will be deleted.
+Another alternative is to use a GCS storage or S3 bucket to share the artifacts.
+This can be configured using a ConfigMap with the name `config-artifact-bucket`.
+
+See [here](../install.md#how-are-resources-shared-between-tasks) for configuration details.
 
 Both options provide the same functionality to the pipeline. The choice is based
 on the infrastructure used, for example in some Kubernetes platforms, the

docs/install.md (+39 -4)
@@ -118,7 +118,8 @@ for more information_
 ### How are resources shared between tasks
 
 Pipelines need a way to share resources between tasks. The alternatives are a
-[Persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
+[Persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/),
+an [S3 Bucket](https://aws.amazon.com/s3/)
 or a [GCS storage bucket](https://cloud.google.com/storage/)
 
 The PVC option can be configured using a ConfigMap with the name
@@ -127,11 +128,11 @@ The PVC option can be configured using a ConfigMap with the name
 - `size`: the size of the volume (5Gi by default)
 - `storageClassName`: the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) of the volume (default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
 
-The GCS storage bucket can be configured using a ConfigMap with the name
+The GCS storage bucket or the S3 bucket can be configured using a ConfigMap with the name
 `config-artifact-bucket` with the following attributes:
 
-- `location`: the address of the bucket (for example gs://mybucket)
-- bucket.service.account.secret.name: the name of the secret that will contain
+- `location`: the address of the bucket (for example gs://mybucket or s3://mybucket)
+- `bucket.service.account.secret.name`: the name of the secret that will contain
   the credentials for the service account with access to the bucket
 - `bucket.service.account.secret.key`: the key in the secret with the required
   service account json.
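The bucket attributes listed in this hunk can be illustrated with a minimal sketch. Python is used here purely for illustration; `artifact_bucket_config` is a hypothetical helper, not part of Tekton:

```python
def artifact_bucket_config(location, secret_name, secret_key):
    """Assemble the `data` section of the `config-artifact-bucket` ConfigMap.

    Hypothetical helper for illustration only; not part of Tekton.
    """
    # gsutil handles gs:// buckets natively and s3:// buckets via a boto config.
    if not location.startswith(("gs://", "s3://")):
        raise ValueError("bucket location must use the gs:// or s3:// scheme")
    return {
        "location": location,
        "bucket.service.account.secret.name": secret_name,
        "bucket.service.account.secret.key": secret_key,
    }


print(artifact_bucket_config("s3://mybucket", "tekton-storage", "boto-config"))
```

The keys in the returned dict mirror the ConfigMap attribute names documented above.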
@@ -140,6 +141,39 @@ The GCS storage bucket can be configured using a ConfigMap with the name
 - `bucket.service.account.field.name`: the name of the environment variable to use when specifying the
   secret path. Defaults to `GOOGLE_APPLICATION_CREDENTIALS`. Set to `BOTO_CONFIG` if using S3 instead of GCS.
 
+*Note:* When using an S3 bucket, there is a restriction that the bucket must be located in the us-east-1 region.
+This limitation comes from using [gsutil](https://cloud.google.com/storage/docs/gsutil) with a boto configuration
+behind the scenes to access the S3 bucket.
+
+A typical configuration for using an S3 bucket is shown below:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: tekton-storage
+type: Opaque
+stringData:
+  boto-config: |
+    [Credentials]
+    aws_access_key_id = AWS_ACCESS_KEY_ID
+    aws_secret_access_key = AWS_SECRET_ACCESS_KEY
+    [s3]
+    host = s3.us-east-1.amazonaws.com
+    [Boto]
+    https_validate_certificates = True
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: config-artifact-bucket
+data:
+  location: s3://mybucket
+  bucket.service.account.secret.name: tekton-storage
+  bucket.service.account.secret.key: boto-config
+  bucket.service.account.field.name: BOTO_CONFIG
+```
+
 Both options provide the same functionality to the pipeline. The choice is based
 on the infrastructure used, for example in some Kubernetes platforms, the
 creation of a persistent volume could be slower than uploading/downloading files
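The boto configuration stored in the Secret's `stringData` is plain INI text, which is what gsutil reads from the file that `BOTO_CONFIG` points to. A minimal sketch, using Python's `configparser` only to show the file's structure (this is not how gsutil itself loads it):

```python
import configparser

# The values below mirror the example Secret's stringData in this commit.
BOTO_CONFIG_TEXT = """\
[Credentials]
aws_access_key_id = AWS_ACCESS_KEY_ID
aws_secret_access_key = AWS_SECRET_ACCESS_KEY
[s3]
host = s3.us-east-1.amazonaws.com
[Boto]
https_validate_certificates = True
"""

config = configparser.ConfigParser()
config.read_string(BOTO_CONFIG_TEXT)

# gsutil resolves the S3 endpoint from the [s3] host entry; note the
# us-east-1 endpoint, matching the region restriction described above.
print(config["s3"]["host"])  # s3.us-east-1.amazonaws.com
```

Mounting this text as a file and exporting `BOTO_CONFIG` with its path (via `bucket.service.account.field.name`) is what lets gsutil talk to the S3 bucket.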
