# Configures this workflow to run every time a change is pushed to the branch called `main`, or when it is triggered manually.
on:
  push:
    branches:
      - main
  workflow_dispatch:

# Defines two custom environment variables for the workflow. These are used for the Container registry domain, and a name for the Docker image that this workflow builds.
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

# There is a single job in this workflow. It's configured to run on the latest available version of Ubuntu.
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    # Sets the permissions granted to the `GITHUB_TOKEN` for the actions in this job.
    permissions:
      contents: read
      packages: write
      attestations: write
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Uses the `docker/login-action` action to log in to the Container registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # This step uses [docker/metadata-action](https://github.com/docker/metadata-action#about) to extract tags and labels that will be applied to the specified image. The `id` "meta" allows the output of this step to be referenced in a subsequent step. The `images` value provides the base name for the tags and labels.
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      # This step uses the `docker/build-push-action` action to build the image, based on your repository's `Dockerfile`. If the build succeeds, it pushes the image to GitHub Packages.
      # It uses the `context` parameter to define the build's context as the set of files located in the specified path. For more information, see "[Usage](https://github.com/docker/build-push-action#usage)" in the README of the `docker/build-push-action` repository.
      # It uses the `tags` and `labels` parameters to tag and label the image with the output from the "meta" step.
      - name: Build and push Docker image
        id: push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

      # This step generates an artifact attestation for the image, which is an unforgeable statement about where and how it was built. It increases supply chain security for people who consume the image. For more information, see "[AUTOTITLE](/actions/security-guides/using-artifact-attestations-to-establish-provenance-for-builds)."
      - name: Generate artifact attestation
        uses: actions/attest-build-provenance@v1
        with:
          subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          subject-digest: ${{ steps.push.outputs.digest }}
          push-to-registry: true
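As a sketch of what the workflow above publishes: the image base name is `${REGISTRY}/${IMAGE_NAME}`, i.e. `ghcr.io/<owner>/<repo>`. Assuming the repository slug used elsewhere in this document (`ivdatahub/data-consumer-pipeline`), and that `main` is one of the branch tags `docker/metadata-action` generates on push, a consumer could build the full image reference like this:

```shell
# Assumed repository slug; in the workflow, ${{ github.repository }} resolves to "<owner>/<repo>".
REGISTRY=ghcr.io
IMAGE_NAME=ivdatahub/data-consumer-pipeline

# Base name plus a branch tag produced by docker/metadata-action on a push to main.
IMAGE_REF="${REGISTRY}/${IMAGE_NAME}:main"
echo "${IMAGE_REF}"

# docker pull "${IMAGE_REF}"   # requires docker and a prior `docker login ghcr.io`
```

The `docker pull` line is left commented because it needs Docker and registry credentials; the rest only illustrates the naming convention.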
CONTRIBUTING.md
# Contributing to data-consumer-pipeline

Firstly, thank you very much for your interest in contributing to data-consumer-pipeline! This document provides guidelines to help ensure the contribution process is smooth and efficient for everyone involved.

## How to Contribute

### 1. Fork the Repository

1. Go to the [repository page](https://github.com/ivdatahub/data-consumer-pipeline).
2. Click the "Fork" button in the top right corner to create a copy of the repository on your GitHub account.

### 2. Clone the Repository

Clone the forked repository to your local machine with `git clone`.

If you find a bug, please open an [issue](https://github.com/ivdatahub/data-consumer-pipeline/issues) and provide as much information as possible, including:

- Detailed description of the problem.
- Steps to reproduce the issue.

## Improvement suggestions

If you have suggestions for improvements, please open an [issue](https://github.com/ivdatahub/data-consumer-pipeline/issues) and describe your idea in detail.

## Thanks

Thanks for considering contributing to data-consumer-pipeline! Every contribution is valuable and helps to improve the project.
## Project Highlights:

- Documentation: Creation of detailed documentation to facilitate the understanding and use of the application, including installation instructions, usage examples and troubleshooting guides.

# Data Pipeline Process:

1. Data Extraction: The data extraction process consists of making requests to the API to obtain the data. The requests are made in parallel workers using Cloud Dataflow to optimize the process. The data is extracted in JSON format.
2. Data Transformation: The data transformation process consists of converting the data to the BigQuery schema. The transformation is done using Cloud Dataflow in parallel workers to optimize the process.
3. Data Loading: The data loading process consists of loading the data into BigQuery. The data is loaded in parallel workers using Cloud Dataflow to optimize the process.
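The transformation step above (raw JSON records converted into rows matching the BigQuery schema) can be sketched in plain Python. This is only an illustration: in the real pipeline this logic would run inside parallel Cloud Dataflow workers, and the field names (`code`, `value`, `currency`, `rate`, `extracted_at`) are hypothetical, not taken from the project's actual schema.

```python
from datetime import datetime, timezone

# Hypothetical BigQuery schema: (column name, BigQuery type) pairs the rows must match.
BQ_SCHEMA = [("currency", "STRING"), ("rate", "FLOAT64"), ("extracted_at", "TIMESTAMP")]

def to_bigquery_row(record: dict) -> dict:
    """Convert one raw JSON record from the API into a BigQuery-compatible row dict."""
    return {
        "currency": str(record["code"]),
        "rate": float(record["value"]),            # API returns numbers as strings
        # BigQuery accepts ISO-8601 strings for TIMESTAMP columns.
        "extracted_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: transform a small batch of extracted records.
rows = [to_bigquery_row(r) for r in [{"code": "USD", "value": "5.43"}]]
```

In the actual pipeline, each row dict produced this way would be handed to the loading step for insertion into BigQuery.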