Commit 61b876d

Merge branch 'master' into reconcile_cbc_roles
2 parents: 625e974 + 695092b

40 files changed: +308 -64 lines

CONTRIBUTING.md (+1 -1)

@@ -86,7 +86,7 @@ possible as to what the changes are. Good things to include:
 understand.
 ```
 
-If you wish to tag a Github issue or another project management tracker, please
+If you wish to tag a GitHub issue or another project management tracker, please
 do so at the bottom of the commit message, and make it clearly labeled like so:
 
 ```

README.md (+11 -1)

@@ -22,7 +22,9 @@ Have questions or looking for help? [Join our Discord group](https://discord.gg/
 
 # Installation
 
-We recommend following our [Quickstart](https://access.crunchydata.com/documentation/postgres-operator/v5/quickstart/) for how to install and get up and running with PGO, the Postgres Operator from Crunchy Data. However, if you can't wait to try it out, here are some instructions to get Postgres up and running on Kubernetes:
+Crunchy Data makes PGO available as the orchestration behind Crunchy Postgres for Kubernetes. Crunchy Postgres for Kubernetes is the integrated product that includes PostgreSQL, PGO and a collection of PostgreSQL tools and extensions that includes the various [open source components listed in the documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/references/components).
+
+We recommend following our [Quickstart](https://access.crunchydata.com/documentation/postgres-operator/v5/quickstart/) for how to install and get up and running. However, if you can't wait to try it out, here are some instructions to get Postgres up and running on Kubernetes:
 
 1. [Fork the Postgres Operator examples repository](https://github.com/CrunchyData/postgres-operator-examples/fork) and clone it to your host machine. For example:
 

@@ -41,6 +43,8 @@ kubectl apply --server-side -k kustomize/install/default
 
 For more information please read the [Quickstart](https://access.crunchydata.com/documentation/postgres-operator/v5/quickstart/) and [Tutorial](https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/).
 
+These installation instructions provide the steps necessary to install PGO along with Crunchy Data's Postgres distribution, Crunchy Postgres, as Crunchy Postgres for Kubernetes. In doing so the installation downloads a series of container images from Crunchy Data's Developer Portal. For more information on the use of container images downloaded from the Crunchy Data Developer Portal or other third party sources, please see 'License and Terms' below. The installation and use of PGO outside of the use of Crunchy Postgres for Kubernetes will require modifications of these installation instructions and creation of the necessary PostgreSQL and related containers.
+
 # Cloud Native Postgres for Kubernetes
 
 PGO, the Postgres Operator from Crunchy Data, comes with all of the features you need for a complete cloud native Postgres experience on Kubernetes!

@@ -244,4 +248,10 @@ The image rollout can occur over the course of several days.
 
 To stay up-to-date on when releases are made available in the [Crunchy Data Developer Portal](https://www.crunchydata.com/developers), please sign up for the [Crunchy Data Developer Program Newsletter](https://www.crunchydata.com/developers#email). You can also [join the PGO project community discord](https://discord.gg/a7vWKG8Ec9)
 
+# FAQs, License and Terms
+
+For more information regarding PGO, the Postgres Operator project from Crunchy Data, and Crunchy Postgres for Kubernetes, please see the [frequently asked questions](https://access.crunchydata.com/documentation/postgres-operator/latest/faq).
+
+The installation instructions provided in this repo are designed for the use of PGO along with Crunchy Data's Postgres distribution, Crunchy Postgres, as Crunchy Postgres for Kubernetes. The unmodified use of these installation instructions will result in downloading container images from Crunchy Data repositories - specifically the Crunchy Data Developer Portal. The use of container images downloaded from the Crunchy Data Developer Portal are subject to the [Crunchy Data Developer Program terms](https://www.crunchydata.com/developers/terms-of-use).
+
 The PGO Postgres Operator project source code is available subject to the [Apache 2.0 license](LICENSE.md) with the PGO logo and branding assets covered by [our trademark guidelines](docs/static/logos/TRADEMARKS.md).
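
As context for the quickstart steps referenced above: after installing the operator via kustomize, a cluster is created by applying a PostgresCluster manifest. The sketch below is a hypothetical minimal example of that resource; the name `hippo`, the Postgres version, and the storage sizes are illustrative values, not taken from this commit, and the examples repository's kustomize manifests are the authoritative source.

```yaml
# Minimal PostgresCluster sketch (hypothetical values).
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 14
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1  # repo names must match ^repo[1-4], per the CRD in this commit
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
```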

config/crd/bases/postgres-operator.crunchydata.com_postgresclusters.yaml (+39 -7)

@@ -2695,7 +2695,7 @@ spec:
   - bucket
   type: object
 name:
-  description: The name of the the repository
+  description: The name of the repository
   pattern: ^repo[1-4]
   type: string
 s3:

@@ -4438,10 +4438,10 @@ spec:
 properties:
   pgbackrest:
     description: 'Defines a pgBackRest cloud-based data source that
-      can be used to pre-populate the the PostgreSQL data directory
-      for a new PostgreSQL cluster using a pgBackRest restore. The
-      PGBackRest field is incompatible with the PostgresCluster field:
-      only one data source can be used for pre-populating a new PostgreSQL
+      can be used to pre-populate the PostgreSQL data directory for
+      a new PostgreSQL cluster using a pgBackRest restore. The PGBackRest
+      field is incompatible with the PostgresCluster field: only one
+      data source can be used for pre-populating a new PostgreSQL
       cluster'
     properties:
      affinity:

@@ -5615,7 +5615,7 @@ spec:
   - bucket
   type: object
 name:
-  description: The name of the the repository
+  description: The name of the repository
   pattern: ^repo[1-4]
   type: string
 s3:

@@ -13319,6 +13319,38 @@ spec:
   required:
   - pgBouncer
   type: object
+replicaService:
+  description: Specification of the service that exposes PostgreSQL
+    replica instances
+  properties:
+    metadata:
+      description: Metadata contains metadata for custom resources
+      properties:
+        annotations:
+          additionalProperties:
+            type: string
+          type: object
+        labels:
+          additionalProperties:
+            type: string
+          type: object
+      type: object
+    nodePort:
+      description: The port on which this service is exposed when type
+        is NodePort or LoadBalancer. Value must be in-range and not
+        in use or the operation will fail. If unspecified, a port will
+        be allocated if this Service requires one. - https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
+      format: int32
+      type: integer
+    type:
+      default: ClusterIP
+      description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types'
+      enum:
+      - ClusterIP
+      - NodePort
+      - LoadBalancer
+      type: string
+  type: object
 service:
   description: Specification of the service that exposes the PostgreSQL
     primary instance.

@@ -15291,7 +15323,7 @@ spec:
 type: boolean
 repoOptionsHash:
   description: A hash of the required fields in the spec for
-    defining an Azure, GCS or S3 repository, Utilizd to detect
+    defining an Azure, GCS or S3 repository, Utilized to detect
     changes to these fields and then execute pgBackRest stanza-create
     commands accordingly.
   type: string
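
The substantive change in this CRD diff is the new `spec.replicaService` stanza. Read against the schema above, a hypothetical manifest exposing replicas through a NodePort service might look like the sketch below; the cluster name, custom label, and port number are illustrative values:

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo            # hypothetical cluster name
spec:
  replicaService:
    type: NodePort       # one of ClusterIP (default), NodePort, LoadBalancer
    nodePort: 32001      # must fall in the cluster's service-node-port-range
    metadata:
      labels:
        service-tier: replica   # hypothetical custom label, merged onto the Service
```

Per the schema, `type` defaults to `ClusterIP`, and `nodePort` is only meaningful when the type is `NodePort` or `LoadBalancer`.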

installers/olm/Makefile (+1 -1)

@@ -66,7 +66,7 @@ tools: ## Download tools needed to build bundles
 tools: tools/$(SYSTEM)/jq
 tools/$(SYSTEM)/jq:
 	install -d '$(dir $@)'
-	curl -fSL -o '$@' "https://github.com/stedolan/jq/releases/download/jq-1.6/jq-$$(SYSTEM='$(SYSTEM)'; \
+	curl -fSL -o '$@' "https://github.com/jqlang/jq/releases/download/jq-1.6/jq-$$(SYSTEM='$(SYSTEM)'; \
 	case "$$SYSTEM" in \
 	(linux-*) echo "$${SYSTEM/-amd/}";; (darwin-*) echo "$${SYSTEM/darwin/osx}";; (*) echo '$(SYSTEM)';; \
 	esac)"

installers/olm/README.md (+1 -1)

@@ -55,7 +55,7 @@ Marketplace: https://github.com/redhat-openshift-ecosystem/redhat-marketplace-op
 
 We hit various issues with 5.1.0 where the 'replaces' name, set in the clusterserviceversion.yaml, didn't match the
 expected names found for all indexes. Previously, we set the 'com.redhat.openshift.versions' annotation to "v4.6-v4.9".
-The goal for this setting was to limit the upper bound of supported versions for a particulary PGO release.
+The goal for this setting was to limit the upper bound of supported versions for a particularly PGO release.
 The problem with this was, at the time of the 5.1.0 release, OCP 4.10 had been just been released. This meant that the
 5.0.5 bundle did not exist in the OCP 4.10 index. The solution presented by Red Hat was to use the 'skips' clause for
 the 5.1.0 release to remedy the immediate problem, but then go back to using an unbounded setting for subsequent

internal/bridge/client_test.go (+1 -1)

@@ -304,7 +304,7 @@ func TestClientDoWithRetry(t *testing.T) {
 	assert.Assert(t, requests[1].Header.Get("Idempotency-Key") != prior,
 		"expected a new idempotency key")
 
-	// Requests are delayed according the the server's response.
+	// Requests are delayed according the server's response.
 	// TODO: Mock the clock for faster tests.
 	assert.Assert(t, times[0].Add(time.Second).Before(times[1]),
 		"expected the second request over 1sec after the first")

internal/config/config.go (+1 -1)

@@ -65,7 +65,7 @@ func RegistrationRequiredBy(cluster *v1beta1.PostgresCluster) string {
 // Red Hat Marketplace requires operators to use environment variables be used
 // for any image other than the operator itself. Those variables must start with
 // "RELATED_IMAGE_" so that OSBS can transform their tag values into digests
-// for a "disconncted" OLM CSV.
+// for a "disconnected" OLM CSV.
 
 // - https://redhat-connect.gitbook.io/certified-operator-guide/troubleshooting-and-resources/offline-enabled-operators
 // - https://osbs.readthedocs.io/en/latest/users.html#pullspec-locations

internal/controller/pgupgrade/jobs.go (+2 -2)

@@ -69,7 +69,7 @@ func upgradeCommand(upgrade *v1beta1.PGUpgrade, fetchKeyCommand string) []string
 		`echo "postgres:x:${gid%% *}:") > "${NSS_WRAPPER_GROUP}"`,
 
 		// Create a copy of the system user definitions, but remove the "postgres"
-		// user or any user with the currrent UID. Replace them with our own that
+		// user or any user with the current UID. Replace them with our own that
 		// has the current UID and GID.
 		`uid=$(id -u); NSS_WRAPPER_PASSWD=$(mktemp)`,
 		`(sed "/^postgres:x:/ d; /^[^:]*:x:${uid}:/ d" /etc/passwd`,

@@ -80,7 +80,7 @@ func upgradeCommand(upgrade *v1beta1.PGUpgrade, fetchKeyCommand string) []string
 		`export LD_PRELOAD='libnss_wrapper.so' NSS_WRAPPER_GROUP NSS_WRAPPER_PASSWD`,
 
 		// Below is the pg_upgrade script used to upgrade a PostgresCluster from
-		// one major verson to another. Additional information concerning the
+		// one major version to another. Additional information concerning the
 		// steps used and command flag specifics can be found in the documentation:
 		// - https://www.postgresql.org/docs/current/pgupgrade.html
 

internal/controller/postgrescluster/cluster.go (+46 -14)

@@ -17,6 +17,7 @@ package postgrescluster
 
 import (
 	"context"
+	"fmt"
 	"io"
 
 	"github.com/pkg/errors"

@@ -199,33 +200,64 @@ func (r *Reconciler) generateClusterReplicaService(
 	service := &corev1.Service{ObjectMeta: naming.ClusterReplicaService(cluster)}
 	service.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service"))
 
-	service.Annotations = naming.Merge(
-		cluster.Spec.Metadata.GetAnnotationsOrNil())
+	service.Annotations = cluster.Spec.Metadata.GetAnnotationsOrNil()
+	service.Labels = cluster.Spec.Metadata.GetLabelsOrNil()
+
+	if spec := cluster.Spec.ReplicaService; spec != nil {
+		service.Annotations = naming.Merge(service.Annotations,
+			spec.Metadata.GetAnnotationsOrNil())
+		service.Labels = naming.Merge(service.Labels,
+			spec.Metadata.GetLabelsOrNil())
+	}
+
+	// add our labels last so they aren't overwritten
 	service.Labels = naming.Merge(
-		cluster.Spec.Metadata.GetLabelsOrNil(),
+		service.Labels,
 		map[string]string{
 			naming.LabelCluster: cluster.Name,
 			naming.LabelRole:    naming.RoleReplica,
 		})
 
-	// Allocate an IP address and let Kubernetes manage the Endpoints by
-	// selecting Pods with the Patroni replica role.
-	// - https://docs.k8s.io/concepts/services-networking/service/#defining-a-service
-	service.Spec.Type = corev1.ServiceTypeClusterIP
-	service.Spec.Selector = map[string]string{
-		naming.LabelCluster: cluster.Name,
-		naming.LabelRole:    naming.RolePatroniReplica,
-	}
-
 	// The TargetPort must be the name (not the number) of the PostgreSQL
 	// ContainerPort. This name allows the port number to differ between Pods,
 	// which can happen during a rolling update.
-	service.Spec.Ports = []corev1.ServicePort{{
+	servicePort := corev1.ServicePort{
 		Name:       naming.PortPostgreSQL,
 		Port:       *cluster.Spec.Port,
 		Protocol:   corev1.ProtocolTCP,
 		TargetPort: intstr.FromString(naming.PortPostgreSQL),
-	}}
+	}
+
+	// Default to a service type of ClusterIP
+	service.Spec.Type = corev1.ServiceTypeClusterIP
+
+	// Check user provided spec for a specified type
+	if spec := cluster.Spec.ReplicaService; spec != nil {
+		service.Spec.Type = corev1.ServiceType(spec.Type)
+		if spec.NodePort != nil {
+			if service.Spec.Type == corev1.ServiceTypeClusterIP {
+				// The NodePort can only be set when the Service type is NodePort or
+				// LoadBalancer. However, due to a known issue prior to Kubernetes
+				// 1.20, we clear these errors during our apply. To preserve the
+				// appropriate behavior, we log an Event and return an error.
+				// TODO(tjmoore4): Once Validation Rules are available, this check
+				// and event could potentially be removed in favor of that validation
+				r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "MisconfiguredClusterIP",
+					"NodePort cannot be set with type ClusterIP on Service %q", service.Name)
+				return nil, fmt.Errorf("NodePort cannot be set with type ClusterIP on Service %q", service.Name)
+			}
+			servicePort.NodePort = *spec.NodePort
+		}
+	}
+	service.Spec.Ports = []corev1.ServicePort{servicePort}
+
+	// Allocate an IP address and let Kubernetes manage the Endpoints by
+	// selecting Pods with the Patroni replica role.
+	// - https://docs.k8s.io/concepts/services-networking/service/#defining-a-service
+	service.Spec.Selector = map[string]string{
+		naming.LabelCluster: cluster.Name,
+		naming.LabelRole:    naming.RolePatroniReplica,
+	}
 
 	err := errors.WithStack(r.setControllerReference(cluster, service))

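One behavior worth noting in the new code: because `type` defaults to `ClusterIP`, a spec that sets `nodePort` without also setting a compatible `type` is rejected with a `MisconfiguredClusterIP` warning event and an error, rather than being silently cleared during apply. A hypothetical spec fragment that would fail reconciliation:

```yaml
spec:
  replicaService:
    # type is omitted, so it defaults to ClusterIP
    nodePort: 32001   # rejected: "NodePort cannot be set with type ClusterIP on Service ..."
```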
internal/controller/postgrescluster/cluster_test.go (+41 -4)

@@ -732,11 +732,12 @@ func TestGenerateClusterReplicaServiceIntent(t *testing.T) {
 	service, err := reconciler.generateClusterReplicaService(cluster)
 	assert.NilError(t, err)
 
-	assert.Assert(t, marshalMatches(service.TypeMeta, `
+	alwaysExpect := func(t testing.TB, service *corev1.Service) {
+		assert.Assert(t, marshalMatches(service.TypeMeta, `
 apiVersion: v1
 kind: Service
-`))
-	assert.Assert(t, marshalMatches(service.ObjectMeta, `
+`))
+		assert.Assert(t, marshalMatches(service.ObjectMeta, `
 creationTimestamp: null
 labels:
   postgres-operator.crunchydata.com/cluster: pg2

@@ -750,7 +751,10 @@ ownerReferences:
   kind: PostgresCluster
   name: pg2
   uid: ""
-`))
+`))
+	}
+
+	alwaysExpect(t, service)
 	assert.Assert(t, marshalMatches(service.Spec, `
 ports:
 - name: postgres

@@ -763,6 +767,39 @@ selector:
 type: ClusterIP
 `))
 
+	types := []struct {
+		Type   string
+		Expect func(testing.TB, *corev1.Service)
+	}{
+		{Type: "ClusterIP", Expect: func(t testing.TB, service *corev1.Service) {
+			assert.Equal(t, service.Spec.Type, corev1.ServiceTypeClusterIP)
+		}},
+		{Type: "NodePort", Expect: func(t testing.TB, service *corev1.Service) {
+			assert.Equal(t, service.Spec.Type, corev1.ServiceTypeNodePort)
+		}},
+		{Type: "LoadBalancer", Expect: func(t testing.TB, service *corev1.Service) {
+			assert.Equal(t, service.Spec.Type, corev1.ServiceTypeLoadBalancer)
+		}},
+	}
+
+	for _, test := range types {
+		t.Run(test.Type, func(t *testing.T) {
+			cluster := cluster.DeepCopy()
+			cluster.Spec.ReplicaService = &v1beta1.ServiceSpec{Type: test.Type}
+
+			service, err := reconciler.generateClusterReplicaService(cluster)
+			assert.NilError(t, err)
+			alwaysExpect(t, service)
+			test.Expect(t, service)
+			assert.Assert(t, marshalMatches(service.Spec.Ports, `
+- name: postgres
+  port: 9876
+  protocol: TCP
+  targetPort: postgres
+`))
+		})
+	}
+
 	t.Run("AnnotationsLabels", func(t *testing.T) {
 		cluster := cluster.DeepCopy()
 		cluster.Spec.Metadata = &v1beta1.Metadata{

internal/controller/postgrescluster/controller_ref_manager_test.go (+2 -2)

@@ -67,7 +67,7 @@ func TestManageControllerRefs(t *testing.T) {
 	t.Run("adopt Object", func(t *testing.T) {
 
 		obj := objBase.DeepCopy()
-		obj.Name = "adpot"
+		obj.Name = "adopt"
 		obj.Labels = map[string]string{naming.LabelCluster: clusterName}
 
 		if err := r.Client.Create(ctx, obj); err != nil {

@@ -155,7 +155,7 @@ func TestManageControllerRefs(t *testing.T) {
 
 		obj := objBase.DeepCopy()
 		obj.Name = "ignore-no-postgrescluster"
-		obj.Labels = map[string]string{naming.LabelCluster: "noexist"}
+		obj.Labels = map[string]string{naming.LabelCluster: "nonexistent"}
 
 		if err := r.Client.Create(ctx, obj); err != nil {
 			t.Error(err)

internal/controller/postgrescluster/helpers_test.go (+2 -2)

@@ -210,14 +210,14 @@ func testCluster() *v1beta1.PostgresCluster {
 
 // setupManager creates the runtime manager used during controller testing
 func setupManager(t *testing.T, cfg *rest.Config,
-	contollerSetup func(mgr manager.Manager)) (context.Context, context.CancelFunc) {
+	controllerSetup func(mgr manager.Manager)) (context.Context, context.CancelFunc) {
 
 	mgr, err := runtime.CreateRuntimeManager("", cfg, true)
 	if err != nil {
 		t.Fatal(err)
 	}
 
-	contollerSetup(mgr)
+	controllerSetup(mgr)
 
 	ctx, cancel := context.WithCancel(context.Background())
 	go func() {

internal/controller/postgrescluster/instance.md (+1 -1)

@@ -69,7 +69,7 @@ instance name or set to blank ("")
 ### Logic Map
 
 With this, the grid below shows the expected replica count value, depending on
-the the values. Below, the letters represent the following:
+the values. Below, the letters represent the following:
 
 M = StartupInstance matches the instance name
 
0 commit comments

Comments
 (0)