add docs
freeznet committed Jan 24, 2025
1 parent cb383c2 commit 68e3053
Showing 9 changed files with 545 additions and 22 deletions.
8 changes: 0 additions & 8 deletions api/v1alpha1/streamnativecloudconnection_types.go
@@ -28,14 +28,6 @@ type StreamNativeCloudConnectionSpec struct {
    // +required
    Server string `json:"server"`

    // CertificateAuthorityData is the PEM-encoded certificate authority certificates
    // +optional
    CertificateAuthorityData []byte `json:"certificateAuthorityData,omitempty"`

    // InsecureSkipTLSVerify indicates whether to skip TLS verification
    // +optional
    InsecureSkipTLSVerify bool `json:"insecureSkipTLSVerify,omitempty"`

    // Auth defines the authentication configuration
    // +required
    Auth AuthConfig `json:"auth"`
7 changes: 1 addition & 6 deletions api/v1alpha1/zz_generated.deepcopy.go

Some generated files are not rendered by default.

@@ -89,14 +89,6 @@ spec:
                required:
                - credentialsRef
                type: object
              certificateAuthorityData:
                description: CertificateAuthorityData is the PEM-encoded certificate
                  authority certificates
                format: byte
                type: string
              insecureSkipTLSVerify:
                description: InsecureSkipTLSVerify indicates whether to skip TLS verification
                type: boolean
              logs:
                description: Logs defines the logging service configuration
                properties:
68 changes: 68 additions & 0 deletions config/samples/resource_v1alpha1_computeflinkdeployment.yaml
@@ -0,0 +1,68 @@
# Copyright 2024 StreamNative
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: resource.streamnative.io/v1alpha1
kind: ComputeFlinkDeployment
metadata:
  name: operator-test-v1
  namespace: default
spec:
  apiServerRef:
    name: test-connection
  workspaceName: test-operator-workspace
  template:
    syncingMode: PATCH # the fields below follow the Ververica Platform deployment CRD
    deployment:
      userMetadata:
        name: operator-test-v1
        namespace: default # the Flink namespace
        displayName: operator-test-v1
      spec:
        state: RUNNING
        deploymentTargetName: default # the Flink deployment target; it must be created in the UI first
        maxJobCreationAttempts: 99
        template:
          metadata:
            annotations:
              flink.queryable-state.enabled: 'false'
              flink.security.ssl.enabled: 'false'
          spec:
            artifact:
              jarUri: function://public/default/[email protected]
              mainArgs: --runner=FlinkRunner --attachedMode=false --checkpointingInterval=60000 --checkpointTimeoutMillis=100000 --minPauseBetweenCheckpoints=1000
              entryClass: org.apache.beam.examples.WordCount
              kind: JAR
              flinkVersion: "1.18.1"
              flinkImageTag: "1.18.1-stream3-scala_2.12-java17"
            flinkConfiguration:
              execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION
              execution.checkpointing.interval: 1min
              execution.checkpointing.timeout: 10min
              high-availability.type: kubernetes
              state.backend: filesystem
              taskmanager.memory.managed.fraction: '0.2'
            parallelism: 1
            numberOfTaskManagers: 1
            resources:
              jobmanager:
                cpu: "1"
                memory: 2G
              taskmanager:
                cpu: "1"
                memory: 2G
            logging:
              loggingProfile: default
              log4jLoggers:
                "": DEBUG
                com.company: DEBUG
27 changes: 27 additions & 0 deletions config/samples/resource_v1alpha1_computeworkspace.yaml
@@ -0,0 +1,27 @@
# Copyright 2024 StreamNative
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: resource.streamnative.io/v1alpha1
kind: ComputeWorkspace
metadata:
  name: test-operator-workspace
  namespace: default
spec:
  apiServerRef:
    name: test-connection
  pulsarClusterNames:
    - "test-pulsar"
  poolRef:
    name: shared
    namespace: streamnative
41 changes: 41 additions & 0 deletions config/samples/resource_v1alpha1_streamnativecloudconnection.yaml
@@ -0,0 +1,41 @@
# Copyright 2024 StreamNative
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: resource.streamnative.io/v1alpha1
kind: StreamNativeCloudConnection
metadata:
  name: test-connection
  namespace: default
spec:
  server: https://api.streamnative.dev
  auth:
    credentialsRef:
      name: test-credentials
  organization: org
---
apiVersion: v1
kind: Secret
metadata:
  name: test-credentials
  namespace: default
type: Opaque
stringData:
  credentials.json: |
    {
      "type": "sn_service_account",
      "client_secret": "client_secret",
      "client_email": "client-email",
      "issuer_url": "issuer_url",
      "client_id": "client-id"
    }
197 changes: 197 additions & 0 deletions docs/compute_flink_deployment.md
@@ -0,0 +1,197 @@
# ComputeFlinkDeployment

## Overview

The `ComputeFlinkDeployment` resource defines a Flink deployment in StreamNative Cloud. It supports both Ververica Platform (VVP) and Community deployment templates, allowing you to deploy and manage Flink applications.

## Specifications

| Field | Description | Required |
|----------------------|--------------------------------------------------------------------------------------------|----------|
| `apiServerRef` | Reference to the StreamNativeCloudConnection resource for API server access | Yes |
| `workspaceName` | Name of the ComputeWorkspace where the Flink deployment will run | Yes |
| `labels` | Labels to add to the deployment | No |
| `annotations` | Annotations to add to the deployment | No |
| `template` | VVP deployment template configuration | No* |
| `communityTemplate` | Community deployment template configuration | No* |
| `defaultPulsarCluster` | Default Pulsar cluster to use for the deployment | No |

*Note: Either `template` or `communityTemplate` must be specified, but not both.
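
A minimal skeleton showing just these top-level fields, using the resource names from the samples in this commit (the placeholder comment marks where the template body goes; a complete example follows below):

```yaml
apiVersion: resource.streamnative.io/v1alpha1
kind: ComputeFlinkDeployment
metadata:
  name: operator-test-v1
  namespace: default
spec:
  apiServerRef:
    name: test-connection            # a StreamNativeCloudConnection resource
  workspaceName: test-operator-workspace
  # Exactly one of `template` (VVP) or `communityTemplate` must be set.
  template:
    syncingMode: PATCH
    deployment: {}                   # filled in under "VVP Deployment Configuration" below
```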

### VVP Deployment Template

| Field | Description | Required |
|-----------------|--------------------------------------------------------------------------------------------|----------|
| `syncingMode` | How the deployment should be synced (e.g., PATCH) | No |
| `deployment` | VVP deployment configuration | Yes |

#### VVP Deployment Configuration

| Field | Description | Required |
|----------------------------|--------------------------------------------------------------------------------------------|----------|
| `userMetadata` | Metadata for the deployment (name, namespace, displayName, etc.) | Yes |
| `spec` | Deployment specification including state, target, resources, etc. | Yes |

##### Deployment Spec Fields

| Field | Description | Required |
|--------------------------------|----------------------------------------------------------------------------------------|----------|
| `deploymentTargetName` | Target name for the deployment | No |
| `state` | State of the deployment (RUNNING, SUSPENDED, CANCELLED) | No |
| `maxJobCreationAttempts` | Maximum number of job creation attempts (minimum: 1) | No |
| `maxSavepointCreationAttempts` | Maximum number of savepoint creation attempts (minimum: 1) | No |
| `template` | Deployment template configuration | Yes |

##### Template Spec Fields

| Field | Description | Required |
|----------------------|----------------------------------------------------------------------------------------|----------|
| `artifact` | Deployment artifact configuration | Yes |
| `flinkConfiguration` | Flink configuration key-value pairs | No |
| `parallelism` | Parallelism of the Flink job | No |
| `numberOfTaskManagers` | Number of task managers | No |
| `resources` | Resource requirements for jobmanager and taskmanager | No |
| `logging` | Logging configuration | No |

##### Artifact Configuration

| Field | Description | Required |
|--------------------------|----------------------------------------------------------------------------------------|----------|
| `kind` | Type of artifact (JAR, PYTHON, sqlscript) | Yes |
| `jarUri` | URI of the JAR file | No* |
| `pythonArtifactUri` | URI of the Python artifact | No* |
| `sqlScript` | SQL script content | No* |
| `flinkVersion` | Flink version to use | No |
| `flinkImageTag` | Flink image tag to use | No |
| `mainArgs` | Arguments for the main class/method | No |
| `entryClass` | Entry class for JAR artifacts | No |

*Note: One of `jarUri`, `pythonArtifactUri`, or `sqlScript` must be specified based on the `kind`.
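
For comparison, a hedged sketch of how a `PYTHON` artifact block might look (the URI, arguments, and versions below are illustrative assumptions, not values from this commit):

```yaml
artifact:
  kind: PYTHON
  # Hypothetical Python artifact; replace with a real artifact URI.
  pythonArtifactUri: function://public/default/wordcount.py
  mainArgs: --input /tmp/words.txt --output /tmp/counts
  flinkVersion: "1.18.1"
  flinkImageTag: "1.18.1-stream3-scala_2.12-java17"
```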

### Community Deployment Template

| Field | Description | Required |
|--------------------------|----------------------------------------------------------------------------------------|----------|
| `metadata` | Metadata for the deployment (annotations, labels) | No |
| `spec` | Community deployment specification | Yes |

#### Community Deployment Spec

| Field | Description | Required |
|--------------------------|----------------------------------------------------------------------------------------|----------|
| `image` | Flink image to use | Yes |
| `jarUri` | URI of the JAR file | Yes |
| `entryClass` | Entry class of the JAR | No |
| `mainArgs` | Main arguments for the application | No |
| `flinkConfiguration` | Flink configuration key-value pairs | No |
| `jobManagerPodTemplate` | Pod template for the job manager | No |
| `taskManagerPodTemplate` | Pod template for the task manager | No |
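
The example in this commit only covers the VVP template, so the following is a hedged sketch of a `communityTemplate` deployment built from the fields above (the name, image, URI, and configuration values are illustrative assumptions):

```yaml
apiVersion: resource.streamnative.io/v1alpha1
kind: ComputeFlinkDeployment
metadata:
  name: community-test-v1            # illustrative name
  namespace: default
spec:
  apiServerRef:
    name: test-connection
  workspaceName: test-operator-workspace
  communityTemplate:
    metadata:
      labels:
        app: community-test-v1       # optional labels/annotations
    spec:
      image: flink:1.18              # illustrative Flink image
      jarUri: function://public/default/wordcount.jar
      entryClass: com.example.WordCount
      mainArgs: --input /tmp/words.txt
      flinkConfiguration:
        taskmanager.numberOfTaskSlots: "2"
```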

## Status

| Field | Description |
|----------------------|-------------------------------------------------------------------------------------------------|
| `conditions` | List of status conditions for the deployment |
| `observedGeneration` | The last observed generation of the resource |
| `deploymentStatus` | Raw deployment status from the API server |
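
As a rough illustration only, assuming the controller reports standard Kubernetes-style conditions (the `Ready` condition type is the one referenced in the example below; the reason, message, and timestamp are assumptions), a reconciled resource's status might look like:

```yaml
status:
  observedGeneration: 2
  conditions:
    - type: Ready                    # checked in the example below
      status: "True"
      reason: DeploymentSynced       # illustrative reason
      message: deployment synced to StreamNative Cloud
      lastTransitionTime: "2025-01-24T00:00:00Z"
  deploymentStatus: {}               # raw status object returned by the API server
```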

## Example

1. Create a ComputeFlinkDeployment with a VVP template:

```yaml
apiVersion: resource.streamnative.io/v1alpha1
kind: ComputeFlinkDeployment
metadata:
  name: operator-test-v1
  namespace: default
spec:
  apiServerRef:
    name: test-connection
  workspaceName: test-operator-workspace
  template:
    syncingMode: PATCH
    deployment:
      userMetadata:
        name: operator-test-v1
        namespace: default
        displayName: operator-test-v1
      spec:
        state: RUNNING
        deploymentTargetName: default
        maxJobCreationAttempts: 99
        template:
          metadata:
            annotations:
              flink.queryable-state.enabled: 'false'
              flink.security.ssl.enabled: 'false'
          spec:
            artifact:
              jarUri: function://public/default/[email protected]
              mainArgs: --runner=FlinkRunner --attachedMode=false --checkpointingInterval=60000
              entryClass: org.apache.beam.examples.WordCount
              kind: JAR
              flinkVersion: "1.18.1"
              flinkImageTag: "1.18.1-stream3-scala_2.12-java17"
            flinkConfiguration:
              execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION
              execution.checkpointing.interval: 1min
              execution.checkpointing.timeout: 10min
              high-availability.type: kubernetes
              state.backend: filesystem
              taskmanager.memory.managed.fraction: '0.2'
            parallelism: 1
            numberOfTaskManagers: 1
            resources:
              jobmanager:
                cpu: "1"
                memory: 2G
              taskmanager:
                cpu: "1"
                memory: 2G
            logging:
              loggingProfile: default
              log4jLoggers:
                "": DEBUG
                com.company: DEBUG
```
2. Apply the YAML file:
```shell
kubectl apply -f deployment.yaml
```

3. Check the deployment status:

```shell
kubectl get computeflinkdeployment operator-test-v1
```

The deployment is ready when the `Ready` condition is `True`:

```shell
NAME READY AGE
operator-test-v1 True 1m
```

## Update Deployment

You can update the deployment by modifying the YAML file and reapplying it (see the sketch after this list). Most fields can be updated, including:
- Flink configuration
- Resources
- Parallelism
- Logging settings
- Artifact configuration
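
For example, a hedged sketch of such an edit: raising the job parallelism in `deployment.yaml` and reapplying the same file (only the changed field is shown, and the new value is illustrative):

```yaml
spec:
  template:
    deployment:
      spec:
        template:
          spec:
            parallelism: 2           # raised from 1 in the example above
```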

After applying the changes, check the status again to confirm that the deployment was updated successfully.

## Delete Deployment

To delete a ComputeFlinkDeployment resource:

```shell
kubectl delete computeflinkdeployment operator-test-v1
```

This will stop the Flink job and clean up all associated resources in StreamNative Cloud.