From 89723584f2a73ad19a2d722ca755341b831849a7 Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Tue, 21 Jan 2025 16:43:12 -0800 Subject: [PATCH 01/19] add feature spec for Serverless Container Runtimes Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- .../2025-01-serverless-feature-spec.md | 121 ++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 architecture/2025-01-serverless-feature-spec.md diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md new file mode 100644 index 0000000..c08f246 --- /dev/null +++ b/architecture/2025-01-serverless-feature-spec.md @@ -0,0 +1,121 @@ +# Topic: Serverless Container Runtimes + +* **Author**: Will Tsai (@willtsai) + +## Topic Summary + +Given the importance of serverless infrastructure in the modern application landscape, it is a priority for Radius to expand beyond Kubernetes and support serverless compute platforms. The initial expansion will focus on support for an unopinionated serverless container runtime (e.g. Azure Container Instances, AWS Elastic Container Service) before exploring integrations with other more opinionated serverless platforms (e.g. AWS Fargate, Azure Container Apps). + +This document describes the high-level overview for expanding the Radius platform to enable management of serverless container runtimes. The goal is to provide a seamless experience for developers to deploy and manage serverless containers in a way that is consistent with the existing Radius model, including Environments, Applications, and Resources. + +### Top level goals + +1. Genericize the Radius platform to support deployment of applications on serverless (and other) compute platforms beyond Kubernetes. +1. Enable developers to deploy and manage serverless containers in a way that is consistent with the existing Radius environment and application model. +1. Make all Radius features (Recipes, Connections, App Graph, UDT, etc.) available to engineers building applications on Radius-enabled supported serverless platforms. +1. Radius support for unopinionated or process-driven serverless container runtimes (e.g. AWS Elastic Container Service, Azure Container Instances, Azure Container Apps, AWS Fargate). + +### Non-goals (out of scope) + +1. Hosting the Radius control plane on serverless (and other) compute platforms outside of Kubernetes. +1. Radius support for more opinionated or event-driven serverless platforms (e.g. Azure Functions, AWS Lambda). + +## User profile and challenges + +Primarily, our users are application developers, operators, and platform engineers who wish to leverage Radius in building and managing applications on serverless compute platforms. Secondarily, our platform engineering users may have their own internal stakeholders (e.g. developers and IT teams) for whom they are building a developer platform. + +### User persona(s) + +**IT Operator**: An IT operator is responsible for helping developers deploy and manage applications on serverless compute platforms. They are familiar with containerized applications (e.g. Docker) and may have experience with Kubernetes and other cloud-native technologies. They are responsible for ensuring that applications are deployed and running correctly, and that the platform is secure, scalable, reliable, and follows best practices. They use Radius to preconfigure Environments and Recipes for use by the application developers that they support. 
+ +**Application Developer**: A developer building applications that need to run and be managed on serverless compute platforms. They are familiar with containerized applications (e.g. Docker), but may or may not have experience with Kubernetes and other cloud-native technologies. With Radius, they will primarily deal with application definition files (e.g. `app.bicep`) and will leverage Radius features like Connections and Recipes. In many cases, their environments will be provided to them and preconfigured with the necessary settings and Recipes. + +**Platform Engineer**: A platform engineer responsible for building and maintaining the developer platform that developers and operators use to build and deploy applications. They are likely familiar with Kubernetes and other cloud-native technologies, but are definitely familiar with containerized applications (e.g. Docker). They are responsible for ensuring that the platform is secure, scalable, reliable, easy to use, and enforces best practices. + +**Site Reliability Engineer**: Applies updates or patches to compute infrastructure and workloads as needed to ensure the application scales appropriately to continue running without issue. + +### Challenge(s) faced by the user + +With the additional complexity and specialized knowledge required to manage and operate Kubernetes clusters, many users are looking to serverless compute platforms as a way to simplify their infrastructure and reduce operational overhead. Services like ACI and ECS provide a way to run containerized workloads without the need to manage the underlying infrastructure, while still providing the benefits of containers like portability and scalability. Azure Container Apps and AWS Fargate take this a step further by providing a fully managed serverless container platform that abstracts away the need to manage containers altogether. + +Users who build applications exclusively on serverless compute platforms are not able to adopt Radius. These users face the same challenges as those who have chosen to adopt Radius for Kubernetes: making sense of complex architectures, suboptimal troubleshooting experiences, pain in dealing with a plethora of cross-platform tools, difficulties in enforcing best practices, and hindered team collaboration due to unclear separation of concerns. + +Even if the user uses a mix of Kubernetes and serverless and they have adopted Radius, the same challenges persist and are even exacerbated when they can only use Radius for only their Kubernetes applications. + +### Positive user outcome + +The value to the user is that they can now use Radius to build and manage their applications on serverless compute platforms and thus taking advantage of the same benefits that Radius provides for Kubernetes applications. + +## Key scenarios + + +### Scenario 1: Model Radius Environment, Application, and Container resources for serverless + +Enable the ability to define and run serverless compute with necessary extensions in a Radius [environment](https://docs.radapp.io/reference/resource-schema/core-schema/environment-schema/). + +Users can specify Radius [application](https://docs.radapp.io/reference/resource-schema/core-schema/application-schema/) definitions for applications running on serverless compute platforms. 
+> Must be sure to include serverless platform specific customizations via a `runtimes` property, similar to how Kubernetes patching was implemented for [containers](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes). + +Allow for Radius abstraction of a [container](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/) resource that can be deployed to serverless compute platforms, leveraging the [`runtimes` property](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes). Must be sure to include serverless platform specific configurations via [connections](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#connections), [extensions](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#extensions), and routing to other resources via [gateways](https://docs.radapp.io/reference/resource-schema/core-schema/gateway/). +> One idea to explore is whether we can we build extensibility via Recipes - i.e. allow Recipes for Containers, which themselves can be serverless containers. + +### Scenario 2: User interfaces for serverless--Radius API, CLI, Dashboard + +Enable deployment and management of serverless compute resources via the existing Radius API and CLI commands. + +Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. + +### Scenario 3: Punch-through to platform-specific features and incremental adoption of Radius into existing serverless applications + +Allow for platform-specific features to be used in Radius applications via abstraction "punch-through" mechanisms, similar to how Kubernetes-specific features are supported in Radius via [base YAML or PodSpec patching](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) functionalities. +> Stretch goal: Ability to add Radius to existing serverless applications without requiring a full rewrite of the application, similar to how Radius can be added to existing Kubernetes applications via [Kubernetes manifest](https://docs.radapp.io/tutorials/add-radius/) or [Helm chart](https://docs.radapp.io/tutorials/helm/) annotations. Note: The Kubernetes/Helm support works because Kubernetes itself is extensible. Other systems like ACA are not extensible in the same way but we should explore if there are options to make this work. + +## Key dependencies and risks + + + + +**Dependency: orchestrator and API for the underlying serverless platform** - For each additional serverless platform that Radius supports, we have a dependency on the underlying orchestrator for that platform. For example, for Azure Container Instances, we depend on the Azure API. For AWS Elastic Container Service, we depend on the AWS API. There is a risk that the underlying orchestrator may not provide the necessary APIs to support the Radius model, the APIs may not be stable or reliable, or the APIs may not be publicly available. This will need to be investigated as a part of the implementation. + +**Risk: platform specific features that cannot be implemented in Radius** - There might be platform-specific features that cannot (or should not) be implemented in Radius. For example, Kubernetes has features for taints and tolerations that are not common across compute platforms and thus should not be implemented in Radius. 
This risk can be mitigated by providing mechanisms to punch-through the Radius abstraction and use platform-specific features directly, like the [Kubernetes customization options](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) currently available in Radius.
+
+**Risk: Differing platforms between Radius control plane and application runtime** - A Kubernetes cluster must be deployed and maintained to host the Radius control plane even if the application is completely serverless, and this might not work for some customers (e.g. those who absolutely do not want to use Kubernetes). For now we will accept this risk and consider alternative hosting platforms for the Radius control plane to be out of scope.
+
+**Risk: Significant refactoring work anticipated** - There will be a significant amount of work to refactor the Radius code as part of the implementation, especially since the current codebase is heavily Kubernetes-centric. While this is an unavoidable risk, we will ensure that the refactoring work is done with future extensibility in mind so that it becomes easier to add support for additional compute platforms in the future.
+
+## Key assumptions to test and questions to answer
+
+**Assumption**: The Radius model can be extended to support serverless compute platforms without significant changes to the existing model. Serverless platforms have orchestrators that expose APIs that can be used to model the resources in Radius. We have validated these assumptions by building a prototype for Azure Container Instances.
+
+**Assumption**: As a result of the implementation, the refactoring done to the core Radius codebase to support serverless container runtimes should also allow for easier extensibility of Radius to support other new platforms going forward. In other words, the work done for serverless should be genericized within the Radius code and not resemble custom implementations for Kubernetes and select serverless platforms.
+
+**Question**: How do we handle the case where the underlying serverless platform (especially opinionated ones like Azure Container Apps or AWS Fargate) does not provide the necessary APIs to support the Radius model? We will answer this question by building prototypes for additional serverless platforms and evaluating the feasibility of supporting them in Radius.
+
+## Current state
+
+There has been a proof-of-concept for ACI support in Radius built on top of the ACI nGroups features.
Here is a demo of the prototype that has been created so far: https://youtu.be/NMNZE22nSQI?si=Dq7Q5WVKgHularsO&t=1201 + +## Details of user problem + + +## Desired user experience outcome + + +### Detailed user experience + + + +## Key investments + + +### Feature 1 + + +### Feature 2 + + +### Feature 3 + From 9adc6b4595f96b7a753935ff397f5c646720a20e Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Sat, 25 Jan 2025 21:32:14 -0800 Subject: [PATCH 02/19] update content Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- .../2025-01-serverless-feature-spec.md | 133 ++++++++++++++++-- 1 file changed, 120 insertions(+), 13 deletions(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index c08f246..eda210d 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -59,17 +59,17 @@ Users can specify Radius [application](https://docs.radapp.io/reference/resource Allow for Radius abstraction of a [container](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/) resource that can be deployed to serverless compute platforms, leveraging the [`runtimes` property](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes). Must be sure to include serverless platform specific configurations via [connections](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#connections), [extensions](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#extensions), and routing to other resources via [gateways](https://docs.radapp.io/reference/resource-schema/core-schema/gateway/). > One idea to explore is whether we can we build extensibility via Recipes - i.e. allow Recipes for Containers, which themselves can be serverless containers. -### Scenario 2: User interfaces for serverless--Radius API, CLI, Dashboard +### Scenario 2: "Punch-through" to platform-specific features and incremental adoption of Radius into existing serverless applications + +Allow for platform-specific features to be used in Radius applications via abstraction "punch-through" mechanisms, similar to how Kubernetes-specific features are supported in Radius via [base YAML or PodSpec patching](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) functionalities. For example, ACI has a confidential containers offering that may not be common across serverless platforms, but Radius should allow for this feature to be used in ACI applications. +> Stretch goal: Ability to add Radius to existing serverless applications without requiring a full rewrite of the application, similar to how Radius can be added to existing Kubernetes applications via [Kubernetes manifest](https://docs.radapp.io/tutorials/add-radius/) or [Helm chart](https://docs.radapp.io/tutorials/helm/) annotations. Note: The Kubernetes/Helm support works because Kubernetes itself is extensible. Other systems like ACA are not extensible in the same way but we should explore if there are options to make this work. + +### Scenario 3: User interfaces for serverless--Radius API, CLI, Dashboard Enable deployment and management of serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. 
-### Scenario 3: Punch-through to platform-specific features and incremental adoption of Radius into existing serverless applications - -Allow for platform-specific features to be used in Radius applications via abstraction "punch-through" mechanisms, similar to how Kubernetes-specific features are supported in Radius via [base YAML or PodSpec patching](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) functionalities. -> Stretch goal: Ability to add Radius to existing serverless applications without requiring a full rewrite of the application, similar to how Radius can be added to existing Kubernetes applications via [Kubernetes manifest](https://docs.radapp.io/tutorials/add-radius/) or [Helm chart](https://docs.radapp.io/tutorials/helm/) annotations. Note: The Kubernetes/Helm support works because Kubernetes itself is extensible. Other systems like ACA are not extensible in the same way but we should explore if there are options to make this work. - ## Key dependencies and risks @@ -96,18 +96,125 @@ Allow for platform-specific features to be used in Radius applications via abstr There has been a proof-of-concept for ACI support in Radius built on top of the ACI nGroups features. Here is a demo of the prototype that has been created so far: https://youtu.be/NMNZE22nSQI?si=Dq7Q5WVKgHularsO&t=1201 -## Details of user problem - - -## Desired user experience outcome - - -### Detailed user experience +## Detailed user experience +### Step 1: Define a Radius Environment for serverless + +1. The user defines a new Radius Environment for serverless compute by creating a new Environment definition file (e.g. `env.bicep`) and specifying the necessary settings for the serverless platform. + +```bicep +resource environment 'Applications.Core/environments@2023-10-01-preview' = { + name: 'myenv' + properties: { + compute: { + kind: 'aci' // Required. The kind of container runtime to use, e.g. 'aci' for Azure Container Instances and 'ecs' for AWS Elastic Container Service + namespace: 'default' // Required. Need to figure out what would be the equivalent of a namespace in the specific serverless platform + } + extensions: [ + { + kind: 'kubernetesMetadata' + labels: { + 'team.contact.name': 'frontend' + } + // Add other serverless platform-specific extensions here + } + ] + } +} +``` + +### Step 2: Define a Radius Application for serverless + +1. The user defines a new Radius Application for serverless compute by creating a new Application definition file (e.g. `app.bicep`) and specifying the necessary settings for the container runtime platform. Note that the application definitions are container runtime platform-agnostic, thus this same application definition can be deployed to both Kubernetes and serverless compute platforms. + +```bicep +resource app 'Applications.Core/applications@2023-10-01-preview' = { + name: 'myapp' + properties: { + environment: environment + extensions: [ // these are all currently Kubernetes-centric, will need to update for serverless + { + kind: 'kubernetesNamespace' // need to genericize this to be platform-agnostic, rename it to allow for encompassing both Kubernetes and serverless namespaces (e.g. NGroups for ACI) + namespace: 'myapp' + } + { + kind: 'kubernetesMetadata' // need to genericize this to be platform-agnostic, rename it to allow for encompassing both Kubernetes and serverless metadata + labels: { + 'team.contact.name': 'frontend' + } + } + ] + } +} +``` + +### Step 3: Define a Radius Container for serverless + + +1. 
The user defines a Radius Container within the application definition (e.g. `app.bicep`). Note that the container definitions are container runtime platform-agnostic, thus this same container definition can be deployed to both Kubernetes and serverless compute platforms. + +```bicep +resource demo 'Applications.Core/containers@2023-10-01-preview' = { + name: 'demo' + properties: { + application: application + container: { + image: 'ghcr.io/radius-project/samples/demo:latest' + ports: { + web: { + containerPort: 3000 + } + } + } + } +} +``` + +### Step 4: Define and connect other resources to the serverless container + +1. The user defines other resources (e.g. databases, message queues, etc.) that the container can connect to. These resources can be defined in the same application definition file (e.g. `app.bicep`) and connected to the container using the `connections` property for serverless containers just as they can be today for Kubernetes containers. + +```bicep +resource demo 'Applications.Core/containers@2023-10-01-preview' = { + name: 'demo' + properties: { + application: application + container: { + image: 'ghcr.io/radius-project/samples/demo:latest' + ports: { + web: { + containerPort: 3000 + } + } + } + connections: { + redis: { + source: db.id + } + } + } +} + +resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = { + name: 'db' + properties: { + application: application + environment: environment + } +} +``` + +### Step 5: Deploy the application to the serverless platform + +### Step 6: View the serverless containers in the CLI and Radius Dashboard + +### Step 7: Make use of platform-specific features in the application + + ## Key investments From 2b944f38e9caf81503abdb5453263478409863a4 Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Mon, 27 Jan 2025 21:56:25 -0800 Subject: [PATCH 03/19] update content Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- .../2025-01-serverless-feature-spec.md | 86 +++++++++++++------ 1 file changed, 60 insertions(+), 26 deletions(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index eda210d..4a2d88f 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -8,6 +8,10 @@ Given the importance of serverless infrastructure in the modern application land This document describes the high-level overview for expanding the Radius platform to enable management of serverless container runtimes. The goal is to provide a seamless experience for developers to deploy and manage serverless containers in a way that is consistent with the existing Radius model, including Environments, Applications, and Resources. +## Terms and definitions + +- nGroups: ACI feature that provides you with advanced capabilities for managing multiple related container groups, e.g. maintaining multiple instances, rolling upgrades, high availability, managed identity support, confidential container support, load balancing, zone rebalancing. + ### Top level goals 1. Genericize the Radius platform to support deployment of applications on serverless (and other) compute platforms beyond Kubernetes. 
@@ -56,19 +60,17 @@ Enable the ability to define and run serverless compute with necessary extension Users can specify Radius [application](https://docs.radapp.io/reference/resource-schema/core-schema/application-schema/) definitions for applications running on serverless compute platforms. > Must be sure to include serverless platform specific customizations via a `runtimes` property, similar to how Kubernetes patching was implemented for [containers](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes). -Allow for Radius abstraction of a [container](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/) resource that can be deployed to serverless compute platforms, leveraging the [`runtimes` property](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes). Must be sure to include serverless platform specific configurations via [connections](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#connections), [extensions](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#extensions), and routing to other resources via [gateways](https://docs.radapp.io/reference/resource-schema/core-schema/gateway/). -> One idea to explore is whether we can we build extensibility via Recipes - i.e. allow Recipes for Containers, which themselves can be serverless containers. +Allow for Radius abstraction of a [container](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/) resource that can be deployed to serverless compute platforms. Must be sure to include serverless platform specific configurations via [connections](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#connections), [extensions](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#extensions), and routing to other resources via [gateways](https://docs.radapp.io/reference/resource-schema/core-schema/gateway/). +> One future idea to explore is whether we can we build extensibility via Recipes - i.e. allow Recipes for Containers, which themselves can be serverless containers. ### Scenario 2: "Punch-through" to platform-specific features and incremental adoption of Radius into existing serverless applications -Allow for platform-specific features to be used in Radius applications via abstraction "punch-through" mechanisms, similar to how Kubernetes-specific features are supported in Radius via [base YAML or PodSpec patching](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) functionalities. For example, ACI has a confidential containers offering that may not be common across serverless platforms, but Radius should allow for this feature to be used in ACI applications. +Allow for platform-specific features to be used in Radius applications via abstraction "punch-through" mechanisms, similar to how Kubernetes-specific features are supported in Radius via [base YAML or PodSpec patching](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) functionalities that is achieved through the [`runtimes`](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes) property in the container definition. For example, ACI has a confidential containers offering that may not be common across serverless platforms, but Radius should allow for this feature to be used in ACI containers. 
> Stretch goal: Ability to add Radius to existing serverless applications without requiring a full rewrite of the application, similar to how Radius can be added to existing Kubernetes applications via [Kubernetes manifest](https://docs.radapp.io/tutorials/add-radius/) or [Helm chart](https://docs.radapp.io/tutorials/helm/) annotations. Note: The Kubernetes/Helm support works because Kubernetes itself is extensible. Other systems like ACA are not extensible in the same way but we should explore if there are options to make this work. ### Scenario 3: User interfaces for serverless--Radius API, CLI, Dashboard -Enable deployment and management of serverless compute resources via the existing Radius API and CLI commands. - -Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. +Enable deployment and management of serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. ## Key dependencies and risks @@ -102,25 +104,31 @@ There has been a proof-of-concept for ACI support in Radius built on top of the Step 2 … --> -### Step 1: Define a Radius Environment for serverless +### Step 1: Define and deploy a Radius Environment for serverless -1. The user defines a new Radius Environment for serverless compute by creating a new Environment definition file (e.g. `env.bicep`) and specifying the necessary settings for the serverless platform. +The user defines a new Radius Environment for serverless compute by creating a new Environment definition file (e.g. `env.bicep`) and specifying the necessary settings for the serverless platform before deploying the environment using `rad deploy env.bicep`. -```bicep +```diff resource environment 'Applications.Core/environments@2023-10-01-preview' = { name: 'myenv' properties: { compute: { - kind: 'aci' // Required. The kind of container runtime to use, e.g. 'aci' for Azure Container Instances and 'ecs' for AWS Elastic Container Service - namespace: 'default' // Required. Need to figure out what would be the equivalent of a namespace in the specific serverless platform ++ kind: 'aci' // Required. The kind of container runtime to use, e.g. 'aci' for Azure Container Instances and 'ecs' for AWS Elastic Container Service ++ namespace: 'default' // This is currently required, but may be made optional if it doesn't apply to all compute platforms ++ resourceGroup: '/subscriptions/.../resourceGroups/myrg' // This was introduced as a part of the ACI prototype, but may be changed depending on the implementation, need to figure out if namespace and resourceGroup should be combined into a single more generic property } ++ providers: { // This was introduced as a part of the ACI prototype, but may be changed depending on the implementation ++ azure: { ++ scope: '/subscriptions/.../resourceGroups/myrg' ++ } ++ } extensions: [ { kind: 'kubernetesMetadata' labels: { 'team.contact.name': 'frontend' } - // Add other serverless platform-specific extensions here ++ // Add other serverless platform-specific extensions here } ] } @@ -129,35 +137,35 @@ resource environment 'Applications.Core/environments@2023-10-01-preview' = { ### Step 2: Define a Radius Application for serverless -1. The user defines a new Radius Application for serverless compute by creating a new Application definition file (e.g. 
`app.bicep`) and specifying the necessary settings for the container runtime platform. Note that the application definitions are container runtime platform-agnostic, thus this same application definition can be deployed to both Kubernetes and serverless compute platforms. +The user defines a new Radius Application for serverless compute by creating a new Application definition file (e.g. `app.bicep`) and specifying the necessary settings for the container runtime platform. Note that the application definitions are container runtime platform-agnostic, thus this same application definition can be deployed to both Kubernetes and serverless compute platforms. -```bicep +```diff resource app 'Applications.Core/applications@2023-10-01-preview' = { name: 'myapp' properties: { environment: environment - extensions: [ // these are all currently Kubernetes-centric, will need to update for serverless ++ extensions: [ // these are all currently Kubernetes-centric, will need to update for serverless { - kind: 'kubernetesNamespace' // need to genericize this to be platform-agnostic, rename it to allow for encompassing both Kubernetes and serverless namespaces (e.g. NGroups for ACI) + kind: 'kubernetesNamespace' namespace: 'myapp' } { - kind: 'kubernetesMetadata' // need to genericize this to be platform-agnostic, rename it to allow for encompassing both Kubernetes and serverless metadata + kind: 'kubernetesMetadata' labels: { 'team.contact.name': 'frontend' } } ++ // add other serverless extensions as applicable ] } } ``` ### Step 3: Define a Radius Container for serverless - -1. The user defines a Radius Container within the application definition (e.g. `app.bicep`). Note that the container definitions are container runtime platform-agnostic, thus this same container definition can be deployed to both Kubernetes and serverless compute platforms. +The user defines a Radius Container within the application definition (e.g. `app.bicep`) and specifies relevant container properties, such as `extensions` or `runtimes` to set platform specific configurations. Note that the container definitions are container runtime platform-agnostic, thus this same container definition can be deployed to both Kubernetes and serverless compute platforms if common functionalities across compute platforms are used. -```bicep +```diff resource demo 'Applications.Core/containers@2023-10-01-preview' = { name: 'demo' properties: { @@ -169,14 +177,30 @@ resource demo 'Applications.Core/containers@2023-10-01-preview' = { containerPort: 3000 } } ++ // all the other container properties should also be implemented (e.g. `env`, `readinessProbe`, `livenessProbe`, etc.) } + extensions: [ ++ { ++ kind: 'manualScaling' ++ replicas: 2 ++ } + ] + runtimes: { ++ aci: { ++ // Add ACI-specific properties here to punch-through the Radius abstraction, e.g. sku, osType, etc. ++ sku: 'Confidential' // 'Standard', 'Dedicated', etc. ++ } ++ ecs: { ++ // Add AWS ECS-specific properties here to punch-through the Radius abstraction ++ } ++ } } } ``` ### Step 4: Define and connect other resources to the serverless container -1. The user defines other resources (e.g. databases, message queues, etc.) that the container can connect to. These resources can be defined in the same application definition file (e.g. `app.bicep`) and connected to the container using the `connections` property for serverless containers just as they can be today for Kubernetes containers. +The user defines other resources (e.g. databases, message queues, etc.) 
that the container can connect to. These resources can be defined in the same application definition file (e.g. `app.bicep`) and connected to the container using the `connections` property for serverless containers just as they can be today for Kubernetes containers. ```bicep resource demo 'Applications.Core/containers@2023-10-01-preview' = { @@ -210,19 +234,29 @@ resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = { ### Step 5: Deploy the application to the serverless platform +The user deploys the application to the serverless platform by running the `rad run` or `rad deploy` command with the application definition file targeting the serverless environment (e.g. `app.bicep`). + ### Step 6: View the serverless containers in the CLI and Radius Dashboard -### Step 7: Make use of platform-specific features in the application - +After successful deployment, the user can view the serverless containers in the CLI using the `rad app graph` command or via the Application Graph in the Radius Dashboard. ## Key investments -### Feature 1 +### Feature 1: Model Radius Environment resources for serverless +Add support for defining and deploying serverless compute resources in a Radius Environment definition file (e.g. `env.bicep`). -### Feature 2 +### Feature 2: Model Radius Application resources for serverless +Add support for defining serverless compute resources in a Radius Application definition file (e.g. `app.bicep`). -### Feature 3 +### Feature 3: Model Radius Container resources for serverless +Add support for defining serverless container resources for container functionalities that are common across all platforms (e.g. `image`, `env`, `volumes`, etc.) within a Radius application definition file. + +### Feature 4: Punch-through to platform-specific features + +Add support for platform-specific features for containers via abstraction "punch-through" mechanisms. This will allow users to use platform-specific features, such as confidential containers or spot instances, in Radius applications. + +> This is similar to how Kubernetes-specific features are supported in Radius via base YAML or PodSpec patching functionalities. \ No newline at end of file From f733eb0c55089edf034ff20972590f9277034b68 Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Mon, 27 Jan 2025 22:06:30 -0800 Subject: [PATCH 04/19] add references and terms Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 4a2d88f..e53073b 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -10,7 +10,13 @@ This document describes the high-level overview for expanding the Radius platfor ## Terms and definitions -- nGroups: ACI feature that provides you with advanced capabilities for managing multiple related container groups, e.g. maintaining multiple instances, rolling upgrades, high availability, managed identity support, confidential container support, load balancing, zone rebalancing. +- [AWS Elastic Container Service (ECS)](https://aws.amazon.com/ecs/): A fully managed container orchestration service that allows you to run containers on a cluster of virtual machines. 
+- [Azure Container Instances (ACI)](https://learn.microsoft.com/en-us/azure/container-instances/): A serverless container runtime service that enables you to run containers on-demand without having to manage the underlying infrastructure. +- [AWS Fargate](https://aws.amazon.com/fargate/): A serverless compute engine for containers that allows you to run containers without having to manage the underlying infrastructure.- [Azure Container Apps (ACA)](https://learn.microsoft.com/en-us/azure/container-apps/): A fully managed serverless container platform that abstracts away the need to manage containers altogether. +- ACI [Container Groups](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-groups): A container group is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, local network, and storage volumes. It's similar in concept to a pod in Kubernetes. +- ACI [nGroups](https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups): ACI feature that provides you with advanced capabilities for managing multiple related container groups, e.g. maintaining multiple instances, rolling upgrades, high availability, managed identity support, confidential container support, load balancing, zone rebalancing. +- ACI [Confidential Containers](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-confidential-overview): ACI feature that provides a secure enclave for your containerized applications to run in a confidential computing environment. +- ACI [Spot Instances](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-spot-containers-overview): ACI feature that allows you to run interruptible workloads at a reduced cost compared to the standard price by taking advantage of unused capacity in Azure datacenters. ### Top level goals @@ -198,6 +204,12 @@ resource demo 'Applications.Core/containers@2023-10-01-preview' = { } ``` +> AWS ECS schema reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html + +> AWS ECS API reference: https://docs.aws.amazon.com/pdfs/AmazonECS/latest/APIReference/ecs-api.pdf#Welcome + +> ACI schema reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-reference-yaml + ### Step 4: Define and connect other resources to the serverless container The user defines other resources (e.g. databases, message queues, etc.) that the container can connect to. These resources can be defined in the same application definition file (e.g. `app.bicep`) and connected to the container using the `connections` property for serverless containers just as they can be today for Kubernetes containers. @@ -259,4 +271,4 @@ Add support for defining serverless container resources for container functional Add support for platform-specific features for containers via abstraction "punch-through" mechanisms. This will allow users to use platform-specific features, such as confidential containers or spot instances, in Radius applications. -> This is similar to how Kubernetes-specific features are supported in Radius via base YAML or PodSpec patching functionalities. \ No newline at end of file +> This is similar to how Kubernetes-specific features are supported in Radius via base YAML or [PodSpec patching](https://docs.radapp.io/guides/author-apps/kubernetes/patch-podspec/) functionalities. 
\ No newline at end of file From 9ab9026376bdde09b54eebe11b86971ed484239c Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Mon, 27 Jan 2025 22:07:44 -0800 Subject: [PATCH 05/19] bullet point update Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index e53073b..88592a5 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -12,7 +12,8 @@ This document describes the high-level overview for expanding the Radius platfor - [AWS Elastic Container Service (ECS)](https://aws.amazon.com/ecs/): A fully managed container orchestration service that allows you to run containers on a cluster of virtual machines. - [Azure Container Instances (ACI)](https://learn.microsoft.com/en-us/azure/container-instances/): A serverless container runtime service that enables you to run containers on-demand without having to manage the underlying infrastructure. -- [AWS Fargate](https://aws.amazon.com/fargate/): A serverless compute engine for containers that allows you to run containers without having to manage the underlying infrastructure.- [Azure Container Apps (ACA)](https://learn.microsoft.com/en-us/azure/container-apps/): A fully managed serverless container platform that abstracts away the need to manage containers altogether. +- [AWS Fargate](https://aws.amazon.com/fargate/): A serverless compute engine for containers that allows you to run containers without having to manage the underlying infrastructure. +- [Azure Container Apps (ACA)](https://learn.microsoft.com/en-us/azure/container-apps/): A fully managed serverless container platform that abstracts away the need to manage containers altogether. - ACI [Container Groups](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-groups): A container group is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, local network, and storage volumes. It's similar in concept to a pod in Kubernetes. - ACI [nGroups](https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups): ACI feature that provides you with advanced capabilities for managing multiple related container groups, e.g. maintaining multiple instances, rolling upgrades, high availability, managed identity support, confidential container support, load balancing, zone rebalancing. - ACI [Confidential Containers](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-confidential-overview): ACI feature that provides a secure enclave for your containerized applications to run in a confidential computing environment. 
From b7ec17c81343e7df7b6a3505864c957b5d340166 Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Mon, 27 Jan 2025 22:11:53 -0800 Subject: [PATCH 06/19] add feature 5 Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 88592a5..80ab203 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -272,4 +272,8 @@ Add support for defining serverless container resources for container functional Add support for platform-specific features for containers via abstraction "punch-through" mechanisms. This will allow users to use platform-specific features, such as confidential containers or spot instances, in Radius applications. -> This is similar to how Kubernetes-specific features are supported in Radius via base YAML or [PodSpec patching](https://docs.radapp.io/guides/author-apps/kubernetes/patch-podspec/) functionalities. \ No newline at end of file +> This is similar to how Kubernetes-specific features are supported in Radius via base YAML or [PodSpec patching](https://docs.radapp.io/guides/author-apps/kubernetes/patch-podspec/) functionalities. + +### Feature 5: User interfaces for serverless--Radius API, CLI, Dashboard + +Add support for deploying and managing serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. \ No newline at end of file From 4ef30308f7cefdebc9f3a355b16a5a92c0b28910 Mon Sep 17 00:00:00 2001 From: Will <28876888+willtsai@users.noreply.github.com> Date: Wed, 5 Feb 2025 15:18:26 -0800 Subject: [PATCH 07/19] Update architecture/2025-01-serverless-feature-spec.md Co-authored-by: Zach Casper Signed-off-by: Will <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 80ab203..e13c7d3 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -4,7 +4,7 @@ ## Topic Summary -Given the importance of serverless infrastructure in the modern application landscape, it is a priority for Radius to expand beyond Kubernetes and support serverless compute platforms. The initial expansion will focus on support for an unopinionated serverless container runtime (e.g. Azure Container Instances, AWS Elastic Container Service) before exploring integrations with other more opinionated serverless platforms (e.g. AWS Fargate, Azure Container Apps). +Given the importance of serverless infrastructure in the modern application landscape, it is a priority for Radius to expand beyond Kubernetes and support additional container platforms with lower operational overhead. The initial expansion will focus on support for Azure Container Instances, then AWS Elastic Container Service including AWS Fargate. This will be followed by more feature rich platforms including Azure Container Apps and eventually Google CloudRun. This document describes the high-level overview for expanding the Radius platform to enable management of serverless container runtimes. 
The goal is to provide a seamless experience for developers to deploy and manage serverless containers in a way that is consistent with the existing Radius model, including Environments, Applications, and Resources. From 5ba1be9e1273eb22c632aaf4f935832b7c06d352 Mon Sep 17 00:00:00 2001 From: Will <28876888+willtsai@users.noreply.github.com> Date: Wed, 5 Feb 2025 15:28:03 -0800 Subject: [PATCH 08/19] Apply suggestions from code review Co-authored-by: Zach Casper Signed-off-by: Will <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index e13c7d3..76a27dc 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -1,4 +1,4 @@ -# Topic: Serverless Container Runtimes +# Topic: Serverless Container Platforms * **Author**: Will Tsai (@willtsai) @@ -6,13 +6,12 @@ Given the importance of serverless infrastructure in the modern application landscape, it is a priority for Radius to expand beyond Kubernetes and support additional container platforms with lower operational overhead. The initial expansion will focus on support for Azure Container Instances, then AWS Elastic Container Service including AWS Fargate. This will be followed by more feature rich platforms including Azure Container Apps and eventually Google CloudRun. -This document describes the high-level overview for expanding the Radius platform to enable management of serverless container runtimes. The goal is to provide a seamless experience for developers to deploy and manage serverless containers in a way that is consistent with the existing Radius model, including Environments, Applications, and Resources. +This document describes the high-level overview for deploying Radius-managed applications to serverless container platforms. The goal is to provide a consistent development experience for developers regardless of which container platform the platform engineer has chosen. In other words, with serverless support in Radius, applications will now be portable between cloud providers and now container platforms. ## Terms and definitions -- [AWS Elastic Container Service (ECS)](https://aws.amazon.com/ecs/): A fully managed container orchestration service that allows you to run containers on a cluster of virtual machines. +- [AWS Elastic Container Service (ECS)](https://aws.amazon.com/ecs/): A fully managed container orchestration service that allows you to run containerized applications. Containers are grouped into tasks (analogous to Kubernetes pods) which are then grouped into a service which managed ingress and autoscaling (analogous to a Kubernetes service of type loadBalancer). ECS can run containerized applications on EC2 virtual machines or on AWS Fargate serverless infrastructure. - [Azure Container Instances (ACI)](https://learn.microsoft.com/en-us/azure/container-instances/): A serverless container runtime service that enables you to run containers on-demand without having to manage the underlying infrastructure. -- [AWS Fargate](https://aws.amazon.com/fargate/): A serverless compute engine for containers that allows you to run containers without having to manage the underlying infrastructure. 
- [Azure Container Apps (ACA)](https://learn.microsoft.com/en-us/azure/container-apps/): A fully managed serverless container platform that abstracts away the need to manage containers altogether. - ACI [Container Groups](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-groups): A container group is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, local network, and storage volumes. It's similar in concept to a pod in Kubernetes. - ACI [nGroups](https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups): ACI feature that provides you with advanced capabilities for managing multiple related container groups, e.g. maintaining multiple instances, rolling upgrades, high availability, managed identity support, confidential container support, load balancing, zone rebalancing. @@ -47,7 +46,7 @@ Primarily, our users are application developers, operators, and platform enginee ### Challenge(s) faced by the user -With the additional complexity and specialized knowledge required to manage and operate Kubernetes clusters, many users are looking to serverless compute platforms as a way to simplify their infrastructure and reduce operational overhead. Services like ACI and ECS provide a way to run containerized workloads without the need to manage the underlying infrastructure, while still providing the benefits of containers like portability and scalability. Azure Container Apps and AWS Fargate take this a step further by providing a fully managed serverless container platform that abstracts away the need to manage containers altogether. +With the additional complexity and specialized knowledge required to manage and operate Kubernetes clusters, many users are looking to serverless compute platforms as a way to simplify their infrastructure and reduce operational overhead. Services like Azure Container Apps and ECS Fargate provide a way to run containerized workloads without the need to manage the underlying infrastructure, while still providing the benefits of containers like portability and scalability. Users who build applications exclusively on serverless compute platforms are not able to adopt Radius. These users face the same challenges as those who have chosen to adopt Radius for Kubernetes: making sense of complex architectures, suboptimal troubleshooting experiences, pain in dealing with a plethora of cross-platform tools, difficulties in enforcing best practices, and hindered team collaboration due to unclear separation of concerns. From f86ef4d24fc266a59f0357c36e9586398c89f880 Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Thu, 13 Feb 2025 20:11:34 -0800 Subject: [PATCH 09/19] address feedback Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- .../2025-01-serverless-feature-spec.md | 132 ++++++++++++------ 1 file changed, 88 insertions(+), 44 deletions(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 76a27dc..43723de 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -6,29 +6,31 @@ Given the importance of serverless infrastructure in the modern application landscape, it is a priority for Radius to expand beyond Kubernetes and support additional container platforms with lower operational overhead. 
The initial expansion will focus on support for Azure Container Instances, then AWS Elastic Container Service including AWS Fargate. This will be followed by more feature rich platforms including Azure Container Apps and eventually Google CloudRun. -This document describes the high-level overview for deploying Radius-managed applications to serverless container platforms. The goal is to provide a consistent development experience for developers regardless of which container platform the platform engineer has chosen. In other words, with serverless support in Radius, applications will now be portable between cloud providers and now container platforms. +This document describes the high-level overview for deploying Radius-managed applications to serverless container platforms. The goal is to provide a consistent development experience for developers regardless of which container platform the platform engineer has chosen. In other words, with serverless support in Radius, applications will now be portable between cloud providers and container platforms too. ## Terms and definitions - [AWS Elastic Container Service (ECS)](https://aws.amazon.com/ecs/): A fully managed container orchestration service that allows you to run containerized applications. Containers are grouped into tasks (analogous to Kubernetes pods) which are then grouped into a service which managed ingress and autoscaling (analogous to a Kubernetes service of type loadBalancer). ECS can run containerized applications on EC2 virtual machines or on AWS Fargate serverless infrastructure. -- [Azure Container Instances (ACI)](https://learn.microsoft.com/en-us/azure/container-instances/): A serverless container runtime service that enables you to run containers on-demand without having to manage the underlying infrastructure. +- [Azure Container Instances (ACI)](https://learn.microsoft.com/en-us/azure/container-instances/): A serverless container platform service that enables you to run containers on-demand without having to manage the underlying infrastructure. - [Azure Container Apps (ACA)](https://learn.microsoft.com/en-us/azure/container-apps/): A fully managed serverless container platform that abstracts away the need to manage containers altogether. - ACI [Container Groups](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-container-groups): A container group is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, local network, and storage volumes. It's similar in concept to a pod in Kubernetes. - ACI [nGroups](https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups): ACI feature that provides you with advanced capabilities for managing multiple related container groups, e.g. maintaining multiple instances, rolling upgrades, high availability, managed identity support, confidential container support, load balancing, zone rebalancing. - ACI [Confidential Containers](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-confidential-overview): ACI feature that provides a secure enclave for your containerized applications to run in a confidential computing environment. 
- ACI [Spot Instances](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-spot-containers-overview): ACI feature that allows you to run interruptible workloads at a reduced cost compared to the standard price by taking advantage of unused capacity in Azure datacenters. +- User Defined Types (UDT): A Radius feature that allows you to define custom resource types that can be used in your application definitions. As of this writing, the feature is a work in progress with its design document available [here](https://github.com/radius-project/design-notes/blob/main/architecture/2024-07-user-defined-types.md). ### Top level goals 1. Genericize the Radius platform to support deployment of applications on serverless (and other) compute platforms beyond Kubernetes. 1. Enable developers to deploy and manage serverless containers in a way that is consistent with the existing Radius environment and application model. 1. Make all Radius features (Recipes, Connections, App Graph, UDT, etc.) available to engineers building applications on Radius-enabled supported serverless platforms. -1. Radius support for unopinionated or process-driven serverless container runtimes (e.g. AWS Elastic Container Service, Azure Container Instances, Azure Container Apps, AWS Fargate). +1. Radius support for unopinionated (e.g. AWS Elastic Container Service, Azure Container Instances) or process-driven (Azure Container Apps, AWS Fargate, Google Cloudrun) serverless container platforms. ### Non-goals (out of scope) -1. Hosting the Radius control plane on serverless (and other) compute platforms outside of Kubernetes. -1. Radius support for more opinionated or event-driven serverless platforms (e.g. Azure Functions, AWS Lambda). +1. Hosting the Radius control plane on serverless (and other) compute platforms outside of Kubernetes. This is a separate project tracked in the roadmap: https://github.com/radius-project/roadmap/issues/39. +1. The ability to run the Radius control plane separately from the compute cluster to which it is deploying applications. This is a separate project tracked in the roadmap: https://github.com/radius-project/roadmap/issues/42. +1. Radius support for more opinionated or event-driven serverless platforms (e.g. Azure Functions, AWS Lambda, Knative, etc.). ## User profile and challenges @@ -36,7 +38,7 @@ Primarily, our users are application developers, operators, and platform enginee ### User persona(s) -**IT Operator**: An IT operator is responsible for helping developers deploy and manage applications on serverless compute platforms. They are familiar with containerized applications (e.g. Docker) and may have experience with Kubernetes and other cloud-native technologies. They are responsible for ensuring that applications are deployed and running correctly, and that the platform is secure, scalable, reliable, and follows best practices. They use Radius to preconfigure Environments and Recipes for use by the application developers that they support. +**IT Operator**: An IT operator is responsible for helping developers deploy and manage applications on serverless compute platforms. They are familiar with containerized applications and may have experience with Kubernetes and other cloud-native technologies. They are responsible for ensuring that applications are deployed and running correctly, and that the platform is secure, scalable, reliable, and follows best practices. 
They use Radius to preconfigure Environments and Recipes for use by the application developers that they support. **Application Developer**: A developer building applications that need to run and be managed on serverless compute platforms. They are familiar with containerized applications (e.g. Docker), but may or may not have experience with Kubernetes and other cloud-native technologies. With Radius, they will primarily deal with application definition files (e.g. `app.bicep`) and will leverage Radius features like Connections and Recipes. In many cases, their environments will be provided to them and preconfigured with the necessary settings and Recipes. @@ -48,9 +50,7 @@ Primarily, our users are application developers, operators, and platform enginee With the additional complexity and specialized knowledge required to manage and operate Kubernetes clusters, many users are looking to serverless compute platforms as a way to simplify their infrastructure and reduce operational overhead. Services like Azure Container Apps and ECS Fargate provide a way to run containerized workloads without the need to manage the underlying infrastructure, while still providing the benefits of containers like portability and scalability. -Users who build applications exclusively on serverless compute platforms are not able to adopt Radius. These users face the same challenges as those who have chosen to adopt Radius for Kubernetes: making sense of complex architectures, suboptimal troubleshooting experiences, pain in dealing with a plethora of cross-platform tools, difficulties in enforcing best practices, and hindered team collaboration due to unclear separation of concerns. - -Even if the user uses a mix of Kubernetes and serverless and they have adopted Radius, the same challenges persist and are even exacerbated when they can only use Radius for only their Kubernetes applications. +For application developers, partcularly those deploying to Kubernetes, using Radius helps them address pain points in: making sense of complex architectures, suboptimal troubleshooting experiences, pain in dealing with a plethora of cross-platform tools, difficulties in enforcing best practices, and hindered team collaboration due to unclear separation of concerns. However, without support for additional compute platforms like serverless, developers will continue to be forced to choose a container platform prior to actually building the application. It is very difficult to move between platforms once an application has been built — even if it is containerized. ### Positive user outcome @@ -67,12 +67,12 @@ Users can specify Radius [application](https://docs.radapp.io/reference/resource > Must be sure to include serverless platform specific customizations via a `runtimes` property, similar to how Kubernetes patching was implemented for [containers](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes). Allow for Radius abstraction of a [container](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/) resource that can be deployed to serverless compute platforms. Must be sure to include serverless platform specific configurations via [connections](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#connections), [extensions](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#extensions), and routing to other resources via [gateways](https://docs.radapp.io/reference/resource-schema/core-schema/gateway/). 
-> One future idea to explore is whether we can we build extensibility via Recipes - i.e. allow Recipes for Containers, which themselves can be serverless containers.
+> One future idea to explore is whether we can build extensibility via UDT and Recipes - i.e. allow Recipes to deploy predefined Container types, which themselves can be serverless containers. In other words, developers can deploy a UDT container resource using a Recipe such that all the user has to do when declaring a container is to specify the bare minimum (e.g. name, application) and let the predefined Recipe handle the rest. This will require the implementation of UDTs (e.g. defining specific types of containers as individual resource types).

### Scenario 2: "Punch-through" to platform-specific features and incremental adoption of Radius into existing serverless applications

Allow for platform-specific features to be used in Radius applications via abstraction "punch-through" mechanisms, similar to how Kubernetes-specific features are supported in Radius via [base YAML or PodSpec patching](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) functionalities that is achieved through the [`runtimes`](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes) property in the container definition. For example, ACI has a confidential containers offering that may not be common across serverless platforms, but Radius should allow for this feature to be used in ACI containers.

-> Stretch goal: Ability to add Radius to existing serverless applications without requiring a full rewrite of the application, similar to how Radius can be added to existing Kubernetes applications via [Kubernetes manifest](https://docs.radapp.io/tutorials/add-radius/) or [Helm chart](https://docs.radapp.io/tutorials/helm/) annotations. Note: The Kubernetes/Helm support works because Kubernetes itself is extensible. Other systems like ACA are not extensible in the same way but we should explore if there are options to make this work.
+> Stretch goal: Ability to add Radius to existing serverless applications without requiring a full rewrite of the application, similar to how Radius can be added to existing Kubernetes applications via [Kubernetes manifest](https://docs.radapp.io/tutorials/add-radius/) or [Helm chart](https://docs.radapp.io/tutorials/helm/) annotations. Note: The Kubernetes/Helm support works because Kubernetes itself is extensible. Other systems like ACI, ECS, ACA may not be extensible in the same way but we should explore if there are options to make this work.

### Scenario 3: User interfaces for serverless--Radius API, CLI, Dashboard

Enable deployment and management of serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management.

@@ -89,6 +89,8 @@ Enable deployment and management of serverless compute resources via the existin

**Risk: Differing platforms between Radius control plane and application runtime** - A Kubernetes cluster must still be deployed and maintained to host the Radius control plane even if the application is completely serverless, and this might not work for some customers (e.g. those who absolutely do not want to use Kubernetes). For now we will accept this risk and consider alternative hosting platforms for the Radius control plane to be out of scope.
+**Risk: Deploying to multiple clusters** - Given that the initial implementation for Serverless support will assume that the Radius control plane is hosted on Kubernetes and can deploy applications to serverless platforms, we might have to partially implement (or at least take into consideration) the [Inter-cluster app deployment and management](https://github.com/radius-project/roadmap/issues/42) feature that is currently in our roadmap. In other words, Radius should be able to manage applications across environments that are not on the same Kubernetes cluster as the control plane. This will be something to be explored as part of the tech design and implementation. + **Risk: Significant refactoring work anticipated** - There will be a significant amount of work to refactor the Radius code as a part of implementation, especially since the current codebase is heavily Kubernetes-centric. While this is an unavoidable risk, we will ensure that the refactoring work is done with future extensibility in mind so that it becomes easier to add support for additional compute platforms in the future. ## Key assumptions to test and questions to answer @@ -100,6 +102,8 @@ Enable deployment and management of serverless compute resources via the existin **Question**: How do we handle the case where the underlying serverless platform (especially opinionated ones like Azure Container Apps or AWS Fargate) does not provide the necessary APIs to support the Radius model? We will answer this question by building prototypes for additional serverless platforms and evaluating the feasibility of supporting them in Radius. +**Question**: Which are Environment scoped properties and which are Container scoped properties for each compute platform? e.g. Are confidential containers or spot instances scoped to the Environment or the individual Container? We will answer this question by consulting the documentation for each serverless platform and validating our assumptions with the implementation. + ## Current state There has been a proof-of-concept for ACI support in Radius built on top of the ACI nGroups features. Here is a demo of the prototype that has been created so far: https://youtu.be/NMNZE22nSQI?si=Dq7Q5WVKgHularsO&t=1201 @@ -119,38 +123,58 @@ resource environment 'Applications.Core/environments@2023-10-01-preview' = { name: 'myenv' properties: { compute: { -+ kind: 'aci' // Required. The kind of container runtime to use, e.g. 'aci' for Azure Container Instances and 'ecs' for AWS Elastic Container Service -+ namespace: 'default' // This is currently required, but may be made optional if it doesn't apply to all compute platforms -+ resourceGroup: '/subscriptions/.../resourceGroups/myrg' // This was introduced as a part of the ACI prototype, but may be changed depending on the implementation, need to figure out if namespace and resourceGroup should be combined into a single more generic property ++ // Required. The kind of container runtime to use, ++ // e.g. 
'aci' for Azure Container Instances and 'ecs' for AWS Elastic Container Service ++ kind: 'aci' ++ // This is currently required, but may be made optional if ++ // it doesn't apply to all compute platforms ++ namespace: 'default' ++ // This was introduced as a part of the ACI prototype, but may be changed ++ // depending on the implementation, need to figure out if namespace and ++ // resourceGroup should be combined into a single more generic property ++ resourceGroup: '/subscriptions/.../resourceGroups/myrg' } -+ providers: { // This was introduced as a part of the ACI prototype, but may be changed depending on the implementation -+ azure: { -+ scope: '/subscriptions/.../resourceGroups/myrg' -+ } -+ } - extensions: [ ++ // one provider per environment ++ providers: { ++ // if the compute platform for this environment is ACI, then the provider must be Azure ++ azure: { ++ scope: '/subscriptions/.../resourceGroups/myrg' ++ } ++ // if the compute platform for this environment is ECS, then the provider must be AWS ++ aws: { ++ scope: '/planes/aws/aws/accounts/${account}/regions/${region}' ++ } ++ } ++ extensions: [ // these are all currently Kubernetes-centric, will need to update for serverless { kind: 'kubernetesMetadata' labels: { 'team.contact.name': 'frontend' } -+ // Add other serverless platform-specific extensions here ++ // Add other serverless platform-specific extensions here } ] } } ``` +> Note: the `providers` property in the environment is necessary to specify the scope of the environment and is different from the provider credentials that were registered using `rad init`. The provider credentials are used to authenticate with the underlying compute provider, while the `providers` environment property is used to specify the scope of the environment. Thus, we might consider renaming the Environment `providers` property to avoid confusion. + +> Note: Currently it's one entry per `extension` for Kubernetes, but as a part of implementation we should determine if this needs to be further nested into one `extension` per platform and within each platform's `extension` can have multiple entries. + ### Step 2: Define a Radius Application for serverless The user defines a new Radius Application for serverless compute by creating a new Application definition file (e.g. `app.bicep`) and specifying the necessary settings for the container runtime platform. Note that the application definitions are container runtime platform-agnostic, thus this same application definition can be deployed to both Kubernetes and serverless compute platforms. ```diff +@description('The environment ID of your Radius Application. Set automatically by the rad CLI.') +param environment string + resource app 'Applications.Core/applications@2023-10-01-preview' = { name: 'myapp' properties: { environment: environment -+ extensions: [ // these are all currently Kubernetes-centric, will need to update for serverless ++ extensions: [ // these are all currently Kubernetes-centric, will need to update for serverless { kind: 'kubernetesNamespace' namespace: 'myapp' @@ -161,7 +185,7 @@ resource app 'Applications.Core/applications@2023-10-01-preview' = { 'team.contact.name': 'frontend' } } -+ // add other serverless extensions as applicable ++ // add other serverless extensions as applicable ] } } @@ -183,38 +207,42 @@ resource demo 'Applications.Core/containers@2023-10-01-preview' = { containerPort: 3000 } } -+ // all the other container properties should also be implemented (e.g. 
`env`, `readinessProbe`, `livenessProbe`, etc.) ++ // all the other container properties should also be implemented (e.g. `env`, `readinessProbe`, `livenessProbe`, etc.) } extensions: [ -+ { -+ kind: 'manualScaling' -+ replicas: 2 -+ } ++ { ++ kind: 'manualScaling' ++ replicas: 2 ++ } ] runtimes: { -+ aci: { -+ // Add ACI-specific properties here to punch-through the Radius abstraction, e.g. sku, osType, etc. -+ sku: 'Confidential' // 'Standard', 'Dedicated', etc. -+ } -+ ecs: { -+ // Add AWS ECS-specific properties here to punch-through the Radius abstraction -+ } ++ aci: { ++ // Add ACI-specific properties here to punch-through the Radius abstraction, e.g. sku, osType, etc. ++ sku: 'Confidential' // 'Standard', 'Dedicated', etc. ++ } ++ ecs: { ++ // Add AWS ECS-specific properties here to punch-through the Radius abstraction ++ } + } } } ``` -> AWS ECS schema reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html +> Note: We assume that the `sku: 'Confidential'` property for ACI (and other comparable properties across platforms like ECS) is scoped to the individual container and not the entire application or environment. We will need to validate this assumption as a part of the implementation. -> AWS ECS API reference: https://docs.aws.amazon.com/pdfs/AmazonECS/latest/APIReference/ecs-api.pdf#Welcome +> Note: To make the app definition portable, we must allow several `runtimes` to be declared per container and these should all be optional for cases where the user wants to punch through the Radius abstraction. If they declare a `runtimes` property that doesn't match the targeted deployment environment's compute, we should simply ignore that property for that deployment. If they haven't declared any `runtimes` property that match the compute of the targeted deployment environment, then we should deploy their container assuming that no `runtimes` property was provided and thus no "punch-through" behavior will be applied. -> ACI schema reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-reference-yaml +> Schema references: +> - AWS ECS task definition template reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-definition-template.html +> - AWS ECS API reference: https://docs.aws.amazon.com/pdfs/AmazonECS/latest/APIReference/ecs-api.pdf#Welcome +> - ACI schema reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-reference-yaml +> - ACI Container Group Profiles: https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups#container-group-profile-cg-profile ### Step 4: Define and connect other resources to the serverless container The user defines other resources (e.g. databases, message queues, etc.) that the container can connect to. These resources can be defined in the same application definition file (e.g. `app.bicep`) and connected to the container using the `connections` property for serverless containers just as they can be today for Kubernetes containers. -```bicep +```diff resource demo 'Applications.Core/containers@2023-10-01-preview' = { name: 'demo' properties: { @@ -227,14 +255,26 @@ resource demo 'Applications.Core/containers@2023-10-01-preview' = { } } } - connections: { - redis: { - source: db.id - } - } + runtimes: { ++ aci: { ++ // Add ACI-specific properties here to punch-through the Radius abstraction, e.g. sku, osType, etc. 
++ sku: 'Confidential' // 'Standard', 'Dedicated', etc. ++ } ++ ecs: { ++ // Add AWS ECS-specific properties here to punch-through the Radius abstraction ++ } ++ } + } ++ // connections to other resources remains the same for serverless containers as they are for Kubernetes containers ++ connections: { ++ redis: { ++ source: db.id ++ } ++ } } } ++ // Definitions for other resources that the container can connect to remain unchanged resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = { name: 'db' properties: { @@ -273,6 +313,10 @@ Add support for platform-specific features for containers via abstraction "punch > This is similar to how Kubernetes-specific features are supported in Radius via base YAML or [PodSpec patching](https://docs.radapp.io/guides/author-apps/kubernetes/patch-podspec/) functionalities. +> For example, in AWS ECS, we would support punch-through via the [task definition template](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-definition-template.html) + +> For example, in ACI, we would support punch-through via the [Container Group Profiles](https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups#container-group-profile-cg-profile) + ### Feature 5: User interfaces for serverless--Radius API, CLI, Dashboard Add support for deploying and managing serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. \ No newline at end of file From d9cb3b4c85add39d9efb68684968666f56cdf8da Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Thu, 13 Feb 2025 20:13:02 -0800 Subject: [PATCH 10/19] add UDT spec link Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 43723de..1b70664 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -17,7 +17,7 @@ This document describes the high-level overview for deploying Radius-managed app - ACI [nGroups](https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups): ACI feature that provides you with advanced capabilities for managing multiple related container groups, e.g. maintaining multiple instances, rolling upgrades, high availability, managed identity support, confidential container support, load balancing, zone rebalancing. - ACI [Confidential Containers](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-confidential-overview): ACI feature that provides a secure enclave for your containerized applications to run in a confidential computing environment. - ACI [Spot Instances](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-spot-containers-overview): ACI feature that allows you to run interruptible workloads at a reduced cost compared to the standard price by taking advantage of unused capacity in Azure datacenters. -- User Defined Types (UDT): A Radius feature that allows you to define custom resource types that can be used in your application definitions. 
As of this writing, the feature is a work in progress with its design document available [here](https://github.com/radius-project/design-notes/blob/main/architecture/2024-07-user-defined-types.md). +- [User Defined Types](https://github.com/radius-project/design-notes/blob/main/architecture/2024-06-resource-extensibility-feature-spec.md) (UDT): A Radius feature that allows you to define custom resource types that can be used in your application definitions. As of this writing, the feature is a work in progress with its design document available [here](https://github.com/radius-project/design-notes/blob/main/architecture/2024-07-user-defined-types.md). ### Top level goals From 6069cb50835aa97b514344019091275ae0c4d1ad Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Thu, 13 Feb 2025 20:16:45 -0800 Subject: [PATCH 11/19] add more detail about control plane and compute separation Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 1b70664..1c87927 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -29,7 +29,7 @@ This document describes the high-level overview for deploying Radius-managed app ### Non-goals (out of scope) 1. Hosting the Radius control plane on serverless (and other) compute platforms outside of Kubernetes. This is a separate project tracked in the roadmap: https://github.com/radius-project/roadmap/issues/39. -1. The ability to run the Radius control plane separately from the compute cluster to which it is deploying applications. This is a separate project tracked in the roadmap: https://github.com/radius-project/roadmap/issues/42. +1. The ability to run the Radius control plane separately from the compute cluster to which it is deploying applications. This is a separate project tracked in the roadmap: https://github.com/radius-project/roadmap/issues/42. However, given that the initial requirement is for a Kubernetes-hosted Radius control plane to deploy applications to a serverless compute platform, we might need to partially implement or at least take into consideration the separation of the Radius control plane cluster from the target deployment cluster as a part of serverless implementation. 1. Radius support for more opinionated or event-driven serverless platforms (e.g. Azure Functions, AWS Lambda, Knative, etc.). 
## User profile and challenges From b84bb0bbf1063c44b20a9747103fb332ba903458 Mon Sep 17 00:00:00 2001 From: Will <28876888+willtsai@users.noreply.github.com> Date: Thu, 13 Feb 2025 20:20:40 -0800 Subject: [PATCH 12/19] Update architecture/2025-01-serverless-feature-spec.md Co-authored-by: Zach Casper Signed-off-by: Will <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 1c87927..afe5afc 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -40,7 +40,7 @@ Primarily, our users are application developers, operators, and platform enginee **IT Operator**: An IT operator is responsible for helping developers deploy and manage applications on serverless compute platforms. They are familiar with containerized applications and may have experience with Kubernetes and other cloud-native technologies. They are responsible for ensuring that applications are deployed and running correctly, and that the platform is secure, scalable, reliable, and follows best practices. They use Radius to preconfigure Environments and Recipes for use by the application developers that they support. -**Application Developer**: A developer building applications that need to run and be managed on serverless compute platforms. They are familiar with containerized applications (e.g. Docker), but may or may not have experience with Kubernetes and other cloud-native technologies. With Radius, they will primarily deal with application definition files (e.g. `app.bicep`) and will leverage Radius features like Connections and Recipes. In many cases, their environments will be provided to them and preconfigured with the necessary settings and Recipes. +**Application Developer**: A developer building applications that need to run and be managed on serverless compute platforms. They are familiar with containerized applications (e.g. Docker), but may or may not have experience with Kubernetes and other cloud-native technologies. With Radius, they will primarily deal with application definition files (e.g. `app.bicep`) and will leverage Radius features like Resources and Connections. In many cases, their environments will be provided to them and preconfigured with the necessary settings and Recipes. **Platform Engineer**: A platform engineer responsible for building and maintaining the developer platform that developers and operators use to build and deploy applications. They are likely familiar with Kubernetes and other cloud-native technologies, but are definitely familiar with containerized applications (e.g. Docker). They are responsible for ensuring that the platform is secure, scalable, reliable, easy to use, and enforces best practices. 
From c8246f292017d5a6562bf01c48ef4c99dc2e4364 Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Thu, 13 Feb 2025 20:59:16 -0800 Subject: [PATCH 13/19] add Contour and Dapr details Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- .../2025-01-serverless-feature-spec.md | 73 ++++++++++++++++++- 1 file changed, 69 insertions(+), 4 deletions(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index afe5afc..f1e8741 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -78,6 +78,10 @@ Allow for platform-specific features to be used in Radius applications via abstr Enable deployment and management of serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. +### Scenario 4: Support for Dapr resources + +Enable the ability to define and run Dapr resources in a Radius application definition file (e.g. `app.bicep`) if there is native support for Dapr built into the serverless compute platform. + ## Key dependencies and risks @@ -85,6 +89,10 @@ Enable deployment and management of serverless compute resources via the existin **Dependency: orchestrator and API for the underlying serverless platform** - For each additional serverless platform that Radius supports, we have a dependency on the underlying orchestrator for that platform. For example, for Azure Container Instances, we depend on the Azure API. For AWS Elastic Container Service, we depend on the AWS API. There is a risk that the underlying orchestrator may not provide the necessary APIs to support the Radius model, the APIs may not be stable or reliable, or the APIs may not be publicly available. This will need to be investigated as a part of the implementation. +**Dependency: ingress/gateway for Radius** - Today, Radius installs and has a direct dependency on [Contour](https://projectcontour.io/) to manage and route incoming traffic to the appropriate services within a Kubernetes cluster. The equivalent default ingress/gateway for serverless platforms will need to be evaluated and implemented. Furthermore, it should be implemented in a way that allows for flexibility in the future to support other ingress/gateway options as there is a Radius backlog item for [removing dependency on Contour](https://github.com/radius-project/radius/issues/6521). + +**Risk: the serverless compute platform might not provide native Dapr support** - Given that Dapr resources are implemented as top-level resources in Radius, these can only be implemented in compute platforms that support Dapr natively. + **Risk: platform specific features that cannot be implemented in Radius** - There might be platform-specific features that cannot (or should not) be implemented in Radius. For example, Kubernetes has features for taints and tolerations that are not common across compute platforms and thus should not be implemented in Radius. This risk can be mitigated by providing mechanisms to punch-through the Radius abstraction and use platform-specific features directly, like the [Kubernetes customization options](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) currently available in Radius. 
**Risk: Differing platforms between Radius control plane and application runtime** - A Kubernetes cluster must still be deployed and maintained to host the Radius control plane even if the application is completely serverless, and this might not work for some customers (e.g. those who absolutely do not want to use Kubernetes). For now we will accept this risk and consider alternative hosting platforms for the Radius control plane to be out of scope.

@@ -100,6 +108,8 @@ Enable deployment and management of serverless compute resources via the existin

**Assumption**: As a result of the implementation, the refactoring done to the core Radius codebase to support Serverless container runtimes should also allow for easier extensibility of Radius to support other new platforms going forward. In other words, the work done for Serverless should be genericized within the Radius code and not resemble custom implementations for Kubernetes and select Serverless platforms.

+**Assumption**: If there was an app that the platform team wanted to run on serverless (e.g. ECS, ACI) and another on Kubernetes (e.g. EKS, AKS) they would configure two different environments. Semantically, an environment is a single place, so we should not have multiple places (i.e. different container platforms) within a single environment. We think that this assumption is reasonable given that multiple compute targets would require cross container compute service discovery, which users are unlikely to want to set up. We'll validate this assumption with feedback from potential users.
+
**Question**: How do we handle the case where the underlying serverless platform (especially opinionated ones like Azure Container Apps or AWS Fargate) does not provide the necessary APIs to support the Radius model? We will answer this question by building prototypes for additional serverless platforms and evaluating the feasibility of supporting them in Radius.

**Question**: Which are Environment scoped properties and which are Container scoped properties for each compute platform? e.g. Are confidential containers or spot instances scoped to the Environment or the individual Container? We will answer this question by consulting the documentation for each serverless platform and validating our assumptions with the implementation.

@@ -158,7 +168,9 @@ resource environment 'Applications.Core/environments@2023-10-01-preview' = {
 }
 ```

-> Note: the `providers` property in the environment is necessary to specify the scope of the environment and is different from the provider credentials that were registered using `rad init`. The provider credentials are used to authenticate with the underlying compute provider, while the `providers` environment property is used to specify the scope of the environment. Thus, we might consider renaming the Environment `providers` property to avoid confusion.
+> Note: The `providers` property in the environment is necessary to specify the scope of the environment and is different from the provider credentials that were registered using `rad init`. The provider credentials are used to authenticate with the underlying compute provider, while the `providers` environment property is used to specify the scope of the environment. Thus, we might consider renaming the Environment `providers` property to avoid confusion.
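To make the scoping and portability assumptions above concrete, the following is a minimal, illustrative sketch (not a committed schema; the property names simply follow the ACI prototype shown earlier and may change during implementation) of two serverless Environments, each pinned to exactly one compute platform and one provider scope:

```bicep
// Illustrative sketch only: 'kind', 'namespace', and 'resourceGroup' follow the ACI
// prototype above; the final schema may differ.
param awsAccount string
param awsRegion string

// Environment whose compute platform is ACI; its provider must be Azure.
resource aciEnvironment 'Applications.Core/environments@2023-10-01-preview' = {
  name: 'aci-env'
  properties: {
    compute: {
      kind: 'aci'
      namespace: 'default' // currently required; may become optional for serverless platforms
      resourceGroup: '/subscriptions/.../resourceGroups/myrg'
    }
    providers: {
      azure: {
        // deployment scope only; provider credentials are registered separately (e.g. via rad init)
        scope: '/subscriptions/.../resourceGroups/myrg'
      }
    }
  }
}

// A separate Environment whose compute platform is ECS; its provider must be AWS.
resource ecsEnvironment 'Applications.Core/environments@2023-10-01-preview' = {
  name: 'ecs-env'
  properties: {
    compute: {
      kind: 'ecs'
    }
    providers: {
      aws: {
        scope: '/planes/aws/aws/accounts/${awsAccount}/regions/${awsRegion}'
      }
    }
  }
}
```

Under the two-environments assumption above, an app targeting both a serverless platform and Kubernetes would get one Environment per platform, as sketched here for ACI and ECS, rather than mixing compute platforms within a single Environment.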
+ +> Note: We need to evaluate if we should add an additional `kubernetes` provider to the environment definition, though this might only be applicable for the [Inter-cluster app deployment and management](https://github.com/radius-project/roadmap/issues/42) feature. > Note: Currently it's one entry per `extension` for Kubernetes, but as a part of implementation we should determine if this needs to be further nested into one `extension` per platform and within each platform's `extension` can have multiple entries. @@ -264,7 +276,6 @@ resource demo 'Applications.Core/containers@2023-10-01-preview' = { + // Add AWS ECS-specific properties here to punch-through the Radius abstraction + } + } - } + // connections to other resources remains the same for serverless containers as they are for Kubernetes containers + connections: { + redis: { @@ -284,11 +295,65 @@ resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = { } ``` -### Step 5: Deploy the application to the serverless platform +### Step 5: Define a Dapr sidecar for the serverless container and connect it to a Dapr resource + +> Note: This step is only applicable if the serverless compute platform has native support for Dapr. + +The user defines a Dapr sidecar in the container definition and connects it to a Dapr resource. + +```diff +resource demo 'Applications.Core/containers@2023-10-01-preview' = { + name: 'demo' + properties: { + application: application + container: { + image: 'ghcr.io/radius-project/samples/demo:latest' + ports: { + web: { + containerPort: 3000 + } + } + } + runtimes: { ++ aci: { ++ // Add ACI-specific properties here to punch-through the Radius abstraction, e.g. sku, osType, etc. ++ sku: 'Confidential' // 'Standard', 'Dedicated', etc. ++ } ++ ecs: { ++ // Add AWS ECS-specific properties here to punch-through the Radius abstraction ++ } ++ } ++ extensions: [ ++ { ++ kind: 'daprSidecar' ++ appId: 'demo' ++ appPort: 3000 ++ } ++ ] ++ // connections to other resources remains the same for serverless containers as they are for Kubernetes containers ++ connections: { ++ redis: { ++ source: daprStateStore.id ++ } ++ } + } +} + ++ // Definitions for Dapr resources that the container can connect to remain unchanged +resource daprStateStore 'Applications.Dapr/stateStores@2023-10-01-preview' = { + name: 'statestore' + properties: { + environment: environment + application: application + } +} +``` + +### Step 6: Deploy the application to the serverless platform The user deploys the application to the serverless platform by running the `rad run` or `rad deploy` command with the application definition file targeting the serverless environment (e.g. `app.bicep`). -### Step 6: View the serverless containers in the CLI and Radius Dashboard +### Step 7: View the serverless containers in the CLI and Radius Dashboard After successful deployment, the user can view the serverless containers in the CLI using the `rad app graph` command or via the Application Graph in the Radius Dashboard. 
From c62e73a2b7dfecb0b2db6925dcbd6acb6da68213 Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Thu, 13 Feb 2025 21:04:30 -0800 Subject: [PATCH 14/19] add Dapr details Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index f1e8741..5881320 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -80,7 +80,7 @@ Enable deployment and management of serverless compute resources via the existin ### Scenario 4: Support for Dapr resources -Enable the ability to define and run Dapr resources in a Radius application definition file (e.g. `app.bicep`) if there is native support for Dapr built into the serverless compute platform. +Enable the ability to define and run Dapr resources in a Radius application definition file (e.g. `app.bicep`) if there is native support for Dapr built into the serverless compute platform. A user would declare a Dapr sidecar in the serverless container definition and connect it to a Dapr resource, much like how it can be done today in [Kubernetes containers](https://docs.radapp.io/guides/author-apps/dapr/how-to-dapr-building-block/). ## Key dependencies and risks @@ -382,6 +382,10 @@ Add support for platform-specific features for containers via abstraction "punch > For example, in ACI, we would support punch-through via the [Container Group Profiles](https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups#container-group-profile-cg-profile) -### Feature 5: User interfaces for serverless--Radius API, CLI, Dashboard +### Feature 5: Support for Dapr resources and sidecars + +Add support for adding a Dapr sidecar to a Radius container definition and connecting it to a Dapr resource. This will only be applicable if the serverless compute platform has native support for Dapr. + +### Feature 6: User interfaces for serverless--Radius API, CLI, Dashboard Add support for deploying and managing serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. \ No newline at end of file From 6915cf78e1c450f1ce9f02cd611c214cea2c6350 Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Fri, 14 Feb 2025 16:55:21 -0800 Subject: [PATCH 15/19] address feedback from feb 14 Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- architecture/2025-01-serverless-feature-spec.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 5881320..234de73 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -242,6 +242,8 @@ resource demo 'Applications.Core/containers@2023-10-01-preview' = { > Note: We assume that the `sku: 'Confidential'` property for ACI (and other comparable properties across platforms like ECS) is scoped to the individual container and not the entire application or environment. We will need to validate this assumption as a part of the implementation. 
+> Note: There are a certain set of core properties that would be common across all container runtimes (e.g. `image`, `ports`, `env`) and these should be included as top-level properties within the container definition and should not be ignored if provided, assuming that these core properties are limited to ones that are universal across container platforms. The properties that may not be universal across platforms (e.g. `sku: 'Confidential'`, `osType`) should not be top-level properties but should rather be encapsulated in the `runtimes` property. + > Note: To make the app definition portable, we must allow several `runtimes` to be declared per container and these should all be optional for cases where the user wants to punch through the Radius abstraction. If they declare a `runtimes` property that doesn't match the targeted deployment environment's compute, we should simply ignore that property for that deployment. If they haven't declared any `runtimes` property that match the compute of the targeted deployment environment, then we should deploy their container assuming that no `runtimes` property was provided and thus no "punch-through" behavior will be applied. > Schema references: From 88f1bc51ed3678c0a030a7cfcc12825c818604ec Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Fri, 14 Feb 2025 16:59:43 -0800 Subject: [PATCH 16/19] spelling corrections Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- .github/config/en-custom.txt | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/.github/config/en-custom.txt b/.github/config/en-custom.txt index 5989696..ecdac47 100644 --- a/.github/config/en-custom.txt +++ b/.github/config/en-custom.txt @@ -908,3 +908,35 @@ SecOps kube workspace's Authorizer +CloudRun +Fargate +autoscaling +loadBalancer +ACA +nGroups +rebalancing +datacenters +interruptible +Genericize +unopinionated +roadmap +preconfigure +suboptimal +PodSpec +natively +tolerations +orchestrators +Dq +NMNZE +WVKgHularsO +nGroups +nSQI +si +youtu +APIReference +AmazonECS +developerguide +ecs +ngroups +pdf +pdfs \ No newline at end of file From ed7daffff840c1b041db0f0b779162651724a36b Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Fri, 14 Feb 2025 17:04:05 -0800 Subject: [PATCH 17/19] spelling corrections Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- .github/config/en-custom.txt | 4 +++- architecture/2025-01-serverless-feature-spec.md | 4 ++-- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/.github/config/en-custom.txt b/.github/config/en-custom.txt index ecdac47..8f08027 100644 --- a/.github/config/en-custom.txt +++ b/.github/config/en-custom.txt @@ -939,4 +939,6 @@ developerguide ecs ngroups pdf -pdfs \ No newline at end of file +pdfs +Knative +genericized \ No newline at end of file diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 234de73..b53e85b 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -24,7 +24,7 @@ This document describes the high-level overview for deploying Radius-managed app 1. Genericize the Radius platform to support deployment of applications on serverless (and other) compute platforms beyond Kubernetes. 1. 
Enable developers to deploy and manage serverless containers in a way that is consistent with the existing Radius environment and application model. 1. Make all Radius features (Recipes, Connections, App Graph, UDT, etc.) available to engineers building applications on Radius-enabled supported serverless platforms. -1. Radius support for unopinionated (e.g. AWS Elastic Container Service, Azure Container Instances) or process-driven (Azure Container Apps, AWS Fargate, Google Cloudrun) serverless container platforms. +1. Radius support for unopinionated (e.g. AWS Elastic Container Service, Azure Container Instances) or process-driven (Azure Container Apps, AWS Fargate, Google CloudRun) serverless container platforms. ### Non-goals (out of scope) @@ -50,7 +50,7 @@ Primarily, our users are application developers, operators, and platform enginee With the additional complexity and specialized knowledge required to manage and operate Kubernetes clusters, many users are looking to serverless compute platforms as a way to simplify their infrastructure and reduce operational overhead. Services like Azure Container Apps and ECS Fargate provide a way to run containerized workloads without the need to manage the underlying infrastructure, while still providing the benefits of containers like portability and scalability. -For application developers, partcularly those deploying to Kubernetes, using Radius helps them address pain points in: making sense of complex architectures, suboptimal troubleshooting experiences, pain in dealing with a plethora of cross-platform tools, difficulties in enforcing best practices, and hindered team collaboration due to unclear separation of concerns. However, without support for additional compute platforms like serverless, developers will continue to be forced to choose a container platform prior to actually building the application. It is very difficult to move between platforms once an application has been built — even if it is containerized. +For application developers, particularly those deploying to Kubernetes, using Radius helps them address pain points in: making sense of complex architectures, suboptimal troubleshooting experiences, pain in dealing with a plethora of cross-platform tools, difficulties in enforcing best practices, and hindered team collaboration due to unclear separation of concerns. However, without support for additional compute platforms like serverless, developers will continue to be forced to choose a container platform prior to actually building the application. It is very difficult to move between platforms once an application has been built — even if it is containerized. 
### Positive user outcome

From cd63d5519d93c5e9c324b058693d7dca46e0d40f Mon Sep 17 00:00:00 2001
From: Will Tsai <28876888+willtsai@users.noreply.github.com>
Date: Thu, 27 Feb 2025 10:27:26 -0800
Subject: [PATCH 18/19] address feedback

Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com>
---
 architecture/2025-01-serverless-feature-spec.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md
index b53e85b..350c2bb 100644
--- a/architecture/2025-01-serverless-feature-spec.md
+++ b/architecture/2025-01-serverless-feature-spec.md
@@ -176,7 +176,7 @@ resource environment 'Applications.Core/environments@2023-10-01-preview' = {

### Step 2: Define a Radius Application for serverless

-The user defines a new Radius Application for serverless compute by creating a new Application definition file (e.g. `app.bicep`) and specifying the necessary settings for the container runtime platform. Note that the application definitions are container runtime platform-agnostic, thus this same application definition can be deployed to both Kubernetes and serverless compute platforms.
+The user defines a new Radius Application by creating a new Application definition file (e.g. `app.bicep`) and uses this same application definition to deploy to an Environment with either Kubernetes or serverless as its underlying compute platform. Via the `extension` property in the Application definition, the user can optionally configure custom settings that are specific to the container runtime platform and that are applied only when deploying to that platform. Any `extension` properties that are not applicable to the targeted container platform will be ignored.

```diff
@description('The environment ID of your Radius Application. Set automatically by the rad CLI.')
@@ -205,7 +205,7 @@ resource app 'Applications.Core/applications@2023-10-01-preview' = {

### Step 3: Define a Radius Container for serverless

-The user defines a Radius Container within the application definition (e.g. `app.bicep`) and specifies relevant container properties, such as `extensions` or `runtimes` to set platform specific configurations. Note that the container definitions are container runtime platform-agnostic, thus this same container definition can be deployed to both Kubernetes and serverless compute platforms if common functionalities across compute platforms are used.
+The user defines a Radius Container within the application definition (e.g. `app.bicep`) including all the platform-agnostic [`container` properties](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#container), e.g. `image`, `env`, etc., and deploys the application container to the serverless environment. The user may also set platform-specific configurations via the `extensions` or `runtimes` properties, but any properties that are not applicable to the targeted compute platform will be ignored at deploy time. Note that the container definitions should be container runtime platform-agnostic, thus this same container definition can be deployed to both Kubernetes and serverless compute platforms.

```diff
resource demo 'Applications.Core/containers@2023-10-01-preview' = {
@@ -301,7 +301,7 @@ resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = {

> Note: This step is only applicable if the serverless compute platform has native support for Dapr.
-The user defines a Dapr sidecar in the container definition and connects it to a Dapr resource. +The user defines a Dapr sidecar in the container definition and connects it to a Dapr resource. At deploy time, Radius will check if the serverless compute platform has native support for Dapr and if so, deploy the Dapr sidecar alongside the application containers and connect it to the Dapr resource. ```diff resource demo 'Applications.Core/containers@2023-10-01-preview' = { From fc8208091d4c4a4e2f185362a8f82c7a5f3dcaba Mon Sep 17 00:00:00 2001 From: Will Tsai <28876888+willtsai@users.noreply.github.com> Date: Tue, 4 Mar 2025 16:20:42 -0800 Subject: [PATCH 19/19] add more details on gateways, secrets, extenders Signed-off-by: Will Tsai <28876888+willtsai@users.noreply.github.com> --- .../2025-01-serverless-feature-spec.md | 163 ++++++++++++++++-- 1 file changed, 152 insertions(+), 11 deletions(-) diff --git a/architecture/2025-01-serverless-feature-spec.md b/architecture/2025-01-serverless-feature-spec.md index 350c2bb..2fa2fb7 100644 --- a/architecture/2025-01-serverless-feature-spec.md +++ b/architecture/2025-01-serverless-feature-spec.md @@ -69,16 +69,20 @@ Users can specify Radius [application](https://docs.radapp.io/reference/resource Allow for Radius abstraction of a [container](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/) resource that can be deployed to serverless compute platforms. Must be sure to include serverless platform specific configurations via [connections](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#connections), [extensions](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#extensions), and routing to other resources via [gateways](https://docs.radapp.io/reference/resource-schema/core-schema/gateway/). > One future idea to explore is whether we can we build extensibility via UDT and Recipes - i.e. allow Recipes to deploy predefined Container types, which themselves can be serverless containers. In other words, developers can deploy a UDT container resource using a Recipe such that all the user has to do when declaring a container is specifying the bare minimum (e.g. name, application) and let the predefined Recipe handle the rest. This will require the implementation of UDTs (e.g. defining specific types of containers as individual resource types). -### Scenario 2: "Punch-through" to platform-specific features and incremental adoption of Radius into existing serverless applications +### Scenario 2: Define and create core resources (i.e. Gateway, Secret Store, Extender) for use in serverless containers + +Enable the ability to define and create core Radius resources (i.e. Gateway, Secret Store, Extender) for use in serverless containers. Radius would leverage the solution available on the hosting platform to create and manage these resources. For example, if the serverless compute platform has a built-in secret store, Radius would use that instead of creating its own. This would allow for a consistent experience across different serverless compute platforms while still leveraging the unique features of each platform. 
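As a rough, illustrative sketch only (the resource shapes mirror the existing Kubernetes-oriented examples, and the mapping onto each platform's native ingress and secrets services is exactly the open design question this scenario raises), core resources might sit alongside a serverless container like this; the detailed user experience section below walks through each of them in full:

```bicep
// Illustrative sketch: Radius would be expected to back the gateway and secret store
// with whatever ingress and secrets services the hosting serverless platform provides.
param application string

@secure()
param password string

resource gateway 'Applications.Core/gateways@2023-10-01-preview' = {
  name: 'gateway'
  properties: {
    application: application
    routes: [
      {
        path: '/'
        destination: 'http://demo:3000' // routes traffic to the serverless 'demo' container
      }
    ]
  }
}

resource secrets 'Applications.Core/secretStores@2023-10-01-preview' = {
  name: 'secrets'
  properties: {
    application: application
    type: 'generic'
    data: {
      'password': {
        value: password // held in the platform's native secrets solution rather than a Kubernetes Secret
      }
    }
  }
}
```

An Extender would follow the same pattern, with its values consumed by the serverless container through environment variables or connections.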
+ +### Scenario 3: "Punch-through" to platform-specific features and incremental adoption of Radius into existing serverless applications Allow for platform-specific features to be used in Radius applications via abstraction "punch-through" mechanisms, similar to how Kubernetes-specific features are supported in Radius via [base YAML or PodSpec patching](https://docs.radapp.io/guides/author-apps/containers/overview/#kubernetes) functionalities that is achieved through the [`runtimes`](https://docs.radapp.io/reference/resource-schema/core-schema/container-schema/#runtimes) property in the container definition. For example, ACI has a confidential containers offering that may not be common across serverless platforms, but Radius should allow for this feature to be used in ACI containers. > Stretch goal: Ability to add Radius to existing serverless applications without requiring a full rewrite of the application, similar to how Radius can be added to existing Kubernetes applications via [Kubernetes manifest](https://docs.radapp.io/tutorials/add-radius/) or [Helm chart](https://docs.radapp.io/tutorials/helm/) annotations. Note: The Kubernetes/Helm support works because Kubernetes itself is extensible. Other systems like ACI, ECS, ACA may not be extensible in the same way but we should explore if there are options to make this work. -### Scenario 3: User interfaces for serverless--Radius API, CLI, Dashboard +### Scenario 4: User interfaces for serverless--Radius API, CLI, Dashboard Enable deployment and management of serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. -### Scenario 4: Support for Dapr resources +### Scenario 5: Support for Dapr resources Enable the ability to define and run Dapr resources in a Radius application definition file (e.g. `app.bicep`) if there is native support for Dapr built into the serverless compute platform. A user would declare a Dapr sidecar in the serverless container definition and connect it to a Dapr resource, much like how it can be done today in [Kubernetes containers](https://docs.radapp.io/guides/author-apps/dapr/how-to-dapr-building-block/). @@ -252,9 +256,9 @@ resource demo 'Applications.Core/containers@2023-10-01-preview' = { > - ACI schema reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-reference-yaml > - ACI Container Group Profiles: https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups#container-group-profile-cg-profile -### Step 4: Define and connect other resources to the serverless container +### Step 4: Define and connect other portable resources to the serverless container -The user defines other resources (e.g. databases, message queues, etc.) that the container can connect to. These resources can be defined in the same application definition file (e.g. `app.bicep`) and connected to the container using the `connections` property for serverless containers just as they can be today for Kubernetes containers. +The user defines other portable resources (e.g. databases, message queues, etc.) that the container can connect to. These resources can be defined in the same application definition file (e.g. `app.bicep`) and connected to the container using the `connections` property for serverless containers just as they can be today for Kubernetes containers. 
```diff
resource demo 'Applications.Core/containers@2023-10-01-preview' = {
@@ -297,7 +301,128 @@ resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = {
}
```

-### Step 5: Define a Dapr sidecar for the serverless container and connect it to a Dapr resource
+> Note: The user should also be able to define `connections` between containers to enable service-to-service communication in the serverless application (see [service-to-service communication](https://docs.radapp.io/guides/author-apps/networking/overview/#service-to-service-communication)).
+
+### Step 5: Define a Radius Gateway for the serverless container
+
+The user defines an `Applications.Core/gateways` resource for the serverless container in the application definition file (e.g. `app.bicep`) and routes traffic from the root path ("/") to the `demo` container, which can be hosted on serverless compute. The user may also optionally specify TLS termination and SSL passthrough in the Gateway.
+
+```diff
+resource gateway 'Applications.Core/gateways@2023-10-01-preview' = {
+ name: 'gateway'
+ properties: {
+ application: application
+ routes: [
+ {
+ path: '/'
++ destination: 'http://${demo.name}:3000'
+ }
+ ]
+ }
+}
+```
+
+> Note: For the Kubernetes implementation, Radius installs and configures Contour as the default ingress controller to enable Gateway features. For serverless platforms, we will need to evaluate the default ingress/gateway for each platform and implement it in a way that allows for flexibility in the future to support other ingress/gateway options.
+
+### Step 6: Define a Radius Secret Store for the serverless container
+
+The user defines an `Applications.Core/secretStores` resource in the `app.bicep` file and then deploys it into the serverless environment. Radius leverages the serverless hosting platform's secrets management solution to create and store the secret. The user can then reference the secret store in the application definition file (e.g. `app.bicep`) and use it to store secrets for the serverless container.
+
+```diff
+resource secrets 'Applications.Core/secretStores@2023-10-01-preview' = {
+ name: 'secrets'
+ properties:{
+ application: application
+ type: 'generic'
++ data: {
++ 'password': {
++ value: password
++ }
++ }
+ }
+}
+
+resource demo 'Applications.Core/containers@2023-10-01-preview' = {
+ name: 'demo'
+ properties: {
+ application: application
+ container: {
+ image: 'ghcr.io/radius-project/samples/demo:latest'
+ ports: {
+ web: {
+ containerPort: 3000
+ }
+ }
+ env:{
++ PASSWORD: {
++ valueFrom: {
++ secretRef: {
++ source: secrets.id
++ key: 'password'
++ }
++ }
++ }
+ }
+ }
+ runtimes: {
++ aci: {
++ // Add ACI-specific properties here to punch through the Radius abstraction, e.g. sku, osType, etc.
++ sku: 'Confidential' // 'Standard', 'Dedicated', etc.
++ }
++ ecs: {
++ // Add AWS ECS-specific properties here to punch through the Radius abstraction
++ }
++ }
+ }
+}
+```
+
+> Note: "By default, Radius leverages the hosting platform's secrets management solution to create and store the secret. For example, if you are deploying to Kubernetes, the secret store will be created as a Kubernetes Secret." (source: [How To: Create new Secret Store](https://docs.radapp.io/guides/author-apps/secrets/howto-new-secretstore/))
+
+### Step 7: Define a Radius Extender (or UDT) resource for the serverless container
+
+The user defines an `Applications.Core/extenders` resource in the application definition file (e.g. `app.bicep`) and, if the deployment target is a serverless environment, Radius enables the user to reference properties and values from the Extender to connect to the serverless container. The same would apply to a UDT resource once that feature is implemented.
+
+```diff
+resource twilio 'Applications.Core/extenders@2023-10-01-preview' = {
++ name: 'twilio'
+ properties: {
+ application: application
+ environment: environment
+ recipe: {
++ name: 'twilio'
+ }
+ }
+}
+
+resource demo 'Applications.Core/containers@2023-10-01-preview' = {
+ name: 'demo'
+ properties: {
+ application: application
+ container: {
+ image: 'ghcr.io/radius-project/samples/demo:latest'
+ ports: {
+ web: {
+ containerPort: 3000
+ }
+ }
++ env:{
++ TWILIO_ACCOUNT: {
++ value: twilio.listSecrets().authToken
++ }
++ }
+ }
+ runtimes: {
++ aci: {
++ // Add ACI-specific properties here to punch through the Radius abstraction, e.g. sku, osType, etc.
++ sku: 'Confidential' // 'Standard', 'Dedicated', etc.
++ }
++ ecs: {
++ // Add AWS ECS-specific properties here to punch through the Radius abstraction
++ }
++ }
+ }
+}
+```
+
+### Step 8: Define a Dapr sidecar for the serverless container and connect it to a Dapr resource

> Note: This step is only applicable if the serverless compute platform has native support for Dapr.

@@ -374,7 +499,15 @@ Add support for defining serverless compute resources in a Radius Application de
Add support for defining serverless container resources for container functionalities that are common across all platforms (e.g. `image`, `env`, `volumes`, etc.) within a Radius application definition file.

-### Feature 4: Punch-through to platform-specific features
+### Feature 4: Model Radius Connections for serverless
+
+Add support for defining serverless connections between containers and/or portable resources within a Radius application definition file. Connections should be defined in the same way as they are today for Kubernetes containers, but should also support serverless-specific connections. This includes connections between serverless containers to establish service-to-service communication (see the sketch after Feature 6 below), as well as connections to other portable resources (e.g. databases, message queues, etc.) that the serverless container can connect to.
+
+### Feature 5: Define and create core resources (i.e. Gateway, Secret Store, Extender) for use in serverless containers
+
+Add support for defining and creating core Radius resources (i.e. Gateway, Secret Store, Extender) for use in serverless containers. Radius would leverage the solution available on the hosting platform to create and manage these resources. Connections between serverless containers and these core resources can be specified in the application definition file (e.g. `app.bicep`), and Radius will create the necessary connections at deploy time.
+
+### Feature 6: Punch-through to platform-specific features

Add support for platform-specific features for containers via abstraction "punch-through" mechanisms. This will allow users to use platform-specific features, such as confidential containers or spot instances, in Radius applications.
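
To illustrate the service-to-service connections called out in Feature 4 above, here is a minimal sketch. It assumes the existing Kubernetes-oriented `connections` syntax (see the service-to-service communication guide linked in Step 4) carries over to serverless containers unchanged; the container names, image, and port below are placeholders for the example.

```bicep
// Sketch only: two serverless containers where the frontend declares a connection
// to the backend, mirroring today's Kubernetes service-to-service pattern.
param application string

resource backend 'Applications.Core/containers@2023-10-01-preview' = {
  name: 'backend'
  properties: {
    application: application
    container: {
      image: 'ghcr.io/radius-project/samples/demo:latest'
      ports: {
        web: {
          containerPort: 3000
        }
      }
    }
  }
}

resource frontend 'Applications.Core/containers@2023-10-01-preview' = {
  name: 'frontend'
  properties: {
    application: application
    container: {
      image: 'ghcr.io/radius-project/samples/demo:latest'
    }
    connections: {
      backend: {
        // The connection source points at the backend container's endpoint,
        // following the pattern in the Radius networking documentation.
        source: 'http://backend:3000'
      }
    }
  }
}
```

How each serverless platform would satisfy such a connection (e.g. internal DNS or platform-specific service discovery) is an open design question for this spec.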
@@ -384,10 +517,18 @@ Add support for platform-specific features for containers via abstraction "punch > For example, in ACI, we would support punch-through via the [Container Group Profiles](https://learn.microsoft.com/en-us/azure/container-instances/container-instance-ngroups/container-instances-about-ngroups#container-group-profile-cg-profile) -### Feature 5: Support for Dapr resources and sidecars +### Feature 7: User interfaces for serverless--Radius API, CLI, Dashboard -Add support for adding a Dapr sidecar to a Radius container definition and connecting it to a Dapr resource. This will only be applicable if the serverless compute platform has native support for Dapr. +Add support for deploying and managing serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. -### Feature 6: User interfaces for serverless--Radius API, CLI, Dashboard +### Feature 8: Support for Dapr resources and sidecars -Add support for deploying and managing serverless compute resources via the existing Radius API and CLI commands. Serverless resources that are modeled in Radius should be available in the App Graph and Dashboard for visualization and management. \ No newline at end of file +Add support for adding a Dapr sidecar to a Radius container definition and connecting it to a Dapr resource. This will only be applicable if the serverless compute platform has native support for Dapr. + +## Design Review Notes +- [x] Clarify the relationship between compute platform and environment (e.g. is it one-to-one or one-to-many?) +- [x] Clarify the scope of multi-cluster deployments and how this will work with serverless compute platforms (i.e. users will have to host Radius on Kubernetes, but can deploy to serverless compute platforms) +- [x] Specify how Radius will handle platform-specific features and how these will be modeled in the Radius application definition file +- [x] Add more details about defining and creating Gateways for use in serverless platforms +- [x] Add more details about defining and creating Secrets for use in serverless platforms +- [x] Add more details about defining and creating Extenders/UDT for use in serverless platforms \ No newline at end of file