Releases: ystia/yorc
v3.1.0-M1
v3.0.0
Download links
Yorc orchestration engine is available to download here: https://github.com/ystia/yorc/releases/tag/v3.0.0
A docker image is published on docker hub: https://hub.docker.com/r/ystia/yorc/
Yorc plugin for Alien4Cloud is available to download here: https://github.com/ystia/yorc-a4c-plugin/releases/tag/v3.0.0
New and noteworthy:
Naming & Open Source community
Yorc 3.0.0 is the first major version since we open-sourced the project formerly known as Janus. Previous versions have been made available on GitHub.
We are still in the process of making some of our tooling, such as road maps and backlogs, publicly available. The idea is to make project management transparent and to open Yorc to external contributions.
Shifting to Alien4Cloud 2.0
Alien4Cloud recently published a fantastic major release, with new features leveraged by Yorc to deliver a great orchestration solution.
Among many features, the ones we will focus on below are:
- UI redesign: Alien4Cloud 2.0.0 includes various UI changes to make it more consistent and easier to use.
- Topology modifiers: Alien4Cloud 2.0.0 lets you define modifiers that can be executed at various phases prior to deployment. These modifiers transform a given TOSCA topology.
New GCP infrastructure
We are really excited to announce our first support for Google Cloud Platform.
Yorc now natively supports Google Compute Engine to create compute instances on demand.
New Hosts Pool infrastructure
Yorc 3.0.0 supports a new infrastructure that we called "Hosts Pool". It lets you register generic hosts into Yorc and let Yorc allocate them for deployments. These hosts can be anything (VMs, physical machines, containers, ...) as long as Yorc can SSH into them for provisioning. Yorc exposes a REST API and a CLI to manage the hosts pool, making it easy to integrate with other tools.
For more information about the Hosts Pool infrastructure, check out our dedicated documentation.
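The register/allocate/release model behind the pool can be sketched in a few lines of Python. This is a conceptual illustration only, not Yorc's actual API; in practice you would manage the pool through Yorc's REST API or CLI, and all names below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class Host:
    """A generic host: a VM, a physical machine, a container, ..."""
    name: str
    labels: Dict[str, str] = field(default_factory=dict)
    allocated_to: Optional[str] = None  # deployment currently using the host


class HostsPool:
    """Toy in-memory pool: register generic hosts, allocate them to deployments."""

    def __init__(self) -> None:
        self.hosts: Dict[str, Host] = {}

    def register(self, name: str, **labels: str) -> None:
        # Register a host along with labels describing its capabilities.
        self.hosts[name] = Host(name, labels)

    def allocate(self, deployment: str, **wanted: str) -> Host:
        # Pick the first free host whose labels match the request.
        for host in self.hosts.values():
            if host.allocated_to is None and all(
                host.labels.get(k) == v for k, v in wanted.items()
            ):
                host.allocated_to = deployment
                return host
        raise LookupError(f"no free host matching {wanted}")

    def release(self, name: str) -> None:
        # Return the host to the pool once the deployment is undeployed.
        self.hosts[name].allocated_to = None
```

For example, allocating a host labeled `gpu="yes"` skips hosts that lack the label, and a second allocation for the same label fails until the first host is released.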
Slurm infrastructure
We made some improvements with our Slurm integration:
- We now support Slurm "features" (which are basically tags on nodes) and "constraints" syntax to allocate nodes. Examples here.
- Support of srun and sbatch commands (see Jobs scheduling below)
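As an illustration of the features/constraints mechanism, Slurm's own CLI expresses constraints with the `--constraint` flag, joining required features with `&` (e.g. `--constraint=intel&gpu`). A hypothetical helper (not part of the Yorc code base) that builds such an invocation:

```python
def constraint_flag(features):
    """AND-combine Slurm node features into a --constraint flag.

    Slurm joins required features with '&', e.g. --constraint=intel&gpu.
    """
    return "--constraint=" + "&".join(features)


def srun_command(executable, features=()):
    # Illustrative only: build an srun invocation restricted to nodes
    # exposing the given features.
    cmd = ["srun"]
    if features:
        cmd.append(constraint_flag(features))
    cmd.append(executable)
    return cmd
```

So `srun_command("./app", ["intel", "gpu"])` yields `["srun", "--constraint=intel&gpu", "./app"]`, restricting the allocation to nodes tagged with both features.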
Refactoring Kubernetes infrastructure support
In Yorc 2 we made a first experimental integration with Kubernetes. That support and its associated TOSCA types are deprecated in Yorc 3.0. Instead, we switched to new TOSCA types defined collectively with Alien4Cloud.
This new integration makes it possible to build complex Kubernetes topologies.
Support of Alien4Cloud Services
Alien4Cloud has a great feature called "Services". It lets you both expose part of an application as a service so that it can be consumed by other applications, and register an external service in Alien4Cloud so that applications can consume it.
This feature enables new use cases such as cross-infrastructure deployments and shared services, among many others.
We are very excited to support it!
Operations on orchestrator's host
Yet another super interesting feature! Until now, TOSCA components handled by Yorc were designed to be hosted on a compute (whatever it was), meaning that a component's life-cycle scripts were executed on the provisioned compute. This feature allows designing components that are not necessarily hosted on a compute; in that case, their life-cycle scripts are executed on Yorc's host.
This opens a wide range of new use cases. For instance, you can implement new compute types in pure TOSCA by calling cloud providers' CLI tools, or interact with external services.
Icing on the cake: for security reasons, those executions are by default sandboxed in containers to protect the host from mistakes and malicious usage.
Jobs scheduling
This release brings a tech preview of jobs scheduling support. It allows designing workloads made of Jobs that can interact with each other and with other "standard" TOSCA components within an application. We worked hard together with the Alien4Cloud team to extend TOSCA to support jobs scheduling.
In this release we focused mainly on the Slurm integration for this feature (but we are also working on Kubernetes support for the next release 😄). Below are the new supported TOSCA types and implementations:
- SlurmJobs: leads to issuing an srun command with a given executable file.
- SlurmBatch: leads to issuing an sbatch command with a given batch file and associated executables.
- Singularity integration: allows to execute a Singularity container instead of an executable file.
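The mapping from these TOSCA types to Slurm commands can be sketched as follows. This is an illustration of the kind of command each type issues, not Yorc's implementation, and the helper names are hypothetical:

```python
def slurm_job_command(executable):
    # SlurmJobs: an interactive job issued through srun.
    return ["srun", executable]


def slurm_batch_command(batch_file):
    # SlurmBatch: a batch script submitted through sbatch.
    return ["sbatch", batch_file]


def singularity_job_command(image, executable):
    # Singularity integration: srun launches the command inside a
    # Singularity container via `singularity exec` instead of running
    # a bare executable on the node.
    return ["srun", "singularity", "exec", image, executable]
```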
Mutual SSL auth between Alien & Yorc
Alien4Cloud and Yorc can now mutually authenticate themselves with TLS certificates.
New logs formatting
We constantly try to improve the feedback returned to our users about runtime execution. In this release we publish logs with more context about the node/instance/operation/interface to which the log relates.
Monitoring
Yorc 3.0 lays the foundations for applicative monitoring: it allows monitoring compute liveness at an interval defined by the user. When a compute goes down or comes back up, we use our events API to notify the user, and Alien4Cloud displays the application state visually in the runtime view.
Our monitoring implementation was designed to be a fault-tolerant service.
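The core of such a liveness probe can be sketched as a simple TCP reachability check. This is a minimal sketch only, not Yorc's implementation; the real service additionally schedules checks at the user-defined interval and publishes up/down transitions through the events API:

```python
import socket


def check_liveness(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout.

    A minimal compute-liveness probe: the compute is considered alive
    when a connection can be established (e.g. to its SSH port).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable: compute is down.
        return False
```

A scheduler would call `check_liveness` for every monitored compute at each interval and emit an event only when the result changes.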
v3.0.0-M8
Changelog
User story
- [BRBDCF-755] - Integrate Alien 2 Kubernetes artifacts with Yorc
- [BRBDCF-1302] - GCP Compute Creation
v3.0.0-M7
Changelog
Bug
- [BRBDCF-1229] - [Yorc/Slurm] Failed to create slurm infrastructure: node name:Compute: Malformed command : squeue -n MAG-TESTS-Environment -j 2 --noheader -o "%N,%P"
- [BRBDCF-1272] - Type 'list of string' as input for custom interface causes an error
- [BRBDCF-1316] - Unsupported operation "standard.start" attempting to deploy a container on k8s
User story
- [BRBDCF-582] - Mutual SSL auth between Alien & Janus
- [BRBDCF-610] - Monitor Compute liveness
- [BRBDCF-1246] - Support sbatch Jobs
Doc Change Request
- [BRBDCF-545] - Slurm Compute node : user is required (application tab) eventhough is already set by admin
v3.0.0-M6
Changelog
Bug
- [BRBDCF-563] - Error when deploying a topology with a network without connectivity
User story
- [BRBDCF-937] - Allow operations executions on orchestrator host
- [BRBDCF-973] - Support Alien Services
- [BRBDCF-1279] - Use SLURM node features to allocate resource
v3.0.0-M5
Changelog:
Bug
- [BRBDCF-1094] - Yorc A4C Plugin documentation versions are not up to date
- [BRBDCF-1227] - [Janus] dial tcp 127.0.0.1:8500: socket: too many open files
User story
- [BRBDCF-1142] - Allow to allocate a host in the hostspool several times (shared mode)
v3.0.0-M4
Changelog:
Bug
- [BRBDCF-389] - [A4C/Janus] ERROR: yaml: did not find expected key
- [BRBDCF-391] - [A4C/Janus] ERROR: yaml: found unknown escape character
- [BRBDCF-407] - huge latency when listing deployments with the CLI when there is hundreds of deployments
- [BRBDCF-1092] - Updating hostspool compute attribute always update the user to connect to, when it's has no been asked to be modified
- [BRBDCF-1129] - Yorc OpenStack FIP modifier doesn't preserve the workflow ordering
- [BRBDCF-1165] - Alien generates a workflow with a dependency between Computes and BlockStorage in the wrong order
- [BRBDCF-1167] - Ansible doesn't handle SSH connection with port different than 22
- [BRBDCF-1172] - Can't deploy a 4 compute nodes application on a Hosts Pool
User story
- [BRBDCF-311] - [SPIKE] Model a generic singularity Slurm job
- [BRBDCF-447] - [SKIPE] POC on how to model and handle SLURM jobs
- [BRBDCF-1043] - Create hosts in hosts pool from a given file
- [BRBDCF-1126] - Refactor logs in Janus
- [BRBDCF-1142] - Allow to allocate a host in the hostspool several times (shared mode)
Feature Request
- [BRBDCF-555] - Janus Slurm plugin should support connection to Slurm client using private/public key
v3.0.0-M3
Release tag v3.0.0-M3
v3.0.0-M2
Release tag v3.0.0-M2
v3.0.0-M1
Release tag v3.0.0-M1