# Simplified and efficient AI/ML on the hybrid cloud
CodeFlare provides a simple, user-friendly abstraction for developing, scaling, and managing resources for distributed AI/ML on the Hybrid Cloud platform with OpenShift Container Platform.
---
The CodeFlare stack consists of the following main components. This project is organized as a metarepo, gathering pointers and artifacts to deploy and use the stack.
* **Simplified user experience**:
CodeFlare [SDK](https://github.com/project-codeflare/codeflare-sdk) and [CLI](https://github.com/project-codeflare/codeflare-cli) to define, develop, and control remote distributed compute jobs and infrastructure from either a Python-based environment or a command-line interface (see the sketch after this list).
* **Efficient resource management**:
Multi-Cluster Application Dispatcher [(MCAD)](https://github.com/project-codeflare/multi-cluster-app-dispatcher) for queueing, resource quotas, and management of batch jobs, and [InstaScale](https://github.com/project-codeflare/instascale) for on-demand resource scaling of an OpenShift cluster.
* [CodeFlare Operator](https://github.com/project-codeflare/codeflare-operator) for automating deployment and configuration of the Project CodeFlare stack.
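
To make the roles of these components a bit more concrete, the sketch below uses the Python SDK to request a small Ray cluster: the request is queued and admitted by MCAD, and InstaScale can optionally provision machines on demand. This is a minimal sketch based on the codeflare-sdk demo notebooks; class and parameter names (for example `num_workers` and `instascale`) may differ between SDK releases, and the token/server values are placeholders.

```python
# Minimal sketch only: names follow the codeflare-sdk demo notebooks and
# may differ between SDK releases; token/server values are placeholders.
from codeflare_sdk.cluster.auth import TokenAuthentication
from codeflare_sdk.cluster.cluster import Cluster, ClusterConfiguration

# Authenticate against the OpenShift cluster hosting the CodeFlare stack.
auth = TokenAuthentication(token="sha256~<token>", server="https://api.<cluster>:6443")
auth.login()

# Describe the desired Ray cluster. MCAD queues the request and enforces
# quotas; setting instascale=True asks InstaScale to provision nodes on demand.
cluster = Cluster(ClusterConfiguration(
    name="demo-cluster",
    namespace="default",
    num_workers=2,
    min_cpus=1, max_cpus=1,
    min_memory=2, max_memory=2,  # GiB
    num_gpus=0,
    instascale=False,
))

cluster.up()          # submit the request to the OpenShift cluster
cluster.wait_ready()  # block until the Ray cluster is up
print(cluster.details())

# ... run distributed workloads (see the Ray sketch below) ...

cluster.down()        # tear everything down and release the resources
```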
With the CodeFlare stack, users can automate and simplify the execution and scaling of the steps in the model development life cycle, from data pre-processing and distributed model training to model adaptation and validation.
Through transparent integration with the [Ray](https://github.com/ray-project/ray) and [PyTorch](https://github.com/pytorch/pytorch) frameworks, and the rich library ecosystem that runs on them, CodeFlare enables data scientists to **spend more time on model development and minimal time on resource deployment and scaling**.
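
As an assumption-laden illustration of that integration, the sketch below runs ordinary Ray code against the cluster provisioned in the previous sketch; `cluster_uri()` follows the codeflare-sdk demo notebooks and may differ by SDK version, and a local `ray` installation compatible with the remote cluster is assumed.

```python
# Sketch only: reuses the `cluster` object from the previous example.
import ray

# Connect the local Ray client to the remote, CodeFlare-provisioned cluster.
# cluster_uri() returns a ray:// endpoint in the SDK demo notebooks.
ray.init(address=cluster.cluster_uri())

@ray.remote
def square(x: int) -> int:
    return x * x

# The tasks execute on the OpenShift-hosted Ray workers, not on the laptop.
print(ray.get([square.remote(i) for i in range(4)]))  # [0, 1, 4, 9]

ray.shutdown()
```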
In addition to running standalone, Project CodeFlare is deployed as part of and integrated with the [Open Data Hub](https://github.com/opendatahub-io/distributed-workloads), leveraging [OpenShift Container Platform](https://www.openshift.com).
With OpenShift, CodeFlare can be deployed anywhere, from on-prem to cloud, and integrates easily with other cloud-native ecosystems.
---
## Quick Start
85
+
To get started using the Project CodeFlare stack, try this [end-to-end example](https://github.com/opendatahub-io/distributed-workloads/blob/main/Quick-Start.md)!
For more basic walk-throughs and in-depth tutorials, see our [demo notebooks](https://github.com/project-codeflare/codeflare-sdk/tree/main/demo-notebooks/guided-demos)!
## Development
See more details in any of the component repos linked above, or get started by taking a look at the [project board](https://github.com/orgs/project-codeflare/projects/8) for open tasks/issues!
### Architecture
We attempt to document all architectural decisions in our [ADR documents](https://github.com/project-codeflare/adr). Start here to understand the architectural details of Project CodeFlare.
---
## 🎉 Getting Involved and Contributing
Join us in making CodeFlare better! We encourage you to take a look at our [Contributing](CONTRIBUTING.md) page.
Join our [Slack community][slack] to get involved or ask questions.
## Blog
CodeFlare related blogs are published on our [Medium publication](https://medium.com/codeflare).
## License
CodeFlare is an open-source project with an [Apache 2.0 license](LICENSE).