diff --git a/.circleci/config.yml b/.circleci/config.yml deleted file mode 100644 index f05fda7..0000000 --- a/.circleci/config.yml +++ /dev/null @@ -1,18 +0,0 @@ -version: 2.1 - -orbs: - python: circleci/python@0.2.1 - -jobs: - build-and-test: - executor: python/default - steps: - - checkout - - python/load-cache - - python/install-deps - - python/save-cache - -workflows: - main: - jobs: - - build-and-test diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml new file mode 100644 index 0000000..0e49f15 --- /dev/null +++ b/.github/workflows/ci.yml @@ -0,0 +1,51 @@ +name: CI + +on: + push: + branches: ["master"] + pull_request: + +jobs: + tests: + name: "Python ${{ matrix.python-version }}" + runs-on: "ubuntu-latest" + strategy: + fail-fast: false + matrix: + python-version: ["3.7"] + + services: + redisai: + image: redislabs/redisai:edge-cpu-bionic + ports: + - 6379:6379 + options: >- + --health-cmd "redis-cli ping" + --health-interval 10s + --health-timeout 5s + --health-retries 5 + steps: + - name: Checkout Code + uses: "actions/checkout@v2" + with: + lfs: true + - name: Setup Python + uses: "actions/setup-python@v2" + with: + python-version: ${{ matrix.python-version }} + - name: Cache dependencies + id: cache + uses: actions/cache@v2 + with: + path: /opt/venv + key: /opt/venv-${{ hashFiles('**/requirements.txt') }} + - name: Install dependencies + if: steps.cache.outputs.cache-hit != 'true' + run: | + pip install --upgrade pip + pip install -r requirements.txt + - name: Run Tests + env: + REDIS_HOST: localhost + REDIS_PORT: 6379 + run: | + pytest test.py diff --git a/.gitignore b/.gitignore index 9410cc4..9f65ef5 100644 --- a/.gitignore +++ b/.gitignore @@ -103,5 +103,18 @@ ENV/ # js node_modules -# pycharm -.idea \ No newline at end of file +# editors +.idea +.vscode + +# test output +.test_results + +# model formats +*.pt +*.onnx +*.pb +*.pth +*.pbtxt +*.pkl +*.ckpt diff --git a/ImageClassificationWithPytorch.ipynb b/ImageClassificationWithPytorch.ipynb new file mode 100644 index 0000000..6ed3070 --- /dev/null +++ b/ImageClassificationWithPytorch.ipynb @@ -0,0 +1,294 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "f0c2a6e5", + "metadata": {}, + "source": [ + "# Image Classification with PyTorch\n", + "PyTorch has long been both researchers' and engineers' preferred framework for DL development, but when it comes to productionizing PyTorch models, there is still no consensus on what to use. This guide runs you through building a simple image classification model using PyTorch and then deploying it to RedisAI. Let's start by importing the necessary packages" ] }, { "cell_type": "code", "execution_count": 1, "id": "1e657632", "metadata": {}, "outputs": [], "source": [ "import torchvision.models as models\n", "import torch\n", "\n", "import json\n", "import time\n", "from redisai import Client\n", "import ml2rt\n", "from skimage import io\n", "\n", "import os" ] }, { "cell_type": "markdown", "id": "43cd67a3", "metadata": {}, "source": [ "## Build Model\n", "For this example, we use a pretrained model from torchvision for image classification - the renowned ResNet50. Since RedisAI is a C/C++ runtime, we'd need to export the torch model into [TorchScript](https://pytorch.org/docs/stable/jit.html). 
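By the way, `torch.jit.trace` is a common alternative to scripting when a model has no data-dependent control flow - a rough sketch (assuming the fixed 1x3x224x224 input used later in this guide; not executed here):\n", "\n", "```python\n", "# Hedged sketch: tracing records the ops executed for one example input\n", "example = torch.rand(1, 3, 224, 224)\n", "traced = torch.jit.trace(model, example)\n", "torch.jit.save(traced, 'resnet50_traced.pt')\n", "```\n", "\n", "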
In this guide we use scripting; here is how to do it (you can read more about TorchScript at the link above)" ] }, { "cell_type": "code", "execution_count": 2, "id": "56edea97", "metadata": {}, "outputs": [], "source": [ "model = models.resnet50(pretrained=True)\n", "model.eval()\n", "\n", "scripted_model = torch.jit.script(model)\n", "torch.jit.save(scripted_model, 'resnet50.pt')" ] }, { "cell_type": "markdown", "id": "75c8faa0", "metadata": {}, "source": [ "## Setup RedisAI\n", "This tutorial assumes you already have a RedisAI server running. The easiest way to set up an instance is using Docker\n", "\n", "```\n", "docker run -p 6379:6379 redislabs/redisai:latest-cpu-x64-bionic\n", "```\n", "\n", "Take a look at this [quickstart](https://oss.redis.com/redisai/quickstart/) for more details. Here we set up the connection credentials and ping the server to verify we can talk to it" ] }, { "cell_type": "code", "execution_count": 3, "id": "59b6599a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "REDIS_HOST = os.getenv(\"REDIS_HOST\", \"localhost\")\n", "REDIS_PORT = int(os.getenv(\"REDIS_PORT\", 6379))\n", "con = Client(host=REDIS_HOST, port=REDIS_PORT)\n", "con.ping()" ] }, { "cell_type": "markdown", "id": "f68e8993", "metadata": {}, "source": [ "## Load model\n", "The next step is to load the model we exported above into RedisAI for serving. We are using the convenient package [ml2rt](https://pypi.org/project/ml2rt/) here for loading, but it's not a mandatory dependency if you want to keep your `requirements.txt` small. Take a look at the `load_model` function. This will give us a binary blob of the model we built above. We need to send this to RedisAI and also specify which backend we'd like to use and which device this should run on. We'll set the model on a key so we can reference this key later\n", "\n", "Note: If you want to run on GPU, take a look at the above quickstart to set up RedisAI on GPU" ] }, { "cell_type": "code", "execution_count": 4, "id": "f7ddde68", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'OK'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model = ml2rt.load_model(\"resnet50.pt\")\n", "con.modelstore(\"pytorch_model\", backend=\"TORCH\", device=\"CPU\", data=model)" ] }, { "cell_type": "markdown", "id": "c635e5f4", "metadata": {}, "source": [ "## Load script\n", "Why do you need Script? It's very likely that your deep learning model has a pre/post-processing step, like changing the dimensionality of the input (adding a batch dimension) or doing normalization etc. You would normally do this in your client code and send the processed data to the model server; with Script, you can fold this into your model serving pipeline. Script is one of the most powerful features of RedisAI. RedisAI Scripts are built on top of [TorchScript](https://pytorch.org/docs/stable/jit.html), and it's recommended to take a look if TorchScript is new to you. TorchScript is a subset of the Python programming language, i.e. it looks and smells like Python, but not all Python functionality is available in TorchScript. 
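For example, TorchScript requires containers to be uniformly typed, so a perfectly valid Python function like the one below fails to compile (a hypothetical illustration, not part of this pipeline):\n", "\n", "```python\n", "# Hypothetical example: a mixed-type list is rejected by the TorchScript compiler\n", "@torch.jit.script\n", "def broken(x: torch.Tensor):\n", "    items = [1, 'two']  # int and str in one list -> error at scripting time\n", "    return x\n", "```\n", "\n", "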
Now if you are wondering what the benefit of TorchScript in RedisAI is, there are a few\n", "\n", "- It runs on a highly efficient C++ runtime\n", "- It can pipeline your preprocessing and postprocessing jobs right where your model and data reside, so there is no back and forth of huge data blobs between your model server and pre/post-processing scripts\n", "- It can run in a single Redis pipeline or in a RedisAI DAG, which makes implementing a serving channel smooth\n", "- You can use it with any framework, not just PyTorch\n", "\n", "You can load the script from a file (`ml2rt.load_script` does this for you), which is probably your normal workflow since you'd save the script in a file, but here we pass the string directly into the `scriptstore` method" ] }, { "cell_type": "code", "execution_count": 5, "id": "50bb90b1", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'OK'" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "script = \"\"\"\n", "def pre_process(tensors: List[Tensor], keys: List[str], args: List[str]):\n", " image = tensors[0]\n", " mean = torch.zeros(3).float().to(image.device)\n", " std = torch.zeros(3).float().to(image.device)\n", " mean[0], mean[1], mean[2] = 0.485, 0.456, 0.406\n", " std[0], std[1], std[2] = 0.229, 0.224, 0.225\n", " mean = mean.unsqueeze(1).unsqueeze(1)\n", " std = std.unsqueeze(1).unsqueeze(1)\n", " temp = image.float().div(255).permute(2, 0, 1)\n", " return temp.sub(mean).div(std).unsqueeze(0)\n", "\n", "\n", "def post_process(tensors: List[Tensor], keys: List[str], args: List[str]):\n", " output = tensors[0]\n", " return output.max(1)[1]\n", "\"\"\"\n", "con.scriptstore(\"processing_script\", device=\"CPU\", script=script, entry_points=(\"pre_process\", \"post_process\"))" ] }, { "cell_type": "markdown", "id": "5b16d972", "metadata": {}, "source": [ "## Load the image and final classes\n", "Here we load the input image and the final classes to find the predicted output" ] }, { "cell_type": "code", "execution_count": 6, "id": "fe95d716", "metadata": {}, "outputs": [], "source": [ "image = io.imread(\"data/cat.jpg\")\n", "class_idx = json.load(open(\"data/imagenet_classes.json\"))" ] }, { "cell_type": "markdown", "id": "9440afb8", "metadata": {}, "source": [ "## Run the model serving pipeline\n", "Here we run the serving pipeline one step at a time and finally fetch the results out. 
The pipeline is organized into five steps\n", "\n", "```\n", "Setting Input -> Pre-processing Script -> Running Model -> Post-processing Script -> Fetching Output\n", "```" ] }, { "cell_type": "code", "execution_count": 7, "id": "f24ce05d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "281 tabby, tabby catamount\n" ] } ], "source": [ "con.tensorset('image', image)\n", "con.scriptexecute('processing_script', 'pre_process', inputs='image', outputs='processed')\n", "con.modelexecute('pytorch_model', 'processed', 'model_out')\n", "con.scriptexecute('processing_script', 'post_process', inputs='model_out', outputs='final')\n", "final = con.tensorget('final')\n", "print(final[0], class_idx[str(final[0])])" ] }, { "cell_type": "markdown", "id": "7b75d0ba", "metadata": {}, "source": [ "## Running with DAG\n", "Although this works, each of these calls incurs a network round trip, and it is often better to run everything as a single execution; that is what you can do with a RedisAI DAG. DAGs are much more powerful than that, but let's leave the details to another tutorial. Here we first set up a DAG object and record on it all the operations we ran above. Note that none of these recording steps sends a request to the RedisAI server. Once the DAG object is ready with all the steps, you can trigger `dag.execute()` to initiate the DAG execution in the RedisAI backend" ] }, { "cell_type": "code", "execution_count": 8, "id": "40e02215", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "281 tabby, tabby catamount\n" ] } ], "source": [ "dag = con.dag(routing='default')\n", "dag.tensorset('image', image)\n", "dag.scriptexecute('processing_script', 'pre_process', inputs='image', outputs='processed')\n", "dag.modelexecute('pytorch_model', 'processed', 'model_out')\n", "dag.scriptexecute('processing_script', 'post_process', inputs='model_out', outputs='final')\n", "dag.tensorget('final')\n", "\n", "final = dag.execute()[-1]\n", "print(final[0], class_idx[str(final[0])])" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 5 } diff --git a/ImageClassificationWithTensorflow.ipynb b/ImageClassificationWithTensorflow.ipynb new file mode 100644 index 0000000..1febdb7 --- /dev/null +++ b/ImageClassificationWithTensorflow.ipynb @@ -0,0 +1,317 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "0e9ffdce", + "metadata": {}, + "source": [ + "# Image Classification with Tensorflow (1.x)\n", + "\n", + "This example is built with TensorFlow 1.15 (TensorFlow 2.x models can be served by RedisAI after graph freezing; check out the [documentation](https://github.com/RedisAI/RedisAI)). We first take a prebuilt model from TensorFlow Hub and then push it to RedisAI for production. 
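If you are on TF 2.x, the usual recipe is to freeze a concrete function into a GraphDef first - a hypothetical sketch (illustrative names; not executed in this notebook):\n", "\n", "```python\n", "# Hedged sketch of TF 2.x graph freezing for the TF backend\n", "from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2\n", "keras_model = tf.keras.applications.ResNet50()\n", "func = tf.function(keras_model).get_concrete_function(\n", "    tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))\n", "frozen = convert_variables_to_constants_v2(func)\n", "tf.io.write_graph(frozen.graph, '.', 'resnet50_frozen.pb', as_text=False)\n", "```\n", "\n", "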
Towards the end of this example, we also show how the loaded model can be used for inference" ] }, { "cell_type": "code", "execution_count": 1, "id": "26841334", "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "import tensorflow_hub as hub\n", "import json\n", "import time\n", "from redisai import Client\n", "from ml2rt import load_model, load_script, save_tensorflow\n", "from skimage import io\n", "import os" ] }, { "cell_type": "markdown", "id": "9f501f18", "metadata": {}, "source": [ "## Download and save the model\n", "The pretrained model is downloaded from TensorFlow Hub. We then use a dummy input `images` to serialize the graph to disk" ] }, { "cell_type": "code", "execution_count": 3, "id": "5fc2e4d4", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Froze 272 variables.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:tensorflow:Froze 272 variables.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Converted 272 variables to const ops.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:tensorflow:Converted 272 variables to const ops.\n" ] } ], "source": [ "url = 'https://tfhub.dev/google/imagenet/resnet_v2_50/classification/1'\n", "images = tf.placeholder(tf.float32, shape=(1, 224, 224, 3), name='images')\n", "module = hub.Module(url)\n", "logits = module(images)\n", "logits = tf.identity(logits, 'output')\n", "\n", "with tf.Session() as sess:\n", " sess.run([tf.global_variables_initializer()])\n", " save_tensorflow(sess, 'resnet50.pb', output=['output'])" ] }, { "cell_type": "markdown", "id": "c3d45a93", "metadata": {}, "source": [ "## Setup RedisAI\n", "This tutorial assumes you already have a RedisAI server running. The easiest way to set up an instance is using Docker\n", "\n", "```\n", "docker run -p 6379:6379 redislabs/redisai:latest-cpu-x64-bionic\n", "```\n", "\n", "Take a look at this [quickstart](https://oss.redis.com/redisai/quickstart/) for more details. Here we set up the connection credentials and ping the server to verify we can talk to it" ] }, { "cell_type": "code", "execution_count": 4, "id": "2be4e169", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "REDIS_HOST = os.getenv(\"REDIS_HOST\", \"localhost\")\n", "REDIS_PORT = int(os.getenv(\"REDIS_PORT\", 6379))\n", "con = Client(host=REDIS_HOST, port=REDIS_PORT)\n", "con.ping()" ] }, { "cell_type": "markdown", "id": "ea511bfc", "metadata": {}, "source": [ "## Load model\n", "The next step is to load the model we saved above into RedisAI for serving. We are using the convenient package [ml2rt](https://pypi.org/project/ml2rt/) here for loading, but it's not a mandatory dependency if you want to keep your `requirements.txt` small. Take a look at the `load_model` function. 
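It is a thin convenience; if you'd rather not add the dependency, reading the file yourself should yield the same blob (a sketch):\n", "\n", "```python\n", "# Assumed equivalent: the model blob is just the serialized file's bytes\n", "with open('resnet50.pb', 'rb') as f:\n", "    model_blob = f.read()\n", "```\n", "\n", "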
This will give us a binary blob of the model we built above. We need to send this to RedisAI and also specify which backend we'd like to use and which device this should run on. We'll set the model on a key so we can reference this key later\n", "\n", "Note: If you want to run on GPU, take a look at the above quickstart to set up RedisAI on GPU" ] }, { "cell_type": "code", "execution_count": 6, "id": "bf775104", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'OK'" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model = load_model(\"resnet50.pb\")\n", "con.modelstore(\"tensorflow_model\", backend=\"TF\", device=\"CPU\", inputs=['images'], outputs=['output'], data=model)" ] }, { "cell_type": "markdown", "id": "3b5fdd8c", "metadata": {}, "source": [ "## Load script\n", "Why do you need Script? It's very likely that your deep learning model has a pre/post-processing step, like changing the dimensionality of the input (adding a batch dimension) or doing normalization etc. You would normally do this in your client code and send the processed data to the model server; with Script, you can fold this into your model serving pipeline. Script is one of the most powerful features of RedisAI. RedisAI Scripts are built on top of [TorchScript](https://pytorch.org/docs/stable/jit.html), and it's recommended to take a look if TorchScript is new to you. TorchScript is a subset of the Python programming language, i.e. it looks and smells like Python, but not all Python functionality is available in TorchScript. Now if you are wondering what the benefit of TorchScript in RedisAI is, there are a few\n", "\n", "- It runs on a highly efficient C++ runtime\n", "- It can pipeline your preprocessing and postprocessing jobs right where your model and data reside. 
There is no back and forth of huge data blobs between your model server and pre/post-processing scripts\n", "- It can run in a single Redis pipeline or in a RedisAI DAG, which makes implementing a serving channel smooth\n", "- As in this example, even if your model is in TensorFlow, you can use a Script to stitch the workflow together\n", "\n", "You can load the script from a file (`ml2rt.load_script` does this for you), which is probably your normal workflow since you'd save the script in a file, but here we pass the string directly into the `scriptstore` method" ] }, { "cell_type": "code", "execution_count": 7, "id": "f097999c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'OK'" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "script = \"\"\"\n", "def pre_process(tensors: List[Tensor], keys: List[str], args: List[str]):\n", " image = tensors[0]\n", " return image.float().div(255).unsqueeze(0)\n", "\n", "def post_process(tensors: List[Tensor], keys: List[str], args: List[str]):\n", " output = tensors[0]\n", " # tf model has 1001 classes, hence negative 1\n", " return output.max(1)[1] - 1\n", "\"\"\"\n", "con.scriptstore(\"processing_script\", device=\"CPU\", script=script, entry_points=(\"pre_process\", \"post_process\"))" ] }, { "cell_type": "markdown", "id": "7d9f2834", "metadata": {}, "source": [ "## Load the image and final classes\n", "Here we load the input image and the final classes to find the predicted output" ] }, { "cell_type": "code", "execution_count": 8, "id": "3bef00e0", "metadata": {}, "outputs": [], "source": [ "image = io.imread(\"data/cat.jpg\")\n", "class_idx = json.load(open(\"data/imagenet_classes.json\"))" ] }, { "cell_type": "markdown", "id": "44013e64", "metadata": {}, "source": [ "## Run the model serving pipeline\n", "Here we run the serving pipeline one step at a time and finally fetch the results out. The pipeline is organized into five steps\n", "\n", "```\n", "Setting Input -> Pre-processing Script -> Running Model -> Post-processing Script -> Fetching Output\n", "```" ] }, { "cell_type": "code", "execution_count": 10, "id": "7a3a24ff", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "281 tabby, tabby catamount\n" ] }
], "source": [ "con.tensorset('image', image)\n", "con.scriptexecute('processing_script', 'pre_process', inputs='image', outputs='processed')\n", "con.modelexecute('tensorflow_model', 'processed', 'model_out')\n", "con.scriptexecute('processing_script', 'post_process', inputs='model_out', outputs='final')\n", "final = con.tensorget('final')\n", "print(final[0], class_idx[str(final[0])])" ] }, { "cell_type": "code", "execution_count": null, "id": "b8aa981c", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 5 } diff --git a/notebooks/scikit_learn/ScikitLearn2Production.ipynb b/LinearRegressionWithScikitLearn.ipynb similarity index 51% rename from notebooks/scikit_learn/ScikitLearn2Production.ipynb rename to LinearRegressionWithScikitLearn.ipynb index 1bf8ed0..890a305 100644 --- a/notebooks/scikit_learn/ScikitLearn2Production.ipynb +++ b/LinearRegressionWithScikitLearn.ipynb @@ -9,91 +9,25 @@ "source": [ "# Taking ML (Scikit Learn) to highly scalable production using RedisAI\n", "Scikit-learn is probably the most used machine learning package in the industry. Even though there are a few options readily available for taking deep learning to production (TF Serving etc.), there have been no widely accepted attempts to build a framework that could help us take ML to production. Microsoft built [ONNXRuntime](https://github.com/microsoft/onnxruntime) and the scikit-learn exporter for this very purpose. \n", - "Very recently RedisAI had announced the support for ONNXRuntime as the third backend (Tensorflow and PyTorch was already supported). This makes us capable of pushing a scikit-learn model through ONNX to a super scalable production. This demo is focusing on showing how this can be accomplished. We'll train a linear regression model for predicting boston house price first. The trained model is then converted to ONNX IR using [sk2onnx](https://github.com/onnx/sklearn-onnx). Third part of the demo shows how to load the onnx binary into RedisAI runtime and how to communicate. " + "With support for ONNXRuntime as the third backend in RedisAI (TensorFlow and PyTorch were already supported), it is now easy to serve models from almost any traditional ML framework. This demo focuses on showing how this can be accomplished. We'll first train a linear regression model for predicting Boston house prices. The trained model is then converted to ONNX IR using [sk2onnx](https://github.com/onnx/sklearn-onnx). The third part of the demo shows how to load the ONNX binary into the RedisAI runtime and how to communicate with it. 
" ] }, { "cell_type": "code", - "execution_count": 3, - "metadata": { - "colab": {}, - "colab_type": "code", - "id": "qsftvU2WMNMd" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: skl2onnx in /home/sherin/miniconda3/lib/python3.7/site-packages (1.4.9)\n", - "Requirement already satisfied: onnxconverter-common>=1.4.2 in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (1.5.0)\n", - "Requirement already satisfied: protobuf in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (3.7.0)\n", - "Requirement already satisfied: scikit-learn>=0.19 in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (0.21.2)\n", - "Requirement already satisfied: numpy>=1.15 in /home/sherin/miniconda3/lib/python3.7/site-packages/numpy-1.16.4-py3.7-linux-x86_64.egg (from skl2onnx) (1.16.4)\n", - "Requirement already satisfied: six in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (1.12.0)\n", - "Requirement already satisfied: onnx>=1.2.1 in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (1.5.0)\n", - "Requirement already satisfied: setuptools in /home/sherin/miniconda3/lib/python3.7/site-packages (from protobuf->skl2onnx) (40.2.0)\n", - "Requirement already satisfied: joblib>=0.11 in /home/sherin/miniconda3/lib/python3.7/site-packages (from scikit-learn>=0.19->skl2onnx) (0.13.2)\n", - "Requirement already satisfied: scipy>=0.17.0 in /home/sherin/miniconda3/lib/python3.7/site-packages (from scikit-learn>=0.19->skl2onnx) (1.2.1)\n", - "Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/sherin/miniconda3/lib/python3.7/site-packages (from onnx>=1.2.1->skl2onnx) (3.7.2)\n", - "Requirement already satisfied: typing>=3.6.4 in /home/sherin/miniconda3/lib/python3.7/site-packages (from onnx>=1.2.1->skl2onnx) (3.6.6)\n", - "Requirement already satisfied: skl2onnx in /home/sherin/miniconda3/lib/python3.7/site-packages (1.4.9)\n", - "Requirement already satisfied: scikit-learn>=0.19 in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (0.21.2)\n", - "Requirement already satisfied: protobuf in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (3.7.0)\n", - "Requirement already satisfied: six in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (1.12.0)\n", - "Requirement already satisfied: onnxconverter-common>=1.4.2 in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (1.5.0)\n", - "Requirement already satisfied: numpy>=1.15 in /home/sherin/miniconda3/lib/python3.7/site-packages/numpy-1.16.4-py3.7-linux-x86_64.egg (from skl2onnx) (1.16.4)\n", - "Requirement already satisfied: onnx>=1.2.1 in /home/sherin/miniconda3/lib/python3.7/site-packages (from skl2onnx) (1.5.0)\n", - "Requirement already satisfied: scipy>=0.17.0 in /home/sherin/miniconda3/lib/python3.7/site-packages (from scikit-learn>=0.19->skl2onnx) (1.2.1)\n", - "Requirement already satisfied: joblib>=0.11 in /home/sherin/miniconda3/lib/python3.7/site-packages (from scikit-learn>=0.19->skl2onnx) (0.13.2)\n", - "Requirement already satisfied: setuptools in /home/sherin/miniconda3/lib/python3.7/site-packages (from protobuf->skl2onnx) (40.2.0)\n", - "Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/sherin/miniconda3/lib/python3.7/site-packages (from onnx>=1.2.1->skl2onnx) (3.7.2)\n", - "Requirement already satisfied: typing>=3.6.4 in /home/sherin/miniconda3/lib/python3.7/site-packages (from 
onnx>=1.2.1->skl2onnx) (3.6.6)\n", - "Collecting git+https://github.com/RedisAI/redisai-py/@onnxruntime\n", - " Cloning https://github.com/RedisAI/redisai-py/ (to revision onnxruntime) to /tmp/pip-req-build-pu_kkk06\n", - " Running command git clone -q https://github.com/RedisAI/redisai-py/ /tmp/pip-req-build-pu_kkk06\n", - " Running command git checkout -b onnxruntime --track origin/onnxruntime\n", - " Switched to a new branch 'onnxruntime'\n", - " Branch 'onnxruntime' set up to track remote branch 'onnxruntime' from 'origin'.\n", - "Requirement already satisfied: redis in /home/sherin/miniconda3/lib/python3.7/site-packages (from redisai==0.3.0) (3.2.1)\n", - "Requirement already satisfied: hiredis in /home/sherin/miniconda3/lib/python3.7/site-packages (from redisai==0.3.0) (1.0.0)\n", - "Requirement already satisfied: rmtest in /home/sherin/miniconda3/lib/python3.7/site-packages (from redisai==0.3.0) (0.7.0)\n", - "Building wheels for collected packages: redisai\n", - " Building wheel for redisai (setup.py) ... \u001b[?25ldone\n", - "\u001b[?25h Stored in directory: /tmp/pip-ephem-wheel-cache-g5np7tfg/wheels/bc/41/6c/294c468fc56049440cf0957709cbc453e271fed1c009123730\n", - "Successfully built redisai\n", - "Installing collected packages: redisai\n", - " Found existing installation: redisai 0.2.0\n", - " Uninstalling redisai-0.2.0:\n", - " Successfully uninstalled redisai-0.2.0\n", - "Successfully installed redisai-0.3.0\n" - ] - } - ], - "source": [ - "# Installing dependencies\n", - "!pip install skl2onnx\n", - "!pip install skl2onnx\n", - "# !pip install redisai\n", - "# hack since the redisai version is not updated in pypi yet\n", - "!pip install git+https://github.com/RedisAI/redisai-py/@onnxruntime" - ] - }, - { - "cell_type": "code", - "execution_count": 4, + "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_boston\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.linear_model import LinearRegression\n", - "import sklearn" + "import sklearn\n", + "import numpy as np" ] }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 2, "metadata": { "colab": {}, "colab_type": "code", @@ -118,7 +52,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 3, "metadata": { "colab": {}, "colab_type": "code", @@ -126,13 +60,12 @@ }, "outputs": [], "source": [ - "import redisai as rai\n", - "from redisai.model import Model as raimodel\n", - "try:\n", - " if rai.__version__ < '0.3.0':\n", - " raise\n", - "except:\n", - " raise RuntimeError('ONNX is introduced in redisai-py version 0.3.0. 
Upgrade!!')" + "import os\n", + "from redisai import Client\n", + "from ml2rt import load_model, save_onnx\n", + "\n", + "REDIS_HOST = os.getenv(\"REDIS_HOST\", \"localhost\")\n", + "REDIS_PORT = int(os.getenv(\"REDIS_PORT\", 6379))" ] }, { @@ -147,7 +80,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", @@ -162,7 +95,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 5, "metadata": { "colab": {}, "colab_type": "code", @@ -175,7 +108,7 @@ "(379, 13)" ] }, - "execution_count": 8, + "execution_count": 5, "metadata": {}, "output_type": "execute_result" } @@ -196,7 +129,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 6, "metadata": { "colab": {}, "colab_type": "code", @@ -206,10 +139,10 @@ { "data": { "text/plain": [ - "LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)" + "LinearRegression()" ] }, - "execution_count": 9, + "execution_count": 6, "metadata": {}, "output_type": "execute_result" } @@ -221,7 +154,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 7, "metadata": { "colab": {}, "colab_type": "code", @@ -232,7 +165,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Mean Squared Error: 22.90649510340278\n" + "Mean Squared Error: 24.384386924554153\n" ] } ], @@ -255,28 +188,20 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 8, "metadata": { "colab": {}, "colab_type": "code", "id": "IBEX0pSWM4r6" }, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "The maximum opset needed by this model is only 1.\n" - ] - } - ], + "outputs": [], "source": [ "# 1 is batch size and 13 is num features\n", "# reference: https://github.com/onnx/sklearn-onnx/blob/master/skl2onnx/convert.py\n", "initial_type = [('float_input', FloatTensorType([1, 13]))]\n", "\n", "onnx_model = convert_sklearn(model, initial_types=initial_type)\n", - "raimodel.save(onnx_model, 'boston.onnx')" + "save_onnx(onnx_model, 'boston.onnx')" ] }, { @@ -292,7 +217,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 9, "metadata": { "colab": {}, "colab_type": "code", @@ -300,7 +225,7 @@ }, "outputs": [], "source": [ - "con = rai.Client(host='localhost', port=6379, db=0)" + "con = Client(host=REDIS_HOST, port=REDIS_PORT, db=0)" ] }, { @@ -315,7 +240,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 10, "metadata": { "colab": {}, "colab_type": "code", @@ -325,17 +250,17 @@ { "data": { "text/plain": [ - "b'OK'" + "'OK'" ] }, - "execution_count": 15, + "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "model = raimodel.load(\"boston.onnx\")\n", - "con.modelset(\"onnx_model\", rai.Backend.onnx, rai.Device.cpu, model)" + "model = load_model(\"boston.onnx\")\n", + "con.modelstore(\"onnx_model\", \"ONNX\", \"CPU\", model)" ] }, { @@ -350,7 +275,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": 11, "metadata": { "colab": {}, "colab_type": "code", @@ -360,23 +285,19 @@ { "data": { "text/plain": [ - "b'OK'" + "'OK'" ] }, - "execution_count": 16, + "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "\n", "# dummydata taken from sklearn.datasets.load_boston().data[0]\n", - "dummydata = [\n", - " 0.00632, 18.0, 2.31, 0.0, 0.538, 6.575, 65.2, 4.09, 1.0, 296.0, 15.3, 396.9, 4.98]\n", - "tensor = rai.Tensor.scalar(rai.DType.float, *dummydata)\n", - 
"# If the tensor is too complex to pass it as python list, you can use BlobTensor that takes numpy array\n", - "# tensor = rai.BlobTensor.from_numpy(np.array(dummydata, dtype='float32'))\n", - "con.tensorset(\"input\", tensor)" + "dummydata = np.array([\n", + " 0.00632, 18.0, 2.31, 0.0, 0.538, 6.575, 65.2, 4.09, 1.0, 296.0, 15.3, 396.9, 4.98], dtype=np.float32)\n", + "con.tensorset(\"input\", dummydata.reshape((1, 13)))" ] }, { @@ -392,7 +313,7 @@ }, { "cell_type": "code", - "execution_count": 17, + "execution_count": 13, "metadata": { "colab": {}, "colab_type": "code", @@ -402,16 +323,16 @@ { "data": { "text/plain": [ - "b'OK'" + "'OK'" ] }, - "execution_count": 17, + "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "con.modelrun(\"onnx_model\", [\"input\"], [\"output\"])" + "con.modelexecute(\"onnx_model\", [\"input\"], [\"output\"])" ] }, { @@ -426,7 +347,7 @@ }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 14, "metadata": { "colab": {}, "colab_type": "code", @@ -437,14 +358,21 @@ "name": "stdout", "output_type": "stream", "text": [ - "House cost predicted by model is $29969.89631652832\n" + "House cost predicted by model is $30287.53662109375\n" ] } ], "source": [ - "outtensor = con.tensorget(\"output\", as_type=rai.BlobTensor)\n", - "print(f\"House cost predicted by model is ${outtensor.to_numpy().item() * 1000}\")" + "outtensor = con.tensorget(\"output\")\n", + "print(f\"House cost predicted by model is ${outtensor.item() * 1000}\")" ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { @@ -454,7 +382,7 @@ "version": "0.3.2" }, "kernelspec": { - "display_name": "Python 3", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -468,7 +396,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.5" + "version": "3.8.7" } }, "nbformat": 4, diff --git a/bash_client/onnx_mnist.sh b/reference/bash_client/onnx_mnist.sh similarity index 100% rename from bash_client/onnx_mnist.sh rename to reference/bash_client/onnx_mnist.sh diff --git a/bash_client/sklearn_linear_regression.sh b/reference/bash_client/sklearn_linear_regression.sh similarity index 100% rename from bash_client/sklearn_linear_regression.sh rename to reference/bash_client/sklearn_linear_regression.sh diff --git a/bash_client/sklearn_logistic_regression.sh b/reference/bash_client/sklearn_logistic_regression.sh similarity index 100% rename from bash_client/sklearn_logistic_regression.sh rename to reference/bash_client/sklearn_logistic_regression.sh diff --git a/bash_client/tensorflow_tinyyolo.sh b/reference/bash_client/tensorflow_tinyyolo.sh similarity index 100% rename from bash_client/tensorflow_tinyyolo.sh rename to reference/bash_client/tensorflow_tinyyolo.sh diff --git a/go_client/tensorflow_imagenet.go b/reference/go_client/tensorflow_imagenet.go similarity index 100% rename from go_client/tensorflow_imagenet.go rename to reference/go_client/tensorflow_imagenet.go diff --git a/go_client/torch_charrnn.go b/reference/go_client/torch_charrnn.go similarity index 100% rename from go_client/torch_charrnn.go rename to reference/go_client/torch_charrnn.go diff --git a/go_client/torch_imagenet.go b/reference/go_client/torch_imagenet.go similarity index 100% rename from go_client/torch_imagenet.go rename to reference/go_client/torch_imagenet.go diff --git a/js_client/helpers.js b/reference/js_client/helpers.js 
similarity index 100% rename from js_client/helpers.js rename to reference/js_client/helpers.js diff --git a/js_client/package-lock.json b/reference/js_client/package-lock.json similarity index 100% rename from js_client/package-lock.json rename to reference/js_client/package-lock.json diff --git a/js_client/package.json b/reference/js_client/package.json similarity index 100% rename from js_client/package.json rename to reference/js_client/package.json diff --git a/js_client/readme.md b/reference/js_client/readme.md similarity index 100% rename from js_client/readme.md rename to reference/js_client/readme.md diff --git a/js_client/tensorflow_imagenet.js b/reference/js_client/tensorflow_imagenet.js similarity index 100% rename from js_client/tensorflow_imagenet.js rename to reference/js_client/tensorflow_imagenet.js diff --git a/js_client/tensorflow_mobilenet.js b/reference/js_client/tensorflow_mobilenet.js similarity index 100% rename from js_client/tensorflow_mobilenet.js rename to reference/js_client/tensorflow_mobilenet.js diff --git a/js_client/torch_charrnn.js b/reference/js_client/torch_charrnn.js similarity index 100% rename from js_client/torch_charrnn.js rename to reference/js_client/torch_charrnn.js diff --git a/js_client/torch_imagenet.js b/reference/js_client/torch_imagenet.js similarity index 100% rename from js_client/torch_imagenet.js rename to reference/js_client/torch_imagenet.js diff --git a/models/onnx/mnist/mnist.onnx b/reference/models/onnx/mnist/mnist.onnx similarity index 100% rename from models/onnx/mnist/mnist.onnx rename to reference/models/onnx/mnist/mnist.onnx diff --git a/models/pytorch/charrnn/charrnn_model.pt b/reference/models/pytorch/charrnn/charrnn_model.pt similarity index 100% rename from models/pytorch/charrnn/charrnn_model.pt rename to reference/models/pytorch/charrnn/charrnn_model.pt diff --git a/models/pytorch/charrnn/charrnn_pipeline.pt b/reference/models/pytorch/charrnn/charrnn_pipeline.pt similarity index 100% rename from models/pytorch/charrnn/charrnn_pipeline.pt rename to reference/models/pytorch/charrnn/charrnn_pipeline.pt diff --git a/models/pytorch/charrnn/model_checker.py b/reference/models/pytorch/charrnn/model_checker.py similarity index 100% rename from models/pytorch/charrnn/model_checker.py rename to reference/models/pytorch/charrnn/model_checker.py diff --git a/models/pytorch/charrnn/model_saver.py b/reference/models/pytorch/charrnn/model_saver.py similarity index 100% rename from models/pytorch/charrnn/model_saver.py rename to reference/models/pytorch/charrnn/model_saver.py diff --git a/models/pytorch/charrnn/model_trainer.py b/reference/models/pytorch/charrnn/model_trainer.py similarity index 100% rename from models/pytorch/charrnn/model_trainer.py rename to reference/models/pytorch/charrnn/model_trainer.py diff --git a/models/pytorch/chatbot/decoder.pt b/reference/models/pytorch/chatbot/decoder.pt similarity index 100% rename from models/pytorch/chatbot/decoder.pt rename to reference/models/pytorch/chatbot/decoder.pt diff --git a/models/pytorch/chatbot/encoder.pt b/reference/models/pytorch/chatbot/encoder.pt similarity index 100% rename from models/pytorch/chatbot/encoder.pt rename to reference/models/pytorch/chatbot/encoder.pt diff --git a/models/pytorch/chatbot/model_checker.py b/reference/models/pytorch/chatbot/model_checker.py similarity index 100% rename from models/pytorch/chatbot/model_checker.py rename to reference/models/pytorch/chatbot/model_checker.py diff --git a/models/pytorch/chatbot/model_saver.py 
b/reference/models/pytorch/chatbot/model_saver.py similarity index 100% rename from models/pytorch/chatbot/model_saver.py rename to reference/models/pytorch/chatbot/model_saver.py diff --git a/models/pytorch/chatbot/voc.json b/reference/models/pytorch/chatbot/voc.json similarity index 100% rename from models/pytorch/chatbot/voc.json rename to reference/models/pytorch/chatbot/voc.json diff --git a/models/pytorch/chatbot/weights/decoder.pth b/reference/models/pytorch/chatbot/weights/decoder.pth similarity index 100% rename from models/pytorch/chatbot/weights/decoder.pth rename to reference/models/pytorch/chatbot/weights/decoder.pth diff --git a/models/pytorch/chatbot/weights/encoder.pth b/reference/models/pytorch/chatbot/weights/encoder.pth similarity index 100% rename from models/pytorch/chatbot/weights/encoder.pth rename to reference/models/pytorch/chatbot/weights/encoder.pth diff --git a/models/pytorch/imagenet/data_processing_script.txt b/reference/models/pytorch/imagenet/data_processing_script.txt similarity index 100% rename from models/pytorch/imagenet/data_processing_script.txt rename to reference/models/pytorch/imagenet/data_processing_script.txt diff --git a/models/pytorch/imagenet/model_checker.py b/reference/models/pytorch/imagenet/model_checker.py similarity index 100% rename from models/pytorch/imagenet/model_checker.py rename to reference/models/pytorch/imagenet/model_checker.py diff --git a/models/pytorch/imagenet/model_saver.py b/reference/models/pytorch/imagenet/model_saver.py similarity index 100% rename from models/pytorch/imagenet/model_saver.py rename to reference/models/pytorch/imagenet/model_saver.py diff --git a/models/pytorch/imagenet/resnet50.pt b/reference/models/pytorch/imagenet/resnet50.pt similarity index 100% rename from models/pytorch/imagenet/resnet50.pt rename to reference/models/pytorch/imagenet/resnet50.pt diff --git a/models/sklearn/boston_house_price_prediction/boston.onnx b/reference/models/sklearn/boston_house_price_prediction/boston.onnx similarity index 100% rename from models/sklearn/boston_house_price_prediction/boston.onnx rename to reference/models/sklearn/boston_house_price_prediction/boston.onnx diff --git a/models/sklearn/boston_house_price_prediction/model_checker.py b/reference/models/sklearn/boston_house_price_prediction/model_checker.py similarity index 100% rename from models/sklearn/boston_house_price_prediction/model_checker.py rename to reference/models/sklearn/boston_house_price_prediction/model_checker.py diff --git a/models/sklearn/boston_house_price_prediction/model_saver.py b/reference/models/sklearn/boston_house_price_prediction/model_saver.py similarity index 100% rename from models/sklearn/boston_house_price_prediction/model_saver.py rename to reference/models/sklearn/boston_house_price_prediction/model_saver.py diff --git a/models/sklearn/linear_regression/linear_regression.onnx b/reference/models/sklearn/linear_regression/linear_regression.onnx similarity index 100% rename from models/sklearn/linear_regression/linear_regression.onnx rename to reference/models/sklearn/linear_regression/linear_regression.onnx diff --git a/models/sklearn/linear_regression/model_checker.py b/reference/models/sklearn/linear_regression/model_checker.py similarity index 100% rename from models/sklearn/linear_regression/model_checker.py rename to reference/models/sklearn/linear_regression/model_checker.py diff --git a/models/sklearn/linear_regression/model_saver.py b/reference/models/sklearn/linear_regression/model_saver.py similarity index 100% 
rename from models/sklearn/linear_regression/model_saver.py rename to reference/models/sklearn/linear_regression/model_saver.py diff --git a/models/sklearn/logistic_regression/logistic.onnx b/reference/models/sklearn/logistic_regression/logistic.onnx similarity index 100% rename from models/sklearn/logistic_regression/logistic.onnx rename to reference/models/sklearn/logistic_regression/logistic.onnx diff --git a/models/sklearn/logistic_regression/model_checker.py b/reference/models/sklearn/logistic_regression/model_checker.py similarity index 100% rename from models/sklearn/logistic_regression/model_checker.py rename to reference/models/sklearn/logistic_regression/model_checker.py diff --git a/models/sklearn/logistic_regression/model_saver.py b/reference/models/sklearn/logistic_regression/model_saver.py similarity index 100% rename from models/sklearn/logistic_regression/model_saver.py rename to reference/models/sklearn/logistic_regression/model_saver.py diff --git a/models/spark/decisiontree_with_pipeline/model_saver.py b/reference/models/spark/decisiontree_with_pipeline/model_saver.py similarity index 100% rename from models/spark/decisiontree_with_pipeline/model_saver.py rename to reference/models/spark/decisiontree_with_pipeline/model_saver.py diff --git a/models/spark/decisiontree_with_pipeline/sample_libsvm_data.txt b/reference/models/spark/decisiontree_with_pipeline/sample_libsvm_data.txt similarity index 100% rename from models/spark/decisiontree_with_pipeline/sample_libsvm_data.txt rename to reference/models/spark/decisiontree_with_pipeline/sample_libsvm_data.txt diff --git a/models/spark/decisiontree_with_pipeline/spark.onnx b/reference/models/spark/decisiontree_with_pipeline/spark.onnx similarity index 100% rename from models/spark/decisiontree_with_pipeline/spark.onnx rename to reference/models/spark/decisiontree_with_pipeline/spark.onnx diff --git a/models/spark/linear_regression/linear_regression.onnx b/reference/models/spark/linear_regression/linear_regression.onnx similarity index 100% rename from models/spark/linear_regression/linear_regression.onnx rename to reference/models/spark/linear_regression/linear_regression.onnx diff --git a/models/spark/linear_regression/model_checker.py b/reference/models/spark/linear_regression/model_checker.py similarity index 100% rename from models/spark/linear_regression/model_checker.py rename to reference/models/spark/linear_regression/model_checker.py diff --git a/models/spark/linear_regression/model_saver.py b/reference/models/spark/linear_regression/model_saver.py similarity index 100% rename from models/spark/linear_regression/model_saver.py rename to reference/models/spark/linear_regression/model_saver.py diff --git a/models/spark/one_vs_rest/model_checker.py b/reference/models/spark/one_vs_rest/model_checker.py similarity index 100% rename from models/spark/one_vs_rest/model_checker.py rename to reference/models/spark/one_vs_rest/model_checker.py diff --git a/models/spark/one_vs_rest/model_saver.py b/reference/models/spark/one_vs_rest/model_saver.py similarity index 100% rename from models/spark/one_vs_rest/model_saver.py rename to reference/models/spark/one_vs_rest/model_saver.py diff --git a/models/spark/one_vs_rest/multiclass_classification_data.txt b/reference/models/spark/one_vs_rest/multiclass_classification_data.txt similarity index 100% rename from models/spark/one_vs_rest/multiclass_classification_data.txt rename to reference/models/spark/one_vs_rest/multiclass_classification_data.txt diff --git 
a/models/spark/one_vs_rest/spark.onnx b/reference/models/spark/one_vs_rest/spark.onnx similarity index 100% rename from models/spark/one_vs_rest/spark.onnx rename to reference/models/spark/one_vs_rest/spark.onnx diff --git a/models/spark/pca/model_checker.py b/reference/models/spark/pca/model_checker.py similarity index 100% rename from models/spark/pca/model_checker.py rename to reference/models/spark/pca/model_checker.py diff --git a/models/spark/pca/model_saver.py b/reference/models/spark/pca/model_saver.py similarity index 100% rename from models/spark/pca/model_saver.py rename to reference/models/spark/pca/model_saver.py diff --git a/models/spark/pca/spark.onnx b/reference/models/spark/pca/spark.onnx similarity index 100% rename from models/spark/pca/spark.onnx rename to reference/models/spark/pca/spark.onnx diff --git a/models/tensorflow/imagenet/data_processing_script.txt b/reference/models/tensorflow/imagenet/data_processing_script.txt similarity index 100% rename from models/tensorflow/imagenet/data_processing_script.txt rename to reference/models/tensorflow/imagenet/data_processing_script.txt diff --git a/models/tensorflow/imagenet/model_checker.py b/reference/models/tensorflow/imagenet/model_checker.py similarity index 100% rename from models/tensorflow/imagenet/model_checker.py rename to reference/models/tensorflow/imagenet/model_checker.py diff --git a/models/tensorflow/imagenet/model_saver.py b/reference/models/tensorflow/imagenet/model_saver.py similarity index 100% rename from models/tensorflow/imagenet/model_saver.py rename to reference/models/tensorflow/imagenet/model_saver.py diff --git a/models/tensorflow/imagenet/resnet50.pb b/reference/models/tensorflow/imagenet/resnet50.pb similarity index 100% rename from models/tensorflow/imagenet/resnet50.pb rename to reference/models/tensorflow/imagenet/resnet50.pb diff --git a/models/tensorflow/mobilenet/mobilenet_224.pb b/reference/models/tensorflow/mobilenet/mobilenet_224.pb similarity index 100% rename from models/tensorflow/mobilenet/mobilenet_224.pb rename to reference/models/tensorflow/mobilenet/mobilenet_224.pb diff --git a/models/tensorflow/tinyyolo/tinyyolo.pb b/reference/models/tensorflow/tinyyolo/tinyyolo.pb similarity index 100% rename from models/tensorflow/tinyyolo/tinyyolo.pb rename to reference/models/tensorflow/tinyyolo/tinyyolo.pb diff --git a/models/tensorflow/tinyyolo/yolo_boxes_script.py b/reference/models/tensorflow/tinyyolo/yolo_boxes_script.py similarity index 100% rename from models/tensorflow/tinyyolo/yolo_boxes_script.py rename to reference/models/tensorflow/tinyyolo/yolo_boxes_script.py diff --git a/python_client/cli.py b/reference/python_client/cli.py similarity index 100% rename from python_client/cli.py rename to reference/python_client/cli.py diff --git a/python_client/sklearn_boston_house_price_prediction.py b/reference/python_client/sklearn_boston_house_price_prediction.py similarity index 100% rename from python_client/sklearn_boston_house_price_prediction.py rename to reference/python_client/sklearn_boston_house_price_prediction.py diff --git a/python_client/sklearn_linear_regression.py b/reference/python_client/sklearn_linear_regression.py similarity index 100% rename from python_client/sklearn_linear_regression.py rename to reference/python_client/sklearn_linear_regression.py diff --git a/python_client/sklearn_logistic_regression.py b/reference/python_client/sklearn_logistic_regression.py similarity index 100% rename from python_client/sklearn_logistic_regression.py rename to 
reference/python_client/sklearn_logistic_regression.py diff --git a/python_client/spark_decisiontree.py b/reference/python_client/spark_decisiontree.py similarity index 100% rename from python_client/spark_decisiontree.py rename to reference/python_client/spark_decisiontree.py diff --git a/python_client/spark_linear_regression.py b/reference/python_client/spark_linear_regression.py similarity index 100% rename from python_client/spark_linear_regression.py rename to reference/python_client/spark_linear_regression.py diff --git a/python_client/spark_one_vs_rest.py b/reference/python_client/spark_one_vs_rest.py similarity index 100% rename from python_client/spark_one_vs_rest.py rename to reference/python_client/spark_one_vs_rest.py diff --git a/python_client/spark_pca.py b/reference/python_client/spark_pca.py similarity index 100% rename from python_client/spark_pca.py rename to reference/python_client/spark_pca.py diff --git a/python_client/tensorflow_imagenet.py b/reference/python_client/tensorflow_imagenet.py similarity index 100% rename from python_client/tensorflow_imagenet.py rename to reference/python_client/tensorflow_imagenet.py diff --git a/python_client/tensorflow_tinyyolo.py b/reference/python_client/tensorflow_tinyyolo.py similarity index 100% rename from python_client/tensorflow_tinyyolo.py rename to reference/python_client/tensorflow_tinyyolo.py diff --git a/python_client/torch_charrnn.py b/reference/python_client/torch_charrnn.py similarity index 100% rename from python_client/torch_charrnn.py rename to reference/python_client/torch_charrnn.py diff --git a/python_client/torch_imagenet.py b/reference/python_client/torch_imagenet.py similarity index 100% rename from python_client/torch_imagenet.py rename to reference/python_client/torch_imagenet.py diff --git a/sentinel/README.md b/reference/sentinel/README.md similarity index 100% rename from sentinel/README.md rename to reference/sentinel/README.md diff --git a/sentinel/installation.sh b/reference/sentinel/installation.sh similarity index 100% rename from sentinel/installation.sh rename to reference/sentinel/installation.sh diff --git a/sentinel/model_run.py b/reference/sentinel/model_run.py similarity index 100% rename from sentinel/model_run.py rename to reference/sentinel/model_run.py diff --git a/sentinel/model_set.py b/reference/sentinel/model_set.py similarity index 100% rename from sentinel/model_set.py rename to reference/sentinel/model_set.py diff --git a/sentinel/redis_configs/redis.conf b/reference/sentinel/redis_configs/redis.conf similarity index 100% rename from sentinel/redis_configs/redis.conf rename to reference/sentinel/redis_configs/redis.conf diff --git a/sentinel/redis_configs/sentinel.conf b/reference/sentinel/redis_configs/sentinel.conf similarity index 100% rename from sentinel/redis_configs/sentinel.conf rename to reference/sentinel/redis_configs/sentinel.conf diff --git a/sentinel/run_server.sh b/reference/sentinel/run_server.sh similarity index 100% rename from sentinel/run_server.sh rename to reference/sentinel/run_server.sh diff --git a/sentinel/setup.py b/reference/sentinel/setup.py similarity index 100% rename from sentinel/setup.py rename to reference/sentinel/setup.py diff --git a/requirements.txt b/requirements.txt index 6c31bbc..51fdf4e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,15 +1,131 @@ +absl-py==1.0.0 +ansiwrap==0.8.4 +appnope==0.1.2 +argcomplete==1.12.3 +argon2-cffi==21.1.0 +astor==0.8.1 +attrs==21.2.0 +backcall==0.2.0 +black==21.11b1 +bleach==4.1.0 
+cached-property==1.5.2 certifi==2019.9.11 +cffi==1.15.0 +charset-normalizer==2.0.8 +click==8.0.3 cycler==0.10.0 +debugpy==1.5.1 decorator==4.4.0 +defusedxml==0.7.1 +Deprecated==1.2.13 +entrypoints==0.3 +fonttools==4.28.5 +gast==0.2.2 +google-pasta==0.2.0 +grpcio==1.43.0 +h5py==3.6.0 hiredis==1.0.0 -imageio==2.6.1 -kiwisolver==1.1.0 -matplotlib==3.1.1 +idna==3.3 +imageio==2.13.3 +importlib-metadata==4.10.0 +importlib-resources==5.4.0 +iniconfig==1.1.1 +ipykernel==6.5.1 +ipython==7.29.0 +ipython-genutils==0.2.0 +ipywidgets==7.6.5 +jedi==0.18.1 +Jinja2==3.0.3 +joblib==1.1.0 +jsonschema==4.2.1 +jupyter==1.0.0 +jupyter-client==7.1.0 +jupyter-console==6.4.0 +jupyter-core==4.9.1 +jupyterlab-pygments==0.1.2 +jupyterlab-widgets==1.0.2 +Keras-Applications==1.0.8 +Keras-Preprocessing==1.1.2 +kiwisolver==1.3.2 +Markdown==3.3.6 +MarkupSafe==2.0.1 +matplotlib==3.5.1 +matplotlib-inline==0.1.3 +mistune==0.8.4 ml2rt==0.2.0 -networkx==2.4 -Pillow==6.2.0 -pyparsing==2.4.2 -python-dateutil==2.8.0 -PyWavelets==1.1.1 -redisai==1.2.0 +mypy-extensions==0.4.3 +nbclient==0.5.9 +nbconvert==6.3.0 +nbformat==5.1.3 +nest-asyncio==1.5.1 +networkx==2.6.3 +notebook==6.4.6 +numpy==1.21.4 +onnx==1.10.2 +onnxconverter-common==1.8.1 +opt-einsum==3.3.0 +packaging==21.3 +pandas==1.3.4 +pandocfilters==1.5.0 +papermill==2.3.3 +parso==0.8.2 +pathspec==0.9.0 +pexpect==4.8.0 +pickleshare==0.7.5 +Pillow==8.4.0 +platformdirs==2.4.0 +pluggy==1.0.0 +prometheus-client==0.12.0 +prompt-toolkit==3.0.22 +protobuf==3.19.1 +ptyprocess==0.7.0 +py==1.11.0 +pycparser==2.21 +Pygments==2.10.0 +pyparsing==3.0.6 +pyrsistent==0.18.0 +pytest==6.2.5 +python-dateutil==2.8.2 +pytz==2021.3 +PyWavelets==1.2.0 +PyYAML==6.0 +pyzmq==22.3.0 +qtconsole==5.2.1 +QtPy==1.11.2 +redis==3.5.3 +redisai==1.2.1 +regex==2021.11.10 +requests==2.26.0 +scikit-image==0.18.3 +scikit-learn==1.0.1 +scipy==1.7.3 +Send2Trash==1.8.0 six==1.12.0 +skl2onnx==1.10.2 +tenacity==8.0.1 +tensorboard==1.15.0 +tensorflow==1.15.0 +tensorflow-estimator==1.15.1 +tensorflow-hub==0.12.0 +termcolor==1.1.0 +terminado==0.12.1 +testpath==0.5.0 +textwrap3==0.9.2 +threadpoolctl==3.0.0 +tifffile==2021.11.2 +toml==0.10.2 +tomli==1.2.2 +torch==1.10.0 +torchvision==0.11.1 +tornado==6.1 +tqdm==4.62.3 +traitlets==5.1.1 +typed-ast==1.5.1 +typing-extensions==4.0.0 +urllib3==1.26.7 +wcwidth==0.2.5 +webencodings==0.5.1 +Werkzeug==2.0.2 +widgetsnbextension==3.5.2 +wrapt==1.13.3 +zipp==3.6.0 diff --git a/notebooks/shapley_explainability/XGBoostGenericShapleyFraudDetection.ipynb b/shapley_explainability/XGBoostGenericShapleyFraudDetection.ipynb similarity index 100% rename from notebooks/shapley_explainability/XGBoostGenericShapleyFraudDetection.ipynb rename to shapley_explainability/XGBoostGenericShapleyFraudDetection.ipynb diff --git a/notebooks/shapley_explainability/XGBoostShapleyFraudDetection.ipynb b/shapley_explainability/XGBoostShapleyFraudDetection.ipynb similarity index 100% rename from notebooks/shapley_explainability/XGBoostShapleyFraudDetection.ipynb rename to shapley_explainability/XGBoostShapleyFraudDetection.ipynb diff --git a/notebooks/shapley_explainability/data/.gitkeep b/shapley_explainability/data/.gitkeep similarity index 100% rename from notebooks/shapley_explainability/data/.gitkeep rename to shapley_explainability/data/.gitkeep diff --git a/notebooks/shapley_explainability/models/.gitkeep b/shapley_explainability/models/.gitkeep similarity index 100% rename from notebooks/shapley_explainability/models/.gitkeep rename to shapley_explainability/models/.gitkeep diff --git 
a/notebooks/shapley_explainability/shapscript.py b/shapley_explainability/shapscript.py similarity index 100% rename from notebooks/shapley_explainability/shapscript.py rename to shapley_explainability/shapscript.py diff --git a/notebooks/shapley_explainability/torch_shapley.py b/shapley_explainability/torch_shapley.py similarity index 100% rename from notebooks/shapley_explainability/torch_shapley.py rename to shapley_explainability/torch_shapley.py diff --git a/test.py b/test.py new file mode 100644 index 0000000..f7e6fba --- /dev/null +++ b/test.py @@ -0,0 +1,14 @@ +import papermill as pm +import os +import glob +import pytest + + +@pytest.mark.parametrize("notebook_path", glob.glob(os.path.join(os.path.dirname(__file__), "*.ipynb"))) +def test_notebook_runner(notebook_path): + """ + This pytest function runs every ipynb file in the repository + using papermill - a notebook runner that raises if any exception occurs while executing the notebook + """ + # Execute one notebook per parametrized case; "-" streams the executed copy to stdout + pm.execute_notebook(notebook_path, "-")
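+ +# To reproduce the CI run locally, something like the following should work +# (a sketch, assuming Docker is available and the pinned requirements are installed): +#   docker run -d -p 6379:6379 redislabs/redisai:edge-cpu-bionic +#   REDIS_HOST=localhost REDIS_PORT=6379 pytest test.py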