
Commit d6743a2: getting ready for release on pip
(1 parent: 20fda33)

5 files changed (+47 -21 lines)

HISTORY.rst (+18 -1)

```diff
@@ -2,7 +2,24 @@
 History
 =======
 
-0.1.0 (2023-04-22)
+0.2.1 (August, 2023)
+------------------
+
+* Preparing for a release on pip
+
+0.2.0 (July, 2023)
+------------------
+
+* Lots of GUI improvements
+* Default model is now run on a proxy server (still does not require any setup or API key)
+
+0.1.1 (July, 2023)
+------------------
+
+* More models are now supported!
+* Everything now runs locally (no need for an OpenAI API key)
+
+0.1.0 (July, 2023)
 ------------------
 
 * Initial package version.
```

LICENSE (+1 -1)

```diff
@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) 2023, NeuroMatch
+Copyright (c) 2023, Contextual Dynamics Laboratory
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
```

README.md (+24 -15)

````diff
@@ -5,9 +5,6 @@
 
 Chatify is a Python package that adds IPython magic commands to Jupyter notebooks, providing LLM-driven enhancements to markdown and code cells. This package is currently in the *alpha* stage: expect broken things, crashes, bad (wrong, misleading) answers, and other serious issues. That said, we think Chatify is pretty neat even in this early form, and we're excited about its future!
 
-
-
-
 # Background
 
 This tool was originally created to supplement the [Neuromatch Academy](https://compneuro.neuromatch.io/tutorials/intro.html) materials. A "Chatify-enhanced" version of the Neuromatch computational neuroscience course may be found [here](https://contextlab.github.io/course-content/tutorials/intro.html), and an enhanced version of the deep learning course may be found [here](https://contextlab.github.io/course-content-dl/tutorials/intro.html).
@@ -23,7 +20,7 @@ davos.config.suppress_stdout = True
 ```
 
 ```python
-smuggle chatify  # pip: git+https://github.com/ContextLab/chatify.git
+smuggle chatify
 %load_ext chatify
 ```
 
````
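Once the extension from the cell above is loaded, using Chatify amounts to putting the `%%explain` magic at the top of a code cell. As a rough sketch of such a notebook cell (the exact output Chatify produces may vary):

```python
%%explain
# Chatify generates an LLM-based explanation of the code in this cell
squares = [n**2 for n in range(10)]
```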
````diff
@@ -35,27 +32,39 @@ The first cell installs and enables the [Davos](https://github.com/ContextLab/da
 
 The `smuggle` statement in the second cell is what actually installs, configures, and imports Chatify, and the `%load_ext` line loads and enables the Chatify extension (accessed by adding the `%%explain` magic command to the top of any code cell in the notebook).
 
-If you like to live on the wild side and don't care about protecting your runtime environment from potential side effects of installing Chatify (note: this is **not recommended** and may **break other aspects of your setup**!), you can replace those two cells above with the following:
+If you don't care about protecting your runtime environment from potential side effects of installing Chatify (note: this is **not recommended** and may **break other aspects of your setup**!), you can replace those two cells above with the following:
 
 ```python
-!pip install -qqq git+https://github.com/ContextLab/chatify.git
+!pip install -qqq chatify
 import chatify
 %load_ext chatify
 ```
 
-### Why isn't Chatify on pip/conda?
+Note: the quiet (`-qqq`) flags are optional, but including them will avoid cluttering up your notebook with installation-related messages. It's probably fine to use this "direct/unsafe install" method if you're running Chatify in a Colab notebook, inside of a container, or in a dedicated virtual environment, but you shouldn't install the unsafe version of Chatify locally. GPU-friendly libraries are notoriously finicky, and since Chatify will mess with some of those libraries it's reasonably likely that your other GPU libraries will stop working correctly if you don't isolate Chatify's effects on your system.
+
+### I like to live on the wild side-- give me the bleeding edge version!
+
+Psst...if you really want to see our lack of quality control on full display, we can tell you the secret password 🤫. Step this way...that's right. Don't mind those animals, pay no attention...🐯🦁🦎...🦌🐘🐬🦕...to install Chatify directly from GitHub (**even though this is not recommended!!**), with no safety checks or guarantees whatsoever, you can add and run this cell at the top of your notebook:
+
+```python
+!pip install -qqq chatify git+https://github.com/ContextLab/chatify.git
+import chatify
+%load_ext chatify
+```
 
-It will be soon! We're doing some usability testing first. We'll likely make the package installable via pip initially, and then "later on" (i.e., when someone requests it and/or we get around to it!) we'll add conda support too. For now, you're stuck with the bleeding edge "install directly from GitHub" option.
+You should really only do this in a fully protected (containerized, virtual)
+environment or on Colab, since it could break other stuff on your system. Don't
+say we didn't warn you!
 
 ## Customizing Chatify
 
-Chatify is designed to work by default in the free tiers of [Colaboratory](https://colab.research.google.com/) and [Kaggle](https://www.kaggle.com/code) notebooks, and to operate without requiring any additional costs or setup beyond installing and enabling Chatify itself. In addition to Colaboratory and Kaggle notebooks, Chatify also supports a variety of other systems and setups, including running locally or on other cloud-based systems. For setups with additional resources, it is possible to switch to better-performing or lower-cost models. Chatify works in CPU-only environments, but it is GPU-friendly (for both CUDA-enabled and Metal-enabled systems). We support any text-generation model on [Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending), Meta's [Llama 2](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) models, and OpenAI's [ChatGPT](https://chat.openai.com/) models (both ChatGPT-3.5 and ChatGPT-4). Models that run on Hugging Face or OpenAI's servers require either a [Hugging Face API key](https://huggingface.co/docs/api-inference/quicktour#get-your-api-token) or an [OpenAI API key](https://platform.openai.com/signup), respectively.
+Chatify is designed to work by default in the free tiers of [Colaboratory](https://colab.research.google.com/) and [Kaggle](https://www.kaggle.com/code) notebooks, and to operate without requiring any additional costs or setup beyond installing and enabling Chatify itself. There's some server magic that happens behind the scenes to make that happen. In addition to Colaboratory and Kaggle notebooks, Chatify also supports a variety of other systems and setups, including running locally or on other cloud-based systems (e.g., if you don't want our servers to see what you're doing 🕵️). For setups with additional resources, it is possible to switch to better-performing or lower-cost models. Chatify works in CPU-only environments, but it is GPU-friendly (for both CUDA-enabled and Metal-enabled systems). We support any text-generation model on [Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending), Meta's [Llama 2](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) models, and OpenAI's [ChatGPT](https://chat.openai.com/) models (e.g., ChatGPT-3.5 and ChatGPT-4). Models that run on Hugging Face or OpenAI's servers require either a [Hugging Face API key](https://huggingface.co/docs/api-inference/quicktour#get-your-api-token) or an [OpenAI API key](https://platform.openai.com/signup), respectively.
 
 Once you have your API key(s), if needed, create a `config.yaml` file in the directory where you launch your notebook. For the OpenAI configuration, replace `<OPENAI API KEY>` with your actual OpenAI API key (with no quotes) and then create a `config.yaml` file with the following contents:
 
 ### OpenAI configuration
 
-If you have an OpenAI API key, adding this config.yaml file to your local directory (after adding your API key) will substantially improve your experience:
+If you have an OpenAI API key, adding this config.yaml file to your local directory (after adding your API key) will substantially improve your experience by generating higher quality responses:
 
 ```yaml
 cache_config:
````
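The hunk above recommends confining the "direct/unsafe install" to a container or a dedicated virtual environment. As a small stand-alone helper (not part of Chatify) for checking whether the current interpreter is isolated, one might sketch:

```python
import sys

def in_virtualenv() -> bool:
    """Return True when Python is running inside a venv/virtualenv."""
    # Inside a venv, sys.prefix points at the environment, while
    # sys.base_prefix points at the interpreter it was created from.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```

A guard like this could precede the `!pip install` cell to warn users who are about to install into their system Python.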
````diff
@@ -69,7 +78,7 @@ feedback: False
 model_config:
   open_ai_key: <OPENAI API KEY>
   model: open_ai_model
-  model_name: gpt-3.5-turbo
+  model_name: gpt-4
   max_tokens: 2500
 
 chain_config:
@@ -81,7 +90,7 @@ prompts_config:
 
 ### Llama 2 configuration
 
-If you're running your notebook on a well-resourced machine, you can use this config file to get good performance for free! The 7B and 13B variants of Llama 2 both run on the free tier of Google Colaboratory and Kaggle, but the 13B is substantially slower (therefore we recommend the 7B variant if you're using Colaboratory or Kaggle notebooks). Note that using this configuration requires installing the "HuggingFace" dependencies (`pip install chatify[hf]`).
+If you're running your notebook on a well-resourced machine, you can use this config file to get good performance for free, and without using our servers! The 7B and 13B variants of Llama 2 both run on the free tier of Google Colaboratory and Kaggle, but the 13B is substantially slower (therefore we recommend the 7B variant if you're using Colaboratory or Kaggle notebooks). Note that using this configuration requires installing the "HuggingFace" dependencies (`pip install chatify[hf]`).
 
 ```yaml
 cache_config:
@@ -109,7 +118,7 @@ prompts_config:
 
 ### Hugging Face configuration (local)
 
-If you're running your notebook on a well-resourced machine, you can use this config file to get good performance for free! This will likely require lots of RAM. It's a nice way to explore a wide variety of models. Note that using this configuration requires installing the "HuggingFace" dependencies (`pip install chatify[hf]`).
+If you're running your notebook on a well-resourced machine, you can use this config file to get good performance for free, and without using our servers! For most models this will require lots of RAM. It's a nice way to explore a wide variety of models. Note that using this configuration requires installing the "HuggingFace" dependencies (`pip install chatify[hf]`).
 
 ```yaml
 cache_config:
@@ -139,12 +148,12 @@ After saving your `config.yaml` file, follow the "[**Installing and enabling Cha
 
 # What do I do if I have questions or problems?
 
-We'd love to hear from you! Please consider filling out our [feedback survey](https://forms.gle/V9ZGssyukjmFR9bk7) or submitting an [issue](https://github.com/ContextLab/chatify/issues).
+We'd love to hear from you 🤩! If you're using Chatify as part of one of the NMA courses, please consider filling out our [feedback survey](https://forms.gle/V9ZGssyukjmFR9bk7) to help us improve and/or better understand the experience. For more general problems, please submit an [issue](https://github.com/ContextLab/chatify/issues). And for other questions, comments, concerns, etc., just send us an [email]([email protected]).
 
 
 # I want to help!
 
-Yay-- welcome 🎉! This is a new project (in the "concept" phase) and we're looking for all the help we can get! If you're new around here and want to explore/contribute, here's how:
+Yay-- welcome 🎉! This is a *very* new project (in the "alpha" phase) and we're looking for all the help we can get! If you're new around here and want to explore/contribute, here's how:
 
 1. [Fork](https://github.com/ContextLab/chatify/fork) this repository so that you can work with your own "copy" of the code base
 2. Take a look at our [Project Board](https://github.com/orgs/ContextLab/projects/3) and/or the list of open [issues](https://github.com/ContextLab/chatify/issues) to get a sense of the current project status, todo list, etc.
````
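Both Hugging Face configurations in the README diff above depend on the optional `chatify[hf]` extras. A quick stand-alone availability check might look like the sketch below; the specific package name `transformers` is an assumption about what those extras install, not something the diff confirms:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if the named top-level package is importable (without importing it)."""
    return importlib.util.find_spec(name) is not None

# "transformers" is a guess at one package `pip install chatify[hf]` would provide
print(has_module("transformers"))
```

A check like this lets a notebook print a friendly "install the [hf] extras" hint instead of failing with an ImportError.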

setup.cfg (+1 -1)

```diff
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 0.2.0
+current_version = 0.2.1
 commit = True
 tag = True
 
```
setup.py (+3 -3)

```diff
@@ -19,8 +19,8 @@
 ]
 
 setup(
-    author="NeuroMatch",
-    author_email='[email protected]',
+    author="Contextual Dynamics Lab",
+    author_email='[email protected]',
     python_requires='>=3.6',
     classifiers=[
         'Development Status :: 2 - Pre-Alpha',
@@ -47,6 +47,6 @@
     tests_require=test_requirements,
     package_data={'': ['**/*.yaml']},
     url='https://github.com/ContextLab/chatify',
-    version='0.2.0',
+    version='0.2.1',
     zip_safe=False,
 )
```
