This app provides a starter frontend for Ollama servers, which expose a REST API interface. It includes chat history and the ability to customize Modelfile settings for creating personas.
(Click below for a video preview)
Prerequisites:

- Ollama installation accessible via HTTP(S).
- This repo, configured.
Install and run Ollama (Docker):

- Pull the image:

  ```bash
  docker pull ollama/ollama
  ```

- Run the container
  - CPU-only:

    ```bash
    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    ```

  - Nvidia GPUs:

    ```bash
    docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    ```
- Choose and run a model:

  ```bash
  docker exec -it ollama ollama run llama3.2
  ```

- (optional) Verify the list of available models at http://localhost:11434/api/tags (host and port may vary); a script version is sketched below this list.
- Different models can be chosen from this list: [Ollama Library](https://ollama.com/library)
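If you prefer to check from code rather than a browser, a small script can query the same endpoint. This is a minimal sketch, not code from this repo, and it assumes Ollama's default host and port; adjust the URL if yours differ:

```typescript
// Sketch only: list the models an Ollama server reports via GET /api/tags.
// Assumes the default host/port; change OLLAMA_URL if your setup differs.
const OLLAMA_URL = "http://localhost:11434";

async function listModels(): Promise<void> {
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data: { models: { name: string }[] } = await res.json();
  for (const model of data.models) {
    console.log(model.name); // e.g. "llama3.2:latest"
  }
}

listModels().catch(console.error);
```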
Install and run this repo:

- Install required tools
  - Node 22+, which includes corepack
  - Make sure corepack is enabled (corepack ships with Node 18+ and manages yarn):

    ```bash
    corepack enable
    ```

    Note: It's no longer necessary to install yarn by itself from npm. Node bundles corepack, and corepack installs and manages yarn.
- Clone this repo:

  ```bash
  git clone git@github.com:user27828/AgentOne.git
  cd AgentOne
  ```

- Create and set up the root `.env` file. Ensure the following variables are defined, with your own settings (the sketch after this install list shows how they are typically consumed):

  ```
  VITE_API_HOST='http://localhost'
  VITE_API_PORT=3001
  OLLAMA_API_URL='http://localhost:11434'
  ```
- Install dependencies (`yarn`) and run the app with one of the commands below (e.g. `yarn dev`).
- Access the URL shown in the console.
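As a reference for how the `.env` values flow into the app, here is a minimal server-side sketch. It assumes the variables are loaded into `process.env` (for example via `dotenv`); that loader and the exact wiring are assumptions, not necessarily what this repo does:

```typescript
// Sketch only (not this repo's actual code): read the root .env values server-side.
// Assumes a loader such as dotenv populates process.env from the .env file.
import "dotenv/config";

const OLLAMA_API_URL = process.env.OLLAMA_API_URL ?? "http://localhost:11434";
const API_PORT = Number(process.env.VITE_API_PORT ?? 3001);

// Quick connectivity check: Ollama answers "Ollama is running" at its root URL.
async function checkOllama(): Promise<void> {
  const res = await fetch(OLLAMA_API_URL);
  console.log(`Ollama at ${OLLAMA_API_URL}: ${res.status} ${await res.text()}`);
}

checkOllama().then(() => console.log(`API server would listen on port ${API_PORT}`));
```

On the client side, Vite only exposes variables prefixed with `VITE_` to the browser bundle (e.g. via `import.meta.env.VITE_API_HOST`), which is why the host and port variables carry that prefix.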
Commands:

- `yarn dev` - Run the client and server in dev mode (hot reloading, transpile/interpret on save, etc.)
- `yarn build` - Transpile the TypeScript files and other files into a distributable package in `client/dist/*` and `server/dist/*`
- `yarn start` - Run the built files. This assumes you previously ran `yarn build`. Currently, vite just serves a dev instance; I will circle back to this and make express serve the bundle for local validation purposes (see the sketch below).
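For reference, one way express could serve the built client bundle, as mentioned above. This is a sketch of that planned change, not current behavior; it assumes the process runs from the repo root, that `client/dist` holds the build output from `yarn build`, and that the port comes from the `.env` described earlier:

```typescript
// Sketch of the planned "express serves the built bundle" step (not current behavior).
// Assumes it runs from the repo root and client assets were built into client/dist.
import path from "node:path";
import express from "express";

const app = express();
const PORT = Number(process.env.VITE_API_PORT ?? 3001);

// Serve the transpiled client bundle as static files.
app.use(express.static(path.resolve("client/dist")));

// Fall back to index.html for any other route, so client-side routing keeps working.
app.use((_req, res) => {
  res.sendFile(path.resolve("client/dist", "index.html"));
});

app.listen(PORT, () => console.log(`Serving client/dist on http://localhost:${PORT}`));
```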
TODO:

- Not profitable
- Add more functionality offered by the Ollama API.
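For reference, the kind of call the Ollama REST API supports looks like the following. This is a minimal sketch, not code from this repo; it assumes Ollama at the default address and the `llama3.2` model pulled earlier, and only illustrates the API the frontend builds on:

```typescript
// Sketch only: one non-streaming chat round-trip against Ollama's /api/chat endpoint.
// Assumes Ollama at the default address and the llama3.2 model already pulled.
async function chatOnce(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      messages: [{ role: "user", content: prompt }],
      stream: false, // ask for a single JSON response instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.message.content; // the assistant's reply text
}

chatOnce("Say hello in one sentence.").then(console.log).catch(console.error);
```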
[![Sample video](https://img.youtube.com/vi/vVfMWTNXFLo/0.jpg)](https://www.youtube.com/watch?v=vVfMWTNXFLo)