Start by downloading the Ollama application from the official website. Once installed, Ollama runs a local server at: http://localhost:11434
Explore the various models available in the Ollama library. To run a model, use the following command:
ollama run deepseek-coder
Recommended Models:
- deepseek-coder
- llama3
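Besides the CLI, the local Ollama server also answers REST requests on port 11434. The sketch below is illustrative only: it builds the URL and JSON body for Ollama's `/api/generate` endpoint, and the final call assumes a running server with the `deepseek-coder` model already pulled.

```python
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # default address from the step above

def build_generate_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for Ollama's /api/generate endpoint."""
    url = f"{OLLAMA_BASE}/api/generate"
    # stream=False asks for a single JSON response instead of a token stream.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return url, body

if __name__ == "__main__":
    url, body = build_generate_request("deepseek-coder", "Write a hello world in Python.")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    # Requires Ollama to be running locally; otherwise this raises URLError.
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

The same payload works for any model listed above; swap the `model` field for `llama3` to query it instead.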
Visit the Ngrok getting-started guide for installation instructions, and follow steps 1 to 3 to set up Ngrok on your machine.
Once Ngrok is installed, run the following command to expose your local Ollama server:
ngrok http 11434 --host-header="localhost:11434"
This command prints a public URL that forwards traffic to your local Ollama instance.
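Once the tunnel is up, the same Ollama API is reachable through that public URL. As a quick sanity check, you can hit the `/api/tags` endpoint, which lists the models you have pulled. In this sketch the `https://example.ngrok-free.app` address is a placeholder; substitute the URL your own ngrok session printed.

```python
import json
import urllib.request

def tags_url(base_url: str) -> str:
    """Return the /api/tags endpoint (lists pulled models) for a given base URL."""
    return base_url.rstrip("/") + "/api/tags"

if __name__ == "__main__":
    # Placeholder URL: replace with the public URL from your ngrok output.
    public = "https://example.ngrok-free.app"
    # Requires the tunnel and the local Ollama server to both be running.
    with urllib.request.urlopen(tags_url(public)) as resp:
        models = [m["name"] for m in json.loads(resp.read())["models"]]
        print(models)
```

If this lists your models, the tunnel is working and the URL is ready to paste into the next step.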
Access the Codebase AI interface in your browser.
- Paste the Ngrok public URL into the sidebar.
- Enjoy exploring the capabilities of Ollama!
Feel free to customize this setup to fit your needs.