This repository contains a powerful coding assistant that integrates with Ollama to process user conversations and generate structured JSON responses. It offers both a command-line interface and a modern Streamlit web UI, letting users read local file contents, create new files, and apply diff edits to existing files in real time. The goal of this fork of DeepSeek Engineer is to reduce dependencies and support any self-hosted model, while adding as little code as possible to achieve that.
## Prerequisites

- Python 3.8 or higher
- Ollama installed and running, with a model pulled (this project uses `qwen2.5-coder:14b`)
## Features

### Dual Interface Support
- Command-line interface for quick interactions
- Modern Streamlit web UI for enhanced visual experience
### Ollama Integration
- Uses a local Ollama instance with the `qwen2.5-coder:14b` model
- Streams responses for real-time interaction
- Structured JSON output for precise code modifications (see the sketch below)
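This is not necessarily the exact code in `main.py`, but a minimal sketch of the pattern, assuming Ollama's default endpoint and its documented `/api/chat` streaming format (newline-delimited JSON chunks):

```python
import json
from typing import List

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default endpoint

def stream_chat(messages: List[dict]) -> str:
    """Stream a chat completion from a local Ollama instance and
    return the fully accumulated reply."""
    payload = {"model": "qwen2.5-coder:14b", "messages": messages, "stream": True}
    reply = ""
    with requests.post(OLLAMA_URL, json=payload, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)  # each streamed line is one JSON object
            piece = chunk.get("message", {}).get("content", "")
            print(piece, end="", flush=True)  # display tokens as they arrive
            reply += piece
            if chunk.get("done"):
                break
    return reply
```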
### Data Models
Leverages Pydantic for type-safe handling of file operations (sketched below), including:

- `FileToCreate` – describes files to be created or updated
- `FileToEdit` – describes specific snippet replacements in an existing file
- `AssistantResponse` – structures chat responses and potential file operations
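A minimal sketch of what these models might look like; the field names here are illustrative assumptions, and the actual definitions live in `main.py`:

```python
from typing import List, Optional

from pydantic import BaseModel

class FileToCreate(BaseModel):
    path: str     # where to create (or overwrite) the file
    content: str  # full file contents

class FileToEdit(BaseModel):
    path: str              # existing file to modify
    original_snippet: str  # exact snippet to find
    new_snippet: str       # replacement text

class AssistantResponse(BaseModel):
    assistant_reply: str
    files_to_create: Optional[List[FileToCreate]] = None
    files_to_edit: Optional[List[FileToEdit]] = None
```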
### System Prompt
A comprehensive system prompt guides the conversation, ensuring every reply strictly adheres to a JSON output format with optional file creations or edits.
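Concretely, a reply is expected to look something like the following (a hedged example; field names assumed to match the Pydantic models above):

```json
{
  "assistant_reply": "I added a docstring to main().",
  "files_to_create": [],
  "files_to_edit": [
    {
      "path": "main.py",
      "original_snippet": "def main():",
      "new_snippet": "def main():\n    \"\"\"Entry point.\"\"\""
    }
  ]
}
```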
### Helper Functions
- `read_local_file`: reads a target filesystem path and returns its content as a string
- `create_file`: creates or overwrites a file with the provided content
- `show_diff_table`: presents proposed file changes in a clear, readable format
- `apply_diff_edit`: applies snippet-level modifications to existing files (sketched below)
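Hedged sketches of three of these helpers; the real versions in `main.py` may add error handling and path normalization:

```python
from pathlib import Path

def read_local_file(file_path: str) -> str:
    """Return the contents of a file as a string."""
    return Path(file_path).read_text(encoding="utf-8")

def create_file(file_path: str, content: str) -> None:
    """Create or overwrite a file, creating parent directories as needed."""
    path = Path(file_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content, encoding="utf-8")

def apply_diff_edit(file_path: str, original_snippet: str, new_snippet: str) -> None:
    """Replace an exact snippet in an existing file (assumes it occurs once)."""
    content = read_local_file(file_path)
    if original_snippet not in content:
        raise ValueError(f"Snippet not found in {file_path}")
    create_file(file_path, content.replace(original_snippet, new_snippet, 1))
```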
### File Management
- Command-line: use `/add path/to/file` to quickly read a file's content
- Streamlit UI: drag-and-drop file upload with syntax-highlighted preview
- All files are organized in session-specific folders under `tmp/`, as sketched below
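For example, session folders could be created like this (a sketch; the actual naming scheme in the app may differ):

```python
import uuid
from pathlib import Path

# Each UI session gets its own scratch folder under tmp/.
session_dir = Path("tmp") / uuid.uuid4().hex
session_dir.mkdir(parents=True, exist_ok=True)

# Uploaded files are then written inside that session folder.
(session_dir / "example.py").write_text("print('hello')", encoding="utf-8")
```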
### Conversation Flow
- Maintains conversation history to track messages between user and assistant
- Streams the assistant's replies via Ollama, parsing them as JSON
- Visual diff previews for code changes
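Putting the pieces together, the core loop might look like this: a sketch reusing `stream_chat` and `AssistantResponse` from above, and assuming a `SYSTEM_PROMPT` string defined elsewhere:

```python
# SYSTEM_PROMPT is the comprehensive prompt described earlier (assumed defined).
conversation = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    user_input = input("You> ")
    conversation.append({"role": "user", "content": user_input})
    raw_reply = stream_chat(conversation)              # streamed, then accumulated
    response = AssistantResponse.parse_raw(raw_reply)  # parse the JSON reply
    conversation.append({"role": "assistant", "content": raw_reply})
    print(response.assistant_reply)
```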
## Streamlit Web UI

The Streamlit interface provides a modern, user-friendly way to interact with Ollama Engineer.
### Starting the UI

```bash
streamlit run streamlit_app.py
```
### Features
- File Upload: Drag and drop files directly into the UI
- Chat Interface: Natural conversation with syntax-highlighted code
- Visual Diff: Side-by-side comparison of code changes
- File Management: Browse and preview uploaded files in the sidebar
- Session Management: Reset conversation and start fresh anytime
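A minimal sketch of the chat and upload plumbing using standard Streamlit primitives; the real `streamlit_app.py` also wires these to the Ollama backend and adds diff previews:

```python
import streamlit as st

st.title("Ollama Engineer")

# Keep chat history across Streamlit reruns in session state.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Sidebar file upload with a syntax-highlighted preview.
uploaded = st.sidebar.file_uploader("Add a file to the conversation")
if uploaded is not None:
    st.sidebar.code(uploaded.getvalue().decode("utf-8"))

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

# Accept new input and append it to the history.
if prompt := st.chat_input("Ask the assistant..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
```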
### Advantages
- More intuitive file handling with visual feedback
- Better code visualization with syntax highlighting
- Easy approval/rejection of proposed changes
- Persistent chat history within the session
- Mobile-friendly responsive design
## Installation

1. Install Ollama from https://ollama.ai and pull the qwen2.5-coder model:

   ```bash
   ollama pull qwen2.5-coder:14b
   ```
2. Clone the repository:

   ```bash
   git clone https://github.com/dustinwloring1988/ollama-engineer.git
   cd ollama-engineer
   ```
3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
4. Start the Ollama server (if it is not already running):

   ```bash
   ollama serve
   ```
5. Run the application:

   ```bash
   python main.py
   ```
Enjoy multi-line streaming responses, file read-ins with `/add path/to/file`, and precise file edits once you approve them.
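The `/add` command can be handled with a simple prefix check; a sketch, assuming the `read_local_file` helper and `conversation` list shown earlier:

```python
if user_input.startswith("/add "):
    path = user_input[len("/add "):].strip()
    content = read_local_file(path)
    # Inject the file contents into the conversation as context for the model.
    conversation.append(
        {"role": "system", "content": f"Content of {path}:\n\n{content}"}
    )
```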
## Development Setup

1. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
2. Install development dependencies:

   ```bash
   pip install -r requirements.txt
   ```
3. Install pre-commit hooks (optional):

   ```bash
   pip install pre-commit
   pre-commit install
   ```
## Project Structure

```
ollama-engineer/
├── main.py            # Main application file
├── requirements.txt   # Project dependencies
├── README.md          # Project documentation
└── .gitignore         # Git ignore rules
```
## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Troubleshooting

### Ollama Connection Issues

- Ensure Ollama is running (`ollama serve`)
- Check that the default port (11434) is available
- Verify your firewall settings
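A quick way to confirm the server is reachable on the default port, using Ollama's documented `/api/tags` endpoint:

```python
import requests

# Lists the locally available models if Ollama is up; raises on connection failure.
print(requests.get("http://localhost:11434/api/tags", timeout=5).json())
```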
### Model Issues

- Try re-pulling the model: `ollama pull qwen2.5-coder:14b`
- Check the Ollama logs for errors
### Python Environment Issues
- Ensure you're using Python 3.8+
- Try recreating your virtual environment
- Verify all dependencies are installed
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- The original DeepSeek Engineer project for inspiration
- The Ollama team for providing local LLM capabilities
- The Qwen team for the excellent code-focused model
Note: This is a modified version of the original DeepSeek Engineer project, adapted to work with Ollama and the qwen2.5-coder model locally. It provides similar capabilities without requiring API keys or external services.