A local, private interface for running DeepSeek language models on your own machine. All processing happens locally; files and queries never leave your computer.
- Run DeepSeek language models entirely on your local machine
- Upload and process files privately
- Optimized for systems with 8GB GPU memory using 4-bit quantization
- Support for various DeepSeek models including DeepSeek-Coder
- Code-specific formatting with syntax highlighting
1. Clone this repository:

   ```bash
   git clone https://github.com/cschladetsch/deepseek-local.git
   cd deepseek-local
   ```

2. Run the installation script:

   ```bash
   chmod +x install_deepseek.sh
   ./install_deepseek.sh
   ```

3. During installation, you'll be prompted for your Hugging Face token if you want to use gated models.

4. Install additional dependencies for code formatting:

   ```bash
   source venv/bin/activate
   pip install markdown pygments
   ```
1. Clone this repository:

   ```bash
   git clone https://github.com/yourusername/deepseek-local.git
   cd deepseek-local
   ```

2. Create and activate a virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   ```

3. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Download a model from Hugging Face:

   ```bash
   mkdir -p models
   cd models
   git clone https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
   cd ..
   ```
1. Start the local interface:

   ```bash
   ./start_deepseek.sh
   ```

2. To share with other devices on your network:

   ```bash
   ./start_deepseek_network.sh
   ```

3. Access the interface at http://127.0.0.1:7860 in your web browser.
The installation script supports various options:

```bash
./install_deepseek.sh [options]
```

Options:

- `-h, --help`: Show help message
- `-m, --model MODEL_ID`: Specify model ID (e.g., deepseek-ai/deepseek-v2)
- `-l, --list`: List available recommended models
- `-s, --small`: Use smaller models (for systems with less RAM)
- `--no-auth`: Skip Hugging Face authentication
- `--cleanup`: Remove temporary files and fix permissions
- `--uninstall`: Remove the installation completely
The startup script also supports options:

```bash
./start_deepseek.sh [options]
```

Options:

- `-p, --port PORT`: Specify port (default: 7860)
- `-m, --model DIR`: Specify model directory name (default: auto-detect)
- `-h, --help`: Show help
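Internally, a startup script like this typically parses its flags with a `while`/`case` loop. A minimal sketch of how that might look (hypothetical internals; the real script's code is not shown here):

```shell
#!/usr/bin/env bash
# Sketch of the kind of flag parsing start_deepseek.sh might use
# (illustrative only; defaults match the options documented above).
PORT=7860      # default port
MODEL_DIR=""   # empty means auto-detect

while [ $# -gt 0 ]; do
  case "$1" in
    -p|--port)  PORT="$2"; shift 2 ;;
    -m|--model) MODEL_DIR="$2"; shift 2 ;;
    -h|--help)  echo "Usage: $0 [-p PORT] [-m MODEL_DIR]"; exit 0 ;;
    *)          echo "Unknown option: $1" >&2; exit 1 ;;
  esac
done

echo "port=$PORT model=${MODEL_DIR:-auto-detect}"
```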
Models are downloaded from Hugging Face. Some recommended models:
- deepseek-ai/deepseek-v2 (requires auth)
- deepseek-ai/deepseek-coder-33b-instruct (requires auth)
- mistralai/Mistral-7B-Instruct-v0.2
- deepseek-ai/deepseek-coder-6.7b-instruct
- microsoft/phi-3-mini-4k-instruct
- deepseek-ai/deepseek-coder-1.3b-instruct
- microsoft/phi-2
- Ubuntu 20.04+ or Windows WSL2
- Python 3.8+
- 8GB RAM minimum (16GB+ recommended for medium models)
- NVIDIA GPU with 8GB VRAM (for GPU acceleration) or CPU-only mode
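The 8GB VRAM figure lines up with a back-of-the-envelope estimate: a model quantized to 4 bits needs roughly parameters × 0.5 bytes, plus some headroom for activations and the KV cache. A rough helper (the 20% overhead factor is an assumption, not a measured value):

```python
def model_vram_gb(n_params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GiB: parameters * bits/8 bytes,
    padded by ~20% for activations and KV cache (rule of thumb)."""
    return n_params_billion * 1e9 * bits / 8 / 1024**3 * overhead

# deepseek-coder-6.7b in 4-bit: roughly 3.7 GiB, comfortably under 8 GB
print(f"{model_vram_gb(6.7):.1f} GiB")
```

By the same estimate, the 33B model (roughly 18 GiB in 4-bit) would not fit in 8GB of VRAM, which is why the smaller variants are recommended for constrained systems.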
You can customize the interface by editing:

- `style.css`: For UI appearance
- `deepseek_repl.py`: For functionality changes
```
deepseek-local/
├── install_deepseek.sh          # Installation script
├── start_deepseek.sh            # Local startup script
├── start_deepseek_network.sh    # Network-accessible startup script
├── deepseek_repl.py             # Main Python interface
├── style.css                    # CSS styling
├── requirements.txt             # Python dependencies
├── models/                      # Downloaded models
│   └── deepseek-coder-6.7b-instruct/  # Example model
└── venv/                        # Python virtual environment
```
The project requires several Python packages listed in `requirements.txt`. The main dependencies are:
- torch - For deep learning operations
- transformers - For loading and running the models
- gradio - For the web interface
- bitsandbytes - For model quantization
- markdown and pygments - For code formatting
You can install all dependencies using:

```bash
pip install -r requirements.txt
```
All processing happens locally on your machine. Files uploaded to the interface and all queries are processed entirely on your local hardware and are never sent to external servers.
If port 7860 is already in use, specify a different port:

```bash
./start_deepseek.sh --port 7861
```

If you encounter GPU memory errors, try a smaller model:

```bash
./install_deepseek.sh --model deepseek-ai/deepseek-coder-1.3b-instruct
```

If you encounter permission issues:

```bash
./install_deepseek.sh --cleanup
```

If you see errors about missing Python modules:

```bash
source venv/bin/activate
pip install -r requirements.txt
```
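A quick way to see which of the required packages are actually importable from the active virtual environment (the module names below are assumed to match the packages listed in the dependencies section):

```python
import importlib.util

# Module names as imported in Python (assumed to match the required packages).
required = ["torch", "transformers", "gradio", "bitsandbytes", "markdown", "pygments"]
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("All required modules are importable.")
```

Run this inside the activated venv; any names it reports as missing can then be installed with `pip`.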
This project is licensed under the MIT License - see the LICENSE file for details.