Last Updated: 2025-10-14 | Version: 1.0.0 | Status: Production Ready
This directory contains configuration files for running Accelerapp in air-gapped (offline) environments.
Accelerapp can operate entirely offline using local LLM models and autonomous agent collaboration. This enables secure, air-gapped code generation for sensitive environments without any external network dependencies.
- Ollama Support: Run code generation models locally
- Multiple Backends: Support for Ollama, LocalAI, and llama.cpp
- Model Management: Download and manage models offline
- Fallback System: Automatic fallback to alternative models
- Message Bus: Internal pub/sub messaging without external dependencies
- Coordination: Central agent coordinator for task distribution
- Shared Context: Thread-safe shared state across agents
- Collaboration Protocols: Predefined interaction patterns
- Knowledge Base: Local vector database for code patterns
- Template System: Versioned code templates with optimization
- Pattern Learning: Analyze generated code for improvements
- Offline Docs: Searchable local documentation
- Access Control: Role-based permissions
- Audit Logging: Complete operation logs for compliance
- Encryption: Secure storage for sensitive data
- Network Isolation: Enforced offline operation
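The automatic model fallback listed above amounts to trying each configured backend in order and returning the first success. A minimal sketch of that pattern (the `generate_with_fallback` helper and backend interface here are hypothetical, not Accelerapp's actual API):

```python
from typing import Callable

def generate_with_fallback(
    prompt: str,
    backends: list[tuple[str, Callable[[str], str]]],
) -> str:
    """Try each (model_name, generate_fn) pair in order; return the first success."""
    errors = []
    for name, generate in backends:
        try:
            return generate(prompt)
        except Exception as exc:  # a real implementation would catch narrower errors
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All backends failed: " + "; ".join(errors))
```

The key design point is that a failure is recorded rather than raised immediately, so the error message names every model that was tried when all of them fail.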
Edit `config/airgap/settings.yaml` to customize your deployment:

```yaml
airgap:
  enabled: true
  offline_mode: true
  strict_mode: false

llm:
  backend: ollama
  base_url: http://localhost:11434
  default_model: codellama:7b
```

Override settings with environment variables:
```bash
export ACCELERAPP_AIRGAP_ENABLED=true
export ACCELERAPP_LLM_BACKEND=ollama
export ACCELERAPP_LLM_MODEL=codellama:7b
```

```bash
# See deployment/README.md for installation options
sudo bash deployment/install/install-airgap.sh
```

```bash
# Install Ollama (on a connected system first)
curl https://ollama.ai/install.sh | sh

# Pull required models
ollama pull codellama:7b
ollama pull llama2:7b

# For air-gapped: copy ~/.ollama/models to the target system
```

```bash
# Copy air-gap config
cp config/airgap/settings.yaml ~/.accelerapp/config.yaml

# Edit as needed
vim ~/.accelerapp/config.yaml
```

```bash
# Check health
accelerapp info

# Run health check
python deployment/monitoring/health_check.py
```

```bash
# Create project config
accelerapp init mydevice.yaml

# Generate firmware, software, and UI
accelerapp generate mydevice.yaml --output ./output
```

```
┌─────────────────────────────────────────────────────┐
│                  Accelerapp Core                    │
├─────────────────────────────────────────────────────┤
│    CLI    │   Core Orchestration   │   Generators   │
└──┬───────────────┬───────────────────────┬──────────┘
   │               │                       │
┌──▼───────────┐ ┌─▼──────────────┐ ┌─────▼────────┐
│  LLM Module  │ │ Communication  │ │  Knowledge   │
│              │ │                │ │              │
│ • Ollama     │ │ • Message Bus  │ │ • Templates  │
│ • LocalAI    │ │ • Coordinator  │ │ • Patterns   │
│ • llama.cpp  │ │ • Shared Ctx   │ │ • Docs       │
└──────────────┘ └────────────────┘ └──────────────┘
```
- User Request → CLI parses config
- Core Orchestration → Routes to generators
- Generator → Uses LLM service for code generation
- LLM Service → Calls local Ollama/LocalAI
- Communication → Agents coordinate via message bus
- Knowledge → Templates and patterns enhance output
- Output → Generated firmware/software/UI code
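The internal pub/sub bus that agents coordinate over can be sketched as a thread-safe topic-to-subscribers map. This is a hypothetical minimal version, not the real Communication module's API:

```python
import threading
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    """Minimal in-process pub/sub: no external broker required."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        with self._lock:
            self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> int:
        """Deliver message to all handlers for topic; return the delivery count."""
        with self._lock:
            handlers = list(self._subscribers.get(topic, []))
        for handler in handlers:  # deliver outside the lock to avoid deadlocks
            handler(message)
        return len(handlers)
```

Copying the handler list before delivering is what lets handlers themselves subscribe or publish without re-entering the lock.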
Configure different models for different tasks:

```yaml
llm:
  models:
    firmware:
      model: codellama:7b
      temperature: 0.5
    software:
      model: codellama:13b
      temperature: 0.7
    ui:
      model: llama2:7b
      temperature: 0.8
```

Add custom code templates:

```python
from accelerapp.knowledge import TemplateManager, Template, TemplateCategory

tm = TemplateManager()
template = Template(
    id="custom-firmware",
    name="Custom Firmware Template",
    category=TemplateCategory.FIRMWARE,
    content="void setup() { {{init_code}} }",
    variables=["init_code"],
)
tm.add_template(template)
```

Enable pattern learning for continuous improvement:

```yaml
knowledge:
  patterns:
    enable_learning: true
    min_frequency: 3
```
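The `min_frequency` setting means a recurring snippet is only promoted to a reusable pattern once it has been observed that many times. A rough sketch of the idea (hypothetical, not the real learner):

```python
from collections import Counter

class PatternLearner:
    """Promote snippets to patterns once seen min_frequency times."""

    def __init__(self, min_frequency: int = 3) -> None:
        self.min_frequency = min_frequency
        self._counts: Counter = Counter()

    def observe(self, snippet: str) -> None:
        """Record one occurrence of a generated code snippet."""
        self._counts[snippet] += 1

    def learned_patterns(self) -> list[str]:
        """Return snippets that crossed the frequency threshold."""
        return [s for s, n in self._counts.items() if n >= self.min_frequency]
```

Raising `min_frequency` trades recall for precision: fewer one-off snippets pollute the knowledge base, at the cost of learning genuine patterns more slowly.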
- Ollama Connection Failed

  ```bash
  # Check if Ollama is running
  curl http://localhost:11434/api/tags

  # Restart Ollama
  systemctl restart ollama
  ```

- Model Not Found

  ```bash
  # List available models
  ollama list

  # Pull the missing model
  ollama pull codellama:7b
  ```

- Permission Denied

  ```bash
  # Fix directory permissions
  sudo chown -R $USER:$USER ~/.accelerapp
  ```

- Out of Memory
  - Use smaller models (7B instead of 13B/34B)
  - Reduce `max_tokens` in the config
  - Add swap space if needed
```bash
# System health check
python deployment/monitoring/health_check.py

# Check disk space
df -h ~/.accelerapp

# View logs
tail -f ~/.accelerapp/logs/accelerapp.log

# Test the LLM connection
curl http://localhost:11434/api/generate -d '{
  "model": "codellama:7b",
  "prompt": "Write a hello world in C"
}'
```
- Firewall Configuration

  ```bash
  # Block all outbound traffic except local networks
  sudo ufw deny out
  sudo ufw allow out to 127.0.0.1
  sudo ufw allow out to 172.16.0.0/12
  ```

- DNS Configuration
  - Use only internal DNS
  - Disable public DNS servers

- Air-Gap Verification

  ```bash
  # Test network isolation
  ping -c 1 8.8.8.8        # Should fail
  curl https://google.com  # Should fail
  ```
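The spot checks above can also be scripted. A small Python helper (hypothetical, and assuming no transparent proxy is in place) that should return `False` for every external host on a correctly isolated system:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

# On an air-gapped system, external checks should all fail:
# for host in ("8.8.8.8", "1.1.1.1"):
#     assert not can_connect(host, 53), f"isolation leak: reached {host}"
```

Note that a TCP check complements but does not replace the firewall rules: it verifies observed behavior, while `ufw` enforces the policy.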
Enable authentication and RBAC:

```yaml
security:
  authentication:
    enabled: true
    method: local
  access_control:
    enabled: true
    default_role: developer
```

Review audit logs regularly:

```bash
# View recent security events
tail -f ~/.accelerapp/logs/audit.log

# Search for failed access attempts
grep "success.*false" ~/.accelerapp/logs/audit.log
```

| Component | Minimum | Recommended | Optimal |
|---|---|---|---|
| CPU | 4 cores | 8 cores | 16+ cores |
| RAM | 8GB | 16GB | 32GB+ |
| Storage | 50GB SSD | 100GB SSD | 500GB+ NVMe |
| GPU | None | 8GB VRAM | 16GB+ VRAM |
Choose models based on your hardware:
- 7B models: Entry-level, 8GB RAM minimum
- 13B models: Mid-range, 16GB RAM recommended
- 34B models: High-end, 32GB+ RAM + GPU
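That rule of thumb can be encoded as a small helper; the thresholds below simply restate the list above, and the function itself is hypothetical (not part of Accelerapp's API):

```python
def pick_model_size(ram_gb: int, has_gpu: bool = False) -> str:
    """Map available RAM (and GPU presence) to a model size tier."""
    if ram_gb >= 32 and has_gpu:
        return "34b"   # high-end: 32GB+ RAM plus a GPU
    if ram_gb >= 16:
        return "13b"   # mid-range
    if ram_gb >= 8:
        return "7b"    # entry-level
    raise ValueError("At least 8GB RAM is required to run a 7B model")
```

For example, a 64GB machine without a GPU still lands on a 13B model, since the 34B tier assumes GPU acceleration.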
Enable caching for faster repeated operations:

```yaml
performance:
  enable_cache: true
  cache_ttl: 3600
```

```bash
# Backup models
tar czf models-backup.tar.gz ~/.accelerapp/models/

# Backup knowledge base
tar czf knowledge-backup.tar.gz ~/.accelerapp/knowledge/

# Backup templates
tar czf templates-backup.tar.gz ~/.accelerapp/templates/

# Backup configurations
tar czf config-backup.tar.gz ~/.accelerapp/config/
```

```bash
# Restore from backup
tar xzf models-backup.tar.gz -C ~/
tar xzf knowledge-backup.tar.gz -C ~/
tar xzf templates-backup.tar.gz -C ~/
tar xzf config-backup.tar.gz -C ~/
```

Automated health monitoring:

```bash
# Run health check
python deployment/monitoring/health_check.py

# Check exit code
echo $?   # 0=healthy, 1=unhealthy, 2=degraded
```

Key metrics to monitor:
- LLM response time
- Agent task completion rate
- Memory usage
- Disk space
- Cache hit rate
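The cache hit rate depends directly on `cache_ttl` from the performance settings: entries expire that many seconds after being stored. A minimal TTL cache sketch (hypothetical, not Accelerapp's implementation) with an injectable clock so expiry can be tested deterministically:

```python
import time

class TTLCache:
    """Expire entries ttl seconds after they were stored."""

    def __init__(self, ttl: float = 3600.0, clock=time.monotonic) -> None:
        self.ttl = ttl
        self._clock = clock  # injectable for testing
        self._store: dict = {}

    def put(self, key, value) -> None:
        self._store[key] = (self._clock(), value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        stored_at, value = entry
        if self._clock() - stored_at > self.ttl:
            del self._store[key]  # lazily evict the expired entry
            return default
        return value
```

A longer TTL improves the hit rate for repeated generations but risks serving stale results after templates or models change.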
For issues and questions:
- Documentation: See main README.md
- GitHub Issues: https://github.com/thewriterben/Accelerapp/issues
- Deployment Guide: See deployment/README.md
- Security: See SECURITY.md
MIT License - See LICENSE file in project root