Configure Renovate #1
Closed
ctxadm added a commit to ctxadm/python-agents-examples that referenced this pull request on Jul 29, 2025:
Note: check whether this is useful if further hallucinations occur!

**The Modelfile and its effects**

The Modelfile in detail:

```dockerfile
FROM llama3.2:latest
PARAMETER temperature 0.0
PARAMETER top_k 10
PARAMETER top_p 0.1
PARAMETER repeat_penalty 1.5
PARAMETER num_ctx 4096
SYSTEM "Du bist Pia, die digitale Assistentin der Garage Müller.
ANTWORTE NUR AUF DEUTSCH. WICHTIG: Erfinde NIEMALS Informationen.
Wenn du unsicher bist, sage 'Ich bin mir nicht sicher'.
Basiere deine Antworten IMMER auf den Daten, die dir gegeben werden."
```

How the Modelfile takes effect:

**1. System prompt integration**

The SYSTEM command is permanently embedded into every conversation:

```
[SYSTEM]: Du bist Pia... Erfinde NIEMALS Informationen...
[USER]: Meine Fahrzeug-ID ist F004
[ASSISTANT]: [answer based on the system prompt]
```

**2. Prioritization of the instructions**

```python
# Hierarchy of instructions:
# 1. Modelfile SYSTEM prompt (highest priority)
# 2. Agent instructions in the code
# 3. User input
#
# This means: if the Modelfile says "never invent" and the agent says
# "be creative", the model does not invent.
```

**3. Practical effects**

WITHOUT Modelfile optimization:

> User: "Was ist mit meinem Auto?"
> LLM: "Ihr BMW 320d hat folgende Probleme..." (hallucination: invents a BMW instead of a Mercedes)

WITH Modelfile optimization:

> User: "Was ist mit meinem Auto?"
> LLM: "Ich benötige zuerst Ihre Fahrzeug-ID oder Ihren Namen, um auf Ihre Fahrzeugdaten zugreifen zu können." (no invention)

**4. Interplay of code and Modelfile**

```python
# In the code (agent instructions):
instructions = """You are Pia... NEVER invent data..."""

# In the Modelfile (system prompt):
# SYSTEM "Du bist Pia... Erfinde NIEMALS Informationen..."
```

Result: double reinforcement! The Modelfile acts as a safety net: even if the agent instructions fail, the Modelfile system prompt still applies and reduces hallucinations on several levels.
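The layering described above can be sketched as plain message assembly. This is a minimal illustration, not the Ollama or LiveKit API: `build_messages` and `MODELFILE_SYSTEM` are hypothetical names, and the point is only that the Modelfile's system prompt always comes first in the context the model sees.

```python
# Sketch: how a Modelfile SYSTEM prompt is effectively prepended to every
# conversation turn. Names here are illustrative, not a real API.

MODELFILE_SYSTEM = (
    "Du bist Pia, die digitale Assistentin der Garage Müller. "
    "Erfinde NIEMALS Informationen."
)

def build_messages(agent_instructions: str, history: list[dict]) -> list[dict]:
    """Assemble the message list in priority order: Modelfile SYSTEM prompt
    first, then agent instructions from the code, then the user turns."""
    return (
        [{"role": "system", "content": MODELFILE_SYSTEM}]
        + [{"role": "system", "content": agent_instructions}]
        + history
    )

messages = build_messages(
    "You are Pia... NEVER invent data...",
    [{"role": "user", "content": "Meine Fahrzeug-ID ist F004"}],
)
assert messages[0]["content"] == MODELFILE_SYSTEM  # highest priority comes first
```

Because the Modelfile prompt sits at the top of the stack, it constrains everything below it, which is why it acts as the safety net described above.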
**5. Performance comparison**

| Parameter | Standard Llama 3.2 | Optimized Modelfile | Effect |
| --- | --- | --- | --- |
| Hallucination rate | 15–20% | 3–5% | −75% errors |
| Response time | 1–2 s | 0.8–1.5 s | Faster due to the limited token budget |
| Consistency | Variable | Very high | Same questions = same answers |
| Naturalness | High | Medium | Trade-off for accuracy |

**6. Best-practice recommendation**

Create different Modelfiles for different scenarios:

```bash
# For critical customer data (highest accuracy)
ollama create llama3.2-garage-critical -f Modelfile-critical

# For general conversation (balanced)
ollama create llama3.2-garage-balanced -f Modelfile-balanced

# For small talk (more naturalness)
ollama create llama3.2-garage-casual -f Modelfile-casual
```

Then in the code:

```python
# Depending on the conversation phase:
if context.conversation_state == ConversationState.SEARCHING:
    model = "llama3.2-garage-critical"   # highest accuracy
else:
    model = "llama3.2-garage-balanced"   # normal conversation
```

These optimizations drastically reduce hallucinations while preserving the agent's functionality!

**Question: Where does the Modelfile have to be placed?**

Here is how to use the Modelfile with Ollama:

📁 Modelfile placement and usage

**1. The Modelfile is NOT meant to be stored permanently**

The Modelfile is not placed in a particular directory; it is used directly with Ollama:

```bash
# Step 1: create the Modelfile (temporary)
cat > Modelfile << 'EOF'
FROM llama3.2:latest
PARAMETER temperature 0.0
PARAMETER top_k 10
PARAMETER top_p 0.1
PARAMETER repeat_penalty 1.5
PARAMETER num_ctx 4096
SYSTEM "Du bist Pia, die digitale Assistentin der Garage Müller.
ANTWORTE NUR AUF DEUTSCH. WICHTIG: Erfinde NIEMALS Informationen.
Wenn du unsicher bist, sage 'Ich bin mir nicht sicher'.
Basiere deine Antworten IMMER auf den Daten, die dir gegeben werden."
EOF

# Step 2: create the model with Ollama
ollama create llama3.2-garage -f Modelfile

# Step 3: the Modelfile can be deleted (optional)
rm Modelfile
```
**2. Where Ollama stores the models**

Ollama stores the created models here:

Linux/WSL:

```bash
~/.ollama/models/
# Full path: /home/username/.ollama/models/
```

macOS:

```bash
~/.ollama/models/
# Full path: /Users/username/.ollama/models/
```

Windows:

```
C:\Users\%USERNAME%\.ollama\models\
```

Docker:

```bash
/root/.ollama/models/          # inside the container
# Or mounted: ./ollama-data:/root/.ollama
```

**3. Usage in the agent code**

After creating the model, change the code:

```python
# Before:
llm = openai.LLM.with_ollama(
    model="llama3.2:latest",  # standard model
    base_url=os.getenv("OLLAMA_URL", "http://172.16.0.146:11434/v1"),
    ...
)

# After:
llm = openai.LLM.with_ollama(
    model="llama3.2-garage",  # your customized model
    base_url=os.getenv("OLLAMA_URL", "http://172.16.0.146:11434/v1"),
    ...
)
```

**4. Practical example: the full workflow**

```bash
# 1. SSH into your Ollama server
ssh [email protected]

# 2. Create the Modelfile
nano garage-modelfile.txt

# 3. Paste the content (same Modelfile as above):
#    FROM llama3.2:latest
#    PARAMETER temperature 0.0
#    PARAMETER top_k 10
#    PARAMETER top_p 0.1
#    PARAMETER repeat_penalty 1.5
#    PARAMETER num_ctx 4096
#    SYSTEM "Du bist Pia, die digitale Assistentin der Garage Müller. ..."

# 4. Create the model
ollama create llama3.2-garage -f garage-modelfile.txt

# 5. Check the available models
ollama list
# The output should show:
# NAME              ID            SIZE    MODIFIED
# llama3.2-garage   abc123def456  2.0 GB  2 minutes ago
# llama3.2:latest   xyz789ghi012  2.0 GB  1 week ago

# 6. Test the model
ollama run llama3.2-garage "Hallo, wer bist du?"

# 7. Delete the Modelfile (optional)
rm garage-modelfile.txt
```
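Instead of eyeballing the `ollama list` output, a startup script can also verify the custom model programmatically. The sketch below only shows the parsing step: it takes a dict shaped like the JSON that Ollama's `GET /api/tags` endpoint returns and checks for the model name. `model_available` is a hypothetical helper, and note that Ollama appends a `:latest` tag to untagged model names.

```python
import json

def model_available(tags_response: dict, name: str) -> bool:
    """Return True if `name` (with or without an explicit tag) appears
    in a payload shaped like Ollama's GET /api/tags response."""
    wanted = name if ":" in name else name + ":latest"
    return any(m.get("name") == wanted for m in tags_response.get("models", []))

# Example payload shaped like the /api/tags response:
sample = json.loads(
    '{"models": [{"name": "llama3.2-garage:latest"}, {"name": "llama3.2:latest"}]}'
)
print(model_available(sample, "llama3.2-garage"))  # True
```

In a deployment script you would fetch the real payload from `http://<ollama-host>:11434/api/tags` and fail fast if the custom model is missing.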
**5. Docker Compose setup**

If you run Ollama in Docker:

```yaml
version: '3.8'
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ./ollama-data:/root/.ollama
      - ./modelfiles:/modelfiles   # for Modelfiles
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  # Model initialization
  ollama-setup:
    image: ollama/ollama:latest
    depends_on:
      - ollama
    volumes:
      - ./modelfiles:/modelfiles
    command: |
      sh -c "
      sleep 10
      ollama create llama3.2-garage -f /modelfiles/garage.modelfile
      echo 'Model created successfully'
      "
```
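The `sleep 10` in the setup container is a fixed guess at how long Ollama needs to boot. A more robust pattern is to poll the server's HTTP endpoint until it answers before running `ollama create`. Here is a minimal sketch of such a wait loop; `wait_for_ollama` is a hypothetical helper, and the base URL is an assumption you would replace with your own host.

```python
import time
import urllib.error
import urllib.request

def wait_for_ollama(base_url: str = "http://localhost:11434",
                    timeout_s: float = 60.0) -> bool:
    """Poll the Ollama HTTP endpoint until it responds, instead of a fixed
    sleep. Returns True once a 200 response arrives, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(base_url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry
        time.sleep(1)
    return False
```

A setup container could run this script first and only invoke `ollama create llama3.2-garage -f /modelfiles/garage.modelfile` after it returns True, failing the deployment cleanly on timeout.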
Renovate is disabled because there is no Renovate configuration file. To enable Renovate, you can either (a) change this PR's title to get a new onboarding PR, and merge the new onboarding PR, or (b) create a Renovate config file, and commit that file to your base branch.
Coming soon: The Renovate bot (GitHub App) will be renamed to Mend. PRs from Renovate will soon appear from 'Mend'. Learn more here.
Welcome to Renovate! This is an onboarding PR to help you understand and configure settings before regular Pull Requests begin.
🚦 To activate Renovate, merge this Pull Request. To disable Renovate, simply close this Pull Request unmerged.
Detected Package Files
github-actions:

- avatars/hedra/education_avatar/education-frontend/.github/workflows/build-and-test.yaml
- avatars/hedra/education_avatar/education-frontend/.github/workflows/sync-to-production.yaml
- avatars/tavus/voice-assistant-frontend/.github/workflows/build-and-test.yaml
- avatars/tavus/voice-assistant-frontend/.github/workflows/sync-to-production.yaml
- base-frontend-template/.github/workflows/build-and-test.yaml
- base-frontend-template/.github/workflows/sync-to-production.yaml
- complex-agents/drive-thru/frontend/.github/workflows/build-and-test.yaml
- complex-agents/drive-thru/frontend/.github/workflows/sync-to-production.yaml
- complex-agents/note-taking-assistant/note-taker-frontend/.github/workflows/build-and-test.yaml
- complex-agents/note-taking-assistant/note-taker-frontend/.github/workflows/sync-to-production.yaml
- complex-agents/nova-sonic/nova-sonic-form-agent/.github/workflows/build-and-test.yaml
- complex-agents/nova-sonic/nova-sonic-form-agent/.github/workflows/sync-to-production.yaml
- complex-agents/nutrition-assistant/nutrition-assistant-frontend/.github/workflows/build-and-test.yaml
- complex-agents/nutrition-assistant/nutrition-assistant-frontend/.github/workflows/sync-to-production.yaml
- complex-agents/role-playing/role_playing_frontend/.github/workflows/build-and-test.yaml
- complex-agents/role-playing/role_playing_frontend/.github/workflows/sync-to-production.yaml
- complex-agents/shopify-voice-shopper/shopify-voice-frontend/.github/workflows/build-and-test.yaml
- complex-agents/shopify-voice-shopper/shopify-voice-frontend/.github/workflows/sync-to-production.yaml
- complex-agents/turn-taking/turn-taking-frontend/.github/workflows/build-and-test.yaml
- complex-agents/turn-taking/turn-taking-frontend/.github/workflows/sync-to-production.yaml

npm:

- avatars/hedra/education_avatar/education-frontend/package.json
- avatars/tavus/voice-assistant-frontend/package.json
- base-frontend-template/package.json
- complex-agents/drive-thru/frontend/package.json
- complex-agents/ivr-agent/ivr-agent-frontend/package.json
- complex-agents/note-taking-assistant/note-taker-frontend/package.json
- complex-agents/nova-sonic/nova-sonic-form-agent/package.json
- complex-agents/nutrition-assistant/nutrition-assistant-frontend/package.json
- complex-agents/role-playing/role_playing_frontend/package.json
- complex-agents/shopify-voice-shopper/shopify-voice-frontend/package.json
- complex-agents/teleprompter/teleprompter-frontend/package.json
- complex-agents/turn-taking/turn-taking-frontend/package.json

pip_requirements:

- complex-agents/shopify-voice-shopper/requirements.txt
- metrics/send-metrics-to-3p/metrics_server/requirements.txt
- rag/requirements.txt
- requirements.txt

renovate-config-presets:

- renovate.json

Configuration Summary
Based on the default config's presets, Renovate will:
- Use semantic commit type `fix` for dependencies and `chore` for all others, if semantic commits are in use.
- Ignore `node_modules`, `bower_components`, `vendor` and various test/tests (except for nuget) directories.

🔡 Do you want to change how Renovate upgrades your dependencies? Add your custom config to `renovate.json` in this branch. Renovate will update the Pull Request description the next time it runs.

What to Expect
With your current configuration, Renovate will create 21 Pull Requests:
| Pull Request | Branch | Base | Version(s) |
| --- | --- | --- | --- |
| Update dependency next [SECURITY] | renovate/npm-next-vulnerability | main | 14.2.32 → 15.4.7 |
| Update dependency mermaid to v11.10.0 [SECURITY] | renovate/npm-mermaid-vulnerability | main | 11.10.0 |
| Update dependency requests to v2.32.4 [SECURITY] | renovate/pypi-requests-vulnerability | main | ==2.32.4 |
| Update dependency vite to v6.3.6 [SECURITY] | renovate/npm-vite-vulnerability | main | 6.3.6 |
| Update dependencies (non-major) | renovate/dependencies-(non-major) | main | 2.9.14, 1.1.6, ^0.3.0, 1.2.10, 2.2.6, 1.1.10, 1.1.11, 9.2.0, 11.18.2, 6.1.0, 2.15.7, 2.13.3, ^0.544.0, 4.1.0, 12.23.21, 19.1.1, 19.1.1, 2.0.7, 2.6.0 |
| Update dependency python-dotenv to v1.1.1 | renovate/python-dotenv-1.x | main | ==1.1.1 |
| Update devDependencies (non-major) | renovate/devdependencies-(non-major) | main | 3.3.1, 9.36.0, 4.1.13, 20.19.17, 22.18.6, 18.3.24, 19.1.13, 19.1.9, 18.3.7, 4.7.0, 9.36.0, 14.2.33, 15.5.4, 9.1.2, 10.1.8, 5.5.4, 5.2.0, 0.4.21, 15.15.0, 8.5.6, 3.6.2, 0.6.14, 4.1.13, 1.4.0, 5.9.2, ~5.9.0, 8.44.1 |
| Update actions/checkout action to v5 | renovate/actions-checkout-5.x | main | v5 |
| Update actions/setup-node action to v5 | renovate/actions-setup-node-5.x | main | v5 |
| Update dependency @types/node to v22 | renovate/node-22.x | main | ^22.0.0 |
| Update dependency @vitejs/plugin-react to v5 | renovate/vitejs-plugin-react-5.x | main | ^5.0.0 |
| Update dependency eslint to v9 | renovate/major-eslint-monorepo | main | ^9.0.0 |
| Update dependency eslint-config-next to v15 | renovate/major-nextjs-monorepo | main | 15.5.4 |
| Update dependency eslint-config-prettier to v10 | renovate/eslint-config-prettier-10.x | main | 10.1.8 |
| Update dependency flask to v3 | renovate/flask-3.x | main | ==3.1.2 |
| Update dependency framer-motion to v12 | renovate/framer-motion-12.x | main | ^12.0.0 |
| Update dependency globals to v16 | renovate/globals-16.x | main | ^16.0.0 |
| Update dependency tailwind-merge to v3 | renovate/tailwind-merge-3.x | main | ^3.0.0 |
| Update dependency tailwindcss to v4 | renovate/major-tailwindcss-monorepo | main | ^4.0.0 |
| Update pnpm to v10 | renovate/pnpm-10.x | main | 10.17.1 |
| Update react monorepo to v19 (major) | renovate/major-react-monorepo | main | ^19.0.0 (×4) |

🚸 Branch creation will be limited to a maximum of 2 per hour, so it doesn't swamp any CI resources or overwhelm the project. See docs for `prhourlylimit` for details.

❓ Got questions? Check out Renovate's Docs, particularly the Getting Started section.
If you need any further assistance then you can also request help here.
This PR was generated by Mend Renovate. View the repository job log.