With the growing ecosystem around making RAG-based AI easier and cheaper to run on consumer hardware, I've been thinking about our options. ChatGPT is already iffy enough when it comes to harm reduction that, in my opinion, having it is little more than a novelty. What are your thoughts? I believe we should start looking into a locally hosted model that can be run on our own servers and fully customized with the trip data we've built up.
There are a few models I have in mind; the main one would be Llama 3.2, but there is one in particular that I believe would benefit us the most. I'll comment and update if I end up finding the name of it.
Local AI is also an option in other forms; GPT4All, I believe, is the name of one, so there are several options to look into. I work on the website and, as you can tell, have mainly been trying out the Actions workflows on GitHub and failing massively, so I thought I'd make a discussion post and let y'all decide if this is the route to take.
Even without advanced machine learning (ML) knowledge, this could work, because there are tools and workflows designed to make the process accessible. Here's how it might unfold:
We could use a pre-trained model like LLaMA 3.2 or similar, which can run efficiently on modest hardware. These models don’t require deep ML expertise to customize for specific use cases.
The database we already have is structured in JSON, which is ideal for either fine-tuning the model or powering a RAG (Retrieval-Augmented Generation) setup. This means we can feed the AI relevant information it can draw upon to answer questions or provide advice.
By leveraging open-source workflows and platforms, we can fine-tune the model with minimal effort, or even use pre-built integrations to get started quickly. There are plenty of guides across quite a few repos to help along the way.
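For what it's worth, the RAG half needs no ML expertise at all: it's just retrieval plus prompt assembly. Here's a minimal sketch, assuming (hypothetically) that each JSON entry has `substances` and `note` fields; naive keyword matching stands in for a real embedding search so it runs with the standard library alone:

```python
import json
import re

# Hypothetical shape for our combo data; the real JSON schema will differ.
# This sketches only the retrieval half of a RAG setup.
COMBOS = json.loads("""
[
  {"substances": ["mdma", "maoi"], "note": "Dangerous: risk of serotonin syndrome."},
  {"substances": ["caffeine", "lsd"], "note": "Low risk; may increase anxiety."}
]
""")

def retrieve(question, k=1):
    """Rank combo entries by how many of their substances appear in the question."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    ranked = sorted(COMBOS, key=lambda e: len(words & set(e["substances"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(question):
    """Prepend the retrieved notes as context for whichever local model we pick."""
    context = "\n".join(e["note"] for e in retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("is mdma safe with an maoi?"))
```

Swapping the keyword overlap for embeddings later wouldn't change the shape of this at all; only `retrieve` gets smarter.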
And to address a few points of concern:
Cost and Accessibility
Modern GPUs like an RTX 2070 can handle smaller models (e.g., 3 billion parameters), so we wouldn’t need overly expensive hardware. This ensures the project remains cost-effective and practical for our needs.
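As a rough sanity check on that claim: weight memory is approximately parameter count times bytes per parameter, so a 3B model needs about 6 GB at fp16 (tight but workable on an 8 GB RTX 2070) and far less once quantized. This back-of-envelope ignores the KV cache and runtime overhead:

```python
def model_memory_gb(params_billion, bytes_per_param):
    """Rough VRAM needed for the weights alone (no KV cache or overhead)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 3B-parameter model at different precisions:
print(model_memory_gb(3, 2))    # fp16 (2 bytes/param): 6.0 GB
print(model_memory_gb(3, 0.5))  # 4-bit quantized (0.5 bytes/param): 1.5 GB
```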
The result would be an AI capable of helping users, without adding to the liability that already exists from hosting the combo sheet. It could:
Explain safe and unsafe substance combinations.
Provide reassurance during stressful moments (like bad trips).
Suggest immediate actions, including contacting EMS if necessary.
Integrate voice chat (e.g., with Twilio) to offer real-time support.
This wouldn't replace human help but could serve as a valuable tool to reassure and guide users while they wait for assistance. It’s a practical, impactful solution within reach!
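One design note on the EMS point: escalation probably shouldn't be left to the model alone. A cheap guard layer can scan incoming messages for emergency cues before anything reaches the model. A sketch, with a placeholder keyword list and wording that would need review by people who actually do harm reduction work:

```python
# Placeholder emergency cues; the real list needs input from the harm
# reduction team, not a code sketch.
EMERGENCY_CUES = {"unconscious", "seizure", "not breathing", "overdose"}

def triage(message):
    """Return an escalation notice for likely emergencies, else None."""
    text = message.lower()
    if any(cue in text for cue in EMERGENCY_CUES):
        return ("This sounds like a medical emergency. Please contact EMS now; "
                "the assistant can stay with you while you wait for help.")
    return None  # no red flags: safe to hand the message to the local model

print(triage("my friend is unconscious and won't wake up"))
```

Keeping this check outside the model means a quantized 3B model's occasional mistakes can't suppress the EMS prompt.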