
feat: Add ModelsLab provider for uncensored AI and competitive pricing #271

Closed
adhikjoshi wants to merge 2 commits into andrewyng:main from adhikjoshi:ml

Conversation

@adhikjoshi

ModelsLab Provider for aisuite

License: MIT
Python 3.8+

Uncensored AI language models for Andrew Ng's aisuite framework.

This provider adds ModelsLab's uncensored language models to aisuite, enabling developers to access creative, unrestricted AI through Andrew Ng's unified AI interface.

🌟 Features

  • 🔥 Uncensored Language Models: Access to truly unrestricted Llama 3.1 models
  • 🎯 OpenAI Compatible: Drop-in replacement for OpenAI models with zero learning curve
  • 🎙️ Text-to-Speech: High-quality voice synthesis with emotion control
  • ⚡ Function Calling: Full support for tools and function calling
  • 🔄 Streaming: Real-time response streaming support
  • 💰 Cost Effective: Competitive pricing compared to premium providers

🚀 Quick Start

Installation

# Install aisuite if you haven't already
pip install aisuite

# Get your ModelsLab API key from https://modelslab.com
export MODELSLAB_API_KEY="your-api-key-here"

Basic Usage

import aisuite as ai

# Initialize client with ModelsLab provider
client = ai.Client()

# Use ModelsLab's uncensored Llama 3.1 models
messages = [
    {"role": "system", "content": "You are a creative writing assistant without content restrictions."},
    {"role": "user", "content": "Help me write a bold, unrestricted story."}
]

response = client.chat.completions.create(
    model="modelslab:llama-3.1-8b-uncensored",
    messages=messages,
    temperature=0.8
)

print(response.choices[0].message.content)

📋 Available Models

| Model | Description | Context Length | Function Calling |
|---|---|---|---|
| `modelslab:llama-3.1-8b-uncensored` | Fast, efficient uncensored Llama 3.1 8B | 32,768 tokens | ✅ |
| `modelslab:llama-3.1-70b-uncensored` | High-quality uncensored Llama 3.1 70B | 32,768 tokens | ✅ |
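Both models share the same 32,768-token context window, so choosing between them is mainly a speed/quality trade-off. A small illustrative helper (the helper and its names are ours, not part of the provider; only the model IDs come from the table above) might look like:

```python
# Hypothetical helper: pick a ModelsLab model ID by priority.
# The model IDs match the table above; the mapping itself is illustrative.
MODELSLAB_MODELS = {
    "fast": "modelslab:llama-3.1-8b-uncensored",
    "quality": "modelslab:llama-3.1-70b-uncensored",
}

def choose_model(priority: str = "fast") -> str:
    """Return a ModelsLab model ID for the given priority ('fast' or 'quality')."""
    try:
        return MODELSLAB_MODELS[priority]
    except KeyError:
        raise ValueError(f"unknown priority: {priority!r}")

print(choose_model("quality"))  # modelslab:llama-3.1-70b-uncensored
```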

🎯 Key Use Cases

Creative Writing Without Restrictions

response = client.chat.completions.create(
    model="modelslab:llama-3.1-70b-uncensored",
    messages=[
        {"role": "user", "content": "Write a daring, boundary-pushing science fiction story"}
    ],
    temperature=0.9,
    max_tokens=1000
)

Function Calling for Creative Tools

tools = [
    {
        "type": "function",
        "function": {
            "name": "generate_creative_content",
            "description": "Generate unrestricted creative content",
            "parameters": {
                "type": "object",
                "properties": {
                    "content_type": {"type": "string", "enum": ["story", "poem", "script"]},
                    "style": {"type": "string", "description": "Creative style or genre"}
                }
            }
        }
    }
]

response = client.chat.completions.create(
    model="modelslab:llama-3.1-8b-uncensored",
    messages=[{"role": "user", "content": "Create bold creative content"}],
    tools=tools,
    tool_choice="auto"
)

Streaming Responses

stream = client.chat.completions.create(
    model="modelslab:llama-3.1-8b-uncensored",
    messages=[{"role": "user", "content": "Tell me an engaging story"}],
    stream=True,
    temperature=0.8
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

🎙️ Audio Features (Experimental)

The ModelsLab provider includes experimental text-to-speech capabilities:

# Access audio functionality (when available)
if hasattr(client, 'audio'):
    audio_result = client.audio.text_to_speech(
        text="Hello from ModelsLab's uncensored AI!",
        voice_id="professional",
        emotion="enthusiastic"
    )
    print(f"Audio URL: {audio_result['audio_url']}")

⚙️ Configuration

Environment Variables

# Required
export MODELSLAB_API_KEY="your-modelslab-api-key"

# Optional
export MODELSLAB_BASE_URL="https://modelslab.com/api/"  # Custom endpoint

Programmatic Configuration

import aisuite as ai

# Configure ModelsLab provider
client = ai.Client({
    "modelslab": {
        "api_key": "your-api-key",
        "base_url": "https://modelslab.com/api/",  # Optional
        "timeout": 60  # Optional timeout in seconds
    }
})

🔧 Advanced Parameters

The ModelsLab provider supports all OpenAI-compatible parameters:

response = client.chat.completions.create(
    model="modelslab:llama-3.1-70b-uncensored",
    messages=messages,
    temperature=0.8,        # Creativity level (0.0-2.0)
    max_tokens=2048,        # Maximum response length
    top_p=0.9,             # Nucleus sampling
    frequency_penalty=0.1,  # Reduce repetition
    presence_penalty=0.1,   # Encourage topic diversity
    stop=["END", "\n\n"],  # Stop sequences
    stream=True            # Enable streaming
)

🚨 Responsible AI Usage

ModelsLab's uncensored models provide creative freedom but require responsible usage:

  • Content Guidelines: Ensure compliance with applicable laws and platform policies
  • Use Case Validation: Verify appropriateness for your specific use case
  • User Safety: Implement appropriate content filtering for user-facing applications
  • Legal Compliance: Respect intellectual property, privacy, and safety regulations
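For the "User Safety" point above, one minimal approach is to gate model output through a filter before it reaches users. This sketch is purely illustrative (the blocklist, function names, and fallback text are placeholders, not part of aisuite); production systems should use a real moderation service or classifier rather than keyword matching:

```python
# Illustrative output filter for user-facing apps. Keyword matching is a
# placeholder for a proper moderation classifier or API.
BLOCKLIST = {"forbidden-topic", "another-banned-term"}  # hypothetical terms

def passes_filter(text: str) -> bool:
    """Return True if no blocklisted term appears in the text."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def safe_reply(raw_reply: str, fallback: str = "[content withheld]") -> str:
    """Return the model's reply if it passes the filter, else a fallback."""
    return raw_reply if passes_filter(raw_reply) else fallback
```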

🔍 Error Handling

from aisuite.provider import LLMError

try:
    response = client.chat.completions.create(
        model="modelslab:llama-3.1-8b-uncensored",
        messages=messages
    )
except LLMError as e:
    print(f"ModelsLab API error: {e}")
    # Handle error appropriately
except Exception as e:
    print(f"Unexpected error: {e}")
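Transient API errors (rate limits, timeouts) are often worth retrying. A generic retry wrapper could look like the following; the backoff values and the retried exception types are assumptions to adjust for the error classes your aisuite version actually raises:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying with exponential backoff on the given exceptions."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage sketch (client/messages as in the examples above):
# response = with_retries(lambda: client.chat.completions.create(
#     model="modelslab:llama-3.1-8b-uncensored", messages=messages))
```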

📊 Comparison with Other Providers

| Feature | ModelsLab | OpenAI | Anthropic | Cohere |
|---|---|---|---|---|
| Content Restrictions | ❌ None | ✅ Strict | ✅ Moderate | ✅ Moderate |
| Function Calling | ✅ | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ | ✅ |
| Cost (per 1M tokens) | 💰 Low | 💰💰 High | 💰💰 High | 💰 Medium |
| Context Length | 32K | 128K | 200K | 128K |

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Development Setup

# Clone the repository
git clone https://github.com/andrewyng/aisuite.git
cd aisuite

# Install development dependencies
pip install -e ".[dev]"

# Add ModelsLab provider files
cp modelslab_provider.py aisuite/providers/
cp test_modelslab_provider.py tests/

# Run tests
pytest tests/test_modelslab_provider.py -v

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


🎉 Acknowledgments

  • Andrew Ng for creating the aisuite unified AI framework
  • ModelsLab for providing uncensored AI models
  • The open-source AI community for advancing accessible AI

Ready to build without boundaries? Get your ModelsLab API key and start creating unrestricted AI applications with aisuite today! 🚀

@adhikjoshi
Author

Superseded by #280 which is cleaner. Closing.

@adhikjoshi adhikjoshi closed this Mar 1, 2026
