Make.com Scenarios for Autonomous Agent

Overview

These scenarios embody the autonomous AI operations manager. Each scenario monitors external services, performs intelligent analysis, makes decisions, and pushes outcomes to Supabase via webhooks.

Remember: The frontend is PASSIVE. It only observes what Make.com does.


Prerequisites

  1. ✅ Make.com account (free tier works)
  2. ✅ Google account (Gmail, Calendar, Drive access)
  3. ✅ OpenAI API key
  4. ✅ AssemblyAI API key (for transcription)
  5. ✅ Supabase project with webhook endpoints deployed
  6. ✅ Webhook secret configured

Scenario 1: Gmail Email Processor

Purpose: Autonomously analyze incoming emails and extract tasks

Flow:

Gmail: Watch Emails (every 15 min)
   ↓
Router: Filter work-related emails
   ↓
OpenAI: Analyze email content
   ↓
JSON: Parse AI response
   ↓
HTTP: POST /task-batch (create tasks)
   ↓
HTTP: POST /ai-action-log (log action)

Setup:

Module 1: Gmail - Watch emails

  • Connection: Your Gmail account
  • Folder: INBOX
  • Criteria: is:unread
  • Max results: 10

Module 2: Router (optional filter)

  • Condition: Subject contains work keywords
  • Or: From specific clients

Module 3: OpenAI - Create a Chat Completion

  • Model: gpt-4o-mini
  • System message:
You are an autonomous AI operations manager analyzing freelancer emails.
Extract actionable tasks, deadlines, and priorities.
  • User message:
Analyze this email:

From: {{1.from.name}} <{{1.from.address}}>
Subject: {{1.subject}}
Body: {{1.textPlain}}

Return JSON:
{
  "has_tasks": boolean,
  "tasks": [
    {
      "title": "string",
      "description": "string",
      "priority": 1-3,
      "due_date": "YYYY-MM-DD or null"
    }
  ],
  "client_name": "string or null",
  "requires_response": boolean
}
  • Response format: json_object

Module 4: JSON - Parse JSON

  • JSON string: {{3.choices[1].message.content}}

Module 5: Router

  • Filter: {{4.has_tasks}} equals true

Module 6: HTTP - POST /task-batch

  • URL: https://YOUR_PROJECT.supabase.co/functions/v1/task-batch
  • Method: POST
  • Headers:
    • Content-Type: application/json
    • x-webhook-secret: YOUR_SECRET
  • Body:
{
  "user_id": "YOUR_USER_ID",
  "source": "gmail_analysis",
  "tasks": {{4.tasks}}
}

Module 7: HTTP - POST /ai-action-log

  • URL: https://YOUR_PROJECT.supabase.co/functions/v1/ai-action-log
  • Method: POST
  • Headers: (same as above)
  • Body:
{
  "user_id": "YOUR_USER_ID",
  "action_type": "email_processed",
  "description": "Analyzed email from {{1.from.name}} and created {{length(4.tasks)}} tasks",
  "context": {
    "email_subject": "{{1.subject}}",
    "from": "{{1.from.address}}",
    "tasks_created": {{length(4.tasks)}}
  }
}
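The Module 4–6 chain (parse the AI response, filter on `has_tasks`, build the `/task-batch` body) can be smoke-tested outside Make.com. A minimal Python sketch; the AI response is illustrative sample data, and `YOUR_USER_ID` is a placeholder as above:

```python
import json

# Sample of what Module 3 (OpenAI) returns with response format json_object
# (illustrative data, not real output)
ai_response = json.dumps({
    "has_tasks": True,
    "tasks": [
        {"title": "Send Q3 invoice", "description": "Client asked for the Q3 invoice",
         "priority": 2, "due_date": "2025-10-01"}
    ],
    "client_name": "Acme Co",
    "requires_response": True,
})

parsed = json.loads(ai_response)  # mirrors Module 4: JSON - Parse JSON

if parsed["has_tasks"]:  # mirrors Module 5: Router filter
    # mirrors the Module 6 request body
    task_batch_body = {
        "user_id": "YOUR_USER_ID",
        "source": "gmail_analysis",
        "tasks": parsed["tasks"],
    }
    print(len(task_batch_body["tasks"]))  # → 1
```

If the sketch produces the body you expect, the same mapping can be reproduced field-by-field in the Make.com HTTP module.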

Scenario 2: Email Response Drafter

Purpose: Autonomously draft email responses for user approval

Flow:

Gmail: Watch Emails
   ↓
OpenAI: Analyze if response needed
   ↓
Router: Filter responses needed
   ↓
OpenAI: Draft professional response
   ↓
HTTP: POST /email-draft (pending approval)
   ↓
HTTP: POST /ai-action-log

Key Modules:

OpenAI Analysis:

Analyze if this email requires a response:

From: {{1.from.name}}
Subject: {{1.subject}}
Body: {{1.textPlain}}

Return JSON:
{
  "requires_response": boolean,
  "response_urgency": "low" | "medium" | "high",
  "suggested_tone": "formal" | "friendly" | "technical",
  "context_summary": "brief summary"
}

OpenAI Drafting:

Draft a professional response to this email.

Context: {{previous AI response}}
From: {{1.from.name}}
Original email: {{1.textPlain}}

Write a clear, professional response that:
- Acknowledges their request
- Sets clear expectations
- Maintains freelancer professionalism

Return only the email body text.

HTTP /email-draft:

{
  "operation": "create",
  "user_id": "YOUR_USER_ID",
  "to_email": "{{1.from.address}}",
  "subject": "Re: {{1.subject}}",
  "body": "{{OpenAI response}}",
  "context": {
    "original_email_id": "{{1.id}}",
    "drafted_reason": "{{AI analysis.context_summary}}"
  }
}
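One detail worth handling when building this payload: replying to an email that is already a reply will otherwise produce "Re: Re: …" subjects. A small Python sketch with a hypothetical `reply_subject` helper (the address and body values are placeholders for the Make.com mappings shown above):

```python
def reply_subject(subject: str) -> str:
    # Avoid stacking "Re:" prefixes when the original subject is already a reply
    return subject if subject.lower().startswith("re:") else f"Re: {subject}"

draft_body = {
    "operation": "create",
    "user_id": "YOUR_USER_ID",
    "to_email": "client@example.com",          # {{1.from.address}} in Make.com
    "subject": reply_subject("Project timeline question"),
    "body": "<drafted email text>",            # OpenAI drafting output
}
print(draft_body["subject"])  # → Re: Project timeline question
```

In Make.com the same guard can be expressed with an `if(startsWith(...))` formula in the subject field.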

Scenario 3: Meeting Transcription & Action Items

Purpose: Autonomously transcribe meetings and extract tasks

Flow:

Drive: Watch Files (Meet Recordings folder)
   ↓
Drive: Download File
   ↓
AssemblyAI: Transcribe Audio
   ↓
OpenAI: Extract action items & decisions
   ↓
HTTP: POST /task-batch
   ↓
HTTP: POST /ai-action-log

Key Modules:

Drive: Watch Files

  • Folder: /Meet Recordings/ (or your folder)
  • File type: .mp4, .webm

AssemblyAI: Transcribe

  • Audio URL: Download URL from Drive
  • Speaker labels: true
  • Punctuation: true

OpenAI: Extract Actions

Analyze this meeting transcript and extract action items:

{{AssemblyAI.text}}

Return JSON:
{
  "action_items": [
    {
      "title": "string",
      "description": "string",
      "assigned_to": "string or null",
      "deadline": "YYYY-MM-DD or null",
      "priority": 1-3
    }
  ],
  "key_decisions": ["string"],
  "follow_up_needed": boolean
}
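The extraction schema above differs slightly from the `/task-batch` schema in Scenario 1 (`deadline` vs. `due_date`, plus an `assigned_to` field the endpoint does not take). A hedged Python sketch of the mapping, assuming `/task-batch` accepts the same task shape as Scenario 1; the transcript analysis is sample data:

```python
# Sample OpenAI extraction output for a meeting transcript (illustrative data)
analysis = {
    "action_items": [
        {"title": "Send proposal", "description": "Agreed during the pricing discussion",
         "assigned_to": "me", "deadline": "2025-06-10", "priority": 1}
    ],
    "key_decisions": ["Ship v2 in June"],
    "follow_up_needed": True,
}

# Rename deadline → due_date and drop assigned_to to match the
# Scenario 1 /task-batch task shape (an assumption about that endpoint)
tasks = [
    {"title": a["title"], "description": a["description"],
     "priority": a["priority"], "due_date": a["deadline"]}
    for a in analysis["action_items"]
]
print(tasks[0]["due_date"])  # → 2025-06-10
```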

Scenario 4: Deadline Risk Detector

Purpose: Autonomously detect deadline risks and surface insights

Flow:

Schedule: Every 6 hours
   ↓
HTTP: GET tasks from Supabase
   ↓
OpenAI: Analyze for risks
   ↓
Router: Filter if risks found
   ↓
HTTP: POST /agent-insight

Key Modules:

HTTP: GET Tasks

  • URL: https://YOUR_PROJECT.supabase.co/rest/v1/tasks
  • Method: GET
  • Headers:
    • apikey: YOUR_SUPABASE_ANON_KEY
    • Authorization: Bearer YOUR_SUPABASE_ANON_KEY
  • Query parameters: user_id=eq.YOUR_USER_ID&status=neq.completed

OpenAI: Risk Analysis

Analyze these tasks for deadline risks:

{{HTTP response.data}}

Identify:
- Overdue tasks
- Tasks approaching deadline with no progress
- Conflicting deadlines
- Workload imbalance

Return JSON:
{
  "has_risks": boolean,
  "insights": [
    {
      "type": "risk" | "opportunity",
      "title": "string",
      "description": "string",
      "confidence": 0.0-1.0,
      "affected_tasks": ["task_ids"]
    }
  ]
}

HTTP: POST /agent-insight

{
  "user_id": "YOUR_USER_ID",
  "insight_type": "{{item.type}}",
  "title": "{{item.title}}",
  "description": "{{item.description}}",
  "confidence_score": {{item.confidence}},
  "context": {
    "affected_tasks": {{item.affected_tasks}},
    "detection_time": "{{now}}"
  }
}
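Since the analysis returns an `insights` array but `/agent-insight` takes one insight per call, Make.com needs an iterator between the Router and the HTTP module. The fan-out can be sketched in Python (sample analysis output; field names follow the payload above):

```python
# Sample OpenAI risk-analysis output (illustrative data)
analysis = {
    "has_risks": True,
    "insights": [
        {"type": "risk", "title": "Overdue task",
         "description": "Invoice task is 3 days past its due date",
         "confidence": 0.9, "affected_tasks": ["task-1"]}
    ],
}

# One /agent-insight payload per insight item, skipped entirely when no risks
payloads = [
    {
        "user_id": "YOUR_USER_ID",
        "insight_type": item["type"],
        "title": item["title"],
        "description": item["description"],
        "confidence_score": item["confidence"],
        "context": {"affected_tasks": item["affected_tasks"]},
    }
    for item in analysis["insights"]
    if analysis["has_risks"]
]
print(len(payloads))  # → 1
```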

Scenario 5: Email Send Handler (Approval Flow)

Purpose: Actually send emails after user approval

Flow:

Webhook: Receive approval from frontend
   ↓
HTTP: GET draft from Supabase
   ↓
Gmail: Send Email
   ↓
HTTP: POST /email-draft (mark sent)
   ↓
HTTP: POST /ai-action-log

Key Modules:

Custom Webhook

  • Name: email-approval-handler
  • Expected data:
{
  "user_id": "string",
  "draft_id": "string",
  "action": "send"
}

HTTP: GET Draft

  • URL: https://YOUR_PROJECT.supabase.co/rest/v1/email_drafts
  • Query: id=eq.{{1.draft_id}}&user_id=eq.{{1.user_id}}
  • Get: to_email, subject, body

Gmail: Send Email

  • To: {{HTTP response.to_email}}
  • Subject: {{HTTP response.subject}}
  • Body: {{HTTP response.body}}

HTTP: POST /email-draft

{
  "operation": "mark_sent",
  "user_id": "{{1.user_id}}",
  "draft_id": "{{1.draft_id}}"
}

Scenario 6: Calendar Sync

Purpose: Autonomously sync calendar events to dashboard

Flow:

Google Calendar: Watch Events
   ↓
HTTP: POST /meetings (custom endpoint)
   ↓
HTTP: POST /ai-action-log

Setup:

Google Calendar: Watch Events

  • Calendar: Primary
  • Event types: All

HTTP: POST /meetings (you'll need to create this endpoint)

{
  "user_id": "YOUR_USER_ID",
  "title": "{{1.summary}}",
  "start_time": "{{1.start.dateTime}}",
  "end_time": "{{1.end.dateTime}}",
  "attendees": {{1.attendees}},
  "meeting_url": "{{1.hangoutLink}}"
}
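One mapping pitfall: all-day Calendar events carry `start.date` instead of `start.dateTime`, so the payload above would send empty times for them. A Python sketch of a defensive mapping (the `/meetings` field names follow the payload above; the sample event is illustrative):

```python
def event_to_meeting(event: dict, user_id: str) -> dict:
    # All-day events expose "date" rather than "dateTime" in the Calendar API
    start = event["start"].get("dateTime") or event["start"].get("date")
    end = event["end"].get("dateTime") or event["end"].get("date")
    return {
        "user_id": user_id,
        "title": event.get("summary", "(untitled)"),
        "start_time": start,
        "end_time": end,
        "attendees": [a["email"] for a in event.get("attendees", [])],
        "meeting_url": event.get("hangoutLink"),
    }

sample = {
    "summary": "Weekly sync",
    "start": {"dateTime": "2025-06-03T10:00:00Z"},
    "end": {"dateTime": "2025-06-03T10:30:00Z"},
    "attendees": [{"email": "a@example.com"}],
}
print(event_to_meeting(sample, "YOUR_USER_ID")["title"])  # → Weekly sync
```

In Make.com the equivalent is an `ifempty(1.start.dateTime; 1.start.date)`-style fallback in the HTTP body.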

Variables & Secrets

Store in Make.com:

  1. USER_ID (your Firebase UID)

    • Get from: /get-uid page
    • Store as: Data store or variable
  2. WEBHOOK_SECRET

    • Same as Supabase
    • Store as: Make.com variable
  3. SUPABASE_PROJECT_URL

    • Your project URL
    • Store as: Variable

Use in HTTP Modules:

URL: {{SUPABASE_PROJECT_URL}}/functions/v1/endpoint
Headers: x-webhook-secret: {{WEBHOOK_SECRET}}
Body: user_id: {{USER_ID}}
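The same three values can be kept out of any local test scripts by reading them from the environment. A small Python sketch (variable names mirror the Make.com variables above; the fallbacks are placeholders, not real values):

```python
import os

# Mirror the Make.com variables locally; fall back to placeholders for dry runs
base_url = os.environ.get("SUPABASE_PROJECT_URL", "https://YOUR_PROJECT.supabase.co")
headers = {
    "Content-Type": "application/json",
    "x-webhook-secret": os.environ.get("WEBHOOK_SECRET", "YOUR_SECRET"),
}
endpoint = f"{base_url}/functions/v1/task-batch"
print(endpoint.endswith("/functions/v1/task-batch"))  # → True
```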

Testing Checklist

For each scenario:

  1. ✅ Test with sample data
  2. ✅ Verify webhook calls succeed
  3. ✅ Check Supabase Edge Function logs
  4. ✅ Confirm data appears in database
  5. ✅ Verify frontend updates via realtime
  6. ✅ Test error cases
  7. ✅ Monitor execution history

Production Best Practices

  1. Error Handling:

    • Add error handlers to each module
    • Log failures to Supabase
    • Set up email alerts
  2. Rate Limiting:

    • Gmail: 15-minute intervals (free tier)
    • Don't overwhelm APIs
  3. Cost Management:

    • Use gpt-4o-mini (cheapest)
    • Cache common responses
    • Set operation limits
  4. Monitoring:

    • Review execution history daily
    • Check for failed scenarios
    • Monitor API costs

The Autonomous Flow

External World
    ↓
Make.com (brain)
    ↓
Webhooks (nervous system)
    ↓
Supabase (memory)
    ↓
Realtime (perception)
    ↓
Frontend (eyes)
    ↓
User (observer)

The AI does the work. You observe.


Next Steps

  1. Create scenarios one by one
  2. Test each thoroughly
  3. Deploy to production
  4. Monitor for a week
  5. Refine based on patterns
  6. Add more scenarios as needed

Remember: You're building an autonomous AI employee, not a task manager.