# Agentic AI Meets Brain Wellness

> 60-second cognitive assessment powered by 5 AI agents

## Table of Contents
- What is Project Atlas?
- Project Structure
- Tech Stack
- Quick Start
- Environment Variables
- Development Roadmap
- Design Specifications
- Metrics & Goals
- Contributing
- License
- Team
- Support

## What is Project Atlas?
Project Atlas is a revolutionary brain wellness app that uses 5 AI agents to analyze a simple 60-second animal naming test. Users get personalized cognitive insights and can contribute to brain health research.
- Enter age (18-99) for personalized scoring
- Record 60 seconds of animal naming
- AI agents analyze speech, efficiency, flexibility, strategy, and insights
- Get results with brain wellness score
- Share on social or help research via wellness survey

The 5 AI agents:
- Speech Agent - Cleans and processes audio
- Efficiency Agent - Detects repetitions and errors
- Flexibility Agent - Identifies semantic categories
- Strategy Agent - Analyzes cognitive approach
- Insight Agent - Generates personalized tips
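
The five agents above can be pictured as stages in a sequential pipeline, where each agent sees the transcript plus everything earlier agents found. This is an illustrative TypeScript sketch, not the actual implementation; the `AgentResult` shape and the agent bodies are assumptions for the example.

```typescript
// Hypothetical result shape each agent contributes; the real API may differ.
interface AgentResult {
  agent: string;
  findings: Record<string, unknown>;
}

// An agent receives the transcript plus all earlier agents' results.
type Agent = (transcript: string[], prior: AgentResult[]) => AgentResult;

const speechAgent: Agent = (words) => ({
  agent: 'speech',
  findings: { wordCount: words.length },
});

const efficiencyAgent: Agent = (words) => ({
  agent: 'efficiency',
  // Repetitions = total words minus distinct words.
  findings: { repetitions: words.length - new Set(words).size },
});

// Run the agents in order, accumulating results for downstream agents.
function runPipeline(transcript: string[], agents: Agent[]): AgentResult[] {
  const results: AgentResult[] = [];
  for (const agent of agents) {
    results.push(agent(transcript, results));
  }
  return results;
}

const results = runPipeline(['cat', 'dog', 'cat'], [speechAgent, efficiencyAgent]);
// results[1].findings.repetitions === 1 ('cat' named twice)
```

The flexibility, strategy, and insight agents would slot in as further `Agent` entries, each reading the accumulated `prior` results.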
## Project Structure

```
DA_project_atlas_native/
├── 📁 api/                 # Backend containing the scoring algorithm
├── 📁 app/
│   ├── ⚙️ config/          # Configs
│   │   └── api.ts          # API config
│   ├── 🛠️ services/        # Backend services
│   │   ├── api.ts          # API service
│   │   └── mockData.ts     # Mock data service
│   ├── 📊 utils/           # Utilities
│   │   └── errorHandler.ts # Error handler (for demo only)
│   ├── _layout.tsx         # Layout
│   ├── age_input.tsx       # Age input screen
│   ├── index.tsx           # Welcome screen
│   ├── instructions.tsx    # Instructions screen
│   ├── recording.tsx       # Recording screen
│   └── results.tsx         # Results screen
├── 📁 assets/              # Fonts & images
│   └── ...
├── 📱 App.tsx              # Main app entry point
├── ⚙️ app.json             # Expo configuration
└── ...                     # Package files
```
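
The `config/api.ts` entry above is a natural place to centralize the backend URL. A minimal sketch, assuming the `REACT_APP_AZURE_BACKEND` variable described under Environment Variables and a hypothetical `/assess` path; how that variable actually reaches the client depends on your Expo configuration.

```typescript
// app/config/api.ts (illustrative sketch; paths and names are assumptions)

// Resolve the backend base URL from an environment map, with a local fallback.
// Parameterized so the logic is easy to test outside the app.
function resolveBaseUrl(env: Record<string, string | undefined>): string {
  return env['REACT_APP_AZURE_BACKEND'] ?? 'http://localhost:3000';
}

// Join base URL and path, normalizing slashes on both sides.
function endpoint(baseUrl: string, path: string): string {
  return `${baseUrl.replace(/\/+$/, '')}/${path.replace(/^\/+/, '')}`;
}

// In the app this would read the real environment at startup.
const API_BASE_URL = resolveBaseUrl((globalThis as any).process?.env ?? {});

// e.g. endpoint(API_BASE_URL, 'assess') -> '<base>/assess'
```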
## Tech Stack

- Framework: Expo SDK with React Native
- Language: TypeScript
- Navigation: React Navigation 6
- State Management: React Hooks + Context
- UI Components: React Native + Custom styling
- Audio: Expo AV for recording
- Storage: Expo SecureStore + AsyncStorage
- Sharing: Expo Sharing
- API: REST endpoints for assessment processing
- User Events: Custom tracking system
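
The roadmap targets WAV audio at 44.1 kHz / 16-bit recorded through Expo AV. Below is a hedged sketch of what those recording options might look like as plain data; the field names loosely mirror expo-av's `RecordingOptions` and should be verified against the expo-av documentation before use.

```typescript
// Local illustrative type -- NOT the expo-av type; check the expo-av docs
// for the exact RecordingOptions shape before relying on these fields.
interface RecordingOptionsSketch {
  ios: {
    extension: string;
    sampleRate: number;
    numberOfChannels: number;
    linearPCMBitDepth: number;
    linearPCMIsBigEndian: boolean;
    linearPCMIsFloat: boolean;
  };
  android: {
    extension: string;
    sampleRate: number;
    numberOfChannels: number;
  };
}

// Target format from the roadmap: WAV, 44.1 kHz, 16-bit, mono.
const WAV_RECORDING_OPTIONS: RecordingOptionsSketch = {
  ios: {
    extension: '.wav',
    sampleRate: 44100,
    numberOfChannels: 1,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
  android: {
    extension: '.wav',
    sampleRate: 44100,
    numberOfChannels: 1,
  },
};

// In the app, options like these would be passed to
// Audio.Recording.createAsync(...) from expo-av, after
// Audio.requestPermissionsAsync() resolves.
```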
## Quick Start

### Prerequisites

- Node.js 18+
- npm or yarn
- Expo CLI
- Expo Go app on your phone (for testing)
- If using an emulator, additional setup is required on your machine; refer to the "Set Up Your React Native Environment" guide in the React Native docs
### Testing on Your Phone

1. Install Expo Go from the App Store/Play Store
2. Scan the QR code from your terminal
3. The app loads on your phone inside Expo Go
### Installation

1. Install Expo CLI globally:

   ```bash
   npm install -g @expo/cli
   ```

2. Clone the project:

   ```bash
   git clone <your-forked-repo-url>
   cd DA_project_atlas_native
   ```

3. Install dependencies:

   ```bash
   npm install
   ```

4. Start the development server:

   ```bash
   # When working on a private network
   npx expo start

   # When on a public network, or when the test device is on another network
   npx expo start --tunnel
   ```

5. Scan the QR code with the Expo Go app, or press 'a' for the Android emulator
### Development Commands

```bash
# Start development server
npm start
# or
npx expo start

# Start on a specific platform
npm run android   # Android emulator
npm run ios       # iOS simulator
npm run web       # Web browser

# Build for production
npx expo build:android   # Android APK/AAB
npx expo build:ios       # iOS IPA

# TypeScript checking
npm run type-check

# Run tests
npm test

# Lint code
npm run lint
```

## Environment Variables

Create a `.env` file in the project root:
```bash
# API Endpoints
REACT_APP_AZURE_BACKEND=https://your-api.azurewebsites.net
```

## Development Roadmap

- Project setup with Expo
- Basic navigation structure
- Welcome screen with branding
- Age input with slider component
- Instructions screen with permissions
- Basic recording screen with timer
- Mock results display
- Audio recording with proper format (WAV, 44.1kHz, 16-bit)
- Azure integration
- AI processing API connection
- Real-time "AI agents analyzing" animation
- Results screen with actual data
- Share functionality
- Wellness survey implementation
- Device testing (iOS/Android)
- Performance optimization
- Analytics tracking implementation
- Error handling and edge cases
- App store assets and metadata
- TestFlight submission
- Wellness data analytics dashboard
- Partnership integration APIs
- A/B testing infrastructure
- Viral sharing optimization
- User onboarding optimization
## Design Specifications

- Primary Colors: Purple gradient (`#667eea` to `#764ba2`)
- Recording Screen: Black background (TikTok-friendly)
- Typography: System fonts, bold weights
- Layout: Mobile-first, portrait orientation
- Flow: Linear progression through 5 screens
- Duration: Complete assessment in under 2 minutes
- Accessibility: Voice prompts, clear visual hierarchy
- Performance: <3s app launch, <1s screen transitions
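
Specifications like these are easiest to enforce when they live in one shared constants module that screens and performance checks both import. An illustrative sketch; the module layout and names are assumptions, while the values come from the spec above.

```typescript
// Hypothetical shared design constants derived from the spec above.
const THEME = {
  gradient: ['#667eea', '#764ba2'] as const, // primary purple gradient
  recordingBackground: '#000000',            // black recording screen
} as const;

// Performance budgets in milliseconds, usable in automated perf checks.
const PERF_BUDGET_MS = {
  appLaunch: 3000,        // <3s app launch
  screenTransition: 1000, // <1s screen transitions
} as const;

// Example guard a performance test might use.
function withinBudget(actualMs: number, budgetMs: number): boolean {
  return actualMs <= budgetMs;
}
```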
## Metrics & Goals

- Funnel: App Opens → Assessment Started → Recording Completed → Results Viewed → Results Shared
- Target: >60% completion rate (opens → results)
- Viral Goal: >25% share rate
- App Launch: <3 seconds
- Recording Start: <1 second
- AI Processing: 5-15 seconds
- Results Display: Instant
- Demographics (age, education, location)
- Sleep patterns and mood tracking
- Exercise habits and cognitive performance
- Family history data (anonymized)
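
The README mentions a custom tracking system for user events; the funnel and targets above could be computed with a small in-memory tracker like the sketch below. The event names and class are assumptions for illustration; a real implementation would batch events to the backend.

```typescript
// Funnel stages from the metrics section, in order.
type FunnelEvent =
  | 'app_open'
  | 'assessment_started'
  | 'recording_completed'
  | 'results_viewed'
  | 'results_shared';

// Minimal in-memory event tracker.
class FunnelTracker {
  private counts = new Map<FunnelEvent, number>();

  track(event: FunnelEvent): void {
    this.counts.set(event, (this.counts.get(event) ?? 0) + 1);
  }

  private count(event: FunnelEvent): number {
    return this.counts.get(event) ?? 0;
  }

  // Completion rate: opens that made it to results (target: >60%).
  completionRate(): number {
    const opens = this.count('app_open');
    return opens === 0 ? 0 : this.count('results_viewed') / opens;
  }

  // Share rate among users who saw results (viral goal: >25%).
  shareRate(): number {
    const viewed = this.count('results_viewed');
    return viewed === 0 ? 0 : this.count('results_shared') / viewed;
  }
}
```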
## Contributing

Please contribute only if you have been explicitly granted access. All unauthorized PRs will be rejected.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Team

- Author: Kevin Mekulu (kxm5924@psu.edu)
- Founding Software Engineer: Ernest Saakian
- Founding ML Engineer: Alp Karalar
- Institution: Penn State University
- Project: Brain Wellness Research Initiative
## Support

Troubleshooting:

- App won't start: Run `expo doctor` to check your setup
- Audio not recording: Check device permissions
- Build fails: Clear the cache with `expo start -c`
- 📧 Email: kxm5924@psu.edu
- 📚 Expo Docs
- 📚 React Navigation Docs
🧠 "5 AI Agents revolutionize brain wellness" - Project Atlas™