I'll intelligently implement features from any source - web URLs, local folders, or multiple references - adapting them perfectly to your project's architecture.
Arguments: $ARGUMENTS - URLs, paths, or descriptions of what to implement
Source Type Detection
- Is this a web URL with source code?
- Is this a code sharing platform URL?
- Is this a local path? (starts with ./ or ../ or /)
- Is this a description for me to research?
- Are there multiple sources to combine?
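The detection questions above can be sketched as a small classifier. This is an illustrative sketch only; the category names and heuristics are assumptions, not a fixed specification:

```python
import re

def detect_source_type(arg: str) -> str:
    """Classify one argument into a rough source category,
    mirroring the detection questions above."""
    if re.match(r"https?://", arg):
        return "web-url"          # web URL (source code or code-sharing platform)
    if arg.startswith(("./", "../", "/")):
        return "local-path"       # local file or folder
    return "description"          # free text to research and implement

def detect_sources(args: list[str]) -> list[tuple[str, str]]:
    # More than one argument means multiple sources to combine.
    return [(a, detect_source_type(a)) for a in args]
```

A code-sharing platform URL would still classify as "web-url" here; distinguishing platforms would need extra host-specific checks.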
Implementation Approach
- Single file to adapt?
- Multiple files to integrate?
- Pattern to implement?
- Library to migrate?
My Capabilities
- I can fetch web content with WebFetch
- I can read local files with Read
- I can analyze code patterns with Grep
- I can create adapted implementations
Project Context Needed
- What's the target location?
- What patterns should I follow?
- What to avoid or replace?
Based on what you've provided, I'll analyze and adapt the implementation.
When you provide a .md file (like documentation.md), I understand this is an IMPLEMENTATION PLAN:
Read the Plan Document
- Understand the overall goal
- Identify all tasks/checklist items
- Note any specific requirements
Analyze Current Progress
- Check each checklist item: Done? Partially done? Not started?
- Use Grep/Read to verify what's already implemented
- Map plan items to actual code/files
Continue Implementation
- Start from where the plan left off
- NEVER redo completed work
- Check off items as I complete them
- Update progress in the plan
Smart Verification
- Before implementing: "Does this already exist?"
- After implementing: "Did I complete this correctly?"
- Continuous validation against the plan
I will:
- Read EVERY code file in EVERY directory
- Never skip files or read partially
- Analyze actual implementation, not assumptions
- Provide detailed technical understanding
- Count and report total files analyzed
I will ALWAYS read files completely:
- Read tool: Used WITHOUT limit parameter - reads ENTIRE file
- WebFetch tool: Instructed to read ALL content, no truncation
- No summarization: Complete code, not snippets
- Full analysis: Every line of every file
I'll analyze EVERY SINGLE FILE in EVERY repository:
STEP 1: Map ENTIRE Repository Structure
For EACH repository provided:
1. Use Glob "**/*" to find EVERY file (a "**/*.*" pattern would miss extensionless files like Makefile or LICENSE)
2. List ALL directories and subdirectories
3. Count total files to ensure nothing missed
4. Create complete file tree
STEP 2: Systematic File Reading
For EVERY code file found:
- Read ENTIRE file (no limit parameter)
- Components: Read every component file completely
- Services: Read all service implementations
- Utils: Read every utility function
- Tests: Read test files to understand usage
- Configs: Read all configuration files
- Routes: Read all routing/endpoint files
STEP 3: Deep Component Analysis
For component-based apps:
- Read EVERY component file in components/
- Understand props, state, methods
- Map component relationships
- Analyze data flow between components
- Check how components communicate
STEP 4: Architecture Reconstruction
From actual code reading:
- Entry points (main, index, app files)
- Core business logic location
- Data models and schemas
- API endpoints and handlers
- State management approach
- Dependency injection patterns
- Error handling strategies
STEP 5: Cross-Repository Analysis
When multiple repos provided:
- Compare architectures
- Identify best practices from each
- Find reusable patterns
- Understand different approaches
- Extract most efficient solutions
First, I'll analyze your project to understand:
- File organization patterns
- Naming conventions
- Technology stack
- Code style preferences
- COMPLETE dependency files (Read WITHOUT limit parameter)
- FULL lock files (Read WITHOUT limit parameter)
- Build configuration (Read WITHOUT limit parameter)
- ALL documentation (Read WITHOUT limit parameter)
Package Compatibility
- What versions are already installed?
- Will new dependencies conflict?
- Are there better alternatives already in the project?
- Can I reuse existing packages instead of adding new ones?
Best Practices Alignment
- Does this follow current industry standards?
- Are there security concerns with dependencies?
- Is this pattern still recommended or deprecated?
- What do the official docs suggest now?
Performance Considerations
- Bundle size impact?
- Runtime performance implications?
- Better native alternatives available?
- Tree-shaking compatibility?
I'll check your current dependencies and ensure compatibility:
- Read ENTIRE dependency files (no line limits)
- Analyze COMPLETE dependency tree
- Check FULL documentation of existing packages
- Read ALL config files completely (no truncation)
Smart Dependency Mapping
- Map source dependencies to your existing ones
- Use your utilities instead of adding new ones
- Convert patterns to match your project
- Check for equivalent functionality
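The mapping above can be sketched as a small decision table. The package names in the example are hypothetical placeholders, and the reuse/substitute/add categories are an assumption about how the decision would be recorded:

```python
def map_dependencies(source_deps, equivalents, project_deps):
    """For each dependency the source code needs, decide whether to
    reuse it (already present), substitute a project equivalent,
    or add it as genuinely new.

    `equivalents` maps a source package to the project package
    covering the same functionality."""
    plan = {}
    for dep in source_deps:
        if dep in project_deps:
            plan[dep] = ("reuse", dep)
        elif dep in equivalents and equivalents[dep] in project_deps:
            plan[dep] = ("substitute", equivalents[dep])
        else:
            plan[dep] = ("add", dep)
    return plan
```

Only the "add" entries represent new weight on the project; everything else resolves to code that is already there.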
Transform Code
- Match your established patterns
- Use your existing utilities
- Follow your error handling approach
- Maintain your state management style
Ensure Quality
- Follow your project's code quality standards
- Match your testing patterns
- Use your linting rules
- Apply your security practices
I'll adapt the source code by:
Pre-Implementation Checks
- Verify all dependencies are compatible
- Check for duplicate functionality
- Validate against your lint rules
- Ensure no security issues
Smart Adaptation
Example transformations:
- External date library → Your existing date solution
- Legacy HTTP calls → Your HTTP client
- Outdated patterns → Modern equivalents
- Old module system → Your module system
Quality Assurance
- Match your project's type system (if any)
- Follow your linting rules
- Use your test patterns
- Apply your code formatting
Best Practices Application
- Modern syntax appropriate for your project
- Proper error handling for your stack
- Accessibility considerations
- Performance optimizations
I'll proceed step by step:
- Fetch and analyze the source
- Understand your project patterns
- Create adapted implementation
- Verify it fits your architecture
- Test the integration
Provide a URL or path to analyze and adapt the code to your project.
Provide multiple URLs or paths to intelligently merge features from different sources.
Describe what you need and I'll research best practices and implement an optimized version.
Provide local paths to analyze and integrate existing code.
Specify what you're migrating from and to, and I'll help with the transition.
Provide a .md file with implementation plan/checklist and I'll:
- Analyze what's been done
- Continue from where it stopped
- Never duplicate completed work
- Update progress as I go
EXHAUSTIVE Code Analysis
- Use Glob "**/*" to find EVERY file in EVERY directory
- Read EACH file completely (no limits, no skipping)
- Map entire application architecture from code
- Understand every feature implementation
- Count files read to ensure completeness
Intelligent Fetch
- Use WebFetch for URLs with prompt: "Read ENTIRE file content, ALL lines, no summarization"
- Read local files without limit parameter
- Extract complete code, not summaries or snippets
- Analyze full documentation and configuration files
Dependency Resolution
Generic approach:
- Source needs library X → You have library Y → Use Y
- Source uses old patterns → Convert to your patterns
- Source needs utility Z → You have equivalent → Use yours
- Source uses deprecated features → Use modern alternatives
Quality Implementation
- No unnecessary dependencies
- Reuse your existing utilities
- Follow your exact patterns
- Modern, secure, performant code
Validation
- Would this pass your CI/CD?
- Does it match your code review standards?
- Is it using best practices from 2025?
- Will it scale with your application?
I'll handle various scenarios:
- Sources with outdated patterns: Convert to modern approaches
- Sources with security issues: Fix vulnerabilities
- Sources that are bloated: Find lightweight alternatives
The key difference: I don't just adapt code - I ensure it fits perfectly with your existing setup, uses what you already have, and follows current best practices.
When working with .md implementation plans, I follow this careful process:
I recognize various checklist formats:
- [ ] Not started task
- [x] Completed task
- [~] Partially completed task
- TODO: Task description
- DONE: Completed task
- WIP: Work in progress
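Recognizing these formats can be sketched with two regular expressions, one for checkbox-style markers ([ ], [x], [~]) and one for TODO/DONE/WIP prefixes. The state names are assumptions for illustration:

```python
import re

CHECKBOX = re.compile(r"^\s*[-*]\s*\[(?P<mark>[ xX~])\]\s*(?P<task>.+)$")
PREFIX = re.compile(r"^\s*(?P<tag>TODO|DONE|WIP):\s*(?P<task>.+)$")

STATE = {" ": "not-started", "x": "done", "X": "done", "~": "partial",
         "TODO": "not-started", "DONE": "done", "WIP": "partial"}

def parse_checklist(text: str) -> list:
    """Extract (task, state) pairs from a plan document, recognizing
    both checkbox and TODO/DONE/WIP prefix formats."""
    items = []
    for line in text.splitlines():
        m = CHECKBOX.match(line) or PREFIX.match(line)
        if m:
            key = m.groupdict().get("mark") or m.groupdict().get("tag")
            items.append((m.group("task").strip(), STATE[key]))
    return items
```

Parsing the plan into explicit states is what makes "start from where the plan left off" mechanical rather than guesswork.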
Before implementing any task:
- Check if files/functions mentioned already exist
- Use Grep to search for implementations
- Read existing code to understand current state
- Only implement what's actually missing
As I work:
- Update checklist items from [ ] to [x]
- Add notes about what was implemented
- Document any deviations from the plan
- Create clear commit messages referencing plan items
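Updating the plan file itself can be sketched as a targeted text rewrite; the HTML-comment note format is an assumption, and the task-matching is deliberately naive (first line containing the task text):

```python
def mark_done(plan_text: str, task: str, note: str = "") -> str:
    """Flip one checklist item from '[ ]' to '[x]' and optionally
    append an implementation note, leaving all other lines untouched."""
    lines = plan_text.splitlines()
    for i, line in enumerate(lines):
        if task in line and "[ ]" in line:
            lines[i] = line.replace("[ ]", "[x]", 1)
            if note:
                lines[i] += f"  <!-- {note} -->"
            break  # only the first matching unchecked item is updated
    return "\n".join(lines)
```

Because only one line changes per call, the resulting diff maps cleanly onto a commit message referencing that plan item.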
When analyzing multiple repositories, I follow this EXHAUSTIVE approach:
Use Glob patterns to find all files in all directories. Read every file found, regardless of type or location.
Read every single file found in every directory. No exceptions, no skipping, complete analysis.
Extract specific implementation details, not generic descriptions. Understand exact patterns, algorithms, and approaches used.
Report complete analysis with:
- Total files analyzed
- Core features identified
- Architectural patterns found
- Key implementations to extract
- Config files: Read ALL
- Test files: Read to understand usage
- Documentation: Read AFTER code
- Hidden files: Check environment examples, ignore files
- Build files: Understand project setup