|
11 | 11 |
|
12 | 12 | **AI agents that get smarter with every task 🧠** |
13 | 13 |
|
14 | | -Agentic Context Engine learns from your agent's successes and failures, automatically building a playbook of strategies. No prompt engineering. No fine-tuning. Just plug in and watch your agents improve. |
| 14 | +Agentic Context Engine learns from your agent's successes and failures. Just plug in and watch your agents improve. |
15 | 15 |
|
16 | | -⭐️ **Star this repo** if you're building self-improving agents |
| 16 | +Star ⭐️ this repo if you find it useful! |
17 | 17 |
|
18 | 18 | --- |
19 | 19 |
|
@@ -60,9 +60,7 @@ That's it! Your agent is now learning and improving. 🎉 |
60 | 60 |
|
61 | 61 | AI agents make the same mistakes repeatedly. Fine-tuning is expensive ($1K+ per iteration), slow (days/weeks), and requires labeled data. |
62 | 62 |
|
63 | | -**ACE changes that.** ACE enables agents to learn from execution feedback—no training data, no fine-tuning, just automatic improvement. |
64 | | - |
65 | | -ACE agents build a **"playbook"** of strategies that evolve based on experience—learning what works, what doesn't, and continuously improving. |
| 63 | +ACE enables agents to learn from execution feedback: what works, what doesn't, and how to improve continuously. No training data, no fine-tuning, just automatic improvement.
66 | 64 |
|
67 | 65 | ### Clear Benefits |
68 | 66 | - 📈 **20-35% Better Performance**: Proven improvements on complex tasks |
@@ -128,28 +126,17 @@ for task in real_world_tasks: |
128 | 126 | *Based on the [ACE research framework](https://arxiv.org/abs/2510.04618) from Stanford & SambaNova* |
129 | 127 |
|
130 | 128 | ACE uses three specialized roles that work together: |
131 | | - |
132 | 129 | 1. **🎯 Generator** - Executes tasks using learned strategies from the playbook |
133 | 130 | 2. **🔍 Reflector** - Analyzes what worked and what didn't after each execution |
134 | 131 | 3. **📝 Curator** - Updates the playbook with new strategies based on reflection |
135 | 132 |
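The three roles above can be sketched as a minimal loop. This is a hypothetical illustration only; the class and function names (`Playbook`, `generate`, `reflect`, `curate`) are placeholders, not the actual ACE API.

```python
# Hypothetical sketch of the Generator -> Reflector -> Curator loop.
# Names are illustrative placeholders, not the real ACE interfaces.

class Playbook:
    """A living list of learned strategies."""
    def __init__(self):
        self.strategies = []

    def add(self, strategy):
        # Incremental update: append new lessons, never rewrite the list.
        if strategy not in self.strategies:
            self.strategies.append(strategy)


def generate(task, playbook):
    # Generator: execute the task, conditioning on learned strategies.
    return f"result for {task!r} using {len(playbook.strategies)} strategies"


def reflect(task, result):
    # Reflector: analyze the execution; stubbed to always extract one lesson.
    return f"prefer concise answers for tasks like {task!r}"


def curate(lesson, playbook):
    # Curator: fold the reflection into the playbook.
    playbook.add(lesson)


playbook = Playbook()
for task in ["parse dates", "parse dates", "summarize log"]:
    result = generate(task, playbook)
    lesson = reflect(task, result)
    curate(lesson, playbook)

print(len(playbook.strategies))  # 2: the repeated task adds no duplicate lesson
```

Each pass through the loop leaves the playbook at least as good as before, which is why the same task stops producing the same mistake.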
|
136 | | -The magic happens in the **Playbook**—a living document of strategies that evolves with experience. |
137 | | - |
138 | | -### The Learning Loop |
139 | | - |
140 | | -``` |
141 | | -Task → Execute → Reflect → Curate → Playbook → Better Next Time |
142 | | - ↑ │ |
143 | | - └──────────────────────────────────────────────────────┘ |
144 | | -``` |
145 | | - |
146 | 133 | Each execution teaches your agent: |
147 | | - |
148 | 134 | - **✅ Successes** → Extract patterns that work |
149 | 135 | - **❌ Failures** → Learn what to avoid |
150 | 136 | - **🔧 Tool usage** → Discover which tools work best for which tasks |
151 | 137 | - **🎯 Edge cases** → Remember rare scenarios and how to handle them |
152 | 138 |
|
| 139 | +The magic happens in the **Playbook**—a living document of strategies that evolves with experience. <br> |
153 | 140 | **Key innovation:** All learning happens **in context** through incremental updates—no fine-tuning, no training data, and complete transparency into what your agent learned. This approach prevents "context collapse" by preserving valuable strategies rather than rewriting the entire playbook. |
154 | 141 |
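The incremental-update idea can be shown with a tiny sketch. Assume the playbook is a mapping from topic to strategy; `apply_delta` is a hypothetical helper, not part of ACE, illustrating why merging a small delta preserves old strategies where a full rewrite would risk losing them.

```python
# Hypothetical illustration of incremental "delta" updates to a playbook.
# apply_delta is an illustrative helper, not the actual ACE API.

def apply_delta(playbook, delta):
    """Merge a curator delta into the playbook without rewriting it."""
    merged = dict(playbook)          # keep every existing strategy
    for key, strategy in delta.items():
        merged[key] = strategy       # add or refine only the touched entries
    return merged


playbook = {"dates": "normalize to ISO 8601 before comparing"}
delta = {"retries": "retry flaky API calls up to 3 times"}

playbook = apply_delta(playbook, delta)
print(sorted(playbook))  # ['dates', 'retries']
```

Because the delta only touches the entries it names, earlier strategies survive every update, which is the property that prevents "context collapse."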
|
155 | 142 | ```mermaid |
@@ -246,8 +233,7 @@ If you use ACE in your research, please cite: |
246 | 233 |
|
247 | 234 | <br> |
248 | 235 |
|
249 | | -**⭐ Star this repo if you find it useful!** <br><br> |
250 | | - |
251 | | -**Built with ❤️ by [Kayba](https://kayba.ai) and the open-source community** |
| 236 | +**⭐ Star this repo if you find it useful!** <br> |
| 237 | +**Built with ❤️ by [Kayba](https://kayba.ai) and the open-source community.** |
252 | 238 |
|
253 | 239 | </div> |