Most of the time, when we see something we don’t recognize, our instinct is to “quick search” it on a phone or laptop. That works, but it’s cramped and linear: one screen, one page, one scroll at a time.
We built TalkTalkXR because mixed reality gives us something better: space.
In XR, you’re not limited to a single flat page. You can spread concepts out, pin ideas around you, walk through them, and navigate a mindmap as if it were a physical object. TalkTalkXR takes that everyday moment of curiosity—“what is this, and how does it work?”—and turns it into a spatial learning experience you can explore, rearrange, and revisit directly in the world around you. It’s our attempt to turn casual curiosity into an ongoing learning habit.
TalkTalkXR turns your field of view into an interactive knowledge graph you can explore in space.
Instant analysis
With a simple gesture, you capture what you’re seeing. The app identifies key objects, infers what you’re likely interested in, and produces a structured breakdown of the scene.
Learning Mindmaps
Instead of a long paragraph, results appear as an interactive Mindmap with branches like:
- “Identify” – what you’re looking at
- “How it works” – mechanisms, structure, or function
- “Related concepts” – nearby ideas worth knowing
- “Learning path” – how to go deeper from here
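To make that branch structure concrete, here is a rough Kotlin sketch of how a Mindmap node could be modeled; the type and field names are illustrative, not the app’s actual code.

```kotlin
// Illustrative model of the four branch types above; names are assumptions.
enum class BranchKind { IDENTIFY, HOW_IT_WORKS, RELATED_CONCEPTS, LEARNING_PATH }

data class MindmapBranch(
    val kind: BranchKind,                               // which of the four branches this node belongs to
    val title: String,                                  // short label shown on the node
    val detail: String,                                 // expanded explanation revealed on selection
    val children: List<MindmapBranch> = emptyList()     // deeper nodes hanging off this branch
)

// Example: a tiny map for an espresso machine.
val example = MindmapBranch(
    kind = BranchKind.IDENTIFY,
    title = "Espresso machine",
    detail = "A lever-operated home espresso machine.",
    children = listOf(
        MindmapBranch(BranchKind.HOW_IT_WORKS, "Pressure extraction", "Hot water is forced through ground coffee at roughly 9 bar."),
        MindmapBranch(BranchKind.LEARNING_PATH, "Next steps", "Grind size, tamping, then milk texturing.")
    )
)
```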
Spatial Library
You can save any Mindmap into a persistent Spatial Library, gradually building your own knowledge collection anchored in the things you’ve actually seen.
Integrated browser
A built-in spatial browser lets you open articles, docs, or videos linked from the Mindmap without leaving MR, so deep dives live right next to the original context.
Microgesture control
Subtle finger movements handle scrolling, navigation, and panel management, keeping interactions light and fluid instead of relying on big arm gestures or controllers.
Core stack
The app is built natively for Android in Kotlin, on top of the Meta Spatial SDK.
UI/UX
The interface uses Jetpack Compose with a custom MR-friendly design system: high contrast, generous spacing, and elevated cards tuned for readability in a headset.
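As a rough illustration of that design system, this is what one of the elevated cards could look like in Compose; the component choices and dp values are illustrative, not our exact design tokens.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.CardDefaults
import androidx.compose.material3.ElevatedCard
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun MindmapCard(title: String, body: String) {
    // Elevated card with generous spacing so text stays legible at headset viewing distances.
    ElevatedCard(
        elevation = CardDefaults.elevatedCardElevation(defaultElevation = 8.dp),
        modifier = Modifier.padding(16.dp)
    ) {
        Column(modifier = Modifier.padding(24.dp)) {
            Text(text = title, style = MaterialTheme.typography.headlineSmall)
            Text(
                text = body,
                style = MaterialTheme.typography.bodyLarge,
                modifier = Modifier.padding(top = 12.dp)
            )
        }
    }
}
```

The padding and type scale are deliberately larger than typical phone defaults, since panels in MR sit further from the eyes than a handheld screen.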
AI integration
We use Gemini 2.5 Flash via the Gemini API. A strict system prompt and schema push the model to return structured JSON, which we feed directly into our Mindmap renderer.
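A minimal sketch of that flow, assuming the Google AI client SDK for Android (`com.google.ai.client.generativeai`) and kotlinx.serialization; the actual prompt, schema, and model wiring in the app may differ.

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content
import com.google.ai.client.generativeai.type.generationConfig
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

// Simplified node shape for what the renderer consumes.
@Serializable
data class MindmapNode(
    val title: String,
    val summary: String = "",
    val children: List<MindmapNode> = emptyList()
)

class SceneAnalyzer(apiKey: String) {
    private val model = GenerativeModel(
        modelName = "gemini-2.5-flash",
        apiKey = apiKey,
        // Request JSON output so the response can be parsed directly.
        generationConfig = generationConfig { responseMimeType = "application/json" },
        systemInstruction = content {
            text(
                "Identify the main object in the image and respond ONLY with JSON matching " +
                "{\"title\": string, \"summary\": string, \"children\": [...]}."
            )
        }
    )

    private val json = Json { ignoreUnknownKeys = true }

    suspend fun analyze(frame: Bitmap): MindmapNode {
        val response = model.generateContent(content {
            image(frame)
            text("Describe what I am looking at.")
        })
        // response.text holds the raw JSON string; parse it into the renderer's model.
        val raw = response.text ?: error("Empty model response")
        return json.decodeFromString(MindmapNode.serializer(), raw)
    }
}
```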
Gesture system
A custom gesture router listens to Meta’s spatial input events and maps specific microgestures (like a thumb swipe) to synthesized Android touch events, enabling natural scrolling, back/forward navigation, and tab switching in standard views.
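Here is a simplified sketch of the touch-synthesis side of that router. The microgesture type and the way events arrive from Meta’s spatial input layer are stand-in assumptions; only the MotionEvent dispatch uses standard Android APIs as-is.

```kotlin
import android.os.SystemClock
import android.view.MotionEvent
import android.view.View

// Hypothetical stand-in for the microgesture events reported by the headset.
enum class Microgesture { THUMB_SWIPE_UP, THUMB_SWIPE_DOWN, THUMB_TAP }

class GestureRouter(private val target: View) {

    fun onMicrogesture(gesture: Microgesture) {
        when (gesture) {
            Microgesture.THUMB_SWIPE_UP -> scrollBy(-SCROLL_STEP_PX)
            Microgesture.THUMB_SWIPE_DOWN -> scrollBy(SCROLL_STEP_PX)
            Microgesture.THUMB_TAP -> tap(target.width / 2f, target.height / 2f)
        }
    }

    // Synthesize a short DOWN -> MOVE -> UP drag so scrollable views treat it as a scroll.
    private fun scrollBy(deltaY: Float) {
        val x = target.width / 2f
        val y = target.height / 2f
        val downTime = SystemClock.uptimeMillis()
        dispatch(MotionEvent.ACTION_DOWN, downTime, downTime, x, y)
        dispatch(MotionEvent.ACTION_MOVE, downTime, downTime + 30, x, y + deltaY)
        dispatch(MotionEvent.ACTION_UP, downTime, downTime + 60, x, y + deltaY)
    }

    // A quick DOWN/UP pair reads as a tap in standard views.
    private fun tap(x: Float, y: Float) {
        val downTime = SystemClock.uptimeMillis()
        dispatch(MotionEvent.ACTION_DOWN, downTime, downTime, x, y)
        dispatch(MotionEvent.ACTION_UP, downTime, downTime + 50, x, y)
    }

    private fun dispatch(action: Int, downTime: Long, eventTime: Long, x: Float, y: Float) {
        val event = MotionEvent.obtain(downTime, eventTime, action, x, y, 0)
        target.dispatchTouchEvent(event)
        event.recycle()
    }

    private companion object { const val SCROLL_STEP_PX = 400f }
}
```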
Persistence
A repository layer serializes Mindmap objects to local storage and restores them on app launch, so your library and past sessions survive restarts and feel consistent.
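A minimal sketch of that repository layer, assuming kotlinx.serialization and one JSON file per Mindmap; the class and file naming are illustrative.

```kotlin
import java.io.File
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

@Serializable
data class MindmapNode(
    val title: String,
    val summary: String = "",
    val children: List<MindmapNode> = emptyList()
)

@Serializable
data class SavedMindmap(val id: String, val title: String, val root: MindmapNode)

class MindmapRepository(private val dir: File) {

    private val json = Json { prettyPrint = true }

    // One JSON file per saved Mindmap keeps writes simple and restores cheap.
    fun save(map: SavedMindmap) {
        File(dir, "${map.id}.json").writeText(json.encodeToString(SavedMindmap.serializer(), map))
    }

    // Called on app launch to rebuild the Spatial Library.
    fun loadAll(): List<SavedMindmap> =
        dir.listFiles { file -> file.extension == "json" }
            .orEmpty()
            .map { json.decodeFromString(SavedMindmap.serializer(), it.readText()) }
}
```

In the app this would be constructed with something like `MindmapRepository(context.filesDir)` so saved sessions live in app-private storage and survive restarts.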
Voice interaction
Add full voice support so you can just ask, “What’s that component?” or “Explain this setup” while looking at it—no menus or typing required.
Shared spatial learning
Enable multiple people to see the same annotations and Mindmaps anchored to the same real-world objects, unlocking teacher–student, lab, and guided-tour scenarios.
Continuous video analysis
Evolve from single-frame snapshots to real-time understanding, so explanations and labels can update dynamically as you move around and change your viewpoint.
Real-world task guidance
Analyze what users see in real time and guide them through a Mindmap for hands-on tasks such as cooking and furniture assembly.