Building AI agents no longer requires a constant cloud connection or a massive server budget. Today, the most innovative models live where your users are: on the device. Join this hands-on workshop to master Edge AI, an emerging architectural pattern that offers zero per-request costs, total data privacy, and lightning-fast offline performance. We’ll move past simple chat windows and dive into building fully functional, "local-first" agents using lightweight models like Gemma, Llama, and DeepSeek.
What we’ll tackle:
On-Device Function Calling: Teach your agent to interact with local APIs (contacts, calendars, and sensors) with no network round trips.
Local RAG: Query private documents and notes for context-aware answers that never leave the user’s phone.
The Hybrid Strategy: Learn the "Best of Both Worlds" architecture that balances edge privacy with cloud power.
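To make the first agenda item concrete, here is a minimal sketch of the on-device function-calling loop: the local model is assumed to emit a JSON tool call, and the app parses it and dispatches to a device API with no network involved. The tool names, JSON shape, and stand-in APIs below are illustrative, not the workshop's actual code.

```python
# Illustrative sketch: dispatching a local model's tool call on-device.
# Assumption: the model emits JSON like {"tool": "...", "args": {...}}.
import json

# Stand-ins for device APIs (contacts, calendar); a real app would call the OS.
LOCAL_TOOLS = {
    "get_contact": lambda name: {"name": name, "phone": "+1-555-0100"},
    "next_event": lambda: {"title": "Standup", "time": "09:30"},
}

def dispatch(model_output: str) -> dict:
    """Parse the model's tool call and run it locally -- no cloud round trip."""
    call = json.loads(model_output)
    tool = LOCAL_TOOLS[call["tool"]]
    return tool(**call.get("args", {}))

result = dispatch('{"tool": "get_contact", "args": {"name": "Ana"}}')
print(result["phone"])
```

The key design point is that the model only *proposes* structured calls; the app stays in control of what actually executes on the device.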
The Project: We’ll get our hands dirty building a working prototype using Flutter, Gemma, and GenKit. (Native Android and iOS tracks are also supported!) By the end, you’ll understand the honest trade-offs between model size, performance, and capability.
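The local-RAG idea from the agenda can be sketched in a few lines: retrieve the most relevant note with a similarity search that runs entirely on-device, then hand that context to the local model. The toy bag-of-words scoring and sample notes here are placeholders; a real pipeline would use an embedding model, but the retrieval shape is the same.

```python
# Toy local-RAG retrieval: everything stays on the device.
from collections import Counter
import math

NOTES = [
    "Flight to Berlin departs Friday at 9am from gate 12.",
    "Dentist appointment next Tuesday at 3pm.",
    "Grocery list: eggs, milk, coffee beans.",
]

def vectorize(text: str) -> Counter:
    """Toy 'embedding': lowercase word counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, notes: list[str]) -> str:
    """Return the note most similar to the query; the data never leaves the phone."""
    q = vectorize(query)
    return max(notes, key=lambda n: cosine(q, vectorize(n)))

context = retrieve("when is my flight to berlin", NOTES)
print(context)  # prepend this to the local model's prompt as context
```

Swapping the toy vectorizer for a small on-device embedding model turns this into the private, offline document search the session builds on.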