I just finished watching Apple’s keynote, and like most years, it was a predictable lineup of iPhones, AirPods, and Apple Watches. The hardware got its annual refresh, but there wasn’t anything that felt new or unexpected. The biggest topic of conversation was what Apple didn’t show: updates on its lagging AI strategy.
The “Apple Intelligence” feature set still feels underwhelming, and it sent me back to the Knowledge Navigator, an AI agent concept video Apple made in 1987 that may hint at what the company is working on today.
I explore that and show you my own AI agent workflows in my latest video.
I first saw the Knowledge Navigator video as a kid in the early ’90s, when some friends and I formed an Apple user group that received promotional videos like this from Apple.
At the time, the Knowledge Navigator seemed like science fiction, but watching it now, it feels like a plausible direction for Apple’s AI ambitions. The video depicts a professor interacting with a digital assistant that not only responds to commands but anticipates needs—pulling up articles, reminding him of events, leaving messages, and even coordinating schedules, presumably with other people’s agents.
What struck me most was how the agent handled tasks on the professor’s behalf, like trying to reach someone by phone, leaving a message, and then being ready to relay instructions when she called back. It even set up meetings.
If both parties had agents, they could negotiate directly without human back-and-forth. That kind of invisible efficiency is something I’d welcome—scheduling meetings is one of the biggest time sinks I deal with. With language models as capable as they are now, this no longer feels like far-off science fiction.
I suspect Apple is quietly working on this agent model. Their recently released Apple Invites app caught my attention because it seemed like such an odd standalone product, but it would make sense as a building block in a future where AI agents manage more of our day-to-day logistics.
When Apple is finally ready to make their big AI push, I think it will be around agents. “I’ll have my Siri call your Siri and we’ll do lunch” might be in our near future.
I’ve been experimenting with this idea myself. Using a workflow automation tool called n8n, I’ve built a few agents that automate parts of my routine. One sends me a daily morning email with my calendar and curated stories from the gadget and cord-cutting sites I follow. It uses Google’s Gemini model, via its API, to filter the RSS feeds and highlight what I might want to cover on my channel. The setup works well enough that it reminds me of the professor’s morning briefing in that Apple demo.
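To give a feel for how that briefing workflow fits together, here’s a rough Python sketch. The feed XML and topic list are invented for illustration, and a simple keyword check stands in for the LLM call my actual n8n workflow makes:

```python
import xml.etree.ElementTree as ET

# A toy feed standing in for the real RSS sources the workflow pulls.
SAMPLE_RSS = """<rss version="2.0"><channel>
<item><title>New cord-cutting deals this week</title><link>http://example.com/a</link></item>
<item><title>Quarterly earnings recap</title><link>http://example.com/b</link></item>
<item><title>Hands-on with a new streaming gadget</title><link>http://example.com/c</link></item>
</channel></rss>"""

def parse_rss(xml_text):
    """Pull title/link pairs out of an RSS document."""
    root = ET.fromstring(xml_text)
    return [{"title": item.findtext("title"), "link": item.findtext("link")}
            for item in root.iter("item")]

# In the real workflow, Gemini scores each headline against my coverage
# topics; a keyword match is a cheap stand-in so this runs without an API key.
TOPICS = ("cord-cutting", "streaming", "gadget")

def filter_stories(stories):
    return [s for s in stories if any(t in s["title"].lower() for t in TOPICS)]

def morning_briefing(xml_text):
    picks = filter_stories(parse_rss(xml_text))
    return "\n".join(["Morning briefing:"] +
                     [f"- {s['title']} ({s['link']})" for s in picks])

print(morning_briefing(SAMPLE_RSS))
```

The real version swaps the keyword check for an LLM prompt and emails the result on a schedule, but the shape of the pipeline is the same: fetch, filter, format.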
Scheduling is trickier. I’ve tried building an agent that can handle booking meetings based on my availability, and while it sometimes works, it’s far from reliable. Getting the models to properly parse my calendar was a challenge until GPT-5 came along, but even then, the success rate isn’t high enough to trust it with real interactions. Still, the framework is there, and it feels like a glimpse of what’s possible once the technology matures.
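The availability piece underneath that scheduling agent ultimately reduces to interval math over busy blocks. A minimal sketch of that step (the function and its names are my own illustration, not anything from n8n or the model):

```python
from datetime import datetime, timedelta

def free_slots(busy, day_start, day_end, duration):
    """Return the start of each gap between busy blocks that can hold `duration`."""
    slots = []
    cursor = day_start
    for start, end in sorted(busy):
        if start - cursor >= duration:
            slots.append(cursor)   # the gap before this busy block fits
        cursor = max(cursor, end)
    if day_end - cursor >= duration:
        slots.append(cursor)       # room left at the end of the day
    return slots

day = datetime(2025, 9, 10)
busy = [(day.replace(hour=13), day.replace(hour=14, minute=30)),
        (day.replace(hour=10), day.replace(hour=11))]
options = free_slots(busy, day.replace(hour=9), day.replace(hour=17),
                     timedelta(hours=1))
for slot in options:
    print(slot.strftime("%H:%M"))
```

The hard part isn’t this arithmetic; it’s getting a model to reliably translate a messy real-world calendar and a vague request like “sometime next week” into clean inputs for it.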
Right now, most consumers are engaging with AI through search-like interactions, asking questions and getting quick answers more efficiently than searching on their own. But the real leap will come when agents can act on our behalf, working with other agents to complete tasks without constant human oversight. That’s the vision Apple hinted at nearly 40 years ago, and it may be the key to making their AI efforts feel truly impactful when they finally step into this space.
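The agent-to-agent handshake that vision implies can be reduced to a toy protocol: one agent proposes open times in preference order, and the other accepts the first it can also make. Everything here, from the slot strings to the function name, is purely illustrative:

```python
def negotiate(proposer_slots, responder_slots):
    """Return the first slot the proposer offers that the responder can also make."""
    responder_free = set(responder_slots)
    for slot in proposer_slots:   # walk the proposer's preference order
        if slot in responder_free:
            return slot
    return None  # no overlap; a real agent would widen the search

mine = ["Tue 14:00", "Wed 10:00", "Thu 09:00"]
yours = ["Wed 10:00", "Thu 09:00"]
print(negotiate(mine, yours))
```

The toy version finds the overlap in one pass; the real challenge is everything around it, like trust, identity, and two assistants from different companies agreeing on a common protocol at all.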

