Chuyi DAI

PhD Student · 🇨🇦 University of Alberta

chuyi.dai@ualberta.ca

Researching knowledge–data models and exploring LLM agents.

Claude Sonnet 4.5's 'Imagine with Claude': An Experiment in Generative Desktop

October 1, 2025
8 min read
Chuyi DAI
Claude · AI · Generative UI · Operating Systems · Anthropic

Last week, Anthropic released Sonnet 4.5 with a short-term research preview called "Imagine with Claude." Unlike previous model upgrades, this preview is not just about improved benchmarks—it's an experiment in creating an "interactive generative desktop." You can interact with it like a simplified operating system, clicking on icons for Claude Code, the Claw'd game console, and other applications. Claude generates the corresponding software or interface in real-time as you interact.

This experience differs fundamentally from running benchmarks or making API calls. It places the model directly in a "build software on-demand" scenario, showcasing Sonnet 4.5's improvements in long-running tasks, autonomy, and code reliability.

How It Works: Claude's "On-Demand Software Generation"

Observed from the outside, the system behaves like a desktop shell:

The Shell Layer provides icons, windows, input boxes, and other interactive elements.

User Events (clicks, inputs, selections) are transformed into context and concatenated into prompts.

Claude Sonnet 4.5 returns code or interface definitions in real time (mostly frontend HTML/JS/React snippets).

A Sandbox Environment compiles the snippet and hot-reloads the view, so users see the result immediately.

This creates a closed loop: Event → Model Generates Code → Sandbox Executes → New Interface → User Interacts Again.

Unlike running a one-time script, this emphasizes continuous interaction and iterative development, making it feel like Claude is genuinely "building applications" rather than merely interpreting commands.

Comparison with Google's Neural OS Project

Google recently demonstrated a similar exploration—a "Neural Operating System" prototype based on Gemini 2.5 Flash-Lite. The two projects share similarities but also differ in key ways:

Similarities

  • Both simulate a "generative operating system" experience where the model produces interfaces or applications instantly after user interactions.
  • Both emphasize low latency and immediate feedback, creating the sensation of using an actual OS.

Differences

Google's Prototype leans toward "generating the next screen UI":

  • It abstracts user interactions into JSON format.
  • The model generates full-screen HTML, which is then streamed and rendered.
  • The emphasis is on low-latency streaming experiences and mechanisms like "interface constitution + UI trajectory caching."

Anthropic's Preview leans toward "real-time software generation":

  • Users receive not just interfaces but runnable applications/tools.
  • The emphasis is on Claude Sonnet 4.5's capabilities in long-running autonomy and complex task loops, rather than simple UI screen transitions.

In other words, Google is exploring "generative interface protocols," while Anthropic is demonstrating "models that can directly produce and run applications."

Final Thoughts

Although "Imagine with Claude" is only a five-day experiment, it showcases a compelling direction: models are no longer limited to answering questions or writing snippets of code—they can directly function as "application generation engines." If this capability can be integrated with real production environments in the future, it may reshape development tools and even introduce new forms of operating systems.

The implications extend beyond convenience. As models become capable of sustained, autonomous software generation, we may witness a fundamental shift in how we conceptualize computing interfaces—from static environments we navigate to dynamic systems that materialize applications in response to our needs.


References:

  • Anthropic's Sonnet 4.5 Release Announcement
  • Google's Neural Operating System Research Preview
  • Research on Generative UI and Model-Driven Interfaces