The transport-agnostic real-time framework for the JVM.
Build once with @Agent — stream to Web, Slack, Telegram, MCP, A2A, and any transport.
Atmosphere was built on one idea: your application code shouldn't care how the client is connected. Write once, and the framework delivers to every subscriber — whether they're on a WebSocket, an SSE stream, a long-polling loop, a gRPC channel, or an MCP session. Pluggable AI streaming adapters cover Spring AI, LangChain4j, Google ADK, Embabel, and any OpenAI-compatible API.
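The core contract can be pictured as a broadcaster that fans one message out to subscribers behind different transports. A toy sketch of that idea — illustrative only, not Atmosphere's real `Broadcaster` API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of transport-agnostic fan-out; Atmosphere's real Broadcaster is far richer.
public class FanOutSketch {

    interface Subscriber {
        void deliver(String message);
        String transport();
    }

    // A connected client: the transport name is metadata, not a code path.
    record Client(String transport, List<String> inbox) implements Subscriber {
        public void deliver(String message) { inbox.add(message); }
    }

    static void broadcast(List<Subscriber> subscribers, String message) {
        // Application code calls broadcast once; every subscriber receives the
        // message regardless of which transport it connected with.
        for (Subscriber s : subscribers) s.deliver(message);
    }

    public static void main(String[] args) {
        List<Subscriber> subs = new ArrayList<>();
        subs.add(new Client("websocket", new ArrayList<>()));
        subs.add(new Client("sse", new ArrayList<>()));
        subs.add(new Client("long-polling", new ArrayList<>()));
        broadcast(subs, "hello");
    }
}
```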
One annotation. Commands, tools, skill file, and multi-channel delivery — all wired automatically:
```java
@Agent(name = "devops", skillFile = "prompts/devops-skill.md",
       description = "DevOps assistant with monitoring and deployment")
public class DevOpsAgent {

    @Command(value = "/status", description = "Show service health")
    public String status() { return "All services healthy"; }

    @Command(value = "/deploy", description = "Deploy to staging",
             confirm = "Deploy to staging environment?")
    public String deploy(String args) { return "Deployed " + args; }

    @AiTool(name = "check_service", description = "Check service health")
    public String checkService(@Param("service") String service) { ... }

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message);
    }
}
```

Slash commands execute instantly (no LLM cost). Natural language goes through the full AI pipeline — memory, tools, guardrails, RAG, metrics — on every transport.
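The dispatch rule behind that split is simple: a message beginning with a registered slash command short-circuits to its handler, and everything else enters the AI pipeline. A standalone sketch of the decision — illustrative only, not Atmosphere's actual router:

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of command-vs-AI dispatch, not the framework's real CommandRouter.
public class RouterSketch {

    // Stand-ins for handlers registered via @Command.
    static final Map<String, Function<String, String>> COMMANDS = Map.of(
            "/status", args -> "All services healthy",
            "/deploy", args -> "Deployed " + args);

    static String dispatch(String message) {
        String[] parts = message.trim().split("\\s+", 2);
        Function<String, String> cmd = COMMANDS.get(parts[0]);
        if (cmd != null) {
            // Slash command: executed locally, no LLM call.
            return cmd.apply(parts.length > 1 ? parts[1] : "");
        }
        // Anything else would flow through the AI pipeline (memory, tools, RAG, ...).
        return "[AI pipeline] " + message;
    }

    public static void main(String[] args) {
        System.out.println(dispatch("/status"));          // All services healthy
        System.out.println(dispatch("/deploy v1.2"));     // Deployed v1.2
        System.out.println(dispatch("is prod healthy?")); // [AI pipeline] is prod healthy?
    }
}
```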
Try it now — generate an agent from a skill file:
```shell
brew install Atmosphere/tap/atmosphere   # or: curl -fsSL https://raw.githubusercontent.com/Atmosphere/atmosphere/main/cli/install.sh | sh
atmosphere new my-agent --skill-file skill.md
cd my-agent && LLM_API_KEY=sk-... ./mvnw spring-boot:run
```

Or run a built-in sample:

```shell
LLM_API_KEY=sk-... atmosphere run spring-boot-dentist-agent
```

Open http://localhost:8080/atmosphere/console/ and type `/help`, `/firstaid`, or just describe your broken tooth. To connect Slack or Telegram, create a bot and set the token as an environment variable.
Add atmosphere-channels to the classpath and set a bot token — commands and AI route to Slack, Telegram, Discord, WhatsApp, and Messenger automatically. See docs/channels.md for the full channel matrix and setup.
The skillFile is a markdown file that becomes the system prompt verbatim. Its sections are also parsed for protocol metadata:
```markdown
# DevOps Assistant

You are a DevOps assistant that helps teams monitor services.

## Skills
- Monitor service health and performance
- Manage deployments to staging and production

## Guardrails
- Never execute production deployments without confirmation
```

See the DevOps skill file and Dentist skill file for real examples. Full samples: spring-boot-agent-chat (DevOps agent) and spring-boot-dentist-agent (multi-channel with Slack and Telegram).
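How a skill file splits into a base prompt plus metadata sections can be sketched in a few lines of plain Java. This is a simplified stand-in, not Atmosphere's actual parser:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for skill-file section parsing, not the framework's parser.
public class SkillFileSketch {

    // Split a markdown skill file into "## Heading" -> body sections.
    // Text before the first "## " lands under the "" key (the base prompt).
    static Map<String, String> sections(String markdown) {
        Map<String, String> out = new LinkedHashMap<>();
        String current = "";
        StringBuilder body = new StringBuilder();
        for (String line : markdown.split("\n")) {
            if (line.startsWith("## ")) {
                out.put(current, body.toString().trim());
                current = line.substring(3).trim();
                body = new StringBuilder();
            } else {
                body.append(line).append('\n');
            }
        }
        out.put(current, body.toString().trim());
        return out;
    }

    public static void main(String[] args) {
        String md = "# DevOps Assistant\nYou are a DevOps assistant.\n"
                  + "## Skills\n- Monitor service health\n"
                  + "## Guardrails\n- Never deploy without confirmation";
        System.out.println(sections(md).keySet()); // [, Skills, Guardrails]
    }
}
```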
@Agent desugars to @AiEndpoint + CommandRouter + protocol bridges. For simpler cases without commands or channels, you can use @AiEndpoint directly:
```java
@AiEndpoint(path = "/atmosphere/ai-chat",
            systemPrompt = "You are a helpful assistant.",
            conversationMemory = true)
public class MyChat {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message);
    }
}
```

Or skip Java entirely — zero-code AI chat:

```shell
LLM_API_KEY=your-key atmosphere run spring-boot-ai-console
```

Connect to any Atmosphere endpoint from any framework. Install with `npm install atmosphere.js`.
```jsx
// React
import { AtmosphereProvider, useStreaming } from 'atmosphere.js/react';

function App() {
  return <AtmosphereProvider><Chat /></AtmosphereProvider>;
}

function Chat() {
  const { fullText, isStreaming, send } = useStreaming({
    request: { url: '/atmosphere/ai-chat', transport: 'websocket' },
  });
  return (
    <div>
      <button onClick={() => send('What is Atmosphere?')}>Ask</button>
      <p>{fullText}</p>
      {isStreaming && <span>Generating...</span>}
    </div>
  );
}
```

**Vue**
```vue
<script setup lang="ts">
import { useStreaming } from 'atmosphere.js/vue';

const { fullText, isStreaming, send } = useStreaming({
  url: '/atmosphere/ai-chat',
  transport: 'websocket',
});
</script>

<template>
  <button @click="send('What is Atmosphere?')">Ask</button>
  <p>{{ fullText }}</p>
  <span v-if="isStreaming">Generating...</span>
</template>
```

**Svelte**
```svelte
<script>
import { createStreamingStore } from 'atmosphere.js/svelte';

const { store, send } = createStreamingStore({
  url: '/atmosphere/ai-chat',
  transport: 'websocket',
});
</script>

<button on:click={() => send('What is Atmosphere?')}>Ask</button>
<p>{$store.fullText}</p>
{#if $store.isStreaming}<span>Generating...</span>{/if}
```

**React Native**
```jsx
import { setupReactNative, AtmosphereProvider, useStreamingRN } from 'atmosphere.js/react-native';
import { View, Button, Text } from 'react-native';

setupReactNative();

function App() {
  return <AtmosphereProvider><Chat /></AtmosphereProvider>;
}

function Chat() {
  const { fullText, isStreaming, isConnected, send } = useStreamingRN({
    request: { url: 'https://your-server.com/atmosphere/ai-chat', transport: 'websocket' },
  });
  return (
    <View>
      <Button title="Ask" onPress={() => send('What is Atmosphere?')} />
      <Text>{fullText}</Text>
      {isStreaming && <Text>Generating...</Text>}
    </View>
  );
}
```

Auto-connects on mount, streams tokens as they arrive, cleans up on unmount. See the atmosphere.js README for the full API.
Tools are declared with @AiTool — portable across all backends:
```java
public class AssistantTools {

    @AiTool(name = "get_weather", description = "Get weather for a city")
    public String getWeather(@Param("city") String city) {
        return weatherService.lookup(city);
    }
}
```

Swap the AI backend by changing one Maven dependency — no tool code changes:
| Backend | Dependency | Bridged via |
|---|---|---|
| Built-in (Gemini/OpenAI/Ollama/Embacle) | atmosphere-ai | direct |
| Spring AI | atmosphere-spring-ai | SpringAiToolBridge |
| LangChain4j | atmosphere-langchain4j | LangChain4jToolBridge |
| Google ADK | atmosphere-adk | AdkToolBridge |
| Embabel | atmosphere-embabel | EmbabelAiSupport |
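For example, moving tools from the built-in backend to Spring AI is a dependency change only. The snippet below is a sketch — the `org.atmosphere` groupId is an assumption; check the module reference for the exact coordinates:

```xml
<!-- Before: built-in backend
<dependency>
  <groupId>org.atmosphere</groupId>
  <artifactId>atmosphere-ai</artifactId>
</dependency>
-->

<!-- After: Spring AI backend; tool code unchanged, bridged via SpringAiToolBridge -->
<dependency>
  <groupId>org.atmosphere</groupId>
  <artifactId>atmosphere-spring-ai</artifactId>
</dependency>
```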
See spring-boot-ai-tools for the full tool-calling sample and spring-boot-ai-classroom for multi-persona conversation memory.
Three protocols for the agentic ecosystem, all riding Atmosphere's transport:
```java
// MCP — expose tools to AI agents (Claude Desktop, Copilot, Cursor)
@McpServer(name = "my-tools", path = "/atmosphere/mcp")
public class MyTools {

    @McpTool(name = "ask_ai", description = "Ask AI and stream the answer")
    public String askAi(@McpParam(name = "question") String q, StreamingSession session) {
        session.stream(q);
        return "streaming";
    }
}

// A2A — agent-to-agent discovery and task delegation (Google/Linux Foundation)
@A2aServer(name = "weather-agent", endpoint = "/atmosphere/a2a")
public class WeatherAgent {

    @A2aSkill(id = "get-weather", name = "Get Weather", description = "Weather for a city")
    @A2aTaskHandler
    public void weather(TaskContext task, @A2aParam(name = "city") String city) {
        task.addArtifact(Artifact.text(weatherService.lookup(city)));
        task.complete("Done");
    }
}

// AG-UI — stream agent state to frontends (CopilotKit compatible)
@AgUiEndpoint(path = "/atmosphere/agui")
public class Assistant {

    @AgUiAction
    public void onRun(RunContext run, StreamingSession session) {
        session.emit(new AiEvent.AgentStep("analyze", "Thinking...", Map.of()));
        session.emit(new AiEvent.TextDelta("Hello! "));
        session.emit(new AiEvent.TextComplete("Hello!"));
    }
}
```

| Protocol | Purpose | Sample |
|---|---|---|
| MCP | Agent ↔ Tools | spring-boot-mcp-server |
| A2A | Agent ↔ Agent | spring-boot-a2a-agent |
| AG-UI | Agent ↔ Frontend | spring-boot-agui-chat |
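On the wire, MCP is JSON-RPC 2.0: a client such as Claude Desktop would invoke the `ask_ai` tool above with a `tools/call` request shaped roughly like this (method and field names per the MCP specification; the payload itself is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_ai",
    "arguments": { "question": "What is Atmosphere?" }
  }
}
```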
The classic Atmosphere pattern — works with WebSocket, SSE, Long-Polling, gRPC, or any transport:
```java
@ManagedService(path = "/chat")
public class Chat {

    @Ready
    public void onReady(AtmosphereResource r) {
        log.info("{} connected via {}", r.uuid(), r.transport());
    }

    @Message(encoders = JacksonEncoder.class, decoders = JacksonDecoder.class)
    public ChatMessage onMessage(ChatMessage message) {
        return message; // broadcast to all subscribers
    }
}
```

```shell
# Install the Atmosphere CLI
curl -fsSL https://raw.githubusercontent.com/Atmosphere/atmosphere/main/cli/install.sh | sh

# Browse all 28+ samples and pick one to run
atmosphere install

# Run samples directly
atmosphere run spring-boot-dentist-agent                        # multi-channel agent
atmosphere run spring-boot-ai-chat --env LLM_API_KEY=your-key   # AI streaming chat
atmosphere run spring-boot-chat                                 # classic real-time chat

# Scaffold a new project
atmosphere new my-app --template ai-chat
```

Or with npx (zero install):

```shell
npx create-atmosphere-app my-chat-app
npx create-atmosphere-app my-ai-app --template ai-chat
```

See cli/README.md for all commands and options.
Atmosphere has powered real-time Java applications for 18 years — from long-polling on Servlet 2.x to virtual threads on JDK 21.
- 280+ releases on Maven Central
- 18 years in continuous production — trading floors, healthcare, collaboration
- Evolution: Servlet 3.0 async (2008) → WebSocket (2013) → SSE (2016) → Virtual Threads (2024) → AI Agents (2025) → Multi-Agent Teams (2026)
- Atmosphere 3.x still maintained on the atmosphere-3.x branch
Agents — dentist agent (multi-channel), devops agent, multi-agent startup team
AI / LLM Streaming — built-in, Spring AI, LangChain4j, Google ADK, Embabel, tool calling, RAG, model routing, AI classroom
Protocols — MCP server, A2A agent, AG-UI chat
Infrastructure — durable sessions, OpenTelemetry, channels
Chat & Messaging — spring-boot-chat, quarkus-chat, grpc-chat, embedded-jetty
25+ modules: core transports, agents, AI adapters, protocol bridges, cloud infrastructure, framework starters, and clients. Full module reference →
Java 21+ · Spring Boot 4.0+ · Quarkus 3.21+ · JDK 21 virtual threads used by default.
Tutorial · Full docs · CLI · Project generator (JBang) · Samples · Javadoc
Need help? Commercial support and consulting available through Async-IO.org.
Apache 2.0 — Copyright 2008-2026 Async-IO.org
