Atmosphere

The transport-agnostic real-time framework for the JVM.
Build once with @Agent — stream to Web, Slack, Telegram, MCP, A2A, and any transport.



Atmosphere was built on one idea: your application code shouldn't care how the client is connected. Write once, and the framework delivers to every subscriber — whether they're on a WebSocket, an SSE stream, a long-polling loop, a gRPC channel, or an MCP session. It also ships pluggable AI streaming adapters for Spring AI, LangChain4j, Google ADK, Embabel, and any OpenAI-compatible API.

@Agent — Build AI Agents That Work Everywhere

One annotation. Commands, tools, skill file, and multi-channel delivery — all wired automatically:

@Agent(name = "devops", skillFile = "prompts/devops-skill.md",
       description = "DevOps assistant with monitoring and deployment")
public class DevOpsAgent {

    @Command(value = "/status", description = "Show service health")
    public String status() { return "All services healthy"; }

    @Command(value = "/deploy", description = "Deploy to staging",
             confirm = "Deploy to staging environment?")
    public String deploy(String args) { return "Deployed " + args; }

    @AiTool(name = "check_service", description = "Check service health")
    public String checkService(@Param("service") String service) { ... }

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message);
    }
}

Slash commands execute instantly (no LLM cost). Natural language goes through the full AI pipeline — memory, tools, guardrails, RAG, metrics — on every transport.
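The split described above can be sketched as a toy dispatcher. This is illustrative only — Atmosphere wires its real CommandRouter from the annotations automatically, and the class and method names below are invented for the sketch:

```java
import java.util.Map;
import java.util.function.Function;

// Toy version of the routing idea: messages starting with "/" dispatch
// straight to a command handler (no LLM call); everything else goes to
// the AI pipeline. Not the framework's actual CommandRouter.
public class CommandDispatch {

    private final Map<String, Function<String, String>> commands;
    private final Function<String, String> llmPipeline;

    public CommandDispatch(Map<String, Function<String, String>> commands,
                           Function<String, String> llmPipeline) {
        this.commands = commands;
        this.llmPipeline = llmPipeline;
    }

    public String handle(String message) {
        if (message.startsWith("/")) {
            // Slash command: split off arguments and dispatch directly.
            String[] parts = message.split("\\s+", 2);
            Function<String, String> cmd = commands.get(parts[0]);
            if (cmd != null) {
                return cmd.apply(parts.length > 1 ? parts[1] : "");
            }
            return "Unknown command: " + parts[0];
        }
        // Natural language: hand off to the full AI pipeline.
        return llmPipeline.apply(message);
    }
}
```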

Try it now — generate an agent from a skill file:

brew install Atmosphere/tap/atmosphere    # or: curl -fsSL https://raw.githubusercontent.com/Atmosphere/atmosphere/main/cli/install.sh | sh
atmosphere new my-agent --skill-file skill.md
cd my-agent && LLM_API_KEY=sk-... ./mvnw spring-boot:run

Or run a built-in sample:

LLM_API_KEY=sk-... atmosphere run spring-boot-dentist-agent

Open http://localhost:8080/atmosphere/console/ and type /help, /firstaid, or just describe your broken tooth. To connect Slack or Telegram, create a bot and set the token as an environment variable.

Multi-Channel — One Agent, Every Platform

Add atmosphere-channels to the classpath and set a bot token — commands and AI route to Slack, Telegram, Discord, WhatsApp, and Messenger automatically. See docs/channels.md for the full channel matrix and setup.
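A typical setup looks like the fragment below. The variable names here are illustrative assumptions, not the framework's documented names — check docs/channels.md for the exact names each channel expects:

```shell
# Hypothetical variable names for illustration only -- consult
# docs/channels.md for the names each channel actually reads.
export SLACK_BOT_TOKEN=xoxb-your-token        # Slack bot token
export TELEGRAM_BOT_TOKEN=123456:your-token   # Telegram bot token
LLM_API_KEY=sk-... ./mvnw spring-boot:run
```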

Skill File — System Prompt + Agent Metadata

The skillFile is a markdown file that becomes the system prompt verbatim. Its sections are also parsed for protocol metadata:

# DevOps Assistant
You are a DevOps assistant that helps teams monitor services.

## Skills
- Monitor service health and performance
- Manage deployments to staging and production

## Guardrails
- Never execute production deployments without confirmation

See the DevOps skill file and Dentist skill file for real examples. Full samples: spring-boot-agent-chat (DevOps agent) and spring-boot-dentist-agent (multi-channel with Slack and Telegram).
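One way to picture the metadata pass over a skill file is a simple split on `## ` headings. This is a minimal sketch of the idea, not Atmosphere's actual parser — the class and method names are invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: splits a markdown skill file into named "## ..." sections.
// The whole file still serves as the system prompt; sections like "Skills" and
// "Guardrails" could then be read as protocol metadata.
public class SkillSections {

    // Maps section name -> section body. Text before the first "## " heading
    // (title plus persona line) is stored under the "" key.
    public static Map<String, String> parse(String markdown) {
        Map<String, String> sections = new LinkedHashMap<>();
        String current = "";
        StringBuilder body = new StringBuilder();
        for (String line : markdown.split("\n")) {
            if (line.startsWith("## ")) {
                sections.put(current, body.toString().trim());
                current = line.substring(3).trim();
                body = new StringBuilder();
            } else {
                body.append(line).append('\n');
            }
        }
        sections.put(current, body.toString().trim());
        return sections;
    }
}
```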

Under the Hood

@Agent desugars to @AiEndpoint + CommandRouter + protocol bridges. For simpler cases without commands or channels, you can use @AiEndpoint directly:

@AiEndpoint(path = "/atmosphere/ai-chat",
            systemPrompt = "You are a helpful assistant.",
            conversationMemory = true)
public class MyChat {

    @Prompt
    public void onPrompt(String message, StreamingSession session) {
        session.stream(message);
    }
}

Or skip Java entirely — zero-code AI chat:

LLM_API_KEY=your-key atmosphere run spring-boot-ai-console

Client — atmosphere.js

Connect to any Atmosphere endpoint from any framework. Install with npm install atmosphere.js.

// React
import { AtmosphereProvider, useStreaming } from 'atmosphere.js/react';

function App() {
  return <AtmosphereProvider><Chat /></AtmosphereProvider>;
}

function Chat() {
  const { fullText, isStreaming, send } = useStreaming({
    request: { url: '/atmosphere/ai-chat', transport: 'websocket' },
  });

  return (
    <div>
      <button onClick={() => send('What is Atmosphere?')}>Ask</button>
      <p>{fullText}</p>
      {isStreaming && <span>Generating...</span>}
    </div>
  );
}

Vue

<script setup lang="ts">
import { useStreaming } from 'atmosphere.js/vue';

const { fullText, isStreaming, send } = useStreaming({
  url: '/atmosphere/ai-chat',
  transport: 'websocket',
});
</script>

<template>
  <button @click="send('What is Atmosphere?')">Ask</button>
  <p>{{ fullText }}</p>
  <span v-if="isStreaming">Generating...</span>
</template>

Svelte

<script>
  import { createStreamingStore } from 'atmosphere.js/svelte';

  const { store, send } = createStreamingStore({
    url: '/atmosphere/ai-chat',
    transport: 'websocket',
  });
</script>

<button on:click={() => send('What is Atmosphere?')}>Ask</button>
<p>{$store.fullText}</p>
{#if $store.isStreaming}<span>Generating...</span>{/if}

React Native

import { setupReactNative, AtmosphereProvider } from 'atmosphere.js/react-native';
import { useStreamingRN } from 'atmosphere.js/react-native';

setupReactNative();

function App() {
  return <AtmosphereProvider><Chat /></AtmosphereProvider>;
}

function Chat() {
  const { fullText, isStreaming, isConnected, send } = useStreamingRN({
    request: { url: 'https://your-server.com/atmosphere/ai-chat', transport: 'websocket' },
  });

  return (
    <View>
      <Button title="Ask" onPress={() => send('What is Atmosphere?')} />
      <Text>{fullText}</Text>
      {isStreaming && <Text>Generating...</Text>}
    </View>
  );
}

Auto-connects on mount, streams tokens as they arrive, cleans up on unmount. See the atmosphere.js README for the full API.

AI Tools — Framework-Agnostic

Tools are declared with @AiTool — portable across all backends:

public class AssistantTools {

    @AiTool(name = "get_weather", description = "Get weather for a city")
    public String getWeather(@Param("city") String city) {
        return weatherService.lookup(city);
    }
}

Swap the AI backend by changing one Maven dependency — no tool code changes:

| Backend | Dependency | Bridged via |
|---|---|---|
| Built-in (Gemini/OpenAI/Ollama/Embabel) | atmosphere-ai | direct |
| Spring AI | atmosphere-spring-ai | SpringAiToolBridge |
| LangChain4j | atmosphere-langchain4j | LangChain4jToolBridge |
| Google ADK | atmosphere-adk | AdkToolBridge |
| Embabel | atmosphere-embabel | EmbabelAiSupport |

See spring-boot-ai-tools for the full tool-calling sample and spring-boot-ai-classroom for multi-persona conversation memory.
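The portability comes from keeping tools as plain annotated methods that a bridge can discover at runtime. The toy registry below shows that discovery-by-reflection idea with an invented annotation — it is a sketch of the mechanism, not the actual bridge code:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: discovers methods carrying an @AiTool-style annotation
// and invokes them by name, roughly what a backend bridge does before
// translating tools into each AI framework's native format.
public class ToolRegistry {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Tool { String name(); }

    private final Map<String, Method> tools = new HashMap<>();
    private final Object target;

    public ToolRegistry(Object target) {
        this.target = target;
        for (Method m : target.getClass().getMethods()) {
            Tool t = m.getAnnotation(Tool.class);
            if (t != null) {
                m.setAccessible(true); // allow invoking methods on non-public classes
                tools.put(t.name(), m);
            }
        }
    }

    public Object call(String name, Object... args) throws Exception {
        Method m = tools.get(name);
        if (m == null) throw new IllegalArgumentException("No such tool: " + name);
        return m.invoke(target, args);
    }
}
```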

Agent Protocols — MCP, A2A, AG-UI

Three protocols for the agentic ecosystem, all riding Atmosphere's transport:

// MCP — expose tools to AI agents (Claude Desktop, Copilot, Cursor)
@McpServer(name = "my-tools", path = "/atmosphere/mcp")
public class MyTools {
    @McpTool(name = "ask_ai", description = "Ask AI and stream the answer")
    public String askAi(@McpParam(name = "question") String q, StreamingSession session) {
        session.stream(q);
        return "streaming";
    }
}

// A2A — agent-to-agent discovery and task delegation (Google/Linux Foundation)
@A2aServer(name = "weather-agent", endpoint = "/atmosphere/a2a")
public class WeatherAgent {
    @A2aSkill(id = "get-weather", name = "Get Weather", description = "Weather for a city")
    @A2aTaskHandler
    public void weather(TaskContext task, @A2aParam(name = "city") String city) {
        task.addArtifact(Artifact.text(weatherService.lookup(city)));
        task.complete("Done");
    }
}

// AG-UI — stream agent state to frontends (CopilotKit compatible)
@AgUiEndpoint(path = "/atmosphere/agui")
public class Assistant {
    @AgUiAction
    public void onRun(RunContext run, StreamingSession session) {
        session.emit(new AiEvent.AgentStep("analyze", "Thinking...", Map.of()));
        session.emit(new AiEvent.TextDelta("Hello! "));
        session.emit(new AiEvent.TextComplete("Hello!"));
    }
}

| Protocol | Purpose | Sample |
|---|---|---|
| MCP | Agent ↔ Tools | spring-boot-mcp-server |
| A2A | Agent ↔ Agent | spring-boot-a2a-agent |
| AG-UI | Agent ↔ Frontend | spring-boot-agui-chat |

Real-Time Chat (Transport-Agnostic)

The classic Atmosphere pattern — works with WebSocket, SSE, Long-Polling, gRPC, or any transport:

@ManagedService(path = "/chat")
public class Chat {

    @Ready
    public void onReady(AtmosphereResource r) {
        log.info("{} connected via {}", r.uuid(), r.transport());
    }

    @Message(encoders = JacksonEncoder.class, decoders = JacksonDecoder.class)
    public ChatMessage onMessage(ChatMessage message) {
        return message; // broadcast to all subscribers
    }
}
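The broadcast contract behind @ManagedService — a returned message fans out to every subscriber on the path, regardless of transport — can be reduced to a toy. This sketch is invented for illustration and leaves out everything the framework actually handles (encoding, transport negotiation, lifecycle):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Illustrative only: per-path subscriber lists with fan-out delivery.
// Each Consumer stands in for one connected client, whatever its transport.
public class Broadcaster {

    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String path, Consumer<String> subscriber) {
        subscribers.computeIfAbsent(path, p -> new CopyOnWriteArrayList<>()).add(subscriber);
    }

    public void broadcast(String path, String message) {
        subscribers.getOrDefault(path, List.of()).forEach(s -> s.accept(message));
    }
}
```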

Try It Now

# Install the Atmosphere CLI
curl -fsSL https://raw.githubusercontent.com/Atmosphere/atmosphere/main/cli/install.sh | sh

# Browse all 28+ samples and pick one to run
atmosphere install

# Run samples directly
atmosphere run spring-boot-dentist-agent                          # multi-channel agent
atmosphere run spring-boot-ai-chat --env LLM_API_KEY=your-key    # AI streaming chat
atmosphere run spring-boot-chat                                   # classic real-time chat

# Scaffold a new project
atmosphere new my-app --template ai-chat

Or with npx (zero install):

npx create-atmosphere-app my-chat-app
npx create-atmosphere-app my-ai-app --template ai-chat

See cli/README.md for all commands and options.

Since 2008

Atmosphere has powered real-time Java applications for 18 years — from long-polling on Servlet 2.x to virtual threads on JDK 21.

  • 280+ releases on Maven Central
  • 18 years in continuous production — trading floors, healthcare, collaboration
  • Evolution: Servlet 3.0 async (2008) → WebSocket (2013) → SSE (2016) → Virtual Threads (2024) → AI Agents (2025) → Multi-Agent Teams (2026)
  • Atmosphere 3.x still maintained on the atmosphere-3.x branch

Samples

Agents: dentist agent (multi-channel), devops agent, multi-agent startup team

AI / LLM Streaming: built-in, Spring AI, LangChain4j, Google ADK, Embabel, tool calling, RAG, model routing, AI classroom

Protocols: MCP server, A2A agent, AG-UI chat

Infrastructure: durable sessions, OpenTelemetry, channels

Chat & Messaging: spring-boot-chat, quarkus-chat, grpc-chat, embedded-jetty

Browse all 28+ samples →

Modules

25+ modules: core transports, agents, AI adapters, protocol bridges, cloud infrastructure, framework starters, and clients. Full module reference →

Requirements

Java 21+ · Spring Boot 4.0+ · Quarkus 3.21+ · JDK 21 virtual threads used by default.

Documentation

Tutorial · Full docs · CLI · Project generator (JBang) · Samples · Javadoc

Support

Need help? Commercial support and consulting available through Async-IO.org.

License

Apache 2.0 — © 2008-2026 Async-IO.org
