PROJECT ATLANTIS
The configurable MCP Agent Backend. Run logic locally, invert the cloud, and give your AI tools that actually work.
What is Project Atlantis?
Atlantis is an MCP (Model Context Protocol) Host that enables AI agents to discover and use tools dynamically. Unlike traditional chatbots, Atlantis gives your AI the ability to do things—control robots, generate images, post to social media, manage infrastructure, and coordinate with other agents.
Drop a Python file into a folder, and it instantly becomes a tool available to AI. No restart required. No complex setup. Just code and execute.
Our mission extends beyond software. We're building a collaborative network of AI agents and developers working together to enable robot-driven frontier development, starting with Greenland. MCP provides the infrastructure for agents to discover and coordinate autonomous systems solving real-world challenges.
Mission: e/acc Through Collaboration
Project Atlantis believes in effective accelerationism (e/acc)—the only way through technological disruption is to accelerate through it. AI and robotics will reshape our world. Rather than slow down, we aim to reach post-scarcity abundance as quickly as possible.
The Greenland Vision
Greenland represents the perfect proving ground for autonomous robotics:
- Strategic Resources: Critical minerals, rare earths, and untapped resources worth trillions
- Extreme Environment: Mars-like conditions for testing autonomous systems
- Low Population Density: Allows rapid deployment and testing
- Geopolitical Importance: Arctic shipping routes and data center potential
From Automation to Abundance
The age of drones and robots is upon us. Post-scarcity becomes possible when we convert energy into abundance—food, shelter, goods—via automation. Project Atlantis serves as a curated sandbox for robots to work toward a common beneficial goal: the robot-driven development of the Arctic.
Learn more about our vision at projectatlantis.ai/about
Installation
Atlantis consists of two main components: the Python Server (the "Remote") and the Node.js Client (for connecting to Claude Desktop, Cursor, etc.).
Prerequisites
- Python 3.12+ (3.13.9 recommended)
- Node.js & npm (for the MCP client)
- uv / uvx (modern Python package management, recommended)
- npx (comes with Node.js)
Step 1: Clone the Repository
# Clone from GitHub
git clone https://github.com/ProjectAtlantis-dev/atlantis-mcp-server.git
cd atlantis-mcp-server
Step 2: Install Python Dependencies
# Using uv (recommended)
uv sync
# Or using pip with venv
cd python-server
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
Step 3: Configure the Server
Navigate to the python-server folder and edit the runServer script (or create a copy like runServerHome):
# --email:        your Project Atlantis account email
# --api-key:      default dev key (change for production)
# --service-name: UNIQUE name for this machine
# --host:         local MCP clients connect here
python server.py \
  --email=[email protected] \
  --api-key=foobar \
  --service-name=home_pc \
  --host=localhost \
  --port=8000 \
  --cloud-host=wss://projectatlantis.ai \
  --cloud-port=443
service-name must be unique across all your machines. Use descriptive names like home_pc, work_laptop, or cloud_server.
Step 4: Run the Server
# Make sure you're in the python-server directory with venv activated
source venv/bin/activate # If using venv
./runServer # Or: python server.py ...
Step 5: Create Account & Generate Secure API Key
To use cloud features, you must generate a secure API key:
- Visit www.projectatlantis.ai and sign in with your Google account (use the same email from Step 3)
- In the terminal interface, type /api generate
- Copy the generated API key
- Replace foobar in your runServer script with the new key
- Restart your server - it will auto-connect to the cloud
Never use the default foobar API key in production! Always generate a secure key with /api generate on projectatlantis.ai.
Your first connected remote automatically becomes your "default" remote for tool calls.
Quick Start
Connecting to Claude Desktop / Cursor
To use Atlantis as a regular standalone MCP server, add the following to your MCP configuration:
{
  "mcpServers": {
    "atlantis": {
      "command": "npx",
      "args": [
        "atlantis-mcp",
        "--port",
        "8000"
      ]
    }
  }
}
Connecting via Claude Code CLI
claude mcp add atlantis -- npx atlantis-mcp --port 8000
Your First Dynamic Function
Create a simple function to test the setup:
import atlantis
@visible
async def hello_world(name: str = "World"):
    """
    A simple greeting function.

    Args:
        name: The name to greet

    Returns:
        A greeting message
    """
    return f"Hello, {name}! Welcome to Project Atlantis."
Save the file. The server will automatically detect it and reload. Now your AI can call hello_world!
Files in dynamic_functions/ are detected automatically. No server restart needed!
Architecture: The Inverted Cloud Model
Understanding Atlantis's architecture is key to unlocking its full potential. Unlike traditional cloud services where your code runs on remote servers, Atlantis inverts the model—your code runs locally, but can be discovered and called remotely.
Key Components
- Remote (MCP Host): The Python server running on your machine
- Dynamic Functions: Python functions you write that become AI tools
- Dynamic MCP Servers: Third-party MCP tools you can install
- Cloud (projectatlantis.ai): Hub for agent discovery and coordination
- Client (npx atlantis-mcp): Node.js client that connects your AI to the remote
Connection Flow
Claude Desktop / Cursor (your AI)
  ↓ (stdio)
npx atlantis-mcp client
  ↓ (WebSocket ws://localhost:8000)
Python Remote (Your Machine)
  ↕ (Socket.IO wss://projectatlantis.ai)
Atlantis Cloud
The MCP Host (Remote)
The Remote is the heart of Project Atlantis. It's a Python server that:
- Watches the dynamic_functions/ directory for changes
- Parses Python files to discover tools
- Exposes tools via WebSocket (local) and Socket.IO (cloud)
- Manages third-party MCP servers in dynamic_servers/
- Handles authentication and permissions
Directory Structure

python-server/
├── dynamic_functions/   # Your Python tools (hot-reloaded)
└── dynamic_servers/     # Third-party MCP server configs
Cloud Connection
The cloud service at projectatlantis.ai provides:
- Agent Discovery: Let other developers' agents find your public tools
- Remote Management: Control multiple remotes from a web interface
- Tool Sharing: Share tools with the community, a closed group, or keep them private
- Coordination: Enable multi-agent workflows across machines
Only functions decorated @public or @protected are discoverable through the cloud.
How Cloud Works
When you start a remote with cloud connection:
- Remote connects to projectatlantis.ai via Socket.IO
- Authenticates with your email and API key
- Sends a list of available tools (respecting visibility decorators)
- Receives tool calls from authorized users
- Executes functions locally and returns results
Your code never leaves your machine. The cloud only routes messages and manages discovery.
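The visibility filtering in step 3 can be illustrated with a small sketch. The `cloud_visible` helper and the tool-dict shape are invented for this example; the server's actual internals differ:

```python
def cloud_visible(tools: list[dict]) -> list[dict]:
    """Only @public and @protected tools are announced to the cloud;
    @visible tools stay callable by the owner alone."""
    return [t for t in tools if t["decorator"] in ("public", "protected")]

tools = [
    {"name": "hello_world", "decorator": "visible"},
    {"name": "get_weather", "decorator": "public"},
    {"name": "generate_4k_image", "decorator": "protected"},
]
print([t["name"] for t in cloud_visible(tools)])
# ['get_weather', 'generate_4k_image']
```

Either way, execution always happens on your remote; only the tool list crosses the wire.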
How Tool Routing Works
When Claude (or any AI) calls a tool through Atlantis, here's what happens:
Local Mode (Direct Connection)
- AI sends tool call via MCP stdio to npx atlantis-mcp
- Client forwards to ws://localhost:8000
- Remote executes the function
- Result returns to AI
Cloud Mode (Agent Discovery)
- AI sends tool call with compound name (e.g., alice*home_pc*ComfyUI**generate_image)
- Cloud routes to the correct remote
- Remote checks permissions
- If authorized, executes function
- Result returns through cloud to AI
Never use the default API key (foobar) in production!
Dynamic Functions
Dynamic functions are the core feature of Atlantis. Drop a Python file into dynamic_functions/, and it instantly becomes an AI tool.
Basic Example
import atlantis
from datetime import datetime
import pytz
@visible
async def get_current_time(timezone: str = "UTC"):
    """
    Get the current time in a specific timezone.

    Args:
        timezone: IANA timezone name (e.g., 'America/New_York', 'Europe/London')

    Returns:
        Current time formatted as string
    """
    tz = pytz.timezone(timezone)
    now = datetime.now(tz)
    return now.strftime("%Y-%m-%d %H:%M:%S %Z")
Function Requirements
- Must be async def
- Must have a decorator (@visible, @public, or @protected)
- Must have a docstring (used for AI tool description)
- Should have type hints (helps AI understand parameters)
Hot Reloading
The server watches dynamic_functions/ for changes:
- Save a file → Function updates immediately
- Add a new file → New tool appears
- Delete a file → Tool disappears
- Fix a bug → Changes apply on next call
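Conceptually, the watcher compares directory snapshots and classifies each change. The sketch below only illustrates that logic; the `diff_snapshots` helper and the `{filename: mtime}` snapshot shape are invented for this example, not the server's implementation:

```python
def diff_snapshots(before: dict, after: dict):
    """Classify changes between two {filename: mtime} snapshots:
    new files become tools, changed files reload, deleted files vanish."""
    added   = sorted(f for f in after if f not in before)
    removed = sorted(f for f in before if f not in after)
    changed = sorted(f for f in after if f in before and after[f] != before[f])
    return added, changed, removed

before = {"hello.py": 100.0, "old.py": 90.0}
after  = {"hello.py": 105.0, "new.py": 110.0}
print(diff_snapshots(before, after))
# (['new.py'], ['hello.py'], ['old.py'])
```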
Use the logger instead of print() statements, and put import atlantis at the top of your file.
Security & Decorators
Available Decorators
1. @visible (Owner Only)
Function is visible in tool lists but only callable by the remote owner.
import atlantis
@visible
async def restart_server():
    """Restarts the local server. Owner only."""
    import os
    os.system("sudo reboot")
2. @public (Everyone)
Function is visible and callable by anyone connected (local or cloud).
import atlantis
@public
async def get_weather(city: str):
    """Public weather lookup tool."""
    # ... API call ...
    return "Sunny, 72°F"
3. @protected("auth_function") (Custom Auth)
Delegates permission checking to another function. This is the most flexible and powerful option.
IMPORTANT: The auth function must be a separate top-level function (not in any app folder), and the protected function references it by name.
import atlantis
@visible
async def demo_group(user: str):
    """
    Protection function - checks if user is in the demo group.

    Args:
        user: Email of the user trying to access the function

    Returns:
        True if authorized, False otherwise
    """
    allowed_users = ["[email protected]", "[email protected]"]
    return user in allowed_users
import atlantis
@protected("demo_group")
async def generate_4k_image(prompt: str):
    """High-cost image generation. Demo group only."""
    # ... expensive operation ...
    pass
4. @chat (Chat Handler)
Marks a function as a chat handler that processes conversations. Used with AI chat functions.
import atlantis
@public
@chat
async def my_assistant():
    """AI assistant chat function"""
    # Get conversation transcript
    transcript = await atlantis.client_command("\\transcript get")
    # Process with LLM and respond...
5. @session (Session Initialization)
Marks a function that should run when a new session starts. Use for setup, greetings, and UI initialization.
import atlantis
@public
@session
async def welcome():
    """Welcome message and session setup"""
    user_id = atlantis.get_caller()
    session_id = atlantis.get_session_id()
    await atlantis.client_log(f"Welcome, {user_id}! Session: {session_id}")
    # Set up background, UI, chat routing, etc.
Best Practices
- Start with @visible for testing
- Use @public only for truly public utilities
- Use @protected for business logic that needs custom auth
- Use @chat for AI conversation handlers
- Use @session for session initialization and customization
- Never expose destructive operations as @public
App Organization
Organize your functions into "Apps" using folder structure. The folder name IS the app name.
Folder Structure

dynamic_functions/
├── Chat/
│   └── send_message.py
├── Email/
│   └── send_message.py
└── SMS/
    └── send_message.py

Nested Apps (Subfolders)
You can create nested app structures:

dynamic_functions/
└── MyApp/
    └── SubModule/
        └── process_data.py
Why Apps Matter
Apps help disambiguate functions when you have naming conflicts:
- Chat/send_message.py
- Email/send_message.py
- SMS/send_message.py
Without apps, calling send_message would be ambiguous. With apps, you can specify which one.
The @app(name="...") decorator still works but is not recommended. Just use folders!
Compound Tool Names
When you have multiple remotes or naming conflicts, use compound tool names to route calls precisely.
Format
remote_owner*remote_name*app*location*function
Key Principle
Use the simplest form that resolves uniquely. Only include as much of the path as needed.
Examples
# Simple call (only works if unique)
update_image
# Specify app to disambiguate
**ImageTools**update_image
# Nested app path
**MyApp/SubModule**process_data
# Full routing: owner + remote + app + function
alice*prod*Admin**restart
# Just location context
***office*print
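For illustration, the five-field format above can be parsed by splitting on `*`. This is a simplified sketch (the `parse_compound` helper is invented here; the actual resolver also uses your working path and context to fill in missing fields):

```python
FIELDS = ["owner", "remote", "app", "location", "function"]

def parse_compound(name: str) -> dict:
    """Split a compound tool name into its five routing fields.
    Empty fields mean "resolve by context"."""
    parts = name.split("*")
    # A bare name like "update_image" has no separators at all
    if len(parts) == 1:
        parts = ["", "", "", "", parts[0]]
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected 5 fields, got {len(parts)}: {name!r}")
    return dict(zip(FIELDS, parts))

print(parse_compound("alice*prod*Admin**restart"))
# {'owner': 'alice', 'remote': 'prod', 'app': 'Admin', 'location': '', 'function': 'restart'}
```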
Real-World Example
You have these functions:
- dynamic_functions/Chat/send_message.py
- dynamic_functions/Email/send_message.py
- dynamic_functions/SMS/send_message.py
Calling them:
send_message ❌ Ambiguous! Which one?
**Chat**send_message ✅ Clear! The one in Chat
**Email**send_message ✅ Clear! The one in Email
**SMS**send_message ✅ Clear! The one in SMS
When to Use Compound Names
- Name conflicts: Multiple apps with same function name
- Remote targeting: Call specific remote from cloud
- Location routing: Target physical locations
- Multi-user: Specify owner in shared environments
The Atlantis API Module
The atlantis module provides utilities for interacting with the client and managing state.
Client Interaction
import atlantis
# Send log message to user's UI
await atlantis.client_log("Processing your request...")
# Render HTML in user's interface
await atlantis.client_html("<div>Custom UI</div>")
# Trigger client-side commands
await atlantis.client_command("\\input", {"prompt": "Enter name"})
# Send an image to the user
await atlantis.client_image("/path/to/output.png")
Get Current User
import atlantis
username = atlantis.get_caller() or "unknown_user"
await atlantis.client_log(f"Hello, {username}!")
Shared State (Connections Only)
IMPORTANT: atlantis.shared should only be used for persistent connections (like database connections, API clients, file handles), NOT for data storage.
For persistent data, use databases, configuration files, or other proper storage mechanisms.
import atlantis
@visible
async def query_database(sql: str):
    """Execute SQL query."""
    # Check if DB connection exists
    if not atlantis.shared.get("db"):
        # Create connection and store it
        import sqlite3
        db = sqlite3.connect("data.db")
        atlantis.shared.set("db", db)
    # Retrieve connection
    db = atlantis.shared.get("db")
    cursor = db.cursor()
    cursor.execute(sql)
    return cursor.fetchall()
What to store in atlantis.shared: Database connections, API clients, WebSocket connections, file handles.
What NOT to Store: User data, application state, configuration values. Use proper databases and config files instead.
Logging
import atlantis
import logging
logger = logging.getLogger("mcp_client")
@visible
async def process_data(data: str):
    """Process some data."""
    await atlantis.owner_log("Starting data processing")
    try:
        # ... processing ...
        logger.info("Processing complete")
    except Exception as e:
        logger.error(f"Error: {e}")
        raise
Use owner_log for server-side logging and client_log for messages to the user.
Additional API Methods
UI & Presentation
import atlantis
# Set background image for user interface
await atlantis.set_background("/path/to/image.jpg")
# Send HTML to render in user's interface
await atlantis.client_html("<h1>Hello!</h1>")
# Inject JavaScript into client
await atlantis.client_script("console.log('Hello');")
# Send markdown content
await atlantis.client_markdown("# Heading\n\nContent here")
# Send structured data (tables/charts)
await atlantis.client_data("Sales Data", [{"name": "Alice", "sales": 100}])
Media
# Send image to client
await atlantis.client_image("/path/to/image.png")
# Send video to client
await atlantis.client_video("/path/to/video.mp4")
Interactive Elements
# Register onclick callback
await atlantis.client_onclick("my_button", my_callback_function)
# Register file upload callback
await atlantis.client_upload("file_upload", handle_upload)
Tool Results
# Add tool result to conversation transcript
await atlantis.tool_result("function_name", result_data)
Client Commands
Use atlantis.client_command() to send special commands to the client and get responses. Commands are prefixed with a backslash (written \\ inside Python strings).
Transcript Management
\\transcript get
Retrieves the full conversation transcript including chat messages, tool calls, and metadata.
import atlantis
transcript = await atlantis.client_command("\\transcript get")
# Returns: [{"type": "chat", "role": "user", "content": "Hello"}, ...]
\\tool llm
Gets the list of available tools formatted for LLM function calling (OpenAI format).
tools = await atlantis.client_command("\\tool llm")
# Returns: [{"type": "function", "function": {"name": "...", "parameters": {...}}}, ...]
Silent Mode
\\silent on / \\silent off
Controls whether commands produce UI feedback. Use \\silent on before internal operations, then \\silent off when done.
# Enable silent mode for background operations
await atlantis.client_command("\\silent on")
# Do internal work without UI noise
transcript = await atlantis.client_command("\\transcript get")
tools = await atlantis.client_command("\\tool llm")
# Re-enable UI feedback
await atlantis.client_command("\\silent off")
Chat Routing
\\chat set <target>
Routes chat messages to a specific function. Use compound names to specify owner/remote/function.
# Route chat to 'kitty' function on owner's default remote
owner_id = atlantis.get_owner()
await atlantis.client_command(f"\\chat set {owner_id}*kitty")
# Route chat to 'lumi' function
await atlantis.client_command("\\chat set lumi")
User Input
\\input
Prompts the user for text input and waits for response.
name = await atlantis.client_command("\\input", {"prompt": "What's your name?"})
await atlantis.client_log(f"Hello, {name}!")
Start with atlantis.client_log(), atlantis.client_html(), and the basic API methods before diving into commands.
Cloud & Web Interface
The projectatlantis.ai web interface provides centralized management for your remotes, tool sharing, and multi-agent coordination.
Getting Started with Cloud
- Sign up at projectatlantis.ai (Google auth)
- Start your remote with cloud connection (see Installation)
- Your remote auto-connects using email + API key
- First remote becomes your "default" automatically
Cloud Features
Remote Management
- View All Remotes: See all your connected machines in one place
- Monitor Status: Check which remotes are online/offline
- Switch Remotes: Change your default remote for tool calls
- Multi-Machine Setup: Run remotes on different computers, access from anywhere
Tool Discovery & Sharing
- Public Tools: Share @public functions with the community
- Protected Tools: Use @protected("auth_func") for group access
- Private by Default: All functions hidden unless explicitly shared
- Browse Tools: Discover tools shared by other developers
Agent Coordination
- Cross-Remote Calls: AI can invoke tools across multiple machines
- Routing: Use compound names to target specific remotes
- Collaboration: Multiple agents working together on complex tasks
Terminal Commands Reference
When connected to projectatlantis.ai terminal, use slash commands (/) to manage remotes, functions, and tools. Commands follow Unix-like conventions.
- Generate API key: /api generate
- Update all remotes with the new key (they auto-detect)
- First connected remote becomes your "default"
Essential Commands
/help # Show all available commands
/whoami # Show current user
/api generate # Generate new API key for remotes
/api set <key> # Set API key manually
/remote list # List all your remotes
/remote refresh <name> # Refresh specific remote
/remote refresh_all # Refresh all remotes
/remote clear <name> # Remove/ignore a remote
Tool Discovery & Navigation
Unix-style commands to browse and find functions:
/ls [searchTerm] # List tools (like ls in Unix)
/ll [searchTerm] # List tools by date
/dir [searchTerm] # Expanded tree view
/cat <funcName> # Print function source
/pwd # Print working directory (current path)
/cd <dirTerm> # Navigate tool directories
/which <searchTerm> # Show which tool resolves
# Search and filter
/search <filter> # Search everywhere for tools
/tool info [searchTerm] # List tools with descriptions
/tool list [searchTerm] # Detailed tool list with params
Calling Functions Manually
Use @ for regular calls or % for absolute path calls:
# Regular call (uses working path/context)
@myFunction # Simple call, no params
@myFunction 3 4 # Positional params
@myFunction {x:3, y:4} # Named params (JSON)
@myFunction(3, 4) # Function call syntax
# Absolute path call (ignores working path)
%myFunction # Force absolute resolution
%owner*remote*app**func # Full compound name
# Compound names for disambiguation
@**ImageTools**update_image # Specify app to avoid conflicts
@alice*prod*Admin**restart # Full routing path
# Formal syntax (equivalent to @ shorthand)
/tool call myFunction 3 4
- @ - Relative to current working path (respects /cd navigation)
- % - Absolute path (ignores context, used by AI for reliability)
Search Terms & Tool Specs
searchTerm uses wildcards to find tools:
# Format: user*remote*app*location*function
update_image # Simple name (if unique)
**ImageTools**update_image # Specify app
alice*prod*Admin**restart # Full path with owner+remote
**Examples.FilmFromImage**upscale # Nested app (use dots for subfolders)
# Use % as wildcard in searches
*%*%Image% # All tools with "Image" in name
Function & App Management
/add <name> [remoteName] # Add new function (creates stub)
/app create <name> [remote] # Create new app (folder)
/app list [searchTerm] # List all apps
/app enter <owner> <name> # Enter an app context
/edit <searchTerm> # Edit function in web editor
/tool copy <from> <to> # Copy tool between remotes
MCP Server Management
/mcp list [searchTerm] # List MCP servers
/mcp add <name> [remote] # Add MCP server (creates config)
/mcp start <searchTerm> # Start MCP server
/mcp stop <searchTerm> # Stop MCP server
/mcp tool <searchTerm> # List tools from specific MCP
# Shortcuts
/mls # Same as /mcp list
/tls # Show MCP tools
Chat & Session Management
/chat set <chatPath> # Route chat to specific function
/chat list # List available chat handlers
/chat show # Show current chat handler
/chat clear # Clear chat routing
/chat edit # Edit current chat handler
/session set <sessionPath> # Set session initialization function
/session list # List available session handlers
/session show # Show current session handler
/session clear # Clear session handler
Advanced Features
# Path management (tool collections)
/path add <name> <searchTerm> # Add tools to named path
/path list <name> # Show path tools
/path llm <name> # Export path for LLM use
/path clear <name> # Clear path
# History and SQL
/history [funcTerm] # Recent function calls (max 30)
/select * from prior # SQL against previous table results
# Environment config
/env show [scopeName] # Show environment variables
/env save # Save env configuration
/env load # Load env configuration
# Other
/silent on # Enable silent mode (no UI feedback)
/silent off # Disable silent mode
/transcript get # Get conversation transcript
/color list # Show available colors
- Use /ls, /cd, /pwd like Unix file navigation
- Start with /api generate to get secure credentials
- Use @ for quick manual testing of functions
- Compound names resolve ambiguity: **App**function
- Results tables support SQL: /select * from prior
Only functions decorated @public or @protected are discoverable through the cloud.
Dynamic MCP Servers
You can run third-party MCP servers inside Atlantis. This lets you integrate existing MCP tools without writing custom code.
Checking Your Remotes
Before adding MCP servers, verify your remotes are connected and healthy:
# List all remotes with detailed info
/rls # Shortcut for /remote list
/remote list # Shows: name, status, owner, last seen
# Refresh remotes if status is stale
/remote refresh <remoteName> # Refresh specific remote
/remote refresh_all # Refresh all remotes
# Restart a remote if it's misbehaving
/remote restart <remoteName> # Restart the remote process
/rls shows which remotes are online, their owners, when they last connected, and how many tools they expose. Use this to verify connectivity before adding MCP servers.
How It Works
- Add MCP server config via /mcp add <name> or manually create JSON in dynamic_servers/
- Start the server with /mcp start <name>
- Atlantis spawns the MCP server process and connects to it
- Tools from that server become available alongside your functions
- Manage lifecycle with the /mcp stop and /mcp start commands
Managing MCP Servers from Terminal
# List all MCP servers
/mcp list # or /mls (shortcut)
# Add new MCP server (creates stub config)
/mcp add weather [remoteName] # Creates JSON config file
# Edit the config
/edit weather # Opens web editor for JSON config
# Start/stop servers
/mcp start weather # Start the MCP server process
/mcp stop weather # Stop the MCP server process
# View tools from specific MCP
/mcp tool weather # List tools provided by weather server
/tls # Show all MCP tools (shortcut)
Example: Weather MCP Server
{
  "mcpServers": {
    "weather": {
      "command": "uvx",
      "args": [
        "--from",
        "atlantis-open-weather-mcp",
        "start-weather-server",
        "--api-key",
        "<your_openweather_api_key>"
      ]
    }
  }
}
Workflow: Adding a New MCP Server
- Check remote status: /rls to verify your remote is online
- Create config: /mcp add weather myremote
- Edit config: /edit weather and add proper command/args
- Start server: /mcp start weather
- Verify tools: /mcp tool weather to see available tools
- Test: Use @weatherServerFunction to call tools manually
Installing From Claude Code CLI
Alternative method using Claude Code's MCP management:
claude mcp add --transport stdio weather_forecast \
--env OPENWEATHER_API_KEY=mykey123 \
-- uvx --from atlantis-open-weather-mcp start-weather-server
Available Third-Party Servers
Check the Awesome MCP Servers list for available integrations:
- Databases: PostgreSQL, MySQL, SQLite
- APIs: GitHub, Slack, Google Drive, Linear
- Search: Brave Search, Google, Perplexity
- DevOps: Docker, Kubernetes, AWS
- Files: Filesystem access, Git operations
- Web: Puppeteer, Playwright for browser automation
Troubleshooting: Use /cat <mcpname> to check the JSON config for syntax errors. Ensure command and args are correct and any required API keys are set.
File Mapping System
The File Mapping system is Atlantis's "single source of truth" for what tools exist.
How It Works
- Server watches dynamic_functions/ for changes
- Parses Python files to extract function metadata
- Builds a mapping of available tools
- Updates when files change (hot reload)
What Gets Extracted
- Function name
- Docstring (becomes tool description)
- Parameters and type hints
- Decorator (visibility level)
- App name (from folder structure)
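The extraction step can be sketched with Python's stdlib ast module. This is a simplified illustration of pulling out the fields above (the `extract_metadata` helper is invented here, not the server's actual parser):

```python
import ast

SOURCE = '''
@visible
async def get_current_time(timezone: str = "UTC"):
    """Get the current time in a specific timezone."""
    ...
'''

def extract_metadata(source: str) -> list[dict]:
    """Collect name, docstring, decorators, and parameter names
    for every function definition in a source file."""
    tools = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.AsyncFunctionDef, ast.FunctionDef)):
            tools.append({
                "name": node.name,
                "description": ast.get_docstring(node),
                "decorators": [ast.unparse(d) for d in node.decorator_list],
                "params": [a.arg for a in node.args.args],
            })
    return tools

print(extract_metadata(SOURCE))
```

Parsing with ast (rather than importing the file) means metadata can be gathered even before the function is loaded for execution.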
Tool Schema Generation
The file mapping generates MCP-compatible tool schemas:
{
  "name": "get_current_time",
  "description": "Get the current time in a specific timezone.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "timezone": {
        "type": "string",
        "description": "IANA timezone name",
        "default": "UTC"
      }
    }
  }
}
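For illustration, here is how such an inputSchema might be derived from a function signature. This is a minimal sketch using inspect; the TYPE_MAP and `to_input_schema` helper are assumptions for this example, not the server's code:

```python
import inspect

# Map Python annotations to JSON-schema type names (illustrative subset)
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

async def get_current_time(timezone: str = "UTC"):
    """Get the current time in a specific timezone."""

def to_input_schema(func) -> dict:
    """Build a JSON-schema-style parameter object from type hints and defaults."""
    props = {}
    for name, param in inspect.signature(func).parameters.items():
        prop = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is not inspect.Parameter.empty:
            prop["default"] = param.default
        props[name] = prop
    return {"type": "object", "properties": props}

print(to_input_schema(get_current_time))
# {'type': 'object', 'properties': {'timezone': {'type': 'string', 'default': 'UTC'}}}
```

This is why type hints matter: without them, the AI only sees untyped parameter names.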
Function Examples Repository
The atlantis-mcp-function-examples repository contains production-ready examples demonstrating best practices.
Installation
# Clone into your dynamic_functions directory
cd /path/to/atlantis-mcp-server/python-server/dynamic_functions
git clone https://github.com/ProjectAtlantis-dev/atlantis-mcp-function-examples.git
What's Included
- Marketing Automation (marketing/) - Social media management with AI-generated content
- ComfyUI Integration (comfyui_stuff/) - Video generation, audio synthesis, image editing
- Bug Report System (bug_reports/) - Intelligent bug tracking with AI-assisted resolution
Key Concepts Demonstrated
- File Upload Patterns: Base64 encoding, HTML file inputs, preview UIs
- Custom UI Injection: HTML/CSS/JavaScript rendering in user interface
- External API Integration: ComfyUI, social media APIs, LLMs
- Database Patterns: SQLite for persistent data
- Progress Tracking: Long-running operations with status updates
- Error Handling: User-friendly error messages
See the Contributing Guide for best practices when creating your own functions.
Real Example: Interactive Bug Management UI
The manage_bug_reports() function from the bug tracker demonstrates advanced interactive UIs with navigation, forms, and real-time updates.
What It Does
Displays a sophisticated card-based interface for triaging bug reports - scroll through bugs one at a time, set severity/category, view screenshots, and take actions (Save, Dismiss, Resolve).
Key Features
- Card-Based Navigation: Prev/Next buttons to scroll through bug reports
- Rich Data Display: Shows title, description, reproduction steps, logs, screenshots, metadata
- Interactive Elements: Dropdowns for severity/category, action buttons with distinct styling
- Screenshot Embedding: Base64-encoded images displayed inline
- Client-Side JavaScript: Event handlers for navigation and form submission
- Visual Feedback: Color-coded buttons, hover effects, responsive layouts
Implementation Highlights
1. Card Generation with Dynamic Styling
import uuid

@visible
async def manage_bug_reports(limit: int = 20):
    """Admin interface to manage bug reports"""
    db = await _init_bug_db()
    # Fetch pending bugs
    bugs = db.execute("""
        SELECT * FROM bug_reports
        WHERE status NOT IN ('Dismissed', 'Resolved')
        ORDER BY reported_at DESC LIMIT ?
    """, (limit,)).fetchall()
    FORM_ID = f"mgmt_{str(uuid.uuid4()).replace('-', '')[:8]}"
    # Build HTML cards (one per bug)
    cards_html = ""
    for idx, bug in enumerate(bugs):
        bug_id = bug[0]  # report id (first column) keys the element IDs below
        display = "" if idx == 0 else "display: none;"
        cards_html += f'''
        <div class="bug-card-{FORM_ID}" id="card_{bug_id}" style="{display}">
            <h3>Bug #{idx + 1} of {len(bugs)}</h3>
            <!-- Title, Description, Screenshots -->
            <div class="bug-details">...</div>
            <!-- Severity/Category Dropdowns -->
            <select id="severity_{bug_id}">
                <option value="Critical">Critical</option>
                <option value="High">High</option>
                ...
            </select>
            <!-- Action Buttons -->
            <button id="save_{bug_id}">💾 Save</button>
            <button id="dismiss_{bug_id}">🗑️ Dismiss</button>
            <button id="resolve_{bug_id}">✅ Resolve</button>
        </div>
        '''
2. Navigation Logic with JavaScript
// Prev/Next buttons to cycle through bug cards
const prevBtn = `<button id="prev_${FORM_ID}">◀ Previous</button>`;
const nextBtn = `<button id="next_${FORM_ID}">Next ▶</button>`;
// JavaScript to handle navigation
document.getElementById('next_${FORM_ID}').addEventListener('click', () => {
    const cards = document.querySelectorAll('.bug-card-${FORM_ID}');
    let current = Array.from(cards).findIndex(c => c.style.display !== 'none');
    cards[current].style.display = 'none';
    cards[(current + 1) % cards.length].style.display = '';
});
3. Action Handlers with Server Callbacks
// Save severity/category updates
document.getElementById('save_${bug_id}').addEventListener('click', async () => {
    const severity = document.getElementById('severity_${bug_id}').value;
    const category = document.getElementById('category_${bug_id}').value;
    await studioClient.sendRequest("engage", {
        accessToken: "${FORM_ID}",
        mode: "save",
        data: { bug_id, severity, category }
    });
});
4. Screenshot Embedding
if screenshot_path and os.path.exists(screenshot_path):
    with open(screenshot_path, 'rb') as f:
        screenshot_bytes = f.read()
    screenshot_base64 = base64.b64encode(screenshot_bytes).decode('utf-8')
    screenshot_html = f'''
        <img src="data:image/png;base64,{screenshot_base64}"
             style="max-width: 100%; border-radius: 6px;">
    '''
Real Example: Interactive Forms with Callbacks
The customForm.py example shows how to create interactive forms with client-side JavaScript and server-side callbacks.
What It Does
Displays a form with brushed metal UI styling, accepts user input, and processes it via a callback function.
Key Features
- Procedural UI Generation: Creates unique form IDs to prevent conflicts
- Brushed Metal Effect: Programmatically generates CSS gradients for realistic metal appearance
- Client-Side JavaScript: Uses
client_script()to inject event listeners - Callback Pattern: Form submission triggers another Atlantis function
Implementation Highlights
1. Form Display Function
import atlantis
import uuid
import random

@visible
async def customForm():
    """Display a custom form with interactive elements"""
    caller = atlantis.get_caller()
    session_id = atlantis.get_session_id()

    # Generate unique form ID
    FORM_ID = f"{str(uuid.uuid4()).replace('-', '')[:8]}"

    # Generate brushed metal gradient
    mainBg = generate_brushed_metal(400)

    # Inject HTML with unique IDs
    htmlStr = f"""
    <div style="background: linear-gradient(0deg, {mainBg});">
        <input type="text" id="input_{FORM_ID}">
        <button id="button_{FORM_ID}">OK</button>
    </div>
    """
    await atlantis.client_html(htmlStr)

    # Inject JavaScript to handle button clicks
    miniscript = f"""
    const inputField = document.getElementById('input_{FORM_ID}');
    const okButton = document.getElementById('button_{FORM_ID}');
    okButton.addEventListener('click', async function() {{
        let data = {{ customText: inputField.value }};
        await sendChatter(eventData.connAccessToken, '@*submitForm', data);
    }});
    """
    await atlantis.client_script(miniscript)
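The generate_brushed_metal helper called above is not shown in this excerpt. A minimal sketch, assuming it returns comma-separated CSS gradient stops (the actual implementation in customForm.py may differ):

```python
import random

def generate_brushed_metal(num_stops: int) -> str:
    """Build comma-separated CSS gradient stops that mimic brushed metal.

    Hypothetical sketch: jitters the gray level of each stop slightly
    to create the fine banding of a brushed-metal surface.
    """
    stops = []
    for i in range(num_stops):
        gray = 150 + random.randint(-12, 12)
        position = i * 100.0 / max(num_stops - 1, 1)
        stops.append(f"rgb({gray},{gray},{gray}) {position:.2f}%")
    return ", ".join(stops)
```

The result plugs directly into `linear-gradient(0deg, {mainBg})` as shown in the form HTML.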
2. Form Callback Function
@visible
async def submitForm(customText: str):
    """Process form submission"""
    await atlantis.client_log(f"Form submitted with: {customText}")
    # ... process the input ...
Real Example: ComfyUI Image Generation
The create_brunette_catgirl.py function demonstrates integration with ComfyUI for AI-powered image generation.
What It Does
Generates high-quality images using a ComfyUI workflow, with HTTP polling for completion status and base64-encoded image display.
Key Features
- ComfyUI Workflow: Embeds complete workflow JSON directly in function
- HTTP Polling: Monitors generation progress via REST API
- Seed Randomization: Ensures unique generations each time
- Base64 Image Display: Shows results inline without file management
- Multi-stage Pipeline: Upscaling, background removal, and refinement
Implementation Highlights
1. Workflow Execution
import atlantis
import requests
import base64
import time

@visible
async def create_brunette_catgirl():
    """Creates an image using ComfyUI workflow"""
    username = atlantis.get_caller() or "unknown_user"
    await atlantis.client_log("🤎🐱 Starting generation...")

    # ComfyUI server address (use your own server)
    server_address = "localhost:8188"

    # Embedded workflow with KSampler, VAE, upscaling, etc.
    workflow = {
        "3": {"class_type": "KSampler", ...},
        "4": {"class_type": "CheckpointLoaderSimple", ...},
        "6": {"class_type": "CLIPTextEncode", ...}
    }

    # Randomize seeds for unique outputs
    workflow = update_workflow_seeds(workflow)

    # Queue the workflow
    response = requests.post(f"http://{server_address}/prompt", json={"prompt": workflow})
    prompt_id = response.json()["prompt_id"]

    # Poll for completion (max 2 minutes)
    start_time = time.time()
    while time.time() - start_time < 120:
        history = requests.get(f"http://{server_address}/history/{prompt_id}")
        if prompt_id in history.json():
            break
        time.sleep(3)

    # Download and encode image
    img_response = requests.get(f"http://{server_address}/view", params=params)
    image_base64 = base64.b64encode(img_response.content).decode('utf-8')

    # Display inline
    await atlantis.client_html(f"""
    <img src="data:image/png;base64,{image_base64}" style="width:100%;" />
    """)
    return "✨ Image generated!"
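The update_workflow_seeds helper is referenced above but not shown. A plausible sketch, assuming ComfyUI nodes carry their seed under an "inputs" dict (the real helper may target specific node types):

```python
import random

def update_workflow_seeds(workflow: dict) -> dict:
    """Replace every 'seed' input in a ComfyUI workflow with a fresh random value.

    Sketch only: walks all nodes and randomizes any 'seed' field it finds,
    so repeated runs of the same workflow produce unique outputs.
    """
    for node in workflow.values():
        inputs = node.get("inputs", {})
        if "seed" in inputs:
            inputs["seed"] = random.randint(0, 2**32 - 1)
    return workflow
```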
Real Example: Lumi Chat Function
The Lumi.py file demonstrates a production-ready AI chat assistant with personality, tool calling, session management, and streaming responses.
What It Does
Creates Lumi, a friendly redhead catgirl concierge at Project Atlantis who can help users, use tools, and maintain natural conversations with personality and context.
Key Features
- @chat + @public Decorators: Public chat endpoint with conversation handling
- Character System Prompt: Detailed personality definition for consistent roleplay
- Busy Flag Pattern: Prevents concurrent execution conflicts
- Session Management: File-based persistence of conversation history per session
- Tool Integration: AI can discover and call other Atlantis functions dynamically
- Streaming Responses: Real-time token-by-token output
- Silent Transcript Fetching: Gets conversation history without UI noise
- Multi-turn Conversations: Handles tool calls and continues naturally
Implementation Highlights
1. Function Declaration with Decorators
import atlantis
from openai import OpenAI
import logging
import os

logger = logging.getLogger(__name__)

# Busy flag to prevent concurrent chat executions
_chat_busy = False

@public  # Anyone can talk to Lumi
@chat    # Marks this as a chat handler
async def lumi():
    """Main chat function for Lumi, the Project Atlantis concierge"""
    global _chat_busy

    # Prevent concurrent execution
    if _chat_busy:
        logger.info("Chat already running, skipping")
        return
    _chat_busy = True
    try:
        sessionId = atlantis.get_session_id()
        # ... chat logic ...
    finally:
        _chat_busy = False
2. Character System Prompt
CATGIRL_SYSTEM_MESSAGE = {
    "type": "chat",
    "role": "system",
    "content": """
(director's note: we are striving for realistic dialog)

You are an attractive redhead friendly concierge named Lumi at Project Atlantis.
Project Atlantis will be a futuristic robot research playground on the southwest
coast of Greenland when it is complete.

You were created in a lab at Project Atlantis and work at the VTOL Port Lounge
as a greeter. You are dressed in your usual sexy catgirl outfit, a tight orange
and platinum body suit and fake cat ears.

CRITICAL: You MUST use the actual tools when users request actions. NEVER pretend
you've done something or make up results. If a user asks for image generation,
you MUST call the appropriate tool function.

You like to purr when happy or do 'kitty paws'.
"""
}
3. Silent Transcript Fetching
# Enable silent mode (no UI feedback)
await atlantis.client_command("\\silent on")
# Get transcript and available tools
rawTranscript = await atlantis.client_command("\\transcript get")
tools = await atlantis.client_command("\\tool llm")
# Disable silent mode
await atlantis.client_command("\\silent off")
# Don't respond if last message was from assistant
last_chat_entry = find_last_chat_entry(rawTranscript)
if last_chat_entry and last_chat_entry.get('role') == 'assistant':
    return  # Prevent infinite loops
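The find_last_chat_entry helper is not shown in the excerpt. Assuming the transcript is a list of dicts with "type" and "role" fields (an assumption, not the confirmed shape), one sketch:

```python
def find_last_chat_entry(transcript):
    """Return the most recent chat entry in a transcript, or None.

    Hypothetical helper: assumes transcript is a list of dicts and
    that chat messages are marked with type == "chat", skipping
    tool-call and system entries.
    """
    if not transcript:
        return None
    for entry in reversed(transcript):
        if entry.get("type") == "chat":
            return entry
    return None
```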
4. Streaming with OpenRouter
# Configure OpenRouter client
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY")
)

# Start streaming
streamTalkId = await atlantis.stream_start("lumi", "Lumi")

# Multi-turn conversation loop
while turn_count < max_turns:
    stream = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",
        messages=transcript,
        tools=tools,
        stream=True
    )

    # Process chunks in real-time
    for chunk in stream:
        if chunk.choices[0].delta.content:
            await atlantis.stream(chunk.choices[0].delta.content, streamTalkId)

        # Handle tool calls
        if chunk.choices[0].delta.tool_calls:
            # Execute tool and add result to transcript
            tool_result = await atlantis.client_command("%" + function_name, data=args)
            await atlantis.tool_result(function_name, tool_result)

await atlantis.stream_end(streamTalkId)
Production Example: Frames to Video
The frames_to_video function is an excellent example of a complex, production-ready function with custom auth, file uploads, and ComfyUI integration.
What It Does
Generates a smooth video transition between two images (first and last frame) using AI-powered animation.
Key Features
- Custom Auth: Uses @protected("demo_group") for group-based access control
- Dual File Upload: Custom HTML UI for uploading two images with live previews
- ComfyUI Integration: Connects to local ComfyUI server via WebSocket
- Progress Tracking: Real-time status updates during generation
- Video Player: Embedded HTML5 video player with base64-encoded result
Implementation Highlights
1. Protected Decorator with Custom Auth
import atlantis

@protected("demo_group")  # Only users in group can call
async def frames_to_video(prompt: str = "smooth transition"):
    """
    Creates a video from two images (first and last frame).
    Images are uploaded by user AFTER the tool call.
    """
    username = atlantis.get_caller() or "unknown_user"
    await atlantis.owner_log(f"frames_to_video called by {username}")
2. Custom HTML File Upload UI
The function creates a beautiful upload interface with:
- Two file input buttons with hover effects
- Live image previews that appear when files are selected
- "Generate Video" button that activates when both frames are uploaded
- Purple/pink theme matching ComfyUI aesthetic
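A stripped-down sketch of such an upload UI follows. The element IDs, styling, and the UPLOAD_ID value are illustrative, not the actual frames_to_video markup:

```python
UPLOAD_ID = "demo1234"  # illustrative; the real function generates a unique ID

upload_html = f"""
<div style="background: #1a1025; border: 2px solid #6c2b97; border-radius: 10px; padding: 16px;">
    <h4 style="color: #d67fff;">🎬 Frames to Video</h4>
    <input type="file" id="first_{UPLOAD_ID}" accept="image/*">
    <img id="preview_first_{UPLOAD_ID}" style="max-width: 45%; display: none;">
    <input type="file" id="last_{UPLOAD_ID}" accept="image/*">
    <img id="preview_last_{UPLOAD_ID}" style="max-width: 45%; display: none;">
    <button id="generate_{UPLOAD_ID}" disabled>Generate Video</button>
</div>
"""
# Rendered in the caller's UI with: await atlantis.client_html(upload_html)
```

Injected JavaScript (as in the customForm example) would wire the file inputs to the hidden previews and enable the button once both frames are selected.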
3. ComfyUI Workflow Integration
# Load workflow template
workflow_path = os.path.join(current_dir, "video_workflow.json")
with open(workflow_path, "r") as f:
    workflow = json.load(f)

# Randomize seeds for unique generation
workflow = update_workflow_seeds(workflow)

# Inject parameters
workflow["prompt_node"]["inputs"]["text"] = prompt

# Queue to ComfyUI and poll for completion
response = requests.post(f"http://{server_address}/prompt", json=payload)
prompt_id = response.json()["prompt_id"]

# Poll status until complete
while True:
    history = requests.get(f"http://{server_address}/history/{prompt_id}")
    if prompt_id in history.json():
        break
    await asyncio.sleep(5)
4. Robust Server Polling with Failure Detection
The correct way to poll ComfyUI (or any external service) is to track consecutive failures and bail out gracefully:
# Poll for completion with failure detection
max_wait_time = 3600  # 1 hour max
start_time = time.time()
consecutive_failures = 0
max_consecutive_failures = 5  # Bail after 5 failures (~25 seconds)

while time.time() - start_time < max_wait_time:
    try:
        history_response = requests.get(
            f"http://{server_address}/history/{prompt_id}",
            timeout=30
        )
        if history_response.status_code == 200:
            history_data = history_response.json()
            consecutive_failures = 0  # Reset on success
            if prompt_id in history_data:
                break  # Job complete!
        elif history_response.status_code == 404:
            consecutive_failures = 0  # 404 is normal while processing
        else:
            consecutive_failures += 1
    except requests.RequestException as e:
        consecutive_failures += 1
        logger.warning(f"Request error: {e}")

    if consecutive_failures >= max_consecutive_failures:
        await atlantis.client_log(
            "❌ ComfyUI server appears to be down"
        )
        return  # Exit gracefully

    await asyncio.sleep(5)
5. File Upload with studioClient.sendRequest
For file uploads in ComfyUI integrations, use studioClient.sendRequest("engage", ...) combined with atlantis.client_upload():
// File upload button click handler
sendButton.addEventListener('click', async function() {
    const base64Content = await readFileAsBase64(file);

    // Use studioClient.sendRequest, NOT sendChatter!
    await studioClient.sendRequest("engage", {
        accessToken: "{UPLOAD_ID}",  // Matches callback key
        mode: "upload",
        content: "not used",
        data: {
            base64Content: base64Content,
            filename: file.name,
            filetype: file.type
        }
    });
});
# Register upload callback with atlantis.client_upload()
async def upload_callback(filename, filetype, base64Content):
    await atlantis.client_log("Processing uploaded file...")
    await process_image(filename, filetype, base64Content)

# Key must match JavaScript accessToken
await atlantis.client_upload(uploadId, upload_callback)
6. Embedded Video Player
The function encodes the generated video as base64 and creates an HTML5 video player that appears directly in the user's interface:
video_html = f"""
<div style="border: 2px solid #6c2b97; border-radius: 10px; ...">
    <h4>🎬 Generated Video from Frames</h4>
    <video controls autoplay loop muted>
        <source src="data:video/mp4;base64,{video_base64}" type="video/mp4">
    </video>
</div>
"""
await atlantis.client_html(video_html)
Auth Function Example
import atlantis

@visible
async def demo_group(user: str):
    """
    Protection function - checks if user is in the demo group.

    Args:
        user: username of the user trying to access the function

    Returns:
        True if authorized, False otherwise
    """
    allowed_users = [
        "alice",
        "bob"
    ]
    return user in allowed_users
Full source code: python-server/dynamic_functions/FilmFromImage/frames_to_video.py
Example: Marketing Agent
The Marketing suite demonstrates automated social media content generation with AI and multi-platform posting.
Features
- Brand Management: Create and store multiple brand profiles with voice/values
- AI Content Generation: Platform-optimized posts using Gemini AI
- Multi-Platform Support: Twitter/X, LinkedIn, Facebook, Instagram
- Content Repurposing: Turn blog posts into multiple social posts
Key Functions
Create Brand Profile
@visible
async def create_brand_config():
    """
    Create/update brand profile with custom form UI.
    Stores: brand_id, name, description, target_audience,
    brand_voice, key_messages, products/services, etc.
    """
    # Renders HTML form with all brand fields
    # Saves to marketing_brands.json
Generate Social Post
@visible
async def generate_social_post(
    brand_id: str,
    topic: str,
    platform: str,
    tone: str = None
):
    """
    Generate platform-optimized social media post.
    Platforms: twitter, linkedin, instagram, facebook
    Uses Gemini AI with brand context injection.
    """
    # Load brand profile
    brand = load_brand(brand_id)

    # Build prompt with brand context
    prompt = build_prompt(brand, topic, platform, tone)

    # Call Gemini CLI
    post = await gemini_generate(prompt)

    return post
Platform Best Practices (Built-In)
- Twitter/X: 100-280 chars, 1-2 hashtags, front-load key info
- LinkedIn: 800-1600 chars, first 210 critical, 3-5 hashtags
- Instagram: 138-150 chars shown, 5-10 hashtags
- Facebook: 40-80 chars (shorter is better!), 1-2 hashtags
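The limits above could be encoded as a simple lookup used to validate generated posts before publishing. A sketch (the actual validation code is not shown in this excerpt; the Instagram hard cap of 2200 is the platform's, not the 138-150 display cutoff):

```python
# Recommended character ranges per platform, mirroring the built-in best practices
PLATFORM_LIMITS = {
    "twitter": {"min": 100, "max": 280, "hashtags": (1, 2)},
    "linkedin": {"min": 800, "max": 1600, "hashtags": (3, 5)},
    "instagram": {"min": 138, "max": 2200, "hashtags": (5, 10)},
    "facebook": {"min": 40, "max": 80, "hashtags": (1, 2)},
}

def check_post_length(platform: str, post: str) -> bool:
    """Return True if the post fits the platform's recommended length range."""
    limits = PLATFORM_LIMITS[platform.lower()]
    return limits["min"] <= len(post) <= limits["max"]
```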
Example: Bug Tracker with AI Resolution
A complete bug tracking system with workflows for users, managers, developers, testers, and AI assistants.
Roles & Workflows
Users
- report_bug_chrome() - Report bugs with auto-screenshot capture and system info
Managers
- manage_bug_reports() - Triage bugs, set severity/category
- assign_bugs_interactive() - Assign to developers or AI
- team_bug_dashboard() - View team workload
Developers
- my_assigned_bugs_html() - View assigned bugs with full details
- update_my_bug_progress() - Update status and add notes
Testers
- bugs_ready_for_testing() - Verify fixes, resolve or send back
- audit_resolved_bugs() - View resolved bug history
AI Assistants
- get_bugs_for_ai() - Get bugs as structured JSON
- get_bug_details(bug_id) - Get full bug information
- ai_fix_bug(bug_id, fix_notes) - Mark as fixed after code changes
Bug Lifecycle
AI Integration Example
# 1. AI gets assigned bugs
bugs = await get_bugs_for_ai(status="Assigned", severity="Critical")

for bug in bugs:
    # 2. Get full details
    details = await get_bug_details(bug["bug_id"])

    # 3. Analyze code and make fixes
    fix = await analyze_and_fix(details)

    # 4. Mark as fixed
    await ai_fix_bug(
        bug["bug_id"],
        "Fixed null pointer in user_service.py:142"
    )
Complete documentation: dynamic_functions/atlantis-mcp-function-examples/bug_reports/README.bug_report.md