API Reference
HexaClaw gives you a single API key for every AI service your agent needs: LLM completions, web search, image generation, text-to-speech, speech-to-text, embeddings, vector storage, browser automation, and security scanning.
Base URL
https://api.hexaclaw.com
All endpoints use HTTPS. Requests and responses are JSON unless otherwise noted. The API is OpenAI-compatible for chat completions, embeddings, TTS, and STT -- drop in your existing OpenAI SDK code and change the base URL.
Authentication
Include your API key in the Authorization header as a Bearer token.
Authorization: Bearer hx_live_YOUR_API_KEY
Auth methods
1. API Key (recommended)
Create keys in the dashboard or via the POST /v1/keys endpoint. Keys start with hx_live_ and are 48 characters.
2. Device Token
Used by the HexaClaw CLI. Provision via POST /v1/device/provision, then link to your account with POST /v1/device/link.
3. Firebase ID Token
For web dashboard and key management endpoints. Pass a Firebase Auth ID token in the same Bearer header.
Response headers
Every API response includes useful metadata headers:
| Header | Description |
|---|---|
X-Generation-Id | Unique ID for this generation (use with GET /v1/generation) |
X-Model-Requested | The model ID you requested |
X-Model-Used | The actual model used (may differ if alias was used) |
X-RateLimit-Limit | Your rate limit ceiling for this window |
X-RateLimit-Remaining | Requests remaining in current window |
X-RateLimit-Reset | Seconds until the rate limit resets |
Idempotency-Key | Echo of your Idempotency-Key header (if sent) |
X-Idempotent-Replayed | Set to 'true' if response was a cached replay |
Idempotency
Include an Idempotency-Key header to safely retry requests. If the same key is sent again within 24 hours, the cached response is returned instead of creating a new generation.
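A safe retry that reuses its Idempotency-Key can be sketched with the standard library alone; the payload and model below are illustrative, not required values:

```python
# Sketch: build a chat-completion request carrying an Idempotency-Key.
# Reusing the same key within 24 hours replays the cached response
# instead of creating (and billing) a new generation.
import json
import uuid
import urllib.request

API_URL = "https://api.hexaclaw.com/v1/chat/completions"

def build_request(api_key, payload, idempotency_key=None):
    """Return (request, key); pass the same key back in when retrying."""
    key = idempotency_key or str(uuid.uuid4())
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "Idempotency-Key": key,
        },
        method="POST",
    )
    return req, key

req, key = build_request("hx_live_YOUR_KEY", {
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello"}],
})
# Send with urllib.request.urlopen(req); on a retry, pass the same `key`
# back into build_request and check the X-Idempotent-Replayed header.
```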
Bring Your Own Key (BYOK)
Pass your own provider API key via the X-Provider-Key header to route directly to the provider. BYOK requests are charged a 5% platform fee (minimum 1 credit) instead of the full credit cost.
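A minimal BYOK sketch using only the standard library; the chat payload and the Anthropic-style provider key shown are illustrative:

```python
# Sketch: the X-Provider-Key header carries your own provider credential,
# so HexaClaw bills only the 5% platform fee (minimum 1 credit).
import json
import urllib.request

def byok_request(hexaclaw_key, provider_key, payload):
    return urllib.request.Request(
        "https://api.hexaclaw.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {hexaclaw_key}",
            "X-Provider-Key": provider_key,  # your own provider key
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = byok_request("hx_live_YOUR_KEY", "sk-ant-YOUR_ANTHROPIC_KEY", {
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello"}],
})
# urllib.request.urlopen(req) routes directly to the provider.
```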
Quickstart
Send your first request in seconds. HexaClaw is OpenAI-compatible -- just change the base URL and API key.
curl -X POST https://api.hexaclaw.com/v1/chat/completions \
-H "Authorization: Bearer hx_live_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-6",
"messages": [{"role": "user", "content": "Hello, world!"}]
}'
from openai import OpenAI
client = OpenAI(
base_url="https://api.hexaclaw.com/v1",
api_key="hx_live_YOUR_KEY",
)
response = client.chat.completions.create(
model="claude-sonnet-4-6",
messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://api.hexaclaw.com/v1",
apiKey: "hx_live_YOUR_KEY",
});
const response = await client.chat.completions.create({
model: "claude-sonnet-4-6",
messages: [{ role: "user", content: "Hello, world!" }],
});
console.log(response.choices[0].message.content)
Models
Discover available models and their pricing. All tiers have access to all models.
/v1/models -- List all available models with pricing information.
/v1/models/:id -- Get details for a specific model.
Model aliases
You can use convenient short aliases instead of full model IDs. For example, claude-sonnet resolves to claude-sonnet-4-6.
| Alias | Resolves to |
|---|---|
claude-sonnet | claude-sonnet-4-6 |
claude-haiku | claude-haiku-4-5 |
claude-opus | claude-opus-4-6 |
gpt-4o | gpt-4.1 |
gpt-4o-mini | gpt-4.1-mini |
gemini-flash | gemini-2.5-flash |
gemini-pro | gemini-2.5-pro |
grok | grok-3 |
deepseek | deepseek-chat |
mistral | mistral-large-latest |
llama | llama-3.3-70b-versatile |
50+ aliases are supported in total. Use GET /v1/models to see the full list.
Chat Completions
OpenAI-compatible chat completions endpoint. Supports streaming, function calling, JSON mode, and vision across all providers.
/v1/chat/completionsCreate a chat completion. Routes to the appropriate provider based on model ID.
Request body
| Parameter | Type | Required | Description |
|---|---|---|---|
model | string | Required | Model ID (e.g. claude-sonnet-4-6, gpt-4.1, gemini-2.5-flash) |
messages | array | Required | Array of message objects with role and content |
stream | boolean | Default: false | Enable server-sent events streaming |
temperature | number | Default: 1.0 | Sampling temperature (0-2) |
max_tokens | integer | Optional | Maximum tokens to generate |
tools | array | Optional | Function calling tools (OpenAI format) |
response_format | object | Optional | Set to {"type":"json_object"} for JSON mode |
curl -X POST https://api.hexaclaw.com/v1/chat/completions \
-H "Authorization: Bearer hx_live_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"stream": true,
"messages": [{"role": "user", "content": "Write a haiku about APIs"}]
}'
Messages (Anthropic Native)
Native Anthropic Messages API for Claude models. Use this if you need Anthropic-specific features like extended thinking or cache control. Also supports x-api-key header authentication (Anthropic SDK convention). The anthropic-version and anthropic-beta headers are forwarded to the upstream provider.
/v1/messagesCreate a message using the Anthropic Messages API format.
Request body
| Parameter | Type | Required | Description |
|---|---|---|---|
model | string | Required | Claude model ID (claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5) |
messages | array | Required | Array of message objects (Anthropic format) |
max_tokens | integer | Required | Maximum tokens to generate |
stream | boolean | Default: false | Enable SSE streaming |
system | string | Optional | System prompt |
temperature | number | Default: 1.0 | Sampling temperature (0-1) |
import anthropic
client = anthropic.Anthropic(
base_url="https://api.hexaclaw.com/v1",
api_key="hx_live_YOUR_KEY",
)
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
messages=[{"role": "user", "content": "Explain quantum computing"}],
)
print(message.content[0].text)
Web Search
/v1/search (1 credit/query) -- Search the web using Brave Search. Returns structured results with titles, URLs, and descriptions.
Request body
| Parameter | Type | Required | Description |
|---|---|---|---|
query | string | Required | Search query |
count | integer | Default: 10 | Number of results (1-20) |
curl -X POST https://api.hexaclaw.com/v1/search \
-H "Authorization: Bearer hx_live_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"query": "latest AI research papers 2026"}'
Image Generation
Generate images using Flux models via fal.ai. Three quality tiers available.
/v1/images/generate (1-6 credits) -- Generate an image from a text prompt.
Request body
| Parameter | Type | Required | Description |
|---|---|---|---|
prompt | string | Required | Image description |
model | string | Default: schnell | Quality tier: "schnell" (1cr), "dev" (3cr), or "pro" (6cr) |
size | string | Default: 1024x1024 | Image dimensions |
curl -X POST https://api.hexaclaw.com/v1/images/generate \
-H "Authorization: Bearer hx_live_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"prompt": "A cyberpunk city at sunset", "model": "dev"}'
Text-to-Speech
/v1/audio/speech (1-2 credits/1K chars) -- Convert text to speech using OpenAI TTS. Returns audio data.
Request body
| Parameter | Type | Required | Description |
|---|---|---|---|
input | string | Required | Text to convert to speech |
model | string | Default: tts-1 | "tts-1" (1cr/1K chars) or "tts-1-hd" (2cr/1K chars) |
voice | string | Default: alloy | Voice: alloy, echo, fable, onyx, nova, shimmer |
response_format | string | Default: mp3 | Output format: mp3, opus, aac, flac |
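A minimal TTS sketch with the standard library, assuming the JSON body mirrors the parameters above; the voice and output filename are examples:

```python
# Sketch: POST /v1/audio/speech and write the returned audio bytes to disk.
import json
import urllib.request

TTS_URL = "https://api.hexaclaw.com/v1/audio/speech"

def tts_payload(text, model="tts-1", voice="nova", fmt="mp3"):
    return {"model": model, "input": text, "voice": voice,
            "response_format": fmt}

def speak(api_key, text, out_path="speech.mp3"):
    req = urllib.request.Request(
        TTS_URL,
        data=json.dumps(tts_payload(text)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())  # body is raw audio, not JSON

# speak("hx_live_YOUR_KEY", "Hello from HexaClaw")  # 1 credit per 1K chars
```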
Speech-to-Text
/v1/audio/transcriptions (1 credit/min) -- Transcribe audio to text using Whisper. Accepts multipart/form-data.
Request body (multipart/form-data)
| Parameter | Type | Required | Description |
|---|---|---|---|
file | file | Required | Audio file (mp3, wav, webm, m4a, etc.) |
model | string | Default: whisper-1 | Transcription model |
language | string | Optional | ISO-639-1 language code (auto-detected if omitted)
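The multipart/form-data body can be built by hand with the standard library; the filename and byte string below are placeholders:

```python
# Sketch: a stdlib-only multipart upload for POST /v1/audio/transcriptions.
import io
import urllib.request
import uuid

def build_multipart(file_name, audio_bytes, model="whisper-1"):
    """Return (body, content_type) for a transcription upload."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    # Plain text field: the model name.
    buf.write(f'--{boundary}\r\nContent-Disposition: form-data; '
              f'name="model"\r\n\r\n{model}\r\n'.encode())
    # File field: the audio payload.
    buf.write(f'--{boundary}\r\nContent-Disposition: form-data; '
              f'name="file"; filename="{file_name}"\r\n'
              f'Content-Type: application/octet-stream\r\n\r\n'.encode())
    buf.write(audio_bytes)
    buf.write(f"\r\n--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

body, ctype = build_multipart("meeting.mp3", b"...audio bytes...")
req = urllib.request.Request(
    "https://api.hexaclaw.com/v1/audio/transcriptions",
    data=body,
    headers={"Authorization": "Bearer hx_live_YOUR_KEY",
             "Content-Type": ctype},
    method="POST",
)
# The JSON response follows the OpenAI-compatible {"text": "..."} shape.
```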
Embeddings
Generate text embeddings for semantic search, clustering, and RAG pipelines. OpenAI-compatible endpoint.
/v1/embeddings (1-13 credits/1M tokens) -- Create embeddings for input text.
Request body
| Parameter | Type | Required | Description |
|---|---|---|---|
input | string or string[] | Required | Text to embed (single string or array)
model | string | Required | Model ID (see table below) |
Available embedding models
| Model | Provider | Dimensions | Credits/1M tokens |
|---|---|---|---|
text-embedding-3-small | OpenAI | 1,536 | 2 |
text-embedding-3-large | OpenAI | 3,072 | 13 |
gemini-embedding-001 | Google | 3,072 | 1
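A sketch that embeds a batch of strings and compares them locally; it assumes the OpenAI-compatible response shape, with each vector under `data[i].embedding`:

```python
# Sketch: POST /v1/embeddings, then rank results with local cosine similarity.
import json
import math
import urllib.request

def embed(api_key, texts, model="text-embedding-3-small"):
    req = urllib.request.Request(
        "https://api.hexaclaw.com/v1/embeddings",
        data=json.dumps({"model": model, "input": texts}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    out = json.loads(urllib.request.urlopen(req).read())
    # OpenAI-compatible response: {"data": [{"embedding": [...]}, ...]}
    return [item["embedding"] for item in out["data"]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# vecs = embed("hx_live_YOUR_KEY", ["query", "candidate doc"])
# score = cosine(vecs[0], vecs[1])
```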
Vector Storage
Managed vector database for RAG, semantic search, and long-term memory. Each user gets an isolated namespace.
/v1/vectors/upsert (1 credit/1K vectors) -- Insert or update vectors. Auto-creates the collection on first use.
/v1/vectors/query (free) -- Search for similar vectors by embedding. Returns scored results with payloads.
/v1/vectors/delete (free) -- Delete vectors by IDs or filter.
Upsert request body
| Parameter | Type | Required | Description |
|---|---|---|---|
points | array | Required | Array of {id, vector, payload?} objects. Max 10,000 per request. |
curl -X POST https://api.hexaclaw.com/v1/vectors/upsert \
-H "Authorization: Bearer hx_live_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"points": [
{"id": "doc-1", "vector": [0.1, 0.2, ...], "payload": {"text": "Hello world"}},
{"id": "doc-2", "vector": [0.3, 0.4, ...], "payload": {"text": "Goodbye world"}}
]
}'
Query request body
| Parameter | Type | Required | Description |
|---|---|---|---|
vector | number[] | Required | Query embedding vector |
top_k | integer | Default: 10 | Number of results (max 100) |
filter | object | Optional | Qdrant filter object for metadata filtering |
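A hedged query sketch: the `must`/`match` structure follows the Qdrant filter format the table above references, and the `source` metadata key is just an example:

```python
# Sketch: POST /v1/vectors/query with top_k and an optional metadata filter.
import json
import urllib.request

def query_payload(vector, top_k=10, source=None):
    payload = {"vector": vector, "top_k": top_k}
    if source is not None:
        # Qdrant-style filter: match points whose payload.source == source
        payload["filter"] = {
            "must": [{"key": "source", "match": {"value": source}}]
        }
    return payload

req = urllib.request.Request(
    "https://api.hexaclaw.com/v1/vectors/query",
    data=json.dumps(query_payload([0.1, 0.2, 0.3], top_k=3,
                                  source="docs")).encode(),
    headers={"Authorization": "Bearer hx_live_YOUR_KEY",
             "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) returns scored matches with their payloads.
```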
Browser Sessions
Launch headless browser sessions for web scraping, testing, and automation via Browserbase. Billed per minute of session time.
/v1/browser/sessions (2 credits/min) -- Create a new browser session. Returns a WebSocket connect URL.
/v1/browser/sessions/:id -- Get the status of a browser session.
/v1/browser/sessions/:id -- Terminate a browser session. Credits are billed on termination (minimum 1 minute).
Create session body
| Parameter | Type | Required | Description |
|---|---|---|---|
proxy | boolean | Default: false | Enable proxy for the session |
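A minimal sketch for creating a session. The docs say a WebSocket connect URL is returned but do not name the response field, so only the request side is shown:

```python
# Sketch: POST /v1/browser/sessions with the optional proxy flag.
import json
import urllib.request

def create_session_request(api_key, proxy=False):
    return urllib.request.Request(
        "https://api.hexaclaw.com/v1/browser/sessions",
        data=json.dumps({"proxy": proxy}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# session = json.loads(
#     urllib.request.urlopen(create_session_request("hx_live_YOUR_KEY")).read())
# Remember to terminate the session when done: billing happens on
# termination at 2 credits/min (minimum 1 minute).
```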
Guardian Security (Max plan)
AI security scanning for prompt injection, PII leaks, and malicious content. Guardian runs automatically on all Max plan requests. These endpoints are for explicit on-demand scanning.
/v1/guardian/scan/input (1 credit, Max only) -- Scan user input for prompt injection and data exfiltration attempts.
/v1/guardian/scan/output (1 credit, Max only) -- Scan LLM output for secrets, PII, and policy violations.
/v1/guardian/scan (1 credit, Max only) -- Full scan: input + output in a single request.
/v1/guardian/analyze (1 credit, Max only) -- Deep threat analysis with a detailed risk breakdown.
/v1/guardian/status (Max only) -- Check Guardian availability and capabilities.
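The request schema for these endpoints is not documented here, so the body used below ({"input": ...}) is an assumption, flagged again in the code:

```python
# Hedged sketch of an on-demand Guardian input scan (Max plan only).
import json
import urllib.request

def scan_input_request(api_key, text):
    return urllib.request.Request(
        "https://api.hexaclaw.com/v1/guardian/scan/input",
        data=json.dumps({"input": text}).encode(),  # field name: assumption
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# verdict = json.loads(urllib.request.urlopen(
#     scan_input_request("hx_live_YOUR_KEY", untrusted_user_text)).read())
# Gate the LLM call on the verdict before forwarding untrusted input.
```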
API Key Management
Create, list, update, and deactivate API keys. These endpoints require Firebase ID token authentication (not API key auth).
/v1/key -- Introspect the current API key (returns key metadata, scopes, and owner info).
/v1/keys -- Create a new API key with an optional name and permission scopes.
/v1/keys -- List all active API keys for your account.
/v1/keys/:id -- Update key name, scopes, or rate limits.
/v1/keys/:id -- Deactivate an API key (cannot be undone).
Usage & Credits
Track credit balance, usage history, and spending across models and services.
/v1/usage -- Current credit balance and usage summary.
/v1/usage/history -- Daily usage history with per-service breakdown.
/v1/usage/summary -- Dashboard summary widget data (aggregated stats).
/v1/usage/by-model -- Usage breakdown by model. Supports ?from= and ?to= date filters.
/v1/usage/by-service -- Usage breakdown by service type. Supports ?from= and ?to= date filters.
/v1/credits/ledger -- Detailed credit transaction ledger.
Pricing
1 credit = $0.01 USD. Credits are deducted per request based on token usage.
LLM Models
| Model | Name | Input (credits/1M tokens) | Output (credits/1M tokens) |
|---|---|---|---|
| Anthropic | |||
claude-opus-4-6 | Claude Opus 4.6 | 650 | 3,250 |
claude-sonnet-4-6 | Claude Sonnet 4.6 | 390 | 1,950 |
claude-haiku-4-5 | Claude Haiku 4.5 | 100 | 500 |
| OpenAI | |||
gpt-4.1 | GPT-4.1 | 260 | 1,040 |
gpt-4.1-mini | GPT-4.1 Mini | 40 | 160 |
gpt-4.1-nano | GPT-4.1 Nano | 10 | 40 |
o3 | o3 | 260 | 1,040 |
o4-mini | o4-mini | 143 | 572 |
| Google | |||
gemini-2.5-flash-lite | Gemini 2.5 Flash Lite | 10 | 40 |
gemini-2.5-flash | Gemini 2.5 Flash | 15 | 60 |
gemini-2.5-pro | Gemini 2.5 Pro | 163 | 1,300 |
gemini-3-flash-preview | Gemini 3.0 Flash | 65 | 390 |
| xAI | |||
grok-3 | Grok 3 | 390 | 1,950 |
grok-3-fast | Grok 3 Fast | 650 | 3,250 |
grok-3-mini | Grok 3 Mini | 39 | 65 |
grok-3-mini-fast | Grok 3 Mini Fast | 78 | 520 |
| DeepSeek | |||
deepseek-chat | DeepSeek V3 | 18 | 36 |
deepseek-reasoner | DeepSeek R1 | 72 | 290 |
| Mistral | |||
mistral-large-latest | Mistral Large | 260 | 780 |
mistral-small-latest | Mistral Small | 13 | 39 |
codestral-latest | Codestral | 39 | 117 |
| Groq | |||
llama-3.3-70b-versatile | Llama 3.3 70B | 78 | 103 |
llama-4-scout-17b-16e-instruct | Llama 4 Scout | 15 | 15 |
llama-4-maverick-17b-128e-instruct | Llama 4 Maverick | 65 | 100 |
qwen-qwq-32b | Qwen QwQ 32B | 39 | 52 |
gemma2-9b-it | Gemma 2 9B | 3 | 3 |
Services
| Service | Cost | Unit |
|---|---|---|
| Web Search | 1 credit | Per query |
| Image Gen (schnell) | 1 credit | Per image |
| Image Gen (dev) | 3 credits | Per image |
| Image Gen (pro) | 6 credits | Per image |
| TTS (standard) | 1 credit | Per 1K characters |
| TTS (HD) | 2 credits | Per 1K characters |
| Speech-to-Text | 1 credit | Per minute |
| Browser Session | 2 credits | Per minute |
| Embeddings (small) | 2 credits | Per 1M tokens |
| Embeddings (large) | 13 credits | Per 1M tokens |
| Embeddings (gemini) | 1 credit | Per 1M tokens |
| Vectors Upsert | 1 credit | Per 1K vectors |
| Vectors Query | Free | -- |
| Guardian Scan | 1 credit | Per scan |
Rate Limits
Rate limits are applied per API key. Exceeding limits returns HTTP 429.
| Tier | Req/min | Req/day | Concurrent | Monthly Credits |
|---|---|---|---|---|
| Trial | 30 | 500 | 3 | 200 |
| Pro ($19.99/mo) | 60 | 2,000 | 5 | 2,000 |
| Max ($49/mo) | 120 | 10,000 | 10 | 5,000 |
| Enterprise | 600 | 100,000 | 50 | 50,000 |
Rate limit headers are included in every response: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset.
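A client can honor these headers when retrying after HTTP 429. The sketch below uses only the standard library and sleeps for X-RateLimit-Reset seconds; the fallback delay is our choice, not documented behavior:

```python
# Sketch: retry a request on HTTP 429, waiting out the rate-limit window.
import time
import urllib.error
import urllib.request

def wait_seconds(headers, fallback=1.0):
    """X-RateLimit-Reset is 'seconds until the window resets'."""
    try:
        return max(float(headers.get("X-RateLimit-Reset", fallback)), 0.0)
    except (TypeError, ValueError):
        return fallback

def send_with_retry(req, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            return urllib.request.urlopen(req)
        except urllib.error.HTTPError as e:
            if e.code != 429 or attempt == max_attempts - 1:
                raise  # not rate-limited, or out of attempts
            time.sleep(wait_seconds(e.headers))
```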
Errors
All errors return a consistent JSON format with an error object.
{
"error": {
"message": "Insufficient credits",
"type": "payment_required",
"code": 402
}
}
Status codes
| Code | Description |
|---|---|
200 | Success |
400 | Bad request -- invalid parameters |
401 | Unauthorized -- missing or invalid API key |
402 | Payment required -- insufficient credits |
403 | Forbidden -- API key lacks required scope |
404 | Not found -- invalid endpoint or resource |
429 | Too many requests -- rate limit exceeded |
500 | Internal server error |
502 | Bad gateway -- upstream provider error |
503 | Service unavailable -- provider temporarily down |
504 | Gateway timeout -- provider request timed out |
SDKs & Libraries
HexaClaw is compatible with existing OpenAI and Anthropic SDKs. No custom SDK needed -- just change the base URL.
OpenAI Python SDK
pip install openai
from openai import OpenAI
client = OpenAI(base_url="https://api.hexaclaw.com/v1", api_key="hx_live_YOUR_KEY")
OpenAI Node.js SDK
npm install openai
import OpenAI from "openai";
const client = new OpenAI({ baseURL: "https://api.hexaclaw.com/v1", apiKey: "hx_live_YOUR_KEY" });
Anthropic Python SDK
pip install anthropic
import anthropic
client = anthropic.Anthropic(base_url="https://api.hexaclaw.com/v1", api_key="hx_live_YOUR_KEY")
cURL
curl https://api.hexaclaw.com/v1/chat/completions \
-H "Authorization: Bearer hx_live_YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{"model":"claude-sonnet-4-6","messages":[{"role":"user","content":"Hi"}]}'
CLI & Terminal
HexaClaw includes a full CLI for managing agents, channels, and cloud services from your terminal. Install it with a single command.
Installation
# One-line install (macOS / Linux)
curl -sSL https://hexaclaw.com/install-cli | bash
# Or install via npm
npm install -g hexaclaw
# Verify installation
hexaclaw --version
The installer writes your auth token, enables the HexaClaw cloud plugin, and optionally starts the gateway. You can also generate a pre-authenticated install command from the Dashboard.
Basic Commands
| Command | Description |
|---|---|
hexaclaw --version | Show installed version |
hexaclaw --help | List all available commands |
hexaclaw configure | Interactive setup wizard |
hexaclaw config get <key> | Read a config value |
hexaclaw channels status | Check connected channels and gateway health |
hexaclaw agents list | List configured agents and their models |
hexaclaw dashboard | Open the gateway web UI in your browser |
Configuration Files
All config lives in ~/.openclaw/:
| File | Purpose |
|---|---|
cloud-auth.json | Firebase auth tokens (auto-refreshed) |
hexaclaw.json | Cloud credits, spending cap, plugin settings |
openclaw.json | Core config (plugins, models, agent defaults) |
auth-profiles.json | Device auth profiles (optional) |
Agent Commands
Run AI agent turns directly from your terminal. The agent has access to 60+ skills, web search, image generation, file operations, scheduling, and more.
Quick Start
# Ask a question
hexaclaw agent --agent main --message "What is the capital of France?"
# Get JSON output (for scripting)
hexaclaw agent --agent main --message "What is 2+2?" --json
# Set thinking level
hexaclaw agent --agent main --message "Analyze this code" --thinking medium
Web Search
The agent can search the web in real-time using the built-in web search tool:
hexaclaw agent --agent main \
--message "Search the web for the latest AI news and summarize the top 3 stories"
Image Generation
Generate images using fal.ai models (1-6 credits per image):
hexaclaw agent --agent main \
--message "Generate an image of a sunset over mountains, photorealistic style"
# The agent returns the image URL in its response
File Operations
Read, write, and edit files on your machine:
# Read a file
hexaclaw agent --agent main --message "Read ~/notes.txt and summarize it"
# Write a file
hexaclaw agent --agent main \
--message "Write a Python script that sorts a list of numbers to ~/sort.py"
Scheduling (Cron)
Set up recurring tasks via the built-in scheduler:
# List scheduled jobs
hexaclaw agent --agent main --message "List my cron jobs"
# Create a daily summary
hexaclaw agent --agent main \
--message "Schedule a daily job at 9am that searches for AI news and sends me a summary"
Agent Options
| Flag | Description |
|---|---|
--agent <id> | Target a specific agent (default: main) |
-m, --message <text> | The message to send to the agent |
--json | Output result as JSON (for scripting and pipelines) |
--thinking <level> | Thinking depth: off, minimal, low, medium, high |
--timeout <seconds> | Max execution time (default: 600s) |
--local | Run locally with your own API keys |
--deliver | Send the reply to a connected channel |
--session-id <id> | Continue an existing conversation session |
--verbose on | Enable detailed logging for debugging |
JSON Output Format
When using --json, the response includes the full result with metadata:
{
"runId": "ce6fcc29-c3f3-4653-ad16-9057c6d04ecf",
"status": "ok",
"summary": "completed",
"result": {
"payloads": [
{
"text": "The capital of France is Paris.",
"mediaUrl": null
}
],
"meta": {
"durationMs": 2270,
"agentMeta": {
"provider": "hexaclaw-cloud",
"model": "gemini-2.5-flash",
"usage": {
"input": 49977,
"output": 12,
"total": 49989
}
}
}
}
}
Common Use Cases
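The result envelope under "JSON Output Format" can be consumed from a script; `extract_text` below is our helper, not part of the CLI:

```python
# Sketch: drive `hexaclaw agent --json` and pull the reply text out of the
# result envelope ({"result": {"payloads": [{"text": ...}, ...]}}).
import json

def extract_text(raw):
    """Concatenate the text of every payload in a --json result."""
    result = json.loads(raw)
    return "\n".join(p["text"]
                     for p in result["result"]["payloads"]
                     if p.get("text"))

# Example wiring (requires the CLI to be installed and authenticated):
# import subprocess
# raw = subprocess.run(
#     ["hexaclaw", "agent", "--agent", "main",
#      "--message", "What is 2+2?", "--json"],
#     capture_output=True, text=True, check=True,
# ).stdout
# print(extract_text(raw))
```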
Real-world examples showing how to use HexaClaw for everyday tasks. All commands below assume you have HexaClaw installed and authenticated.
LLM Proxy (Basic Chat)
Send prompts to any model via the CLI. HexaClaw routes to the best provider automatically.
# Simple prompt
hexaclaw agent --local --message "What is 2+2? Reply with just the number." --agent main --thinking low
# Specific model (use any of the 25+ models)
hexaclaw agent --local --message "Hello, who are you?" --agent main --thinking low
Slash Commands (Interactive)
Run these inside the HexaClaw shell or in an agent conversation. Each command uses the service API and deducts credits accordingly.
/search "latest news on AI agents"
/imagine a cyberpunk cat wearing sunglasses --model schnell
/imagine a mountain landscape at sunset --model dev
/imagine photorealistic portrait of a robot --model pro
/speak "Hello, this is HexaClaw speaking" --voice nova
/transcribe /path/to/audio.mp3
/browse
/video a cat walking on the moon --model wan-2.1-turbo
/music upbeat electronic lo-fi beat --duration 10
/scrape https://example.com
/remember HexaClaw launched in March 2026
/recall when did HexaClaw launch
/email user@example.com "Test Subject" This is a test email
/budget
Agent with Tool Use
The agent automatically picks the right tool based on your request. No need to specify which service to use — the agent decides.
# Agent will use web search tool
hexaclaw agent --local --message "Search the web for the latest Claude model and summarize" --agent main --thinking low
# Agent will use image gen tool
hexaclaw agent --local --message "Generate an image of a futuristic city" --agent main --thinking low
# Agent factory — spawn a specialist
hexaclaw agent --local --message "Analyze my system and create a Python data analysis agent" --agent main --thinking low
Other Useful CLI Commands
# Check auth status
hexaclaw login --status
# List available models
hexaclaw models list
# List agents
hexaclaw agents list
# List plugins
hexaclaw plugins list
# Check config
hexaclaw config get agents.defaults.model
Gateway & TUI
The HexaClaw gateway is a local server that provides a web-based control UI, WebSocket connections for real-time chat, and manages channel integrations.
Starting the Gateway
# Start the gateway (runs as a background service)
hexaclaw gateway start
# Or run interactively
hexaclaw gateway run --bind loopback --port 18789
# Check gateway status
hexaclaw gateway status
Web Control UI
Once the gateway is running, open the control UI in your browser:
# Open the dashboard
hexaclaw dashboard
# Or navigate directly to:
# http://127.0.0.1:18789/
The control UI provides a chat interface, agent management, skill browser, and system overview -- all accessible from your browser at http://127.0.0.1:18789.
Available routes:
| Route | Description |
|---|---|
/overview | System status, health, and uptime |
/chat | Interactive chat with your AI agent |
/agents | Manage agents and their configurations |
/skills | Browse and manage installed skills |
Gateway Management
| Command | Description |
|---|---|
hexaclaw gateway start | Start the gateway as a background service |
hexaclaw gateway stop | Stop the running gateway |
hexaclaw gateway restart | Restart the gateway service |
hexaclaw gateway status | Show gateway process status and config |
hexaclaw gateway run | Run gateway in foreground (for debugging) |
Available Tools
The agent has access to a variety of built-in tools. Here are the most commonly used:
read / write / edit -- File operations on your machine
exec -- Run shell commands
web_search -- Search the web via Brave API
web_fetch -- Fetch and parse web pages
generate_image -- Create images via fal.ai
tts -- Text-to-speech audio
browser -- Control a headless browser
cron -- Schedule recurring tasks
message -- Send messages to channels
agent_factory -- Create and compose new agents
memory_search -- Search agent memory
sessions_spawn -- Spawn sub-agents
Skills
HexaClaw ships with 60+ skills across categories like productivity, development, communication, research, and automation. Skills extend the agent with domain-specific capabilities.
# The agent automatically loads skills based on your message
# Examples of skill-powered tasks:
# Email (via himalaya / gmail skill)
hexaclaw agent --agent main --message "Check my inbox for unread emails"
# GitHub (via github / gh-issues skill)
hexaclaw agent --agent main --message "List open PRs in my repo"
# Calendar (via calendar skill)
hexaclaw agent --agent main --message "What meetings do I have today?"
# Weather
hexaclaw agent --agent main --message "What's the weather in San Francisco?"
# Research
hexaclaw agent --agent main --message "Do a deep research on quantum computing trends"