---
title: OpenHands Cloud Workspace
description: Connect to OpenHands Cloud for fully managed sandbox environments with optional SaaS credential inheritance.
---

> A ready-to-run example is available [here](#ready-to-run-example)!

The `OpenHandsCloudWorkspace` demonstrates how to use the [OpenHands Cloud](https://app.all-hands.dev) to provision and manage sandboxed environments for agent execution. This provides a seamless experience with automatic sandbox provisioning, monitoring, and secure execution without managing your own infrastructure.

## Key Concepts


```python icon="python" focus={1-3}
result = workspace.execute_command(
    "echo 'Hello from OpenHands Cloud sandbox!' && pwd"
)
logger.info(f"Command completed: {result.exit_code}, {result.stdout}")
```

This verifies connectivity to the cloud sandbox and ensures the environment is ready.
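If provisioning is still in progress, you may want to retry the check a few times before giving up. A minimal sketch of such a retry loop follows; here `check` is a stand-in callable for something like `workspace.execute_command(...).exit_code` (the stand-in and its values are illustrative, not part of the SDK):

```python
import time

def wait_until_ready(check, attempts=5, delay=1.0):
    """Retry `check` until it returns exit code 0 or attempts run out."""
    for _ in range(attempts):
        if check() == 0:
            return True
        time.sleep(delay)
    return False

# Stand-in check that succeeds on the third try; in practice this would
# wrap a real sandbox command's exit code.
calls = iter([1, 1, 0])
print(wait_until_ready(lambda: next(calls), delay=0.0))  # True
```

This keeps transient startup delays from being treated as hard failures.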

### Inheriting SaaS Credentials

Instead of providing your own `LLM_API_KEY`, you can inherit the LLM configuration and secrets from your OpenHands Cloud account. This means you only need `OPENHANDS_CLOUD_API_KEY` — no separate LLM key required.

#### `get_llm()`

Fetches your account's LLM settings (model, API key, base URL) and returns a ready-to-use `LLM` instance:

```python icon="python" focus={2-3}
with OpenHandsCloudWorkspace(...) as workspace:
    llm = workspace.get_llm()
    agent = Agent(llm=llm, tools=get_default_tools())
```

You can override any parameter:

```python icon="python"
llm = workspace.get_llm(model="gpt-4o", temperature=0.5)
```

Under the hood, `get_llm()` calls `GET /api/v1/users/me?expose_secrets=true`, authenticating with two credentials: the Bearer token (from your Cloud API key) in the `Authorization` header, and the session key (unique per sandbox) in the `X-Session-API-Key` header.
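Conceptually, the request can be sketched as below. All values are placeholders, and the SDK builds and sends this call for you internally; you do not need to make it yourself:

```python
# Illustrative sketch of the request get_llm() makes. Keys below are
# placeholders, not real credentials.
CLOUD_API_KEY = "ohc-your-cloud-api-key"      # OPENHANDS_CLOUD_API_KEY
SESSION_API_KEY = "per-sandbox-session-key"   # issued when the sandbox starts

url = "https://app.all-hands.dev/api/v1/users/me?expose_secrets=true"
headers = {
    "Authorization": f"Bearer {CLOUD_API_KEY}",
    "X-Session-API-Key": SESSION_API_KEY,
}
# e.g. response = httpx.get(url, headers=headers)
# The response carries the model, base URL, and API key used to build the LLM.
print(headers["Authorization"])
```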

#### `get_secrets()`

Builds `LookupSecret` references for your SaaS-configured secrets. Raw values **never transit through the SDK client** — they are resolved lazily by the agent-server inside the sandbox:

```python icon="python" focus={2-3}
with OpenHandsCloudWorkspace(...) as workspace:
    secrets = workspace.get_secrets()
    conversation.update_secrets(secrets)
```
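The lazy-reference idea can be sketched with a small, hypothetical model. The class name, resolver, and store below are illustrative only (not the SDK's actual `LookupSecret` implementation), and in the real flow resolution happens on the agent-server inside the sandbox, not on the client:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LazySecretRef:
    """Hypothetical stand-in for a lazy secret reference."""
    name: str
    resolver: Callable[[str], str]  # invoked only when the value is needed

    def resolve(self) -> str:
        return self.resolver(self.name)

# Stand-in store; in the real flow the agent-server queries the Cloud API.
server_side_store = {"GITHUB_TOKEN": "ghp_example_value"}
ref = LazySecretRef("GITHUB_TOKEN", server_side_store.__getitem__)

print(ref.name)       # the client only ever handles the name
print(ref.resolve())  # the value is produced on demand
```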

You can also filter to specific secrets:

```python icon="python"
gh_secrets = workspace.get_secrets(names=["GITHUB_TOKEN"])
```

<Tip>
See the [SaaS Credentials example](#saas-credentials-example) below for a complete working example.
</Tip>

## Comparison with Other Workspace Types

| Feature | OpenHandsCloudWorkspace | APIRemoteWorkspace | DockerWorkspace |
| --- | --- | --- | --- |
```bash
uv run python examples/02_remote_agent_server/07_convo_with_cloud_workspace.py
```

## SaaS Credentials Example

<Note>
This example is available on GitHub: [examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py](https://github.com/OpenHands/software-agent-sdk/blob/main/examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py)
</Note>

This example demonstrates the simplified flow where your OpenHands Cloud account's LLM configuration and secrets are inherited automatically — no need to provide `LLM_API_KEY` separately:

```python icon="python" expandable examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py
"""Example: Inherit SaaS credentials via OpenHandsCloudWorkspace.

This example shows the simplified flow where your OpenHands Cloud account's
LLM configuration and secrets are inherited automatically — no need to
provide LLM_API_KEY separately.

Compared to 07_convo_with_cloud_workspace.py (which requires a separate
LLM_API_KEY), this approach uses:
- workspace.get_llm() → fetches LLM config from your SaaS account
- workspace.get_secrets() → builds lazy LookupSecret references for your secrets

Raw secret values never transit through the SDK client. The agent-server
inside the sandbox resolves them on demand.

Usage:
    uv run examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py

Requirements:
- OPENHANDS_CLOUD_API_KEY: API key for OpenHands Cloud (the only credential needed)

Optional:
- OPENHANDS_CLOUD_API_URL: Override the Cloud API URL (default: https://app.all-hands.dev)
- LLM_MODEL: Override the model from your SaaS settings
"""

import os
import time

from openhands.sdk import (
    Conversation,
    RemoteConversation,
    get_logger,
)
from openhands.tools.preset.default import get_default_agent
from openhands.workspace import OpenHandsCloudWorkspace


logger = get_logger(__name__)


cloud_api_key = os.getenv("OPENHANDS_CLOUD_API_KEY")
if not cloud_api_key:
    logger.error("OPENHANDS_CLOUD_API_KEY required")
    exit(1)

cloud_api_url = os.getenv("OPENHANDS_CLOUD_API_URL", "https://app.all-hands.dev")
logger.info(f"Using OpenHands Cloud API: {cloud_api_url}")

with OpenHandsCloudWorkspace(
    cloud_api_url=cloud_api_url,
    cloud_api_key=cloud_api_key,
) as workspace:
    # --- LLM from SaaS account settings ---
    # get_llm() calls GET /users/me?expose_secrets=true
    # (Bearer token + per-sandbox session key) and returns a
    # fully configured LLM instance.
    # Override any parameter: workspace.get_llm(model="gpt-4o")
    llm = workspace.get_llm()
    logger.info(f"LLM configured: model={llm.model}")

    # --- Secrets from SaaS account ---
    # get_secrets() fetches secret *names* (not values) and builds LookupSecret
    # references. Values are resolved lazily inside the sandbox.
    secrets = workspace.get_secrets()
    logger.info(f"Available secrets: {list(secrets.keys())}")

    # Build agent and conversation
    agent = get_default_agent(llm=llm, cli_mode=True)
    received_events: list = []
    last_event_time = {"ts": time.time()}

    def event_callback(event) -> None:
        received_events.append(event)
        last_event_time["ts"] = time.time()

    conversation = Conversation(
        agent=agent, workspace=workspace, callbacks=[event_callback]
    )
    assert isinstance(conversation, RemoteConversation)

    # Inject SaaS secrets into the conversation
    if secrets:
        conversation.update_secrets(secrets)
        logger.info(f"Injected {len(secrets)} secrets into conversation")

    # Build a prompt that exercises the injected secrets by asking the agent to
    # print the last 50% of each token — proves values resolved without leaking
    # full secrets in logs.
    secret_names = list(secrets.keys()) if secrets else []
    if secret_names:
        names_str = ", ".join(f"${name}" for name in secret_names)
        prompt = (
            f"For each of these environment variables: {names_str} — "
            "print the variable name and the LAST 50% of its value "
            "(i.e. the second half of the string). "
            "Then write a short summary into SECRETS_CHECK.txt."
        )
    else:
        # No secret was configured on OpenHands Cloud
        prompt = "Tell me, is there any secret configured for you?"

    try:
        conversation.send_message(prompt)
        conversation.run()

        while time.time() - last_event_time["ts"] < 2.0:
            time.sleep(0.1)

        cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
        print(f"EXAMPLE_COST: {cost}")
    finally:
        conversation.close()

logger.info("✅ Conversation completed successfully.")
logger.info(f"Total {len(received_events)} events received during conversation.")
```

```bash Running the SaaS Credentials Example
export OPENHANDS_CLOUD_API_KEY="your-cloud-api-key"
# Optional: override LLM model from your SaaS settings
# export LLM_MODEL="gpt-4o"
cd agent-sdk
uv run python examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py
```

## Next Steps

- **[API-based Sandbox](/sdk/guides/agent-server/api-sandbox)** - Connect to Runtime API service