# docs: add get_llm() and get_secrets() to OpenHands Cloud Workspace guide #401
**Status:** Merged

## Commits (4)
- `dc62b09` docs: add get_llm() and get_secrets() to OpenHands Cloud Workspace guide (openhands-agent)
- `bb4ec36` docs: update example filename saas_credentials → share_credentials (openhands-agent)
- `30bc164` docs: clarify session key is unique per sandbox (openhands-agent)
- `c125f0e` docs: sync code block with SDK example and clarify auth description (openhands-agent)
---
title: OpenHands Cloud Workspace
description: Connect to OpenHands Cloud for fully managed sandbox environments with optional SaaS credential inheritance.
---

> A ready-to-run example is available [here](#ready-to-run-example)!

The `OpenHandsCloudWorkspace` uses [OpenHands Cloud](https://app.all-hands.dev) to provision and manage sandboxed environments for agent execution. This provides a seamless experience with automatic sandbox provisioning, monitoring, and secure execution without managing your own infrastructure.

## Key Concepts

[…]
```python icon="python" focus={1-3}
result = workspace.execute_command(
    "echo 'Hello from OpenHands Cloud sandbox!' && pwd"
)
logger.info(f"Command completed: {result.exit_code}, {result.stdout}")
```

This verifies connectivity to the cloud sandbox and ensures the environment is ready.
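If you want to fail fast when the sandbox is not ready, the probe above can be wrapped in a small helper. This is a sketch: the helper name is ours, and it assumes only what the snippet above shows, namely that `execute_command` returns an object with `exit_code` and `stdout`.

```python
def check_sandbox(workspace) -> str:
    """Run the connectivity probe and raise if the sandbox is not ready.

    Hypothetical helper; relies only on execute_command returning an
    object with exit_code and stdout, as in the snippet above.
    """
    result = workspace.execute_command(
        "echo 'Hello from OpenHands Cloud sandbox!' && pwd"
    )
    if result.exit_code != 0:
        raise RuntimeError(f"Sandbox not ready (exit {result.exit_code})")
    return result.stdout
```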
### Inheriting SaaS Credentials

Instead of providing your own `LLM_API_KEY`, you can inherit the LLM configuration and secrets from your OpenHands Cloud account. This means you only need `OPENHANDS_CLOUD_API_KEY` — no separate LLM key required.

#### `get_llm()`

Fetches your account's LLM settings (model, API key, base URL) and returns a ready-to-use `LLM` instance:
```python icon="python" focus={2-3}
with OpenHandsCloudWorkspace(...) as workspace:
    llm = workspace.get_llm()
    agent = Agent(llm=llm, tools=get_default_tools())
```

You can override any parameter:

```python icon="python"
llm = workspace.get_llm(model="gpt-4o", temperature=0.5)
```
Under the hood, `get_llm()` calls `GET /api/v1/users/me?expose_secrets=true` with dual authentication: the Bearer token (from your Cloud API key) in the `Authorization` header and the session key (unique per sandbox) in the `X-Session-API-Key` header.

> **Collaborator:** I’m really not sure we should let the agent say “dual auth” when one is sent to the client via the other… 😿
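The header layout described above can be sketched as a plain function. The function name is hypothetical; the real SDK client assembles these headers internally.

```python
def build_dual_auth_headers(cloud_api_key: str, session_api_key: str) -> dict[str, str]:
    """Both credentials travel on the get_llm() request, per the description above."""
    return {
        # Account-level credential (your OPENHANDS_CLOUD_API_KEY)
        "Authorization": f"Bearer {cloud_api_key}",
        # Per-sandbox session key issued when the workspace is provisioned
        "X-Session-API-Key": session_api_key,
    }

headers = build_dual_auth_headers("cloud-key", "session-key")
# These headers would accompany a request such as:
#   GET {cloud_api_url}/api/v1/users/me?expose_secrets=true
```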
#### `get_secrets()`

Builds `LookupSecret` references for your SaaS-configured secrets. Raw values **never transit through the SDK client** — they are resolved lazily by the agent-server inside the sandbox:

> **Collaborator:** This is very interesting and cool!
```python icon="python" focus={2-3}
with OpenHandsCloudWorkspace(...) as workspace:
    secrets = workspace.get_secrets()
    conversation.update_secrets(secrets)
```

You can also filter to specific secrets:

```python icon="python"
gh_secrets = workspace.get_secrets(names=["GITHUB_TOKEN"])
```
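The lazy-resolution idea behind those references can be sketched in miniature. This is a hypothetical class, not the SDK's actual `LookupSecret` implementation: the point is only that the client side holds names plus a resolver, and the value is fetched on demand where the resolver runs.

```python
from typing import Callable

class LazySecretRef:
    """Holds a secret *name* plus a resolver; the value is fetched only on demand."""

    def __init__(self, name: str, resolver: Callable[[str], str]):
        self.name = name
        self._resolver = resolver

    def resolve(self) -> str:
        # In the real system this runs inside the sandbox (agent-server),
        # never in the SDK client.
        return self._resolver(self.name)

# The client-side view is just names, no values:
refs = {n: LazySecretRef(n, lambda n: f"<value-of-{n}>") for n in ["GITHUB_TOKEN"]}
print(list(refs.keys()))
```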
<Tip>
See the [SaaS Credentials example](#saas-credentials-example) below for a complete working example.
</Tip>

## Comparison with Other Workspace Types

| Feature | OpenHandsCloudWorkspace | APIRemoteWorkspace | DockerWorkspace |

[…]

```bash
uv run python examples/02_remote_agent_server/07_convo_with_cloud_workspace.py
```
## SaaS Credentials Example

<Note>
This example is available on GitHub: [examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py](https://github.com/OpenHands/software-agent-sdk/blob/main/examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py)
</Note>

This example demonstrates the simplified flow where your OpenHands Cloud account's LLM configuration and secrets are inherited automatically — no need to provide `LLM_API_KEY` separately:
```python icon="python" expandable examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py
"""Example: Inherit SaaS credentials via OpenHandsCloudWorkspace.

This example shows the simplified flow where your OpenHands Cloud account's
LLM configuration and secrets are inherited automatically — no need to
provide LLM_API_KEY separately.

Compared to 07_convo_with_cloud_workspace.py (which requires a separate
LLM_API_KEY), this approach uses:
- workspace.get_llm() → fetches LLM config from your SaaS account
- workspace.get_secrets() → builds lazy LookupSecret references for your secrets

Raw secret values never transit through the SDK client. The agent-server
inside the sandbox resolves them on demand.

Usage:
    uv run examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py

Requirements:
- OPENHANDS_CLOUD_API_KEY: API key for OpenHands Cloud (the only credential needed)

Optional:
- OPENHANDS_CLOUD_API_URL: Override the Cloud API URL (default: https://app.all-hands.dev)
- LLM_MODEL: Override the model from your SaaS settings
"""

import os
import time

from openhands.sdk import (
    Conversation,
    RemoteConversation,
    get_logger,
)
from openhands.tools.preset.default import get_default_agent
from openhands.workspace import OpenHandsCloudWorkspace


logger = get_logger(__name__)


cloud_api_key = os.getenv("OPENHANDS_CLOUD_API_KEY")
if not cloud_api_key:
    logger.error("OPENHANDS_CLOUD_API_KEY required")
    exit(1)

cloud_api_url = os.getenv("OPENHANDS_CLOUD_API_URL", "https://app.all-hands.dev")
logger.info(f"Using OpenHands Cloud API: {cloud_api_url}")

with OpenHandsCloudWorkspace(
    cloud_api_url=cloud_api_url,
    cloud_api_key=cloud_api_key,
) as workspace:
    # --- LLM from SaaS account settings ---
    # get_llm() calls GET /users/me?expose_secrets=true
    # (dual auth: Bearer + session key) and returns a
    # fully configured LLM instance.
    # Override any parameter: workspace.get_llm(model="gpt-4o")
    llm = workspace.get_llm()
    logger.info(f"LLM configured: model={llm.model}")

    # --- Secrets from SaaS account ---
    # get_secrets() fetches secret *names* (not values) and builds LookupSecret
    # references. Values are resolved lazily inside the sandbox.
    secrets = workspace.get_secrets()
    logger.info(f"Available secrets: {list(secrets.keys())}")

    # Build agent and conversation
    agent = get_default_agent(llm=llm, cli_mode=True)
    received_events: list = []
    last_event_time = {"ts": time.time()}

    def event_callback(event) -> None:
        received_events.append(event)
        last_event_time["ts"] = time.time()

    conversation = Conversation(
        agent=agent, workspace=workspace, callbacks=[event_callback]
    )
    assert isinstance(conversation, RemoteConversation)

    # Inject SaaS secrets into the conversation
    if secrets:
        conversation.update_secrets(secrets)
        logger.info(f"Injected {len(secrets)} secrets into conversation")

    # Build a prompt that exercises the injected secrets by asking the agent to
    # print the last 50% of each token — proves values resolved without leaking
    # full secrets in logs.
    secret_names = list(secrets.keys()) if secrets else []
    if secret_names:
        names_str = ", ".join(f"${name}" for name in secret_names)
        prompt = (
            f"For each of these environment variables: {names_str} — "
            "print the variable name and the LAST 50% of its value "
            "(i.e. the second half of the string). "
            "Then write a short summary into SECRETS_CHECK.txt."
        )
    else:
        # No secret was configured on OpenHands Cloud
        prompt = "Tell me, is there any secret configured for you?"

    try:
        conversation.send_message(prompt)
        conversation.run()

        while time.time() - last_event_time["ts"] < 2.0:
            time.sleep(0.1)

        cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
        print(f"EXAMPLE_COST: {cost}")
    finally:
        conversation.close()

logger.info("✅ Conversation completed successfully.")
logger.info(f"Total {len(received_events)} events received during conversation.")
```
```bash Running the SaaS Credentials Example
export OPENHANDS_CLOUD_API_KEY="your-cloud-api-key"
# Optional: override LLM model from your SaaS settings
# export LLM_MODEL="gpt-4o"
cd agent-sdk
uv run python examples/02_remote_agent_server/10_cloud_workspace_share_credentials.py
```
## Next Steps

- **[API-based Sandbox](/sdk/guides/agent-server/api-sandbox)** - Connect to Runtime API service
> **Review comment:** Fantastic job on supporting this override