18 changes: 9 additions & 9 deletions docs/services/index.md
@@ -2,14 +2,14 @@
 
 The pgEdge Control Plane lets you run services alongside your
 databases. Services are applications that attach to a database, run on
-any host in the cluster, and connect via automatically-managed
-database credentials.
+any host in the cluster, and connect using a database user you specify
+with the `connect_as` field.
 
 ## What Are Supporting Services?
 
 A supporting service is an application that runs alongside a database.
-Each service instance runs on a single host and receives its own set of
-database credentials scoped to that instance. The Control Plane supports
+Each service instance runs on a single host and connects to the database
+using the credentials of the `connect_as` user. The Control Plane supports
 the following service types:
 
 - The [pgEdge Postgres MCP Server](mcp.md) connects AI agents and
@@ -25,9 +25,9 @@ the following service types:
 
 When you add a service to a database, the Control Plane creates one
 service instance per host listed in the service's `host_ids`. Each
-instance runs on a single host and receives its own database
-credentials. Services can run on any host in the cluster; they do not
-need to be co-located with database instances.
+instance runs on a single host and connects to the database using the
+credentials of the `connect_as` user. Services can run on any host in
+the cluster; they do not need to be co-located with database instances.
 
 The following table describes the lifecycle states for service
 instances:
@@ -52,8 +52,8 @@ deployment patterns are common:
 with no database instance, which isolates the service workload from
 the database.
 - In a multiple-instances topology, one service instance runs per host
-for redundancy or regional proximity; each instance receives its own
-credentials and connects to the database independently.
+for redundancy or regional proximity; each instance uses the same
+`connect_as` credentials and connects to the database independently.
 
 In the following example, the service runs on the same host as the
 database node (`host-1`):
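The revised model ties each service to a database user declared in the same request. A minimal sketch of checking that invariant before submitting a spec (a hypothetical client-side helper, not part of the Control Plane API; only the field names come from the documented spec):

```python
# Hypothetical pre-submit check: every service's connect_as must name
# a user declared in database_users. Field names follow the documented
# spec; the helper itself is illustrative, not a real API.
def validate_connect_as(spec: dict) -> list[str]:
    declared = {u["username"] for u in spec.get("database_users", [])}
    errors = []
    for svc in spec.get("services", []):
        user = svc.get("connect_as")
        if user not in declared:
            errors.append(
                f"service {svc.get('service_id')!r}: connect_as "
                f"{user!r} is not in database_users"
            )
    return errors

spec = {
    "database_users": [{"username": "mcp_user", "password": "changeme",
                        "db_owner": True, "attributes": ["LOGIN"]}],
    "services": [{"service_id": "mcp-server", "service_type": "mcp",
                  "connect_as": "mcp_user"}],
}
print(validate_connect_as(spec))  # → []
```

An empty list means every service resolves to a declared user; anything else is worth fixing before the update request is sent.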
13 changes: 11 additions & 2 deletions docs/services/managing.md
@@ -46,15 +46,24 @@ database with one MCP service instance:
   "nodes": [
     { "name": "n1", "host_ids": ["host-1"] }
   ],
+  "database_users": [
+    {
+      "username": "mcp_user",
+      "password": "changeme",
+      "db_owner": true,
+      "attributes": ["LOGIN"]
+    }
+  ],
   "services": [
     {
       "service_id": "mcp-server",
       "service_type": "mcp",
       "version": "latest",
       "host_ids": ["host-1"],
       "port": 8080,
+      "connect_as": "mcp_user",
       "config": {
-        "llm_enabled": true,
+        "llm_enabled": true,
         "llm_provider": "anthropic",
         "llm_model": "claude-sonnet-4-5",
         "anthropic_api_key": "sk-ant-..."
@@ -161,7 +170,7 @@ use a different model:
 
 To remove a service, submit an update request that omits the service
 from the `services` array. The Control Plane stops and deletes all
-service instances for that service and revokes its database credentials.
+service instances for that service.
 
 !!! warning
 
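Removal works by omission rather than an explicit delete call: the next update simply leaves the service out of the `services` array. A sketch of building such an update payload (hypothetical helper; only the `services` and `service_id` fields are taken from the documented spec):

```python
# Removing a service means resubmitting the spec without it; per the
# updated docs, the Control Plane then stops and deletes all instances
# of any service no longer present in the services array.
def drop_service(spec: dict, service_id: str) -> dict:
    updated = dict(spec)
    updated["services"] = [
        s for s in spec.get("services", [])
        if s.get("service_id") != service_id
    ]
    return updated

spec = {"services": [{"service_id": "mcp-server", "service_type": "mcp"}]}
print(drop_service(spec, "mcp-server")["services"])  # → []
```

Note the original spec is left untouched, so the caller can diff the before/after payloads prior to submitting.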
42 changes: 39 additions & 3 deletions docs/services/mcp.md
@@ -10,8 +10,8 @@ project.
 ## Overview
 
 The Control Plane provisions an MCP server container on each specified
-host. The server connects to the database using automatically-managed
-credentials. AI agents call the server's tools to query data, inspect
+host. The server connects to the database using the credentials of the
+`connect_as` user. AI agents call the server's tools to query data, inspect
 schemas, run EXPLAIN plans, and perform vector similarity searches.
 
 See [Managing Services](managing.md) for instructions on adding,
@@ -49,7 +49,7 @@ security configuration fields:
 
 | Field | Type | Default | Description |
 |------------------|---------|---------|-------------|
-| `allow_writes` | boolean | `false` | When `true`, the service connects using the read-write database user (`svc_{service_id}_rw`) and the `query_database` tool can execute write statements. When `false`, the read-only user (`svc_{service_id}_ro`) is used and write statements are rejected at the database level. |
+| `allow_writes` | boolean | `false` | When `true`, the `query_database` tool can execute write statements and the service connects to the primary node. When `false`, write statements are rejected by the MCP server and the service prefers a standby node. |
 | `init_token` | string | — | A bootstrap token for initial access to the MCP server. See [Bootstrapping](#bootstrapping). |
 | `init_users` | array | — | Initial user accounts to create on the MCP server. See [Bootstrapping](#bootstrapping). |
@@ -161,13 +161,22 @@ you connect via an MCP client that supplies its own LLM:
   "nodes": [
     { "name": "n1", "host_ids": ["host-1"] }
   ],
+  "database_users": [
+    {
+      "username": "mcp_user",
+      "password": "changeme",
+      "db_owner": true,
+      "attributes": ["LOGIN"]
+    }
+  ],
   "services": [
     {
       "service_id": "mcp-server",
       "service_type": "mcp",
       "version": "latest",
       "host_ids": ["host-1"],
       "port": 8080,
+      "connect_as": "mcp_user",
       "config": {
         "init_token": "my-bootstrap-token",
         "init_users": [
@@ -197,13 +206,22 @@ Anthropic as the provider:
   "nodes": [
     { "name": "n1", "host_ids": ["host-1"] }
   ],
+  "database_users": [
+    {
+      "username": "mcp_user",
+      "password": "changeme",
+      "db_owner": true,
+      "attributes": ["LOGIN"]
+    }
+  ],
   "services": [
     {
       "service_id": "mcp-server",
       "service_type": "mcp",
       "version": "latest",
       "host_ids": ["host-1"],
       "port": 8080,
+      "connect_as": "mcp_user",
       "config": {
         "llm_enabled": true,
         "llm_provider": "anthropic",
@@ -237,13 +255,22 @@ OpenAI and configures embedding support:
   "nodes": [
     { "name": "n1", "host_ids": ["host-1"] }
   ],
+  "database_users": [
+    {
+      "username": "mcp_user",
+      "password": "changeme",
+      "db_owner": true,
+      "attributes": ["LOGIN"]
+    }
+  ],
   "services": [
     {
       "service_id": "mcp-server",
       "service_type": "mcp",
       "version": "latest",
       "host_ids": ["host-1"],
       "port": 8080,
+      "connect_as": "mcp_user",
       "config": {
         "llm_enabled": true,
         "llm_provider": "openai",
@@ -280,13 +307,22 @@ to use a self-hosted Ollama server for both the LLM and embeddings:
   "nodes": [
     { "name": "n1", "host_ids": ["host-1"] }
   ],
+  "database_users": [
+    {
+      "username": "mcp_user",
+      "password": "changeme",
+      "db_owner": true,
+      "attributes": ["LOGIN"]
+    }
+  ],
   "services": [
     {
       "service_id": "mcp-server",
       "service_type": "mcp",
       "version": "latest",
       "host_ids": ["host-1"],
       "port": 8080,
+      "connect_as": "mcp_user",
       "config": {
         "llm_enabled": true,
         "llm_provider": "ollama",
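The updated `allow_writes` description moves write rejection from database-level permissions into the MCP server itself. A deliberately naive illustration of that server-side gate (the real server's statement classification is assumed to be far more thorough than this first-keyword check):

```python
# Naive sketch of the allow_writes=false behavior described in the
# updated table: with writes disabled, write statements are rejected
# by the MCP server rather than by database permissions. Keyword-based
# classification here is illustrative only, not the real implementation.
WRITE_KEYWORDS = {"insert", "update", "delete", "drop", "alter",
                  "truncate", "create", "grant", "revoke"}

def may_execute(sql: str, allow_writes: bool) -> bool:
    """Return True if the statement is allowed to run."""
    stripped = sql.lstrip()
    first = stripped.split(None, 1)[0].lower() if stripped else ""
    return allow_writes or first not in WRITE_KEYWORDS

print(may_execute("SELECT * FROM t", allow_writes=False))  # → True
print(may_execute("DELETE FROM t", allow_writes=False))    # → False
```

Rejecting at the server also pairs with the routing change in the same row: read-only services can prefer a standby node, since no statement they forward should ever need the primary.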