4 changes: 2 additions & 2 deletions README.md
@@ -303,7 +303,7 @@ sudo loginctl enable-linger "$USER"
mesh-llm serve --model Qwen2.5-32B # dashboard at http://localhost:3131
```

Live topology, per-node GPU capacity, model picker, and built-in chat. Everything comes from `/api/status` (JSON) and `/api/events` (SSE).
Live topology, per-node GPU capacity, model picker, and built-in chat. Live members show only the `Client`, `Standby`, `Loading`, and `Serving` badges. Wakeable provider-backed capacity is shown separately from topology and stays out of routing until it rejoins. Everything comes from `/api/status` (JSON) and `/api/events` (SSE).

## Multimodal Support

@@ -500,7 +500,7 @@ When a node is running, open:
http://localhost:3131
```

The console shows live topology, VRAM usage, loaded models, and built-in chat. It is backed by `/api/status` and `/api/events`.
The console shows live topology (live members carry only the `Client`, `Standby`, `Loading`, and `Serving` badges), wakeable capacity in its own section, VRAM usage, loaded models, and built-in chat. Wakeable inventory is not part of topology peers or routing until it rejoins. It is backed by `/api/status` and `/api/events`.

You can also try the hosted demo:

4 changes: 2 additions & 2 deletions mesh-llm/docs/DESIGN.md
@@ -39,7 +39,7 @@ src/
└── system/ Hardware detection, benchmarking, self-update
```

## Node Roles
## Topology Roles

```rust
enum NodeRole {
@@ -49,7 +49,7 @@ enum NodeRole {
}
```

Roles are exchanged via gossip. Preferred peers use `meshllm.node.v1` protobuf on QUIC ALPN `mesh-llm/1`; legacy peers may still negotiate `mesh-llm/0` and use the older JSON gossip payloads. A node transitions Worker → Host when elected.
Roles are exchanged via gossip. Live-state badges are tracked separately from roles and use `Client`, `Standby`, `Loading`, and `Serving`. Preferred peers use `meshllm.node.v1` protobuf on QUIC ALPN `mesh-llm/1`; legacy peers may still negotiate `mesh-llm/0` and use the older JSON gossip payloads. A node transitions Worker → Host when elected.

A newly connected peer is quarantined until it sends a valid `GossipFrame` with `gen = 1` (quarantine-until-gossip admission model). Only streams 0x01 (GOSSIP) and 0x05 (ROUTE_REQUEST) are accepted before admission. All other streams are rejected until the peer is admitted.

16 changes: 15 additions & 1 deletion mesh-llm/docs/TESTING.md
@@ -162,7 +162,7 @@ mesh-llm serve --join <TOKEN>
```

- Joiner scans the Hugging Face cache and picks an unserved model already on disk
- Log: "Assigned to serve GLM-4.7-Flash (needed by mesh, already on disk)"
- Log: "Selected to serve GLM-4.7-Flash (needed by mesh, already on disk)"

Copilot AI Apr 17, 2026


The updated TESTING.md log snippet says `Selected to serve ...`, but the runtime currently logs `📋 Assigned to serve ...` (see `mesh-llm/src/runtime/mod.rs` around the auto-assignment `eprintln!`s). To keep the testing guide accurate, either revert the doc string or update the runtime log message to match.

Suggested change
- Log: "Selected to serve GLM-4.7-Flash (needed by mesh, already on disk)"
- Log: "📋 Assigned to serve GLM-4.7-Flash (needed by mesh, already on disk)"


### 8. Lite client with multi-model

@@ -217,6 +217,20 @@ curl -X DELETE localhost:3131/api/runtime/models/Llama-3.2-1B-Instruct-Q4_K_M
- Switching models highlights the serving node in topology view
- Chat routes to selected model via API proxy

### 11. Console live-state and wakeable capacity

```bash
cd mesh-llm/ui/
npm run test:run
npm run typecheck
just build
```

- Live badges show only `Client`, `Standby`, `Loading`, and `Serving`
- Wakeable capacity renders in a separate section from topology peers and live nodes
- Wakeable entries do not appear in the topology peer list
- Validation uses `npm run test:run`, `npm run typecheck`, and `just build`

## Mesh Identity

### 16. Mesh ID generation (originator)