
feat: share model proxy details to agents via envs #276

Merged
CaelumF merged 1 commit into master from feat/coral-env-model-details on Apr 17, 2026

Conversation

Collaborator

CaelumF commented Apr 16, 2026

Currently agents effectively express model preferences via the proxy routes they request in their coral-agent.toml files.

There are 3 reasons it seems useful to share these model details with the agent.

(@seafraf expressed some skepticism about these changes, so I'll reason in more detail than usual.)

1: we might in future let agents express model preferences separately from model requirements, or allow flexibility in the underlying model (e.g. it must use the OpenAI chat format, the model must have a training cutoff past a certain point, this model worked best but a newer one could be better, the model needs this many TPS to support our SLAs), so the proxies would then get "reified" at runtime.

2: it's awkward for agents to load these details from coral-agent.toml (separate question: should agents rely on the coral-agent.toml they were built with matching the one the server is using?). Alternatively we'd impose code duplication by making them specify it once in code and once in coral-agent.toml. koog needs to know exactly which model and provider are being used for its own fail-fast and token-counting logic (see below for why).

3: the provider is relevant: even when using the same format, model providers can vary in things like max output tokens, max total tokens, whether/how much is saved by cached context, and acceptable-use policies. If we instead made agents specify the provider per proxy, it'd be annoying because in most cases the provider doesn't matter; by doing this we can still let agents make decisions around the provider and reject certain providers (though granted that last use case would be better accommodated through coral-agent.toml; flexible proxy preferences are further in the future, and koog agents exist now).

There are probably good reasons to lie to agents about model proxy details in the future, such as the case of a koog agent where specifying a future model would cause it to crash, which is worse than the agent being wrong about the model details or being constrained to an outdated model until authors update their coral-agent.tomls. But I think these changes, and this sentiment, will prove necessary.
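To make the mechanism concrete, here is a minimal sketch of how an agent might consume model details handed to it through the environment instead of re-parsing its coral-agent.toml. The variable names (`CORAL_MODEL`, `CORAL_MODEL_PROVIDER`) and the rejection logic are illustrative assumptions, not the actual names or behavior this PR implements:

```python
import os

def read_model_details(env=os.environ):
    """Read model proxy details shared by the server.

    The variable names below are hypothetical placeholders for
    illustration; the real names exported by the server may differ.
    """
    return {
        "model": env.get("CORAL_MODEL"),
        "provider": env.get("CORAL_MODEL_PROVIDER"),
    }

# Example: an agent can fail fast on a provider it refuses to use
# (reason 3 above), without duplicating details from coral-agent.toml.
details = read_model_details({
    "CORAL_MODEL": "example-model",
    "CORAL_MODEL_PROVIDER": "example-provider",
})
if details["provider"] not in {"example-provider"}:
    raise RuntimeError(f"unsupported provider: {details['provider']}")
```

A koog-style agent would use the same details for its fail-fast and token-counting setup, rather than hard-coding a model that may drift from what the server actually routes to.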

Collaborator Author

CaelumF commented Apr 16, 2026

Also, in the future we might even consider sharing provider details in more depth, like the max single-inference output token limit on its own, though that's definitely not a priority for now.

Collaborator

seafraf left a comment


Can merge this now, but it seems like the usefulness of this requires a change to coral-agent.toml so that agent developers may specify model requests instead of model requirements?

CaelumF merged commit 128665e into master on Apr 17, 2026
2 checks passed
CaelumF deleted the feat/coral-env-model-details branch on April 17, 2026 at 08:06