feat: bypass canonical model filtering for LM Studio #8043

nelsonr462 wants to merge 2 commits into block:main
Conversation
…o-auth providers

- Add `skip_model_filtering()` trait method to Provider
- Implement in OpenAiProvider to enable unfiltered models when
- Update LM Studio declarative config with dynamic env var
- Show warning UI message for LM Studio users about tool-calling compatibility
- Local providers can now discover all available models without canonical registry filtering

Signed-off-by: Nelson Ramirez <nelsonr462@gmail.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 25498e96c6
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
crates/goose/src/providers/openai.rs (Outdated)
```rust
custom_headers: config.headers,
supports_streaming: config.supports_streaming.unwrap_or(true),
name: config.name.clone(),
skip_model_filter: !config.requires_auth,
```
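The trait-level change this diff feeds into might look roughly like the following sketch (simplified; the real `Provider` trait in goose has many more methods, and only the names from the PR summary are taken as given):

```rust
// Sketch of the skip_model_filtering() trait method described in the PR.
trait Provider {
    /// When true, /models results bypass canonical-registry filtering.
    fn skip_model_filtering(&self) -> bool {
        false // default: hosted providers keep the canonical filter
    }
}

struct OpenAiProvider {
    // Mirrors the diff: populated from `!config.requires_auth`
    // in from_declarative().
    skip_model_filter: bool,
}

impl Provider for OpenAiProvider {
    fn skip_model_filtering(&self) -> bool {
        self.skip_model_filter
    }
}
```

A provider that does not override the method keeps the default `false`, so only opted-in providers change behavior.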
Restrict model-filter bypass to truly local providers

This now enables unfiltered `/models` responses for every OpenAI-compatible provider with `requires_auth = false`, not just LM Studio. Both custom-provider flows let users create such providers (`ui/desktop/src/components/settings/providers/modal/subcomponents/forms/CustomProviderForm.tsx:241` and `crates/goose-cli/src/commands/configure.rs:2009`), so a no-auth custom endpoint will now skip the text/tool-call screening in `fetch_recommended_models()` and expose embeddings or other incompatible models. The new UI warning does not cover that path either, because it is only shown for `p.name === "lmstudio"` in `modelInterface.ts:74`, so users can select broken models with no warning.
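One way to narrow the bypass, as the review suggests, is to gate it on the endpoint actually being local rather than merely unauthenticated. A minimal sketch of such a check (the helper name and exact host list are hypothetical, not goose's code):

```rust
// Hypothetical helper: treat only loopback endpoints as "truly local",
// so a no-auth remote endpoint still goes through canonical filtering.
fn is_local_endpoint(base_url: &str) -> bool {
    base_url.contains("://localhost")
        || base_url.contains("://127.0.0.1")
        || base_url.contains("://[::1]")
}
```

The bypass condition would then be something like `!config.requires_auth && is_local_endpoint(&config.base_url)` instead of `!config.requires_auth` alone.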
Signed-off-by: Nelson Ramirez <nelsonr462@gmail.com>
💡 Codex Review
Reviewed commit: 6a7eab8ace
```json
"env_vars": [
  {
    "name": "LMSTUDIO_HOST",
    "required": false,
    "secret": false,
    "description": "LM Studio server URL (e.g. http://localhost:1234/v1)",
    "default": "http://localhost:1234/v1"
  }
```
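To illustrate how a `${LMSTUDIO_HOST}` placeholder in the declarative config could be resolved against the environment with fallback to the declared default, here is a minimal sketch (not goose's actual implementation; the function name is hypothetical):

```rust
use std::env;

// Hypothetical resolver: if the configured value is a "${VAR}" placeholder,
// read VAR from the environment, else fall back to the declared default.
// Literal values pass through unchanged.
fn resolve_env_placeholder(value: &str, default: &str) -> String {
    if let Some(var) = value.strip_prefix("${").and_then(|v| v.strip_suffix('}')) {
        env::var(var).unwrap_or_else(|_| default.to_string())
    } else {
        value.to_string()
    }
}
```

With the config above, an unset `LMSTUDIO_HOST` would resolve to `http://localhost:1234/v1`, while an exported value would override it.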
Preserve LM Studio validation after adding default config keys

Adding `LMSTUDIO_HOST` as an optional key with a default moves LM Studio into the `allOptionalWithDefaults` branch in `DefaultSubmitHandler.tsx:49-65`, which returns before the existing `getProviderModels()` probe runs. In the desktop configure flow, saving a typoed host or a stopped LM Studio server will now look successful and only fail later, when the user tries to list or use models, whereas this provider was previously validated on save.
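The ordering fix the review implies can be reduced to a small predicate: still probe the endpoint on save whenever the provider discovers models dynamically, even if every config key has a default. A sketch of that decision logic (in Rust rather than the TypeScript handler; the names are hypothetical):

```rust
// Hypothetical predicate for the save-time flow: the early return for
// "all keys optional with defaults" should not skip the reachability
// probe when the provider advertises dynamic model discovery.
fn should_probe_on_save(all_optional_with_defaults: bool, dynamic_models: bool) -> bool {
    !all_optional_with_defaults || dynamic_models
}
```

Under this rule LM Studio (`dynamic_models: true`) keeps its save-time validation even after gaining a defaulted `LMSTUDIO_HOST` key.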
```rust
if self.skip_model_filtering() {
    let models = self.fetch_supported_models().await?;
    tracing::warn!(
        provider = self.get_name(),
        count = models.len(),
        "Returning all available models without canonical filtering — \
         some models may not support tool calling and could be incompatible with Goose"
    );
    return Ok(models);
```
Block automatic fallback to the first unfiltered LM Studio model

Returning the raw `/models` list here feeds arbitrary LM Studio IDs into `SwitchModelModal`, whose `findPreferredModel()` helper falls back to `validModels[0]` and is invoked automatically whenever a provider is chosen with no current model (`SwitchModelModal.tsx:62-80,390-400`). Because LM Studio model names often do not match the hard-coded preference patterns, opening the modal or switching to LM Studio can silently preselect the alphabetically first model, including embedding or non-tool-capable entries, and save a broken configuration unless the user notices and corrects it manually.
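A safer selection strategy than falling back to `validModels[0]` is to return no preselection at all when nothing matches, and to screen out obvious non-chat entries. A sketch of that idea (in Rust rather than the modal's TypeScript; the patterns and the `embed` screen are illustrative assumptions, not goose's actual heuristics):

```rust
// Hypothetical safer fallback: prefer the first model matching a known
// tool-capable name pattern, skip embedding-style names, and return None
// (so the UI can ask the user) instead of silently taking models[0].
fn find_preferred_model<'a>(models: &[&'a str], patterns: &[&str]) -> Option<&'a str> {
    models.iter().copied().find(|m| {
        let lower = m.to_lowercase();
        patterns.iter().any(|p| lower.contains(p)) && !lower.contains("embed")
    })
}
```

Returning `None` forces an explicit user choice for unrecognized local model lists rather than saving a possibly broken configuration.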
Declarative providers serving arbitrary local models (e.g. LM Studio) have model names that won't match the canonical registry, so filtering them out leaves an empty model list.

Add a boolean `skip_canonical_filtering` field to `DeclarativeProviderConfig` (defaults to false) and honour it in `fetch_recommended_models`. Set it to true in `lmstudio.json` so all locally-served models are returned as-is.

Fixes the same problem addressed by #8043 and #8001.

Signed-off-by: Douwe Osinga <douwe@squareup.com>
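In `lmstudio.json`, the declarative-config variant described above might look like this fragment (a sketch: only `skip_canonical_filtering` comes from the description, the surrounding keys and their placement are assumed):

```json
{
  "name": "lmstudio",
  "base_url": "${LMSTUDIO_HOST}",
  "skip_canonical_filtering": true
}
```

Because the field defaults to false, every other declarative provider keeps canonical filtering without any config change.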
Thanks for digging into this! The root cause is real: LM Studio's locally-served model names don't match the canonical registry, so they all get filtered out. We've opened a minimal fix in #8052 that adds a `skip_canonical_filtering` flag on the declarative provider config.
Directly addresses:
This enables LM Studio users to discover all available models without canonical registry filtering, as their model names are dynamic and don't match the canonical list.
This also allows LM Studio users to define a custom host/port if needed via an env variable.
Summary
- Add `skip_model_filtering()` trait method to Provider (defaults to false)
- Set `skip_model_filter: config.name == "lmstudio"` in `from_declarative()`
- Use `${LMSTUDIO_HOST}` env var instead of hardcoded URL
- Add `dynamic_models: true` flag

Testing
All unit/integration tests ran locally and passed
Manual testing and validation with LM Studio and Goose
Related Issues
Relates to: #8026
Screenshots/Demos (for UX changes)
Before:
Model selection modal only displaying filtered results

After:
Now showing unfiltered results from LMStudio, with warning
