
feat: bypass canonical model filtering for LM Studio #8043

Closed

nelsonr462 wants to merge 2 commits into block:main from nelsonr462:feat/unfiltered-local-models

Conversation


@nelsonr462 nelsonr462 commented Mar 20, 2026

This enables LM Studio users to discover all available models without canonical registry filtering, since LM Studio model names are dynamic and don't match the canonical list.

It also lets LM Studio users define a custom host/port, if needed, via an environment variable.

Summary

  • Add skip_model_filtering() trait method to Provider (defaults to false)
  • Implement in OpenAiProvider to enable unfiltered models for no-auth providers
    • Set via skip_model_filter: config.name == "lmstudio" in from_declarative()
    • Only affects built-in LM Studio config, won't impact other no-auth/declarative configs
  • Update LM Studio declarative config:
    • Dynamic ${LMSTUDIO_HOST} env var instead of hardcoded URL
    • dynamic_models: true flag
  • Show UI warning for LM Studio users about tool-calling compatibility
  • LM Studio can now discover all available models without canonical registry filtering
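
The mechanism described in the summary can be sketched as follows. This is a minimal illustration, not Goose's actual code: the type shapes are simplified stand-ins, and only the names skip_model_filtering(), skip_model_filter, and from_declarative() come from the PR itself.

```rust
// Minimal sketch of the summary above: a trait method defaulting to false,
// overridden by a flag that only the built-in LM Studio config sets.
trait Provider {
    /// Providers serving dynamically named local models can opt out of
    /// canonical registry filtering. Defaults to false for everyone else.
    fn skip_model_filtering(&self) -> bool {
        false
    }
}

struct OpenAiProvider {
    skip_model_filter: bool,
}

impl OpenAiProvider {
    /// Mirrors the from_declarative() behavior: only the built-in
    /// "lmstudio" config opts in, so other no-auth/declarative
    /// configs are unaffected.
    fn from_declarative(name: &str) -> Self {
        Self {
            skip_model_filter: name == "lmstudio",
        }
    }
}

impl Provider for OpenAiProvider {
    fn skip_model_filtering(&self) -> bool {
        self.skip_model_filter
    }
}

fn main() {
    assert!(OpenAiProvider::from_declarative("lmstudio").skip_model_filtering());
    assert!(!OpenAiProvider::from_declarative("openai").skip_model_filtering());
    println!("only lmstudio bypasses canonical filtering");
}
```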

Testing

All unit and integration tests were run locally and passed.
Manual testing and validation with LM Studio and Goose.

Related Issues

Relates to: #8026

Screenshots/Demos (for UX changes)

Before:

Model selection modal only displaying filtered results

After:

Now showing unfiltered results from LMStudio, with warning

…o-auth providers

- Add skip_model_filtering() trait method to Provider
- Implement in OpenAiProvider to enable unfiltered models for no-auth providers
- Update LM Studio declarative config with a dynamic ${LMSTUDIO_HOST} env var
- Show a warning UI message for LM Studio users about tool-calling compatibility
- Local providers can now discover all available models without canonical registry filtering

Signed-off-by: Nelson Ramirez <nelsonr462@gmail.com>

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 25498e96c6


```rust
custom_headers: config.headers,
supports_streaming: config.supports_streaming.unwrap_or(true),
name: config.name.clone(),
skip_model_filter: !config.requires_auth,
```


P2: Restrict model-filter bypass to truly local providers

This now enables unfiltered /models responses for every OpenAI-compatible provider with requires_auth = false, not just LM Studio. Both custom-provider flows let users create such providers (ui/desktop/src/components/settings/providers/modal/subcomponents/forms/CustomProviderForm.tsx:241 and crates/goose-cli/src/commands/configure.rs:2009), so a no-auth custom endpoint will now skip the text/tool-call screening in fetch_recommended_models() and expose embeddings or other incompatible models. The new UI warning does not cover that path either, because it is only shown for p.name === "lmstudio" in modelInterface.ts:74, so users can select broken models with no warning.
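
The scope difference this review describes can be sketched like this. The struct and function names are illustrative stand-ins, not Goose's real types:

```rust
// Illustrative contrast between the original gate (!config.requires_auth)
// and the narrower name check the PR later adopted. `Cfg` is a stand-in,
// not Goose's real config type.
struct Cfg {
    name: &'static str,
    requires_auth: bool,
}

fn skip_by_auth(c: &Cfg) -> bool {
    !c.requires_auth // any no-auth provider bypasses filtering
}

fn skip_by_name(c: &Cfg) -> bool {
    c.name == "lmstudio" // only the built-in LM Studio config bypasses
}

fn main() {
    let custom = Cfg { name: "my-local-endpoint", requires_auth: false };
    // A user-created no-auth endpoint leaks through the auth-based gate...
    assert!(skip_by_auth(&custom));
    // ...but not through the name-based one.
    assert!(!skip_by_name(&custom));
}
```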


Signed-off-by: Nelson Ramirez <nelsonr462@gmail.com>
@nelsonr462 nelsonr462 changed the title feat: bypass canonical model filtering for no-auth providers feat: bypass canonical model filtering for LM Studio Mar 20, 2026

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6a7eab8ace


Comment on lines +8 to +15
"env_vars": [
{
"name": "LMSTUDIO_HOST",
"required": false,
"secret": false,
"description": "LM Studio server URL (e.g. http://localhost:1234/v1)",
"default": "http://localhost:1234/v1"
}


P2: Preserve LM Studio validation after adding default config keys

Adding LMSTUDIO_HOST as an optional key with a default moves LM Studio into the allOptionalWithDefaults branch in DefaultSubmitHandler.tsx:49-65, which returns before the existing getProviderModels() probe runs. In the desktop configure flow, saving a typoed host or a stopped LM Studio server will now look successful and only fail later when the user tries to list or use models, whereas this provider was previously validated on save.
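
The `${LMSTUDIO_HOST}` placeholder-with-default from the config excerpt above would resolve roughly like this. The helper is hypothetical, an illustration of the lookup semantics rather than Goose's actual substitution code:

```rust
use std::env;

// Hypothetical helper: read LMSTUDIO_HOST, falling back to the default
// from the declarative config. Not Goose's real implementation.
fn resolve_host(default: &str) -> String {
    env::var("LMSTUDIO_HOST").unwrap_or_else(|_| default.to_string())
}

fn main() {
    // set_var/remove_var are marked unsafe as of the Rust 2024 edition.
    unsafe {
        env::remove_var("LMSTUDIO_HOST");
    }
    // With the variable unset, the config default wins.
    assert_eq!(resolve_host("http://localhost:1234/v1"), "http://localhost:1234/v1");

    unsafe {
        env::set_var("LMSTUDIO_HOST", "http://192.168.0.42:5000/v1");
    }
    // With the variable set, the user's custom host/port wins.
    assert_eq!(resolve_host("http://localhost:1234/v1"), "http://192.168.0.42:5000/v1");
}
```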


Comment on lines +546 to +554
```rust
if self.skip_model_filtering() {
    let models = self.fetch_supported_models().await?;
    tracing::warn!(
        provider = self.get_name(),
        count = models.len(),
        "Returning all available models without canonical filtering — \
         some models may not support tool calling and could be incompatible with Goose"
    );
    return Ok(models);
```


P2: Block automatic fallback to the first unfiltered LM Studio model

Returning the raw /models list here feeds arbitrary LM Studio IDs into SwitchModelModal, whose findPreferredModel() helper falls back to validModels[0] and is invoked automatically whenever a provider is chosen with no current model (SwitchModelModal.tsx:62-80,390-400). Because LM Studio model names often do not match the hard-coded preference patterns, opening the modal or switching to LM Studio can silently preselect the alphabetically first model—including embedding or non-tool-capable entries—and save a broken configuration unless the user notices and corrects it manually.
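
The fallback behavior this review describes can be illustrated as follows. findPreferredModel() itself lives in SwitchModelModal.tsx (TypeScript); this Rust rendition is purely illustrative, with made-up model names and patterns:

```rust
// Illustration of the review point above: when no preference pattern
// matches, the first entry of the unfiltered list is chosen silently.
fn find_preferred_model<'a>(valid: &'a [&'a str], preferred_patterns: &[&str]) -> Option<&'a str> {
    valid
        .iter()
        .find(|m| preferred_patterns.iter().any(|p| m.contains(p)))
        .copied()
        // Silent fallback to valid[0] when nothing matches.
        .or_else(|| valid.first().copied())
}

fn main() {
    // LM Studio names rarely match hard-coded patterns like "gpt-4o",
    // so an embedding model can end up preselected without warning.
    let models = ["nomic-embed-text", "qwen2.5-7b-instruct"];
    let chosen = find_preferred_model(&models, &["gpt-4o", "claude"]);
    assert_eq!(chosen, Some("nomic-embed-text"));
    println!("{:?}", chosen);
}
```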


DOsinga pushed a commit that referenced this pull request Mar 21, 2026
Declarative providers serving arbitrary local models (e.g. LM Studio)
have model names that won't match the canonical registry, so filtering
them out leaves an empty model list.

Add a boolean skip_canonical_filtering field to DeclarativeProviderConfig
(defaults false) and honour it in fetch_recommended_models. Set it to
true in lmstudio.json so all locally-served models are returned as-is.

Fixes the same problem addressed by #8043 and #8001.

Signed-off-by: Douwe Osinga <douwe@squareup.com>
@DOsinga (Collaborator) commented Mar 21, 2026

Thanks for digging into this! The root cause is real — LM Studio's locally-served model names don't match the canonical registry so they all get filtered out.

We've opened a minimal fix in #8052 that adds a skip_canonical_filtering boolean directly to DeclarativeProviderConfig and sets it to true in lmstudio.json. This avoids the name-string hardcode and generalises cleanly to any future declarative provider with the same characteristic. Closing this one in favour of that.
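
The #8052 approach described above, a config-level boolean rather than a provider-name check, could look roughly like this. The field and function names are taken from the commit message; the struct shape and filtering logic are assumptions for illustration:

```rust
// Sketch of the alternative from #8052: skip_canonical_filtering lives on
// the declarative config (defaulting to false) and is honoured when
// fetching recommended models. lmstudio.json would set it to true.
#[derive(Default)]
struct DeclarativeProviderConfig {
    name: String,
    skip_canonical_filtering: bool, // false unless the config opts in
}

fn fetch_recommended_models(
    cfg: &DeclarativeProviderConfig,
    all_models: Vec<String>,
    canonical: &[&str],
) -> Vec<String> {
    if cfg.skip_canonical_filtering {
        // Locally served models are returned as-is.
        return all_models;
    }
    // Everyone else is still screened against the canonical registry.
    all_models
        .into_iter()
        .filter(|m| canonical.contains(&m.as_str()))
        .collect()
}

fn main() {
    let lmstudio = DeclarativeProviderConfig {
        name: "lmstudio".into(),
        skip_canonical_filtering: true,
    };
    let models = vec!["qwen2.5-7b-instruct".to_string(), "nomic-embed-text".to_string()];
    // Dynamic local names survive even though none are canonical.
    let out = fetch_recommended_models(&lmstudio, models, &["gpt-4o"]);
    assert_eq!(out.len(), 2);
    println!("{} models returned unfiltered", out.len());
}
```

This generalises cleanly: any future declarative provider with dynamic model names just sets the flag in its JSON, with no name-string hardcode in the provider code.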

@DOsinga DOsinga closed this Mar 21, 2026
