Support Qiniu provider with OpenAI API compatibility#4256
liangchaoboy wants to merge 4 commits into QuantumNous:main
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration: Organization UI | Review profile: CHILL | Plan: Pro
📒 Files selected for processing (3)
🚧 Files skipped from review as they are similar to previous changes (2)
Walkthrough

Adds Qiniu as channel type 58: constant, base URL, and name; maps it to the OpenAI API type; enables stream options and upstream model fetching for Qiniu; updates the frontend channel option and icon; and adds a unit test for fetching Qiniu /v1/models.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant Backend
    participant RelayAdaptor
    participant QiniuAPI
    User->>Frontend: Open channel edit / request model list
    Frontend->>Backend: GET /channels/:id/models (channelType=58)
    Backend->>RelayAdaptor: fetchChannelUpstreamModelIDs(channel)
    RelayAdaptor->>QiniuAPI: GET /v1/models (Authorization: Bearer <key>)
    QiniuAPI-->>RelayAdaptor: 200 JSON {data: [...]}
    RelayAdaptor-->>Backend: parsed model ID list
    Backend-->>Frontend: model list response
    Frontend-->>User: display upstream models
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@controller/channel_upstream_update_qiniu_test.go`:
- Around line 17-25: The handler in channel_upstream_update_qiniu_test.go is
calling t.Fatalf from the server goroutine (lines checking r.Method, r.URL.Path,
and Authorization); instead, capture the observed values (method, path, auth
header, etc.) into local variables or send them over a channel/sync struct from
the handler, return a normal HTTP response, then after invoking
fetchChannelUpstreamModelIDs in the test goroutine assert those captured values
with t.Fatalf/require. Locate the inline HTTP handler in the test and change its
t.Fatalf calls to store values (or send them on a channel) and perform the
assertions in the test goroutine after fetchChannelUpstreamModelIDs returns.
In `@relay/common/relay_info.go`:
- Line 308: Remove constant.ChannelTypeQiniu from the streamSupportedChannels
map so Qiniu isn't treated as supporting stream_options; locate the map named
streamSupportedChannels and delete the entry "constant.ChannelTypeQiniu: true"
(or conditionally exclude ChannelTypeQiniu wherever streamSupportedChannels is
used) to prevent sending StreamOptions/include_usage to Qiniu's API.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a75ab5aa-124a-48e2-b342-39d5094ca018
📒 Files selected for processing (8)
- common/api_type.go
- constant/channel.go
- controller/channel_upstream_update_qiniu_test.go
- relay/channel/openai/adaptor.go
- relay/common/relay_info.go
- web/src/components/table/channels/modals/EditChannelModal.jsx
- web/src/constants/channel.constants.js
- web/src/helpers/render.jsx
```go
if r.Method != http.MethodGet {
	t.Fatalf("unexpected method: %s", r.Method)
}
if r.URL.Path != "/v1/models" {
	t.Fatalf("unexpected path: %s", r.URL.Path)
}
if got := r.Header.Get("Authorization"); got != "Bearer test-key" {
	t.Fatalf("unexpected Authorization header: %q", got)
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify Fatal*/FailNow usage in httptest handler contexts (read-only check).
rg -n --type=go -C4 'httptest\.NewServer\(http\.HandlerFunc\(func' controller/channel_upstream_update_qiniu_test.go
rg -n --type=go -C3 '\bt\.(Fatal|Fatalf|FailNow)\(' controller/channel_upstream_update_qiniu_test.go
```

Repository: QuantumNous/new-api
Length of output: 1291

🏁 Script executed:

```shell
#!/bin/bash
# Search for other httptest.NewServer patterns in test files to identify if this is systemic
rg -n --type=go 'httptest\.NewServer\(http\.HandlerFunc' --glob='*_test.go' | head -20
# Check how many test files use this pattern
rg -l --type=go 'httptest\.NewServer\(http\.HandlerFunc' --glob='*_test.go' | wc -l
```

Repository: QuantumNous/new-api
Length of output: 207
Avoid t.Fatalf inside the HTTP handler goroutine.
At lines 18, 21, and 24, t.Fatalf is invoked from the server handler goroutine. Fatalf/FailNow are only safe from the test goroutine; calling them from other goroutines can lead to unreliable test failures. Capture request fields in the handler and assert them after fetchChannelUpstreamModelIDs returns.
Suggested fix

```diff
 func TestFetchChannelUpstreamModelIDs_Qiniu(t *testing.T) {
 	t.Parallel()
+	type requestSnapshot struct {
+		method string
+		path   string
+		auth   string
+	}
+	reqCh := make(chan requestSnapshot, 1)
+
 	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-		if r.Method != http.MethodGet {
-			t.Fatalf("unexpected method: %s", r.Method)
-		}
-		if r.URL.Path != "/v1/models" {
-			t.Fatalf("unexpected path: %s", r.URL.Path)
-		}
-		if got := r.Header.Get("Authorization"); got != "Bearer test-key" {
-			t.Fatalf("unexpected Authorization header: %q", got)
-		}
+		reqCh <- requestSnapshot{
+			method: r.Method,
+			path:   r.URL.Path,
+			auth:   r.Header.Get("Authorization"),
+		}
 		w.Header().Set("Content-Type", "application/json")
 		_, _ = w.Write([]byte(`{"object":"list","data":[{"id":"deepseek/deepseek-v3.1-terminus-thinking"},{"id":"gpt-4"}]}`))
 	}))
 	defer srv.Close()
@@
 	got, err := fetchChannelUpstreamModelIDs(ch)
 	if err != nil {
 		t.Fatalf("fetchChannelUpstreamModelIDs returned error: %v", err)
 	}
+	req := <-reqCh
+	if req.method != http.MethodGet {
+		t.Fatalf("unexpected method: %s", req.method)
+	}
+	if req.path != "/v1/models" {
+		t.Fatalf("unexpected path: %s", req.path)
+	}
+	if req.auth != "Bearer test-key" {
+		t.Fatalf("unexpected Authorization header: %q", req.auth)
+	}
 	if len(got) != 2 || got[0] != "deepseek/deepseek-v3.1-terminus-thinking" || got[1] != "gpt-4" {
 		t.Fatalf("unexpected models: %#v", got)
 	}
 }
```
```go
// 定义支持流式选项的通道类型
var streamSupportedChannels = map[int]bool{
	constant.ChannelTypeOpenAI: true,
	constant.ChannelTypeQiniu:  true,
```
🧩 Analysis chain
🌐 Web query:

Qiniu (七牛) OpenAI-compatible chat/completions API: does streaming support stream_options (especially include_usage)?
💡 Result:
No, Qiniu's OpenAI-compatible chat/completions API does not support the stream_options parameter (including include_usage) for streaming responses. Official documentation lists stream as supported but omits stream_options. OpenAPI specs and parameter tables confirm standard OpenAI parameters like model, messages, max_tokens, temperature are included, but stream_options is absent. Examples show streaming with stream: true without usage in final chunk. Separate usage query APIs exist (/v2/stat/usage). Some providers reject stream_options, causing 422 errors (similar to Mistral issues).
Citations:
- 1: https://developer.qiniu.com/aitokenapi/13390/chat-completions
- 2: https://developer.qiniu.com/aitokenapi/12882/ai-inference-api?category=api
- 3: https://apidocs.qnaigc.com/233004161d0
- 4: https://apidocs.qnaigc.com/421308051e0
- 5: https://apidocs.qnaigc.com/397197786e0
Remove Qiniu from streamSupportedChannels: it does not support the stream_options parameter.

Qiniu's OpenAI-compatible API supports stream but explicitly omits stream_options (including include_usage). Including Qiniu at line 308 will cause upstream failures when StreamOptions are passed. Separate usage APIs exist instead.
Force-pushed from b8002ff to 89e5896
Introduce Qiniu (OpenAI-compatible) channel type, enable upstream /v1/models fetching in UI, and add a unit test for model listing. Made-with: Cursor
Replace hardcoded OpenAI/Azure channel type check with the SupportStreamOptions flag so all channels registered in streamSupportedChannels (including Qiniu) correctly preserve StreamOptions in upstream requests.
Force-pushed from 89e5896 to f4c3225
🧹 Nitpick comments (1)

web/src/components/table/channels/modals/EditChannelModal.jsx (1)

133-136: Consolidate the fetchable-type source of truth. Line 133 redefines a fetchable-type set that is already centralized in MODEL_FETCHABLE_CHANNEL_TYPES. Keeping both lists invites future drift when new types are added. Prefer reusing the shared constant directly.

♻️ Suggested simplification

```diff
-const MODEL_FETCHABLE_TYPES = new Set([
-  1, 4, 14, 34, 17, 26, 27, 24, 47, 25, 20, 23, 31, 40, 42, 48, 43,
-  58,
-]);
+const MODEL_FETCHABLE_TYPES = MODEL_FETCHABLE_CHANNEL_TYPES;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@web/src/components/table/channels/modals/EditChannelModal.jsx` around lines 133-136: the local constant MODEL_FETCHABLE_TYPES duplicates the centralized MODEL_FETCHABLE_CHANNEL_TYPES and should be removed. Import MODEL_FETCHABLE_CHANNEL_TYPES (the shared source of truth) from its module and use it wherever MODEL_FETCHABLE_TYPES is referenced (e.g., in validation or conditional checks inside the EditChannelModal component), updating any variable name references to the imported symbol.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 97a36106-6b29-4e81-8319-1f9324de2a6b
📒 Files selected for processing (8)
- common/api_type.go
- constant/channel.go
- controller/channel_upstream_update_qiniu_test.go
- relay/channel/openai/adaptor.go
- relay/common/relay_info.go
- web/src/components/table/channels/modals/EditChannelModal.jsx
- web/src/constants/channel.constants.js
- web/src/helpers/render.jsx
✅ Files skipped from review due to trivial changes (1)
- web/src/constants/channel.constants.js
🚧 Files skipped from review as they are similar to previous changes (6)
- common/api_type.go
- web/src/helpers/render.jsx
- controller/channel_upstream_update_qiniu_test.go
- relay/channel/openai/adaptor.go
- relay/common/relay_info.go
- constant/channel.go
Description

This PR adds a dedicated channel type (type=58) for Qiniu (七牛) and wires it into the existing relay pipeline as an OpenAI-compatible provider: the backend maps ChannelTypeQiniu to APITypeOpenAI, so request forwarding and authentication reuse the OpenAI adaptor (Authorization: Bearer <key>) without any extra request/response body transformation.

It also completes the "fetch model list from upstream" capability: the frontend adds type=58 to the whitelist of channel types that may fetch models, so the channel-edit modal can invoke the existing /v1/models fetch flow; the backend reuses the unified /v1/models parsing logic and returns data[].id as the model list.

A new unit test uses httptest to mock the upstream /v1/models endpoint, verifying that the request path and Bearer authorization header are correct and that model IDs are parsed properly.

Type of change
🔗 Related Issue

✅ Checklist

Bug fix: the corresponding Issue has been filed or linked, and design trade-offs, expectation mismatches, or misunderstandings are not classified directly as bugs.

Proof of Work

Local tests pass (running only the newly added coverage):

```
$ go test ./controller -run Qiniu
ok      github.com/QuantumNous/new-api/controller       0.456s
```

Summary by CodeRabbit
New Features
Tests