11 changes: 10 additions & 1 deletion config/source/editor-config/compat-baseline.json
@@ -704,7 +704,16 @@
"rules/ui-design/rule.md": "43c943739781f2707a5b54559b237ed66f30114fd2388db46d89dc667057b53f",
"rules/web-development/browser-testing.md": "703e35774fe7b9bbb133531fd2f7df6eeca2420567cd70886ef33ebeeebf6614",
"rules/web-development/frameworks.md": "62fdc117d5174626b032ae51817a125614987eb9264e9a4ac9466d198a8fb6ab",
"rules/web-development/rule.md": "4614373678013ccc1f68a511a92c24f2ced4617f6bb9296ba70fa1c6895f5d43"
"rules/web-development/rule.md": "4614373678013ccc1f68a511a92c24f2ced4617f6bb9296ba70fa1c6895f5d43",
".agent/rules/ai-model-nodejs/references/tokenhub-direct-access.md": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d",
".clinerules/ai-model-nodejs/references/tokenhub-direct-access.md": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d",
".codebuddy/skills/ai-model-nodejs/references/tokenhub-direct-access.md": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d",
".cursor/rules/ai-model-nodejs/references/tokenhub-direct-access.mdc": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d",
".kiro/steering/ai-model-nodejs/references/tokenhub-direct-access.md": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d",
".qoder/rules/ai-model-nodejs/references/tokenhub-direct-access.md": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d",
".trae/rules/ai-model-nodejs/references/tokenhub-direct-access.md": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d",
".windsurf/rules/ai-model-nodejs/references/tokenhub-direct-access.md": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d",
"rules/ai-model-nodejs/references/tokenhub-direct-access.md": "2e00b4c18fe0bb0ffe6bafccc1302856bc8a045dce0de9f3eb000f8d642bbf7d"
}
}
}
37 changes: 28 additions & 9 deletions config/source/skills/ai-model-nodejs/SKILL.md
@@ -1,6 +1,6 @@
---
name: ai-model-nodejs
description: Use this skill when developing Node.js backend services or CloudBase cloud functions (Express/Koa/NestJS, serverless, backend APIs) that need AI capabilities. Features text generation (generateText), streaming (streamText), AND image generation (generateImage) via @cloudbase/node-sdk ≥3.16.0. Built-in models include Hunyuan (hunyuan-2.0-instruct-20251111 recommended), DeepSeek (deepseek-v3.2 recommended), and hunyuan-image for images. This is the ONLY SDK that supports image generation. NOT for browser/Web apps (use ai-model-web) or WeChat Mini Program (use ai-model-wechat).
description: Use this skill when developing Node.js backend services or CloudBase cloud functions (Express/Koa/NestJS, serverless, backend APIs) that need CloudBase AI managed model capabilities via @cloudbase/node-sdk ≥3.16.0. It covers generateText, streamText, and CloudBase-managed generateImage(); within the CloudBase SDK family, the Node SDK is the only one that exposes managed image generation. Do NOT use this skill when the user explicitly wants to call Tencent Cloud / Hunyuan model APIs directly, use separate cloud-side quotas or billing, or avoid the CloudBase AI managed access layer. Not for browser/Web apps (use ai-model-web) or WeChat Mini Program (use ai-model-wechat).
version: 2.18.0
alwaysApply: false
---
@@ -16,32 +16,49 @@ Keep local `references/...` paths for files that ship with the current skill dir

## When to use this skill

Use this skill for **calling AI models in Node.js backend or CloudBase cloud functions** using `@cloudbase/node-sdk`.
Use this skill for **calling CloudBase AI managed models in Node.js backends or CloudBase cloud functions** using `@cloudbase/node-sdk`.

**Use it when you need to:**

- Integrate AI text generation in backend services
- Generate images with Hunyuan Image model
- Call AI models from CloudBase cloud functions
- Server-side AI processing
- Integrate CloudBase AI managed text generation in backend services
- Generate images through CloudBase AI's managed `generateImage()` entry
- Call CloudBase AI models from CloudBase cloud functions or Node.js servers
- Keep model access on the server side behind your backend or function

**Do NOT use for:**

- Browser/Web apps → use `ai-model-web` skill
- WeChat Mini Program → use `ai-model-wechat` skill
- HTTP API integration → use `http-api` skill
- Requests that explicitly require direct Tencent Cloud / Hunyuan model HTTP API access
- Scenarios that require provider-native quotas, billing, API keys, or parameters instead of CloudBase AI managed access

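The "use / do NOT use" routing above can be captured as a tiny decision helper. The skill names come from this document; the function name and input fields (`runtime`, `wantsDirectProviderApi`) are illustrative conventions, not part of any SDK:

```javascript
// Illustrative routing between the AI skills named in this document.
// `runtime` and `wantsDirectProviderApi` are hypothetical request fields.
function pickAiSkill(req) {
  if (req.runtime === "web") return "ai-model-web";       // browser/Web apps
  if (req.runtime === "wechat") return "ai-model-wechat"; // WeChat Mini Program
  // Explicit direct Tencent Cloud / Hunyuan access bypasses the managed layer.
  if (req.wantsDirectProviderApi) return "tokenhub-direct-access";
  return "ai-model-nodejs"; // default: CloudBase AI managed access from Node.js
}
```

The "tokenhub-direct-access" branch mirrors the reference note shipped with this skill: in that case CloudBase stays the runtime host and the provider's official API is called directly.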
---

## Available Providers and Models

CloudBase provides these built-in providers and models:
CloudBase AI currently exposes these managed providers and models through the Node SDK:

| Provider | Models | Recommended |
|----------|--------|-------------|
| `hunyuan-exp` | `hunyuan-turbos-latest`, `hunyuan-t1-latest`, `hunyuan-2.0-thinking-20251109`, `hunyuan-2.0-instruct-20251111` | ✅ `hunyuan-2.0-instruct-20251111` |
| `deepseek` | `deepseek-r1-0528`, `deepseek-v3-0324`, `deepseek-v3.2` | ✅ `deepseek-v3.2` |

> **Important:** This table describes the models exposed by **CloudBase AI's managed access layer**. It does **not** mean this skill covers every direct Tencent Cloud / Hunyuan API or every provider-native feature.

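When wiring this up in backend code, the table above can be captured as a small lookup so services default to the recommended model for each provider. The provider and model names are copied verbatim from the table; the helper itself is just an illustrative convention, not an SDK API:

```javascript
// Recommended managed model per CloudBase AI provider, per the table above.
const RECOMMENDED_MODELS = {
  "hunyuan-exp": "hunyuan-2.0-instruct-20251111",
  "deepseek": "deepseek-v3.2",
};

// Resolve the recommended model for a provider, failing fast on unknown names.
function recommendedModel(provider) {
  const model = RECOMMENDED_MODELS[provider];
  if (!model) throw new Error(`Unknown CloudBase AI provider: ${provider}`);
  return model;
}
```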
## Scope boundary: CloudBase-managed vs direct cloud model access

- `app.ai().createModel()` and `createImageModel()` go through **CloudBase AI's managed access layer**.
- Use this skill when the user wants CloudBase cloud functions / CloudRun / Node.js backends to consume **CloudBase AI** models.
- Do **not** force this skill when the user explicitly says they want to "call the models / Hunyuan APIs on Tencent Cloud directly" (直接调用腾讯云上的模型 / 混元 API), cares about separate Tencent Cloud quotas or billing, or needs provider-native API behavior.
- In those direct-cloud cases, keep CloudBase only as the runtime host and call the provider's official API from backend code after consulting the official docs.

## If the user explicitly wants TokenHub direct access

- Read `references/tokenhub-direct-access.md` before proposing code.
- Treat **TokenHub** as the direct cloud model platform and **Token Plan** as the IDE / AI coding tools subscription, not a backend API entitlement.
- Do **not** recommend Token Plan API keys for cloud functions, cron jobs, backend services, or custom application servers.
- In this repo's wording, describe the built-in managed route as the **mini program incentive / growth-plan managed path**; do not present it as the general-purpose direct-cloud API route.

---

## Installation
@@ -170,7 +187,9 @@ const usage = await res.usage; // Token usage

## generateImage() - Image Generation

⚠️ **Image generation is only available in Node SDK**, not in JS SDK (Web) or WeChat Mini Program.
⚠️ **Within the CloudBase SDK family, managed image generation is only available in Node SDK**, not in JS SDK (Web) or WeChat Mini Program.

⚠️ `generateImage()` here belongs to **CloudBase AI's managed access layer**. It is **not** the same as directly calling Tencent Cloud / Hunyuan image APIs. If the user explicitly wants provider-side billing, quota, API keys, or native API behavior, use a backend HTTP client against the official provider API instead of assuming this method.

```js
const imageModel = ai.createImageModel("hunyuan-image");
```
99 changes: 99 additions & 0 deletions config/source/skills/ai-model-nodejs/references/tokenhub-direct-access.md
@@ -0,0 +1,99 @@
# TokenHub direct access from Cloud Functions

Use this note when the user explicitly wants **direct Tencent Cloud TokenHub / Hunyuan API access** from a CloudBase cloud function or other Node.js backend.

## Core routing rule

- **TokenHub platform** is the direct cloud-side model service platform.
- **Token Plan** is a subscription plan for AI coding tools / IDEs such as CodeBuddy, Cursor, Cline, Claude Code and similar tools.
- **Do not** use Token Plan API keys for backend automation, custom app servers, cron jobs, or batch API calls.
- If the user wants cloud functions to call Tencent Cloud models directly, use **TokenHub API Key + TokenHub endpoints**, not CloudBase-managed `generateImage()`.

## Important boundary in this repo

When describing the built-in managed route in this repository, treat it as the **mini program incentive / growth-plan managed path** and do not present it as the general-purpose direct-cloud API route.

## Recommended configuration flow

1. Open TokenHub and confirm the target model and billing path.
2. Create or obtain a **TokenHub API Key**.
3. In the cloud function, store credentials in environment variables instead of hard-coding them.
4. Use a regular HTTP client or OpenAI-compatible SDK from the backend.
5. Return the generated image URL or task status back to the client.

## Suggested environment variables

```bash
TOKENHUB_API_KEY=your_tokenhub_api_key
TOKENHUB_BASE_URL=https://tokenhub.tencentmaas.com
TOKENHUB_IMAGE_MODEL=hy-image-lite
```

For higher-quality async image generation, switch `TOKENHUB_IMAGE_MODEL` to `hy-image-v3.0` and call the submit/query endpoints.

## Cloud Function example: synchronous image generation with `hy-image-lite`

```js
// Cloud function entry point: synchronous image generation via TokenHub.
exports.main = async (event) => {
  const prompt = event?.prompt || 'Modern minimalist living room, wooden floor, warm lighting, floor-to-ceiling windows';
  const apiKey = process.env.TOKENHUB_API_KEY;
  const baseUrl = process.env.TOKENHUB_BASE_URL || 'https://tokenhub.tencentmaas.com';
  const model = process.env.TOKENHUB_IMAGE_MODEL || 'hy-image-lite';

  if (!apiKey) {
    throw new Error('Missing TOKENHUB_API_KEY');
  }

  // `fetch` is global in Node.js 18+; use a polyfill on older runtimes.
  const response = await fetch(`${baseUrl}/v1/api/image/lite`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      prompt,
      rsp_img_type: 'url', // ask TokenHub to return an image URL
    }),
  });

  if (!response.ok) {
    const errorText = await response.text();
    throw new Error(`TokenHub request failed: ${response.status} ${errorText}`);
  }

  const result = await response.json();
  return {
    model,
    prompt,
    imageUrl: result?.data?.[0]?.url || null,
    raw: result, // keep the raw payload for debugging
  };
};
```

## Async image generation with `hy-image-v3.0`

- Submit endpoint: `POST https://tokenhub.tencentmaas.com/v1/api/image/submit`
- Query endpoint: `POST https://tokenhub.tencentmaas.com/v1/api/image/query`
- Submit with `model: "hy-image-v3.0"` and save the returned `id`
- Poll the query endpoint with `model` + `id` until `status === "completed"`

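The submit/poll steps above can be sketched as two helpers. The endpoint paths and the `model`/`id`/`status` fields follow this note, but the exact response shape (`result.id`, `result.status`) is an assumption; verify it against the official API protocol doc before shipping:

```javascript
// Hedged sketch: async image generation via TokenHub submit/query endpoints.
// `fetchImpl` is injectable so the flow can be exercised without network access.
const BASE_URL = process.env.TOKENHUB_BASE_URL || "https://tokenhub.tencentmaas.com";

async function tokenhubPost(path, payload, { fetchImpl, apiKey }) {
  const response = await fetchImpl(`${BASE_URL}${path}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  });
  if (!response.ok) throw new Error(`TokenHub ${path} failed: ${response.status}`);
  return response.json();
}

async function submitImageTask(prompt, { fetchImpl = fetch, apiKey = process.env.TOKENHUB_API_KEY } = {}) {
  const result = await tokenhubPost("/v1/api/image/submit", { model: "hy-image-v3.0", prompt }, { fetchImpl, apiKey });
  return result.id; // assumed field: save it for polling
}

async function pollImageTask(id, { fetchImpl = fetch, apiKey = process.env.TOKENHUB_API_KEY, intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await tokenhubPost("/v1/api/image/query", { model: "hy-image-v3.0", id }, { fetchImpl, apiKey });
    if (result.status === "completed") return result; // assumed terminal status
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("image task did not complete in time");
}

module.exports = { submitImageTask, pollImageTask };
```

Keeping the HTTP call in one `tokenhubPost` helper makes it easy to swap in the real field names once the response shape is confirmed.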
## Minimal implementation notes for agents

- Prefer cloud functions or backend services, not browser-side direct calls.
- Put API keys in environment variables only.
- For CAD -> render scenarios, keep CAD parsing, prompt assembly, and result persistence in CloudBase; keep model inference on TokenHub.
- If the user asks about pricing or package selection, distinguish **TokenHub platform billing** from **Token Plan for IDE tools**.

## Official references

- Product overview: `https://cloud.tencent.com/document/product/1823/130050`
- Model list: `https://cloud.tencent.com/document/product/1823/130051`
- Token Plan overview: `https://cloud.tencent.com/document/product/1823/130060`
- Token Plan FAQ: `https://cloud.tencent.com/document/product/1823/130076`
- API protocol: `https://cloud.tencent.com/document/product/1823/130078`
- Text generation: `https://cloud.tencent.com/document/product/1823/130079`
- Image generation: `https://cloud.tencent.com/document/product/1823/130080`
- Video generation: `https://cloud.tencent.com/document/product/1823/130081`
- 3D generation: `https://cloud.tencent.com/document/product/1823/130082`
8 changes: 4 additions & 4 deletions doc/components/prompts.json
@@ -252,13 +252,13 @@
{
"id": "ai-model-nodejs",
"title": "Integrate AI capabilities in a Node.js backend",
"description": "Use CloudBase AI models in Node.js backend services or cloud functions; supports text generation, streaming responses, and image generation",
"shortDescription": "Call AI from backend services",
"description": "Use CloudBase AI managed model capabilities in Node.js backend services or cloud functions; suited to text generation, streaming responses, and managed image generation. If the user explicitly wants to call model APIs on Tencent Cloud directly, do not use this skill by default",
"shortDescription": "Call CloudBase AI from backend services",
"category": "ai",
"order": 2,
"prompts": [
"Integrate an AI model in a CloudBase cloud function to implement text generation",
"Create a cloud function that generates images with CloudBase AI models",
"Integrate a CloudBase AI model in a CloudBase cloud function to implement text generation",
"Create a cloud function that generates images through the CloudBase AI managed entry point",
"Use CloudBase AI models in an Express backend service to handle user requests"
]
},
6 changes: 3 additions & 3 deletions doc/prompts/ai-model-nodejs.mdx
@@ -1,6 +1,6 @@
# Integrate AI capabilities in a Node.js backend

Use CloudBase AI models in Node.js backend services or cloud functions; supports text generation, streaming responses, and image generation
Use CloudBase AI managed model capabilities in Node.js backend services or cloud functions; suited to text generation, streaming responses, and managed image generation. If the user explicitly wants to call model APIs on Tencent Cloud directly, do not use this skill by default

## How to use

@@ -10,8 +10,8 @@

You can test with the following prompts:

- "Integrate an AI model in a CloudBase cloud function to implement text generation"
- "Create a cloud function that generates images with CloudBase AI models"
- "Integrate a CloudBase AI model in a CloudBase cloud function to implement text generation"
- "Create a cloud function that generates images through the CloudBase AI managed entry point"
- "Use CloudBase AI models in an Express backend service to handle user requests"

import AIDevelopmentPrompt from '../components/AIDevelopmentPrompt';
8 changes: 4 additions & 4 deletions doc/prompts/config.yaml
@@ -234,13 +234,13 @@ rules:

- id: ai-model-nodejs
title: Integrate AI capabilities in a Node.js backend
description: Use CloudBase AI models in Node.js backend services or cloud functions; supports text generation, streaming responses, and image generation
shortDescription: Call AI from backend services
description: Use CloudBase AI managed model capabilities in Node.js backend services or cloud functions; suited to text generation, streaming responses, and managed image generation. If the user explicitly wants to call model APIs on Tencent Cloud directly, do not use this skill by default
shortDescription: Call CloudBase AI from backend services
category: ai
order: 2
prompts:
- "Integrate an AI model in a CloudBase cloud function to implement text generation"
- "Create a cloud function that generates images with CloudBase AI models"
- "Integrate a CloudBase AI model in a CloudBase cloud function to implement text generation"
- "Create a cloud function that generates images through the CloudBase AI managed entry point"
- "Use CloudBase AI models in an Express backend service to handle user requests"

- id: ai-model-wechat