Build the dev image:

```shell
podman --cgroup-manager=cgroupfs build --no-cache -t localhost/chat:dev --target=dev .
```

Run the dev container:

```shell
podman --cgroup-manager=cgroupfs run --rm -p 3000:3000 -v $(pwd):/app -v /app/node_modules --name chat_dev localhost/chat:dev
```

Run the tests (with Docker, build a test image first via `docker build -t chat_tests .`):

```shell
podman --cgroup-manager=cgroupfs run -it --rm --entrypoint=/bin/sh localhost/chat:dev -c "yarn test"
```
```shell
podman run -d -p 3000:3000 \
  -e NODE_ENV=production \
  -e DB_HOST=mydb.example.com \
  -e DB_USER=myuser \
  -e DB_PASS=mypass \
  my-app:1.2.3
```
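The `-e` flags above surface those values as environment variables inside the container. As a sketch of the consuming side (the `readEnv` helper and its default values are hypothetical, not part of any existing codebase):

```javascript
// env.js — hypothetical helper for reading the variables passed via `-e`.
// The names (NODE_ENV, DB_HOST, DB_USER, DB_PASS) match the run command above.
function readEnv(env = process.env) {
  return {
    nodeEnv: env.NODE_ENV || 'development',
    db: {
      host: env.DB_HOST || 'localhost',
      user: env.DB_USER || 'app',
      // Never bake a default password into the image; leave it undefined
      // so the app can fail fast when it is missing.
      pass: env.DB_PASS,
    },
  };
}

module.exports = { readEnv };
```

Injecting configuration this way keeps the image itself environment-agnostic: the same tag can run in staging and production with different `-e` values.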
When managing configurations for multiple environments, best practices include separating environment-specific settings from those common across all environments and ensuring configurations are easily maintainable and secure. Here’s a way to handle shared and environment-specific configurations in a Node.js application:
Create a base configuration file that includes settings common across all environments. This might include:
- Default application settings
- Feature flags
- Shared service endpoints (if they do not change)
- Default security settings
Let’s assume you have a `config` folder in your project. Inside it:
```javascript
// config/base.js
module.exports = {
  appName: 'MyApp',
  featureFlags: {
    enableFeatureX: true,
  },
  security: {
    contentSecurityPolicy: "...",
  },
  // ...
};
```

Create environment-specific files that will extend or override the base settings if necessary:
```javascript
// config/dev.js
const baseConfig = require('./base');

module.exports = {
  ...baseConfig,
  database: {
    host: 'localhost',
    user: 'devuser',
    password: 'devpassword',
    // ...
  },
  // Any other dev-specific overrides
};
```

```javascript
// config/prod.js
const baseConfig = require('./base');

module.exports = {
  ...baseConfig,
  database: {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    // ...
  },
  // Any other production-specific overrides
};
```

Load the correct configuration dynamically based on the environment the app is running in. Create an `index.js` in the `config` folder for this purpose:
```javascript
// config/index.js
const devConfig = require('./dev');
const prodConfig = require('./prod');
const ciConfig = require('./ci');

const ENV = process.env.NODE_ENV || 'development';

let currentConfig;
switch (ENV) {
  case 'production':
    currentConfig = prodConfig;
    break;
  case 'ci':
    currentConfig = ciConfig;
    break;
  case 'development':
  default:
    currentConfig = devConfig;
    break;
}

module.exports = currentConfig;
```

Wherever you need to access the configuration within your application, require `config/index.js`. This will always provide the correct, environment-specific configuration merged with the base configuration:
```javascript
// In your app code
const config = require('./config');

console.log(config.appName);       // 'MyApp' from the base config
console.log(config.database.host); // environment-specific from dev/prod/ci
```

- Centralize Configuration: Keeping configuration in a central location helps avoid scattered or hardcoded values throughout your codebase.
- Use Environment Variables for Secrets: Do not store sensitive information such as API keys or passwords in configuration files. Use environment variables or a secrets management system instead.
- Immutable Deployments: Build your application in such a way that the configuration does not change once deployed. This means that the application must be rebuilt to change the configuration, which promotes consistency and reliability across environments.
- Do Not Duplicate Configs: Don’t repeat the same configuration in multiple places. Use a base configuration and extend or override it as needed for each environment.
- Version Control: Keep the configuration files under version control, excluding any sensitive data like passwords or private keys, to track changes and maintain the history.
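One caveat with the spread-based pattern above: `...baseConfig` performs only a shallow merge, so an environment file that overrides a nested object such as `security` replaces it wholesale and silently drops the base keys inside it. A minimal deep-merge helper could look like this (an illustrative sketch, not a specific library API):

```javascript
// deepMerge: recursively merge `override` into `base` without mutating either.
// Plain objects are merged key by key; arrays and scalar values are replaced.
function deepMerge(base, override) {
  const out = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const current = out[key];
    const bothPlainObjects =
      value && typeof value === 'object' && !Array.isArray(value) &&
      current && typeof current === 'object' && !Array.isArray(current);
    out[key] = bothPlainObjects ? deepMerge(current, value) : value;
  }
  return out;
}

module.exports = deepMerge;
```

With such a helper, an environment file could export `deepMerge(baseConfig, overrides)` and keep untouched nested base settings intact instead of losing them to a shallow spread.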
By applying these practices, you can have a configuration management system in your project that is both robust and flexible, simplifying the process of maintaining different environments for development, CI, and production.
To utilize the Dockerfile for development, CI, and production, you should adjust the build and execution process to accommodate the needs of each environment. Here's a guide on how to use the Dockerfile in different contexts:
In development, you may want to mount your code into the container to enable hot-reloading and use tools like proxychains-ng if needed.
- Create a `docker-compose.yml` for easy local development, which may include volumes for live code updates and port mappings:
```yaml
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: base
    ports:
      - '3000:3000'
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - OPENAI_API_KEY=your_dev_key
      - NODE_ENV=development
    # Add other tools or services required for development (databases, redis, etc.)
```

- Use `docker-compose` for building and starting your application:

```shell
docker-compose up --build
```

In CI, you’ll typically want to run tests rather than deploy the running application the way you would in production.
- Configure your CI pipeline (such as GitHub Actions, GitLab CI, etc.) to build the Docker image:
```yaml
# Example CI pipeline step using GitHub Actions
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Build Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          load: true
          tags: myapp:ci
          build-args: |
            OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
            CODE=${{ secrets.CODE }}
```

- After building, run the tests within a Docker container as part of the CI pipeline:
```yaml
# Continuing the example CI pipeline from GitHub Actions
      - name: Run tests
        run: docker run --rm myapp:ci yarn test
```

For production, you want an image that's as slim and secure as possible, and you will not be mounting live code.
- Build the production-specific image:

```shell
docker build -t myapp:prod --target=runner .
```

- In your production deployment, you can skip mounting volumes, and you won’t need `proxychains-ng` (assuming you’re not using it in production). Thus, your Docker run command might look like:
```shell
docker run -d --name myapp-prod -p 80:3000 \
  -e OPENAI_API_KEY=your_production_key \
  -e NODE_ENV=production \
  myapp:prod
```

Or, if you’re using a service manager like Kubernetes or a cloud service provider's orchestration tools, you would use their respective configurations (Kubernetes manifests, AWS ECS task definitions, etc.) to deploy the Docker container.
- Decide what’s needed in each environment and tailor the Docker images accordingly (use multi-stage builds when necessary).
- Always tag your Docker images appropriately for each environment.
- Use environment variables for injecting sensitive and environment-specific configuration. Do not hard code them into the image.
- Ensure that you have a `.dockerignore` file set up to exclude unnecessary files from the build context.
- For secrets (like `OPENAI_API_KEY`), use your orchestrator's secrets management tools to avoid exposing them.
- In production, use `docker-compose` with caution and instead prefer orchestrators (Kubernetes, ECS, etc.) for better management, fault tolerance, and scaling capabilities.
- In CI, ensure that building the Docker image and running tests does not inadvertently alter the production state or leak sensitive information.
This strategy allows you to manage a unified Docker workflow across different stages of software development, ensuring consistency and efficient promotion through environments.
One-Click to deploy well-designed ChatGPT web UI on Vercel.
Deploy your own private ChatGPT web app for free with one click.
- Deploy for free with one-click on Vercel in under 1 minute
- Privacy first, all data stored locally in the browser
- Responsive design, dark mode and PWA
- Fast first screen loading speed (~100kb), support streaming response
- Awesome prompts powered by awesome-chatgpt-prompts-zh and awesome-chatgpt-prompts
- Automatically compresses chat history to support long conversations while also saving your tokens
- One-click export all chat history with full Markdown support
- I18n supported
- System Prompt: pin a user defined prompt as system prompt #138
- User Prompt: user can edit and save custom prompts to prompt list
- Prompt Template: create a new chat with pre-defined in-context prompts
- Share as image, share to ShareGPT
- Desktop App with tauri
- Self-host Model: support llama, alpaca, ChatGLM, BELLE etc.
- Plugins: support network search, calculator, any other apis etc. #165
- User login, accounts, cloud sync
- UI text customize
- Deploy for free with one click on Vercel in under 1 minute
- Carefully designed UI, responsive design, dark mode support, PWA support
- Very fast first-screen loading (~100kb), streaming responses supported
- Privacy and security: all data is stored locally in the user's browser
- A large built-in prompt list, sourced from Chinese and English collections
- Automatically compresses chat history, supporting very long conversations while saving tokens
- One-click export of chat history with full Markdown support
- Have your own domain? Even better: bind it and access your deployment instantly from anywhere
- Set a system Prompt for each conversation #138
- Allow users to edit the built-in Prompt list
- Prompt templates: quickly create new conversations with predefined contexts
- Share as an image, or share to ShareGPT
- Desktop app packaged with tauri
- Support for self-hosted large language models
- Plugin mechanism: network search, calculator, calls to other platform APIs #165
- Customizable UI text
- User login, account management, cloud message sync
- Get your OpenAI API Key;
- Click the deploy button; remember that `CODE` is your page password;
- Enjoy :)
If you have deployed your own project with just one click following the steps above, you may encounter the issue of "Updates Available" constantly showing up. This is because Vercel will create a new project for you by default instead of forking this project, resulting in the inability to detect updates correctly.
We recommend that you follow the steps below to re-deploy:
- Delete the original repository;
- Use the fork button in the upper right corner of the page to fork this project;
- Choose and deploy in Vercel again, please see the detailed tutorial.
After forking the project, due to the limitations imposed by GitHub, you need to manually enable Workflows and the Upstream Sync Action on the Actions page of the forked project. Once enabled, automatic updates will be scheduled every hour:
If you want to update instantly, you can check out the GitHub documentation to learn how to synchronize a forked project with upstream code.
You can star or watch this project or follow the author to get release notifications in time.
This project provides limited access control. Please add an environment variable named `CODE` on the Vercel environment variables page. The value should be comma-separated passwords, like this:
code1,code2,code3
After adding or modifying this environment variable, please redeploy the project for the changes to take effect.
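Parsing that comma-separated value is straightforward; the sketch below (hypothetical helper names, not the project's actual implementation) ignores whitespace and empty entries:

```javascript
// Parse the CODE environment variable ("code1,code2,code3") into a
// set of accepted access passwords, ignoring whitespace and empty entries.
function parseAccessCodes(raw) {
  return new Set(
    (raw || '')
      .split(',')
      .map((code) => code.trim())
      .filter(Boolean)
  );
}

// An empty CODE set means access control is disabled.
function isAuthorized(codes, candidate) {
  return codes.size === 0 || codes.has(candidate);
}
```

Using a `Set` makes membership checks O(1) and deduplicates repeated passwords automatically.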
- `OPENAI_API_KEY` (required): your OpenAI API key.
- `CODE` (optional): access password(s), separated by commas.
- `BASE_URL` (optional): override the OpenAI API request base URL. Default: `https://api.openai.com`. Example: `http://your-openai-proxy.com`.
- `OPENAI_ORG_ID` (optional): specify the OpenAI organization ID.
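To illustrate how these variables fit together, here is a hedged sketch of building a request target from them (the `buildRequest` helper and the `OPENAI_ORG_ID` variable name are assumptions for illustration; `OpenAI-Organization` is OpenAI's documented header name, and the path shown is the standard chat completions endpoint):

```javascript
// Build the OpenAI request URL and headers from environment variables.
// BASE_URL overrides the default API host; the organization ID is optional.
function buildRequest(env = process.env) {
  // Strip trailing slashes so joining with the path never doubles them.
  const base = (env.BASE_URL || 'https://api.openai.com').replace(/\/+$/, '');
  const headers = {
    Authorization: `Bearer ${env.OPENAI_API_KEY || ''}`,
    'Content-Type': 'application/json',
  };
  if (env.OPENAI_ORG_ID) {
    headers['OpenAI-Organization'] = env.OPENAI_ORG_ID;
  }
  return { url: `${base}/v1/chat/completions`, headers };
}
```

Centralizing URL construction like this means a proxy (`BASE_URL`) can be swapped in without touching any call site.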
Before starting development, you must create a new `.env.local` file at the project root and place your API key into it:

```shell
OPENAI_API_KEY=<your api key here>
```
```shell
# 1. install nodejs and yarn first
# 2. config local env vars in `.env.local`
# 3. run
yarn install
yarn dev
```

```shell
docker pull yidadaa/chatgpt-next-web
docker run -d -p 3000:3000 \
   -e OPENAI_API_KEY="sk-xxxx" \
   -e CODE="your-password" \
   yidadaa/chatgpt-next-web
```

You can start the service behind a proxy:
```shell
docker run -d -p 3000:3000 \
   -e OPENAI_API_KEY="sk-xxxx" \
   -e CODE="your-password" \
   -e PROXY_URL="http://localhost:7890" \
   yidadaa/chatgpt-next-web
```

Or run the setup script:

```shell
bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/scripts/setup.sh)
```

Only users who donated 100 RMB or more are listed.
@mushan0x0 @ClarenceDan @zhangjia @hoochanlon @relativequantum @desenmeng @webees @chazzhou @hauy @Corwin006 @yankunsong @ypwhs @fxxxchao




