AI-powered Twitter/X content engine with human-in-the-loop approval.
Agent-X automates the research-to-draft pipeline for maintaining a high-quality Twitter/X presence. It discovers trending tech stories from Hacker News and RSS feeds, generates polished tweet drafts using Google Gemini, and presents them in a web dashboard for human review. Nothing gets published without explicit approval.
Live instance: instance-deepseekmistral.taild04f2d.ts.net
- How It Works
- Features
- Tech Stack
- Architecture
- Project Structure
- Getting Started
- Configuration
- API Reference
- Testing
- Deployment
- Documentation
Fetch Sources (HN + RSS)
→ Rank & Deduplicate Topics
→ Generate Content Angles (LLM)
→ Draft Tweets (LLM, 280-char enforced)
→ Uniqueness Check (embedding similarity + hash)
→ Human Review (Dashboard)
→ Approve / Edit / Reject
→ Publish to X (scheduled with jitter)
→ Track Metrics & Generate Weekly Improvements
The pipeline runs on a configurable schedule (1h–24h intervals) via APScheduler. Each stage is independently configurable — model, temperature, prompts, and parameters can all be tuned from the dashboard without touching code.
Content Pipeline
- Automated topic discovery from Hacker News API and RSS feeds
- LLM-powered topic ranking, angle expansion, and tweet generation
- Dual deduplication: content hash (exact match) + embedding cosine similarity (0.85 threshold)
- 280-character limit enforcement with retry logic
- Configurable style: tone, topics, hashtag policy, CTA frequency, hook styles
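The dual deduplication above can be sketched in a few lines. This is a simplified illustration (helper names are assumptions; the real checker in `core/uniqueness_checker.py` works against the pgvector-backed posts table rather than looping in Python):

```python
# Illustrative sketch of the dual deduplication strategy:
# exact-match content hashing plus embedding cosine similarity.
import hashlib
import math

def content_hash(text: str) -> str:
    # Normalize before hashing so trivial whitespace/case changes still match.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_duplicate(draft_hash: str, draft_emb: list[float],
                 existing: list[tuple[str, list[float]]],
                 threshold: float = 0.85) -> bool:
    """existing: (hash, embedding) pairs for previously generated posts."""
    for h, emb in existing:
        if h == draft_hash:
            return True   # exact duplicate
        if cosine_similarity(draft_emb, emb) >= threshold:
            return True   # semantic near-duplicate
    return False
```

The hash catches verbatim repeats cheaply; the embedding comparison catches rewordings of the same story that a hash would miss.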
Human-in-the-Loop Approval
- Web dashboard for reviewing, editing, approving, and rejecting drafts
- Image upload support (JPEG, PNG, GIF, WebP — server-side MIME validation)
- Rejection feedback stored and fed back into future LLM prompts
- Original content preserved when editing before approval
Publishing & Safety
- Randomized scheduling with jitter (±8–12 min) to avoid pattern detection
- Daily post limit (default: 5) and monthly cap (500) enforced at code level
- Failed publish retry with exponential backoff
- Email alerts on publish failures (via Resend)
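A minimal sketch of the jitter and cap checks described above (function names and exact signatures are illustrative, not the project's actual code):

```python
# Illustrative sketch of jittered scheduling and the publish pre-checks.
import random
from datetime import datetime, timedelta

def jittered_time(base: datetime, jitter_minutes: int = 10) -> datetime:
    # Shift the slot by a random +/- offset so posts don't land on a pattern.
    offset = random.uniform(-jitter_minutes, jitter_minutes)
    return base + timedelta(minutes=offset)

def can_publish(posts_today: int, posts_this_month: int,
                daily_limit: int = 5, monthly_cap: int = 500) -> bool:
    # Both caps are checked in code before any publish call goes out.
    return posts_today < daily_limit and posts_this_month < monthly_cap
```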
Pipeline Management
- Visual DAG dashboard (ReactFlow) showing all 7 pipeline stages in real-time
- Per-agent configuration: model override, temperature, system prompt, custom instructions
- Stage-specific parameters (e.g., `hn_story_limit`, `similarity_threshold`, `max_angles_per_topic`)
- Configurable schedule frequency and posting windows from the UI
- Pipeline logs viewer with batch tracking
Analytics & Improvement
- Engagement metrics: impressions, likes, reposts, replies, engagement rate
- Period-based analytics (7d, 30d, all-time) with trend charts and category breakdown
- Auto-fetch metrics from X API on a configurable interval
- Weekly AI improvement engine analyzes performance and proposes style changes
- Improvement proposals require human approval before applying
Infrastructure
- Docker Compose deployment (3 services: backend, frontend, nginx)
- Auto-deploy via systemd timer (polls GitHub every 2 minutes)
- Tailscale HTTPS termination
- GitHub Actions CI (lint, format, type-check, test, build)
- Mock adapters for LLM and publisher (development without external dependencies)
| Layer | Technology |
|---|---|
| Backend | Python 3.12, FastAPI, uvicorn, Pydantic |
| Frontend | Next.js 16, TypeScript, React 19, Tailwind CSS v4, shadcn/ui |
| Database | Supabase (PostgreSQL + pgvector for embeddings) |
| LLM | Vertex AI Gemini (configurable model per pipeline stage) |
| Scheduler | APScheduler (AsyncIOScheduler) |
| X API | tweepy (v1.1 media upload + v2 tweet creation) |
| Email | Resend |
| Visualization | Recharts (analytics), ReactFlow (pipeline DAG) |
| Reverse Proxy | Nginx (rate limiting, security headers, gzip) |
| Deployment | Docker Compose, Tailscale HTTPS, GitHub Actions CI |
Agent-X follows Hexagonal Architecture (Ports & Adapters). Business logic in core/ depends only on abstract interfaces — concrete adapters for LLM, publisher, and storage are injected via factory functions, making them swappable without changing business logic.
┌─────────────────────────────────────────────────────┐
│ Nginx (:80) │
│ Rate limiting · Security headers │
├────────────────────┬────────────────────────────────┤
│ │ │
│ /api/* │ /* │
│ ▼ │ ▼ │
│ FastAPI (:8000) │ Next.js (:3000) │
│ ├─ api/routes/ │ ├─ Dashboard │
│ ├─ core/ │ ├─ Approval Queue │
│ ├─ integrations/ │ ├─ Pipeline DAG │
│ └─ scheduler/ │ ├─ Analytics │
│ │ └─ Settings │
├────────────────────┴────────────────────────────────┤
│ Supabase (PostgreSQL + pgvector) │
│ posts · sources · categories · agent_configs │
│ post_metrics · style_config · improvement_proposals│
│ settings · pipeline_logs │
└─────────────────────────────────────────────────────┘
Backend Layers:
API Layer (api/) ← HTTP routing, auth, serialization
↓
Core Business Logic (core/) ← Pure Python, zero framework deps
↓
Integrations (integrations/) ← Abstract interfaces → concrete adapters
Key patterns:
- Factory pattern for adapter creation (`integrations/factories.py`)
- Per-agent LLM caching to avoid reinitializing the same model
- Pipeline status persisted to DB for real-time frontend polling
- Background threading for long-running pipeline stages
- Dynamic model switching via settings API (no restart needed)
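In miniature, the ports-and-adapters pattern looks like this (class and function names are illustrative; the real port/adapter pairs and factory live under `integrations/`):

```python
# Illustrative ports-and-adapters sketch: core code depends on the
# abstract port; a factory picks the concrete adapter at the edge.
from abc import ABC, abstractmethod

class LLMPort(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class MockLLM(LLMPort):
    # Deterministic stand-in; a real VertexAIGemini adapter would
    # implement the same port.
    def generate(self, prompt: str) -> str:
        return f"[mock draft for: {prompt[:30]}]"

def make_llm(provider: str = "mock") -> LLMPort:
    # Factory keeps adapter selection out of core business logic.
    if provider == "mock":
        return MockLLM()
    raise ValueError(f"unknown LLM provider: {provider}")

def draft_tweet(llm: LLMPort, topic: str) -> str:
    # Core logic sees only the port, never a concrete SDK client.
    return llm.generate(f"Write a tweet about {topic}")[:280]
```

Swapping `mock` for a real provider changes nothing in `draft_tweet`, which is what makes the adapters interchangeable in tests and production.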
Agent-X/
├── backend/
│ ├── api/
│ │ ├── routes/ # Route modules (11 total)
│ │ │ ├── auth.py # POST /login (JWT issuance)
│ │ │ ├── posts.py # CRUD, approve/reject, image upload, metrics
│ │ │ ├── sources.py # Content source CRUD
│ │ │ ├── categories.py # Category management
│ │ │ ├── style.py # Style config (tone, topics, hashtags)
│ │ │ ├── analytics.py # Overview, categories, trends
│ │ │ ├── improvements.py # AI improvement proposals
│ │ │ ├── settings.py # LLM model, schedule config
│ │ │ ├── agent_configs.py # Per-stage pipeline config
│ │ │ ├── logs.py # Pipeline execution logs
│ │ │ └── system.py # Health check, cycle trigger/status
│ │ ├── dependencies.py # JWT verification middleware
│ │ └── rate_limiter.py # Global rate limiting (slowapi)
│ ├── core/ # Business logic (zero framework deps)
│ │ ├── pipeline.py # 7-stage pipeline orchestration
│ │ ├── research.py # Topic discovery from sources
│ │ ├── topic_ranker.py # Scoring + angle expansion
│ │ ├── content_generator.py # LLM draft generation (280-char)
│ │ ├── uniqueness_checker.py # Hash + embedding deduplication
│ │ ├── publisher.py # Publishing with pre-checks + retry
│ │ ├── improvement_engine.py # Weekly performance analysis
│ │ ├── metrics_fetcher.py # Auto-fetch engagement from X API
│ │ ├── analytics_tracker.py # Metrics aggregation
│ │ └── exceptions.py # Domain exception hierarchy
│ ├── integrations/
│ │ ├── llm/
│ │ │ ├── base.py # Abstract LLM interface
│ │ │ ├── vertex_ai.py # Gemini via Vertex AI
│ │ │ └── mock.py # Deterministic mock for testing
│ │ ├── publishers/
│ │ │ ├── base.py # Abstract publisher interface
│ │ │ ├── x_publisher.py # X/Twitter via tweepy
│ │ │ └── mock.py # Mock publisher for testing
│ │ ├── supabase_client.py # Database operations wrapper
│ │ ├── hn_fetcher.py # Hacker News API client
│ │ ├── rss_parser.py # RSS feed parser (feedparser)
│ │ ├── resend_client.py # Email notifications (Resend)
│ │ └── factories.py # Adapter factory (LLM, publisher)
│ ├── models/ # Pydantic request/response models
│ ├── scheduler/ # APScheduler wrapper
│ ├── utils/ # Logger, datetime helpers, jitter
│ ├── tests/ # Unit tests (27+ test files)
│ ├── migrations/ # SQL schema + seed data
│ ├── main.py # FastAPI app entry point
│ └── config.py # Settings from environment
├── frontend/
│ ├── src/
│ │ ├── app/
│ │ │ ├── login/ # Password authentication
│ │ │ └── (dashboard)/ # Route group (shared sidebar layout)
│ │ │ ├── dashboard/ # Stats, recent activity, quick actions
│ │ │ ├── queue/ # Approval queue (approve/reject/edit)
│ │ │ ├── posts/ # Post history with filters + search
│ │ │ ├── sources/ # Source CRUD + health monitoring
│ │ │ ├── pipeline/ # ReactFlow DAG + agent config
│ │ │ ├── analytics/ # Charts + category breakdown
│ │ │ ├── logs/ # Pipeline execution logs viewer
│ │ │ └── settings/ # LLM, schedule, style, categories
│ │ ├── components/ # ApprovalCard, PostCard, PipelineNode, etc.
│ │ ├── lib/ # Axios client, auth helpers, utilities
│ │ └── types/ # TypeScript interfaces
│ └── Dockerfile # Multi-stage build (standalone)
├── nginx/
│ └── nginx.conf # Reverse proxy, rate limiting, security headers
├── e2e/ # Playwright E2E tests (5 test suites)
├── docs/
│ ├── v1/ # PRD, Architecture, Tasks, Deployment, API Reference
│ └── v2/ # V2 PRD, Architecture changes, Changelog
├── docker-compose.yml # 3 services: backend, frontend, nginx
└── .github/workflows/ci.yml # Lint, test, type-check, build
- Docker and Docker Compose
- A Supabase project (free tier works)
- Google Cloud account with Vertex AI enabled (for Gemini)
- X/Twitter Developer account (free tier)
- (Optional) Resend account for email notifications
- Clone the repository:

  ```bash
  git clone https://github.com/anas-aqeel/Agent-X.git
  cd Agent-X
  ```

- Create the environment file:

  ```bash
  cp .env.example .env
  ```

- Configure `.env` with your credentials (see Configuration).

- Set up the database: run the SQL in `backend/migrations/schema.sql` against your Supabase project (via the SQL Editor in the Supabase dashboard).

- Start the stack:

  ```bash
  docker compose up --build
  ```

- Access the dashboard at `http://localhost` (Nginx) or `http://localhost:3000` (Next.js direct).
Backend:

```bash
cd backend
pip install -r requirements.txt
uvicorn main:app --reload --port 8000
```

Frontend:

```bash
cd frontend
npm install
npm run dev
```

Set `LLM_PROVIDER=mock` and `PUBLISH_PLATFORM=mock` in `.env` to develop without external API dependencies.
All configuration is via environment variables. Copy .env.example to .env and fill in your values.
| Variable | Description |
|---|---|
| `SUPABASE_URL` | Supabase project URL |
| `SUPABASE_SERVICE_ROLE_KEY` | Supabase service role key (full access) |
| `OWNER_PASSWORD_HASH` | bcrypt hash of your dashboard password |
| `JWT_SECRET_KEY` | Secret for signing JWT tokens |

Generate your password hash:

```bash
python -c "from passlib.hash import bcrypt; print(bcrypt.hash('your-password'))"
```

| Variable | Default | Description |
|---|---|---|
| `LLM_PROVIDER` | `mock` | `vertex_ai` or `mock` |
| `VERTEX_AI_PROJECT_ID` | — | Google Cloud project ID |
| `VERTEX_AI_LOCATION` | `us-central1` | Vertex AI region |
| `VERTEX_AI_MODEL` | `gemini-1.5-flash` | Default model (overridable per-agent via UI) |
| `GOOGLE_APPLICATION_CREDENTIALS` | — | Path to GCP service account JSON |
| Variable | Default | Description |
|---|---|---|
| `PUBLISH_PLATFORM` | `mock` | `x_twitter` or `mock` |
| `X_API_KEY` | — | X/Twitter API key |
| `X_API_SECRET` | — | X/Twitter API secret |
| `X_ACCESS_TOKEN` | — | X/Twitter access token |
| `X_ACCESS_SECRET` | — | X/Twitter access secret |
| Variable | Default | Description |
|---|---|---|
| `POSTING_WINDOW_START` | `7` | Earliest hour to publish (0–23) |
| `POSTING_WINDOW_END` | `22` | Latest hour to publish (0–23) |
| `MAX_DAILY_POSTS` | `5` | Maximum posts per day |
| `POSTING_JITTER_MINUTES` | `10` | Random offset range for scheduling |
| `TIMEZONE` | `America/Los_Angeles` | Timezone for all scheduling |
| Variable | Default | Description |
|---|---|---|
| `RESEND_API_KEY` | — | Resend API key (optional, for email alerts) |
| `NOTIFICATION_EMAIL` | — | Email address for notifications |
| `FRONTEND_URL` | `http://localhost:3000` | Allowed CORS origin |
| `NEXT_PUBLIC_API_URL` | `http://localhost:8081` | Backend URL (build-time, requires rebuild to change) |
| `LOG_LEVEL` | `INFO` | Logging level |
All endpoints are prefixed with `/api/`. Every endpoint except `/auth/login` and `/system/health` requires a JWT Bearer token.
| Method | Endpoint | Description |
|---|---|---|
| POST | `/auth/login` | Authenticate and receive JWT |
| GET | `/system/health` | Health check (public) |
| POST | `/system/cycle/trigger` | Manually trigger the pipeline |
| GET | `/system/cycle/status` | Poll pipeline execution status |
| GET | `/posts` | List posts (filter by status, search, paginate) |
| GET | `/posts/dashboard-stats` | Dashboard overview statistics |
| GET | `/posts/recent-activity` | Recent post activity feed |
| PATCH | `/posts/{id}` | Update post (approve, reject, edit) |
| POST | `/posts/{id}/image` | Upload image for a post |
| POST | `/posts/{id}/reschedule` | Reschedule a scheduled post |
| GET | `/posts/{id}/metrics` | Get engagement metrics |
| POST | `/posts/fetch-metrics` | Auto-fetch metrics from X API |
| GET | `/sources` | List all content sources |
| POST | `/sources` | Create a new source |
| PATCH | `/sources/{id}` | Update a source |
| DELETE | `/sources/{id}` | Delete a source |
| GET | `/categories` | List categories |
| POST | `/categories` | Create a category |
| PATCH | `/categories/{id}` | Update a category |
| GET | `/style-config` | Get active style configuration |
| PUT | `/style-config` | Update style configuration |
| GET | `/analytics/overview` | Aggregate metrics (7d, 30d, all) |
| GET | `/analytics/categories` | Per-category metric breakdown |
| GET | `/analytics/trends` | Daily metrics trend data |
| GET | `/improvements` | List AI improvement proposals |
| PATCH | `/improvements/{id}` | Approve or reject a proposal |
| GET | `/settings/llm-model` | Get current LLM model |
| PUT | `/settings/llm-model` | Set LLM model dynamically |
| GET | `/settings/schedule` | Get scheduling configuration |
| PUT | `/settings/schedule` | Update schedule (live rescheduling) |
| GET | `/agent-configs` | List all pipeline agent configs |
| PATCH | `/agent-configs/{name}` | Update a pipeline agent's config |
| GET | `/logs/pipeline` | Get pipeline log entries |
| GET | `/logs/pipeline/runs` | Summary of recent pipeline runs |
Full API documentation: docs/v1/API_REFERENCE.md
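As an illustration of the auth flow, a client logs in once and attaches the returned JWT as a Bearer token on every subsequent call. The helper below is a hypothetical sketch, not a bundled client, and the exact request and response field names are assumptions; consult `docs/v1/API_REFERENCE.md` for the real shapes.

```python
# Hypothetical client sketch for the JWT-protected API.
import json
import urllib.request

def auth_headers(token: str) -> dict[str, str]:
    # Every endpoint except /auth/login and /system/health needs this header.
    return {"Authorization": f"Bearer {token}"}

def get_json(base_url: str, path: str, token: str):
    # e.g. get_json("http://localhost", "/system/cycle/status", token)
    req = urllib.request.Request(f"{base_url}/api{path}",
                                 headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```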
Backend:

```bash
cd backend

# Unit tests with coverage
pytest tests/unit/ -v --cov=. --cov-report=term-missing

# Integration tests
pytest tests/integration/ -v

# Lint
ruff check .

# Format check
black --check .
```

27+ unit test files cover pipeline orchestration, content generation, uniqueness checking, publishing, scheduling, all API routes, and adapter behavior. Tests use mock adapters, so no external services are required.
Frontend:

```bash
cd frontend

# Lint
npm run lint

# Type check
npx tsc --noEmit

# Build verification
npm run build
```

E2E:

```bash
cd e2e
npm install
npx playwright test
```

5 test suites: login, dashboard, approvals, analytics, sources. Requires running services.
GitHub Actions runs on every push/PR to Master:
- Backend: Ruff lint, Black format check, unit tests with coverage, integration tests
- Frontend: ESLint, TypeScript type check, production build
Internet → Tailscale HTTPS (:443) → Nginx (:8081) → Backend (:8000) / Frontend (:3000)
- Clone to VPS:

  ```bash
  git clone git@github.com:anas-aqeel/Agent-X.git /opt/agent-x
  ```

- Configure production `.env`:

  ```bash
  LLM_PROVIDER=vertex_ai
  PUBLISH_PLATFORM=x_twitter
  FRONTEND_URL=https://your-domain.ts.net
  NEXT_PUBLIC_API_URL=https://your-domain.ts.net
  NGINX_PORT=8081
  ```

- Set up Tailscale HTTPS:

  ```bash
  tailscale serve --bg --https=443 http://localhost:8081
  ```

- Start services:

  ```bash
  docker compose up --build -d
  ```

A systemd timer polls GitHub every 2 minutes. On new commits, it pulls changes and rebuilds only affected services.

```bash
# Full rebuild
docker compose up --build -d

# Backend only
docker compose up --build -d backend

# Frontend (required when NEXT_PUBLIC_API_URL changes)
docker compose build --no-cache frontend && docker compose up -d frontend
```

Full deployment guide: docs/v1/DEPLOYMENT.md
| Document | Description |
|---|---|
| PRD | Product requirements, content strategy, success metrics |
| Architecture | Technical design, data flow, security model |
| Implementation Plan | 7-milestone development roadmap |
| Tasks | 117 granular tasks with priorities and estimates |
| Deployment | VPS deployment and auto-deploy setup |
| API Reference | Complete endpoint documentation |
| Document | Description |
|---|---|
| PRD | V2 requirements: pipeline DAG, agent configs, new sources |
| Architecture | V2 architecture changes and new systems |
| Changelog | Running changelog |
11 tables in Supabase PostgreSQL with pgvector:
| Table | Purpose |
|---|---|
| `posts` | Tweet drafts and published content (with 768-dim embeddings) |
| `post_metrics` | Per-post engagement metrics |
| `sources` | HN and RSS content sources |
| `categories` | Content categories (10 default) |
| `style_config` | JSONB style configuration (tone, topics, formatting) |
| `agent_configs` | Per-pipeline-stage configuration (model, prompts, params) |
| `improvement_proposals` | Weekly AI-generated style improvement suggestions |
| `settings` | Runtime key-value configuration |
| `pipeline_logs` | Pipeline execution logs with batch tracking |
| `post_categories` | Many-to-many post-category mapping |
Schema: backend/migrations/schema.sql
This project is not currently licensed for public use. All rights reserved.