Learn AI/ML interactively at AI-ML Companion - Guided walkthroughs, architecture decisions, hands-on challenges, and narrated overviews for every project.
A comprehensive AI/ML learning platform with 10 end-to-end projects spanning the full spectrum from classical ML to production LLM systems. Every project follows an industry best-practice structure.
| # | Project | Domain | Difficulty | Key Tech | Walkthrough |
|---|---|---|---|---|---|
| 1 | IPL Analysis | Data Science / EDA | Beginner-Intermediate | Pandas, Plotly, Scikit-learn | Learn → |
| 2 | ML Algorithms | Classical ML / Interpretability | Intermediate | Scikit-learn, XGBoost, SHAP | Learn → |
| 3 | Deep Learning | Computer Vision / DL | Intermediate-Advanced | PyTorch, TorchVision | Learn → |
| 4 | ML Pipeline | Feature Engineering / Production ML | Advanced | Scikit-learn, FastAPI, Docker | Learn → |
| 5 | MLOps | Model Deployment / Infrastructure | Advanced | FastAPI, Docker, Prometheus, GitHub Actions | Learn → |
| 6 | LLM/RAG | Retrieval-Augmented Generation | Advanced | LangChain, ChromaDB | Learn → |
| 7 | AI Agents | LLM Agent Orchestration | Advanced | LangGraph, OpenAI, Tavily | Learn → |
| 8 | Content Moderation | Multi-Agentic AI | Advanced | LangGraph, Multi-Agent | Learn → |
| 9 | Due Diligence Agent | Multi-Agent Research | Advanced | LangGraph, Gemini, Streamlit | Learn → |
| 10 | Smart Claims Processor | Multi-Agent Insurance Claims | Advanced | LangGraph, CrewAI, Gemini, FastAPI, React | Learn → |
Comprehensive analysis of 17 IPL seasons with interactive visualizations, hypothesis testing, feature engineering, and predictive modeling.
Highlights: 1000+ matches | Plotly interactive charts | Hypothesis testing | RF + GB models
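The hypothesis-testing step can be sketched with a simple two-sample permutation test. The numbers below are synthetic stand-ins (the project itself tests real IPL match data, and may use a different test):

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference of means.

    Returns the observed difference and a two-sided p-value.
    """
    rng = np.random.default_rng(seed)
    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # break any real group structure
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm

# Synthetic example: win margins batting first vs. chasing (illustrative only)
rng = np.random.default_rng(42)
bat_first = rng.normal(20, 15, 300)   # mean margin ~20 runs
chasing   = rng.normal(15, 15, 300)   # mean margin ~15 runs
diff, p = permutation_test(bat_first, chasing)
```

A small p-value here rejects the null that both groups share the same mean margin.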
Compare 6 ML algorithms on real clinical data with cost-sensitive threshold tuning (~95% malignant recall) and SHAP explainability for regulatory review.
Highlights: 6 algorithms compared | XGBoost AUC ~0.994 | SHAP reports | Threshold tuning
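Cost-sensitive threshold tuning boils down to sweeping the decision threshold and keeping the operating point that still meets the recall target. A minimal sketch on scikit-learn's built-in breast cancer dataset, with a logistic regression stand-in rather than the project's tuned XGBoost:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
# In this dataset 0 = malignant; flip labels so malignant is the positive class.
X, y = data.data, 1 - data.target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

# Sweep thresholds; keep the highest one that still reaches 95% malignant recall
# (higher threshold = fewer false alarms at the same recall floor).
best_threshold = 0.0
for t in np.linspace(0.0, 1.0, 101):
    if recall_score(y_te, (proba >= t).astype(int)) >= 0.95:
        best_threshold = t

recall_at_best = recall_score(y_te, (proba >= best_threshold).astype(int))
```

The default 0.5 threshold optimizes accuracy, not the clinical cost of a missed malignancy; lowering it trades false positives for the recall the use case demands.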
Systematically improve a CIFAR-10 image classifier from 60% to 93%+ accuracy across 6 documented experiments with a full diagnostics toolkit.
Highlights: 6 progressive experiments | ResNet + CutMix | LR Finder | Per-class analysis
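The core of CutMix is sampling a patch whose area matches the mixing coefficient, then re-deriving the label weights from the clipped box. A minimal NumPy sketch of that geometry (the project applies it inside its PyTorch training loop):

```python
import numpy as np

def cutmix_bbox(height, width, lam, rng):
    """Sample the CutMix patch: a box covering roughly (1 - lam) of the image."""
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(height * cut_ratio), int(width * cut_ratio)
    cy, cx = rng.integers(height), rng.integers(width)            # box centre
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, height)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, width)
    # Re-derive lambda from the clipped box so the label mix matches the pixels.
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (height * width)
    return (y1, y2, x1, x2), lam_adj

rng = np.random.default_rng(0)
lam = rng.beta(1.0, 1.0)                        # CutMix samples lambda ~ Beta(a, a)
box, lam_adj = cutmix_bbox(32, 32, lam, rng)    # CIFAR-10 images are 32x32
```

The patch from a second image is pasted into `box`, and the loss mixes the two labels with weights `lam_adj` and `1 - lam_adj`.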
End-to-end pipeline from messy bank data to deployed model with KNN imputation, domain feature engineering, and PSI drift monitoring.
Highlights: Feature engineering | 10:1 cost-sensitive | PSI drift detection | FastAPI + Docker
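PSI drift detection compares the binned distribution of a feature (or score) in production against the training baseline. A minimal sketch of the standard PSI formula on synthetic data; the project monitors real features:

```python
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between a reference and a current sample.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift.
    """
    # Bin edges come from the reference distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf       # catch out-of-range live values
    e_frac = np.histogram(expected, edges)[0] / len(expected) + eps
    a_frac = np.histogram(actual, edges)[0] / len(actual) + eps
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0, 1, 10_000)
live_same    = rng.normal(0, 1, 10_000)      # same population -> PSI near 0
live_shifted = rng.normal(0.5, 1, 10_000)    # shifted population -> PSI elevated
```

Running `psi` on a schedule against each monitored feature turns silent data drift into an alertable metric.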
Production ML infrastructure: FastAPI with graceful shutdown, CI/CD pipeline, Prometheus metrics, Locust load testing, and operational runbook.
Highlights: CI/CD (GitHub Actions) | P95 < 45ms | 161.7 RPS | Kubernetes-ready
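The P95 figure is just a percentile over observed request latencies from a load test. A stdlib sketch with synthetic numbers (the project measures this with Locust against the live API):

```python
import statistics

# Simulated per-request latencies in ms from one load-test run (synthetic numbers).
latencies_ms = [12, 18, 22, 9, 31, 27, 44, 15, 20, 25,
                38, 11, 19, 23, 29, 33, 14, 21, 26, 40]

# statistics.quantiles with n=100 yields the 1st..99th percentiles; index 94 is P95.
p95 = statistics.quantiles(latencies_ms, n=100)[94]
slo_ok = p95 < 45    # the SLO this project targets: P95 < 45ms
```

P95 (not the mean) is the right SLO metric because tail latency is what slow requests actually experience; averages hide it.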
Production RAG system with chunking, security defense, and evaluation framework.
Highlights: RAG pipeline | PII defense | A/B testing
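The chunking stage splits documents into overlapping windows so context that straddles a boundary survives retrieval. A minimal fixed-size character chunker (the project may use a framework splitter such as LangChain's; the sizes here are illustrative):

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split text into fixed-size chunks with overlap so context spans boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = "word " * 400            # a 2000-character stand-in for a retrieved document
chunks = chunk_text(doc)       # each chunk shares its last 100 chars with the next
```

Chunk size trades retrieval precision (small chunks) against context completeness (large chunks); overlap keeps sentences split by a boundary findable from either side.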
4-agent orchestrated research pipeline (researcher, analyst, writer, fact-checker) with guardrails, evaluation, and cost tracking.
Highlights: LangGraph orchestration | +33% completeness vs single-agent | Budget tracking | LLM-as-judge
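Budget tracking amounts to accumulating per-agent token cost and halting the run at a spend limit. A hypothetical sketch: the `BudgetTracker` class and the per-1K-token prices are illustrative, not the project's actual code or any provider's real rates:

```python
class BudgetTracker:
    """Accumulate token spend per agent and flag when a run budget is exhausted."""

    def __init__(self, budget_usd, price_in=0.0005, price_out=0.0015):
        self.budget_usd = budget_usd
        self.price_in, self.price_out = price_in, price_out   # USD per 1K tokens
        self.spend = {}

    def record(self, agent, tokens_in, tokens_out):
        cost = (tokens_in / 1000 * self.price_in
                + tokens_out / 1000 * self.price_out)
        self.spend[agent] = self.spend.get(agent, 0.0) + cost
        return cost

    @property
    def total(self):
        return sum(self.spend.values())

    def over_budget(self):
        return self.total >= self.budget_usd

tracker = BudgetTracker(budget_usd=0.01)
tracker.record("researcher", tokens_in=4000, tokens_out=1500)
tracker.record("writer", tokens_in=2000, tokens_out=3000)
```

The orchestrator checks `over_budget()` between agent steps and short-circuits the graph instead of letting a runaway loop burn spend.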
Multi-agent content moderation pipeline with specialized agents for different content types.
Enterprise-grade company research powered by 6 AI agents with parallel execution, fact-checking, contradiction resolution, and comprehensive guardrails.
Highlights: 6 specialist agents | Parallel execution via LangGraph Send() | Fact-checking + debate | Streamlit dashboard
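The parallel fan-out pattern can be illustrated with a plain thread pool; the project does this natively with LangGraph's Send(), and the specialist agents below are hypothetical placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical specialists; real agents would call an LLM plus search tools.
def financials_agent(company): return f"{company}: financials ok"
def news_agent(company):       return f"{company}: no adverse news"
def legal_agent(company):      return f"{company}: no litigation found"

def research(company, agents):
    # Fan out: every specialist runs concurrently, then results are merged
    # downstream (fact-checking, contradiction resolution, report writing).
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, company) for agent in agents]
        return [f.result() for f in futures]

findings = research("Acme Corp", [financials_agent, news_agent, legal_agent])
```

Because the specialists are independent, wall-clock time is bounded by the slowest agent rather than the sum of all of them.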
Production-style multi-agent insurance claims system built with LangGraph (orchestration) and CrewAI (fraud detection). 7 specialist agents handle intake validation, fraud detection, damage assessment, policy compliance, settlement calculation, LLM-as-judge evaluation, and claimant notification.
Highlights: LangGraph + CrewAI hybrid | Human-in-the-Loop with durable checkpointing | Per-agent confidence gates | Country-aware (US/India) | Pluggable LLMs (Gemini/Groq) | React UI with Agent Trace panel
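A per-agent confidence gate is just a routing rule: if any specialist's confidence falls below a threshold, the claim escalates to a human. A hypothetical sketch in which the agent names and the 0.8 gate are illustrative, not the project's configuration:

```python
def route_claim(agent_scores, gate=0.8):
    """Per-agent confidence gate: escalate to a human if any agent is unsure.

    agent_scores maps agent name -> confidence in [0, 1].
    """
    low = [name for name, score in agent_scores.items() if score < gate]
    return ("human_review", low) if low else ("auto_settle", [])

decision, flagged = route_claim({
    "fraud_detection": 0.95,
    "damage_assessment": 0.72,   # below the gate -> human-in-the-loop
    "policy_compliance": 0.91,
})
```

Gating on the weakest agent (rather than an average) ensures a single uncertain assessment is enough to trigger human review.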
Every project follows a consistent structure adapted from top ML teams:
project/
├── configs/ # Experiment configuration (YAML)
├── notebooks/ # Exploration & communication
├── src/ # Production source code
├── tests/ # Testing pyramid (unit/integration/load)
├── artifacts/ # Versioned outputs (models, results, figures)
├── docs/ # Model cards, architecture docs, experiment logs
├── scripts/ # One-command automation scripts
├── docker/ # Containerization (where applicable)
├── .gitignore
├── Makefile # make train | make test | make serve
├── requirements.txt
└── README.md
| Principle | What It Means |
|---|---|
| Separation of Concerns | Code (src/), config (configs/), data (data/), and artifacts (artifacts/) never mix |
| Reproducibility First | Configs are YAML, seeds are explicit, environments are containerized |
| Notebook = Communication | Notebooks prototype and communicate; src/ is the production code |
| Testing Pyramid | Unit tests catch logic bugs, integration tests catch pipeline bugs, load tests catch scaling bugs |
| Security by Default | Input sanitization, PII detection, injection defense (critical for LLM projects) |
| Observable from Day 1 | Monitoring, structured logging, metrics export built-in |
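"Seeds are explicit" in practice means one helper pins every RNG before a run. A minimal sketch (the deep-learning projects would extend this with `torch.manual_seed`):

```python
import random

import numpy as np

def set_seed(seed: int) -> None:
    """Pin every RNG the pipeline touches so an experiment replays exactly."""
    random.seed(seed)
    np.random.seed(seed)

set_seed(42)
run_a = np.random.rand(3)
set_seed(42)
run_b = np.random.rand(3)    # identical to run_a: same seed, same results
```

The seed itself lives in the experiment's YAML config, so reproducing a result is a matter of re-running with the same config file.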
Each project is self-contained. Pick one and follow its README:
cd projects/algorithm-showdown # or any other project
pip install -r requirements.txt
make all                         # train -> evaluate -> test
1. IPL Analysis -> Data wrangling, EDA, visualization fundamentals
|
2. ML Algorithms -> Classical ML, model comparison, interpretability
|
3. Deep Learning -> Neural networks, progressive experimentation
|
4. ML Pipeline -> Feature engineering, end-to-end pipelines, monitoring
|
5. MLOps -> Deployment, CI/CD, load testing, infrastructure
|
6. LLM/RAG -> Retrieval-augmented generation, evaluation, security
|
7. AI Agents -> Multi-agent orchestration, guardrails, cost optimization
|
8. Content Moderation -> Multi-agentic content pipelines
|
9. Due Diligence Agent -> Enterprise multi-agent research, fact-checking, debate
|
10. Smart Claims Processor -> Multi-agent insurance, HITL, hybrid orchestration
aiml-companion/
├── projects/
│ ├── ipl-match-predictor/ # EDA + Predictive Modeling
│ ├── algorithm-showdown/ # Classical ML + SHAP
│ ├── deep-learning-project/ # CIFAR-10 + PyTorch
│ ├── credit-risk-pipeline/ # Credit Risk + Monitoring
│ ├── model-serving-platform/ # Model Serving + CI/CD
│ ├── rag-expert-assistant/ # RAG + Security
│ ├── ai-agents-project/ # Multi-Agent + LangGraph
│ ├── content-moderation-project/ # Multi-Agentic Content Moderation
│ ├── due-diligence-agent/ # Multi-Agent Company Research
│ └── smart-claims-processor/ # Multi-Agent Insurance Claims
└── README.md # This file
Author: Rajesh Srivastava