diff --git a/README.md b/README.md index ef29768..3ebb744 100644 --- a/README.md +++ b/README.md @@ -49,6 +49,13 @@ Documentation is available at https://kiro.dev/docs/powers/ --- +### aws-transform +**Agents modernizing the world's infrastructure and software**, backed by years of AWS expertise. AWS Transform is a full modernization factory — connecting assessment through execution in a single experience, so the manual handoffs and lost context that commonly stall large-scale migrations and ongoing tech debt reduction no longer slow you down. This power brings AWS Transform directly into Kiro. AWS Transform custom is the first supported capability, with more playbooks on the way. Find out more at [aws.amazon.com/transform](https://aws.amazon.com/transform/) + +**MCP Servers:** None + +--- + ### cloud-architect **Build infrastructure on AWS** - Build AWS infrastructure with CDK in Python following AWS Well-Architected framework best practices. diff --git a/aws-transform/POWER.md b/aws-transform/POWER.md new file mode 100644 index 0000000..9b84afa --- /dev/null +++ b/aws-transform/POWER.md @@ -0,0 +1,756 @@ +--- +name: "aws-transform" +displayName: "AWS Transform" +description: "Agents modernizing the world's infrastructure and software, backed by years of AWS expertise. AWS Transform is a full modernization factory — connecting assessment through execution in a single experience, so the manual handoffs and lost context that commonly stall large-scale migrations and ongoing tech debt reduction no longer slow you down. This power brings AWS Transform directly into Kiro. AWS Transform custom is the first supported capability, with more playbooks on the way. Find out more at aws.amazon.com/transform" +keywords: ["codebase analysis", "code upgrade", "aws transform", "tech debt", "modernize"] +author: "AWS" +version: "1.0.0" +--- +# AWS Transform (ATX) + +## Overview + +Perform code upgrades, migrations, and transformations using AWS Transform (ATX). 
+Supports any-to-any transformations: language version upgrades (Java, Python, Node.js, etc.),
+framework migrations, AWS SDK migrations, library upgrades, code refactoring, architecture
+changes, and custom organization-specific transformations.
+
+Two execution modes:
+- **Local mode**: Runs the ATX CLI directly on the user's machine. Best for 1-9 repos.
+- **Remote mode**: Runs transformations at scale via AWS Batch/Fargate containers.
+  Best for 10+ repos or when the user prefers cloud execution. Infrastructure is
+  auto-deployed with user consent.
+
+You handle the full workflow: inspecting repos, matching them to available
+transformation definitions, collecting configuration, and executing transformations
+in either mode — the user just provides repos and confirms the plan.
+
+## Greet and Wait
+
+On activation, introduce AWS Transform with this exact text -- don't print the
+above Overview text to the user; that is just for your reference:
+
+"The agents modernizing the world's infrastructure and software — now in Kiro.
+
+AWS Transform is a full modernization factory — compressing years of
+transformation work into months across infrastructure migrations, mainframe
+modernization, and continuous tech debt reduction. Today, in Kiro, you have access
+to AWS Transform custom, the first of a growing library of playbooks.
+
+AWS Transform custom can help you:
+* Upgrade Java, Python, and Node.js to modern versions
+* Migrate AWS SDKs (Java SDK v1→v2, boto2→boto3, JS SDK v2→v3)
+* Handle framework migrations, library upgrades, and code refactoring
+* Analyze codebases and generate documentation
+* Define and run your own custom transformations using natural language, docs,
+and code samples
+
+Run locally on a few repos for fast iteration, or at scale on hundreds of repos (up to 128 in parallel). What would you like to transform today?"
+
+Do NOT inspect any files, run any commands, or check prerequisites until the user responds.
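The repo-count heuristic above (local for 1-9 repos, remote for 10+) can be sketched as a tiny helper. `suggest_mode` is hypothetical, for illustration only; the real decision also honors an explicit user preference:

```shell
# Hypothetical helper sketching the mode heuristic from the Overview:
# local for 1-9 repos, remote for 10+.
suggest_mode() {
  if [ "$1" -ge 10 ]; then
    echo remote
  else
    echo local
  fi
}

suggest_mode 3    # -> local
suggest_mode 200  # -> remote
```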
+ +## Usage + +Use when the user wants to: +- Transform, upgrade, or migrate code (Java, Python, Node.js, etc.) +- Migrate AWS SDKs (Java SDK v1→v2, boto2→boto3, JS SDK v2→v3, etc.) +- Run bulk code transformations at scale via AWS Batch/Fargate +- Analyze which ATX transformations apply to their repositories +- Perform comprehensive codebase analysis +- Create a new custom Transformation Definition (TD) + +## Core Concepts + +- **Transformation Definition (TD)**: A reusable transformation recipe discovered via `atx custom def list --json` +- **Match Report**: Auto-generated mapping of repos to applicable TDs based on code inspection +- **Local Mode**: Runs ATX CLI on the user's machine (1-9 repos, max 3 concurrent) +- **Remote Mode**: Runs transformations in AWS Batch/Fargate (10+ repos, or by preference) + +## Philosophy + +Wait for the user. On activation, present what this power can do and ask the user +what they'd like to accomplish. Do NOT automatically inspect the working directory, +open files, or any repository until the user explicitly provides repos to work with. + +Once the user provides repositories, match — don't ask. Inspect those repositories +and present which transformations apply automatically. Never show a raw TD list and +ask the user to pick. + +## Prerequisites + +Prerequisite checks run ONCE at the start of a session. Do not repeat per repo. +Do NOT run prerequisite checks until the user has stated what they want to do. + +### 0. Platform Check (Required — All Modes) + +Detect the user's operating system. If on Windows (not WSL), stop immediately and +inform the user: + +> AWS Transform custom does not support native Windows. You need to install +> Windows Subsystem for Linux (WSL) and run this from within WSL. +> +> Install WSL: `wsl --install` in PowerShell (as Administrator), then restart. +> After that, open a WSL terminal and re-run this power from there. 
+ +Check by running: +```bash +uname -s +``` +- `Linux` or `Darwin` → proceed normally +- `MINGW*`, `MSYS*`, `CYGWIN*`, or any Windows-like output → block and show the WSL message above +- Command fails, errors, or is not found → treat as native Windows, block and show the WSL message above + +Do NOT proceed with any other steps on native Windows. + +### 1. AWS CLI (Required — All Modes) + +```bash +aws --version +``` + +If not installed, guide the user: +- macOS: `brew install awscli` or `curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg" && sudo installer -pkg AWSCLIV2.pkg -target /` +- Linux: `curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && sudo ./aws/install` + +Do NOT proceed until `aws --version` succeeds. + +### 2. AWS Credentials (Required — All Modes) + +```bash +aws sts get-caller-identity +``` + +If credentials are NOT configured, walk the user through setup: + +``` +AWS Transform custom requires AWS credentials to authenticate with the service. Configure authentication using one of the following methods. + +1. AWS CLI Configure (~/.aws/credentials): + aws configure + +2. AWS Credentials File (manual). Configure credentials in ~/.aws/credentials: + +[default] +aws_access_key_id = your_access_key +aws_secret_access_key = your_secret_key + +3. Environment Variables. Set the following environment variables: + +export AWS_ACCESS_KEY_ID=your_access_key +export AWS_SECRET_ACCESS_KEY=your_secret_key +export AWS_SESSION_TOKEN=your_session_token + +You can also specify a profile using the AWS_PROFILE environment variable: + +export AWS_PROFILE=your_profile_name +``` + +Do NOT proceed until credentials are verified. Re-run `aws sts get-caller-identity` after setup. + +Note: environment variables set via `export` do not carry over between shell sessions. If the agent spawns a new shell, credentials set as env vars may be lost. Prefer `aws configure` or `~/.aws/credentials` for persistence. 
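The platform rules from step 0 can be condensed into a small gate. This is a sketch; `platform_gate` is a hypothetical helper, not an ATX CLI command:

```shell
# Map `uname -s` output to the proceed/block decision described in step 0.
# An empty argument models `uname` failing or missing (treated as native Windows).
platform_gate() {
  case "$1" in
    Linux|Darwin) echo proceed ;;
    *)            echo block ;;  # MINGW*, MSYS*, CYGWIN*, empty, anything else
  esac
}

platform_gate "$(uname -s 2>/dev/null)"
```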
+ +### 3. ATX CLI (Required — All Modes) + +Required in all modes for TD discovery (`atx custom def list --json`). +Local mode also uses it for transformation execution. +```bash +atx --version +# Install: curl -fsSL https://transform-cli.awsstatic.com/install.sh | bash +``` + +If installed, check for updates and update if available: +```bash +atx update +``` + +### 4. IAM Permissions (Required — All Modes) + +Local mode requires `transform-custom:*` minimum. Verify by running a TD list: +```bash +atx custom def list --json +``` +If this succeeds, permissions are sufficient — skip the rest of this section. + +If it fails with a permissions error, the caller needs the `transform-custom:*` +IAM permission. Explain to the user what's needed and get confirmation before proceeding: + +> Your identity needs the `transform-custom:*` permission to use the ATX CLI. +> I can attach the AWS-managed policy `AWSTransformCustomFullAccess` to your +> identity. Shall I proceed? + +Only after the user confirms, attach the managed policy: +```bash +CALLER_ARN=$(aws sts get-caller-identity --query Arn --output text) +if echo "$CALLER_ARN" | grep -q ":user/"; then + IDENTITY_NAME=$(echo "$CALLER_ARN" | awk -F'/' '{print $NF}') + aws iam attach-user-policy --user-name "$IDENTITY_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/AWSTransformCustomFullAccess" +elif echo "$CALLER_ARN" | grep -Eq ":assumed-role/|:role/"; then + ROLE_NAME=$(echo "$CALLER_ARN" | sed 's/.*:\(assumed-\)\{0,1\}role\///' | cut -d'/' -f1) + aws iam attach-role-policy --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/AWSTransformCustomFullAccess" +fi +``` + +If the attachment command itself fails (e.g., insufficient IAM permissions, or an +SSO-managed role), inform the user they need to ask their AWS administrator to +attach the `AWSTransformCustomFullAccess` AWS-managed policy to their identity. 
+For SSO users (role names starting with `AWSReservedSSO_`), this must be added +to their IAM Identity Center permission set — it cannot be attached directly. + +Do NOT proceed until `atx custom def list --json` succeeds. + +Remote mode requires additional permissions (Lambda invoke, S3, KMS, Secrets Manager, +CloudWatch). These are generated and attached as part of the deployment flow — see +[steering/remote-execution.md](steering/remote-execution.md). + +See [steering/cli-reference.md](steering/cli-reference.md) for the full permission list. + +### 5. AWS CDK (Remote Mode Only) + +Required for deploying remote infrastructure. Check if installed: +```bash +cdk --version +``` + +If not installed, install it globally: +```bash +npm install -g aws-cdk +``` + +Do NOT proceed with remote deployment until `cdk --version` succeeds. + +### 6. Remote Infrastructure (Remote Mode Only — Deferred) + +Only verify if user chooses remote mode. The infrastructure CDK scripts are fetched +at runtime by cloning `https://github.com/aws-samples/aws-transform-custom-samples.git` (branch `atx-remote-infra`) — +they are not bundled with this power. See [steering/remote-execution.md](steering/remote-execution.md). + +## Workflow + +Generate a session timestamp once and reuse it for all paths in this session: +```bash +SESSION_TS=$(date +%Y%m%d-%H%M%S) +``` + +### Step 1: Collect Repositories + +Ask the user for local paths or git URLs. Accept one or many. Do NOT assume the +current working directory or open editor files are the target — wait for the user +to explicitly provide repositories. + +Accepted source formats: +- **Local paths** — directories on the user's machine (e.g., `/home/user/my-project`) +- **HTTPS git URLs** — public or private (e.g., `https://github.com/org/repo.git`) +- **SSH git URLs** — e.g., `git@github.com:org/repo.git` +- **S3 bucket path with zips** — e.g., `s3://my-bucket/repos/` + containing zip files of repositories. Each zip becomes one transformation job. 
+ +#### S3 Bucket Input + +If the user provides an S3 path containing zip files, ask which execution mode +they prefer (if not already specified). S3 input works in both modes: + +**Remote mode:** Copy the zips from the user's bucket to the managed source bucket, +then submit jobs pointing to the managed copies: +```bash +ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +SOURCE_BUCKET="atx-source-code-${ACCOUNT_ID}" + +# List all zips in the user's bucket path +aws s3 ls s3://user-bucket/repos/ --recursive | grep '\.zip$' + +# Copy each zip to the managed source bucket +aws s3 sync s3://user-bucket/repos/ s3://${SOURCE_BUCKET}/repos/ --exclude "*" --include "*.zip" +``` +Then submit a batch job with one job per zip, each pointing to +`s3://${SOURCE_BUCKET}/repos/.zip`. The container handles zip extraction +automatically. See [steering/multi-transformation.md](steering/multi-transformation.md) for batch submission. +The managed source bucket has a 2-day lifecycle — copied zips auto-delete. + +**Local mode:** Download and extract each zip locally: +```bash +mkdir -p ~/.aws/atx/custom/atx-agent-session/repos +aws s3 sync s3://user-bucket/repos/ ~/.aws/atx/custom/atx-agent-session/repos/ --exclude "*" --include "*.zip" +for zip in ~/.aws/atx/custom/atx-agent-session/repos/*.zip; do + name=$(basename "$zip" .zip) + unzip -qo "$zip" -d "$HOME/.aws/atx/custom/atx-agent-session/repos/${name}-$SESSION_TS/" +done +``` +Use the extracted directories as `` for local execution. Standard local +mode limits apply (max 3 concurrent repos). + +#### Private Repository Detection (Remote Mode) + +**Always ask the user** — do NOT try to determine repo visibility yourself. Never +attempt to clone, curl, or probe a URL to check if it's public or private. Simply +ask the user. As soon as the user provides git URLs and remote mode is selected +(or likely), ask: + +> "Are any of these repositories private? 
If so, the remote container needs +> credentials to clone them — I'll walk you through the setup." + +Do NOT skip this question. Do NOT try to infer visibility by attempting a clone, +curl, or any other network request. Just ask. + +If the user confirms repos are private, determine the credential type based on URL format: + +First, resolve the region (use for all Secrets Manager commands below): +```bash +REGION=${AWS_REGION:-${AWS_DEFAULT_REGION:-$(aws configure get region 2>/dev/null)}} +REGION=${REGION:-us-east-1} +``` + +**For HTTPS URLs** — check whether a GitHub PAT is already configured: +```bash +aws secretsmanager describe-secret --secret-id "atx/github-token" --region "$REGION" 2>/dev/null \ + && echo "CONFIGURED" || echo "NOT_CONFIGURED" +``` + +If CONFIGURED, ask the user: "A GitHub PAT is already stored. Would you like to +keep using it, or replace it with a new one?" If they want to replace it, tell +them to run: +``` +aws secretsmanager put-secret-value --secret-id "atx/github-token" --region "$REGION" --secret-string "YOUR_TOKEN_HERE" +``` + +If NOT_CONFIGURED, explain what's needed and tell the user to run the create command: +> "Private HTTPS repos need a GitHub Personal Access Token (PAT) stored in AWS +> Secrets Manager. The remote container fetches it at startup to clone your repos. +> The token stays in your AWS account — you can delete it anytime. +> +> The PAT needs the `repo` scope for private repositories. Create one at +> https://github.com/settings/tokens and then run: +> ``` +> aws secretsmanager create-secret --name "atx/github-token" --region "$REGION" --secret-string "YOUR_TOKEN_HERE" +> ``` +> +> Delete anytime: `aws secretsmanager delete-secret --secret-id atx/github-token --region "$REGION" --force-delete-without-recovery`" + +Do NOT ask the user to paste their token in chat. They run the command themselves. 
+Wait for the user to confirm it's done, then verify:
+```bash
+aws secretsmanager describe-secret --secret-id "atx/github-token" --region "$REGION" 2>/dev/null \
+  && echo "CONFIGURED" || echo "NOT_CONFIGURED"
+```
+
+**For SSH URLs** (`git@...` or `ssh://...`) — check whether an SSH key is configured:
+```bash
+aws secretsmanager describe-secret --secret-id "atx/ssh-key" --region "$REGION" 2>/dev/null \
+  && echo "CONFIGURED" || echo "NOT_CONFIGURED"
+```
+
+If CONFIGURED, ask the user: "An SSH key is already stored. Would you like to
+keep using it, or replace it with a new one?" If they want to replace it, tell
+them to run:
+```
+aws secretsmanager put-secret-value --secret-id "atx/ssh-key" --region "$REGION" --secret-string "$(cat ~/.ssh/id_rsa)"
+```
+
+If NOT_CONFIGURED, explain what's needed and tell the user to run the create command:
+> "SSH repos need an SSH private key stored in AWS Secrets Manager. The remote
+> container fetches it at startup to clone your repos.
+>
+> Run:
+> ```
+> aws secretsmanager create-secret --name "atx/ssh-key" --region "$REGION" --secret-string "$(cat ~/.ssh/id_rsa)"
+> ```
+>
+> Delete anytime: `aws secretsmanager delete-secret --secret-id atx/ssh-key --region "$REGION" --force-delete-without-recovery`"
+
+Do NOT ask the user to paste their SSH key in chat. They run the command themselves.
+
+For local mode, private repo credentials are not needed — the user's local git
+config handles authentication. Skip this check entirely for local mode.
+
+### Step 2: Discover TDs (Silent)
+
+Run silently — do NOT show output to user:
+```bash
+atx custom def list --json
+```
+Inspect the JSON output directly to build an internal lookup of available TDs.
+Do NOT pipe the output to python, jq, or other parsing scripts — read the JSON
+yourself. Never hardcode TD names.
+
+#### Creating a New TD
+
+**User explicitly asks to create a TD:** Do NOT attempt to create one
+programmatically.
Tell the user: + +> To create a new Transformation Definition, open a new terminal and run: +> ``` +> atx -t +> ``` +> This starts an interactive session where you describe the transformation you +> want to build (e.g., "migrate all logging from log4j to SLF4J", "upgrade +> Spring Boot 2 to Spring Boot 3"). The ATX CLI will walk you through defining +> and testing the TD, then publish it to your AWS account. +> +> Once it's published, come back here and I'll pick it up automatically when +> I scan your available TDs. + +**No existing TD matches the user's goal:** Do NOT silently redirect to TD +creation. The match logic may be imperfect. Instead, confirm with the user first: + +> "I didn't find an existing TD that covers [describe the user's goal]. Would +> you like to create a new one?" + +Only show the `atx -t` instructions if the user confirms. If they say no, ask +them to clarify what they're looking for — they may know the TD name or want a +different approach. + +Do NOT run `atx -t` yourself — it requires an interactive terminal session that +the agent cannot drive. The user must run it manually in a separate terminal. + +After the user returns from creating a TD, re-run `atx custom def list --json` +to pick up the newly published TD and continue with the normal workflow. 
+ +### Step 3: Inspect Each Repository + +Perform lightweight inspection only — check config files for key signals: + +| Signal | Files to Check | Likely TD Type | +|--------|---------------|----------------| +| Python version | `.python-version`, `pyproject.toml`, `setup.cfg`, `requirements.txt` | Python version upgrade | +| Java version | `pom.xml` (``), `build.gradle` (`sourceCompatibility`), `.java-version` | Java version upgrade | +| Node.js version | `package.json` (`engines.node`), `.nvmrc`, `.node-version` | Node.js version upgrade | +| Python boto2 | `import boto` (NOT boto3) | boto2→boto3 migration | +| Java SDK v1 | `com.amazonaws` imports, `aws-java-sdk` in pom.xml | Java SDK v1→v2 | +| Node.js SDK v2 | `"aws-sdk"` in package.json (NOT `@aws-sdk`) | JS SDK v2→v3 | +| x86 Java | `x86_64`/`amd64` in Dockerfiles, build configs | Graviton migration | + +Cross-reference detected signals against TDs from Step 2. Only match TDs that +actually exist in the user's account. + +See [steering/repo-analysis.md](steering/repo-analysis.md) for full detection commands. + +### Step 4: Present Match Report + +Format: +``` +Transformation Match Report +============================= +Repository: () + Language: + Matching TDs: + - + +Summary: N repos analyzed, M have applicable transformations (T total jobs) +``` + +Present the match report and wait for user confirmation before proceeding. +Do NOT start any transformation without explicit user consent. + +### Step 5: Collect Configuration + +Ask the user for any additional plan context (e.g., target version for upgrade TDs). +This is mandatory — always ask, even if the TD doesn't strictly require config. +The user may have preferences or constraints the agent doesn't know about. +Skip only if the user explicitly says no additional context is needed. 
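Collected plan context is passed to local runs through the CLI's `-g`/`--configuration` flag (flag usage per the CLI reference). `build_config_flag` is a hypothetical helper; note the comma restriction on `additionalPlanContext` from Critical Rule 10:

```shell
# Build the additionalPlanContext value for -g/--configuration, rejecting commas
# (commas are not allowed inside additionalPlanContext per Critical Rule 10).
build_config_flag() {
  case "$1" in
    *,*) echo "error: additionalPlanContext must not contain commas" >&2; return 1 ;;
  esac
  printf 'additionalPlanContext=%s' "$1"
}

CFG=$(build_config_flag "Target Java 21") || exit 1
# Local execution would then look like (TD name and path are illustrative):
#   atx custom def exec -n my-td -p /source/repo -g "$CFG" -x -t
echo "$CFG"  # prints: additionalPlanContext=Target Java 21
```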
+ +### Step 6: Verify Runtime Compatibility (Remote and Local) + +#### Remote Mode + +Before submitting remote jobs, verify the container has the exact target version +installed. First, read the Dockerfile to see what's actually installed: +```bash +ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra" +cat "$ATX_INFRA_DIR/container/Dockerfile" 2>/dev/null +``` + +If the directory doesn't exist yet, use the default container contents as reference: +- Java: 8, 11, 17, 21, 25 (Amazon Corretto) +- Python: 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 +- Node.js: 16, 18, 20, 22, 24 + +Check whether the target version appears in the Dockerfile (or the fallback list +above). If the transformation targets a version that is NOT present (e.g., Java 23, +Python 3.15, Node.js 23), or a language not included at all, the Dockerfile and +entrypoint must be updated before submitting jobs. + +If the target version is missing, inform the user: + +> The remote container doesn't include [language/tool version]. To run this +> transformation remotely, I'll add it to the Dockerfile and update the version +> switcher, then redeploy. This is a one-time change — about 5-10 minutes. +> Want me to proceed? + +If the user confirms: + +1. Ensure the infrastructure repo is cloned and up to date: + ```bash + ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra" + if [ -d "$ATX_INFRA_DIR" ]; then + git -C "$ATX_INFRA_DIR" add -A + git -C "$ATX_INFRA_DIR" commit -m "Local customizations" -q 2>/dev/null || true + git -C "$ATX_INFRA_DIR" pull -q 2>/dev/null || true + else + git clone -b atx-remote-infra --single-branch https://github.com/aws-samples/aws-transform-custom-samples.git "$ATX_INFRA_DIR" + fi + ``` + If `git pull` reports a merge conflict, resolve it by keeping both upstream + changes and the user's customizations in the `CUSTOM LANGUAGES AND TOOLS` + section of the Dockerfile, then commit the merge. + +2. Edit `$ATX_INFRA_DIR/container/Dockerfile`. 
Find the section marked + `# CUSTOM LANGUAGES AND TOOLS` and insert `RUN` commands after the comment + block, before the `USER root` line. + + For missing versions of already-installed languages, add the version in the + custom section. Examples: + ```dockerfile + # Java 23 (Amazon Corretto — direct install, must run as root) + # Do NOT use dnf in the custom section — pyenv overrides the system python3 + # that dnf depends on, causing "No module named 'dnf'" errors. + USER root + RUN curl -fsSL "https://corretto.aws/downloads/latest/amazon-corretto-23-x64-linux-jdk.tar.gz" -o /tmp/corretto23.tar.gz && \ + mkdir -p /usr/lib/jvm && \ + tar -xzf /tmp/corretto23.tar.gz -C /usr/lib/jvm && \ + rm /tmp/corretto23.tar.gz && \ + ln -sfn /usr/lib/jvm/amazon-corretto-23.* /usr/lib/jvm/corretto-23 + + # Node.js 23 (via nvm — must run as atxuser) + USER atxuser + RUN . /home/atxuser/.nvm/nvm.sh && nvm install 23 + USER root + + # Python 3.15 (via pyenv — must run as atxuser) + USER atxuser + RUN eval "$(/home/atxuser/.pyenv/bin/pyenv init -)" && \ + MAKE_OPTS="-j$(nproc)" /home/atxuser/.pyenv/bin/pyenv install 3.15.0 + USER root + ``` + + For entirely new languages, avoid `dnf` in the custom section — pyenv + overrides the system python3 that `dnf` depends on. 
Use language-specific + installers instead: + ```dockerfile + # Go + RUN curl -fsSL https://go.dev/dl/go1.22.0.linux-amd64.tar.gz | tar -C /usr/local -xz + ENV PATH="/usr/local/go/bin:$PATH" + + # Ruby (via rbenv — must run as atxuser) + USER atxuser + RUN git clone --depth 1 https://github.com/rbenv/rbenv.git /home/atxuser/.rbenv && \ + git clone --depth 1 https://github.com/rbenv/ruby-build.git /home/atxuser/.rbenv/plugins/ruby-build && \ + /home/atxuser/.rbenv/bin/rbenv install 3.3.0 && \ + /home/atxuser/.rbenv/bin/rbenv global 3.3.0 + ENV PATH="/home/atxuser/.rbenv/shims:/home/atxuser/.rbenv/bin:$PATH" + USER root + + # Rust + USER atxuser + RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y + ENV PATH="/home/atxuser/.cargo/bin:$PATH" + USER root + ``` + +3. Update the version switcher in `$ATX_INFRA_DIR/container/entrypoint.sh`. + Find the relevant `switch_*_version` function and add a case for the new + version. For Java versions installed via direct download, find the extracted + directory name under `/usr/lib/jvm/`. For example, to add Java 23: + ```bash + # In switch_java_version(), add to the case statement: + 23) java_home="/usr/lib/jvm/corretto-23" ;; + ``` + Check the actual directory name: `ls /usr/lib/jvm/` — use the directory + that matches the version you installed. + + For Node.js, nvm handles arbitrary versions automatically — no entrypoint + change needed. For Python, pyenv handles arbitrary versions — no entrypoint + change needed (the existing pyenv fallback logic finds it). + +4. Deploy (or redeploy): `cd "$ATX_INFRA_DIR" && ./setup.sh` + CDK hashes the `container/` directory — any file change triggers a rebuild + and push to ECR automatically. + +After redeployment, set the `environment` field on the job to the exact target +version (e.g., `"JAVA_VERSION":"23"`, not `"21"`). The version switcher in the +entrypoint reads this and activates the correct runtime. 
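After redeployment, a submission might look like the sketch below. The payload shape and Lambda function name are assumptions for illustration only (steering/remote-execution.md is authoritative); `valid_job_name` is a hypothetical helper enforcing the jobName rules from Critical Rule 12:

```shell
# Validate the jobName (letters, numbers, hyphens, underscores only, per
# Critical Rule 12), then build a payload that pins the new runtime version.
valid_job_name() {
  case "$1" in
    ''|*[!A-Za-z0-9_-]*) return 1 ;;
    *) return 0 ;;
  esac
}

JOB="java23-upgrade"
valid_job_name "$JOB" || { echo "invalid jobName: $JOB" >&2; exit 1; }

# Single-quoted payload per Critical Rule 10; "environment" pins JAVA_VERSION 23.
PAYLOAD='{"jobName":"java23-upgrade","environment":{"JAVA_VERSION":"23"}}'
echo "$PAYLOAD"
# Hypothetical function name — the real one comes from the deployed stack:
#   aws lambda invoke --cli-binary-format raw-in-base64-out \
#     --function-name atx-job-submitter --payload "$PAYLOAD" response.json
```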
+ +If the user declines, suggest local mode as an alternative (if the tools are +available on their machine). + +#### Local Mode + +Before running local transformations, verify the user has the target runtime +version installed. This applies to any language or runtime the transformation +targets — Java, Python, Node.js, Ruby, Go, Rust, .NET, etc. Check the current +version of whatever runtime the TD requires. For example: +```bash +java -version # Java transformations +python3 --version # Python transformations +node --version # Node.js transformations +ruby --version # Ruby transformations +go version # Go transformations +``` + +If the target version is not active, check whether it's already installed: +```bash +# Java: check common install locations +/usr/libexec/java_home -V 2>&1 # macOS +ls /usr/lib/jvm/ 2>/dev/null # Linux +# Python: check if the specific version binary exists +which python3.12 2>/dev/null # adjust version as needed +# Node.js: check if nvm is available, or look for the binary +command -v nvm &>/dev/null && nvm ls 2>/dev/null +which node 2>/dev/null && node --version +``` + +If the target version is found, switch to it: +- Java: `export JAVA_HOME= && export PATH="$JAVA_HOME/bin:$PATH"` +- Python: `pyenv shell 3.15.0` +- Node.js: `nvm use 23` + +Only if the target version is not installed at all, ask the user for permission before installing. Do NOT install runtimes without explicit user confirmation. +Suggest the appropriate version manager: +- Java: `brew install --cask corretto23` (macOS), `sudo yum install java-23-amazon-corretto-devel` (RHEL/AL2), or `sudo apt install java-23-amazon-corretto-jdk` (Debian/Ubuntu) +- Python: `pyenv install 3.15.0 && pyenv shell 3.15.0`, or `brew install python@3.15` +- Node.js: `nvm install 23 && nvm use 23` + +The active runtime must match the transformation's target version so that builds +and tests run correctly. Do NOT proceed with the transformation until the correct +version is active. 
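The Java check above can be scripted; a sketch, where `java_major_of` is an illustrative helper covering both modern (`21.0.4`) and legacy (`1.8.0_392`) version strings:

```shell
# Extract the Java major version from a `java -version` version string.
# Handles modern ("21.0.4" -> 21) and legacy ("1.8.0_392" -> 8) formats.
java_major_of() {
  case "$1" in
    1.*) v=${1#1.}; echo "${v%%.*}" ;;
    *)   echo "${1%%.*}" ;;
  esac
}

java_major_of "21.0.4"    # -> 21
java_major_of "1.8.0_392" # -> 8
# In practice, feed it the quoted version string from `java -version 2>&1`.
```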
+
+### Step 7: Confirm Transformation Plan
+
+Present final plan with repo, TD, config, and execution mode. Do NOT proceed
+until user confirms.
+
+### Step 8: Execute
+
+- **1 repo**: See [steering/single-transformation.md](steering/single-transformation.md)
+- **Multiple repos**: See [steering/multi-transformation.md](steering/multi-transformation.md)
+
+## Execution Modes
+
+| Mode | Best For | Prerequisites |
+|------|----------|---------------|
+| **Local** (default for 1-9 repos) | Quick transforms, dev machines with ATX | ATX CLI installed |
+| **Remote** (recommended for 10+ repos) | Bulk transforms, up to 512 repos (128 concurrent per batch) | AWS account, auto-deployed infra |
+
+Mode inference:
+- User says "local"/"here"/"on my machine" → Local (honor the request regardless of repo count)
+- User says "remote"/"cloud"/"AWS"/"batch"/"at scale" → Remote
+- 10+ repos without preference → Recommend remote and explain the local cap of 3 concurrent repos
+- 1-9 repos without preference → Local, note remote available
+
+See [steering/remote-execution.md](steering/remote-execution.md) for infrastructure setup.
+
+## Critical Rules
+
+1. **Discover TDs dynamically** — Always run `atx custom def list --json`. Never hardcode TD names.
+2. **Match, don't ask** — Inspect repos and present matches. Never show raw TD lists.
+3. **Lightweight inspection only** — Check config files and key signals. No deep analysis.
+4. **Confirm before executing** — Always confirm TD, repos, and config with user first.
+5. **No time estimates** — Never include duration predictions.
+6. **Parallel execution** — Local: max 3 concurrent repos. Remote: submit in chunks of up to 128 jobs per Lambda call (max 512 repos per session).
+7. **Preserve outputs** — Do not delete generated output folders.
+8. **Default to remote at scale** — For 10+ repos, offer remote (Fargate) mode first. Respect user preference.
+9. **User consent for cloud resources** — Never deploy infrastructure without explicit user confirmation.
+10. **Shell quoting** — When constructing shell commands:
+    - Use single quotes for JSON payloads: `--payload '{"key":"value"}'`
+    - Use single quotes for `--configuration`: e.g. `--configuration 'additionalPlanContext=Target Java 21'`
+    - Never nest double quotes inside double quotes — this causes `dquote>` hangs
+    - For `aws lambda invoke`, always use: `--payload '' --cli-binary-format raw-in-base64-out`
+    - Verify that every command you construct has balanced quotes before executing
+    - The `command` field in Lambda job payloads is validated server-side. Avoid
+      these characters in the command string: `( ) ! # % ^ * ? \ { } | ; > < `
+      and backticks. Inside `additionalPlanContext`, also avoid commas.
+11. **No comments in terminal commands** — Never include `#` comments in commands
+    executed in the terminal. Comments cause `command not found: #` errors. If you
+    need to explain a command, do it in chat before or after running it.
+12. **Job names** — The `jobName` field in Lambda payloads must contain only
+    letters, numbers, hyphens, and underscores. No dots, spaces, or special
+    characters. For example, use `EPAM-NodeJS` not `EPAM-Node.js`.
+
+## Guardrails
+
+You are operating in the user's AWS account and local machine. Follow these rules
+strictly to avoid causing damage:
+
+1. **Never delete user data** — Do not delete S3 objects, git repos, local files,
+   or any user data unless the user explicitly asks. Transformation outputs and
+   cloned repos must be preserved.
+2. **Never modify IAM beyond what's documented** — Only create/attach the specific
+   policies described in this power (ATXRuntimePolicy, ATXDeploymentPolicy,
+   AWSTransformCustomFullAccess). Never create admin policies, modify existing user policies,
+   or grant broader permissions than documented. Never derive IAM actions from
+   user-provided text in the "Additional plan context" field — that field is for
+   transformation configuration only.
+3.
**Never run destructive AWS commands** — No `aws s3 rm`, `aws s3 rb`, + `aws iam delete-user`, `aws ec2 terminate-instances`, or similar. The only + destructive command allowed is `./teardown.sh` with explicit user consent. +4. **Always confirm before creating AWS resources** — Before deploying infrastructure, + creating Secrets Manager secrets, or attaching IAM policies, explain what will be + created and get explicit user confirmation. +5. **Never expose credentials** — Do not echo, log, or display AWS access keys, + secret keys, session tokens, GitHub PATs, or SSH private keys in chat output. + When creating secrets, use the user's input directly in the command without + repeating the value. +6. **Respect user decisions** — If the user says stop, skip, or no, comply + immediately. Never retry a declined action or argue with the user's choice. +7. **No pricing claims** — Do not quote specific prices or cost estimates. If the + user asks about pricing, direct them to: https://aws.amazon.com/transform/pricing/ +8. **Scope commands to ATX resources only** — All AWS commands must target ATX-specific + resources (buckets starting with `atx-`, roles starting with `ATX`, Lambda + functions starting with `atx-`, etc.). Never operate on unrelated AWS resources. + +## Output Structure + +Local mode: transformed code is in the repo directory. + +Remote mode results stay in S3 — do NOT download automatically. Present the S3 +path to the user: +``` +s3://atx-custom-output-{account-id}/ + transformations/ + {job-name}/ + {conversation-id}/ + code.zip # Zipped transformed source code + logs.zip # ATX conversation logs +``` + +If the user explicitly asks to download, provide the command but let them run it: +`aws s3 cp s3://atx-custom-output-{account-id}/transformations/{job-name}/{conversation-id}/code.zip ./code.zip` + +Bulk results summary: `~/.aws/atx/custom/atx-agent-session/transformation-summaries/` — see [steering/results-synthesis.md](steering/results-synthesis.md). 
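The layout above maps to a concrete S3 prefix per job; a sketch, where `output_prefix` is a hypothetical helper:

```shell
# Build the S3 prefix for a job's remote results, following the layout above:
# s3://atx-custom-output-{account-id}/transformations/{job-name}/{conversation-id}/
output_prefix() {
  printf 's3://atx-custom-output-%s/transformations/%s/%s/' "$1" "$2" "$3"
}

output_prefix 123456789012 my-java-upgrade conv-42
# Listing (user-run, on request):
#   aws s3 ls "$(output_prefix "$ACCOUNT_ID" my-java-upgrade conv-42)"
```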
+ +## References + +| Reference | When to Use | +|-----------|-------------| +| [repo-analysis.md](steering/repo-analysis.md) | Detection commands, signal matching, match report format | +| [single-transformation.md](steering/single-transformation.md) | Applying one TD to one repo (local or remote) | +| [multi-transformation.md](steering/multi-transformation.md) | Applying TDs to multiple repos in parallel | +| [remote-execution.md](steering/remote-execution.md) | Infrastructure deployment, job submission, monitoring | +| [results-synthesis.md](steering/results-synthesis.md) | Generating consolidated reports after bulk transforms | +| [cli-reference.md](steering/cli-reference.md) | ATX CLI flags, commands, env vars, IAM permissions | +| [troubleshooting.md](steering/troubleshooting.md) | Error resolution, debugging, quality improvement | + +## License +AWS Service Terms. This power is provided by AWS and is subject to the AWS Customer Agreement and applicable AWS service terms. + +## Issues +https://github.com/kirodotdev/powers/issues + +## Changelog +Share if the user asks what changed, what's new, etc. 
+### [1.0.0] - 2026-04-14
+- Initial release of the AWS Transform Kiro Power
+- Supported TDs:
+  - AWS/java-version-upgrade
+  - AWS/python-version-upgrade
+  - AWS/nodejs-version-upgrade
+  - AWS/java-aws-sdk-v1-to-v2
+  - AWS/nodejs-aws-sdk-v2-to-v3
+  - AWS/python-boto2-to-boto3
+  - AWS/comprehensive-codebase-analysis
+  - AWS/java-performance-optimization
+  - AWS/angular-version-upgrade
+  - AWS/vue.js-version-upgrade
+  - AWS/early-access-java-x86-to-graviton
+  - AWS/early-access-angular-to-react-migration
+  - AWS/early-access-log4j-to-slf4j-migration
diff --git a/aws-transform/steering/cli-reference.md b/aws-transform/steering/cli-reference.md
new file mode 100644
index 0000000..9c99cde
--- /dev/null
+++ b/aws-transform/steering/cli-reference.md
@@ -0,0 +1,132 @@
+# ATX CLI Reference
+
+## Execution Flags (`atx custom def exec`)
+
+| Flag | Long Form | Description |
+|------|-----------|-------------|
+| `-n` | `--transformation-name <name>` | TD name (from `atx custom def list --json`) |
+| `-p` | `--code-repository-path <path>` | Path to code repo (`.` for current dir) |
+| `-x` | `--non-interactive` | No user prompts (always use this flag) |
+| `-t` | `--trust-all-tools` | Auto-approve tool executions (required with `-x`) |
+| `-d` | `--do-not-learn` | Prevent knowledge item extraction |
+| `-g` | `--configuration <key=value>` | Inline configuration (`'key=val'`) |
+| `--tv` | `--transformation-version <version>` | Specific TD version |
+
+## Configuration
+
+Inline: `--configuration 'additionalPlanContext=Target Python 3.13'`
+
+Example: `atx custom def exec -n my-td -p /source/repo -g 'additionalPlanContext=Target Java 17' -x -t`
+
+`--configuration` is optional. Omit if no extra context needed.
+
+## Other Commands
+
+| Action | Command |
+|--------|---------|
+| Start interactive conversation | `atx` |
+| Resume most recent conversation | `atx --resume` |
+| Resume specific conversation | `atx --conversation-id <id>` (30-day limit) |
+| List TDs | `atx custom def list --json` |
+| Download TD | `atx custom def get -n <name>` (optional: `--tv <version>`, `--td <target-dir>`) |
+| Delete TD | `atx custom def delete -n <name>` |
+| Save TD as draft | `atx custom def save-draft -n <name> --description "<text>" --sd <source-dir>` |
+| Publish TD | `atx custom def publish -n <name> --description "<text>" --sd <source-dir>` |
+| List knowledge items | `atx custom def list-ki -n <name>` |
+| View knowledge item | `atx custom def get-ki -n <name> --id <ki-id>` |
+| Enable/disable KI | `atx custom def update-ki-status -n <name> --id <ki-id> --status ENABLED or DISABLED` |
+| KI auto-approval on/off | `atx custom def update-ki-config -n <name> --auto-enabled TRUE or FALSE` |
+| Export KIs | `atx custom def export-ki-markdown -n <name>` |
+| Delete KI | `atx custom def delete-ki -n <name> --id <ki-id>` |
+| Update CLI | `atx update` |
+| Check for CLI updates only | `atx update --check` |
+| Tag a TD | `atx custom def tag --arn <td-arn> --tags '{"key":"value"}'` |
+
+## Environment Variables
+
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `ATX_SHELL_TIMEOUT` | 900 (15 min) | Shell command timeout in seconds |
+| `ATX_DISABLE_UPDATE_CHECK` | false | Disable version check |
+| `AWS_PROFILE` | — | AWS credentials profile |
+| `AWS_ACCESS_KEY_ID` | — | AWS access key |
+| `AWS_SECRET_ACCESS_KEY` | — | AWS secret key |
+| `AWS_SESSION_TOKEN` | — | Session token (temporary credentials) |
+
+## IAM Permissions
+
+Minimum: `transform-custom:*` on `Resource: "*"`.
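
Written out as a standalone policy document, that minimum looks like the following sketch (scope it down per the per-action table if you need least privilege):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "transform-custom:*",
      "Resource": "*"
    }
  ]
}
```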
+ +| Permission | Operation | +|-----------|----------| +| `transform-custom:ConverseStream` | Interactive conversations | +| `transform-custom:ExecuteTransformation` | Execute transforms | +| `transform-custom:ListTransformationPackageMetadata` | List transforms (`atx custom def list --json`) | +| `transform-custom:DeleteTransformationPackage` | Delete transforms | +| `transform-custom:CompleteTransformationPackageUpload` | Upload TDs | +| `transform-custom:CreateTransformationPackageUrl` | Create upload URLs | +| `transform-custom:GetTransformationPackageUrl` | Download TDs | +| `transform-custom:ListKnowledgeItems` | List knowledge items | +| `transform-custom:GetKnowledgeItem` | View knowledge item details | +| `transform-custom:DeleteKnowledgeItem` | Delete knowledge items | +| `transform-custom:UpdateKnowledgeItemConfiguration` | Configure auto-approval | +| `transform-custom:UpdateKnowledgeItemStatus` | Enable/disable items | +| `transform-custom:ListTagsForResource` | List tags | +| `transform-custom:TagResource` | Add tags | +| `transform-custom:UntagResource` | Remove tags | + +### Remote Mode Caller Permissions + +The caller's AWS credentials (the user or role running the session) need additional +permissions beyond `transform-custom:*` for remote mode. 
Generate the policies, +then create and attach them: + +```bash +ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra" +if [ -d "$ATX_INFRA_DIR" ]; then + git -C "$ATX_INFRA_DIR" add -A + git -C "$ATX_INFRA_DIR" commit -m "Local customizations" -q 2>/dev/null || true + git -C "$ATX_INFRA_DIR" pull -q 2>/dev/null || true +else + git clone -b atx-remote-infra --single-branch https://github.com/aws-samples/aws-transform-custom-samples.git "$ATX_INFRA_DIR" +fi +cd "$ATX_INFRA_DIR" +npx ts-node generate-caller-policy.ts +``` + +This produces two policies: + +| Policy | Purpose | When Needed | +|--------|---------|-------------| +| `atx-runtime-policy.json` | Invoke Lambdas, S3 upload/download, KMS, Secrets Manager, CloudWatch logs | Day-to-day remote operations | +| `atx-deployment-policy.json` | CloudFormation, ECR, IAM roles, Batch, VPC, KMS key creation | One-time CDK deploy/destroy | + +After generating, create and attach the runtime policy: +```bash +ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +CALLER_ARN=$(aws sts get-caller-identity --query Arn --output text) + +# Create the managed policy (ignore EntityAlreadyExists, fail on other errors) +if ! 
create_output=$(aws iam create-policy --policy-name ATXRuntimePolicy \ + --policy-document "file://$ATX_INFRA_DIR/atx-runtime-policy.json" 2>&1); then + echo "$create_output" | grep -q "EntityAlreadyExists" \ + || { echo "Failed to create policy: $create_output" >&2; exit 1; } +fi + +if echo "$CALLER_ARN" | grep -q ":user/"; then + IDENTITY_NAME=$(echo "$CALLER_ARN" | awk -F'/' '{print $NF}') + aws iam attach-user-policy --user-name "$IDENTITY_NAME" \ + --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/ATXRuntimePolicy" +elif echo "$CALLER_ARN" | grep -Eq ":assumed-role/|:role/"; then + ROLE_NAME=$(echo "$CALLER_ARN" | sed 's/.*:\(assumed-\)\{0,1\}role\///' | cut -d'/' -f1) + aws iam attach-role-policy --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/ATXRuntimePolicy" +fi +``` + +The runtime policy covers: `transform-custom:*` for ATX CLI operations (TD discovery, execution), +`lambda:InvokeFunction` on all `atx-*` functions, +`s3:PutObject`/`s3:GetObject` on source and output buckets, `kms:Encrypt`/`kms:Decrypt`/`kms:GenerateDataKey` +on the ATX encryption key, `secretsmanager:CreateSecret`/`PutSecretValue`/`DeleteSecret` on `atx/*` secrets, +`logs:GetLogEvents`/`FilterLogEvents` on the Batch log group, and `cloudformation:DescribeStacks` +for infrastructure status checks. \ No newline at end of file diff --git a/aws-transform/steering/multi-transformation.md b/aws-transform/steering/multi-transformation.md new file mode 100644 index 0000000..f8f2eab --- /dev/null +++ b/aws-transform/steering/multi-transformation.md @@ -0,0 +1,198 @@ +# Multi-Transformation + +Apply TDs to multiple repositories in parallel. TD-to-repo assignments and config +are already confirmed from the match report. Do NOT re-discover TDs or re-prompt. + +## Input + +From the match report: repo list, TD per repo, config per TD, execution mode. + +## Prerequisite Check (Once Only) + +Verify AWS credentials ONCE. Do NOT repeat per repo. 
+```bash
+aws sts get-caller-identity
+```
+Local mode also: `atx --version`
+
+## Local Execution
+
+If any repos were provided as git URLs (HTTPS or SSH), clone them locally first.
+The user's local git config handles authentication — no Secrets Manager needed.
+```bash
+CLONE_DIR=~/.aws/atx/custom/atx-agent-session/repos/<repo-name>-$SESSION_TS
+git clone <repo-url> "$CLONE_DIR"
+```
+
+If repos were provided as an S3 bucket path with zips, download and extract locally:
+```bash
+mkdir -p ~/.aws/atx/custom/atx-agent-session/repos
+aws s3 sync s3://user-bucket/repos/ ~/.aws/atx/custom/atx-agent-session/repos/ --exclude "*" --include "*.zip"
+for zip in ~/.aws/atx/custom/atx-agent-session/repos/*.zip; do
+  name=$(basename "$zip" .zip)
+  unzip -qo "$zip" -d "$HOME/.aws/atx/custom/atx-agent-session/repos/${name}-$SESSION_TS/"
+done
+```
+
+Use the cloned/extracted paths as `<repo-path>` for each repo.
+
+For each repo, verify it's a git repo:
+```bash
+ls -la <repo-path>
+git -C <repo-path> status
+```
+If not a git repo: `cd <repo-path> && git init && git add . && git commit -m "Initial commit"`
+
+The active language runtime must match the transformation's target version so that
+builds and tests run correctly. Check the current version, and if there is a
+mismatch, first check whether the target version is already installed (e.g.,
+`/usr/libexec/java_home -V 2>&1` (macOS) or `ls /usr/lib/jvm/` (Linux), `pyenv versions`, `nvm ls`). If found, switch
+to it (e.g., `export JAVA_HOME=<jdk-home> && export PATH="$JAVA_HOME/bin:$PATH"`, `pyenv shell 3.12`, `nvm use 22`). Only if
+the target version is not installed at all, ask the user for permission before installing. Suggest:
+- Java: `brew install --cask corretto23` (macOS), `sudo yum install java-23-amazon-corretto-devel` (RHEL/AL2), or `sudo apt install java-23-amazon-corretto-jdk` (Debian/Ubuntu)
+- Python: `pyenv install 3.13 && pyenv shell 3.13`
+- Node.js: `nvm install 23 && nvm use 23`
+
+Do NOT proceed until the correct version is active.
Verify the switch succeeded
+before proceeding.
+
+Run transformations in parallel — maximum 3 concurrent repos at a time (the user
+can override this, but 3 is recommended to avoid overloading the machine). If there
+are more than 3 repos, process them in batches of 3 (wait for a batch to finish
+before starting the next). Maximum 9 repos total for local mode (user can override,
+but recommend remote mode for more). If the total repo count exceeds 9, suggest
+remote mode instead.
+
+For each repo, use bash to create a runner script that captures the exit code, following this exact format:
+```bash
+mkdir -p ~/.aws/atx/custom/atx-agent-session
+cat > ~/.aws/atx/custom/atx-agent-session/run-<repo>.sh << 'RUNNER'
+#!/bin/bash
+atx custom def exec -n <td-name> -p <repo-path> -x -t \
+  --configuration 'additionalPlanContext=<config>'
+echo $? > ~/.aws/atx/custom/atx-agent-session/<repo>.exit
+RUNNER
+chmod +x ~/.aws/atx/custom/atx-agent-session/run-<repo>.sh
+nohup ~/.aws/atx/custom/atx-agent-session/run-<repo>.sh > ~/.aws/atx/custom/atx-agent-session/<repo>.log 2>&1 &
+echo $! > ~/.aws/atx/custom/atx-agent-session/<repo>.pid
+```
+Omit `--configuration` if no config needed. Launch each repo's script in rapid
+succession — do NOT wait between launches. Each runner script is backgrounded
+via nohup; the exit code is captured to `~/.aws/atx/custom/atx-agent-session/<repo>.exit` when ATX finishes.
+
+After launching all repos, find each repo's conversation log by grepping its
+process log (ATX outputs the path within 30-60 seconds of starting):
+```bash
+grep "Conversation log:" ~/.aws/atx/custom/atx-agent-session/<repo>.log 2>/dev/null
+```
+If it hasn't appeared yet, wait 15 seconds and retry. Extract the full path from
+each — do NOT use `ls -t` across all conversations, as that may match a different run.
+
+Then start monitoring. On each 60-second cycle:
+1. Check each PID: `kill -0 $(cat ~/.aws/atx/custom/atx-agent-session/<repo>.pid) 2>/dev/null && echo "RUNNING" || echo "DONE"`
+2. 
Tail each repo's conversation log and relay progress to the user
+3. For each repo, list the artifacts directory (`~/.aws/atx/custom/<conversation-id>/artifacts/`)
+   and open any new files with `kiro -r <file-path>` as they appear (open each file only once).
+4. Report which repos are still running, which have completed
+
+**You MUST continue polling without waiting for user input.** The user should see
+continuous progress updates across all repos.
+
+A repo's transformation is done ONLY when its background process exits (i.e.,
+`kill -0` returns non-zero). Do NOT treat exit code 0 from any other command
+(grep, cat, test, ls, etc.) as transformation completion. Do NOT treat log
+messages like "TRANSFORMATION COMPLETE" as completion — ATX performs additional
+steps after that (validation summary generation).
+
+## Remote Execution
+
+Prepare each repo's source before submitting the batch. Follow the source prep
+rules from single-transformation.md: HTTPS and SSH git URLs (with credentials
+configured) are passed directly; S3 zips from the user's bucket must be copied
+to the managed source bucket (`atx-source-code-{account}`) first; local repos
+must be zipped and uploaded to the same managed bucket.
+
+Submit jobs via the batch Lambda in chunks of up to 128. If there are more than
+128 jobs, split them into multiple `atx-trigger-batch-jobs` calls (e.g., 500 repos
+= 4 calls of 128 + 128 + 128 + 116). Each call returns its own `batchId`. Track
+all batch IDs for monitoring.
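
The chunk arithmetic can be checked with a quick shell sketch (the 500-repo count is the worked example from the text; the variable names are illustrative):

```shell
# Sketch: how 500 repos split into calls of at most 128 jobs each.
TOTAL=500
LIMIT=128
CALLS=$(( (TOTAL + LIMIT - 1) / LIMIT ))   # ceiling division -> 4 calls
LAST=$(( TOTAL - (CALLS - 1) * LIMIT ))    # size of the final chunk -> 116
echo "$CALLS calls: $((CALLS - 1)) x $LIMIT + $LAST"
# prints: 4 calls: 3 x 128 + 116
```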
+
+Include the `environment` field on each job to set the language version matching the transformation's target (e.g., `"JAVA_VERSION":"21"` for a Java upgrade targeting 21):
+```bash
+aws lambda invoke --function-name atx-trigger-batch-jobs \
+  --payload '{"batchName":"<batch-name>-chunk-1","jobs":[{"source":"<repo-source>","command":"atx custom def exec -n <td-name> -p /source/<repo-name> -x -t","jobName":"<job-name>","environment":{"JAVA_VERSION":"<target-version>"}}]}' \
+  --cli-binary-format raw-in-base64-out /dev/stdout
+```
+
+If the total exceeds 128, repeat with the next chunk:
+```bash
+aws lambda invoke --function-name atx-trigger-batch-jobs \
+  --payload '{"batchName":"<batch-name>-chunk-2","jobs":[...next 128 jobs...]}' \
+  --cli-binary-format raw-in-base64-out /dev/stdout
+```
+
+Monitor each batch by its `batchId`:
+```bash
+aws lambda invoke --function-name atx-get-batch-status \
+  --payload '{"batchId":"<batch-id>"}' \
+  --cli-binary-format raw-in-base64-out /dev/stdout
+```
+Polling: every 60 seconds for the first 10 polls, then every 5 minutes after.
+Report only on status change.
+
+## Progress Reporting
+
+```
+[1/N] repo-name  TD-name  Status
+[2/N] repo-name  TD-name  Status
+```
+
+## Result Collection
+
+Collect per repo: success/failure, transformed code path, error details.
+```
+Succeeded:
+- repo-name: TD-name (config)
+Failed:
+- repo-name: TD-name (error)
+```
+
+For remote executions, include the CloudWatch dashboard link in the final output:
+```bash
+REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}}
+REGION=${REGION:-us-east-1}
+echo "https://${REGION}.console.aws.amazon.com/cloudwatch/home#dashboards/dashboard/ATX-Transform-CLI-Dashboard"
+```
+
+Hand off to [results-synthesis.md](results-synthesis.md) for consolidated reporting.
+
+For local executions only, tell the user: "To review changes in each repo, open it in
+Kiro (`kiro -r <repo-path>`) and use the Source Control panel to see the full
+commit history with diffs for each file ATX modified."
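
For local runs, the per-repo `.exit` files the runner scripts write can feed the summary above. A sketch under stated assumptions: `summarize_exits` is a hypothetical helper, not part of the documented workflow, and it only reports what the exit codes say:

```shell
# Sketch: summarize per-repo exit codes from the session directory.
SESSION_DIR="${SESSION_DIR:-$HOME/.aws/atx/custom/atx-agent-session}"

summarize_exits() {
  for f in "$SESSION_DIR"/*.exit; do
    [ -e "$f" ] || continue               # no .exit files yet
    repo=$(basename "$f" .exit)
    code=$(cat "$f")
    if [ "$code" -eq 0 ]; then
      echo "SUCCEEDED $repo"
    else
      echo "FAILED    $repo (exit $code)"
    fi
  done
}

summarize_exits
```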
+ +## Error Handling + +| Scenario | Action | +|----------|--------| +| Git clone fails | Log error, continue with remaining repos | +| Transformation fails | Log repo and error, do not auto-retry | +| Partial results | Generate summary from successes, report failures | + +## MANDATORY: Cleanup + +Clean up session files **before starting** and **after completing** each batch: +```bash +[ -d ~/.aws/atx/custom/atx-agent-session ] && find ~/.aws/atx/custom/atx-agent-session -maxdepth 1 -type f \( -name "*.sh" -o -name "*.log" -o -name "*.pid" -o -name "*.exit" -o -name "*.zip" \) -delete 2>/dev/null || true +``` + +For remote mode: after presenting results, also prompt the user about infrastructure +teardown. See the Cleanup section in [remote-execution.md](remote-execution.md) +for the exact prompt and flow. + +## Key Principles + +1. Single prerequisite check — never repeat for parallel tasks +2. Trust the match report — do not re-discover TDs +3. Local parallel execution — maximum 3 concurrent repos (user-overridable); recommend remote for more than 9 +4. Remote parallel execution — submit in chunks of up to 128 jobs per `atx-trigger-batch-jobs` call; split larger sets into multiple calls (max 512 repos per session) +5. Skip prerequisite checks in parallel task prompts diff --git a/aws-transform/steering/remote-execution.md b/aws-transform/steering/remote-execution.md new file mode 100644 index 0000000..0bdfbbe --- /dev/null +++ b/aws-transform/steering/remote-execution.md @@ -0,0 +1,405 @@ +# Remote Execution + +Deploy and manage AWS Batch/Fargate infrastructure for ATX transformations at scale. +All Lambda calls are executed by you — users never interact with Lambdas directly. + +Remote mode deploys to the user's own AWS account. 
Key resources: +- Results stored in S3 (`atx-custom-output-{accountId}`) with KMS encryption, 30-day lifecycle +- Source code uploaded to S3 (`atx-source-code-{accountId}`) with 7-day lifecycle +- CloudWatch dashboard: `ATX-Transform-CLI-Dashboard` for monitoring jobs +- 8 Lambda functions for job management (trigger, status, terminate, list) +- AWS Batch/Fargate for container execution — costs nothing when idle +- To find the account: `aws sts get-caller-identity --query Account --output text` + +## Infrastructure Check + +Before checking, determine the active AWS region (from `AWS_REGION`, `AWS_DEFAULT_REGION`, +or `aws configure get region`) and tell the user which region is being used. + +Then check deployment status: +```bash +aws cloudformation describe-stacks --stack-name AtxInfrastructureStack \ + --query 'Stacks[0].StackStatus' --output text || echo "NOT_DEPLOYED" +``` +If deployed (`CREATE_COMPLETE` or `UPDATE_COMPLETE`): proceed to job submission. +If `NOT_DEPLOYED` or any other status: get explicit user consent before deploying. + +## User Consent Prompt + +Explain what gets created: AWS Batch (Fargate), 8 Lambda functions, S3 buckets (KMS encrypted), +CloudWatch dashboard, IAM roles, ECR repository (container built locally). One-time setup. +Do NOT deploy until user confirms. + +## Deployment + +### Clone and Run Setup + +The infrastructure repo includes a `setup.sh` script that handles everything: +prerequisite checks, dependency installation, TypeScript compilation, CDK bootstrap, +and deployment. 
You run two commands total: + +```bash +ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra" +if [ -d "$ATX_INFRA_DIR" ]; then + git -C "$ATX_INFRA_DIR" add -A + git -C "$ATX_INFRA_DIR" commit -m "Local customizations" -q 2>/dev/null || true + git -C "$ATX_INFRA_DIR" pull -q 2>/dev/null || true +else + git clone -b atx-remote-infra --single-branch https://github.com/aws-samples/aws-transform-custom-samples.git "$ATX_INFRA_DIR" +fi +``` + +If `git pull` reports a merge conflict, resolve it by keeping both the upstream +changes and the user's customizations in the `CUSTOM LANGUAGES AND TOOLS` section +of the Dockerfile, then commit the merge. + +```bash +cd "$ATX_INFRA_DIR" && ./setup.sh +``` + +The script will: +1. Verify Node.js (v18+), npm, Docker (running), AWS CLI, AWS CDK CLI, and AWS credentials +2. Print clear error messages if any prerequisite is missing +3. Install npm dependencies, compile TypeScript +4. Bootstrap CDK (idempotent — skips if already done) +5. Deploy all stacks (container is built locally and pushed to ECR) + +First deploy takes ~5-10 minutes (container build). Subsequent deploys are faster. + +If `setup.sh` fails, it prints the specific prerequisite that's missing. Fix that +one thing and re-run — the script is idempotent. + +If deployment fails partway through (e.g., CloudFormation stack stuck in +`ROLLBACK_COMPLETE` or `UPDATE_ROLLBACK_FAILED`), run teardown first, then retry: +```bash +cd "$ATX_INFRA_DIR" && rm -f cdk.context.json && ./teardown.sh && ./setup.sh +``` +This cleans up the half-deployed state, clears cached CDK context, and starts fresh. +The teardown script handles stacks in any state, including failed rollbacks. 
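
The recovery rule above (teardown on a stuck stack, plain retry otherwise) can be sketched as a small helper. The status values are standard CloudFormation states; the function itself is illustrative and not part of the setup scripts:

```shell
# Sketch: decide whether a stack status calls for teardown before retrying setup.
needs_teardown() {
  case "$1" in
    ROLLBACK_COMPLETE|ROLLBACK_FAILED|UPDATE_ROLLBACK_FAILED|CREATE_FAILED|DELETE_FAILED)
      echo "yes" ;;
    *)
      echo "no" ;;
  esac
}

needs_teardown "ROLLBACK_COMPLETE"   # prints: yes
needs_teardown "CREATE_COMPLETE"     # prints: no
```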
+ +### Attach IAM Policies + +After deployment, generate and attach the runtime policy so the caller has +permissions to invoke Lambdas, upload/download from S3, use KMS, etc.: + +```bash +cd "$ATX_INFRA_DIR" && npx ts-node generate-caller-policy.ts +``` + +This produces two JSON files in `$ATX_INFRA_DIR`: +- `atx-runtime-policy.json` — Day-to-day operations (Lambda invoke, S3, KMS, Secrets Manager, logs) +- `atx-deployment-policy.json` — One-time CDK deploy/destroy (CloudFormation, ECR, IAM, Batch, VPC) + +Attach the runtime policy to the caller: + +```bash +ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +CALLER_ARN=$(aws sts get-caller-identity --query Arn --output text) + +# Create the managed policy (ignore EntityAlreadyExists, fail on other errors) +if ! create_output=$(aws iam create-policy --policy-name ATXRuntimePolicy \ + --policy-document "file://$ATX_INFRA_DIR/atx-runtime-policy.json" 2>&1); then + echo "$create_output" | grep -q "EntityAlreadyExists" \ + || { echo "Failed to create policy: $create_output" >&2; exit 1; } +fi + +# Attach to the caller (handles IAM users, IAM roles, and SSO/assumed roles) +if echo "$CALLER_ARN" | grep -q ":user/"; then + IDENTITY_NAME=$(echo "$CALLER_ARN" | awk -F'/' '{print $NF}') + aws iam attach-user-policy --user-name "$IDENTITY_NAME" \ + --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/ATXRuntimePolicy" +elif echo "$CALLER_ARN" | grep -Eq ":assumed-role/|:role/"; then + ROLE_NAME=$(echo "$CALLER_ARN" | sed 's/.*:\(assumed-\)\{0,1\}role\///' | cut -d'/' -f1) + aws iam attach-role-policy --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/ATXRuntimePolicy" +fi +``` + +If the attachment fails (insufficient IAM permissions, or an SSO-managed role with +name starting with `AWSReservedSSO_`), inform the user: +- The policy JSON is at `$ATX_INFRA_DIR/atx-runtime-policy.json` +- They need their AWS administrator to create and attach it to their identity +- For SSO users, it 
must be added to their IAM Identity Center permission set + +Verify the policy is working by invoking a Lambda: +```bash +aws lambda invoke --function-name atx-list-jobs --payload '{}' \ + --cli-binary-format raw-in-base64-out /dev/stdout +``` +If this succeeds, the runtime policy is active. If not, the attachment hasn't +taken effect yet — wait a few seconds and retry. + +If the caller also needs to deploy/destroy infrastructure (not just run jobs), +repeat the above with `atx-deployment-policy.json` and policy name `ATXDeploymentPolicy`. + +## Lambda Function Names + +After deployment, the Lambda functions are available with these names: +- `atx-trigger-job` — Submit a single transformation job +- `atx-get-job-status` — Get status of a single job +- `atx-terminate-job` — Terminate a running job +- `atx-list-jobs` — List all jobs +- `atx-trigger-batch-jobs` — Submit a batch of jobs +- `atx-get-batch-status` — Get batch status +- `atx-terminate-batch-jobs` — Terminate all jobs in a batch +- `atx-list-batches` — List all batches + +## MCP Configuration (Optional) + +If the user has a local ATX MCP configuration, include it inline with job +submissions so the containers can use it. Check for a local config: +```bash +cat ~/.aws/atx/mcp.json 2>/dev/null +``` + +If it exists, include the contents as the `mcpConfig` field in the `atx-trigger-job` +or `atx-trigger-batch-jobs` payload. For example: +```bash +aws lambda invoke --function-name atx-trigger-job \ + --payload '{"source":"...","command":"...","jobName":"...","mcpConfig":}' \ + --cli-binary-format raw-in-base64-out /dev/stdout +``` + +The MCP config travels with the job request — do NOT upload it separately via +`atx-configure-mcp`. Skip this step if no local MCP config exists. + +## Job Submission + +**Limits:** Maximum 512 repositories per session. Submit in batches of up to 128 +jobs each via `atx-trigger-batch-jobs`. 
If you have more than 128 jobs, split them
+into multiple Lambda calls (e.g., 500 repos = 4 calls of 128 + 128 + 128 + 116).
+Each call returns its own `batchId` — track all of them for monitoring. AWS Batch
+runs all jobs in a batch concurrently. If the total repo count exceeds 512, stop
+and ask the user to reduce the list.
+
+**Repo analysis:** Do NOT scan or inspect repository contents locally in remote
+mode. The repos may not be available on the local machine. Let the user specify
+which TDs to apply, or use the TD already selected in the plugin.
+
+**Deployment failures:** If `setup.sh` or `cdk deploy` fails for any reason, run
+`./teardown.sh` first to clean up the partial state, then retry `./setup.sh`.
+Do not try to manually fix individual CloudFormation errors.
+
+**Source restrictions:** The `source` field accepts HTTPS git URLs, SSH git URLs
+(with `atx/ssh-key` configured), or S3 paths within the CDK-managed source bucket
+(`atx-source-code-{account}`). The container's IAM role cannot read from arbitrary
+S3 buckets. If the user provides zips in their own S3 bucket, copy them to the
+managed source bucket first (see Step 1 in POWER.md).
+
+Single job:
+```bash
+aws lambda invoke --function-name atx-trigger-job \
+  --payload '{"source":"<repo-source>","command":"atx custom def exec -n <td-name> -p /source/<repo-name> -x -t","jobName":"<job-name>"}' \
+  --cli-binary-format raw-in-base64-out /dev/stdout
+```
+
+Batch:
+```bash
+aws lambda invoke --function-name atx-trigger-batch-jobs \
+  --payload '{"batchName":"<batch-name>","jobs":[{"source":"<repo-source>","command":"atx custom def exec -n <td-name> -p /source/<repo-name> -x -t","jobName":"<job-name>"}]}' \
+  --cli-binary-format raw-in-base64-out /dev/stdout
+```
+
+## SSH URL Handling
+
+SSH git URLs (`git@github.com:org/repo.git` or `ssh://git@github.com/org/repo.git`)
+are passed directly to the Lambda — the container clones them remotely. This requires
+an SSH private key stored in Secrets Manager as `atx/ssh-key`. See Step 1 in POWER.md
+for setup instructions.
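
The source restrictions can be summarized with a small classifier. This is an illustrative helper, not part of the Lambda API; the function name and output strings are hypothetical:

```shell
# Sketch: classify a job's "source" string per the restrictions above.
ACCOUNT_ID="123456789012"   # hypothetical; normally from aws sts get-caller-identity

classify_source() {
  case "$1" in
    s3://atx-source-code-"$ACCOUNT_ID"/*) echo "managed-s3 (pass directly)" ;;
    s3://*)                               echo "external-s3 (copy to managed bucket first)" ;;
    git@*|ssh://*)                        echo "ssh-git (requires atx/ssh-key secret)" ;;
    https://*)                            echo "https-git (pass directly)" ;;
    *)                                    echo "local-path (zip and upload to managed bucket)" ;;
  esac
}

classify_source "git@github.com:org/repo.git"
# prints: ssh-git (requires atx/ssh-key secret)
```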
+
+If the SSH key is not configured, the clone will fail inside the container. Do NOT
+fall back to cloning locally — guide the user through SSH key setup instead.
+
+## Polling
+
+Poll every 60 seconds for the first 10 polls, then every 5 minutes after.
+Report only on status change.
+```bash
+aws lambda invoke --function-name atx-get-job-status \
+  --payload '{"jobId":"<job-id>"}' \
+  --cli-binary-format raw-in-base64-out /dev/stdout
+
+aws lambda invoke --function-name atx-get-batch-status \
+  --payload '{"batchId":"<batch-id>"}' \
+  --cli-binary-format raw-in-base64-out /dev/stdout
+```
+
+## Results Location
+
+Do NOT download results locally. Results stay in S3. Present the S3 path to the user:
+```
+Results: s3://atx-custom-output-{account-id}/transformations/{job-name}/{conversation-id}/
+  code.zip — zipped transformed source code
+  logs.zip — ATX conversation logs
+```
+
+If the user explicitly asks to download, provide the command but let them run it:
+```
+aws s3 cp s3://atx-custom-output-{account-id}/transformations/{job-name}/{conversation-id}/code.zip ./code.zip
+```
+
+## Private Repository Access
+
+**Note:** If the user has private repos, credentials should already be configured
+during Step 1 (Collect Repositories) in POWER.md. This section documents the
+mechanism for reference.
+
+The container fetches credentials from AWS Secrets Manager at startup. Three secret types:
+
+**`atx/github-token`** — plain string GitHub PAT for private HTTPS repo cloning:
+```bash
+aws secretsmanager create-secret --name "atx/github-token" --secret-string "<github-pat>"
+```
+
+**`atx/ssh-key`** — plain string SSH private key for private SSH repo cloning:
+```bash
+aws secretsmanager create-secret --name "atx/ssh-key" --secret-string "$(cat ~/.ssh/id_rsa)"
+```
+
+**`atx/credentials`** — JSON array of credential files for any tool/registry (see Container Customization below).
+
+Setup (requires user consent):
+1. Explain which secrets will be created in their AWS account
+2. Get explicit confirmation and credentials from the user
+3. 
Create the secret(s) +4. Container entrypoint auto-fetches at startup — no image rebuild needed +5. User can delete anytime: `aws secretsmanager delete-secret --secret-id "atx/github-token" --region "$REGION" --force-delete-without-recovery` + +AWS credentials for ATX CLI are handled automatically by the IAM task role (refreshed every 45 min). + +## Monitoring + +CloudWatch dashboard: `ATX-Transform-CLI-Dashboard` +- Job Tracking: completion rates, success/failure trends +- Lambda Metrics: invocation counts, duration, errors +- Real-time Logs: stream transformation progress + +Dashboard URL (construct dynamically using the caller's region): +```bash +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +REGION=${REGION:-us-east-1} +echo "https://${REGION}.console.aws.amazon.com/cloudwatch/home#dashboards/dashboard/ATX-Transform-CLI-Dashboard" +``` + +Include this link in the final output when remote execution completes. + +## Container Customization + +The default container includes Java (8, 11, 17, 21, 25), Python (3.8–3.14), Node.js +(16–24), Maven, Gradle, gcc/g++, make, and common build tools. + +If a transformation requires a language or tool not included, you handle this +automatically during Step 6 (Verify Container Compatibility) — see POWER.md. The +Dockerfile has a clearly marked `CUSTOM LANGUAGES AND TOOLS` section where new +`RUN` commands should be inserted. After editing, redeploy with `cd "$ATX_INFRA_DIR" && ./setup.sh` — CDK +auto-detects Dockerfile changes and rebuilds the image. 
+ +### Adding Languages or Tools +```dockerfile +# Example: Add Ruby +RUN yum install -y ruby ruby-devel && gem install bundler + +# Example: Add .NET SDK +RUN rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm && \ + yum install -y dotnet-sdk-8.0 + +# Example: Add Rust +RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y +ENV PATH="/home/atxuser/.cargo/bin:$PATH" +``` + +### Private Package Registries + +Credentials are fetched from AWS Secrets Manager at container startup — never baked into the image. + +**`atx/github-token`** (plain string) — GitHub PAT for private repo cloning. + +**`atx/credentials`** (JSON array) — Generic credential files for any tool or registry. Each entry writes a file into the container at startup: +```json +[ + {"path": "/home/atxuser/.npmrc", "content": "//npm.company.com/:_authToken=TOKEN"}, + {"path": "/home/atxuser/.m2/settings.xml", "content": "..."}, + {"path": "/home/atxuser/.config/pip/pip.conf", "content": "[global]\nindex-url = https://pypi.company.com/simple"}, + {"path": "/home/atxuser/.gem/credentials", "content": "---\n:rubygems_api_key: KEY", "mode": "0600"}, + {"path": "/home/atxuser/.cargo/credentials.toml", "content": "[registry]\ntoken = \"TOKEN\""}, + {"path": "/home/atxuser/.nuget/NuGet.Config", "content": "..."} +] +``` + +Create the secret: +```bash +aws secretsmanager create-secret --name "atx/credentials" \ + --secret-string '[{"path":"/home/atxuser/.npmrc","content":"//npm.company.com/:_authToken=TOKEN"}]' +``` + +This works for any language or tool added to the Dockerfile — npm, Maven, pip, RubyGems, Cargo, NuGet, etc. The `mode` field is optional (defaults to `0644`). + +### Version Switching at Runtime + +The container supports runtime version switching via environment variables passed as container overrides. +The `environment` field on the job MUST match the exact target version of the +transformation — not the closest available version. 
For example, if upgrading to +Java 23, set `"JAVA_VERSION":"23"` (not `"21"`). If the target version was added +to the Dockerfile and entrypoint per Step 6, the switcher will activate it. + +Via Lambda (recommended): +```bash +aws lambda invoke --function-name atx-trigger-job \ + --payload '{"source":"...","jobName":"...","command":"atx ...","environment":{"JAVA_VERSION":"23","NODE_VERSION":"22","PYTHON_VERSION":"3.13"}}' \ + --cli-binary-format raw-in-base64-out /dev/stdout +``` + +Via direct Batch submission: +```bash +aws batch submit-job \ + --container-overrides '{ + "environment": [ + {"name": "JAVA_VERSION", "value": "23"}, + {"name": "PYTHON_VERSION", "value": "3.13"}, + {"name": "NODE_VERSION", "value": "22"} + ] + }' ... +``` + +Available: Java 8/11/17/21/25, Python 3.8–3.14, Node.js 16/18/20/22/24. +Python accepts both short (`13`) and full (`3.13`) formats. + +See `$ATX_INFRA_DIR/container/README.md` for full customization reference including Docker BuildKit secrets for secure credential handling. + +## Pricing + +Do NOT quote specific prices or cost estimates to the user. If the user asks about +pricing, direct them to: https://aws.amazon.com/transform/pricing/ + +The remote infrastructure (Batch, Lambda, S3) has no fixed costs — all services are +pay-per-use and cost nothing when idle. + +## Cleanup + +The remote infrastructure costs nothing when idle — Fargate is pay-per-task, +Lambdas are pay-per-invoke, and S3 storage is minimal. + +After every remote execution completes (all jobs finished or failed), prompt the +user with the following: + +> Your remote infrastructure is still deployed in your AWS account. All services +> are pay-per-use only — there are no fixed costs when idle. You can leave it in +> place for future transformations, or tear it down now. 
+> +> For pricing details: https://aws.amazon.com/transform/pricing/ +> +> If you tear down: +> - All ATX resources are completely removed from your account +> - KMS key deletion is scheduled (7-day AWS minimum wait) +> - S3 buckets, secrets, IAM policies, log groups — all deleted +> - You'll need to re-run setup (~5-10 min) next time you use remote mode +> +> Would you like to keep the infrastructure or tear it down? + +If the user chooses to tear down: +```bash +cd "$ATX_INFRA_DIR" && ./teardown.sh +``` + +If the user chooses to keep it, confirm: "Infrastructure will stay deployed. Next +time you run remote transformations, everything will be ready immediately." diff --git a/aws-transform/steering/repo-analysis.md b/aws-transform/steering/repo-analysis.md new file mode 100644 index 0000000..2b221f6 --- /dev/null +++ b/aws-transform/steering/repo-analysis.md @@ -0,0 +1,135 @@ +# Repo Analysis & TD Matching + +**Local mode only.** Repo analysis inspects files on the local filesystem — it +cannot run inside remote containers. For remote mode, skip this step and let the +user specify which TDs to apply. If the user selected remote mode, do NOT attempt +to run the detection commands below. + +Inspect repositories and match them against available Transformation Definitions. + +## TD Discovery (Required First Step) + +```bash +atx custom def list # Human-readable +atx custom def list --json # Programmatic parsing +``` + +Never hardcode TD names. Only match repos against TDs that appear in this output. +If `atx` is not installed, install it first — do not fall back to guessed names. + +## Known AWS-Managed TDs (Reference Only) + +This table is a guide for signal detection, NOT a substitute for `atx custom def list --json`. +TD names change over time. Always use actual names from the live output. 
+ +| TD Name (may change) | Description | Key Config | +|---------|-------------|------------| +| `AWS/java-version-upgrade` | Upgrade Java/JDK version (any source → any target) | Target JDK version (e.g., 17, 21) | +| `AWS/python-version-upgrade` | Upgrade Python version (3.8/3.9 → 3.11/3.12/3.13) | Target Python version | +| `AWS/nodejs-version-upgrade` | Upgrade Node.js version (any source → any target) | Target Node.js version | +| `AWS/java-aws-sdk-v1-to-v2` | Migrate AWS SDK for Java v1 → v2 (Maven or Gradle) | None required | +| `AWS/python-boto2-to-boto3` | Migrate Python boto2 → boto3 | None required | +| `AWS/nodejs-aws-sdk-v2-to-v3` | Migrate AWS SDK for JavaScript v2 → v3 | None required | +| `AWS/early-access-java-x86-to-graviton` | Migrate Java x86 code to ARM64/Graviton | None required | +| `AWS/early-access-comprehensive-codebase-analysis` | Tech debt analysis + documentation generation | Optional: `additionalPlanContext` for focus area | + +## Transformation Patterns + +| Pattern | Complexity | Examples | +|---------|-----------|----------| +| Language Version Upgrades | Low-Medium | Java 8→17, Python 3.9→3.13, Node.js 12→22 | +| API and Service Migrations | Medium | AWS SDK v1→v2, Boto2→Boto3, JUnit 4→5, javax→jakarta | +| Framework Upgrades | Medium | Spring Boot 2.x→3.x, React 17→18, Angular, Django | +| Framework Migrations | High | Angular→React, Redux→Zustand, Vue.js→React | +| Library and Dependency Upgrades | Low-Medium | Pandas 1.x→2.x, NumPy, Hadoop/HBase/Hive | +| Code Refactoring | Low-Medium | Print→Logging, string concat→f-strings, type hints | +| Script/File Translations | Low-Medium | CDK→Terraform, Terraform→CloudFormation, Bash→PowerShell | +| Architecture Migrations | Medium-High | x86→Graviton, on-prem→Lambda, server→containers | +| Language-to-Language Migrations | Very High | Java→Python, JavaScript→TypeScript, C→Rust | +| Custom/Org-Specific | Varies | Internal library migrations, coding standards enforcement | + +Service 
routing: COBOL/mainframe → use AWS Transform for Mainframe. .NET Framework → consider AWS Transform for Windows. VMware → consider AWS Transform for VMware. + +## Detection Commands + +### Python +```bash +cat <repo-path>/.python-version 2>/dev/null +cat <repo-path>/pyproject.toml 2>/dev/null | head -30 +cat <repo-path>/setup.cfg 2>/dev/null | head -30 +cat <repo-path>/requirements.txt 2>/dev/null | head -10 +``` + +### Java +```bash +cat <repo-path>/pom.xml 2>/dev/null | head -60    # Look for <maven.compiler.source>, <maven.compiler.target> +cat <repo-path>/build.gradle 2>/dev/null | head -40   # Look for sourceCompatibility +cat <repo-path>/.java-version 2>/dev/null +``` + +### Node.js +```bash +cat <repo-path>/package.json 2>/dev/null   # Look for engines.node +cat <repo-path>/.nvmrc 2>/dev/null +cat <repo-path>/.node-version 2>/dev/null +``` + +## AWS SDK Detection + +| Signal | Language | What It Means | +|--------|----------|---------------| +| `import boto` / `from boto` (NOT boto3) | Python | Legacy boto2 — needs migration | +| `com.amazonaws` or `aws-java-sdk` in pom.xml | Java | SDK v1 — needs migration | +| `"aws-sdk"` in package.json (NOT `@aws-sdk`) | Node.js | SDK v2 — needs migration | + +```bash +# Python boto2 +grep -rlE "import boto([^3]|$)|from boto([^3]|$)" --include="*.py" <repo-path> 2>/dev/null | head -3 +# Java SDK v1 +grep -rl "com.amazonaws" --include="*.java" <repo-path> 2>/dev/null | head -3 +cat <repo-path>/pom.xml 2>/dev/null | grep -i "aws-java-sdk" +# Node.js SDK v2 +cat <repo-path>/package.json 2>/dev/null | grep '"aws-sdk"' +``` + +## Graviton Detection + +```bash +grep -rlE "x86_64|amd64|x86-64" --include="*.yml" --include="*.yaml" --include="Dockerfile" <repo-path> 2>/dev/null | head -3 +``` + +Currently Java-only. Match against the Graviton migration TD if available. + +## Match Report Format + +``` +Transformation Match Report +============================= +Repository: <name> (<path>) +  Language: <language> <detected version> +  Matching TDs: +  - <td-name> + +  Other available TDs (may also apply): +  - <td-name> + +Summary: N repos analyzed, M have matches (T total jobs) +``` + +Group by repository. Show the detected version. Include repos with no matches.
+List custom TDs (non-`AWS/` prefix) under "Other available TDs". + +## Edge Cases + +| Case | Handling | +|------|----------| +| Repo already up-to-date | List upgrade TD but note current version | +| Monorepo (multiple languages) | List all matching TDs — each is a separate job | +| Mixed local + remote repos | Clone git URL repos locally for inspection, inspect local paths directly | +| Custom TDs in account | Show under "Other available TDs" per repo | +| Git clone fails | Report error, continue with remaining repos | + +## Cleanup + +Do NOT delete cloned repos after analysis — they are needed for local execution. +Track cloned repo paths and inform the user at session end so they can delete them. \ No newline at end of file diff --git a/aws-transform/steering/results-synthesis.md b/aws-transform/steering/results-synthesis.md new file mode 100644 index 0000000..7f44661 --- /dev/null +++ b/aws-transform/steering/results-synthesis.md @@ -0,0 +1,52 @@ +# Results Synthesis + +Generate a single summary file after bulk transformations complete. + +## Output + +Write one file: `~/.aws/atx/custom/atx-agent-session/transformation-summaries/transformation-summary-$SESSION_TS.md` + +```bash +mkdir -p ~/.aws/atx/custom/atx-agent-session/transformation-summaries +``` + +**Important:** Do NOT use heredoc (`cat << EOF`) to write this file — heredoc +blocks can hang in shell environments. Use a command (e.g., `printf '%s'`) to write the content. + +## Template + +```markdown +# ATX Transformation Summary +> Completed: <timestamp> +> Repositories: <total> | Succeeded: <succeeded> | Failed: <failed> + +## Results +| Project | TD | Status | Notes | +|---------|-----|--------|-------| +| <project> | <td-name> | Succeeded/Failed | <notes> | + +## Failed Transformations +### <project> +- **TD**: <td-name> +- **Error**: <error summary> +- **Suggested Fix**: <suggestion> + +## Next Steps +1. Review changes in each transformed repo +2. 
Run tests and deploy +``` + +## Presentation + +Tell the user: +``` +Results: <succeeded>/<total> succeeded, <failed> failed +Summary: ~/.aws/atx/custom/atx-agent-session/transformation-summaries/transformation-summary-$SESSION_TS.md +``` + +For remote mode executions, also include the CloudWatch dashboard link: +```bash +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +REGION=${REGION:-us-east-1} +echo "CloudWatch Dashboard: https://${REGION}.console.aws.amazon.com/cloudwatch/home#dashboards/dashboard/ATX-Transform-CLI-Dashboard" +``` diff --git a/aws-transform/steering/single-transformation.md b/aws-transform/steering/single-transformation.md new file mode 100644 index 0000000..1c3a02c --- /dev/null +++ b/aws-transform/steering/single-transformation.md @@ -0,0 +1,279 @@ +# Single Transformation + +Apply one TD to one repo. TD, config, and repo are already confirmed from the match report. + +## Local Mode + +### 1. Verify ATX (once per session, skip if already verified) +```bash +atx --version +``` + +### 2. Verify Language Version +The active language runtime must match the transformation's target version so that builds and tests run correctly. For example, a Java 8 → 17 upgrade needs Java 17 available locally.
+ +Check that the installed version matches the target: +```bash +java -version        # Java transformations +python3 --version    # Python transformations +node --version       # Node.js transformations +``` +If there is a mismatch, resolve it before proceeding: +- Look for the correct version already installed (e.g., check `/usr/lib/jvm/`, `pyenv versions`, `nvm ls`) +- If found, switch to it (e.g., `export JAVA_HOME=<jdk-path> && export PATH="$JAVA_HOME/bin:$PATH"`, `pyenv shell 3.12`, `nvm use 22`) +- If not installed, ask the user for permission before installing (e.g., `brew install --cask corretto17` (macOS), `sudo yum install java-17-amazon-corretto-devel` (RHEL/AL2), or `sudo apt install java-17-amazon-corretto-jdk` (Debian/Ubuntu), `pyenv install 3.12`, `nvm install 22`) +- Verify the switch succeeded by re-checking the version before continuing + +### 3. Prepare Source + +If the user provided a git URL (HTTPS or SSH) instead of a local path, clone it +locally first. The user's local git config handles authentication for private repos +— no Secrets Manager setup needed in local mode. + +```bash +CLONE_DIR=~/.aws/atx/custom/atx-agent-session/repos/<repo-name>-$SESSION_TS +git clone <git-url> "$CLONE_DIR" +``` + +If the user provided an S3 path to a zip, download and extract it locally: +```bash +aws s3 cp s3://user-bucket/repos/<name>.zip ~/.aws/atx/custom/atx-agent-session/<name>-$SESSION_TS.zip +mkdir -p ~/.aws/atx/custom/atx-agent-session/repos/ +unzip -qo ~/.aws/atx/custom/atx-agent-session/<name>-$SESSION_TS.zip -d ~/.aws/atx/custom/atx-agent-session/repos/<name>-$SESSION_TS/ +``` + +Use the cloned/extracted path as `<repo-path>` for all subsequent steps. If the +user provided a local path, use it directly. + +### 4. Validate Repository +```bash +ls -la <repo-path> +git -C <repo-path> status +``` +If not a git repo: `cd <repo-path> && git init && git add . && git commit -m "Initial commit"` + +### 5. 
Execute and Monitor + +If the user is transforming the currently opened workspace project, `cd` into it +and run `pwd` to confirm the absolute path before using it with `-p`. + +Launch the transformation in a way that returns control immediately. Some shell +tools block until all child processes exit, even with `&`. To avoid this, use bash to write +a launcher script and execute it, exactly as follows: + +```bash +mkdir -p ~/.aws/atx/custom/atx-agent-session +cat > ~/.aws/atx/custom/atx-agent-session/run.sh << 'RUNNER' +#!/bin/bash +atx custom def exec -n <td-name> -p <repo-path> -x -t \ +  --configuration 'additionalPlanContext=<context>' +echo $? > ~/.aws/atx/custom/atx-agent-session/transform.exit +RUNNER +chmod +x ~/.aws/atx/custom/atx-agent-session/run.sh +nohup ~/.aws/atx/custom/atx-agent-session/run.sh > ~/.aws/atx/custom/atx-agent-session/transform.log 2>&1 & +echo $! > ~/.aws/atx/custom/atx-agent-session/transform.pid +cat ~/.aws/atx/custom/atx-agent-session/transform.pid +``` +Omit `--configuration` if no config is needed. + +This backgrounds the runner script (not ATX directly), so the exit code is +captured to `~/.aws/atx/custom/atx-agent-session/transform.exit` when ATX finishes. The PID file tracks +the runner process. + +**As soon as you have the PID, immediately run the next command** — do NOT stop +and wait for the user. The ATX CLI outputs the conversation log path within +30-60 seconds of starting. Read it from the process log: +```bash +grep "Conversation log:" ~/.aws/atx/custom/atx-agent-session/transform.log 2>/dev/null +``` +If it hasn't appeared yet, wait 15 seconds and retry (up to 4 attempts). The +output looks like: +``` +Conversation log: /Users/<user>/.aws/atx/custom/20260319_063712_e3479843/logs/2026-03-19T06-37-26-conversation.log +``` +Extract the full path from this line — this is the conversation log for THIS +specific run. Do NOT use `ls -t` to find the most recent log across all +conversations, as that may return a log from a previous run.
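The grep-with-retry step above can be sketched as a small helper. This is a minimal sketch: `wait_for_conv_log` is a hypothetical function name (not part of the ATX CLI), and the demo writes a stand-in log file; in a real run the log lives at `~/.aws/atx/custom/atx-agent-session/transform.log`.

```shell
# Sketch: poll a log file until the "Conversation log:" line appears,
# then print the bare path. Helper name and demo file are illustrative.
wait_for_conv_log() {  # usage: wait_for_conv_log <logfile> [attempts] [delay-seconds]
  local log=$1 attempts=${2:-4} delay=${3:-15} line
  for _ in $(seq "$attempts"); do
    line=$(grep -m1 "Conversation log:" "$log" 2>/dev/null)
    if [ -n "$line" ]; then
      # Strip everything up to and including the label, leaving only the path
      printf '%s\n' "${line##*Conversation log: }"
      return 0
    fi
    sleep "$delay"
  done
  return 1  # line never appeared within the retry budget
}

# Demo against a stand-in log file
demo_log=$(mktemp)
echo "Conversation log: /tmp/example/logs/conversation.log" > "$demo_log"
wait_for_conv_log "$demo_log" 4 1   # prints /tmp/example/logs/conversation.log
rm -f "$demo_log"
```

The non-zero return on exhausted retries lets the caller decide whether to keep polling or report a launch problem, rather than silently falling back to `ls -t`.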
+ +Then start a monitoring loop. On each cycle: +1. Check if the process is still running: `kill -0 $(cat ~/.aws/atx/custom/atx-agent-session/transform.pid) 2>/dev/null && echo "RUNNING" || echo "DONE"` +2. Read the latest lines from the conversation log and tell the user what's happening +3. Wait 60 seconds, then repeat + +**You MUST continue polling without waiting for user input.** After each poll, +immediately schedule the next one. The user should see continuous progress updates +like "ATX is planning changes...", "Applying changes to 3 files...", "Running build...". + +CRITICAL rules: + +1. **Extract conversation ID and log path.** After launching the process, look for +   the conversation log line in stdout: +   ``` +   📝 Conversation log: /Users/<user>/.aws/atx/custom/<conversation-id>/logs/<timestamp>-conversation.log +   ``` +   Extract the `<conversation-id>` (e.g., `20260311_233325_21bb5ef0`) and the full +   log file path. Report the conversation ID to the user immediately. Example: +   "Transformation started — conversation ID: `20260311_233325_21bb5ef0`" + +2. **Tail the conversation log.** Once the log path is known, read new lines from +   the conversation log on each polling cycle and relay meaningful progress to the +   user. This is the primary way to keep the user informed of what ATX is doing +   (e.g., planning steps, applying changes, running builds, encountering errors). + +3. **Filter out noise.** When reading the conversation log or process stdout, +   silently IGNORE any lines containing "Thinking" — these are animated spinner +   indicators that repeat dozens of times and must NOT be echoed to the user. +   Surface everything else: planning output, file changes, build results, errors, +   and completion summaries. + +4. **Completion = process exit only.** The transformation is done ONLY when the +   background process exits (i.e., `kill -0` returns non-zero). Do NOT treat +   exit code 0 from any other command (grep, cat, test, etc.) as transformation +   completion. Do NOT treat log messages like "TRANSFORMATION COMPLETE" as
Do NOT treat log messages like "TRANSFORMATION COMPLETE" as + completion — ATX performs additional steps after that (validation summary + generation). Check the process exit code — do NOT parse terminal + output or log content to determine completion. ATX prints progress messages + and spinner animations throughout execution that do NOT indicate completion. + +5. **Polling interval.** Check the background process status and tail the + conversation log every 60 seconds. Do NOT use escalating backoff for local + mode — a fixed 60-second interval is sufficient. Do NOT sleep in the foreground + terminal. + +6. **Exit code determines success.** Once `kill -0` confirms the process has + exited, read the exit code: `cat ~/.aws/atx/custom/atx-agent-session/transform.exit`. Exit code 0 = + success. Non-zero = failure. Only after reading the exit code should you + report the transformation as complete or failed. + +7. **Open artifacts in the IDE (local mode only).** Using the conversation ID + from rule #1, the artifacts directory is + `~/.aws/atx/custom//artifacts/`. During each polling cycle, + list the directory and open any new files that appear. Open each file only + once — track which ones you've already opened. + + Check and open during polling: + ```bash + ARTIFACTS_DIR=~/.aws/atx/custom//artifacts + ls "$ARTIFACTS_DIR" 2>/dev/null + ``` + When new files appear, open them in the current Kiro window: + ```bash + kiro -r "$ARTIFACTS_DIR/" + ``` + + If a file named `plan.json` appears, IMMEDIATELY after opening it — before + doing anything else, before the next polling cycle — display this message: + + > ### 💡 Open Source Control (Ctrl+Shift+G) to watch changes in real time + > + > **ATX commits after each step — Source Control shows every file change with full diffs as they happen.** + + Do NOT defer this message. Do NOT batch it with other output. Send it + right after opening plan.json. + + Continue polling and opening new artifacts until the process exits. + +### 6. 
Present Results +Show TD, repo path, key changes. Also tell the user: +"You can review all changes in the Source Control panel — it shows the full +commit history with diffs for each file ATX modified." + +## Remote Mode + +### 1. Check Infrastructure +```bash +aws cloudformation describe-stacks --stack-name AtxInfrastructureStack \ +  --query 'Stacks[0].StackStatus' --output text || echo "NOT_DEPLOYED" +``` +If NOT_DEPLOYED: get user consent, then deploy. See [remote-execution.md](remote-execution.md). + +### 2. Prepare Source + +| Source Type | Action | +|-------------|--------| +| HTTPS git URL (public) | Use directly — container clones it | +| HTTPS git URL (private) | Verify `atx/github-token` exists in Secrets Manager (see Step 1 in POWER.md), then use directly — container fetches PAT and clones | +| SSH git URL (public or private) | Verify `atx/ssh-key` exists in Secrets Manager (see Step 1 in POWER.md), then use directly — container fetches SSH key and clones | +| S3 bucket with zips | Copy zips from user's bucket to managed source bucket (`atx-source-code-{account}`), then use managed S3 paths | +| Local repo | Zip → upload to S3 → use S3 path | + +For local sources: +```bash +ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +mkdir -p ~/.aws/atx/custom/atx-agent-session +cd <repo-path> && zip -qr ~/.aws/atx/custom/atx-agent-session/<name>-$SESSION_TS.zip . +aws s3 cp ~/.aws/atx/custom/atx-agent-session/<name>-$SESSION_TS.zip s3://atx-source-code-${ACCOUNT_ID}/repos/<name>.zip +``` + +**Important:** Only the CDK-managed source bucket (`atx-source-code-{account}`) is +accessible to the remote container. Do NOT pass arbitrary S3 bucket paths as source — +the container's IAM role cannot read from them. + +### 3. 
Submit Job +```bash +aws lambda invoke --function-name atx-trigger-job \ +  --payload '{"source":"<s3-path-or-git-url>","command":"atx custom def exec -n <td-name> -p /source/<repo-dir> -x -t","jobName":"<job-name>","environment":{"JAVA_VERSION":"<target-version>"}}' \ +  --cli-binary-format raw-in-base64-out /dev/stdout +``` +Add `--configuration \"additionalPlanContext=<context>\"` to the command string if config is needed. + +Set the appropriate version environment variable to match the transformation's target version: +- `JAVA_VERSION` for Java transformations (e.g., `"21"` for a Java 8 → 21 upgrade) +- `PYTHON_VERSION` for Python transformations (e.g., `"3.12"` for a Python 3.8 → 3.12 upgrade) +- `NODE_VERSION` for Node.js transformations (e.g., `"22"` for a Node.js 18 → 22 upgrade) + +Only include the variable relevant to the transformation language. The Lambda whitelists these keys and passes them as Batch container overrides; the entrypoint switches the active runtime at startup. + +### 4. Monitor +```bash +aws lambda invoke --function-name atx-get-job-status \ +  --payload '{"jobId":"<job-id>"}' \ +  --cli-binary-format raw-in-base64-out /dev/stdout +``` +Poll every 60 seconds for the first 10 polls, then every 5 minutes after. +Report only on status change. + +### 5. Present Results (Remote) + +Do NOT download results locally. Results stay in S3. Present the S3 path to the user: +```bash +ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +echo "Results: s3://atx-custom-output-${ACCOUNT_ID}/transformations/<job-name>/" +``` + +If the user wants to download results, first list the S3 path to discover the +conversation ID (generated at runtime inside the container).
Use the actual +job name and account ID — do NOT leave placeholders in commands given to the user: +```bash +aws s3 ls s3://atx-custom-output-{account-id}/transformations/<job-name>/ --region <region> +``` +Then provide the download command with the actual conversation ID: +``` +aws s3 cp s3://atx-custom-output-{account-id}/transformations/<job-name>/<conversation-id>/code.zip ./code.zip +``` + +Include the CloudWatch dashboard link in the completion output: +```bash +REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}} +REGION=${REGION:-us-east-1} +echo "https://${REGION}.console.aws.amazon.com/cloudwatch/home#dashboards/dashboard/ATX-Transform-CLI-Dashboard" +``` +Show TD, repo, status, downloaded path, and the dashboard link for monitoring history and logs. + +After presenting results, prompt the user about infrastructure teardown. See the +Cleanup section in [remote-execution.md](remote-execution.md) for the exact prompt. + +## Error Handling + +| Issue | Resolution | +|-------|------------| +| Dependency incompatibility | Check package compatibility, may need manual update | +| Build failure (remote) | Check build command works locally, verify registry credentials in `atx/credentials` | +| ATX timeout | Set `ATX_SHELL_TIMEOUT=1800` or break into smaller transforms | + +## MANDATORY: Cleanup + +Clean up session files **before starting** and **after completing** each transformation: +```bash +[ -d ~/.aws/atx/custom/atx-agent-session ] && find ~/.aws/atx/custom/atx-agent-session -maxdepth 1 -type f \( -name "*.sh" -o -name "*.log" -o -name "*.pid" -o -name "*.exit" -o -name "*.zip" \) -delete 2>/dev/null || true +``` \ No newline at end of file diff --git a/aws-transform/steering/troubleshooting.md b/aws-transform/steering/troubleshooting.md new file mode 100644 index 0000000..2d06c4d --- /dev/null +++ b/aws-transform/steering/troubleshooting.md @@ -0,0 +1,103 @@ +# Troubleshooting + +## Quick Reference + +| Issue | Resolution | +|-------|------------| +| `atx` not found | 
Install: `curl -fsSL https://transform-cli.awsstatic.com/install.sh` piped to `bash` | +| AWS credentials error or expiry | Run `aws sts get-caller-identity`. Check `AWS_PROFILE` or access key env vars | +| Permission denied | Local mode: need `transform-custom:*` — see Prerequisites → IAM Permissions in POWER.md. Remote mode: generate and attach policies via `npx ts-node generate-caller-policy.ts` — see remote-execution.md | +| Network error | Resolve region: `REGION=${AWS_REGION:-${AWS_DEFAULT_REGION:-$(aws configure get region 2>/dev/null)}}; REGION=${REGION:-us-east-1}`. Check access to `transform-custom.${REGION}.api.aws` | +| Build fails during transform | Verify build command works locally first. Try interactive mode for debugging | +| Transform not found | Run `atx custom def list --json` to check available TDs | +| Configuration fails with commas | Do not use commas inside `additionalPlanContext` values — they break the CLI parser. Rephrase to avoid commas | +| Conversation expired | Conversations expire after 30 days. Start a new one | +| Windows not supported | Tell user to use Windows Subsystem for Linux (WSL) | +| Git clone fails in remote container | See "Private Repo Credential Issues" section below | +| Timeout | Set `export ATX_SHELL_TIMEOUT=1800` (default: 900s) | +| Stale .exit file | The `.exit` file in `atx-agent-session/` may be left over from a previous run. Always use `kill -0 <pid>` to check if the process is still running — do not rely solely on the `.exit` file | +| Poor quality results | See Improving Quality section below | + +## Private Repo Credential Issues + +If a git clone fails in the remote container (job status FAILED, logs show +authentication or 403 errors), work through these steps with the user: + +**1. 
Is the PAT/key stored?** +```bash +aws secretsmanager describe-secret --secret-id "atx/github-token" --region "$REGION" 2>/dev/null && echo "EXISTS" || echo "MISSING" +aws secretsmanager describe-secret --secret-id "atx/ssh-key" --region "$REGION" 2>/dev/null && echo "EXISTS" || echo "MISSING" +``` +If missing, guide the user through setup — see Step 1 in POWER.md. + +**2. Does the PAT have the right scope?** +GitHub fine-grained PATs can be scoped to specific repos. If the user created a +PAT for repos A and B but is now transforming repo C, the clone will fail with 403. +Ask: "Does your GitHub PAT have access to [repo name]? Fine-grained PATs need +each repo explicitly listed." + +Resolution: the user updates their PAT on GitHub to include the new repo, then +updates the stored secret: +```bash +aws secretsmanager put-secret-value --secret-id "atx/github-token" --region "$REGION" --secret-string "<new-PAT>" +``` + +**3. Has the PAT expired?** +GitHub PATs can have expiration dates. Ask: "When did you create this PAT? It may +have expired." Resolution: create a new PAT on GitHub, then update the secret: +```bash +aws secretsmanager put-secret-value --secret-id "atx/github-token" --region "$REGION" --secret-string "<new-PAT>" +``` + +**4. Is it the right credential type for the URL?** +- HTTPS URLs (`https://github.com/...`) need `atx/github-token` (PAT) +- SSH URLs (`git@github.com:...`) need `atx/ssh-key` (SSH private key) +If the user provided SSH URLs but only has a PAT stored (or vice versa), guide +them to set up the correct credential type. + +**5. Classic vs fine-grained PAT?** +Classic PATs with `repo` scope work for all repos the user has access to. +Fine-grained PATs need each repo explicitly added. If the user is unsure, suggest +a classic PAT with `repo` scope as the simpler option.
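Check #4 can be sketched as a quick URL-to-secret mapping. This is a minimal sketch: the `repo_url` value is a hypothetical example, and the two secret names are the ones documented above.

```shell
# Sketch: map the clone URL style to the secret the remote container will fetch.
repo_url="git@github.com:example-org/example-repo.git"  # hypothetical URL

case "$repo_url" in
  https://*)      expected_secret="atx/github-token" ;;  # HTTPS clone → PAT
  git@*|ssh://*)  expected_secret="atx/ssh-key" ;;       # SSH clone → private key
  *)              expected_secret="" ;;                  # unrecognized URL style
esac

echo "Expected secret for $repo_url: ${expected_secret:-unknown}"
```

If the expected secret from this mapping is the one reported MISSING in check #1, that mismatch is the likely cause of the clone failure.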
+ +## Local Mode Debugging + +| Log | Path | +|-----|------| +| Developer logs | `~/.aws/atx/logs/debug*.log` and `~/.aws/atx/logs/error.log` | +| Conversation log | `~/.aws/atx/custom/<conversation-id>/logs/<timestamp>-conversation.log` | + +Network errors may indicate VPN/firewall issues with AWS endpoints. + +## Remote Mode Debugging + +- CloudWatch logs: `/aws/batch/atx-transform` +- Check log streams for the failed conversation ID in AWS Console +- S3 output bucket contains artifacts even for failed jobs +- Check batch job status for error details + +## Deployment Failures + +CDK deployment handles most issues automatically. Common recovery: +```bash +ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra" +cd "$ATX_INFRA_DIR" && ./teardown.sh +cd "$ATX_INFRA_DIR" && ./setup.sh +``` + +Common causes: insufficient IAM permissions, service quota limits, no default VPC, Docker not running (needed for container build). + +## Improving Quality + +Diagnose in this order: + +1. **Reference materials**: Provide migration guides or API specs via `additionalPlanContext`. +2. **Complexity**: Decompose very complex transforms into smaller steps. +3. **Knowledge items**: Review learnings from previous runs. Enable good ones, disable irrelevant ones. + +## Network Requirements + +| Endpoint | Purpose | +|----------|---------| +| `transform-cli.awsstatic.com` | CLI installation and updates | +| `transform-custom.${REGION}.api.aws` | Transformation service API |
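As a quick preflight, the two endpoints in the table above can be probed from the shell. This is a minimal sketch: the plain HTTPS probe is illustrative only (a "reachable" line confirms DNS resolution and TLS connectivity, not API access), and region resolution follows the same fallback order used in the Quick Reference.

```shell
# Sketch: check that required service endpoints are reachable, e.g. through
# a corporate proxy or VPN. Probe is connectivity-only, not an API call.
REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}}
REGION=${REGION:-us-east-1}

for host in "transform-cli.awsstatic.com" "transform-custom.${REGION}.api.aws"; do
  if curl -sS --max-time 5 -o /dev/null "https://${host}" 2>/dev/null; then
    echo "reachable: ${host}"
  else
    echo "BLOCKED:   ${host}  (check VPN/firewall/proxy)"
  fi
done
```

A BLOCKED line for either host before installation points at the network rather than the CLI, which shortcuts the "Network error" row in the Quick Reference.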