7 changes: 7 additions & 0 deletions README.md
@@ -49,6 +49,13 @@ Documentation is available at https://kiro.dev/docs/powers/

---

### aws-transform
**Agents modernizing the world's infrastructure and software**, backed by years of AWS expertise. AWS Transform is a full modernization factory: it connects assessment through execution in a single experience, so the manual handoffs and lost context that commonly stall large-scale migrations and ongoing tech-debt reduction no longer slow you down. This power brings AWS Transform directly into Kiro. AWS Transform custom is the first supported capability, with more playbooks on the way. Find out more at [aws.amazon.com/transform](https://aws.amazon.com/transform/)

**MCP Servers:** None

---

### cloud-architect
**Build infrastructure on AWS** - Build AWS infrastructure with CDK in Python following AWS Well-Architected framework best practices.

756 changes: 756 additions & 0 deletions aws-transform/POWER.md

Large diffs are not rendered by default.

132 changes: 132 additions & 0 deletions aws-transform/steering/cli-reference.md
@@ -0,0 +1,132 @@
# ATX CLI Reference

## Execution Flags (`atx custom def exec`)

| Flag | Long Form | Description |
|------|-----------|-------------|
| `-n` | `--transformation-name <name>` | TD name (from `atx custom def list --json`) |
| `-p` | `--code-repository-path <path>` | Path to code repo (`.` for current dir) |
| `-x` | `--non-interactive` | No user prompts (always use this flag) |
| `-t` | `--trust-all-tools` | Auto-approve tool executions (required with `-x`) |
| `-d` | `--do-not-learn` | Prevent knowledge item extraction |
| `-g` | `--configuration <config>` | Inline configuration (`'key=val'`) |
| `--tv` | `--transformation-version <ver>` | Specific TD version |

## Configuration

Inline: `--configuration 'additionalPlanContext=Target Python 3.13'`

Example: `atx custom def exec -n my-td -p /source/repo -g 'additionalPlanContext=Target Java 17' -x -t`

`--configuration` is optional. Omit if no extra context needed.

## Other Commands

| Action | Command |
|--------|---------|
| Start interactive conversation | `atx` |
| Resume most recent conversation | `atx --resume` |
| Resume specific conversation | `atx --conversation-id <id>` (30-day limit) |
| List TDs | `atx custom def list --json` |
| Download TD | `atx custom def get -n <name>` (optional: `--tv <version>`, `--td <directory>`) |
| Delete TD | `atx custom def delete -n <name>` |
| Save TD as draft | `atx custom def save-draft -n <name> --description "<desc>" --sd <dir>` |
| Publish TD | `atx custom def publish -n <name> --description "<desc>" --sd <dir>` |
| List knowledge items | `atx custom def list-ki -n <name>` |
| View knowledge item | `atx custom def get-ki -n <name> --id <id>` |
| Enable/disable KI | `atx custom def update-ki-status -n <name> --id <id> --status ENABLED\|DISABLED` |
| KI auto-approval on/off | `atx custom def update-ki-config -n <name> --auto-enabled TRUE\|FALSE` |
| Export KIs | `atx custom def export-ki-markdown -n <name>` |
| Delete KI | `atx custom def delete-ki -n <name> --id <id>` |
| Update CLI | `atx update` |
| Check for CLI updates only | `atx update --check` |
| Tag a TD | `atx custom def tag --arn <arn> --tags '{"key":"value"}'` |

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `ATX_SHELL_TIMEOUT` | 900 (15 min) | Shell command timeout in seconds |
| `ATX_DISABLE_UPDATE_CHECK` | false | Disable version check |
| `AWS_PROFILE` | — | AWS credentials profile |
| `AWS_ACCESS_KEY_ID` | — | AWS access key |
| `AWS_SECRET_ACCESS_KEY` | — | AWS secret key |
| `AWS_SESSION_TOKEN` | — | Session token (temporary credentials) |
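
For example, a session with long-running builds might raise the shell timeout and pin a credentials profile before invoking `atx`. The values and profile name below are illustrative:

```shell
# Illustrative values: 30-minute shell timeout, update check disabled,
# and a named credentials profile (profile name is an assumption).
export ATX_SHELL_TIMEOUT=1800
export ATX_DISABLE_UPDATE_CHECK=true
export AWS_PROFILE=atx-dev
```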

## IAM Permissions

Minimum: `transform-custom:*` on `Resource: "*"`.

| Permission | Operation |
|-----------|----------|
| `transform-custom:ConverseStream` | Interactive conversations |
| `transform-custom:ExecuteTransformation` | Execute transforms |
| `transform-custom:ListTransformationPackageMetadata` | List transforms (`atx custom def list --json`) |
| `transform-custom:DeleteTransformationPackage` | Delete transforms |
| `transform-custom:CompleteTransformationPackageUpload` | Upload TDs |
| `transform-custom:CreateTransformationPackageUrl` | Create upload URLs |
| `transform-custom:GetTransformationPackageUrl` | Download TDs |
| `transform-custom:ListKnowledgeItems` | List knowledge items |
| `transform-custom:GetKnowledgeItem` | View knowledge item details |
| `transform-custom:DeleteKnowledgeItem` | Delete knowledge items |
| `transform-custom:UpdateKnowledgeItemConfiguration` | Configure auto-approval |
| `transform-custom:UpdateKnowledgeItemStatus` | Enable/disable items |
| `transform-custom:ListTagsForResource` | List tags |
| `transform-custom:TagResource` | Add tags |
| `transform-custom:UntagResource` | Remove tags |
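
The stated minimum grant can be expressed as a standalone policy document. A minimal sketch (the output path and any policy name you choose are illustrative; attach the result with `aws iam create-policy` and `attach-user-policy`/`attach-role-policy`):

```shell
# Write the minimum ATX policy document; the /tmp path is illustrative.
cat > /tmp/atx-minimal-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "transform-custom:*",
      "Resource": "*"
    }
  ]
}
EOF
```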

### Remote Mode Caller Permissions

The caller's AWS credentials (the user or role running the session) need additional
permissions beyond `transform-custom:*` for remote mode. Generate the policies,
then create and attach them:

```bash
ATX_INFRA_DIR="$HOME/.aws/atx/custom/remote-infra"
if [ -d "$ATX_INFRA_DIR" ]; then
  git -C "$ATX_INFRA_DIR" add -A
  git -C "$ATX_INFRA_DIR" commit -m "Local customizations" -q 2>/dev/null || true
  git -C "$ATX_INFRA_DIR" pull -q 2>/dev/null || true
else
  git clone -b atx-remote-infra --single-branch https://github.com/aws-samples/aws-transform-custom-samples.git "$ATX_INFRA_DIR"
fi
cd "$ATX_INFRA_DIR"
npx ts-node generate-caller-policy.ts
```

This produces two policies:

| Policy | Purpose | When Needed |
|--------|---------|-------------|
| `atx-runtime-policy.json` | Invoke Lambdas, S3 upload/download, KMS, Secrets Manager, CloudWatch logs | Day-to-day remote operations |
| `atx-deployment-policy.json` | CloudFormation, ECR, IAM roles, Batch, VPC, KMS key creation | One-time CDK deploy/destroy |

After generating, create and attach the runtime policy:
```bash
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
CALLER_ARN=$(aws sts get-caller-identity --query Arn --output text)

# Create the managed policy (ignore EntityAlreadyExists, fail on other errors)
if ! create_output=$(aws iam create-policy --policy-name ATXRuntimePolicy \
    --policy-document "file://$ATX_INFRA_DIR/atx-runtime-policy.json" 2>&1); then
  echo "$create_output" | grep -q "EntityAlreadyExists" \
    || { echo "Failed to create policy: $create_output" >&2; exit 1; }
fi

if echo "$CALLER_ARN" | grep -q ":user/"; then
  IDENTITY_NAME=$(echo "$CALLER_ARN" | awk -F'/' '{print $NF}')
  aws iam attach-user-policy --user-name "$IDENTITY_NAME" \
    --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/ATXRuntimePolicy"
elif echo "$CALLER_ARN" | grep -Eq ":assumed-role/|:role/"; then
  ROLE_NAME=$(echo "$CALLER_ARN" | sed 's/.*:\(assumed-\)\{0,1\}role\///' | cut -d'/' -f1)
  aws iam attach-role-policy --role-name "$ROLE_NAME" \
    --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/ATXRuntimePolicy"
fi
```

The runtime policy covers: `transform-custom:*` for ATX CLI operations (TD discovery, execution),
`lambda:InvokeFunction` on all `atx-*` functions,
`s3:PutObject`/`s3:GetObject` on source and output buckets, `kms:Encrypt`/`kms:Decrypt`/`kms:GenerateDataKey`
on the ATX encryption key, `secretsmanager:CreateSecret`/`PutSecretValue`/`DeleteSecret` on `atx/*` secrets,
`logs:GetLogEvents`/`FilterLogEvents` on the Batch log group, and `cloudformation:DescribeStacks`
for infrastructure status checks.
198 changes: 198 additions & 0 deletions aws-transform/steering/multi-transformation.md
@@ -0,0 +1,198 @@
# Multi-Transformation

Apply TDs to multiple repositories in parallel. TD-to-repo assignments and config
are already confirmed from the match report. Do NOT re-discover TDs or re-prompt.

## Input

From the match report: repo list, TD per repo, config per TD, execution mode.

## Prerequisite Check (Once Only)

Verify AWS credentials ONCE. Do NOT repeat per repo.
```bash
aws sts get-caller-identity
```
For local mode, also verify the CLI is installed: `atx --version`

## Local Execution

If any repos were provided as git URLs (HTTPS or SSH), clone them locally first.
The user's local git config handles authentication — no Secrets Manager needed.
```bash
CLONE_DIR=~/.aws/atx/custom/atx-agent-session/repos/<repo-name>-$SESSION_TS
git clone <git-url> "$CLONE_DIR"
```

If repos were provided as an S3 bucket path with zips, download and extract locally:
```bash
mkdir -p ~/.aws/atx/custom/atx-agent-session/repos
aws s3 sync s3://user-bucket/repos/ ~/.aws/atx/custom/atx-agent-session/repos/ --exclude "*" --include "*.zip"
for zip in ~/.aws/atx/custom/atx-agent-session/repos/*.zip; do
  name=$(basename "$zip" .zip)
  unzip -qo "$zip" -d "$HOME/.aws/atx/custom/atx-agent-session/repos/${name}-$SESSION_TS/"
done
```

Use the cloned/extracted paths as `<repo-path>` for each repo.

For each repo, verify it's a git repo:
```bash
ls -la <repo-path>
git -C <repo-path> status
```
If not a git repo: `cd <repo-path> && git init && git add . && git commit -m "Initial commit"`

The active language runtime must match the transformation's target version so that
builds and tests run correctly. Check the current version. On a mismatch, first
check whether the target version is already installed: `/usr/libexec/java_home -V 2>&1`
(macOS) or `ls /usr/lib/jvm/` (Linux) for Java, `pyenv versions` for Python,
`nvm ls` for Node.js. If found, switch to it (e.g., `export JAVA_HOME=<path to JDK> && export PATH="$JAVA_HOME/bin:$PATH"`,
`pyenv shell 3.12`, `nvm use 22`). Only if the target version is not installed at all,
ask the user for permission before installing. Suggest:
- Java: `brew install --cask corretto23` (macOS), `sudo yum install java-23-amazon-corretto-devel` (RHEL/AL2), or `sudo apt install java-23-amazon-corretto-jdk` (Debian/Ubuntu)
- Python: `pyenv install 3.13.0 && pyenv shell 3.13.0`
- Node.js: `nvm install 23 && nvm use 23`

Do NOT proceed until the correct version is active. Verify the switch succeeded
before proceeding.
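
The version gate above can be sketched as a small check. The major-version parser and the Java target value are assumptions for illustration; the same pattern applies to `pyenv` and `nvm` runtimes:

```shell
# Extract the major version from a Java version string and compare it
# to the transformation target; refuse to proceed on a mismatch.
TARGET_MAJOR=17   # illustrative target

active_major() {
  # "17.0.9" -> 17 ; legacy "1.8.0_392" -> 8
  local v=$1
  case "$v" in
    1.*) v=${v#1.}; echo "${v%%.*}" ;;
    *)   echo "${v%%.*}" ;;
  esac
}

if [ "$(active_major "17.0.9")" = "$TARGET_MAJOR" ]; then
  echo "runtime matches target"
else
  echo "runtime mismatch: switch or install the target version first" >&2
fi
```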

Run transformations in parallel — maximum 3 concurrent repos at a time (the user
can override this, but 3 is recommended to avoid overloading the machine). If there
are more than 3 repos, process them in batches of 3 (wait for a batch to finish
before starting the next). Maximum 9 repos total for local mode (user can override,
but recommend remote mode for more). If the total repo count exceeds 9, suggest
remote mode instead.

For each repo, use bash to create a runner script that captures the exit code, following this exact format:
```bash
mkdir -p ~/.aws/atx/custom/atx-agent-session
cat > ~/.aws/atx/custom/atx-agent-session/run-<repo-name>.sh << 'RUNNER'
#!/bin/bash
atx custom def exec -n <td-name> -p <repo-path> -x -t \
--configuration 'additionalPlanContext=<config>'
echo $? > ~/.aws/atx/custom/atx-agent-session/<repo-name>.exit
RUNNER
chmod +x ~/.aws/atx/custom/atx-agent-session/run-<repo-name>.sh
nohup ~/.aws/atx/custom/atx-agent-session/run-<repo-name>.sh > ~/.aws/atx/custom/atx-agent-session/<repo-name>.log 2>&1 &
echo $! > ~/.aws/atx/custom/atx-agent-session/<repo-name>.pid
```
Omit `--configuration` if no config needed. Launch each repo's script in rapid
succession — do NOT wait between launches. Each runner script is backgrounded
via nohup; the exit code is captured to `~/.aws/atx/custom/atx-agent-session/<repo-name>.exit` when ATX finishes.
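
The batch-of-3 scheduling can be sketched with array slicing. The repo names and the launch placeholder are illustrative; in practice each launch is the `nohup` runner-script invocation shown above:

```shell
# Process repos in batches of BATCH_SIZE, waiting for each batch to
# finish before starting the next. Repo names are illustrative.
BATCH_SIZE=3
REPOS=(repo-a repo-b repo-c repo-d repo-e)

i=0
while [ "$i" -lt "${#REPOS[@]}" ]; do
  for repo in "${REPOS[@]:$i:$BATCH_SIZE}"; do
    echo "launching $repo"   # placeholder: nohup run-<repo>.sh ... &
  done
  wait                       # block until this batch's background jobs exit
  i=$((i + BATCH_SIZE))
done
```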

After launching all repos, find each repo's conversation log by grepping its
process log (ATX outputs the path within 30-60 seconds of starting):
```bash
grep "Conversation log:" ~/.aws/atx/custom/atx-agent-session/<repo-name>.log 2>/dev/null
```
If it hasn't appeared yet, wait 15 seconds and retry. Extract the full path from
each — do NOT use `ls -t` across all conversations, as that may match a different run.
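
The grep-and-retry above can be wrapped in a helper; the function name and retry budget are assumptions:

```shell
# Poll a repo's process log until ATX prints the conversation-log path,
# then emit that path. Retries every 15 s up to a fixed attempt budget.
find_conversation_log() {
  local log=$1 attempts=${2:-8} line
  while [ "$attempts" -gt 0 ]; do
    if line=$(grep -m1 "Conversation log:" "$log" 2>/dev/null); then
      echo "${line#*Conversation log: }"
      return 0
    fi
    sleep 15
    attempts=$((attempts - 1))
  done
  return 1   # path never appeared
}
```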

Then start monitoring. On each 60-second cycle:
1. Check each PID: `kill -0 $(cat ~/.aws/atx/custom/atx-agent-session/<repo-name>.pid) 2>/dev/null && echo "RUNNING" || echo "DONE"`
2. Tail each repo's conversation log and relay progress to the user
3. For each repo, list the artifacts directory (`~/.aws/atx/custom/<conversation-id>/artifacts/`)
and open any new files with `kiro -r <filepath>` as they appear (open each file only once).
4. Report which repos are still running, which have completed

**You MUST continue polling without waiting for user input.** The user should see
continuous progress updates across all repos.

A repo's transformation is done ONLY when its background process exits (i.e.,
`kill -0` returns non-zero). Do NOT treat exit code 0 from any other command
(grep, cat, test, ls, etc.) as transformation completion. Do NOT treat log
messages like "TRANSFORMATION COMPLETE" as completion — ATX performs additional
steps after that (validation summary generation).
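
The completion rule can be encoded directly: a repo is done only when its PID is gone and its exit file has been written. The helper names are assumptions; paths follow the session layout used above:

```shell
# True completion = background process gone AND exit code captured.
SESSION=~/.aws/atx/custom/atx-agent-session

repo_done() {
  ! kill -0 "$(cat "$SESSION/$1.pid" 2>/dev/null)" 2>/dev/null \
    && [ -f "$SESSION/$1.exit" ]
}

repo_succeeded() {
  [ "$(cat "$SESSION/$1.exit" 2>/dev/null)" = "0" ]
}
```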

## Remote Execution

Prepare each repo's source before submitting the batch. Follow the source prep
rules from single-transformation.md: HTTPS and SSH git URLs (with credentials
configured) are passed directly; S3 zips from the user's bucket must be copied
to the managed source bucket (`atx-source-code-{account}`) first; local repos
must be zipped and uploaded to the same managed bucket.

Submit jobs via the batch Lambda in chunks of up to 128. If there are more than
128 jobs, split them into multiple `atx-trigger-batch-jobs` calls (e.g., 500 repos
= 4 calls of 128 + 128 + 128 + 116). Each call returns its own `batchId`. Track
all batch IDs for monitoring.
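
The chunk count is ceiling division over the 128-job limit; the helper name below is an assumption:

```shell
# Number of atx-trigger-batch-jobs calls needed for N jobs (ceiling division).
JOB_LIMIT=128
chunks_needed() {
  echo $(( ($1 + JOB_LIMIT - 1) / JOB_LIMIT ))
}

chunks_needed 500   # 500 jobs -> 4 calls (128 + 128 + 128 + 116)
```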

Include the `environment` field on each job to set the language version matching the transformation's target (e.g., `"JAVA_VERSION":"21"` for a Java upgrade targeting 21):
```bash
aws lambda invoke --function-name atx-trigger-batch-jobs \
--payload '{"batchName":"<name>-chunk-1","jobs":[{"source":"<url>","command":"atx custom def exec -n <td> -p /source/<project> -x -t","jobName":"<name>","environment":{"JAVA_VERSION":"<target>"}}]}' \
--cli-binary-format raw-in-base64-out /dev/stdout
```

If the total exceeds 128, repeat with the next chunk:
```bash
aws lambda invoke --function-name atx-trigger-batch-jobs \
--payload '{"batchName":"<name>-chunk-2","jobs":[...next 128 jobs...]}' \
--cli-binary-format raw-in-base64-out /dev/stdout
```

Monitor each batch by its `batchId`:
```bash
aws lambda invoke --function-name atx-get-batch-status \
--payload '{"batchId":"<batch-id>"}' \
--cli-binary-format raw-in-base64-out /dev/stdout
```
Polling: every 60 seconds for the first 10 polls, then every 5 minutes after.
Report only on status change.
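
The cadence can be expressed as a tiny helper used inside the poll loop; the function name is an assumption:

```shell
# Seconds to sleep before poll number $1 (1-based): 60 s for the first
# ten polls, 300 s afterwards.
poll_interval() {
  if [ "$1" -le 10 ]; then echo 60; else echo 300; fi
}
```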

## Progress Reporting

```
[1/N] repo-name TD-name Status
[2/N] repo-name TD-name Status
```

## Result Collection

Collect per repo: success/failure, transformed code path, error details.
```
Succeeded:
- repo-name: TD-name (config)
Failed:
- repo-name: TD-name (error)
```

For remote executions, include the CloudWatch dashboard link in the final output:
```bash
REGION=${AWS_DEFAULT_REGION:-${AWS_REGION:-$(aws configure get region 2>/dev/null)}}
REGION=${REGION:-us-east-1}
echo "https://${REGION}.console.aws.amazon.com/cloudwatch/home#dashboards/dashboard/ATX-Transform-CLI-Dashboard"
```

Hand off to [results-synthesis.md](results-synthesis.md) for consolidated reporting.

For local executions only, tell the user: "To review changes in each repo, open it in
Kiro (`kiro -r <repo-path>`) and use the Source Control panel to see the full
commit history with diffs for each file ATX modified."

## Error Handling

| Scenario | Action |
|----------|--------|
| Git clone fails | Log error, continue with remaining repos |
| Transformation fails | Log repo and error, do not auto-retry |
| Partial results | Generate summary from successes, report failures |

## MANDATORY: Cleanup

Clean up session files **before starting** and **after completing** each batch:
```bash
[ -d ~/.aws/atx/custom/atx-agent-session ] && find ~/.aws/atx/custom/atx-agent-session -maxdepth 1 -type f \( -name "*.sh" -o -name "*.log" -o -name "*.pid" -o -name "*.exit" -o -name "*.zip" \) -delete 2>/dev/null || true
```

For remote mode: after presenting results, also prompt the user about infrastructure
teardown. See the Cleanup section in [remote-execution.md](remote-execution.md)
for the exact prompt and flow.

## Key Principles

1. Single prerequisite check — never repeat for parallel tasks
2. Trust the match report — do not re-discover TDs
3. Local parallel execution — maximum 3 concurrent repos (user-overridable); recommend remote for more than 9
4. Remote parallel execution — submit in chunks of up to 128 jobs per `atx-trigger-batch-jobs` call; split larger sets into multiple calls (max 512 repos per session)
5. Skip prerequisite checks in parallel task prompts