168 changes: 168 additions & 0 deletions .github/actions/bandwidth-throttling-linux/action.yml
@@ -0,0 +1,168 @@
name: bandwidth-throttling-linux
description: Action to throttle the bandwidth on Linux runners (for Android E2E) using tc/netem

inputs:
  test_server_host:
    description: The host of the test server, without protocol
    required: true
  profile:
    description: "Network profile to use (lte, slow_3g, edge_2g, satellite, flapping). Overrides individual settings."
    required: false
    default: ""
  download_speed:
    description: The download speed limit (in Kbit/s) - used if profile is not set
    required: false
    default: "3300"
  upload_speed:
    description: The upload speed limit (in Kbit/s) - used if profile is not set
    required: false
    default: "3300"
  latency:
    description: The latency (in ms) each way - used if profile is not set
    required: false
    default: "500"
  packet_loss:
    description: Packet loss percentage (0-100) - used if profile is not set
    required: false
    default: "0"
  disable:
    description: Disable throttling
    required: false
    default: "false"

outputs:
  effective_profile:
    description: The profile that was applied
    value: ${{ steps.resolve-profile.outputs.profile }}
  timeout_multiplier:
    description: Recommended timeout multiplier for this profile
    value: ${{ steps.resolve-profile.outputs.timeout_multiplier }}

runs:
  using: composite
  steps:
    - name: Resolve network profile settings
      id: resolve-profile
      shell: bash
      run: |
        PROFILE="${{ inputs.profile }}"

        # Profile-based settings (same as the macOS version)
        case "$PROFILE" in
          lte)
            DOWNLOAD=10000
            UPLOAD=5000
            LATENCY=30
            PACKET_LOSS=0
            TIMEOUT_MULT=1
            ;;
          slow_3g)
            DOWNLOAD=400
            UPLOAD=128
            LATENCY=300
            PACKET_LOSS=2
            TIMEOUT_MULT=3
            ;;
          edge_2g)
            DOWNLOAD=50
            UPLOAD=25
            LATENCY=500
            PACKET_LOSS=5
            TIMEOUT_MULT=10
            ;;
          satellite)
            DOWNLOAD=1000
            UPLOAD=256
            LATENCY=700
            PACKET_LOSS=1
            TIMEOUT_MULT=5
            ;;
          flapping)
            DOWNLOAD=1000
            UPLOAD=256
            LATENCY=200
            PACKET_LOSS=0
            TIMEOUT_MULT=5
            ;;
          *)
            DOWNLOAD=${{ inputs.download_speed }}
            UPLOAD=${{ inputs.upload_speed }}
            LATENCY=${{ inputs.latency }}
            PACKET_LOSS=${{ inputs.packet_loss }}
            TIMEOUT_MULT=1
            PROFILE="custom"
            ;;
        esac

        echo "download=$DOWNLOAD" >> $GITHUB_OUTPUT
        echo "upload=$UPLOAD" >> $GITHUB_OUTPUT
        echo "latency=$LATENCY" >> $GITHUB_OUTPUT
        echo "packet_loss=$PACKET_LOSS" >> $GITHUB_OUTPUT
        echo "timeout_multiplier=$TIMEOUT_MULT" >> $GITHUB_OUTPUT
        echo "profile=$PROFILE" >> $GITHUB_OUTPUT

        echo "Network profile: $PROFILE"
        echo "  Download: ${DOWNLOAD} Kbit/s"
        echo "  Upload: ${UPLOAD} Kbit/s"
        echo "  Latency: ${LATENCY} ms"
        echo "  Packet Loss: ${PACKET_LOSS}%"
        echo "  Timeout Multiplier: ${TIMEOUT_MULT}x"

    - name: Disable existing throttling
      if: ${{ inputs.disable == 'true' }}
      shell: bash
      run: |
        # Remove any existing tc rules
        sudo tc qdisc del dev eth0 root 2>/dev/null || true
        echo "Network throttling disabled"
Comment on lines +111 to +117
⚠️ Potential issue | 🟠 Major

Flapping cleanup is incomplete.

flapping-linux.sh is started in the background, but the disable path only deletes the qdisc and never stops that process. A later cleanup can therefore be undone as soon as the flapping loop applies its next state.

Suggested fix
     - name: Disable existing throttling
       if: ${{ inputs.disable == 'true' }}
       shell: bash
       run: |
+        if [ -n "${FLAPPING_PID:-}" ]; then
+          kill "${FLAPPING_PID}" 2>/dev/null || true
+        fi
+
         # Remove any existing tc rules
         sudo tc qdisc del dev eth0 root 2>/dev/null || true
         echo "Network throttling disabled"

Also applies to: 153-160

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In .github/actions/bandwidth-throttling-linux/action.yml around lines 111-117: the disable path currently only deletes the qdisc and does not stop the background flapping process (flapping-linux.sh), so throttling can be reinstated. Update the disable branch to also stop that process: read and kill the PID recorded when flapping-linux.sh is started (or fall back to pkill -f flapping-linux.sh), and remove the PID file. Apply the same change to the second disable block (around the later lines referenced) so both cleanup paths kill the flapping process as well as removing the qdisc.
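The cleanup the reviewer asks for can be sketched as a small helper. This is a hedged sketch: the PID-file path is hypothetical (the action currently records the PID only in $GITHUB_ENV), and `stop_flapping` is an illustrative name, not part of the action.

```shell
#!/bin/bash
# Sketch: stop the background flapping loop before removing tc rules.
# The PID file path is an assumption, not something the action creates today.
stop_flapping() {
  local pid_file="${1:-/tmp/flapping.pid}"
  if [ -f "$pid_file" ]; then
    # Kill the recorded process; ignore errors if it already exited.
    kill "$(cat "$pid_file")" 2>/dev/null || true
    rm -f "$pid_file"
  else
    # Fallback when no PID was recorded.
    pkill -f flapping-linux.sh 2>/dev/null || true
  fi
  echo "flapping stopped"
}
```

A disable step would call `stop_flapping` before `tc qdisc del`, so the loop cannot re-apply a qdisc after cleanup.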


    - name: Apply network throttling with tc/netem
      if: ${{ inputs.disable != 'true' }}
      shell: bash
      run: |
        # Remove any existing rules
        sudo tc qdisc del dev eth0 root 2>/dev/null || true

        DOWNLOAD="${{ steps.resolve-profile.outputs.download }}"
        LATENCY="${{ steps.resolve-profile.outputs.latency }}"
        PACKET_LOSS="${{ steps.resolve-profile.outputs.packet_loss }}"

        echo "Applying network throttling..."

        # Build netem options
        NETEM_OPTS="delay ${LATENCY}ms"
        if [ "$PACKET_LOSS" != "0" ]; then
          NETEM_OPTS="$NETEM_OPTS loss ${PACKET_LOSS}%"
        fi

        # Add root qdisc with netem for delay and packet loss
        sudo tc qdisc add dev eth0 root handle 1: netem $NETEM_OPTS

        # Add tbf (token bucket filter) for rate limiting
        # burst = rate / 8: Kbit/s divided by 8 gives KB, i.e. one second of buffer
        # latency = how long packets can wait in the queue
        RATE="${DOWNLOAD}kbit"
        BURST="$((DOWNLOAD / 8))kb"

        sudo tc qdisc add dev eth0 parent 1: handle 2: tbf rate $RATE burst $BURST latency 50ms
Comment on lines +119 to +147
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does tc qdisc add dev eth0 root ... on Linux shape only egress traffic, and is an IFB/ingress redirect required to limit download bandwidth for a client or NATed Android emulator?

💡 Result:

Yes.

  • tc qdisc add dev eth0 root … attaches a qdisc to eth0 egress (packets leaving eth0). Linux traffic control queuing/shaping is fundamentally an egress mechanism; the built-in ingress qdisc is limited (primarily policing/dropping rather than delaying/queuing). [1][2]
  • To limit “download” (inbound) bandwidth on eth0 (packets arriving on eth0 from the WAN), you typically redirect ingress to an IFB and then shape on the IFB’s egress qdisc (the common workaround for “ingress shaping”). [3][4]

Practical implication for a client behind you (or a NATed Android emulator):

  • If you’re the router for that client, you can often limit the client’s “download” without IFB by shaping the egress of the interface that sends traffic to the client (e.g., your LAN/bridge/veth egress), because the downloaded bytes become your egress toward the client.
  • You need IFB when you specifically must shape inbound on a given interface (commonly WAN ingress). [3][4]

Sources:
[1] Linux Advanced Routing & Traffic Control HOWTO (LARTC)
[2] NetBeez “How to Use the Linux Traffic Control” (notes that tc rules are generally applied to egress)
[3] CNI bandwidth plugin docs (explains IFB creation due to ingress shaping limitations)
[4] tc-mirred(8) man page (shows mirred egress redirect dev ifb0 usage)


Incomplete traffic shaping: upload_speed input is not applied.

The action accepts both download_speed and upload_speed inputs but implements throttling only for download. The tc qdisc add dev eth0 root command shapes egress traffic exclusively; applying it with only the DOWNLOAD value leaves upload bandwidth unthrottled. Proper bidirectional throttling requires separate handling (e.g., root qdisc for egress and an ingress redirect setup for inbound traffic).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In .github/actions/bandwidth-throttling-linux/action.yml around lines 119-147: the workflow applies throttling only to DOWNLOAD (via DOWNLOAD, RATE, BURST, NETEM_OPTS and the sudo tc qdisc add dev eth0 root ... tbf invocation). Implement upload shaping by reading the upload output (e.g. UPLOAD="${{ steps.resolve-profile.outputs.upload }}"), setting up an ingress redirect to an ifb device, and applying netem/tbf to that ifb device for the ingress path. Specifically: load and configure ifb0, attach an ingress qdisc to eth0 and redirect it to ifb0, build UPLOAD_NETEM_OPTS analogously to NETEM_OPTS, compute UPLOAD_RATE and UPLOAD_BURST from UPLOAD, and run sudo tc qdisc add dev ifb0 root ... netem/tbf for the upload path so both egress (eth0 root) and ingress (ifb0) are shaped.
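The IFB setup the review describes could look roughly like the following dry-run sketch. The commands are echoed rather than executed (so the sketch has no side effects), and the ifb0 device name, the matchall filter, and the example numbers are assumptions, not part of the action.

```shell
#!/bin/bash
# Dry-run sketch of shaping the second direction via an IFB device:
# eth0's root qdisc shapes egress only, so the other direction is
# redirected from eth0 ingress to ifb0 and shaped on ifb0's egress.
UPLOAD=256                        # Kbit/s, e.g. the satellite profile
UPLOAD_RATE="${UPLOAD}kbit"
UPLOAD_BURST="$((UPLOAD / 8))kb"  # one second of buffer, as on the download path

# Echoed only; on a real runner these would be executed.
echo "sudo modprobe ifb numifbs=1"
echo "sudo ip link set dev ifb0 up"
echo "sudo tc qdisc add dev eth0 handle ffff: ingress"
echo "sudo tc filter add dev eth0 parent ffff: matchall action mirred egress redirect dev ifb0"
echo "sudo tc qdisc add dev ifb0 root tbf rate $UPLOAD_RATE burst $UPLOAD_BURST latency 50ms"
```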


        echo "Network throttling applied:"
        echo "  Profile: ${{ steps.resolve-profile.outputs.profile }}"
        sudo tc qdisc show dev eth0

    - name: Start flapping simulation
      if: ${{ inputs.profile == 'flapping' && inputs.disable != 'true' }}
      shell: bash
      run: |
        # Start the flapping script in the background
        nohup ${{ github.action_path }}/flapping-linux.sh &
        FLAPPING_PID=$!
        echo "FLAPPING_PID=$FLAPPING_PID" >> $GITHUB_ENV
        echo "Started flapping simulation with PID: $FLAPPING_PID"

    - name: Test connection after throttling
      if: ${{ inputs.disable != 'true' }}
      shell: bash
      run: |
        echo "Testing connection with throttling applied..."
        curl -o /dev/null -m 30 --retry 2 -s -w 'Total: %{time_total}s\n' 'https://${{ inputs.test_server_host }}/api/v4/system/ping?get_server_status=true' || echo "Connection test completed (may have timed out under heavy throttling)"
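The netem option string the throttling step assembles (delay always, loss only when non-zero) can be exercised in isolation. A minimal sketch; `build_netem_opts` is a hypothetical helper name used only for illustration:

```shell
#!/bin/bash
# Re-creates how the action builds its netem option string.
build_netem_opts() {
  local latency=$1 packet_loss=$2
  local opts="delay ${latency}ms"
  # Loss is appended only when the packet_loss value is non-zero.
  if [ "$packet_loss" != "0" ]; then
    opts="$opts loss ${packet_loss}%"
  fi
  echo "$opts"
}

build_netem_opts 300 2   # slow_3g → "delay 300ms loss 2%"
build_netem_opts 30 0    # lte     → "delay 30ms"
```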
63 changes: 63 additions & 0 deletions .github/actions/bandwidth-throttling-linux/flapping-linux.sh
@@ -0,0 +1,63 @@
#!/bin/bash
# Flapping Network Simulation Script for Linux (tc/netem)
# Simulates intermittent connectivity by cycling through network states

set -e

echo "Starting flapping network simulation (Linux)"
echo "PID: $$"

# Function to apply network settings using tc/netem
apply_settings() {
  local download=$1
  local latency=$2
  local packet_loss=$3
  local state_name=$4

  echo "[$(date '+%H:%M:%S')] Switching to state: $state_name"

  # Remove existing rules
  sudo tc qdisc del dev eth0 root 2>/dev/null || true

  if [ "$state_name" = "disconnected" ]; then
    # 100% packet loss = disconnected
    sudo tc qdisc add dev eth0 root netem loss 100%
  else
    # Build netem options
    NETEM_OPTS="delay ${latency}ms"
    if [ "$packet_loss" != "0" ]; then
      NETEM_OPTS="$NETEM_OPTS loss ${packet_loss}%"
    fi

    # Add netem for delay/loss
    sudo tc qdisc add dev eth0 root handle 1: netem $NETEM_OPTS

    # Add rate limiting
    RATE="${download}kbit"
    BURST="$((download / 8))kb"
    sudo tc qdisc add dev eth0 parent 1: handle 2: tbf rate $RATE burst $BURST latency 50ms
  fi
}

# Flapping pattern loop
cycle=0
while true; do
  cycle=$((cycle + 1))
  echo "[$(date '+%H:%M:%S')] === Flapping cycle $cycle ==="

  # State 1: Connected (good connection)
  apply_settings 1000 200 0 "connected"
  sleep 30

  # State 2: Disconnected
  apply_settings 0 0 100 "disconnected"
  sleep 5

  # State 3: Slow 3G
  apply_settings 400 300 2 "slow_3g"
  sleep 30

  # State 4: Brief disconnection
  apply_settings 0 0 100 "disconnected"
  sleep 3
Comment on lines +64 to +78
⚠️ Potential issue | 🟡 Minor

State parameters inconsistent with macOS flapping.sh.

The call sites pass only 4 arguments, but comparing to .github/actions/bandwidth-throttling/flapping.sh lines 61-74, the macOS version passes 5 arguments including upload speed. For example, macOS connected state: apply_settings 1000 256 200 0 "connected" vs Linux: apply_settings 1000 200 0 "connected".

This inconsistency could cause confusion when maintaining both scripts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In .github/actions/bandwidth-throttling-linux/flapping-linux.sh around lines 48-62: the Linux flapping script's calls to apply_settings use four arguments while the macOS variant uses five (including the upload rate), which makes the two scripts inconsistent. Update each apply_settings invocation in this file (the calls for the "connected", "disconnected", "slow_3g", and second "disconnected" states) to include the missing upload speed parameter in the same argument order as the macOS script, i.e. add the upload value between the download and latency arguments so calls match apply_settings <download> <upload> <latency> <loss> "<state>".

done
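The five-argument signature the review recommends (matching the macOS flapping.sh) could look like the stub below. This is a sketch only: the tc commands are omitted and the echo format is illustrative, not the script's actual output.

```shell
#!/bin/bash
# Stub of the macOS-consistent signature:
# apply_settings <download> <upload> <latency> <loss> "<state>"
apply_settings() {
  local download=$1 upload=$2 latency=$3 packet_loss=$4 state_name=$5
  echo "state=$state_name download=${download}kbit upload=${upload}kbit latency=${latency}ms loss=${packet_loss}%"
}

# Matches the macOS "connected" state from flapping.sh:
apply_settings 1000 256 200 0 "connected"
# → state=connected download=1000kbit upload=256kbit latency=200ms loss=0%
```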