From 105dfad37f640b546e85ef47e5a597ac4d2d26bb Mon Sep 17 00:00:00 2001
From: zemelee <3049788545@qq.com>
Date: Mon, 31 Mar 2025 11:34:37 +0800
Subject: [PATCH] Update auto-parallelism.md

---
 chapter_computational-performance/auto-parallelism.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/chapter_computational-performance/auto-parallelism.md b/chapter_computational-performance/auto-parallelism.md
index be5ee2aa54..e975bc849d 100644
--- a/chapter_computational-performance/auto-parallelism.md
+++ b/chapter_computational-performance/auto-parallelism.md
@@ -27,7 +27,7 @@ import torch
 
 ## Parallel Computation on GPUs
 
-Let's start by defining a reference workload to test: the `run` function below performs 10 matrix-matrix multiplications on the device of our choice using data allocated into two variables: `x_gpu1` and `x_gpu2`.
+Let's start by defining a reference workload to test: the `run` function below performs 50 matrix-matrix multiplications on the device of our choice using data allocated into two variables: `x_gpu1` and `x_gpu2`.
 
 ```{.python .input}
 #@tab mxnet
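
For context, the sentence being corrected describes a `run` function that loops 50 times over a matrix product. A minimal sketch of such a workload is below; NumPy on CPU is used here as a stand-in for the book's GPU tensors (`x_gpu1`, `x_gpu2`), so the exact tensor types and device placement are assumptions, not the chapter's actual code:

```python
import numpy as np

def run(x):
    # Perform 50 matrix-matrix multiplications, the count the
    # corrected prose now states (the original text said 10).
    return [x @ x for _ in range(50)]

# Small stand-in matrix for the book's GPU-allocated data.
x = np.random.rand(4, 4)
results = run(x)
print(len(results))  # 50
```

The patch only touches the prose, not the code, so the count in the text is brought in line with the loop bound already used by the workload.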