Thank you for your code. I have a question about the PerAvg algorithm implementation.
When the evaluate_one_step function (in serverperavg.py) is used to evaluate the performance of PerAvg, it first executes
for c in self.users: c.train_one_step()
to train the personalized models for one step. However, inside train_one_step it appears that testing data is used to update the personalized model. Is that right?
Source code:
```
def train_one_step(self):
    self.model.train()
    # step 1
    X, y = self.get_next_test_batch()
    self.optimizer.zero_grad()
    output = self.model(X)
    loss = self.loss(output, y)
    loss.backward()
    self.optimizer.step()
    # step 2
    X, y = self.get_next_test_batch()
    self.optimizer.zero_grad()
    output = self.model(X)
    loss = self.loss(output, y)
    loss.backward()
    self.optimizer.step(beta=self.beta)
```
Looking forward to your reply! Thank you!
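For reference, here is what I would have expected the one-step personalization to look like: a two-step Per-FedAvg-style update that draws both batches from the client's *training* data, with the first step using the inner learning rate alpha and the second using beta. This is only a minimal, dependency-free sketch on a scalar linear model; the function and parameter names (grad, personalize_one_step, alpha, beta) are my own illustration, not from your repository.

```python
# Hypothetical sketch of a Per-FedAvg-style one-step personalization.
# Scalar linear model y = w * x, trained with mean squared error.

def grad(w, batch):
    # Gradient of MSE 0.5 * mean((w*x - y)^2) with respect to w.
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def personalize_one_step(w, batch1, batch2, alpha=0.1, beta=0.05):
    # Step 1: inner update with learning rate alpha on one training batch.
    w_inner = w - alpha * grad(w, batch1)
    # Step 2: update with learning rate beta on a second training batch,
    # starting from the inner-updated weights (mirroring optimizer.step(beta=...)).
    return w_inner - beta * grad(w_inner, batch2)

# Two training batches drawn from y = 2x (no test data involved).
batch1 = [(1.0, 2.0), (2.0, 4.0)]
batch2 = [(3.0, 6.0), (0.5, 1.0)]

w = 0.0
for _ in range(100):
    w = personalize_one_step(w, batch1, batch2)
# w converges toward the true weight 2.0
```

My understanding is that the held-out test batches should only be touched afterwards, when measuring the personalized model's accuracy, which is why the two get_next_test_batch() calls above surprised me.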