fedlab_core.utils.sampler
Module Contents

Classes

DistributedSampler
    Sampler that restricts data loading to a subset of the dataset.

NonIIDDistributedSampler
    This is a copy of torch.utils.data.distributed.DistributedSampler with the option to turn off adding extra samples.
class fedlab_core.utils.sampler.DistributedSampler(dataset, rank, num_replicas, shuffle=True)

Bases: torch.utils.data.distributed.Sampler

Sampler that restricts data loading to a subset of the dataset. It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel: in that case, each process can pass a DistributedSampler instance as a DataLoader sampler and load a subset of the original dataset that is exclusive to it.

Note: The dataset is assumed to be of constant size.
Parameters

- dataset – Dataset used for sampling.
- rank (optional) – Rank of the current process within num_replicas.
- num_replicas (optional) – Number of processes participating in distributed training.
- shuffle (optional) – If true (default), the sampler will shuffle the indices.
__iter__(self)

__len__(self)

set_epoch(self, epoch)
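A minimal usage sketch (the random dataset is a stand-in for real training data, and the rank and num_replicas values are hypothetical; in a real run they would come from the distributed launcher):

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from fedlab_core.utils.sampler import DistributedSampler

    # Stand-in dataset; in practice this is the full training set.
    dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))

    # rank and num_replicas would normally come from the launcher,
    # e.g. torch.distributed.get_rank() and get_world_size().
    sampler = DistributedSampler(dataset, rank=0, num_replicas=4, shuffle=True)
    loader = DataLoader(dataset, batch_size=16, sampler=sampler)

    for epoch in range(3):
        # Re-seed the shuffle each epoch so the replicas keep drawing
        # disjoint shards, each in a fresh order.
        sampler.set_epoch(epoch)
        for inputs, targets in loader:
            pass  # forward/backward pass goes here

Here each of the four replicas iterates over 25 of the 100 samples per epoch; calling set_epoch before each epoch keeps the shuffled orderings consistent across processes.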
class fedlab_core.utils.sampler.NonIIDDistributedSampler(dataset, add_extra_samples=True)

Bases: torch.utils.data.distributed.Sampler

This is a copy of torch.utils.data.distributed.DistributedSampler (28 March 2019) with the option to turn off adding extra samples to divide the work evenly.
__iter__(self)

__len__(self)

set_epoch(self, epoch)
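A sketch of the add_extra_samples option. This assumes that, like the torch sampler it copies, the class derives rank and world size from the default torch.distributed process group, so a single-process group is initialized here purely for illustration; the dataset is again a stand-in:

    import torch
    import torch.distributed as dist
    from torch.utils.data import DataLoader, TensorDataset
    from fedlab_core.utils.sampler import NonIIDDistributedSampler

    # Single-process gloo group, only so the sampler can look up
    # rank and world size (an assumption based on the torch original).
    dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500",
                            rank=0, world_size=1)

    # 101 samples: not evenly divisible by most world sizes.
    dataset = TensorDataset(torch.randn(101, 10))

    # With the default add_extra_samples=True, the index list is padded
    # with repeated samples so every replica gets the same count; with
    # False, padding is skipped, so shard sizes may differ by one and
    # no sample is duplicated within an epoch.
    sampler = NonIIDDistributedSampler(dataset, add_extra_samples=False)
    loader = DataLoader(dataset, batch_size=16, sampler=sampler)

    for (features,) in loader:
        pass  # training step goes here

Turning off the extra samples trades perfectly balanced shards for an epoch that covers each sample exactly once, which can matter when duplicated samples would bias per-client statistics.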