:mod:`fedlab_core.utils.sampler`
================================

.. py:module:: fedlab_core.utils.sampler


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   fedlab_core.utils.sampler.DistributedSampler
   fedlab_core.utils.sampler.NonIIDDistributedSampler


.. class:: DistributedSampler(dataset, rank, num_replicas, shuffle=True)

   Bases: :class:`torch.utils.data.distributed.Sampler`

   Sampler that restricts data loading to a subset of the dataset.

   It is especially useful in conjunction with
   :class:`torch.nn.parallel.DistributedDataParallel`. In such a case, each
   process can pass a :class:`DistributedSampler` instance as a
   :class:`~torch.utils.data.DataLoader` sampler, and load a subset of the
   original dataset that is exclusive to it.

   .. note::
      Dataset is assumed to be of constant size.

   :param dataset: Dataset used for sampling.
   :param num_replicas: Number of processes participating in distributed training.
   :type num_replicas: optional
   :param rank: Rank of the current process within ``num_replicas``.
   :type rank: optional
   :param shuffle: If ``True`` (default), the sampler shuffles the indices.
   :type shuffle: optional

   .. method:: __iter__(self)


   .. method:: __len__(self)


   .. method:: set_epoch(self, epoch)



.. class:: NonIIDDistributedSampler(dataset, add_extra_samples=True)

   Bases: :class:`torch.utils.data.distributed.Sampler`

   A copy of :class:`torch.utils.data.distributed.DistributedSampler`
   (28 March 2019) with the option to turn off adding extra samples to divide
   the work evenly.

   .. method:: __iter__(self)


   .. method:: __len__(self)


   .. method:: set_epoch(self, epoch)
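

Example usage
~~~~~~~~~~~~~

A minimal usage sketch, not part of the generated API reference: it assumes a
toy :class:`~torch.utils.data.TensorDataset` and placeholder ``rank`` /
``num_replicas`` values, which in a real run would come from the distributed
launcher. It illustrates the pattern described above, where each process
passes its own :class:`DistributedSampler` to a
:class:`~torch.utils.data.DataLoader`.

.. code-block:: python

   import torch
   from torch.utils.data import DataLoader, TensorDataset

   from fedlab_core.utils.sampler import DistributedSampler

   # Hypothetical setup: a toy dataset of 100 samples split across 4 processes.
   # ``rank`` and ``num_replicas`` are placeholders; normally they come from
   # the process launcher (e.g. torch.distributed environment variables).
   dataset = TensorDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))
   rank, num_replicas = 0, 4

   sampler = DistributedSampler(dataset, rank=rank,
                                num_replicas=num_replicas, shuffle=True)
   loader = DataLoader(dataset, batch_size=10, sampler=sampler)

   for epoch in range(2):
       # set_epoch is assumed to follow the usual PyTorch convention of
       # re-seeding the per-epoch shuffle so each epoch draws a new permutation.
       sampler.set_epoch(epoch)
       for inputs, targets in loader:
           pass  # forward/backward pass would go here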