:mod:`fedlab_core.server.handler`
=================================

.. py:module:: fedlab_core.server.handler


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   fedlab_core.server.handler.ParameterServerHandler
   fedlab_core.server.handler.SyncParameterServerHandler
   fedlab_core.server.handler.AsyncParameterServerHandler


.. class:: ParameterServerHandler(model, cuda=False)

   Bases: :class:`object`

   An abstract class representing a handler for the parameter server.

   Make sure that your self-defined server handler class subclasses this class.

   .. rubric:: Example

   See the source code of :class:`SyncParameterServerHandler` below, and the usage sketches at the end of this page.

   .. method:: on_receive(self)
      :abstractmethod:

      Override this function to define what the server does when it receives a message from a client.

   .. method:: update(self, model_list)
      :abstractmethod:

      Override this function to update the global model.

      :param model_list: A list of model parameters serialized by :func:`ravel_model_params`
      :type model_list: list

   .. method:: buffer(self)
      :property:

   .. method:: model(self)
      :property:


.. class:: SyncParameterServerHandler(model, client_num_in_total, cuda=False, select_ratio=1.0, logger_path='server_handler.txt', logger_name='server handler')

   Bases: :class:`fedlab_core.server.handler.ParameterServerHandler`

   Synchronous parameter server handler.

   Backend of the synchronous parameter server: this class is responsible for the backend computation. A synchronous parameter server waits for every selected client to finish its local training process before starting the next FL round.

   :param model: Model used in this federation
   :type model: torch.nn.Module
   :param client_num_in_total: Total number of clients in this federation
   :type client_num_in_total: int
   :param cuda: Use GPUs or not
   :type cuda: bool
   :param select_ratio: ``select_ratio * client_num_in_total`` is the number of clients that join each FL round
   :type select_ratio: float

   .. method:: on_receive(self, sender, message_code, payload) -> None

      Define what the parameter server does when it receives a single client's message.

      :param sender: Rank of the sending client in the distributed group
      :type sender: int
      :param message_code: Agreement code defined in the :class:`MessageCode` class
      :type message_code: MessageCode
      :param payload: Serialized model parameters
      :type payload: torch.Tensor

   .. method:: select_clients(self)

      Return a list of randomly selected client ranks.

   .. method:: is_updated(self) -> bool

   .. method:: start_round(self)


.. class:: AsyncParameterServerHandler(model, cuda)

   Bases: :class:`fedlab_core.server.handler.ParameterServerHandler`

   Asynchronous parameter server handler.

   Updates the global model immediately after receiving a ParameterUpdate message.

   Paper: https://arxiv.org/abs/1903.03934

   :param model: Global model kept by the server
   :type model: torch.nn.Module
   :param cuda: Use GPUs or not
   :type cuda: bool

   .. method:: update(self, model_list)

      Update the global model.

      :param model_list: A list of model parameters serialized by :func:`ravel_model_params`
      :type model_list: list

   .. method:: on_receive(self, sender, message_code, parameter)

      Define what the server does when it receives a message from a client.
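
.. rubric:: Usage sketches

The following is a minimal, hypothetical sketch of a self-defined handler built only from the interface documented above (the ``(model, cuda=False)`` constructor, :meth:`on_receive`, and :meth:`update`). The averaging step, the caching logic, and the omitted write-back of the result into ``self.model`` are illustrative assumptions, not FedLab's actual aggregation code.

.. code-block:: python

   import torch

   from fedlab_core.server.handler import ParameterServerHandler


   class AveragingHandler(ParameterServerHandler):
       """Hypothetical handler that averages every serialized model it receives."""

       def __init__(self, model, cuda=False):
           super().__init__(model, cuda=cuda)
           self._cache = []  # serialized parameter vectors received so far

       def on_receive(self, sender, message_code, payload):
           # ``payload`` is expected to be a 1-D tensor produced by
           # ravel_model_params(); checking ``message_code`` is omitted here.
           self._cache.append(payload)
           self.update(self._cache)
           self._cache = []

       def update(self, model_list):
           # Element-wise average of the serialized parameter vectors.
           averaged = torch.mean(torch.stack(model_list), dim=0)
           # Writing ``averaged`` back into self.model depends on FedLab's
           # (de)serialization helpers and is intentionally left out.
           return averaged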
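
A hedged usage sketch of :class:`SyncParameterServerHandler` follows. The toy model, the argument values, and the assumption that :meth:`start_round` resets per-round state are illustrative only; the network layer that actually delivers ``sender``, ``message_code``, and ``payload`` is not shown.

.. code-block:: python

   import torch

   from fedlab_core.server.handler import SyncParameterServerHandler

   # A toy global model; any torch.nn.Module works per the documentation above.
   model = torch.nn.Linear(10, 2)

   handler = SyncParameterServerHandler(model,
                                        client_num_in_total=100,
                                        cuda=False,
                                        select_ratio=0.1)

   selected_ranks = handler.select_clients()   # ranks chosen for this round
   handler.start_round()                       # assumed to reset per-round state

   # The network backend (not shown) would then call, once per selected client:
   #     handler.on_receive(sender, message_code, payload)
   # and the round is considered finished once handler.is_updated() is True.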