{"id":2720,"date":"2025-01-25T16:34:01","date_gmt":"2025-01-25T08:34:01","guid":{"rendered":"https:\/\/www.gnn.club\/?p=2720"},"modified":"2025-03-12T15:06:27","modified_gmt":"2025-03-12T07:06:27","slug":"tutorial-09-%e6%ae%8a%e9%80%94%e5%90%8c%e5%bd%92%ef%bc%9a%e8%81%94%e9%82%a6%e5%ad%a6%e4%b9%a0%ef%bc%88federated-learning%ef%bc%89","status":"publish","type":"post","link":"http:\/\/gnn.club\/?p=2720","title":{"rendered":"Tutorial 09 &#8211; All Roads Lead to the Same Goal: Federated Learning"},"content":{"rendered":"<h1>Learning Methods of Deep Learning<\/h1>\n<hr \/>\n<p>created by Deepfinder<\/p>\n<h3><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/bubbles\/50\/000000\/checklist.png\" style=\"height:50px;display:inline\"> Agenda<\/h3>\n<hr \/>\n<ol>\n<li>Learning from a Teacher: Supervised Learning<\/li>\n<li>Seeing the Big Picture from Small Clues: Unsupervised Learning<\/li>\n<li>Self-Taught Mastery: Self-supervised Learning<\/li>\n<li>From Points to the Whole: Semi-supervised Learning<\/li>\n<li>Telling Right from Wrong: Contrastive Learning<\/li>\n<li>Inferring the Many from the One: Transfer Learning<\/li>\n<li>Tit for Tat: Adversarial Learning<\/li>\n<li>Strength in Numbers: Ensemble Learning<\/li>\n<li><strong>All Roads Lead to the Same Goal: Federated Learning<\/strong><\/li>\n<li>Perseverance: Reinforcement Learning<\/li>\n<li>Thirst for Knowledge: Active Learning<\/li>\n<li>All Methods Return to One: Meta-Learning<\/li>\n<\/ol>\n<h2>Tutorial 09 - All Roads Lead to the Same Goal: Federated Learning<\/h2>\n<p>In today's era of big data, data is the core resource driving the development of artificial intelligence (AI) and machine learning (ML). However, the decentralized nature of data and the need for privacy protection pose major challenges to traditional centralized machine learning. Centralized training typically requires uploading all data to a central server, which not only creates a risk of privacy leakage but can also incur high costs for data transfer and storage. To address these problems, <strong>Federated Learning (FL)<\/strong> 
came into being.<\/p>\n<p>Federated learning is a distributed machine learning framework that allows multiple participants (such as mobile devices, companies, or institutions) to collaboratively train a global model without sharing their raw data. This approach not only protects data privacy but also makes full use of decentralized computing resources. This article introduces the basic concepts and technical advantages of federated learning, and demonstrates a practical application through a federated learning project based on the MNIST dataset.<\/p>\n<h2><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/dusk\/64\/000000\/popular-topic.png\" style=\"height:50px;display:inline\">  Basic Concepts of Federated Learning<\/h2>\n<hr \/>\n<p>The core idea of federated learning is that <strong>the data stays put while the model moves<\/strong>. Concretely, federated learning involves the following key steps:<\/p>\n<ol>\n<li>Local training:<\/li>\n<\/ol>\n<ul>\n<li>\n<p>Each participant (client) trains a model locally on its own data.<\/p>\n<\/li>\n<li>\n<p>After training, each client sends its model update (e.g., weights or gradients) to the central server.<\/p>\n<\/li>\n<\/ul>\n<ol 
start=\"2\">\n<li>Model aggregation:<\/li>\n<\/ol>\n<ul>\n<li>The central server collects the model updates from all clients and produces a global model via an aggregation algorithm such as Federated Averaging (FedAvg).<\/li>\n<\/ul>\n<ol start=\"3\">\n<li>Model distribution:<\/li>\n<\/ol>\n<ul>\n<li>\n<p>The central server distributes the updated global model to all clients.<\/p>\n<\/li>\n<li>\n<p>Each client continues local training from the new global model.<\/p>\n<\/li>\n<\/ul>\n<p>Over many iterations the global model gradually converges, ultimately reaching performance comparable to centralized training.<\/p>\n<h2><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/cute-clipart\/64\/000000\/task.png\" style=\"height:50px;display:inline\"> Technical Advantages of Federated Learning<\/h2>\n<hr \/>\n<ol>\n<li>Privacy protection:<\/li>\n<\/ol>\n<ul>\n<li>\n<p>Federated learning does not require uploading raw data to a central server, avoiding the risk of data leakage.<\/p>\n<\/li>\n<li>\n<p>Techniques such as differential privacy and homomorphic encryption can further strengthen privacy protection.<\/p>\n<\/li>\n<\/ul>\n<ol start=\"2\">\n<li>Diverse data distributions:<\/li>\n<\/ol>\n<ul>\n<li>Federated learning can handle non-independent and identically distributed (Non-IID) data, adapting to the diversity of real-world data.<\/li>\n<\/ul>\n<ol 
start=\"3\">\n<li>Efficient use of resources:<\/li>\n<\/ol>\n<ul>\n<li>Federated learning makes full use of client-side compute, reducing the load on the central server.<\/li>\n<\/ul>\n<ol start=\"4\">\n<li>Regulatory compliance:<\/li>\n<\/ol>\n<ul>\n<li>Federated learning complies with data-privacy regulations such as the GDPR, making it suitable for domains with strict privacy requirements such as healthcare and finance.<\/li>\n<\/ul>\n<h2><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/dusk\/64\/000000\/lego-head.png\" style=\"height:50px;display:inline\"> Application Scenarios of Federated Learning<\/h2>\n<hr \/>\n<p>Federated learning has broad application prospects in many fields, including but not limited to:<\/p>\n<ul>\n<li>\n<p>Healthcare: different hospitals can collaboratively train disease-diagnosis models without sharing patient data.<\/p>\n<\/li>\n<li>\n<p>Financial risk control: banks and financial institutions can jointly train credit-scoring models while protecting customer privacy.<\/p>\n<\/li>\n<li>\n<p>Smart devices: smartphones and smart-home devices can train personalized models locally to improve the user experience.<\/p>\n<\/li>\n<li>\n<p>Smart cities: sensors and devices across a city can collaboratively train models for traffic-flow prediction, environmental monitoring, and more.<\/p>\n<\/li>\n<\/ul>\n<h2><img decoding=\"async\" 
src=\"https:\/\/img.icons8.com\/color\/96\/000000\/tweezers.png\" style=\"height:50px;display:inline\"> A Federated Learning Project on the MNIST Dataset<\/h2>\n<hr \/>\n<p>To help readers better understand how federated learning is implemented, I built a federated learning project based on the MNIST dataset. MNIST is a classic handwritten-digit recognition dataset containing 60,000 training images and 10,000 test images. In this project, we simulate multiple clients (such as mobile devices or institutions) collaboratively training a handwritten-digit recognition model.<\/p>\n<p><strong>Project highlights:<\/strong><\/p>\n<ul>\n<li>\n<p>Data partitioning: the MNIST dataset is split into several subsets, each assigned to one client, simulating a real-world data distribution.<\/p>\n<\/li>\n<li>\n<p>Local training: each client trains a model locally on its own data and sends the model update to the central server.<\/p>\n<\/li>\n<li>\n<p>Model aggregation: the central server aggregates the client updates with Federated Averaging to produce the global model.<\/p>\n<\/li>\n<li>\n<p>Model evaluation: after each federated learning iteration, the global model is evaluated on the test set.<\/p>\n<\/li>\n<\/ul>\n<p><strong>Technical implementation:<\/strong><\/p>\n<ul>\n<li>\n<p>A neural network model built with the PyTorch framework.<\/p>\n<\/li>\n<li>\n<p>Model aggregation implemented with the Federated Averaging algorithm.<\/p>\n<\/li>\n<li>\n<p>The accuracy of the global model improves progressively over multiple iterations.<\/p>\n<\/li>\n<\/ul>\n<pre><code class=\"language-python\">import numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom torchvision import datasets, transforms\n\n# Load the MNIST dataset\ntrain_dataset = datasets.MNIST(root=&#039;.\/data&#039;, train=True, download=True, transform=transforms.ToTensor())\ntest_dataset = datasets.MNIST(root=&#039;.\/data&#039;, train=False, download=True, transform=transforms.ToTensor())\n\n# Convert to NumPy arrays and scale pixel values to [0, 1]\nxTrain = train_dataset.data.numpy().reshape(-1, 784) \/ 255.0\nyTrain = train_dataset.targets.numpy()  # already class indices\nxTest = test_dataset.data.numpy().reshape(-1, 784) \/ 255.0\nyTest = test_dataset.targets.numpy()    # already class indices\n\n# Global parameters\nbatch_size = 64\nepochs = 5\n\n# Convert the data to PyTorch tensors\nxTrain_tensor = torch.tensor(xTrain, dtype=torch.float32)\nyTrain_tensor = torch.tensor(yTrain, dtype=torch.long)  # torch.long for class indices\nxTest_tensor = torch.tensor(xTest, dtype=torch.float32)\nyTest_tensor = torch.tensor(yTest, dtype=torch.long)    # torch.long for class indices\n\n# Create DataLoaders\ntrain_dataset = 
TensorDataset(xTrain_tensor, yTrain_tensor)\ntrain_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)\ntest_dataset = TensorDataset(xTest_tensor, yTest_tensor)\ntest_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)\n\n# Model definition\nclass DeepModel(nn.Module):\n    def __init__(self):\n        super(DeepModel, self).__init__()\n        self.fc1 = nn.Linear(784, 64)\n        self.fc2 = nn.Linear(64, 10)\n        self.relu = nn.ReLU()\n\n    def forward(self, x):\n        x = self.relu(self.fc1(x))\n        x = self.fc2(x)\n        return x\n\n# Training function\ndef train(model, train_loader, criterion, optimizer, epochs):\n    history = {&#039;accuracy&#039;: [], &#039;val_accuracy&#039;: [], &#039;loss&#039;: [], &#039;val_loss&#039;: []}\n    for epoch in range(epochs):\n        model.train()\n        running_loss = 0.0\n        correct = 0\n        total = 0\n        for inputs, labels in train_loader:\n            optimizer.zero_grad()\n            outputs = model(inputs)\n            loss = criterion(outputs, labels)  # labels are class indices\n            loss.backward()\n            optimizer.step()\n            running_loss += loss.item()\n            _, predicted = torch.max(outputs.data, 1)  # predicted class indices\n            total += labels.size(0)\n            correct += (predicted == labels).sum().item()  # compare class indices directly\n        epoch_loss = running_loss \/ len(train_loader)\n        epoch_accuracy = correct \/ total\n        history[&#039;loss&#039;].append(epoch_loss)\n        history[&#039;accuracy&#039;].append(epoch_accuracy)\n        print(f&#039;Epoch {epoch + 1}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}&#039;)\n    return history\n\n# Initialize the model, loss function, and optimizer\nnonFmodel = DeepModel()\ncriterion = nn.CrossEntropyLoss()\noptimizer = 
optim.Adam(nonFmodel.parameters(), lr=0.0001)\n\n# Train the model\nhistory = train(nonFmodel, train_loader, criterion, optimizer, epochs)<\/code><\/pre>\n<pre><code>Epoch 1, Loss: 1.0643141178179905, Accuracy: 0.7744\nEpoch 2, Loss: 0.4311890813238077, Accuracy: 0.8908833333333334\nEpoch 3, Loss: 0.3416135678889909, Accuracy: 0.9067833333333334\nEpoch 4, Loss: 0.3032634595532153, Accuracy: 0.9153333333333333\nEpoch 5, Loss: 0.2790384936148424, Accuracy: 0.9215333333333333<\/code><\/pre>\n<pre><code class=\"language-python\">\nnumOfClients = 5  # number of clients\nnumOfIterations = 5  # number of federated learning iterations\nclientDataInterval = len(xTrain) \/\/ numOfClients  # samples per client<\/code><\/pre>\n<pre><code class=\"language-python\">\n# Partition the training data across clients\nxClientsList = []\nyClientsList = []\nfor clientID in range(numOfClients):\n    start = clientID * clientDataInterval\n    end = start + clientDataInterval\n    xClientsList.append(xTrain_tensor[start:end])\n    yClientsList.append(yTrain_tensor[start:end])<\/code><\/pre>\n<pre><code class=\"language-python\">import matplotlib.pyplot as plt\n\ndef plot_client_data_distribution(yClientsList, numOfClients):\n    plt.figure(figsize=(15, 10))\n    for clientID in range(numOfClients):\n        # Count the samples in each class\n        class_counts = np.bincount(yClientsList[clientID].numpy(), minlength=10)\n\n        # Draw a bar chart\n        plt.subplot(2, 3, clientID + 1)  # 2 x 3 subplot layout\n        plt.bar(range(10), class_counts, color=&#039;skyblue&#039;)\n        plt.title(f&#039;Client {clientID + 1} Data Distribution&#039;)\n        plt.xlabel(&#039;Class&#039;)\n        plt.ylabel(&#039;Number of Samples&#039;)\n        plt.xticks(range(10))  # x-axis ticks 0-9\n    plt.tight_layout()\n    plt.show()\n\n# 
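A label-skewed (non-IID) partition could be sketched with a Dirichlet split\n# (hypothetical alternative; this tutorial keeps its simple contiguous split):\ndef dirichlet_split(y, num_clients, alpha=0.5):\n    # For each digit class, spread its sample indices across clients with\n    # Dirichlet-distributed proportions; smaller alpha means stronger skew.\n    idx_by_class = [np.where(y == c)[0] for c in range(10)]\n    client_idx = [[] for _ in range(num_clients)]\n    for idx in idx_by_class:\n        props = np.random.dirichlet([alpha] * num_clients)\n        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)\n        for i, part in enumerate(np.split(idx, cuts)):\n            client_idx[i].extend(part.tolist())\n    return client_idx\n\n# 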
Visualize the per-client class distribution\nplot_client_data_distribution(yClientsList, numOfClients)<\/code><\/pre>\n<p align=\"center\">\n  <img decoding=\"async\" src=\"https:\/\/gnnclub-1311496010.cos.ap-beijing.myqcloud.com\/wp-content\/uploads\/2025\/01\/20250125163322874.png\" style=\"height:500px\">\n<\/p>\n<pre><code class=\"language-python\">\nclientsModelList = []\nfor clientID in range(numOfClients):\n    model = DeepModel()\n    model.load_state_dict(nonFmodel.state_dict())  # load the initial weights from the server\n    clientsModelList.append(model)<\/code><\/pre>\n<pre><code class=\"language-python\">\ndef federated_learning(server_model, clientsModelList, xClientsList, yClientsList, numOfIterations, batch_size, criterion):\n    for iteration in range(numOfIterations):\n        print(f&quot;Iteration {iteration + 1}\/{numOfIterations}&quot;)\n\n        # Local training on each client\n        client_weights = []\n        for clientID in range(numOfClients):\n            print(f&quot;Training client {clientID + 1}\/{numOfClients}&quot;)\n            client_model = clientsModelList[clientID]\n            client_model.train()  # set the model to training mode\n\n            # Create an independent optimizer for each client\n            client_optimizer = optim.Adam(client_model.parameters(), lr=0.0001)\n\n            # Create a data loader for this client\n            client_dataset = TensorDataset(xClientsList[clientID], yClientsList[clientID])\n            client_loader = DataLoader(client_dataset, 
batch_size=batch_size, shuffle=True)\n\n            # Local training loop for this client\n            for epoch in range(10):  # each client trains for 10 epochs\n                running_loss = 0.0\n                correct = 0\n                total = 0\n                for inputs, labels in client_loader:\n                    client_optimizer.zero_grad()\n                    outputs = client_model(inputs)\n                    loss = criterion(outputs, labels)\n                    loss.backward()\n                    client_optimizer.step()\n\n                    # Accumulate training metrics\n                    running_loss += loss.item()\n                    _, predicted = torch.max(outputs.data, 1)\n                    total += labels.size(0)\n                    correct += (predicted == labels).sum().item()\n\n                # Report training progress for this client\n                epoch_loss = running_loss \/ len(client_loader)\n                epoch_accuracy = correct \/ total\n                print(f&quot;Client {clientID + 1}, Epoch {epoch + 1}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}&quot;)\n\n            # Save the client weights\n            client_weights.append(client_model.state_dict())\n\n        # Server-side weight aggregation (FedAvg)\n        print(&quot;Aggregating client weights...&quot;)\n        avg_weights = {}\n        for key in client_weights[0].keys():\n            avg_weights[key] = torch.stack([client_weights[i][key] for i in range(numOfClients)]).mean(0)\n\n        # Update the server model\n        server_model.load_state_dict(avg_weights)\n\n        # Update the client models\n        for clientID in range(numOfClients):\n            clientsModelList[clientID].load_state_dict(server_model.state_dict())\n\n        # 
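Note: with equal-sized client splits, the plain mean above already equals\n        # sample-size-weighted FedAvg; with unequal splits one would weight each\n        # client update (hypothetical sketch, n_k = samples on client k, n = total):\n        #     avg_weights[key] = sum((n_k[i] \/ n) * client_weights[i][key] for i in range(numOfClients))\n\n        # 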
Evaluate the server model on the test set\n        server_model.eval()\n        correct = 0\n        total = 0\n        with torch.no_grad():\n            for inputs, labels in test_loader:\n                outputs = server_model(inputs)\n                _, predicted = torch.max(outputs.data, 1)\n                total += labels.size(0)\n                correct += (predicted == labels).sum().item()\n        accuracy = correct \/ total\n        print(f&quot;Server model accuracy after iteration {iteration + 1}: {accuracy:.4f}&quot;)\n\n# Initialize the server model\nserver_model = DeepModel()\nserver_model.load_state_dict(nonFmodel.state_dict())\n\n# Run federated learning (the function takes no optimizer argument;\n# each client creates its own optimizer inside the loop)\nfederated_learning(server_model, clientsModelList, xClientsList, yClientsList, numOfIterations, batch_size, criterion)<\/code><\/pre>\n<pre><code>Iteration 1\/5\nTraining client 1\/5\nClient 1, Epoch 1, Loss: 0.25442665502270484, Accuracy: 0.9296666666666666\nClient 1, Epoch 2, Loss: 0.24710868604164174, Accuracy: 0.9306666666666666\nClient 1, Epoch 3, Loss: 0.2416693331237803, Accuracy: 0.93275\nClient 1, Epoch 4, Loss: 0.23599671370330008, Accuracy: 0.9344166666666667\nClient 1, Epoch 5, Loss: 0.23226321789812535, Accuracy: 0.9355833333333333\nClient 1, Epoch 6, Loss: 0.22663887760582122, Accuracy: 0.9370833333333334\nClient 1, Epoch 7, Loss: 0.22229622411442565, Accuracy: 0.9388333333333333\nClient 1, Epoch 8, Loss: 0.21833917011130363, Accuracy: 0.9390833333333334\nClient 1, Epoch 9, Loss: 0.21452935036034027, Accuracy: 0.9404166666666667\nClient 1, Epoch 10, Loss: 0.21081844095061433, Accuracy: 0.9420833333333334\nTraining client 2\/5\nClient 2, Epoch 1, Loss: 0.2733165322545361, Accuracy: 0.92425\nClient 2, Epoch 2, Loss: 0.2656372262838673, Accuracy: 0.9265\nClient 2, Epoch 3, Loss: 0.2590672530709429, Accuracy: 0.9293333333333333\nClient 2, Epoch 4, Loss: 0.25394435714375463, Accuracy: 0.9298333333333333\nClient 
2, Epoch 5, Loss: 0.24873139519006648, Accuracy: 0.9321666666666667\nClient 2, Epoch 6, Loss: 0.24368406511209112, Accuracy: 0.9338333333333333\nClient 2, Epoch 7, Loss: 0.23849685680358967, Accuracy: 0.9346666666666666\nClient 2, Epoch 8, Loss: 0.2351764801572612, Accuracy: 0.9356666666666666\nClient 2, Epoch 9, Loss: 0.22973585976882183, Accuracy: 0.9376666666666666\nClient 2, Epoch 10, Loss: 0.22543375182183498, Accuracy: 0.9383333333333334\nTraining client 3\/5\nClient 3, Epoch 1, Loss: 0.2700467231742879, Accuracy: 0.92275\nClient 3, Epoch 2, Loss: 0.26203744765371084, Accuracy: 0.9248333333333333\nClient 3, Epoch 3, Loss: 0.25573475979902643, Accuracy: 0.9265\nClient 3, Epoch 4, Loss: 0.2504164947870564, Accuracy: 0.9281666666666667\nClient 3, Epoch 5, Loss: 0.24430735924459518, Accuracy: 0.931\nClient 3, Epoch 6, Loss: 0.23942389398654726, Accuracy: 0.93175\nClient 3, Epoch 7, Loss: 0.23514361727110883, Accuracy: 0.9331666666666667\nClient 3, Epoch 8, Loss: 0.23075153888698588, Accuracy: 0.9346666666666666\nClient 3, Epoch 9, Loss: 0.2257772829145827, Accuracy: 0.9359166666666666\nClient 3, Epoch 10, Loss: 0.22238441370427608, Accuracy: 0.9375833333333333\nTraining client 4\/5\nClient 4, Epoch 1, Loss: 0.2802324682236352, Accuracy: 0.91925\nClient 4, Epoch 2, Loss: 0.2726831494414426, Accuracy: 0.9224166666666667\nClient 4, Epoch 3, Loss: 0.26736432238620644, Accuracy: 0.9245833333333333\nClient 4, Epoch 4, Loss: 0.2624769541335867, Accuracy: 0.9251666666666667\nClient 4, Epoch 5, Loss: 0.2568098053415405, Accuracy: 0.92675\nClient 4, Epoch 6, Loss: 0.25263245514732724, Accuracy: 0.928\nClient 4, Epoch 7, Loss: 0.24686859892879395, Accuracy: 0.9305\nClient 4, Epoch 8, Loss: 0.24287960662486704, Accuracy: 0.93125\nClient 4, Epoch 9, Loss: 0.23832704166465618, Accuracy: 0.9328333333333333\nClient 4, Epoch 10, Loss: 0.23368976726890245, Accuracy: 0.934\nTraining client 5\/5\nClient 5, Epoch 1, Loss: 0.2474874449457894, Accuracy: 0.93275\nClient 5, Epoch 2, 
Loss: 0.2405232391221092, Accuracy: 0.9339166666666666\nClient 5, Epoch 3, Loss: 0.235105116118459, Accuracy: 0.93575\nClient 5, Epoch 4, Loss: 0.22996057994029623, Accuracy: 0.9363333333333334\nClient 5, Epoch 5, Loss: 0.2248192489979432, Accuracy: 0.939\nClient 5, Epoch 6, Loss: 0.22043576741472204, Accuracy: 0.9399166666666666\nClient 5, Epoch 7, Loss: 0.21599398069876305, Accuracy: 0.9408333333333333\nClient 5, Epoch 8, Loss: 0.212206183318445, Accuracy: 0.9423333333333334\nClient 5, Epoch 9, Loss: 0.20817280465618093, Accuracy: 0.9435833333333333\nClient 5, Epoch 10, Loss: 0.2040201946300395, Accuracy: 0.94375\nAggregating client weights...\nServer model accuracy after iteration 1: 0.9333\nIteration 2\/5\nTraining client 1\/5\nClient 1, Epoch 1, Loss: 0.2245550649835074, Accuracy: 0.937\nClient 1, Epoch 2, Loss: 0.21859113547079106, Accuracy: 0.9384166666666667\nClient 1, Epoch 3, Loss: 0.21327925455617777, Accuracy: 0.9403333333333334\nClient 1, Epoch 4, Loss: 0.20996919222810168, Accuracy: 0.9416666666666667\nClient 1, Epoch 5, Loss: 0.20513910768513985, Accuracy: 0.9428333333333333\nClient 1, Epoch 6, Loss: 0.200950890343557, Accuracy: 0.9446666666666667\nClient 1, Epoch 7, Loss: 0.19713345411768618, Accuracy: 0.9455\nClient 1, Epoch 8, Loss: 0.19456246620083742, Accuracy: 0.9464166666666667\nClient 1, Epoch 9, Loss: 0.1904639073033282, Accuracy: 0.9464166666666667\nClient 1, Epoch 10, Loss: 0.18692744266677727, Accuracy: 0.949\nTraining client 2\/5\nClient 2, Epoch 1, Loss: 0.24080706754342673, Accuracy: 0.93275\nClient 2, Epoch 2, Loss: 0.23518067392263006, Accuracy: 0.9343333333333333\nClient 2, Epoch 3, Loss: 0.229269724299616, Accuracy: 0.937\nClient 2, Epoch 4, Loss: 0.22364443861582178, Accuracy: 0.9390833333333334\nClient 2, Epoch 5, Loss: 0.21968471776059967, Accuracy: 0.9405833333333333\nClient 2, Epoch 6, Loss: 0.21481007690283846, Accuracy: 0.9416666666666667\nClient 2, Epoch 7, Loss: 0.2107157273653974, Accuracy: 0.943\nClient 2, Epoch 8, Loss: 
0.20644662140848788, Accuracy: 0.9444166666666667\nClient 2, Epoch 9, Loss: 0.20390446634685738, Accuracy: 0.946\nClient 2, Epoch 10, Loss: 0.19998429164765996, Accuracy: 0.9465\nTraining client 3\/5\nClient 3, Epoch 1, Loss: 0.2376840992414571, Accuracy: 0.931\nClient 3, Epoch 2, Loss: 0.23073043036175536, Accuracy: 0.9335\nClient 3, Epoch 3, Loss: 0.2251918563975933, Accuracy: 0.9354166666666667\nClient 3, Epoch 4, Loss: 0.22021012024042455, Accuracy: 0.9363333333333334\nClient 3, Epoch 5, Loss: 0.216222530983864, Accuracy: 0.93825\nClient 3, Epoch 6, Loss: 0.21245101407328817, Accuracy: 0.94025\nClient 3, Epoch 7, Loss: 0.20791046107386021, Accuracy: 0.9410833333333334\nClient 3, Epoch 8, Loss: 0.2038806156116597, Accuracy: 0.9411666666666667\nClient 3, Epoch 9, Loss: 0.1996750899174429, Accuracy: 0.94225\nClient 3, Epoch 10, Loss: 0.19675296338948797, Accuracy: 0.94375\nTraining client 4\/5\nClient 4, Epoch 1, Loss: 0.2494982845605688, Accuracy: 0.9278333333333333\nClient 4, Epoch 2, Loss: 0.24326234833991273, Accuracy: 0.9310833333333334\nClient 4, Epoch 3, Loss: 0.23730852664943705, Accuracy: 0.9320833333333334\nClient 4, Epoch 4, Loss: 0.23344911202946875, Accuracy: 0.93325\nClient 4, Epoch 5, Loss: 0.22863681875961891, Accuracy: 0.9339166666666666\nClient 4, Epoch 6, Loss: 0.22455686259459942, Accuracy: 0.936\nClient 4, Epoch 7, Loss: 0.22062719137744702, Accuracy: 0.9373333333333334\nClient 4, Epoch 8, Loss: 0.21771507274280202, Accuracy: 0.9378333333333333\nClient 4, Epoch 9, Loss: 0.21245987919416834, Accuracy: 0.9398333333333333\nClient 4, Epoch 10, Loss: 0.21053638400391061, Accuracy: 0.9401666666666667\nTraining client 5\/5\nClient 5, Epoch 1, Loss: 0.21783870391230634, Accuracy: 0.9395\nClient 5, Epoch 2, Loss: 0.21219479023142063, Accuracy: 0.94175\nClient 5, Epoch 3, Loss: 0.2078793477861488, Accuracy: 0.9428333333333333\nClient 5, Epoch 4, Loss: 0.2033630972133672, Accuracy: 0.944\nClient 5, Epoch 5, Loss: 0.19981738498949625, Accuracy: 
0.9454166666666667
Client 5, Epoch 6, Loss: 0.19510637055289873, Accuracy: 0.9460833333333334
Client 5, Epoch 7, Loss: 0.19138122813657243, Accuracy: 0.94775
Client 5, Epoch 8, Loss: 0.18793742850105813, Accuracy: 0.948
Client 5, Epoch 9, Loss: 0.1842302670900492, Accuracy: 0.9501666666666667
Client 5, Epoch 10, Loss: 0.18058367008145185, Accuracy: 0.9510833333333333
Aggregating client weights...
Server model accuracy after iteration 2: 0.9389
Iteration 3/5
Training client 1/5
Client 1, Epoch 1, Loss: 0.20072802946843366, Accuracy: 0.9433333333333334
Client 1, Epoch 2, Loss: 0.19523284259311696, Accuracy: 0.9449166666666666
Client 1, Epoch 3, Loss: 0.1907785619668504, Accuracy: 0.9460833333333334
Client 1, Epoch 4, Loss: 0.18718989110214912, Accuracy: 0.9473333333333334
Client 1, Epoch 5, Loss: 0.1834891686176366, Accuracy: 0.9486666666666667
Client 1, Epoch 6, Loss: 0.17927925534387854, Accuracy: 0.9499166666666666
Client 1, Epoch 7, Loss: 0.17652998454472485, Accuracy: 0.9508333333333333
Client 1, Epoch 8, Loss: 0.1733978860119873, Accuracy: 0.9509166666666666
Client 1, Epoch 9, Loss: 0.1697270419607137, Accuracy: 0.9535833333333333
Client 1, Epoch 10, Loss: 0.1681383055971658, Accuracy: 0.9544166666666667
Training client 2/5
Client 2, Epoch 1, Loss: 0.2165118505028968, Accuracy: 0.9409166666666666
Client 2, Epoch 2, Loss: 0.20965290901825784, Accuracy: 0.9415833333333333
Client 2, Epoch 3, Loss: 0.2045787629532687, Accuracy: 0.9438333333333333
Client 2, Epoch 4, Loss: 0.20037631987732776, Accuracy: 0.94475
Client 2, Epoch 5, Loss: 0.1965011071334494, Accuracy: 0.9459166666666666
Client 2, Epoch 6, Loss: 0.19215396948238003, Accuracy: 0.9475833333333333
Client 2, Epoch 7, Loss: 0.18841116536567185, Accuracy: 0.9483333333333334
Client 2, Epoch 8, Loss: 0.18482514050729731, Accuracy: 0.9500833333333333
Client 2, Epoch 9, Loss: 0.18158305036102204, Accuracy: 0.9501666666666667
Client 2, Epoch 10, Loss: 0.17805842421156295, Accuracy: 0.9510833333333333
Training client 3/5
Client 3, Epoch 1, Loss: 0.21235425552313633, Accuracy: 0.9388333333333333
Client 3, Epoch 2, Loss: 0.2068075367269364, Accuracy: 0.9404166666666667
Client 3, Epoch 3, Loss: 0.20202833548822302, Accuracy: 0.94275
Client 3, Epoch 4, Loss: 0.19758986047607788, Accuracy: 0.9435833333333333
Client 3, Epoch 5, Loss: 0.19325346590832193, Accuracy: 0.9450833333333334
Client 3, Epoch 6, Loss: 0.18987077813437012, Accuracy: 0.9454166666666667
Client 3, Epoch 7, Loss: 0.18642294735826076, Accuracy: 0.9474166666666667
Client 3, Epoch 8, Loss: 0.1833710680577032, Accuracy: 0.9479166666666666
Client 3, Epoch 9, Loss: 0.17899490708604138, Accuracy: 0.9490833333333333
Client 3, Epoch 10, Loss: 0.17610458298487233, Accuracy: 0.9495833333333333
Training client 4/5
Client 4, Epoch 1, Loss: 0.2249332496777494, Accuracy: 0.9351666666666667
Client 4, Epoch 2, Loss: 0.2192696023217224, Accuracy: 0.9378333333333333
Client 4, Epoch 3, Loss: 0.21398302702669134, Accuracy: 0.9386666666666666
Client 4, Epoch 4, Loss: 0.20985325259414125, Accuracy: 0.9398333333333333
Client 4, Epoch 5, Loss: 0.20572651407503068, Accuracy: 0.9406666666666667
Client 4, Epoch 6, Loss: 0.20195866883435148, Accuracy: 0.942
Client 4, Epoch 7, Loss: 0.19861821798568077, Accuracy: 0.9433333333333334
Client 4, Epoch 8, Loss: 0.19404628456748546, Accuracy: 0.9439166666666666
Client 4, Epoch 9, Loss: 0.1925215980315462, Accuracy: 0.9455
Client 4, Epoch 10, Loss: 0.1880613338558915, Accuracy: 0.9474166666666667
Training client 5/5
Client 5, Epoch 1, Loss: 0.19557195037920425, Accuracy: 0.9460833333333334
Client 5, Epoch 2, Loss: 0.19003618328257443, Accuracy: 0.9485
Client 5, Epoch 3, Loss: 0.18576712578416188, Accuracy: 0.9485
Client 5, Epoch 4, Loss: 0.18181784708607704, Accuracy: 0.9504166666666667
Client 5, Epoch 5, Loss: 0.17832311876910797, Accuracy: 0.9510833333333333
Client 5, Epoch 6, Loss: 0.1748609929444625, Accuracy: 0.9520833333333333
Client 5, Epoch 7, Loss: 0.1714997968458115, Accuracy: 0.9528333333333333
Client 5, Epoch 8, Loss: 0.16825064208279264, Accuracy: 0.9541666666666667
Client 5, Epoch 9, Loss: 0.16515613157064357, Accuracy: 0.9548333333333333
Client 5, Epoch 10, Loss: 0.16213883511087995, Accuracy: 0.95525
Aggregating client weights...
Server model accuracy after iteration 3: 0.9424
Iteration 4/5
Training client 1/5
Client 1, Epoch 1, Loss: 0.182429523088355, Accuracy: 0.9478333333333333
Client 1, Epoch 2, Loss: 0.17713796456364242, Accuracy: 0.9498333333333333
Client 1, Epoch 3, Loss: 0.1725572363691444, Accuracy: 0.9515
Client 1, Epoch 4, Loss: 0.1691359833992542, Accuracy: 0.95275
Client 1, Epoch 5, Loss: 0.1652309938353744, Accuracy: 0.9541666666666667
Client 1, Epoch 6, Loss: 0.16237769654377343, Accuracy: 0.955
Client 1, Epoch 7, Loss: 0.15893117510812713, Accuracy: 0.9565833333333333
Client 1, Epoch 8, Loss: 0.15686692723489187, Accuracy: 0.95725
Client 1, Epoch 9, Loss: 0.15326355835621028, Accuracy: 0.9585833333333333
Client 1, Epoch 10, Loss: 0.1505941345574374, Accuracy: 0.9595
Training client 2/5
Client 2, Epoch 1, Loss: 0.1952190003060597, Accuracy: 0.9456666666666667
Client 2, Epoch 2, Loss: 0.18929978812787127, Accuracy: 0.9479166666666666
Client 2, Epoch 3, Loss: 0.18436487199381946, Accuracy: 0.94925
Client 2, Epoch 4, Loss: 0.1802316410268875, Accuracy: 0.9505833333333333
Client 2, Epoch 5, Loss: 0.17599707110685872, Accuracy: 0.9511666666666667
Client 2, Epoch 6, Loss: 0.17309348406071992, Accuracy: 0.9530833333333333
Client 2, Epoch 7, Loss: 0.16917347275909592, Accuracy: 0.9535833333333333
Client 2, Epoch 8, Loss: 0.16606444897169761, Accuracy: 0.9543333333333334
Client 2, Epoch 9, Loss: 0.16287051163058966, Accuracy: 0.9555
Client 2, Epoch 10, Loss: 0.15984033815007895, Accuracy: 0.95725
Training client 3/5
Client 3, Epoch 1, Loss: 0.1918950032323916, Accuracy: 0.9446666666666667
Client 3, Epoch 2, Loss: 0.18673791805718173, Accuracy: 0.9458333333333333
Client 3, Epoch 3, Loss: 0.18174212433873338, Accuracy: 0.9470833333333334
Client 3, Epoch 4, Loss: 0.17863007559579738, Accuracy: 0.94825
Client 3, Epoch 5, Loss: 0.1742238389486645, Accuracy: 0.9494166666666667
Client 3, Epoch 6, Loss: 0.17105837629989107, Accuracy: 0.95125
Client 3, Epoch 7, Loss: 0.16747696741305768, Accuracy: 0.9511666666666667
Client 3, Epoch 8, Loss: 0.16476664832852622, Accuracy: 0.95275
Client 3, Epoch 9, Loss: 0.1621082168706554, Accuracy: 0.9535
Client 3, Epoch 10, Loss: 0.15942862239527575, Accuracy: 0.9545
Training client 4/5
Client 4, Epoch 1, Loss: 0.20403501960112058, Accuracy: 0.9411666666666667
Client 4, Epoch 2, Loss: 0.19888011227421304, Accuracy: 0.9438333333333333
Client 4, Epoch 3, Loss: 0.19454329952280572, Accuracy: 0.944
Client 4, Epoch 4, Loss: 0.19040773374999456, Accuracy: 0.945
Client 4, Epoch 5, Loss: 0.18615327030420303, Accuracy: 0.9461666666666667
Client 4, Epoch 6, Loss: 0.18328818088357754, Accuracy: 0.94825
Client 4, Epoch 7, Loss: 0.17990211914590698, Accuracy: 0.9493333333333334
Client 4, Epoch 8, Loss: 0.17674055487472326, Accuracy: 0.94975
Client 4, Epoch 9, Loss: 0.17322728331101703, Accuracy: 0.9515833333333333
Client 4, Epoch 10, Loss: 0.17024758610715893, Accuracy: 0.9529166666666666
Training client 5/5
Client 5, Epoch 1, Loss: 0.1773396316677966, Accuracy: 0.9505833333333333
Client 5, Epoch 2, Loss: 0.17325909413952142, Accuracy: 0.952
Client 5, Epoch 3, Loss: 0.16790582005806426, Accuracy: 0.953
Client 5, Epoch 4, Loss: 0.16528973227089389, Accuracy: 0.9545
Client 5, Epoch 5, Loss: 0.1612872869925613, Accuracy: 0.955
Client 5, Epoch 6, Loss: 0.15799647805459321, Accuracy: 0.9560833333333333
Client 5, Epoch 7, Loss: 0.15557134692418448, Accuracy: 0.9565833333333333
Client 5, Epoch 8, Loss: 0.15191104774303893, Accuracy: 0.9580833333333333
Client 5, Epoch 9, Loss: 0.15004502205138512, Accuracy: 0.9583333333333334
Client 5, Epoch 10, Loss: 0.14679565506571152, Accuracy: 0.9599166666666666
Aggregating client weights...
Server model accuracy after iteration 4: 0.9472
Iteration 5/5
Training client 1/5
Client 1, Epoch 1, Loss: 0.1652052229904431, Accuracy: 0.9528333333333333
Client 1, Epoch 2, Loss: 0.16066185163056595, Accuracy: 0.9540833333333333
Client 1, Epoch 3, Loss: 0.1566948471392723, Accuracy: 0.9555
Client 1, Epoch 4, Loss: 0.15346477715734472, Accuracy: 0.9573333333333334
Client 1, Epoch 5, Loss: 0.15127332187554937, Accuracy: 0.9585833333333333
Client 1, Epoch 6, Loss: 0.1475321408757504, Accuracy: 0.9589166666666666
Client 1, Epoch 7, Loss: 0.1447430395600485, Accuracy: 0.9606666666666667
Client 1, Epoch 8, Loss: 0.14162614368932677, Accuracy: 0.9623333333333334
Client 1, Epoch 9, Loss: 0.13970148359286658, Accuracy: 0.96325
Client 1, Epoch 10, Loss: 0.13666962443831118, Accuracy: 0.9638333333333333
Training client 2/5
Client 2, Epoch 1, Loss: 0.1775019117135634, Accuracy: 0.95025
Client 2, Epoch 2, Loss: 0.1722347265545358, Accuracy: 0.9523333333333334
Client 2, Epoch 3, Loss: 0.1675563465130139, Accuracy: 0.9541666666666667
Client 2, Epoch 4, Loss: 0.1636965499913439, Accuracy: 0.9555833333333333
Client 2, Epoch 5, Loss: 0.16058857798417833, Accuracy: 0.95625
Client 2, Epoch 6, Loss: 0.1565185741303449, Accuracy: 0.95775
Client 2, Epoch 7, Loss: 0.15363066159981362, Accuracy: 0.9580833333333333
Client 2, Epoch 8, Loss: 0.15052510630042154, Accuracy: 0.95925
Client 2, Epoch 9, Loss: 0.14807047367967824, Accuracy: 0.9601666666666666
Client 2, Epoch 10, Loss: 0.14536668780319234, Accuracy: 0.9609166666666666
Training client 3/5
Client 3, Epoch 1, Loss: 0.17489542960724297, Accuracy: 0.9485833333333333
Client 3, Epoch 2, Loss: 0.169963705095839, Accuracy: 0.95025
Client 3, Epoch 3, Loss: 0.16578015096564877, Accuracy: 0.9523333333333334
Client 3, Epoch 4, Loss: 0.1619015270844102, Accuracy: 0.9533333333333334
Client 3, Epoch 5, Loss: 0.15859658156145126, Accuracy: 0.9540833333333333
Client 3, Epoch 6, Loss: 0.1554605971744403, Accuracy: 0.9546666666666667
Client 3, Epoch 7, Loss: 0.15221528948700808, Accuracy: 0.9549166666666666
Client 3, Epoch 8, Loss: 0.14939277788544905, Accuracy: 0.9568333333333333
Client 3, Epoch 9, Loss: 0.1468444204829792, Accuracy: 0.9566666666666667
Client 3, Epoch 10, Loss: 0.1444110000268259, Accuracy: 0.9573333333333334
Training client 4/5
Client 4, Epoch 1, Loss: 0.18703835338671157, Accuracy: 0.947
Client 4, Epoch 2, Loss: 0.18203374030108146, Accuracy: 0.9478333333333333
Client 4, Epoch 3, Loss: 0.17748043935825217, Accuracy: 0.9488333333333333
Client 4, Epoch 4, Loss: 0.17399417218613497, Accuracy: 0.9513333333333334
Client 4, Epoch 5, Loss: 0.16971268447393434, Accuracy: 0.9515
Client 4, Epoch 6, Loss: 0.1669555729294711, Accuracy: 0.95325
Client 4, Epoch 7, Loss: 0.1633991839047125, Accuracy: 0.95525
Client 4, Epoch 8, Loss: 0.16083770197756747, Accuracy: 0.9554166666666667
Client 4, Epoch 9, Loss: 0.15794647431516268, Accuracy: 0.9563333333333334
Client 4, Epoch 10, Loss: 0.15579183895061624, Accuracy: 0.9580833333333333
Training client 5/5
Client 5, Epoch 1, Loss: 0.16196638664745905, Accuracy: 0.954
Client 5, Epoch 2, Loss: 0.15741348214090822, Accuracy: 0.9563333333333334
Client 5, Epoch 3, Loss: 0.1541800645199862, Accuracy: 0.95725
Client 5, Epoch 4, Loss: 0.14980401459367984, Accuracy: 0.9586666666666667
Client 5, Epoch 5, Loss: 0.1487972610729172, Accuracy: 0.9594166666666667
Client 5, Epoch 6, Loss: 0.14405786252005937, Accuracy: 0.96075
Client 5, Epoch 7, Loss: 0.14075000449380975, Accuracy: 0.9608333333333333
Client 5, Epoch 8, Loss: 0.13871056750971586, Accuracy: 0.9618333333333333
Client 5, Epoch 9, Loss: 0.13590463492623034, Accuracy: 0.9630833333333333
Client 5, Epoch 10, Loss: 0.13408878715114392, Accuracy: 0.9641666666666666
Aggregating client weights...
Server model accuracy after iteration 5:
0.9516</code></pre>
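Each round in the log above ends with an "Aggregating client weights..." step before the server is re-evaluated. A minimal sketch of that federated-averaging (FedAvg) aggregation, using plain Python lists in place of model tensors — the function name `federated_average` and the toy parameter dict below are illustrative assumptions, not the tutorial's actual code:

```python
def federated_average(client_states):
    """Average a list of client parameter dicts (name -> list of floats).

    In a real PyTorch setup each value would be a tensor from a client's
    state_dict; here plain lists keep the sketch dependency-free.
    """
    n = len(client_states)
    averaged = {}
    for name in client_states[0]:
        size = len(client_states[0][name])
        # Element-wise mean across all clients for this parameter.
        averaged[name] = [
            sum(state[name][i] for state in client_states) / n
            for i in range(size)
        ]
    return averaged


# Toy example: 3 "clients", each holding one 2-element weight vector.
clients = [
    {"fc.weight": [1.0, 2.0]},
    {"fc.weight": [3.0, 4.0]},
    {"fc.weight": [5.0, 6.0]},
]
print(federated_average(clients))  # {'fc.weight': [3.0, 4.0]}
```

The server then loads the averaged parameters and evaluates on the held-out test set, which is what produces the "Server model accuracy after iteration k" lines: accuracy climbs from 0.9389 after iteration 2 to 0.9516 after iteration 5 even though no client ever shares its raw data.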