{"id":2673,"date":"2025-01-25T10:21:04","date_gmt":"2025-01-25T02:21:04","guid":{"rendered":"https:\/\/www.gnn.club\/?p=2673"},"modified":"2025-03-12T15:06:52","modified_gmt":"2025-03-12T07:06:52","slug":"tutorial-04-%e4%bb%a5%e7%82%b9%e5%b8%a6%e9%9d%a2%ef%bc%9a%e5%8d%8a%e7%9b%91%e7%9d%a3%e5%ad%a6%e4%b9%a0%ef%bc%88semi-supervised-learning%ef%bc%89","status":"publish","type":"post","link":"http:\/\/gnn.club\/?p=2673","title":{"rendered":"Tutorial 04 &#8211; \u4ee5\u70b9\u5e26\u9762\uff1a\u534a\u76d1\u7763\u5b66\u4e60\uff08Semi-supervised learning\uff09"},"content":{"rendered":"<h1>Learning Methods of Deep Learning<\/h1>\n<hr \/>\n<p>create by Deepfinder<\/p>\n<h3><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/bubbles\/50\/000000\/checklist.png\" style=\"height:50px;display:inline\"> Agenda<\/h3>\n<hr \/>\n<ol>\n<li>\u5e08\u5f92\u76f8\u6388\uff1a\u6709\u76d1\u7763\u5b66\u4e60\uff08Supervised Learning\uff09<\/li>\n<li>\u89c1\u5fae\u77e5\u8457\uff1a\u65e0\u76d1\u7763\u5b66\u4e60\uff08Un-supervised Learning\uff09<\/li>\n<li>\u65e0\u5e08\u81ea\u901a\uff1a\u81ea\u76d1\u7763\u5b66\u4e60\uff08Self-supervised Learning\uff09<\/li>\n<li><strong>\u4ee5\u70b9\u5e26\u9762\uff1a\u534a\u76d1\u7763\u5b66\u4e60\uff08Semi-supervised learning\uff09<\/strong><\/li>\n<li>\u660e\u8fa8\u662f\u975e\uff1a\u5bf9\u6bd4\u5b66\u4e60\uff08Contrastive Learning\uff09<\/li>\n<li>\u4e3e\u4e00\u53cd\u4e09\uff1a\u8fc1\u79fb\u5b66\u4e60\uff08Transfer Learning\uff09<\/li>\n<li>\u9488\u950b\u76f8\u5bf9\uff1a\u5bf9\u6297\u5b66\u4e60\uff08Adversarial Learning\uff09<\/li>\n<li>\u4f17\u5fd7\u6210\u57ce\uff1a\u96c6\u6210\u5b66\u4e60(Ensemble Learning) <\/li>\n<li>\u6b8a\u9014\u540c\u5f52\uff1a\u8054\u90a6\u5b66\u4e60\uff08Federated Learning\uff09<\/li>\n<li>\u767e\u6298\u4e0d\u6320\uff1a\u5f3a\u5316\u5b66\u4e60\uff08Reinforcement Learning\uff09<\/li>\n<li>\u6c42\u77e5\u82e5\u6e34\uff1a\u4e3b\u52a8\u5b66\u4e60\uff08Active 
Learning\uff09<\/li>\n<li>\u4e07\u6cd5\u5f52\u5b97\uff1a\u5143\u5b66\u4e60\uff08Meta-Learning\uff09<\/li>\n<\/ol>\n<h2>Tutorial 04 - \u4ee5\u70b9\u5e26\u9762\uff1a\u534a\u76d1\u7763\u5b66\u4e60\uff08Semi-supervised learning\uff09<\/h2>\n<p>\u534a\u76d1\u7763\u5b66\u4e60\uff08Semi-Supervised Learning\uff09\u662f\u6307\u5728\u4ec5\u6709\u4e00\u90e8\u5206\u6837\u672c\u5e26\u6709\u4eba\u5de5\u6807\u6ce8\u3001\u800c\u5927\u90e8\u5206\u6837\u672c\u662f\u65e0\u6807\u6ce8\u7684\u573a\u666f\u4e0b\uff0c\u4ecd\u80fd\u6709\u6548\u5229\u7528\u5168\u90e8\u6570\u636e\uff08\u6709\u6807\u6ce8 + \u65e0\u6807\u6ce8\uff09\u8fdb\u884c\u6a21\u578b\u8bad\u7ec3\u7684\u65b9\u6cd5\u3002\u5b83\u65e2\u5229\u7528\u4e86\u6709\u76d1\u7763\u5b66\u4e60\u4e2d\u201c\u6709\u6807\u6ce8\u6570\u636e\u201d\u7684\u4fe1\u606f\uff0c\u53c8\u5145\u5206\u6316\u6398\u4e86\u65e0\u6807\u6ce8\u6570\u636e\u6f5c\u5728\u7684\u7ed3\u6784\u6216\u5206\u5e03\u7279\u5f81\uff0c\u4ece\u800c\u63d0\u5347\u6a21\u578b\u6027\u80fd\u3002<\/p>\n<p>\u4e0b\u9762\u4ecb\u7ecd\u5e38\u89c1\u7684\u534a\u76d1\u7763\u5b66\u4e60\u4e3b\u8981\u9014\u5f84\u53ca\u601d\u8def\u3002<\/p>\n<p><strong>1. 
\u81ea\u8bad\u7ec3 (Self-Training) \/ \u4f2a\u6807\u7b7e (Pseudo-Labeling)<\/strong><\/p>\n<p><strong>\u6838\u5fc3\u601d\u8def<\/strong><\/p>\n<ul>\n<li>\u7528\u5f53\u524d\u6a21\u578b\u4e3a\u65e0\u6807\u6ce8\u6570\u636e\u751f\u6210\u201c\u4f2a\u6807\u7b7e\u201d\uff0c\u518d\u628a\u5b83\u4eec\u5f53\u505a\u201c\u5e26\u6807\u7b7e\u6570\u636e\u201d\u4e00\u8d77\u52a0\u5165\u8bad\u7ec3\uff0c\u8fed\u4ee3\u66f4\u65b0\u6a21\u578b\u3002<\/li>\n<li>\u5177\u4f53\u505a\u6cd5\u662f\uff1a\u5148\u7528\u5c11\u91cf\u6709\u6807\u6ce8\u6570\u636e\u8bad\u7ec3\u4e00\u4e2a\u521d\u59cb\u6a21\u578b\uff0c\u7136\u540e\u8ba9\u6a21\u578b\u5728\u65e0\u6807\u6ce8\u6570\u636e\u4e0a\u505a\u9884\u6d4b\uff0c\u5c06\u9ad8\u7f6e\u4fe1\u5ea6\u7684\u9884\u6d4b\u7ed3\u679c\u5f53\u4f5c\u4f2a\u6807\u7b7e\u52a0\u5165\u8bad\u7ec3\u96c6\u4e2d\uff0c\u518d\u91cd\u65b0\u8bad\u7ec3\u6a21\u578b\u3002<\/li>\n<\/ul>\n<p><strong>\u4ee3\u8868\u65b9\u6cd5<\/strong><\/p>\n<ul>\n<li>Self-Training \/ \u81ea\u8bad\u7ec3: \u6700\u57fa\u7840\u7684\u505a\u6cd5\uff1a\u5bf9\u65e0\u6807\u6ce8\u6837\u672c\u8fdb\u884c\u9884\u6d4b\u5e76\u8fc7\u6ee4\u6389\u6a21\u578b\u7f6e\u4fe1\u5ea6\u4f4e\u7684\u6837\u672c\uff0c\u53ea\u4fdd\u7559\u7f6e\u4fe1\u5ea6\u9ad8\u7684\u4f2a\u6807\u7b7e\u52a0\u5165\u5230\u65b0\u7684\u8bad\u7ec3\u96c6\u3002<\/li>\n<li>Pseudo-Labeling: Google Brain \u63d0\u51fa\u7684\u7b80\u5355\u5b9e\u73b0\uff1a\u8ba9\u6a21\u578b\u81ea\u5df1\u7ed9\u65e0\u6807\u6ce8\u6570\u636e\u6253\u6807\u7b7e\uff0c\u7136\u540e\u518d\u628a\u8fd9\u4e9b\u65b0\u751f\u6210\u7684\u6807\u7b7e\u5f53\u4f5c\u771f\u6807\u7b7e\u6765\u8bad\u7ec3\u3002<\/li>\n<\/ul>\n<p><strong>\u4f18\u52bf &amp; 
\u5c40\u9650<\/strong><\/p>\n<ul>\n<li>\u4f18\u70b9\uff1a\u5b9e\u73b0\u7b80\u5355\uff0c\u6613\u4e8e\u548c\u5176\u4ed6\u65b9\u6cd5\u7ed3\u5408\u3002<\/li>\n<li>\u7f3a\u70b9\uff1a\u5982\u679c\u521d\u59cb\u6a21\u578b\u672c\u8eab\u504f\u5dee\u5927\uff0c\u4ea7\u751f\u7684\u4f2a\u6807\u7b7e\u8d28\u91cf\u4f4e\uff0c\u53ef\u80fd\u4f1a\u88ab\u9519\u8bef\u6807\u7b7e\u201c\u6c61\u67d3\u201d\uff0c\u51fa\u73b0\u8bad\u7ec3\u9000\u5316\u3002<\/li>\n<\/ul>\n<h4><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/?size=100&id=91CnU00i6HLv&format=png&color=000000\" style=\"height:50px;display:inline\"> \u5982\u679c\u6a21\u578b\u521d\u671f\u9884\u6d4b\u6709\u504f\u5dee\uff0c\u628a\u9519\u8bef\u9884\u6d4b\u5f53\u4f5c\u201c\u771f\u6807\u7b7e\u201d\u91cd\u65b0\u8bad\u7ec3\uff0c\u53ef\u80fd\u4f1a\u8d8a\u8bad\u8d8a\u9519\uff1f<\/h4>\n<p><strong>2. \u4e00\u81f4\u6027\u6b63\u5219\u5316 (Consistency Regularization)<\/strong><\/p>\n<p><strong>\u6838\u5fc3\u601d\u8def<\/strong><\/p>\n<ul>\n<li>\u5047\u8bbe\uff1a \u540c\u4e00\u4e2a\u65e0\u6807\u6ce8\u6837\u672c\u5728\u7ecf\u8fc7\u4e0d\u540c\u7684\u6270\u52a8\u6216\u589e\u5f3a\u540e\uff0c\u6a21\u578b\u7684\u8f93\u51fa\u5e94\u8be5\u4fdd\u6301\u4e00\u81f4\u3002\u5c06\u8fd9\u79cd\u4e00\u81f4\u6027\u8bef\u5dee\u4f5c\u4e3a\u6b63\u5219\u9879\u6765\u7ea6\u675f\u6a21\u578b\uff0c\u5373\u8981\u6c42\u6a21\u578b\u5bf9\u540c\u4e00\u6570\u636e\u4e0d\u540c\u589e\u5f3a\u89c6\u56fe\u7684\u9884\u6d4b\u7ed3\u679c\u5dee\u5f02\u5c3d\u53ef\u80fd\u5c0f\u3002<\/li>\n<\/ul>\n<p><strong>\u5177\u4f53\u5b9e\u73b0<\/strong><\/p>\n<ul>\n<li>\u5728\u534a\u76d1\u7763\u5b66\u4e60\u91cc\uff0c\u901a\u5e38\u6709\u4e24\u4e2a\u6570\u636e\u96c6:\n<ol>\n<li>\u6709\u6807\u6ce8\u6570\u636e\u96c6 $D_L$ \uff1a\u6837\u672c\u5c11\uff0c\u4f46\u6bcf\u4e2a\u90fd\u6709\u6807\u7b7e\u3002<\/li>\n<li>\u65e0\u6807\u6ce8\u6570\u636e\u96c6 $D_U$ \uff1a\u6837\u672c\u5f88\u591a\uff0c\u6ca1\u6709\u6807\u7b7e\u3002<\/li>\n<\/ol>\n<\/li>\n<li>\u5bf9 $D_L$ 
\uff0c\u6211\u4eec\u901a\u5e38\u4f7f\u7528\u76d1\u7763\u5b66\u4e60\u7684\u635f\u5931\u51fd\u6570\uff08\u4f8b\u5982\u5206\u7c7b\u7684\u4ea4\u53c9\u71b5\uff09\uff1a<\/li>\n<\/ul>\n<p>$$<br \/>\n\\mathcal{L}_{\\text {supervised }}=\\sum_{(x, y) \\in D_L} \\operatorname{CE}\\left(f_\\theta(x), y\\right)<br \/>\n$$<\/p>\n<p>\u8fd9\u91cc $f_\\theta$ \u8868\u793a\u6a21\u578b\uff0c CE \u8868\u793a\u4ea4\u53c9\u71b5\u3002<\/p>\n<ul>\n<li>\u5bf9 $D_U$ \uff0c\u6211\u4eec\u6ca1\u6709\u771f\u5b9e\u6807\u7b7e\uff0c\u5374\u5e0c\u671b\u6a21\u578b\u80fd&quot;\u6709\u7a33\u5b9a\u7684\u8f93\u51fa&quot;\u2014\u2014\u4e5f\u5c31\u662f\u4e00\u81f4\u6027\u6b63\u5219\u5316:<\/li>\n<\/ul>\n<p>$$<br \/>\n\\mathcal{L}_{\\text {consistency }}=\\sum_{x \\in D_U} d\\left(f_\\theta\\left(\\operatorname{Aug}_1(x)\\right), f_\\theta\\left(\\operatorname{Aug}_2(x)\\right)\\right)<br \/>\n$$<\/p>\n<ul>\n<li>\n<p>$\\mathrm{Aug}_1, \\mathrm{Aug}_2$ \u662f\u5bf9\u540c\u4e00\u65e0\u6807\u6ce8\u6837\u672c\u7684\u4e24\u79cd\u968f\u673a\u589e\u5f3a\/\u6270\u52a8\u65b9\u5f0f\uff1b<\/p>\n<\/li>\n<li>\n<p>$d(\\cdot, \\cdot)$ \u53ef\u4ee5\u662f\u5747\u65b9\u8bef\u5dee\u3001KL \u6563\u5ea6\u7b49\u5ea6\u91cf\u51fd\u6570\uff0c\u7528\u6765\u8861\u91cf\u4e24\u6b21\u589e\u5f3a\u540e\u7684\u9884\u6d4b\u5206\u5e03\u5dee\u5f02\u3002<\/p>\n<\/li>\n<li>\n<p>\u8981\u6c42\u8fd9\u4e24\u4e2a\u589e\u5f3a\u89c6\u56fe\u7684\u9884\u6d4b\u5c3d\u91cf\u76f8\u4f3c\uff0c\u9f13\u52b1\u6a21\u578b\u5bf9&quot;\u540c\u4e00\u4e2a\u6837\u672c&quot;\u6709\u4e00\u81f4\u7684\u8f93\u51fa\u3002<\/p>\n<\/li>\n<li>\n<p>\u7efc\u5408\u8d77\u6765\uff0c\u5728\u8bad\u7ec3\u9636\u6bb5\uff0c\u4f1a\u628a\u4e0a\u9762\u4e24\u90e8\u5206\u635f\u5931\u52a0\u6743\u6c42\u548c:<\/p>\n<\/li>\n<\/ul>\n<p>$$<br \/>\n\\mathcal{L}_{\\text {total }}=\\mathcal{L}_{\\text {supervised }}+\\lambda \\cdot \\mathcal{L}_{\\text {consistency }}<br \/>\n$$<\/p>\n<ul>\n<li>\u5176\u4e2d $\\lambda$ 
\u662f\u4e00\u4e2a\u8d85\u53c2\u6570\uff0c\u7528\u6765\u5e73\u8861\u76d1\u7763\u635f\u5931\u548c\u4e00\u81f4\u6027\u635f\u5931\u7684\u76f8\u5bf9\u6743\u91cd\u3002<\/li>\n<li>\u8fd9\u6837\u5c31\u540c\u65f6\u5229\u7528\u4e86\u6709\u6807\u6ce8\u6570\u636e\uff08\u63d0\u4f9b\u7c7b\u522b\u533a\u5206\u7684\u76d1\u7763\u4fe1\u53f7\uff09\u548c\u65e0\u6807\u6ce8\u6570\u636e\uff08\u63d0\u4f9b\u4e00\u81f4\u6027\u6b63\u5219\u7ea6\u675f\uff0c\u63d0\u5347\u6a21\u578b\u7684\u5224\u522b\u80fd\u529b\u548c\u6cdb\u5316\u80fd\u529b\uff09\u3002<\/li>\n<\/ul>\n<p><strong>\u4f18\u52bf &amp; \u5c40\u9650<\/strong><\/p>\n<ul>\n<li>\u4f18\u70b9\uff1a\u80fd\u6709\u6548\u5229\u7528\u65e0\u6807\u6ce8\u6570\u636e\u7684\u5206\u5e03\u4fe1\u606f\uff0c\u5c24\u5176\u5728\u8ba1\u7b97\u673a\u89c6\u89c9\u4e2d\u914d\u5408\u6570\u636e\u589e\u5f3a\u6548\u679c\u660e\u663e\u3002<\/li>\n<li>\u7f3a\u70b9\uff1a\u4e00\u81f4\u6027\u7ea6\u675f\u4f9d\u8d56\u5408\u9002\u7684\u6570\u636e\u589e\u5f3a\u6216\u6270\u52a8\u65b9\u5f0f\uff0c\u5bf9\u4e0d\u540c\u4efb\u52a1\u9700\u8981\u4e0d\u540c\u8bbe\u8ba1\u3002<\/li>\n<\/ul>\n<p><strong>3. 
\u57fa\u4e8e\u751f\u6210\u6a21\u578b (Generative Approaches)<\/strong><\/p>\n<p><strong>\u6838\u5fc3\u601d\u8def<\/strong> <\/p>\n<ul>\n<li>\u5b66\u4e60\u6570\u636e\u7684\u5206\u5e03\u6a21\u578b\uff08\u5982\u53d8\u5206\u81ea\u7f16\u7801\u5668 VAE\u3001GAN \u7b49\uff09\uff0c\u5728\u6b64\u8fc7\u7a0b\u4e2d\u540c\u65f6\u4f7f\u7528\u6709\u6807\u6ce8\u548c\u65e0\u6807\u6ce8\u6570\u636e\uff0c\u4f7f\u5f97\u6a21\u578b\u5728\u6355\u6349\u6570\u636e\u5206\u5e03\u7684\u540c\u65f6\uff0c\u4e5f\u80fd\u533a\u5206\u4e0d\u540c\u7c7b\u522b\u3002<\/li>\n<\/ul>\n<p><strong>\u4ee3\u8868\u65b9\u6cd5<\/strong> <\/p>\n<ul>\n<li>VAE + \u5206\u7c7b\u5668\uff1a\u628a VAE \u7f16\u7801\u5668\u5f97\u5230\u7684\u9690\u53d8\u91cf\u7a7a\u95f4\u65e2\u7528\u4e8e\u91cd\u6784\u65e0\u6807\u6ce8\u6837\u672c\uff0c\u4e5f\u8f85\u52a9\u5206\u7c7b\u5668\u5206\u8fa8\u7c7b\u522b\u3002<\/li>\n<li>Semi-Supervised GAN\uff1a\u5728 GAN \u6846\u67b6\u4e2d\uff0c\u5f15\u5165\u4e00\u4e2a\u5224\u522b\u5668\u80fd\u591f\u533a\u5206&quot;\u771f\u5b9e\u56fe\u50cf\u7684\u7c7b\u522b&quot;\u548c&quot;\u751f\u6210\u56fe\u50cf&quot;\u8fd9\u4e24\u4ef6\u4e8b\uff0c\u4ece\u800c\u5728\u5c11\u91cf\u6807\u7b7e\u7684\u60c5\u51b5\u4e0b\u5b66\u4e60\u5230\u6709\u5224\u522b\u529b\u7684\u7279\u5f81\u3002<\/li>\n<\/ul>\n<p><strong>\u4f18\u52bf &amp; \u5c40\u9650<\/strong> <\/p>\n<ul>\n<li>\u4f18\u70b9\uff1a\u751f\u6210\u5f0f\u5efa\u6a21\u80fd\u66f4\u597d\u5730\u6316\u6398\u6570\u636e\u5206\u5e03\uff0c\u5bf9\u65e0\u6807\u6ce8\u6570\u636e\u7684\u8868\u793a\u5b66\u4e60\u80fd\u529b\u8f83\u5f3a\u3002<\/li>\n<li>\u7f3a\u70b9\uff1aGAN \u6216VAE \u7684\u7a33\u5b9a\u8bad\u7ec3\u4ee5\u53ca\u548c\u5206\u7c7b\u4efb\u52a1\u7ed3\u5408\u7684\u7b56\u7565\u8f83\u4e3a\u590d\u6742\u3002<\/li>\n<\/ul>\n<pre><code class=\"language-python\">import os\nimport pickle\nimport torch\nimport random\nimport numpy as np\nfrom torch.utils.data import Dataset, DataLoader\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\n# 
---------------------------\n# 1. \u8bbe\u7f6e\u968f\u673a\u79cd\u5b50\u4e0e\u8bbe\u5907\n# ---------------------------\n\nrandom.seed(42)\nnp.random.seed(42)\ntorch.manual_seed(42)\n\ndevice = torch.device(&#039;cuda&#039; if torch.cuda.is_available() else &#039;cpu&#039;)\nprint(f&quot;\u4f7f\u7528\u8bbe\u5907: {device}&quot;)\n\n# ---------------------------\n# 2. \u52a0\u8f7d CIFAR-10 \u6570\u636e\n# ---------------------------\n\ndef load_cifar10_batch(file_path):\n    &quot;&quot;&quot;\n    \u52a0\u8f7d\u5355\u4e2a batch \u6587\u4ef6\uff0c\u8fd4\u56de\u56fe\u50cf\u548c\u6807\u7b7e\u3002\n    &quot;&quot;&quot;\n    with open(file_path, &#039;rb&#039;) as f:\n        batch = pickle.load(f, encoding=&#039;bytes&#039;)\n        images = batch[b&#039;data&#039;]  # shape: (10000, 3072)\n        labels = batch[b&#039;labels&#039;] if b&#039;labels&#039; in batch else batch[b&#039;fine_labels&#039;]\n\n        # \u91cd\u5851\u56fe\u50cf\u4e3a (N, 3, 32, 32)\n        images = images.reshape(-1, 3, 32, 32)\n        images = images.astype(np.float32) \/ 255.0  # \u5f52\u4e00\u5316\u5230 [0,1]\n\n        # \u8f6c\u6362\u4e3a Tensor\n        images = torch.tensor(images)\n        labels = torch.tensor(labels, dtype=torch.long)\n\n    return images, labels\n\ndef load_cifar10_data(data_dir):\n    &quot;&quot;&quot;\n    \u52a0\u8f7d\u6574\u4e2a CIFAR-10 \u6570\u636e\u96c6\uff0c\u8fd4\u56de\u8bad\u7ec3\u96c6\u548c\u6d4b\u8bd5\u96c6\u7684\u56fe\u50cf\u4e0e\u6807\u7b7e\u3002\n    &quot;&quot;&quot;\n    train_images = []\n    train_labels = []\n\n    # \u52a0\u8f7d\u8bad\u7ec3\u6279\u6b21\n    for i in range(1, 6):\n        batch_file = os.path.join(data_dir, f&#039;data_batch_{i}&#039;)\n        images, labels = load_cifar10_batch(batch_file)\n        train_images.append(images)\n        train_labels.append(labels)\n\n    # \u62fc\u63a5\u6240\u6709\u8bad\u7ec3\u6279\u6b21\n    train_images = torch.cat(train_images, dim=0)  # shape: (50000, 3, 32, 32)\n    train_labels = 
torch.cat(train_labels, dim=0)  # shape: (50000,)\n\n    # \u52a0\u8f7d\u6d4b\u8bd5\u6279\u6b21\n    test_file = os.path.join(data_dir, &#039;test_batch&#039;)\n    test_images, test_labels = load_cifar10_batch(test_file)  # shape: (10000, 3, 32, 32), (10000,)\n\n    return train_images, train_labels, test_images, test_labels\n\n# \u6307\u5b9a CIFAR-10 \u6570\u636e\u76ee\u5f55\nCIFAR10_DIR = &#039;datasets\/cifar-10-batches-py&#039;  # \u6839\u636e\u5b9e\u9645\u8def\u5f84\u4fee\u6539\n\n# \u52a0\u8f7d\u6570\u636e\ntrain_images_all, train_labels_all, test_images_all, test_labels_all = load_cifar10_data(CIFAR10_DIR)\n\nprint(f&quot;\u8bad\u7ec3\u96c6\u56fe\u50cf\u6570\u91cf: {train_images_all.shape[0]}&quot;)\nprint(f&quot;\u6d4b\u8bd5\u96c6\u56fe\u50cf\u6570\u91cf: {test_images_all.shape[0]}&quot;)\n<\/code><\/pre>\n<pre><code>\u4f7f\u7528\u8bbe\u5907: cuda\n\u8bad\u7ec3\u96c6\u56fe\u50cf\u6570\u91cf: 50000\n\u6d4b\u8bd5\u96c6\u56fe\u50cf\u6570\u91cf: 10000<\/code><\/pre>\n<p><strong>\u90e8\u5206\u6807\u8bb0\u7684\u6570\u636e<\/strong><\/p>\n<p>\u60a8\u4f1a\u6ce8\u610f\u5230\uff0c\u6570\u636e\u96c6\u4e2d\u7684\u6240\u6709\u56fe\u50cf\u90fd\u5df2\u63d0\u4f9b\u76f8\u5e94\u7684\u6807\u7b7e\u3002\u5982\u679c\u60a8\u4f7f\u7528\u6240\u6709\u6570\u636e\u8bad\u7ec3\u6a21\u578b\uff0c\u90a3\u4e48\u60a8\u5c06\u62e5\u6709\u4e00\u4e2a\u5b8c\u5168\u76d1\u7763\u7684\u6a21\u578b\u3002<\/p>\n<p>\u672c\u7740\u534a\u76d1\u7763\u5b66\u4e60\u7684\u7cbe\u795e\uff0c\u6211\u4eec\u9700\u8981\u6a21\u62df\u7f3a\u4e4f\u6807\u8bb0\u6570\u636e\u7684\u60c5\u51b5\u3002\u4e00\u79cd\u7b80\u5355\u7684\u65b9\u6cd5\u662f\u63d0\u53d6\u4e00\u5c0f\u90e8\u5206\u56fe\u50cf\u53ca\u5176\u76f8\u5e94\u7684\u6807\u7b7e\uff1b\u7136\u540e\u60a8\u53ef\u4ee5\u5047\u88c5\u5176\u4ed6\u6240\u6709\u5185\u5bb9\u90fd\u6ca1\u6709\u6807\u7b7e\u3002<\/p>\n<p>\u4e0b\u9762\u7684\u4ee3\u7801\u63d0\u53d6\u4e86\u8fd9\u4e2a\u201c\u76d1\u7763\u201d\u6570\u636e\u5b50\u96c6\u3002\u8bf7\u6ce8\u610f\uff0c\u6240\u6709\u5bf9\u8c61\u7c7b\u
90fd\u6709\u76f8\u540c\u6570\u91cf\u7684\u63d0\u53d6\u6837\u672c\u3002\u5efa\u8bae\u60a8\u4ece\u4ec5\u63d0\u53d6 1% \u7684\u6570\u636e\u96c6\u5f00\u59cb\uff0c\u4ee5\u4fbf\u6d4b\u8bd5\u548c\u8c03\u8bd5\u540e\u7eed\u4ee3\u7801\u7684\u901f\u5ea6\u66f4\u5feb\u3002\u60a8\u5f53\u7136\u53ef\u4ee5\u5728\u4ee5\u540e\u589e\u52a0\u8fd9\u4e00\u90e8\u5206\u3002<\/p>\n<pre><code class=\"language-python\"># ---------------------------\n# 3. \u62c6\u5206\u201c\u6709\u6807\u7b7e\u201d\u548c\u201c\u65e0\u6807\u7b7e\u201d\u6570\u636e\n# ---------------------------\n\nNUM_LABELED_PER_CLASS = 50  # \u6bcf\u7c7b\u6709\u6807\u7b7e\u6837\u672c\u6570\nNUM_CLASSES = 10\n\n# \u521d\u59cb\u5316\u8ba1\u6570\u5668\ncount_per_class = [0] * NUM_CLASSES\n\nlabeled_images = []\nlabeled_labels = []\nunlabeled_images = []\nunlabeled_labels = []\n\n# \u6253\u4e71\u7d22\u5f15\u4ee5\u786e\u4fdd\u968f\u673a\u6027\nindices = list(range(len(train_images_all)))\nrandom.shuffle(indices)\n\nfor idx in indices:\n    img = train_images_all[idx]\n    lbl = train_labels_all[idx].item()\n\n    if count_per_class[lbl] &lt; NUM_LABELED_PER_CLASS:\n        labeled_images.append(img)\n        labeled_labels.append(lbl)\n        count_per_class[lbl] += 1\n    else:\n        unlabeled_images.append(img)\n        unlabeled_labels.append(lbl)  # \u6807\u7b7e\u4ecd\u7136\u5b58\u5728\uff0c\u4f46\u540e\u7eed\u4e0d\u4f7f\u7528\n\nprint(f&quot;\u6709\u6807\u7b7e\u6570\u636e\u6570\u91cf: {len(labeled_images)}&quot;)      # \u9884\u8ba1: 10 * 50 = 500\nprint(f&quot;\u65e0\u6807\u7b7e\u6570\u636e\u6570\u91cf: {len(unlabeled_images)}&quot;)    # \u9884\u8ba1: 50000 - 500 = 49500\n\n# ---------------------------\n# 4. 
\u5b9a\u4e49 PyTorch Dataset \u7c7b\n# ---------------------------\n\nclass LabeledCIFARDataset(Dataset):\n    def __init__(self, images, labels):\n        &quot;&quot;&quot;\n        \u6709\u6807\u7b7e\u6570\u636e\u96c6\n        &quot;&quot;&quot;\n        self.images = images\n        self.labels = labels\n\n    def __len__(self):\n        return len(self.images)\n\n    def __getitem__(self, idx):\n        x = self.images[idx]\n        y = self.labels[idx]\n        return x, y\n\nclass UnlabeledCIFARDataset(Dataset):\n    def __init__(self, images):\n        &quot;&quot;&quot;\n        \u65e0\u6807\u7b7e\u6570\u636e\u96c6\n        &quot;&quot;&quot;\n        self.images = images\n\n    def __len__(self):\n        return len(self.images)\n\n    def __getitem__(self, idx):\n        x = self.images[idx]\n        return x\n\nclass CIFARValDataset(Dataset):\n    def __init__(self, images, labels):\n        &quot;&quot;&quot;\n        \u9a8c\u8bc1\/\u6d4b\u8bd5\u6570\u636e\u96c6\n        &quot;&quot;&quot;\n        self.images = images\n        self.labels = labels\n\n    def __len__(self):\n        return len(self.images)\n\n    def __getitem__(self, idx):\n        x = self.images[idx]\n        y = self.labels[idx]\n        return x, y\n\n# ---------------------------\n# 5. 
\u521b\u5efa DataLoader\n# ---------------------------\n\nBATCH_SIZE = 64\n\n# \u6709\u6807\u7b7e DataLoader\nlabeled_dataset = LabeledCIFARDataset(labeled_images, labeled_labels)\nlabeled_loader = DataLoader(\n    labeled_dataset,\n    batch_size=BATCH_SIZE,\n    shuffle=True\n)\n\n# \u65e0\u6807\u7b7e DataLoader\nunlabeled_dataset = UnlabeledCIFARDataset(unlabeled_images)\nunlabeled_loader = DataLoader(\n    unlabeled_dataset,\n    batch_size=BATCH_SIZE,\n    shuffle=True\n)\n\n# \u9a8c\u8bc1\/\u6d4b\u8bd5 DataLoader\nval_dataset = CIFARValDataset(test_images_all, test_labels_all)\nval_loader = DataLoader(\n    val_dataset,\n    batch_size=BATCH_SIZE,\n    shuffle=False\n)\n\nprint(f&quot;\u6709\u6807\u7b7e\u6570\u636e\u6279\u6b21\u6570: {len(labeled_loader)}&quot;)      # 500 \/ 64 \u2248 8\nprint(f&quot;\u65e0\u6807\u7b7e\u6570\u636e\u6279\u6b21\u6570: {len(unlabeled_loader)}&quot;)    # 49500 \/ 64 \u2248 774\nprint(f&quot;\u9a8c\u8bc1\/\u6d4b\u8bd5\u6570\u636e\u6279\u6b21\u6570: {len(val_loader)}&quot;)      # 10000 \/ 64 \u2248 157<\/code><\/pre>\n<pre><code>\u6709\u6807\u7b7e\u6570\u636e\u6570\u91cf: 500\n\u65e0\u6807\u7b7e\u6570\u636e\u6570\u91cf: 49500\n\u6709\u6807\u7b7e\u6570\u636e\u6279\u6b21\u6570: 8\n\u65e0\u6807\u7b7e\u6570\u636e\u6279\u6b21\u6570: 774\n\u9a8c\u8bc1\/\u6d4b\u8bd5\u6570\u636e\u6279\u6b21\u6570: 
157<\/code><\/pre>\n<p><strong>\u5b9a\u4e49\u6a21\u578b\u67b6\u6784<\/strong><\/p>\n<p>\u73b0\u5728\u6211\u4eec\u5df2\u7ecf\u51c6\u5907\u597d\u4e86\u6570\u636e\uff0c\u8ba9\u6211\u4eec\u5c06\u6ce8\u610f\u529b\u8f6c\u5411\u6a21\u578b\u67b6\u6784\u3002\u8bf7\u8bb0\u4f4f\uff0c\u6211\u4eec\u7684\u76ee\u6807\u4e0d\u662f\u4ece\u6211\u4eec\u62e5\u6709\u7684\u6570\u636e\u4e2d\u83b7\u5f97\u6700\u4f73\u6027\u80fd\uff0c\u800c\u662f\u4e13\u6ce8\u4e8e\u5b66\u4e60\u5982\u4f55\u5b9e\u65bd\u534a\u76d1\u7763\u6280\u672f\u3002\u8003\u8651\u5230\u8fd9\u4e00\u70b9\uff0c\u6211\u4eec\u5c06\u7814\u7a76\u80fd\u591f\u4ece\u5934\u5f00\u59cb\u6784\u5efa\u7684\u7b80\u5355\u73a9\u5177\u6a21\u578b\uff08toy model\uff09\u3002<\/p>\n<p>\u60a8\u73b0\u5728\u5e94\u8be5\u719f\u6089\u5404\u79cd\u534a\u76d1\u7763\u6280\u672f\u3002\u8fd9\u91cc\u6211\u4eec\u4ee5 VAE + \u5206\u7c7b\u5668\u4e3a\u4f8b\u5b50\uff0c\u53ef\u4ee5\u9488\u5bf9\u4e24\u4e2a\u4e0d\u540c\u7684\u4efb\u52a1\u8fdb\u884c\u8bad\u7ec3\u3002<\/p>\n<p align=\"center\">\n  <img decoding=\"async\" src=\"https:\/\/gnnclub-1311496010.cos.ap-beijing.myqcloud.com\/wp-content\/uploads\/2025\/01\/20250125101701695.png\" style=\"height:300px\">\n<\/p>\n<pre><code class=\"language-python\"># ---------------------------\n# 6. 
\u5b9a\u4e49\u6a21\u578b\uff08VAE + \u5206\u7c7b\u5668\uff09\n# ---------------------------\n\nclass Encoder(nn.Module):\n    def __init__(self, latent_dim=32):\n        super(Encoder, self).__init__()\n        self.conv1 = nn.Conv2d(3, 16, 3, stride=2, padding=1)  # [B, 16, 16, 16]\n        self.conv2 = nn.Conv2d(16, 32, 3, stride=2, padding=1) # [B, 32, 8, 8]\n        self.fc = nn.Linear(32*8*8, 128)\n        self.mu_layer = nn.Linear(128, latent_dim)\n        self.logvar_layer = nn.Linear(128, latent_dim)\n\n    def forward(self, x):\n        h = F.relu(self.conv1(x))  # [B, 16, 16, 16]\n        h = F.relu(self.conv2(h))  # [B, 32, 8, 8]\n        h = h.view(h.size(0), -1)  # [B, 32*8*8]\n        h = F.relu(self.fc(h))     # [B, 128]\n        mu = self.mu_layer(h)      # [B, latent_dim]\n        logvar = self.logvar_layer(h)  # [B, latent_dim]\n        return mu, logvar\n\nclass Decoder(nn.Module):\n    def __init__(self, latent_dim=32):\n        super(Decoder, self).__init__()\n        self.fc = nn.Linear(latent_dim, 32*8*8)\n        self.deconv1 = nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1)  # [B,16,16,16]\n        self.deconv2 = nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1)   # [B,3,32,32]\n\n    def forward(self, z):\n        h = F.relu(self.fc(z))              # [B, 32*8*8]\n        h = h.view(h.size(0), 32, 8, 8)     # [B, 32, 8, 8]\n        h = F.relu(self.deconv1(h))         # [B, 16, 16, 16]\n        x_recon = torch.sigmoid(self.deconv2(h))  # [B, 3, 32, 32]\n        return x_recon\n\nclass Classifier(nn.Module):\n    def __init__(self, latent_dim=32, num_classes=10):\n        super(Classifier, self).__init__()\n        self.fc1 = nn.Linear(latent_dim, 64)\n        self.fc2 = nn.Linear(64, num_classes)\n\n    def forward(self, z):\n        h = F.relu(self.fc1(z))  # [B, 64]\n        logits = self.fc2(h)     # [B, num_classes]\n        return logits\n\nclass VAE_Classifier(nn.Module):\n    def __init__(self, latent_dim=32, num_classes=10):\n    
    super(VAE_Classifier, self).__init__()\n        self.encoder = Encoder(latent_dim)\n        self.decoder = Decoder(latent_dim)\n        self.classifier = Classifier(latent_dim, num_classes)\n\n    def reparameterize(self, mu, logvar):\n        std = torch.exp(0.5 * logvar)    # [B, latent_dim]\n        eps = torch.randn_like(std)      # [B, latent_dim]\n        return mu + eps * std            # [B, latent_dim]\n\n    def forward_vae(self, x):\n        mu, logvar = self.encoder(x)             # [B, latent_dim], [B, latent_dim]\n        z = self.reparameterize(mu, logvar)      # [B, latent_dim]\n        x_recon = self.decoder(z)                # [B, 3, 32, 32]\n        return x_recon, mu, logvar, z\n\n    def forward_classifier(self, x):\n        mu, logvar = self.encoder(x)             # [B, latent_dim], [B, latent_dim]\n        logits = self.classifier(mu)             # [B, num_classes]\n        return logits\n\n# \u5b9e\u4f8b\u5316\u6a21\u578b\nlatent_dim = 32\nnum_classes = 10\nmodel = VAE_Classifier(latent_dim=latent_dim, num_classes=num_classes).to(device)\n<\/code><\/pre>\n<p><strong>3. 
\u8bad\u7ec3\u9636\u6bb5\u7684&quot;\u4e24\u6761\u635f\u5931&quot;<\/strong><\/p>\n<ul>\n<li>3.1 \u8bad\u7ec3 VAE (\u65e0\u6807\u7b7e\u6570\u636e\u53ef\u7528)<\/li>\n<\/ul>\n<p>\u5f53\u4f60\u5bf9\u65e0\u6807\u6ce8\u6570\u636e\u505a\u8bad\u7ec3\u65f6\uff0c\u53ea\u9700\u8ba1\u7b97 VAE \u7684\u91cd\u6784\u635f\u5931 + KL \u6563\u5ea6\u5373\u53ef:<\/p>\n<p>$$<br \/>\n\\mathcal{L}_{\\mathrm{VAE}}=\\|x-\\hat{x}\\|^2(\\text { or } \\mathrm{BCE})+\\mathrm{KL}\\left[q_\\phi(z \\mid x) \\| p(z)\\right]<br \/>\n$$<\/p>\n<ul>\n<li>\u91cd\u6784\u635f\u5931: $\\|x-\\hat{x}\\|^2$ \u6216\u4e8c\u5143\u4ea4\u53c9\u71b5\uff08BCE\uff09\u7b49\u5ea6\u91cf\uff0c\u8ba9\u89e3\u7801\u5668\u8f93\u51fa\u7684 $\\hat{x}$ \u4e0e\u539f\u56fe $x$ \u5c3d\u91cf\u76f8\u4f3c\u3002<\/li>\n<li>KL \u6563\u5ea6\uff1a\u8ba9\u7f16\u7801\u5668\u7684\u6f5c\u5728\u5206\u5e03 $q_\\phi(z \\mid x)$ \u9760\u8fd1\u5148\u9a8c $p(z)$ \uff08\u901a\u5e38\u662f $\\mathcal{N}(0, I)$ \uff09\u3002<\/li>\n<\/ul>\n<p>\u65e0\u6807\u6ce8\u6570\u636e\u4e0a\u6211\u4eec\u4e0d\u8ba1\u7b97\u5206\u7c7b\u635f\u5931\uff0c\u56e0\u4e3a\u6ca1\u6709\u6807\u7b7e\uff0c\u4f46\u53ef\u4ee5\u7167\u6837\u901a\u8fc7 VAE \u91cd\u6784\u53bb\u5b66\u4e60\u6f5c\u5728\u8868\u793a\u3002<\/p>\n<ul>\n<li>3.2 \u8bad\u7ec3\u5206\u7c7b\u5668\uff08\u9700\u6709\u6807\u6ce8\u6570\u636e\uff09<\/li>\n<\/ul>\n<p>\u5f53\u4f60\u5bf9\u6709\u6807\u6ce8\u6570\u636e\u8fdb\u884c\u8bad\u7ec3\u65f6\uff0c\u540c\u65f6\u66f4\u65b0 VAE \u53ca\u5206\u7c7b\u5668\uff0c\u56e0\u4e3a\u5206\u7c7b\u5668\u8981\u7528\u5230\u7f16\u7801\u5668\u7684\u8f93\u51fa\uff1a<\/p>\n<p>$$<br \/>\n\\mathcal{L}_{\\text {Classifier }}=\\text { CrossEntropy }(\\text { logits }, y)<br \/>\n$$<\/p>\n<p>\u5176\u4e2d:<\/p>\n<ul>\n<li>logits = classifier(encoder(x)) (\u6216 reparameterized $z$ )\u3002<\/li>\n<li>$y$ \u662f\u56fe\u50cf\u5bf9\u5e94\u7684\u771f\u5b9e\u7c7b\u522b\u6807\u7b7e( $0 \\sim 9$ 
)\u3002<\/li>\n<\/ul>\n<p>\u8fd9\u4e2a\u90e8\u5206\u53ea\u5728&quot;\u6709\u6807\u7b7e\u6570\u636e&quot;\u4e0a\u5b58\u5728\u3002<\/p>\n<ul>\n<li>3.3 \u603b\u635f\u5931<\/li>\n<\/ul>\n<p>\u7efc\u5408\u8d77\u6765\uff0c\u4f60\u53ef\u4ee5\u540c\u65f6\u6216\u4ea4\u66ff\u5730\u5bf9\u6709\u6807\u7b7e \/ \u65e0\u6807\u7b7e\u6279\u6b21\u8fdb\u884c\u66f4\u65b0\u3002<\/p>\n<pre><code class=\"language-python\">def vae_loss_function(x, x_recon, mu, logvar):\n    # x: [B, 3, 32, 32]\n    # x_recon: [B, 3, 32, 32]\n    # mu, logvar: [B, latent_dim]\n\n    # 1) \u91cd\u6784\u635f\u5931 - \u4f7f\u7528 BCE\n    recon_loss = F.binary_cross_entropy(x_recon, x, reduction=&#039;sum&#039;) \/ x.size(0) \n    # \u6216\u8005 mean() \u5e76\u518d\u6839\u636e\u9700\u8981\u8c03\u8282\u5e73\u8861\n\n    # 2) KL \u6563\u5ea6\n    # KL = 0.5 * sum( exp(logvar) + mu^2 - 1 - logvar )\n    kl_divergence = 0.5 * torch.mean(torch.sum(torch.exp(logvar) + mu**2 - 1. - logvar, dim=1))\n\n    return recon_loss + kl_divergence, recon_loss, kl_divergence\n\ndef classifier_loss_function(logits, y):\n    return F.cross_entropy(logits, y)  \n\n# ---------------------------\n# 8. \u5b9a\u4e49\u9a8c\u8bc1\u51fd\u6570\n# ---------------------------\n\ndef evaluate_on_valset(model, val_loader):\n    &quot;&quot;&quot;\n    \u5728\u9a8c\u8bc1\u96c6\u4e0a\u8bc4\u4f30\u5206\u7c7b\u51c6\u786e\u7387\n    &quot;&quot;&quot;\n    model.eval()\n    correct = 0\n    total = 0\n    with torch.no_grad():\n        for x_val, y_val in val_loader:\n            x_val = x_val.to(device)\n            y_val = y_val.to(device)\n\n            logits = model.forward_classifier(x_val)  # [B, num_classes]\n            preds = torch.argmax(logits, dim=1)       # [B]\n            correct += (preds == y_val).sum().item()\n            total += y_val.size(0)\n    acc = correct \/ total\n    return acc<\/code><\/pre>\n<pre><code class=\"language-python\"># ---------------------------\n# 9. 
\u5b9e\u73b0 Baseline \u548c Semi-Supervised \u8bad\u7ec3\n# ---------------------------\n\ndef train_baseline(model, labeled_loader, val_loader, epochs=10, lr=1e-3):\n    &quot;&quot;&quot;\n    Baseline \u8bad\u7ec3\uff1a\u4ec5\u4f7f\u7528\u6709\u6807\u7b7e\u6570\u636e\u8bad\u7ec3\u6a21\u578b\n    &quot;&quot;&quot;\n    optimizer = optim.Adam(model.parameters(), lr=lr)\n\n    for epoch in range(epochs):\n        model.train()\n        total_loss = 0.0\n        total_clf = 0.0\n\n        for x_labeled, y_labeled in labeled_loader:\n            x_labeled = x_labeled.to(device)\n            y_labeled = y_labeled.to(device)\n\n            optimizer.zero_grad()\n\n            # \u5206\u7c7b\u5668\u524d\u5411\n            logits = model.forward_classifier(x_labeled)\n            clf_loss = classifier_loss_function(logits, y_labeled)\n\n            # \u53cd\u5411\u4f20\u64ad\u548c\u4f18\u5316\n            clf_loss.backward()\n            optimizer.step()\n\n            total_loss += clf_loss.item()\n            total_clf += clf_loss.item()\n\n        # \u8ba1\u7b97\u5e73\u5747\u635f\u5931\n        avg_loss = total_loss \/ len(labeled_loader)\n        avg_clf = total_clf \/ len(labeled_loader)\n\n        # \u9a8c\u8bc1\u96c6\u8bc4\u4f30\n        val_acc = evaluate_on_valset(model, val_loader)\n\n        print(f&quot;[Baseline] Epoch {epoch+1}\/{epochs}, Loss: {avg_loss:.4f}, Clf: {avg_clf:.4f}, Val Acc: {val_acc:.4f}&quot;)\n\n    return model\n\ndef train_semi_supervised(model, labeled_loader, unlabeled_loader, val_loader, epochs=10, lr=1e-3, lambda_unsupervised=1.0, confidence_threshold=0.8):\n    &quot;&quot;&quot;\n    \u534a\u76d1\u7763\u8bad\u7ec3\uff1a\u540c\u65f6\u4f7f\u7528\u6709\u6807\u7b7e\u548c\u65e0\u6807\u7b7e\u6570\u636e\n    &quot;&quot;&quot;\n    optimizer = optim.Adam(model.parameters(), lr=lr)\n\n    for epoch in range(epochs):\n        model.train()\n        total_loss = 0.0\n        total_recon = 0.0\n        total_kl = 0.0\n        total_clf = 
0.0\n        total_unsupervised = 0.0\n\n        # a) \u6709\u6807\u7b7e\u6570\u636e\u8bad\u7ec3\n        for x_labeled, y_labeled in labeled_loader:\n            x_labeled = x_labeled.to(device)\n            y_labeled = y_labeled.to(device)\n\n            optimizer.zero_grad()\n\n            # VAE \u524d\u5411\n            x_recon, mu, logvar, z = model.forward_vae(x_labeled)\n            vae_loss, recon_loss, kl_div = vae_loss_function(x_labeled, x_recon, mu, logvar)\n\n            # \u5206\u7c7b\u5668\u524d\u5411\n            logits = model.forward_classifier(x_labeled)\n            clf_loss = classifier_loss_function(logits, y_labeled)\n\n            # \u5408\u5e76\u635f\u5931\n            total_batch_loss = vae_loss + clf_loss  \n            total_batch_loss.backward()\n            optimizer.step()\n\n            total_loss += total_batch_loss.item()\n            total_recon += recon_loss.item()\n            total_kl += kl_div.item()\n            total_clf += clf_loss.item()\n\n        # b) \u65e0\u6807\u7b7e\u6570\u636e\u8bad\u7ec3\uff08\u4f2a\u6807\u7b7e\uff09\n        for x_unlabeled in unlabeled_loader:\n            x_unlabeled = x_unlabeled.to(device)\n\n            optimizer.zero_grad()\n\n            # \u751f\u6210\u4f2a\u6807\u7b7e\n            logits = model.forward_classifier(x_unlabeled)\n            probs = torch.softmax(logits, dim=-1)\n            max_probs, pseudo_labels = torch.max(probs, dim=-1)\n\n            # \u53ea\u4f7f\u7528\u9ad8\u7f6e\u4fe1\u5ea6\u7684\u4f2a\u6807\u7b7e\n            high_confidence_mask = max_probs &gt; confidence_threshold\n            if high_confidence_mask.sum() &gt; 0:\n                pseudo_labels = pseudo_labels[high_confidence_mask]\n                x_unlabeled = x_unlabeled[high_confidence_mask]\n\n                # \u8ba1\u7b97\u65e0\u6807\u7b7e\u6570\u636e\u7684\u5206\u7c7b\u635f\u5931\n                clf_loss = classifier_loss_function(logits[high_confidence_mask], pseudo_labels)\n\n                # 
Scaled pseudo-label loss (weighted by lambda_unsupervised)\n                total_unsupervised_loss = lambda_unsupervised * clf_loss\n                total_unsupervised_loss.backward()\n                optimizer.step()\n\n                total_unsupervised += total_unsupervised_loss.item()\n\n        # Average losses over the epoch\n        avg_loss = total_loss \/ len(labeled_loader)\n        avg_recon = total_recon \/ len(labeled_loader)\n        avg_kl = total_kl \/ len(labeled_loader)\n        avg_clf = total_clf \/ len(labeled_loader)\n        avg_unsupervised = total_unsupervised \/ len(unlabeled_loader)\n\n        # Evaluate on the validation set\n        val_acc = evaluate_on_valset(model, val_loader)\n\n        print(f&quot;[Semi-Supervised] Epoch {epoch+1}\/{epochs}, Loss: {avg_loss:.4f}, Clf: {avg_clf:.4f}, Unsupervised: {avg_unsupervised:.4f}, Val Acc: {val_acc:.4f}&quot;)\n\n    return model\n<\/code><\/pre>\n<pre><code class=\"language-python\"># ---------------------------\n# 10. 
Run the Training and Compare\n# ---------------------------\n\ndef main():\n    # Instantiate two independent models\n    model_baseline = VAE_Classifier(latent_dim=latent_dim, num_classes=num_classes).to(device)\n    model_semi = VAE_Classifier(latent_dim=latent_dim, num_classes=num_classes).to(device)\n\n    # Training hyperparameters\n    epochs = 10\n    learning_rate = 1e-2\n\n    print(&quot;Starting Baseline training (labeled data only)...&quot;)\n    model_baseline = train_baseline(model_baseline, labeled_loader, val_loader, epochs=epochs, lr=learning_rate)\n\n    print(&quot;\\nStarting semi-supervised training (labeled + unlabeled data)...&quot;)\n    model_semi = train_semi_supervised(model_semi, labeled_loader, unlabeled_loader, val_loader, epochs=epochs, lr=learning_rate)\n\n    # Final evaluation and comparison\n    baseline_acc = evaluate_on_valset(model_baseline, val_loader)\n    semi_acc = evaluate_on_valset(model_semi, val_loader)\n\n    print(f&quot;\\nFinal comparison:\\n  Baseline accuracy = {baseline_acc:.4f}\\n  Semi-supervised accuracy = {semi_acc:.4f}&quot;)\n\nif __name__ == &quot;__main__&quot;:\n    main()<\/code><\/pre>\n<pre><code>Starting Baseline training (labeled data only)...\n[Baseline] Epoch 1\/10, Loss: 2.3153, Clf: 2.3153, Val Acc: 0.1000\n[Baseline] Epoch 2\/10, Loss: 2.3001, Clf: 2.3001, Val Acc: 0.1632\n[Baseline] Epoch 3\/10, Loss: 2.2466, Clf: 2.2466, Val Acc: 0.1284\n[Baseline] Epoch 4\/10, Loss: 2.2630, Clf: 2.2630, Val Acc: 0.1401\n[Baseline] Epoch 5\/10, Loss: 2.2094, Clf: 2.2094, Val Acc: 0.1573\n[Baseline] Epoch 6\/10, Loss: 2.1534, Clf: 2.1534, Val Acc: 0.1925\n[Baseline] Epoch 7\/10, Loss: 2.1225, Clf: 2.1225, Val Acc: 0.2331\n[Baseline] Epoch 8\/10, Loss: 2.0578, Clf: 2.0578, Val Acc: 
0.2203\n[Baseline] Epoch 9\/10, Loss: 2.0038, Clf: 2.0038, Val Acc: 0.2675\n[Baseline] Epoch 10\/10, Loss: 1.8889, Clf: 1.8889, Val Acc: 0.2509\n\nStarting semi-supervised training (labeled + unlabeled data)...\n[Semi-Supervised] Epoch 1\/10, Loss: 2130.3845, Clf: 2.3115, Unsupervised: 0.0000, Val Acc: 0.0872\n[Semi-Supervised] Epoch 2\/10, Loss: 2125.7730, Clf: 2.2873, Unsupervised: 0.0000, Val Acc: 0.1113\n[Semi-Supervised] Epoch 3\/10, Loss: 2106.7699, Clf: 2.2789, Unsupervised: 0.0000, Val Acc: 0.1629\n[Semi-Supervised] Epoch 4\/10, Loss: 2088.1254, Clf: 2.2613, Unsupervised: 0.0000, Val Acc: 0.1784\n[Semi-Supervised] Epoch 5\/10, Loss: 2069.3262, Clf: 2.2282, Unsupervised: 0.0000, Val Acc: 0.2065\n[Semi-Supervised] Epoch 6\/10, Loss: 2087.0404, Clf: 2.1865, Unsupervised: 0.0000, Val Acc: 0.1421\n[Semi-Supervised] Epoch 7\/10, Loss: 2071.9022, Clf: 2.1797, Unsupervised: 0.0000, Val Acc: 0.1959\n[Semi-Supervised] Epoch 8\/10, Loss: 2049.9933, Clf: 2.1584, Unsupervised: 0.0000, Val Acc: 0.2161\n[Semi-Supervised] Epoch 9\/10, Loss: 2037.8330, Clf: 2.1724, Unsupervised: 0.0000, Val Acc: 0.1703\n[Semi-Supervised] Epoch 10\/10, Loss: 2030.8822, Clf: 2.1445, Unsupervised: 0.0000, Val Acc: 0.2203\n\nFinal comparison:\n  Baseline accuracy = 0.2509\n  Semi-supervised accuracy = 
0.2203<\/code><\/pre>\n<p>The final comparison shows a validation accuracy of 0.2509 for the Baseline model versus 0.2203 for the semi-supervised model. Note also that the Unsupervised loss stayed at 0.0000 throughout, which suggests that almost no pseudo-label cleared the 0.8 confidence threshold, so the unlabeled data contributed little to the classifier in this run. Keep in mind that this project is only a demo: its purpose is to illustrate the basic principles and workflow of semi-supervised training, not to maximize accuracy. The effectiveness of semi-supervised training typically depends on large amounts of data, careful hyperparameter tuning, and a suitable model design. In practice, semi-supervised learning can improve performance by exploiting unlabeled data, but getting there requires repeated experimentation and adjustment. The current results therefore do not reflect the full potential of semi-supervised learning; they are a starting point for further exploration and optimization.<\/p>\n<h2><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/dusk\/64\/000000\/prize.png\" style=\"height:50px;display:inline\"> Credits<\/h2>\n<hr \/>\n<ul>\n<li>Icons made by <a href=\"https:\/\/www.flaticon.com\/authors\/becris\" title=\"Becris\">Becris<\/a> from <a href=\"https:\/\/www.flaticon.com\/\" title=\"Flaticon\">www.flaticon.com<\/a><\/li>\n<li>Icons from <a href=\"https:\/\/icons8.com\/\">Icons8.com<\/a> - <a href=\"https:\/\/icons8.com\">https:\/\/icons8.com<\/a><\/li>\n<li>Datasets from <a href=\"https:\/\/www.kaggle.com\/\">Kaggle<\/a> - <a href=\"https:\/\/www.kaggle.com\/\">https:\/\/www.kaggle.com\/<\/a><\/li>\n<li><a 
href=\"https:\/\/machinelearningmastery.com\/why-initialize-a-neural-network-with-random-weights\/\">Jason Brownlee - Why Initialize a Neural Network with Random Weights?<\/a><\/li>\n<li><a href=\"https:\/\/openai.com\/blog\/deep-double-descent\/\">OpenAI - Deep Double Descent<\/a><\/li>\n<li><a href=\"https:\/\/taldatech.github.io\">Tal Daniel<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Learning Methods of Deep Learning create by Deepfinder  [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2675,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[18,28],"tags":[],"class_list":["post-2673","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-18","category-28"],"_links":{"self":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2673","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2673"}],"version-history":[{"count":3,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2673\/revisions"}],"predecessor-version":[{"id":2677,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2673\/revisions\/2677"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/media\/2675"}],"wp:attachment":[{"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2673"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2673"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_r
oute=%2Fwp%2Fv2%2Ftags&post=2673"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}