{"id":2717,"date":"2025-01-25T16:24:50","date_gmt":"2025-01-25T08:24:50","guid":{"rendered":"https:\/\/www.gnn.club\/?p=2717"},"modified":"2025-03-12T15:06:32","modified_gmt":"2025-03-12T07:06:32","slug":"tutorial-08-%e4%bc%97%e5%bf%97%e6%88%90%e5%9f%8e%ef%bc%9a%e9%9b%86%e6%88%90%e5%ad%a6%e4%b9%a0ensemble-learning","status":"publish","type":"post","link":"http:\/\/gnn.club\/?p=2717","title":{"rendered":"Tutorial 08 &#8211; \u4f17\u5fd7\u6210\u57ce\uff1a\u96c6\u6210\u5b66\u4e60(Ensemble Learning)"},"content":{"rendered":"<h1>Learning Methods of Deep Learning<\/h1>\n<hr \/>\n<p>create by Deepfinder<\/p>\n<h3><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/bubbles\/50\/000000\/checklist.png\" style=\"height:50px;display:inline\"> Agenda<\/h3>\n<hr \/>\n<ol>\n<li>\u5e08\u5f92\u76f8\u6388\uff1a\u6709\u76d1\u7763\u5b66\u4e60\uff08Supervised Learning\uff09<\/li>\n<li>\u89c1\u5fae\u77e5\u8457\uff1a\u65e0\u76d1\u7763\u5b66\u4e60\uff08Un-supervised Learning\uff09<\/li>\n<li>\u65e0\u5e08\u81ea\u901a\uff1a\u81ea\u76d1\u7763\u5b66\u4e60\uff08Self-supervised Learning\uff09<\/li>\n<li>\u4ee5\u70b9\u5e26\u9762\uff1a\u534a\u76d1\u7763\u5b66\u4e60\uff08Semi-supervised learning\uff09<\/li>\n<li>\u660e\u8fa8\u662f\u975e\uff1a\u5bf9\u6bd4\u5b66\u4e60\uff08Contrastive Learning\uff09<\/li>\n<li>\u4e3e\u4e00\u53cd\u4e09\uff1a\u8fc1\u79fb\u5b66\u4e60\uff08Transfer Learning\uff09<\/li>\n<li>\u9488\u950b\u76f8\u5bf9\uff1a\u5bf9\u6297\u5b66\u4e60\uff08Adversarial Learning\uff09<\/li>\n<li><strong>\u4f17\u5fd7\u6210\u57ce\uff1a\u96c6\u6210\u5b66\u4e60(Ensemble Learning)<\/strong><\/li>\n<li>\u6b8a\u9014\u540c\u5f52\uff1a\u8054\u90a6\u5b66\u4e60\uff08Federated Learning\uff09<\/li>\n<li>\u767e\u6298\u4e0d\u6320\uff1a\u5f3a\u5316\u5b66\u4e60\uff08Reinforcement Learning\uff09<\/li>\n<li>\u6c42\u77e5\u82e5\u6e34\uff1a\u4e3b\u52a8\u5b66\u4e60\uff08Active Learning\uff09<\/li>\n<li>\u4e07\u6cd5\u5f52\u5b97\uff1a\u5143\u5b66\u4e60\uff08Meta-Learning\uff09<\/li>\n<\/ol>\n<h2>Tutorial 08 - 
Strength in Unity: Ensemble Learning<\/h2>\n<p>In machine learning and deep learning, <strong>the performance of a single model is often limited by factors such as model complexity, data quality, and the training method<\/strong>.<\/p>\n<p>To address these limitations, ensemble learning offers a powerful approach: combine the predictions of several models to improve overall performance, robustness, and generalisation.<\/p>\n<p>Ensemble learning is widely used in traditional machine learning, for example in Random Forests and Gradient Boosting Trees. In recent years, with the rapid progress of deep learning, the idea has also been successfully carried over to deep models, yielding better results on complex tasks such as image classification and natural language processing.<\/p>\n<h2><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/dusk\/64\/000000\/popular-topic.png\" style=\"height:50px;display:inline\">  Core Concepts of Ensemble Learning<\/h2>\n<hr \/>\n<p>The basic idea of ensemble learning is to combine the predictions of multiple base models so that &quot;many hands make light work&quot;. The core ingredients are:<\/p>\n<ol>\n<li>Base models<\/li>\n<\/ol>\n<p>A base model is an individual model taking part in the ensemble. In deep learning, a base model can be a convolutional neural network (CNN), a recurrent neural network (RNN), and so on.<\/p>\n<ol start=\"2\">\n<li>Ensemble strategies<\/li>\n<\/ol>\n<ul>\n<li>Bagging (Bootstrap Aggregating): resample the dataset to train several independent base models (as in Random Forests).<\/li>\n<li>Boosting: learn base models sequentially with weighting, so that each one improves on the examples the previous round handled poorly (as in AdaBoost and gradient boosting).<\/li>\n<li>Stacking (stacked generalisation): use a meta-learner to combine the outputs of the base models.<\/li>\n<\/ul>\n<ol start=\"3\">\n<li>Voting mechanisms<\/li>\n<\/ol>\n<ul>\n<li>Hard voting: take a majority vote over the class labels predicted by each model.<\/li>\n<li>Soft voting: combine the predicted probabilities of each model, e.g. by a weighted average or by taking the highest probability.<\/li>\n<\/ul>\n<p>Through these strategies, ensemble learning can effectively reduce the overfitting risk of a single model while exploiting the complementary strengths of different models to improve overall performance.<\/p>\n<h2><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/dusk\/64\/000000\/lego-head.png\" style=\"height:50px;display:inline\"> Ensemble Learning in Deep Learning<\/h2>\n<hr \/>\n<p>Combined with deep learning, ensembles are valuable in several ways:<\/p>\n<ul>\n<li>Model averaging: fusing the predictions of several models by a simple or weighted average mitigates the prediction bias of any single model.<\/li>\n<li>Model stacking: the outputs of several models become input features for a lightweight learner (such as logistic regression or a shallow neural network) that learns a better prediction rule.<\/li>\n<li>Model diversity: different network architectures, training strategies, and data-augmentation techniques help make the ensemble more robust.<\/li>\n<\/ul>\n<pre><code class=\"language-python\">import torch\nimport 
torch.nn as nn\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\nimport numpy as np\n\n# Check whether a GPU is available\ndevice = torch.device(&quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot;)\n\n# Data preprocessing\ntransform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])\n\ntrain_dataset = datasets.CIFAR10(root=&#039;datasets&#039;, train=True, download=True, transform=transform)\ntest_dataset = datasets.CIFAR10(root=&#039;datasets&#039;, train=False, download=True, transform=transform)\n\ntrain_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)\ntest_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)\n\n# Define a simple CNN model\nclass SimpleCNN(nn.Module):\n    def __init__(self):\n        super(SimpleCNN, self).__init__()\n        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)\n        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)\n        self.pool = nn.MaxPool2d(2, 2)\n        self.fc1 = nn.Linear(64 * 16 * 16, 128)  # 64 channels x 16 x 16 after one 2x2 pooling of the 32x32 input\n        self.fc2 = nn.Linear(128, 10)\n        self.relu = nn.ReLU()\n\n    def forward(self, x):\n        x = self.relu(self.conv1(x))\n        x = self.pool(self.relu(self.conv2(x)))\n        x = x.view(x.size(0), -1)  # Flatten to [batch_size, features]\n        x = self.relu(self.fc1(x))\n        x = self.fc2(x)\n        return x\n\n# Define the meta-model (a small neural network)\nclass MetaModel(nn.Module):\n    def __init__(self, input_size, hidden_size, output_size):\n        super(MetaModel, self).__init__()\n        self.fc1 = nn.Linear(input_size, hidden_size)\n        self.fc2 = nn.Linear(hidden_size, output_size)\n        self.relu = nn.ReLU()\n\n    def forward(self, x):\n        x = self.relu(self.fc1(x))\n        x = 
self.fc2(x)\n        return x<\/code><\/pre>\n<pre><code>Files already downloaded and verified\nFiles already downloaded and verified<\/code><\/pre>\n<pre><code class=\"language-python\"># Model training function\ndef train_model(model, train_loader, epochs=10):\n    criterion = nn.CrossEntropyLoss()\n    optimizer = optim.Adam(model.parameters(), lr=0.001)\n\n    model.to(device)  # Move the model to the GPU\n\n    for epoch in range(epochs):\n        model.train()\n        for images, labels in train_loader:\n            images, labels = images.to(device), labels.to(device)  # Move the batch to the GPU\n\n            optimizer.zero_grad()\n            outputs = model(images)\n            loss = criterion(outputs, labels)\n            loss.backward()\n            optimizer.step()\n\n    # Evaluate on the test set (note: uses the global test_loader)\n    model.eval()\n    correct = 0\n    total = 0\n    with torch.no_grad():\n        for images, labels in test_loader:\n            images, labels = images.to(device), labels.to(device)  # Move the batch to the GPU\n\n            outputs = model(images)\n            _, predicted = torch.max(outputs, 1)\n            total += labels.size(0)\n            correct += (predicted == labels).sum().item()\n\n    return correct \/ total\n\n# Collect the outputs of the base models\ndef get_predictions(models, loader):\n    outputs_list = []\n    labels_list = []\n    for model in models:\n        model.eval()  # make sure every base model is in inference mode\n    with torch.no_grad():\n        for images, labels in loader:\n            images, labels = images.to(device), labels.to(device)  # Move the batch to the GPU\n\n            # Run every base model on the batch\n            model_outputs = [model(images) for model in models]\n            outputs_list.append(torch.cat(model_outputs, dim=1))  # Concatenate the base-model outputs\n\n            labels_list.append(labels)  # Keep the labels\n\n    
return torch.cat(outputs_list, dim=0), torch.cat(labels_list, dim=0)<\/code><\/pre>\n<pre><code class=\"language-python\"># Ensemble by Bagging: average the predictions of the base models\ndef bagging_ensemble(models, test_loader):\n    correct = 0\n    total = 0\n    with torch.no_grad():\n        for images, labels in test_loader:\n            images, labels = images.to(device), labels.to(device)  # Move the batch to the GPU\n\n            # Get the predictions of every base model\n            outputs = torch.stack([model(images) for model in models], dim=0)\n            # Average the outputs across models\n            avg_outputs = torch.mean(outputs, dim=0)\n\n            _, predicted = torch.max(avg_outputs, 1)\n            total += labels.size(0)\n            correct += (predicted == labels).sum().item()\n\n    return correct \/ total<\/code><\/pre>\n<pre><code class=\"language-python\"># Boosting-style training and ensembling\ndef boosting_ensemble(models, train_loader, test_loader, num_epochs=10):\n    model_weights = np.ones(len(models))  # Initialise each model weight to 1\n    model_accuracies = []  # Store the accuracy of each model\n\n    # Train each model and record its accuracy\n    for model in models:\n        accuracy = train_model(model, train_loader, epochs=num_epochs)\n        model_accuracies.append(accuracy)\n\n    # Reweight the models in proportion to their accuracy\n    total_accuracy = np.sum(model_accuracies)\n    model_weights = np.array(model_accuracies) \/ total_accuracy  # np.array is required: a plain list cannot be divided by a float\n\n    correct = 0\n    total = 0\n    with torch.no_grad():\n        for images, labels in test_loader:\n            images, labels = images.to(device), labels.to(device)  # Move the batch to the 
GPU\n\n            # Get the predictions of every base model\n            outputs = torch.stack([model(images) for model in models], dim=0)\n            # Put the weights in a tensor on the same device (float32, to match the model outputs)\n            weighted_outputs = torch.sum(outputs * torch.tensor(model_weights, dtype=torch.float32, device=device).view(-1, 1, 1), dim=0)\n            _, predicted = torch.max(weighted_outputs, 1)\n            total += labels.size(0)\n            correct += (predicted == labels).sum().item()\n\n    return correct \/ total<\/code><\/pre>\n<pre><code class=\"language-python\"># Stacking ensemble\ndef stacking_ensemble(models, meta_model, train_loader, test_loader, epochs=20):\n    # Base-model outputs for the training and test sets\n    X_train, y_train = get_predictions(models, train_loader)\n    X_test, y_test = get_predictions(models, test_loader)\n\n    # Softmax-normalise the concatenated base-model outputs\n    X_train = torch.softmax(X_train, dim=1)\n    X_test = torch.softmax(X_test, dim=1)\n\n    # Train the meta-model\n    criterion = nn.CrossEntropyLoss()\n    optimizer = optim.Adam(meta_model.parameters(), lr=0.001, weight_decay=1e-4)  # weight decay for regularisation\n\n    meta_model.to(device)  # Move the meta-model to the GPU\n    X_train, y_train = X_train.to(device), y_train.to(device)\n    X_test, y_test = X_test.to(device), y_test.to(device)\n\n    # Split into training and validation sets\n    dataset_size = X_train.size(0)\n    indices = torch.randperm(dataset_size)\n    split = int(dataset_size * 0.8)  # 80% train, 20% validation\n    train_indices, val_indices = indices[:split], indices[split:]\n\n    X_train_split, y_train_split = X_train[train_indices], y_train[train_indices]\n    X_val, y_val = 
X_train[val_indices], y_train[val_indices]\n\n    best_accuracy = 0.0\n    for epoch in range(epochs):\n        meta_model.train()\n        optimizer.zero_grad()\n        outputs = meta_model(X_train_split)\n        loss = criterion(outputs, y_train_split)\n        loss.backward()\n        optimizer.step()\n\n        # Evaluate on the validation set\n        meta_model.eval()\n        with torch.no_grad():\n            val_outputs = meta_model(X_val)\n            _, val_predicted = torch.max(val_outputs, 1)\n            val_accuracy = (val_predicted == y_val).float().mean().item()\n\n        # Checkpoint the best meta-model\n        if val_accuracy &gt; best_accuracy:\n            best_accuracy = val_accuracy\n            torch.save(meta_model.state_dict(), &#039;best_meta_model.pth&#039;)\n\n    # Load the best checkpoint\n    meta_model.load_state_dict(torch.load(&#039;best_meta_model.pth&#039;, weights_only=True))\n\n    # Evaluate on the test set\n    meta_model.eval()\n    with torch.no_grad():\n        outputs = meta_model(X_test)\n        _, predicted = torch.max(outputs, 1)\n        accuracy = (predicted == y_test).float().mean().item()\n\n    return accuracy\n<\/code><\/pre>\n<pre><code class=\"language-python\"># Baseline: train a single network\ndef baseline_model(train_loader, test_loader):\n    model = SimpleCNN().to(device)\n    accuracy = train_model(model, train_loader)\n    return accuracy\n\n# Train the baseline model\nbaseline_accuracy = baseline_model(train_loader, test_loader)\nprint(f&quot;Baseline model accuracy: {baseline_accuracy * 100:.2f}%&quot;)<\/code><\/pre>\n<pre><code>Baseline model accuracy: 68.07%<\/code><\/pre>\n<pre><code class=\"language-python\"># Main program\nnum_models = 3  # train an ensemble of three models\nmodels = [SimpleCNN() for _ in range(num_models)]\n\n# 
Train all base models (needed for Bagging and Boosting)\nfor model in models:\n    train_model(model, train_loader, epochs=5)\n\n# Ensemble the models with Bagging\nbagging_accuracy = bagging_ensemble(models, test_loader)\nprint(f&quot;Bagging accuracy: {bagging_accuracy * 100:.2f}%&quot;)\n\n# Train and ensemble with Boosting (note: this trains the already-trained models for further epochs)\nboosting_accuracy = boosting_ensemble(models, train_loader, test_loader, num_epochs=5)\nprint(f&quot;Boosting accuracy: {boosting_accuracy * 100:.2f}%&quot;)\n\n# Ensemble the models with Stacking\ninput_size = 10 * num_models  # each base model outputs 10 class scores; this is the concatenated input size\nmeta_model = MetaModel(input_size, hidden_size=64, output_size=10)  # a small neural network as the meta-model\nstacking_accuracy = stacking_ensemble(models, meta_model, train_loader, test_loader, epochs=100)\nprint(f&quot;Stacking accuracy: {stacking_accuracy * 100:.2f}%&quot;)\n<\/code><\/pre>\n<pre><code>Bagging accuracy: 73.26%\nBoosting accuracy: 73.35%\nStacking accuracy: 71.43%<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Learning Methods of Deep Learning created by Deepfinder  
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2718,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[18,28],"tags":[],"class_list":["post-2717","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-18","category-28"],"_links":{"self":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2717","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2717"}],"version-history":[{"count":1,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2717\/revisions"}],"predecessor-version":[{"id":2719,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2717\/revisions\/2719"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/media\/2718"}],"wp:attachment":[{"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2717"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2717"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2717"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}