{"id":2609,"date":"2025-01-24T20:55:24","date_gmt":"2025-01-24T12:55:24","guid":{"rendered":"https:\/\/www.gnn.club\/?p=2609"},"modified":"2025-03-12T15:07:03","modified_gmt":"2025-03-12T07:07:03","slug":"tutorial-02-%e8%a7%81%e5%be%ae%e7%9f%a5%e8%91%97%ef%bc%9a%e6%97%a0%e7%9b%91%e7%9d%a3%e5%ad%a6%e4%b9%a0%ef%bc%88un-supervised-learning%ef%bc%89","status":"publish","type":"post","link":"http:\/\/gnn.club\/?p=2609","title":{"rendered":"Tutorial 02 &#8211; Seeing the Big Picture from Small Details: Unsupervised Learning"},"content":{"rendered":"<h1>Learning Methods of Deep Learning<\/h1>\n<hr \/>\n<p>Created by Deepfinder<\/p>\n<h3><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/bubbles\/50\/000000\/checklist.png\" style=\"height:50px;display:inline\"> Agenda<\/h3>\n<hr \/>\n<ol>\n<li>Learning from a Teacher: Supervised Learning<\/li>\n<li><strong>Seeing the Big Picture from Small Details: Unsupervised Learning<\/strong><\/li>\n<li>Self-Taught: Self-supervised Learning<\/li>\n<li>From One Point to the Whole: Semi-supervised Learning<\/li>\n<li>Telling Right from Wrong: Contrastive Learning<\/li>\n<li>Learning by Analogy: Transfer Learning<\/li>\n<li>Tit for Tat: Adversarial Learning<\/li>\n<li>Strength in Numbers: Ensemble Learning<\/li>\n<li>Different Paths, Same Destination: Federated Learning<\/li>\n<li>Perseverance Pays Off: Reinforcement Learning<\/li>\n<li>Thirst for Knowledge: Active Learning<\/li>\n<li>All Methods, One Origin: Meta-Learning<\/li>\n<\/ol>\n<h2>Tutorial 02 - Seeing the Big Picture from Small Details: Unsupervised Learning<\/h2>\n<h3><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/color\/96\/000000\/code.png\" style=\"height:50px;display:inline\"> Autoencoders<\/h3>\n<hr \/>\n<ul>\n<li>Most natural data is high-dimensional; images are a prime example. Consider the MNIST (handwritten digits) dataset, where each image has $28 \\times 28 = 784$ pixels, so it can be represented by a vector of length 784.<\/li>\n<li>But do we really need 784 values to represent a digit? No. We assume the data lies in a low-dimensional space that suffices to describe the observations. In the case of MNIST, we could choose to represent each digit as a one-hot vector, which requires only 10 dimensions. We can therefore <strong>encode<\/strong> high-dimensional observations in a low-dimensional space.<\/li>\n<\/ul>\n<h4><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/?size=100&id=91CnU00i6HLv&format=png&color=000000\" style=\"height:50px;display:inline\"> 
But how can we learn a meaningful low-dimensional representation?<\/h4>\n<p>The general idea is to reconstruct, or <strong>decode<\/strong>, the low-dimensional representation back into the high-dimensional one, and to use the reconstruction error (via its gradients) to find the best representation. This is the core idea behind <strong>autoencoders<\/strong>.<\/p>\n<ul>\n<li><strong>Autoencoder<\/strong> - a model that takes data as input and discovers a latent-state representation of that data. The input is transformed into an encoding vector in which each dimension represents some learned attribute of the data. The key detail to grasp is that the encoder network outputs a single value for each encoding dimension; the decoder network then takes these values and attempts to recreate the original input.<\/li>\n<li>An autoencoder has <strong>three parts<\/strong>: an encoder, a decoder, and a \u201closs\u201d function that maps one to the other. For the simplest autoencoders (those that compress the input and then reconstruct it from the compressed representation), we can think of the \u201closs\u201d as describing the amount of information lost during reconstruction.<\/li>\n<\/ul>\n<p align=\"center\">\n  <img decoding=\"async\" 
src=\"https:\/\/gnnclub-1311496010.cos.ap-beijing.myqcloud.com\/wp-content\/uploads\/2025\/01\/20250124203609148.png\" style=\"height:500px\">\n<\/p>\n<p><a href=\"https:\/\/towardsdatascience.com\/applied-deep-learning-part-3-autoencoders-1c083af4d798\">Image Source<\/a><\/p>\n<p>Let's implement it in PyTorch using what we have learnt so far!<\/p>\n<pre><code class=\"language-python\">import torch\nimport torch.nn as nn\nimport torchvision\nimport matplotlib.pyplot as plt\nimport time\n# Fashion-MNIST\nfmnist_train_dataset = torchvision.datasets.FashionMNIST(root=&#039;.\/datasets\/&#039;,\n                                           train=True, \n                                           transform=torchvision.transforms.ToTensor(),\n                                           download=True)\n\nfmnist_test_dataset = torchvision.datasets.FashionMNIST(root=&#039;.\/datasets&#039;,\n                                          train=False, \n                                          transform=torchvision.transforms.ToTensor())\n\n# Data loader\nfmnist_train_loader = torch.utils.data.DataLoader(dataset=fmnist_train_dataset,\n                                           batch_size=64, \n                                           shuffle=True, drop_last=True)\n\nfmnist_test_loader = torch.utils.data.DataLoader(dataset=fmnist_test_dataset,\n                                          batch_size=64, \n                                          shuffle=False)\n\n# let&#039;s plot some of the samples from the test set\nexamples = enumerate(fmnist_test_loader)\nbatch_idx, (example_data, example_targets) = next(examples)\nprint(&quot;shape: \\n&quot;, example_data.shape)\nfig = plt.figure()\nfor i in range(6):\n    ax = fig.add_subplot(2,3,i+1)\n    ax.imshow(example_data[i][0], cmap=&#039;gray&#039;, interpolation=&#039;none&#039;)\n    ax.set_title(&quot;Ground Truth: {}&quot;.format(example_targets[i]))\n    
ax.set_axis_off()\nplt.tight_layout()<\/code><\/pre>\n<pre><code>shape: \n torch.Size([64, 1, 28, 28])<\/code><\/pre>\n<p align=\"center\">\n  <img decoding=\"async\" src=\"https:\/\/gnnclub-1311496010.cos.ap-beijing.myqcloud.com\/wp-content\/uploads\/2025\/01\/20250124204522573.png\" style=\"height:400px\">\n<\/p>\n<pre><code class=\"language-python\">class AutoEncoder(nn.Module):\n\n    def __init__(self, input_dim=28*28, hidden_dim=256, latent_dim=10):\n        super(AutoEncoder, self).__init__()\n\n        self.input_dim = input_dim\n        self.hidden_dim = hidden_dim\n        self.latent_dim = latent_dim\n\n        # define the encoder\n        self.encoder = nn.Sequential(nn.Linear(self.input_dim, self.hidden_dim),\n                                     nn.ReLU(), \n                                     nn.Linear(self.hidden_dim, self.hidden_dim),\n                                     nn.ReLU(),\n                                     nn.Linear(self.hidden_dim, self.latent_dim)\n                                    )\n\n        # define decoder\n        self.decoder = nn.Sequential(nn.Linear(self.latent_dim, self.hidden_dim),\n                                     nn.ReLU(),\n                                     nn.Linear(self.hidden_dim, self.hidden_dim),\n                                     nn.ReLU(),\n                                     nn.Linear(self.hidden_dim, self.input_dim),\n                                     nn.Sigmoid())\n\n    def forward(self, x):\n        x = self.encoder(x)\n        x = self.decoder(x)\n        return x\n\n    def get_latent_rep(self, x):\n        return self.encoder(x)<\/code><\/pre>\n<pre><code class=\"language-python\"># hyper-parameters:\nnum_epochs = 5\nlearning_rate = 0.001\n\n# Device configuration, as before\ndevice = torch.device(&#039;cuda:0&#039; if torch.cuda.is_available() else &#039;cpu&#039;)\n\n# create model, send it to device\nmodel = AutoEncoder(input_dim=28 * 28, hidden_dim=128, 
latent_dim=10).to(device)\n\n# Loss and optimizer\ncriterion = nn.BCELoss()  # binary cross entropy, as pixels are in [0,1]\noptimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)<\/code><\/pre>\n<pre><code class=\"language-python\"># Train the model\ntotal_step = len(fmnist_train_loader)\nstart_time = time.time()\nfor epoch in range(num_epochs):\n    for i, (images, labels) in enumerate(fmnist_train_loader): \n        images = images.to(device).view(images.size(0), -1)  # flatten each image to a vector\n\n        # Forward pass\n        outputs = model(images)\n        loss = criterion(outputs, images)\n\n        # Backward and optimize - ALWAYS IN THIS ORDER!\n        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()\n\n        if (i + 1) % 100 == 0:\n            print (&#039;Epoch [{}\/{}], Step [{}\/{}], Loss: {:.4f}, Time: {:.4f} secs&#039; \n                   .format(epoch + 1, num_epochs, i + 1, total_step, loss.item(), time.time() - start_time))<\/code><\/pre>\n<pre><code>Epoch [1\/5], Step [100\/937], Loss: 0.3769, Time: 0.4937 secs\nEpoch [1\/5], Step [200\/937], Loss: 0.3284, Time: 0.7780 secs\nEpoch [1\/5], Step [300\/937], Loss: 0.3262, Time: 1.0713 secs\nEpoch [1\/5], Step [400\/937], Loss: 0.3260, Time: 1.3613 secs\nEpoch [1\/5], Step [500\/937], Loss: 0.3224, Time: 1.6620 secs\nEpoch [1\/5], Step [600\/937], Loss: 0.3147, Time: 1.9529 secs\nEpoch [1\/5], Step [700\/937], Loss: 0.3029, Time: 2.2464 secs\nEpoch [1\/5], Step [800\/937], Loss: 0.3255, Time: 2.5387 secs\nEpoch [1\/5], Step [900\/937], Loss: 0.3377, Time: 2.8275 secs\nEpoch [2\/5], Step [100\/937], Loss: 0.3174, Time: 3.2292 secs\nEpoch [2\/5], Step [200\/937], Loss: 0.3075, Time: 3.5164 secs\nEpoch [2\/5], Step [300\/937], Loss: 0.3102, Time: 3.8110 secs\nEpoch [2\/5], Step [400\/937], Loss: 0.2994, Time: 4.0883 secs\nEpoch [2\/5], Step [500\/937], Loss: 0.3047, Time: 4.3774 secs\nEpoch [2\/5], Step [600\/937], Loss: 0.2948, Time: 4.6684 secs\nEpoch [2\/5], Step [700\/937], Loss: 0.2991, Time: 
4.9575 secs\nEpoch [2\/5], Step [800\/937], Loss: 0.3083, Time: 5.2461 secs\nEpoch [2\/5], Step [900\/937], Loss: 0.3007, Time: 5.5392 secs\nEpoch [3\/5], Step [100\/937], Loss: 0.2870, Time: 5.9557 secs\nEpoch [3\/5], Step [200\/937], Loss: 0.3090, Time: 6.2507 secs\nEpoch [3\/5], Step [300\/937], Loss: 0.3068, Time: 6.5528 secs\nEpoch [3\/5], Step [400\/937], Loss: 0.2956, Time: 6.8511 secs\nEpoch [3\/5], Step [500\/937], Loss: 0.2904, Time: 7.1595 secs\nEpoch [3\/5], Step [600\/937], Loss: 0.3011, Time: 7.4549 secs\nEpoch [3\/5], Step [700\/937], Loss: 0.2863, Time: 7.7463 secs\nEpoch [3\/5], Step [800\/937], Loss: 0.2903, Time: 8.0476 secs\nEpoch [3\/5], Step [900\/937], Loss: 0.2843, Time: 8.3461 secs\nEpoch [4\/5], Step [100\/937], Loss: 0.3037, Time: 8.7565 secs\nEpoch [4\/5], Step [200\/937], Loss: 0.3125, Time: 9.0489 secs\nEpoch [4\/5], Step [300\/937], Loss: 0.2853, Time: 9.3417 secs\nEpoch [4\/5], Step [400\/937], Loss: 0.3043, Time: 9.6412 secs\nEpoch [4\/5], Step [500\/937], Loss: 0.2971, Time: 9.9420 secs\nEpoch [4\/5], Step [600\/937], Loss: 0.2975, Time: 10.2368 secs\nEpoch [4\/5], Step [700\/937], Loss: 0.2869, Time: 10.5395 secs\nEpoch [4\/5], Step [800\/937], Loss: 0.2910, Time: 10.8345 secs\nEpoch [4\/5], Step [900\/937], Loss: 0.3132, Time: 11.1254 secs\nEpoch [5\/5], Step [100\/937], Loss: 0.2964, Time: 11.5465 secs\nEpoch [5\/5], Step [200\/937], Loss: 0.2909, Time: 11.8279 secs\nEpoch [5\/5], Step [300\/937], Loss: 0.2817, Time: 12.1106 secs\nEpoch [5\/5], Step [400\/937], Loss: 0.3001, Time: 12.3899 secs\nEpoch [5\/5], Step [500\/937], Loss: 0.2937, Time: 12.6798 secs\nEpoch [5\/5], Step [600\/937], Loss: 0.2700, Time: 12.9821 secs\nEpoch [5\/5], Step [700\/937], Loss: 0.2639, Time: 13.2767 secs\nEpoch [5\/5], Step [800\/937], Loss: 0.2882, Time: 13.5852 secs\nEpoch [5\/5], Step [900\/937], Loss: 0.2789, Time: 13.8916 secs<\/code><\/pre>\n<pre><code class=\"language-python\"># let&#039;s see some of the reconstructions\nmodel.eval()  # put 
in evaluation mode (note: eval() alone does not disable gradient tracking; use torch.no_grad() for that)\nexamples = enumerate(fmnist_test_loader)\nbatch_idx, (example_data, example_targets) = next(examples)\nprint(&quot;shape: \\n&quot;, example_data.shape)\nfig = plt.figure()\nfor i in range(3):\n    ax = fig.add_subplot(2,3,i+1)\n    ax.imshow(example_data[i][0], cmap=&#039;gray&#039;, interpolation=&#039;none&#039;)\n    ax.set_title(&quot;Ground Truth: {}&quot;.format(example_targets[i]))\n    ax.set_axis_off()\n\n    ax = fig.add_subplot(2,3,i+4)\n    recon_img = model(example_data[i][0].view(1, -1).to(device)).detach().cpu().numpy().reshape(28, 28)\n    ax.imshow(recon_img, cmap=&#039;gray&#039;)\n    ax.set_title(&quot;Reconstruction of: {}&quot;.format(example_targets[i]))\n    ax.set_axis_off()\nplt.tight_layout()<\/code><\/pre>\n<pre><code>shape: \n torch.Size([64, 1, 28, 28])<\/code><\/pre>\n<p align=\"center\">\n  <img decoding=\"async\" src=\"https:\/\/gnnclub-1311496010.cos.ap-beijing.myqcloud.com\/wp-content\/uploads\/2025\/01\/20250124204618495.png\" style=\"height:400px\">\n<\/p>\n<pre><code class=\"language-python\"># let&#039;s compare different dimensionality reduction methods\nn_neighbors = 10\nn_components = 2\nn_points = 500\n\nfmnist_test_loader = torch.utils.data.DataLoader(dataset=fmnist_test_dataset,\n                                          batch_size=n_points, \n                                          shuffle=False)\nX, labels = next(iter(fmnist_test_loader))\nlatent_X = model.get_latent_rep(X.to(device).view(n_points, -1)).detach().cpu().numpy()\nlabels = labels.numpy()<\/code><\/pre>\n<pre><code class=\"language-python\"># scikit-learn imports\nfrom sklearn.manifold import LocallyLinearEmbedding, Isomap, TSNE\nfrom sklearn.decomposition import PCA, KernelPCA\nimport numpy as np\n\nfig = plt.figure(figsize=(20,5))\n\n# PCA\nt0 = time.time()\nx_pca = PCA(n_components).fit_transform(latent_X)\nt1 = time.time()\nprint(&quot;PCA time: %.2g sec&quot; % (t1 - t0))\nax = fig.add_subplot(1, 3, 
1)\nax.scatter(x_pca[:, 0], x_pca[:, 1], c=labels, cmap=plt.cm.Spectral)\nax.set_title(&#039;PCA&#039;)\n\n# KPCA\nt0 = time.time()\nx_kpca = KernelPCA(n_components, kernel=&#039;rbf&#039;).fit_transform(latent_X)\nt1 = time.time()\nprint(&quot;KPCA time: %.2g sec&quot; % (t1 - t0))\nax = fig.add_subplot(1, 3, 2)\nax.scatter(x_kpca[:, 0], x_kpca[:, 1], c=labels, cmap=plt.cm.Spectral)\nax.set_title(&#039;KernelPCA&#039;)\n\n# t-SNE\nt0 = time.time()\nx_tsne = TSNE(n_components).fit_transform(latent_X)\nt1 = time.time()\nprint(&quot;t-SNE time: %.2g sec&quot; % (t1 - t0))\nax = fig.add_subplot(1, 3, 3)\nscatter = ax.scatter(x_tsne[:, 0], x_tsne[:, 1], c=labels, cmap=plt.cm.Spectral)\nax.set_title(&#039;t-SNE&#039;)\n\nbounds = np.linspace(0, 10, 11)\ncb = plt.colorbar(scatter, spacing=&#039;proportional&#039;, ticks=bounds)\ncb.set_label(&#039;Classes Colors&#039;)\n\nplt.tight_layout()<\/code><\/pre>\n<pre><code>PCA time: 0.0079 sec\nKPCA time: 0.023 sec\nt-SNE time: 0.39 sec<\/code><\/pre>\n<p align=\"center\">\n  <img decoding=\"async\" src=\"https:\/\/gnnclub-1311496010.cos.ap-beijing.myqcloud.com\/wp-content\/uploads\/2025\/01\/20250124204650863.png\" style=\"height:300px\">\n<\/p>\n<h2><img decoding=\"async\" src=\"https:\/\/img.icons8.com\/dusk\/64\/000000\/prize.png\" style=\"height:50px;display:inline\"> Credits<\/h2>\n<hr \/>\n<ul>\n<li>Icons made by <a href=\"https:\/\/www.flaticon.com\/authors\/becris\" title=\"Becris\">Becris<\/a> from <a href=\"https:\/\/www.flaticon.com\/\" title=\"Flaticon\">www.flaticon.com<\/a><\/li>\n<li>Icons from <a href=\"https:\/\/icons8.com\/\">Icons8.com<\/a> - <a href=\"https:\/\/icons8.com\">https:\/\/icons8.com<\/a><\/li>\n<li>Datasets from <a href=\"https:\/\/www.kaggle.com\/\">Kaggle<\/a> - <a href=\"https:\/\/www.kaggle.com\/\">https:\/\/www.kaggle.com\/<\/a><\/li>\n<li><a href=\"https:\/\/machinelearningmastery.com\/why-initialize-a-neural-network-with-random-weights\/\">Jason Brownlee - Why Initialize a Neural 
Network with Random Weights?<\/a><\/li>\n<li><a href=\"https:\/\/openai.com\/blog\/deep-double-descent\/\">OpenAI - Deep Double Descent<\/a><\/li>\n<li><a href=\"https:\/\/taldatech.github.io\">Tal Daniel<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Learning Methods of Deep Learning, created by Deepfinder  [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2616,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[18,28],"tags":[],"class_list":["post-2609","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-18","category-28"],"_links":{"self":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2609","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2609"}],"version-history":[{"count":13,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2609\/revisions"}],"predecessor-version":[{"id":2626,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/posts\/2609\/revisions\/2626"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=\/wp\/v2\/media\/2616"}],"wp:attachment":[{"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2609"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2609"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/gnn.club\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2609"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}