
Three-layer vs. four-layer BP neural networks, based on Tariq Rashid's "Make Your Own Neural Network"

  1. Three-layer neural network
    1. Node counts: 784 * 100 * 10
    2. Learning rate: 0.1
    3. Test-set scores (five runs)
      1. 0.9512
      2. 0.9497
      3. 0.9506
      4. 0.9505
      5. 0.9464
    4. Average score: 0.94968
  2. Four-layer neural network
    1. Node counts: 784 * 100 * 100 * 10
    2. Learning rate: 0.1
    3. Test-set scores (five runs)
      1. 0.9095
      2. 0.9142
      3. 0.9033
      4. 0.9130
      5. 0.9046
    4. Average score: 0.90892
  3. Conclusion: for this simple network on the MNIST dataset, adding a hidden layer did not improve accuracy.
  4. Code (adapted from Tariq Rashid's "Make Your Own Neural Network")
    1. Three-layer network
                          
      # python notebook for Make Your Own Neural Network
      # code for 3-layer neural network, and code for learning the MNIST dataset
      # 20190603
      
      import numpy as np
      import matplotlib.pyplot as plt
      # scipy.special for the sigmoid function expit()
      import scipy.special as special
      # ensure the plots are inside this notebook, not an external window
      %matplotlib inline
      
      # neural network class definition
      class neuralNetwork(object):
          
          # initialise the neural network
          def __init__(self, inputNodes, hiddenNodes, outputNodes, learningRate=0.5):
              # set number of nodes in each input, hidden, output layer
              self.iNodes = inputNodes
              self.hNodes = hiddenNodes
              self.oNodes = outputNodes
              # link weight matrices, wih and who
              # weights inside the arrays are w_i_j, where link is from node i to node j in 
              # the next layer
              # w11 w21
              # w12 w22 etc
              # pow(x, y) returns x raised to the power y
              self.wih = np.random.normal(0.0, pow(self.hNodes, -0.5), (self.hNodes, self.iNodes))
              self.who = np.random.normal(0.0, pow(self.oNodes, -0.5), (self.oNodes, self.hNodes))
              # learning rate
              self.lr = learningRate
              # activation function is the sigmoid function
              # a lambda that takes x and returns special.expit(x), the sigmoid
              self.activation_function = lambda x: special.expit(x)
              pass
          
          # train the neural network
          def train(self, inputs_list, targets_list):
              # convert inputs list to 2d array
              inputs = np.array(inputs_list, ndmin=2).T
              targets = np.array(targets_list, ndmin=2).T
              
              # calculate signals into hidden layer
              hidden_inputs = np.dot(self.wih, inputs)
              # calculate the signals emerging from hidden layer
              hidden_outputs = self.activation_function(hidden_inputs)
              
              # calculate signals into final output layer
              final_inputs = np.dot(self.who, hidden_outputs)
              # calculate signals emerging from final output layer
              final_outputs = self.activation_function(final_inputs)
              
              # error is the (targets - final_outputs)
              output_errors = (targets - final_outputs)
              
              # hidden layer error is the output_errors, split by weights, recombined at 
              # hidden nodes
              hidden_errors = np.dot(self.who.T, output_errors)
              
              # update the weights for the links between the hidden and output layers
              # np.transpose(a) returns the transpose of matrix a
              self.who += self.lr * np.dot(output_errors * final_outputs * 
                                           (1.0 - final_outputs), np.transpose(hidden_outputs))
              
              # update the weights for the links between the input and hidden layers
              self.wih += self.lr * np.dot(hidden_errors * hidden_outputs *
                                          (1.0 - hidden_outputs), np.transpose(inputs))
              
              pass
          
          # query the neural network
          def query(self, inputs_list):
              # convert inputs list to 2d array
              # ndmin=2 ensures the array has at least two dimensions
              # .T transposes the matrix
              inputs = np.array(inputs_list, ndmin=2).T
              
              # calculate signals into hidden layer
              hidden_inputs = np.dot(self.wih, inputs)
              # calculate the signals emerging from hidden layer
              hidden_outputs = self.activation_function(hidden_inputs)
              
              # calculate signals into final output layer
              final_inputs = np.dot(self.who, hidden_outputs)
              # calculate the signals emerging from final output layer
              final_outputs = self.activation_function(final_inputs)
              
              return final_outputs
          
      
      # number of input, hidden and output nodes
      input_nodes = 784
      hidden_nodes = 100
      output_nodes = 10
      
      # learning rate is 0.1
      learning_rate = 0.1
      
      # create an instance of the neural network
      n = neuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)
      
      
      # load the mnist training data CSV file into a list
      training_data_file = open("mnist_dataset/mnist_train.csv", 'r')
      training_data_list = training_data_file.readlines()
      training_data_file.close()
      
      # train the neural network
      # go through all records in the training data set
      final_training_data = []
      for record in training_data_list:
          # split the record by the ',' commas
          all_values = record.split(',')
          # scale and shift the inputs
          inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
          # create the target output values (all 0.01, except the desired label which is 0.99)
          targets = np.zeros(output_nodes) + 0.01
          # all_values[0] is the target label for this record
          targets[int(all_values[0])] = 0.99
          # collect the prepared (input, target) pair
          final_training_data.append([inputs, targets])
          pass
      
      
      # train Model
      for i in range(len(final_training_data)):
          n.train(final_training_data[i][0], final_training_data[i][1])
      
      
      # load the mnist test data CSV file into a list
      test_data_file = open("mnist_dataset/mnist_test.csv", 'r')
      test_data_list = test_data_file.readlines()
      test_data_file.close()
      
      
      # test the neural network
      # scorecard for how well the network performs, initially empty 
      scorecard = []
      
      # go through all the records in the test data set
      for record in test_data_list:
          # split the record by the ',' commas
          all_values = record.split(',')
          # correct answer is the first value
          correct_label = int(all_values[0])
      #     print("Correct label : ", correct_label)
          # scale and shift the inputs
          inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
          # query the network
          outputs = n.query(inputs)
          # the index of the highest value corresponds to the label
          label = np.argmax(outputs)
      #     print("NeuralNetwork's answer : ", label)
      
          # append correct or incorrect to list
          if(label == correct_label):
              # network's answer matches correct answer, add 1 to scorecard
              scorecard.append(1)
          else:
              #network's answer doesn't match correct answer, add 0 to scorecard
              scorecard.append(0)
              
              pass
          pass
      
      
      
      # calculate the performance score, the fraction of correct answers
      scorecard_array = np.asarray(scorecard)
      print("Performance : ", scorecard_array.sum() / scorecard_array.size)
                          
                        

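As a quick sanity check, the average reported for the three-layer network can be reproduced from the five scores listed above:

```python
# Reproduce the reported five-run average for the 3-layer network.
scores_3_layer = [0.9512, 0.9497, 0.9506, 0.9505, 0.9464]
average = sum(scores_3_layer) / len(scores_3_layer)
print(round(average, 5))  # 0.94968
```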

    2. Four-layer network
                          
      # python notebook for Make Your Own Neural Network
      # code for 4-layer neural network, and code for learning the MNIST dataset
      # 20190604
      
      
      import numpy as np
      import matplotlib.pyplot as plt
      # scipy.special for the sigmoid function expit()
      import scipy.special as special
      # ensure the plots are inside this notebook, not an external window
      %matplotlib inline
      
      
      # neural network class definition
      class neuralNetwork(object):
          
          # initialise the neural network
          def __init__(self, inputNodes, hiddenNodes, hiddenNodes_2, outputNodes, learningRate=0.5):
              # set number of nodes in each input, hidden, output layer
              self.iNodes = inputNodes
              self.hNodes = hiddenNodes
              self.hNodes_2 = hiddenNodes_2
              self.oNodes = outputNodes
              # link weight matrices, wih and who
              # weights inside the arrays are w_i_j, where link is from node i to node j in 
              # the next layer
              # w11 w21
              # w12 w22 etc
              # pow(x, y) returns x raised to the power y
              self.wih = np.random.normal(0.0, pow(self.hNodes, -0.5), (self.hNodes, self.iNodes))
              self.whh = np.random.normal(0.0, pow(int((self.hNodes + self.hNodes_2) / 2) , -0.5), (self.hNodes_2, self.hNodes))
              self.who = np.random.normal(0.0, pow(self.oNodes, -0.5), (self.oNodes, self.hNodes_2))
              # learning rate
              self.lr = learningRate
              # activation function is the sigmoid function
              # a lambda that takes x and returns special.expit(x), the sigmoid
              self.activation_function = lambda x: special.expit(x)
              pass
          
          # train the neural network
          def train(self, inputs_list, targets_list):
              # convert inputs list to 2d array
              inputs = np.array(inputs_list, ndmin=2).T
              targets = np.array(targets_list, ndmin=2).T
              
              # calculate signals into hidden layer
              hidden_inputs = np.dot(self.wih, inputs)
              # calculate the signals emerging from hidden layer
              hidden_outputs = self.activation_function(hidden_inputs)
              
              # calculate signals into hidden_2 layer
              hidden_2_inputs = np.dot(self.whh, hidden_outputs)
              # calculate the signals emerging from hidden_2 layer
              hidden_2_outputs = self.activation_function(hidden_2_inputs)
              
              # calculate signals into final output layer
              final_inputs = np.dot(self.who, hidden_2_outputs)
              # calculate signals emerging from final output layer
              final_outputs = self.activation_function(final_inputs)
              
              # error is the (targets - final_outputs)
              output_errors = (targets - final_outputs)
              
              
              # hidden_2 layer error is the output_errors, split by the who
              # weights, recombined at the hidden_2 nodes
              hidden_2_errors = np.dot(self.who.T, output_errors)
              
              # hidden layer error is the hidden_2_errors, split by the whh weights
              hidden_errors = np.dot(self.whh.T, hidden_2_errors)
              
              # update the weights for the links between the hidden and output layers
              # np.transpose(a) returns the transpose of matrix a
              self.who += self.lr * np.dot(output_errors * final_outputs * 
                                           (1.0 - final_outputs), np.transpose(hidden_2_outputs))
              
              
              # update the weights for the links between the hidden and hidden_2 layers
              self.whh += self.lr * np.dot(hidden_2_errors * hidden_2_outputs *
                                          (1.0 - hidden_2_outputs), np.transpose(hidden_outputs))
              
              # update the weights for the links between the input and hidden layers
              self.wih += self.lr * np.dot(hidden_errors * hidden_outputs *
                                          (1.0 - hidden_outputs), np.transpose(inputs))
              
              
              
              pass
          
          # query the neural network
          def query(self, inputs_list):
              # convert inputs list to 2d array
              # ndmin=2 ensures the array has at least two dimensions
              # .T transposes the matrix
              inputs = np.array(inputs_list, ndmin=2).T
              
              # calculate signals into hidden layer
              hidden_inputs = np.dot(self.wih, inputs)
              # calculate the signals emerging from hidden layer
              hidden_outputs = self.activation_function(hidden_inputs)
              
              # calculate signals into hidden_2 layer
              hidden_2_inputs = np.dot(self.whh, hidden_outputs)
              # calculate the signals emerging from hidden_2 layer
              hidden_2_outputs = self.activation_function(hidden_2_inputs)
              
              # calculate signals into final output layer
              final_inputs = np.dot(self.who, hidden_2_outputs)
              # calculate the signals emerging from final output layer
              final_outputs = self.activation_function(final_inputs)
              
              return final_outputs
          
      
      
      # number of input, hidden and output nodes
      input_nodes = 784
      hidden_nodes = 100
      hidden_2_nodes = 100
      output_nodes = 10
      
      # learning rate is 0.1
      learning_rate = 0.1
      
      # create an instance of the neural network
      n = neuralNetwork(input_nodes, hidden_nodes, hidden_2_nodes, output_nodes, learning_rate)
      
      
      # load the mnist training data CSV file into a list
      training_data_file = open("mnist_dataset/mnist_train.csv", 'r')
      training_data_list = training_data_file.readlines()
      training_data_file.close()
      
      
      # train the neural network
      # go through all records in the training data set
      final_training_data = []
      for record in training_data_list:
          # split the record by the ',' commas
          all_values = record.split(',')
          # scale and shift the inputs
          inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
          # create the target output values (all 0.01, except the desired label which is 0.99)
          targets = np.zeros(output_nodes) + 0.01
          # all_values[0] is the target label for this record
          targets[int(all_values[0])] = 0.99
          # collect the prepared (input, target) pair
          final_training_data.append([inputs, targets])
          pass
      
      
      
      
      # train Model
      for i in range(len(final_training_data)):
          n.train(final_training_data[i][0], final_training_data[i][1])
      
      
      
      # load the mnist test data CSV file into a list
      test_data_file = open("mnist_dataset/mnist_test.csv", 'r')
      test_data_list = test_data_file.readlines()
      test_data_file.close()
      
      
      # test the neural network
      # scorecard for how well the network performs, initially empty 
      scorecard = []
      
      # go through all the records in the test data set
      for record in test_data_list:
          # split the record by the ',' commas
          all_values = record.split(',')
          # correct answer is the first value
          correct_label = int(all_values[0])
      #     print("Correct label : ", correct_label)
          # scale and shift the inputs
          inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
          # query the network
          outputs = n.query(inputs)
          # the index of the highest value corresponds to the label
          label = np.argmax(outputs)
      #     print("NeuralNetwork's answer : ", label)
      
          # append correct or incorrect to list
          if(label == correct_label):
              # network's answer matches correct answer, add 1 to scorecard
              scorecard.append(1)
          else:
              #network's answer doesn't match correct answer, add 0 to scorecard
              scorecard.append(0)
              
              pass
          pass
      
      
      
      # calculate the performance score, the fraction of correct answers
      scorecard_array = np.asarray(scorecard)
      print("Performance : ", scorecard_array.sum() / scorecard_array.size)
                          
                        

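Likewise, the reported average for the four-layer network follows directly from its five scores:

```python
# Reproduce the reported five-run average for the 4-layer network.
scores_4_layer = [0.9095, 0.9142, 0.9033, 0.9130, 0.9046]
average = sum(scores_4_layer) / len(scores_4_layer)
print(round(average, 5))  # 0.90892
```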

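One plausible explanation for the roughly four-point drop, not a claim from the book: the derivative of the sigmoid never exceeds 0.25, so each extra sigmoid layer can only shrink the backpropagated error signal, and with a single pass over the data the deeper network's first weight matrix learns less. The sketch below (plain Python, function names are mine) just demonstrates the bound:

```python
import math

def sigmoid(x):
    # logistic function, same shape as scipy.special.expit
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    # d/dx sigmoid(x) = s * (1 - s), the factor applied per layer
    # during backpropagation
    s = sigmoid(x)
    return s * (1.0 - s)

# The derivative peaks at x = 0 with value 0.25, so each additional
# sigmoid layer scales the error signal by at most 1/4.
peak = max(sigmoid_derivative(x / 100.0) for x in range(-500, 501))
print(peak)  # 0.25
```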

