
Building a Simple Neural Network in Python from Scratch


Building a simple neural network from scratch in Python is a great way to understand the inner workings of neural networks. Below is a step-by-step guide to creating a basic network with one hidden layer. This example will help you understand the foundational concepts without using machine learning libraries like TensorFlow or PyTorch.

1. Import Necessary Libraries

First, import NumPy for numerical operations; it is the only external library this example needs.

    import numpy as np

2. Initialize Network Parameters

Define the number of neurons in each layer. For this example, let's create a network with 3 input neurons, 4 hidden neurons, and 1 output neuron.

    input_size = 3
    hidden_size = 4
    output_size = 1

    np.random.seed(0)  # For reproducibility
    W1 = np.random.randn(input_size, hidden_size)   # Weight matrix for input to hidden
    b1 = np.zeros((1, hidden_size))                 # Bias for hidden layer
    W2 = np.random.randn(hidden_size, output_size)  # Weight matrix for hidden to output
    b2 = np.zeros((1, output_size))                 # Bias for output layer
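
As a quick, optional sanity check, you can print each parameter's shape to confirm that the matrix products used later will line up; the expected shapes follow directly from the sizes defined above.

    print(W1.shape)  # (3, 4): one row per input neuron, one column per hidden neuron
    print(b1.shape)  # (1, 4): one bias per hidden neuron
    print(W2.shape)  # (4, 1): one row per hidden neuron, one column per output neuron
    print(b2.shape)  # (1, 1): one bias for the single output neuron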

3. Activation Function - Sigmoid

Define the sigmoid activation function and its derivative.

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(x):
        # Note: x is expected to be a sigmoid *output*, not a raw pre-activation
        return x * (1 - x)
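
The reason sigmoid_derivative takes an already-activated value is the identity sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)): the derivative can be computed from the output alone. A quick numeric check (a minimal sketch, assuming the functions above are defined):

    a = sigmoid(0.5)
    print(a * (1 - a))  # Derivative expressed via the output, ~0.235
    # Compare against a finite-difference approximation at x = 0.5
    eps = 1e-6
    print((sigmoid(0.5 + eps) - sigmoid(0.5 - eps)) / (2 * eps))  # Also ~0.235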

4. Forward Propagation

Implement the forward propagation function to calculate the output.

    def forward_propagation(X):
        # Intermediate values are stored globally so backpropagation can reuse them
        global z1, a1, z2, output
        z1 = np.dot(X, W1) + b1   # Linear combination: input to hidden
        a1 = sigmoid(z1)          # Hidden layer activation
        z2 = np.dot(a1, W2) + b2  # Linear combination: hidden to output
        output = sigmoid(z2)      # Output layer activation
        return output
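
To see the shapes flowing through the network, you can push a single example through it (a minimal sketch, assuming the parameters and functions defined above have been run):

    sample = np.array([[0, 1, 1]])        # One example with 3 input features
    pred = forward_propagation(sample)
    print(a1.shape)    # (1, 4): hidden activations for the example
    print(pred.shape)  # (1, 1): one prediction for the example
    print(pred)        # A value between 0 and 1, thanks to the sigmoid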

5. Backpropagation

Implement the backpropagation algorithm to adjust the weights and biases.

    def backpropagation(X, y, learning_rate):
        global W1, b1, W2, b2
        m = X.shape[0]
        # Calculate error
        output_error = y - output
        output_delta = output_error * sigmoid_derivative(output)
        # Calculate hidden layer error
        a1_error = output_delta.dot(W2.T)
        a1_delta = a1_error * sigmoid_derivative(a1)
        # Update weights and biases
        W2 += a1.T.dot(output_delta) * learning_rate / m
        b2 += np.sum(output_delta, axis=0, keepdims=True) * learning_rate / m
        W1 += X.T.dot(a1_delta) * learning_rate / m
        b1 += np.sum(a1_delta, axis=0, keepdims=True) * learning_rate / m
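
For reference, these updates perform gradient descent on a mean squared error loss (up to a constant factor of 1/2). In LaTeX notation, with eta the learning rate and \odot elementwise multiplication, the deltas and weight updates computed by the code are:

    \delta_2 = (y - \hat{y}) \odot \sigma'(z_2), \qquad \delta_1 = (\delta_2 W_2^\top) \odot \sigma'(z_1)
    W_2 \leftarrow W_2 + \tfrac{\eta}{m}\, a_1^\top \delta_2, \qquad W_1 \leftarrow W_1 + \tfrac{\eta}{m}\, X^\top \delta_1

The bias updates are the column sums of the corresponding deltas. Adding (rather than subtracting) the products moves the weights downhill on the loss because (y - output) already carries the negative sign of the loss gradient.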

6. Training the Network

Create a function to train the neural network by iterating over several epochs.

    def train(X, y, epochs, learning_rate):
        for epoch in range(epochs):
            forward_propagation(X)
            backpropagation(X, y, learning_rate)
            if epoch % 1000 == 0:
                loss = np.mean(np.square(y - output))
                print(f'Epoch {epoch}, Loss: {loss}')

7. Test and Verify

Put the pieces together: train the network on a small dataset and check its predictions.

    # Example training data: XOR of the first two inputs
    # (the constant third column acts as a bias input)
    X = np.array([[0, 0, 1],
                  [0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]])
    y = np.array([[0],
                  [1],
                  [1],
                  [0]])

    # Train the neural network
    train(X, y, epochs=10000, learning_rate=0.1)

    # Test
    print("Predicted Output:")
    print(forward_propagation(X))
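
Since the sigmoid output lies between 0 and 1, you can threshold it at 0.5 to get hard class predictions (a minimal sketch using the definitions above):

    predictions = (forward_propagation(X) > 0.5).astype(int)
    print("Thresholded predictions:")
    print(predictions)  # Should match y ([[0], [1], [1], [0]]) once training has converged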

Summary

This neural network is a simple implementation with one hidden layer, using basic operations to show the entire process of forward and backward propagation. You can experiment by adding more layers, changing activation functions, or using different datasets to gain deeper insights into neural networks.
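
For example, swapping the hidden layer's activation is a small change: define tanh and its derivative, then use them in place of sigmoid for z1/a1 in forward_propagation and backpropagation. A hedged sketch follows; like sigmoid_derivative, this version expects the already-activated value, since d/dx tanh(x) = 1 - tanh(x)^2.

    def tanh(x):
        return np.tanh(x)

    def tanh_derivative(a):
        # a is an already-activated value, i.e. a = tanh(x)
        return 1 - a ** 2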
