Particle swarm optimisation (PSO) is a population-based optimisation algorithm inspired by the social behaviour of flocking birds and schooling fish. A swarm of particles moves around a search space, with each particle representing a candidate solution to the problem. The movement of each particle is influenced both by its own experience and by the experience of the other particles in the swarm.
The algorithm will be explained for the following unconstrained optimisation problem:
\[
\min_{\mathbf{x} \in \mathbb{R}^n} f(\mathbf{x})
\]
1. Initialise the swarm of particles with random positions \(\mathbf{x}_i\) and velocities \(\mathbf{v}_i\), where \(i \in [1, n_p]\) and \(n_p\) is the number of particles in the swarm.
2. Evaluate the objective function for each particle (\(f(\mathbf{x}_i)\)) and update the best position (\(\mathbf{p}_i\)) and best objective function value (\(f(\mathbf{p}_i)\)) for each particle.
3. Update the best position \(\mathbf{g}\) and best objective function value \(f(\mathbf{g})\) for the swarm.
4. Update the velocity and position of each particle using the following equations:
   \[
   \mathbf{v}_i \leftarrow w \mathbf{v}_i + c_1 r_1 \left(\mathbf{p}_i - \mathbf{x}_i\right) + c_2 r_2 \left(\mathbf{g} - \mathbf{x}_i\right)
   \]
   \[
   \mathbf{x}_i \leftarrow \mathbf{x}_i + \mathbf{v}_i
   \]
   where \(w\) is the inertia weight, \(c_1\) and \(c_2\) are the acceleration coefficients, and \(r_1\) and \(r_2\) are random numbers between 0 and 1.
5. Repeat steps 2-4 until a stopping criterion is met.
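The steps above can be sketched as a compact, vectorised loop. This is only a minimal sketch, not the example developed later in this post: the function name `pso`, the sphere test function, and the parameter values are illustrative choices.

```python
import numpy as np

def pso(f, n_p=30, dim=2, iters=200, w=0.5, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_p, dim))     # step 1: random positions
    v = rng.uniform(-1, 1, (n_p, dim))     # step 1: random velocities
    p = x.copy()                           # personal best positions
    fp = np.apply_along_axis(f, 1, x)      # step 2: personal best values
    g = p[np.argmin(fp)].copy()            # step 3: swarm best position
    for _ in range(iters):
        r1 = rng.random((n_p, dim))
        r2 = rng.random((n_p, dim))
        # Step 4: velocity and position updates for the whole swarm at once
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = x + v
        # Step 2 again: update personal bests where the new point improved
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < fp
        p[improved] = x[improved]
        fp[improved] = fx[improved]
        # Step 3 again: update the swarm best
        g = p[np.argmin(fp)].copy()
    return g, fp.min()

# Minimise the sphere function f(x) = ||x||^2, whose minimum is at the origin
best_x, best_f = pso(lambda x: np.sum(x**2))
```

Vectorising over the swarm keeps the loop body short and is noticeably faster than a per-particle Python loop for larger swarms.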
The algorithm is sensitive to the choice of parameters, and its performance can often be improved by tuning them. The inertia weight \(w\) controls the influence of the previous velocity on the current velocity, while the acceleration coefficients \(c_1\) and \(c_2\) control the pull towards the particle's own best position and the swarm's best position, respectively.
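One common tuning strategy, not used in the example below, is to decrease the inertia weight linearly over the run, favouring exploration early and exploitation late; the bounds \(0.9\) and \(0.4\) here are conventional choices, not prescribed values.

```python
def inertia_weight(t, num_iterations, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: w_max at t = 0 down to w_min."""
    return w_max - (w_max - w_min) * t / (num_iterations - 1)

# Early iterations use a large weight (exploration) ...
print(round(inertia_weight(0, 100), 2))    # 0.9
# ... while late iterations use a small weight (exploitation).
print(round(inertia_weight(99, 100), 2))   # 0.4
```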
The best position \(\mathbf{p}_i\) and best objective function value \(f(\mathbf{p}_i)\) for each particle are updated as follows:
\[
\mathbf{p}_i \leftarrow
\begin{cases}
\mathbf{x}_i & \text{if } f(\mathbf{x}_i) < f(\mathbf{p}_i), \\
\mathbf{p}_i & \text{otherwise.}
\end{cases}
\]
The stopping criterion can be a maximum number of iterations, a maximum number of function evaluations, or a minimum change in the objective function value.
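For instance, a tolerance-based stop can watch the change in the swarm-best value over recent iterations. This is a sketch; the threshold of \(10^{-8}\) and the patience of \(10\) iterations are illustrative, not values used in the example below.

```python
def should_stop(history, tol=1e-8, patience=10):
    """Stop when the best objective value has improved by less than `tol`
    over each of the last `patience` iterations."""
    if len(history) <= patience:
        return False
    recent = history[-(patience + 1):]
    return all(recent[k] - recent[k + 1] < tol for k in range(patience))

# A run whose best value plateaus after a few iterations:
history = [5.0, 1.0, 0.5, 0.4] + [0.4] * 12
print(should_stop(history))  # True
```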
Here is an example Python code that demonstrates the particle swarm optimisation algorithm on the Rosenbrock function. The initial population of \(50\) particles is generated randomly and the algorithm is run for \(1000\) iterations. The inertia weight is set to \(0.5\), and both acceleration coefficients are set to \(1\). The objective function is defined as:
\[
f(\mathbf{x}) = (1 - x_1)^2 + 100 \left(x_2 - x_1^2\right)^2
\]
The objective function has a single global minimum at \(\mathbf{x} = (1, 1)\), where \(f(\mathbf{x}) = 0\), lying inside a long, narrow, parabolic valley that makes it a standard benchmark for optimisation algorithms.
The output of the code is an animation that shows the evolution of the population over the contour lines of the objective function. For the sake of clarity, the particle positions are recorded after every \(10\) iterations.
Code
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

# Define the Rosenbrock function
def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Define the PSO algorithm
def particle_swarm_optimisation(f, swarm, num_iterations, w=0.5, c1=1, c2=1):
    positions = []
    velocities = np.random.uniform(-1, 1, swarm.shape)
    personal_best_positions = np.copy(swarm)
    personal_best_values = np.array([f(x) for x in swarm])
    # Copy, so the swarm best does not silently change when swarm[j] is updated
    global_best_position = np.copy(swarm[np.argmin(personal_best_values)])
    for i in range(num_iterations):
        for j in range(len(swarm)):
            # Update velocity
            velocities[j] = (w * velocities[j]
                             + c1 * np.random.random() * (personal_best_positions[j] - swarm[j])
                             + c2 * np.random.random() * (global_best_position - swarm[j]))
            # Update position
            swarm[j] += velocities[j]
            # Update personal best
            if f(swarm[j]) < personal_best_values[j]:
                personal_best_positions[j] = np.copy(swarm[j])
                personal_best_values[j] = f(swarm[j])
            # Update global best
            if f(swarm[j]) < f(global_best_position):
                global_best_position = np.copy(swarm[j])
        # Record the particle positions every 10 iterations
        if i % 10 == 0:
            positions.append(np.copy(swarm))
    return global_best_position, np.array(positions)

# Initialise the swarm
np.random.seed(0)
swarm = np.random.uniform(-3, 3, (50, 2))

# Run the PSO algorithm
best_solution, positions = particle_swarm_optimisation(rosenbrock, swarm, 1000)

# Create a grid of points and evaluate the objective on it
fig, ax = plt.subplots()
x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)
Z = rosenbrock([X, Y])

# Contour plot of the objective, with the particles scattered on top
contour = ax.contour(X, Y, Z, levels=np.logspace(0, 5, 35), cmap='jet')
scat = ax.scatter(positions[0][:, 0], positions[0][:, 1], color='k')

# Update function for the animation
def update(i):
    # Move the scatter plot to the current particle positions
    scat.set_offsets(positions[i])

# Create the animation and save it as a GIF file
ani = animation.FuncAnimation(fig, update, frames=len(positions), interval=200)
ani.save('animation.gif', writer='pillow')

# Display the animation
plt.show()
```