Grey Wolf Optimizer in a nutshell

In nature, grey wolves live in packs, organising themselves in a social hierarchy of four layers: alpha (α), beta (β), delta (δ) and omega (ω), as shown in Figure 1. The most dominant individual is the alpha, which leads the pack. The second layer is the beta, which reinforces the alpha's commands to the others and acts as an advisor to the alpha. Delta wolves are the scouts, sentinels, elders, hunters and caretakers; they submit to the alphas and betas but dominate the lowest level in the pack, the omegas. Grey wolves also hunt in groups. Hunting can be divided into three steps: (i) tracking, chasing and approaching the prey; (ii) pursuing, encircling and harassing the prey until it stops moving; and (iii) attacking the prey.

Figure 1: GWO social hierarchy

Mirjalili et al. proposed the Grey Wolf Optimizer (Algorithm 1), inspired by the social hierarchy and group hunting of wolves. Each wolf (i.e., agent) represents a candidate solution, and the social hierarchy is derived from the current fitness values. The fittest wolf in the swarm is labelled alpha; the second and third best solutions are labelled beta and delta, respectively. These three leaders guide the movement of the rest of the swarm, which is assumed to be omega. The movement of the wolves is modelled to reproduce group hunting. The encircling behaviour is modelled by Equations 1 and 2.

(1) $D = |C \cdot x_p(i) - x(i)|$

(2) $x(i+1) = x_p(i) - A \cdot D$

where $x(i)$ and $x_p(i)$ are the positions of the wolf and of the prey at iteration $i$, respectively. $A$ and $C$ are coefficient vectors calculated as follows:

(3) $A = 2a \cdot r_1 - a$

(4) $C = 2 r_2$

where $r_1$ and $r_2$ are random vectors generated in $[0, 1]$, and $a$ is a vector whose components decrease linearly from 2 to 0 over the iterations. As α, β and δ are the wolves nearest to the prey (i.e., the best positions found so far), they are taken as estimates of the prey position ($x_p$). Thus, the movement is performed using Equations 5 and 6, shown here for α but also computed for β and δ:

(5) $D_\alpha = |C_1 \cdot x_\alpha - x|$

(6) $x'_\alpha = x_\alpha - A_1 \cdot D_\alpha$

Finally, the wolf's new position is the mean of $x'_\alpha$, $x'_\beta$ and $x'_\delta$, as described in Equation 7.

(7) $x(i+1) = \dfrac{x'_\alpha + x'_\beta + x'_\delta}{3}$
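As a minimal worked sketch of this update step (the positions and the fixed value of a are illustrative, not from the paper), the NumPy snippet below applies Equations 3-7 for a single wolf:

import numpy as np

dim = 2
a = 1.0  # illustrative; in GWO, a decays linearly from 2 to 0

x = np.array([0.5, -1.0])          # current wolf position
leaders = [np.array([0.1, 0.2]),   # x_alpha
           np.array([0.0, 0.3]),   # x_beta
           np.array([0.2, 0.1])]   # x_delta

candidates = []
for x_leader in leaders:
    A = 2 * a * np.random.uniform(size=dim) - a   # Equation 3
    C = 2 * np.random.uniform(size=dim)           # Equation 4
    D = np.abs(C * x_leader - x)                  # Equation 5
    candidates.append(x_leader - A * D)           # Equation 6

x_new = np.mean(candidates, axis=0)               # Equation 7
print(x_new)

Because $|A| > 1$ is likely while $a$ is still large, early iterations push wolves away from the leaders (exploration); as $a$ shrinks, $|A| < 1$ and the wolves converge towards them (exploitation).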

Pseudocode

Algorithm 1: GWO pseudo-code
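In outline, the algorithm proceeds as follows:

    initialize the wolf population with random positions and evaluate their fitness
    select the three fittest wolves as alpha, beta and delta
    while the maximum number of iterations is not reached:
        update a (linear decay from 2 to 0)
        move every wolf using Equations 5-7
        re-evaluate the fitness of every wolf
        update alpha, beta and delta
    return the position of alpha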

Python Code

Agent (wolf.py)

import numpy as np


class Wolf():
    def __init__(self, gid, position):
        self.id = gid
        self.position = position
        self.fitness = np.inf

    def move(self, a, alpha, beta, delta):
        """Update the position using Equations 5-7, guided by the three leaders."""
        dim = len(self.position)
        # alpha
        r1 = np.random.uniform(size=dim)
        r2 = np.random.uniform(size=dim)
        A1 = 2 * a * r1 - a                                  # Equation 3
        C1 = 2 * r2                                          # Equation 4
        D_alpha = abs(C1 * alpha.position - self.position)   # Equation 5
        X1 = alpha.position - A1 * D_alpha                   # Equation 6
        # beta
        r3 = np.random.uniform(size=dim)
        r4 = np.random.uniform(size=dim)
        A2 = 2 * a * r3 - a
        C2 = 2 * r4
        D_beta = abs(C2 * beta.position - self.position)
        X2 = beta.position - A2 * D_beta
        # delta
        r5 = np.random.uniform(size=dim)
        r6 = np.random.uniform(size=dim)
        A3 = 2 * a * r5 - a
        C3 = 2 * r6
        D_delta = abs(C3 * delta.position - self.position)
        X3 = delta.position - A3 * D_delta
        # Equation 7: mean of the three candidate positions
        self.position = (X1 + X2 + X3) / 3
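Note that move does not enforce the search-space bounds; after each move, the optimizer (GWO.py below) clips the position back into [min_environment, max_environment].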

Benchmark Function (sphere.py)

import numpy as np


class Sphere():
    def __init__(self, dimensions, max_value=100, min_value=-100):
        self.dim = dimensions
        self.max_environment = max_value
        self.min_environment = min_value

    def evaluate(self, positions):
        # Sphere function: unimodal, global minimum 0 at the origin
        return np.sum(positions**2)
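Any class exposing the same interface (dim, max_environment, min_environment and evaluate) can be plugged into the optimizer. As a sketch, a Rastrigin benchmark (my addition for illustration, not part of the repository) could look like this:

import numpy as np


class Rastrigin():
    def __init__(self, dimensions, max_value=5.12, min_value=-5.12):
        self.dim = dimensions
        self.max_environment = max_value
        self.min_environment = min_value

    def evaluate(self, positions):
        # Rastrigin function: highly multimodal, global minimum 0 at the origin
        return 10 * self.dim + np.sum(positions**2 - 10 * np.cos(2 * np.pi * positions))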

Grey Wolf Optimizer Code (GWO.py)

import numpy as np
from time import time
from wolf import Wolf


class GWO():
    def __init__(self, nagents, max_iter, dimensions, fitness_function, simulation_id):
        np.random.seed(simulation_id + int(time()))  # a different seed per simulation
        self.dimensions = dimensions
        self.fitness_function = fitness_function(dimensions=self.dimensions)
        self.nagents = nagents
        self.max_iter = max_iter
        self.pop = []
        self.alpha = None
        self.beta = None
        self.delta = None
        self.simulation_id = simulation_id
        self.pattern_name = f'GWO_simulation_{self.simulation_id}_'
        print(self.pattern_name)
        self.best_fitness_through_iterations = []

    def _initialize(self):
        self.pop.clear()
        self.alpha = Wolf(-1, [])  # dummy wolves, replaced after the first evaluation
        self.beta = Wolf(-2, [])
        self.delta = Wolf(-3, [])
        for i in range(self.nagents):
            position = np.random.uniform(self.fitness_function.min_environment,
                                         self.fitness_function.max_environment,
                                         self.dimensions)
            wolf = Wolf(i, position)
            wolf.fitness = self.fitness_function.evaluate(wolf.position)
            self.pop.append(wolf)
        self._update_leaders()

    def _update_leaders(self):
        # shuffle a copy so that ties in fitness are broken at random
        cloned_pop = self.pop[:]
        np.random.shuffle(cloned_pop)
        ranked = sorted(cloned_pop, key=lambda wolf: wolf.fitness)
        self.alpha = ranked[0]
        self.beta = ranked[1]
        self.delta = ranked[2]

    def optimize(self, debug=False):
        i = 1
        self._initialize()
        self.best_fitness_through_iterations = []
        while i <= self.max_iter:
            a = 2 - i * (2 / self.max_iter)  # a decays linearly from 2 to 0
            for wolf in self.pop:
                wolf.move(a, self.alpha, self.beta, self.delta)
                # keep the wolf inside the search space
                np.clip(wolf.position, self.fitness_function.min_environment,
                        self.fitness_function.max_environment, out=wolf.position)
                wolf.fitness = self.fitness_function.evaluate(wolf.position)
            self._update_leaders()
            if debug and i % 100 == 0:
                print("Simu: %d - Iteration: %d - Best Fitness: %e - best id %d"
                      % (self.simulation_id, i, self.alpha.fitness, self.alpha.id))
            self.best_fitness_through_iterations.append(self.alpha.fitness)
            i += 1
        np.savetxt(self.pattern_name + 'best_fitness_through_iterations.txt',
                   self.best_fitness_through_iterations, fmt='%.4e')
        return self.alpha.position, self.best_fitness_through_iterations

Main code (gwo_main.py)

from GWO import GWO
from sphere import Sphere
import numpy as np

if __name__ == '__main__':
    dimensions = 100
    simulations = 10
    max_iterations = 1000
    agents = 30
    func = Sphere
    print(f"Running {simulations} simulations using {func.__name__} function")
    bfs = []
    for i in range(simulations):
        gwo = GWO(agents, max_iterations, dimensions, func, i)
        # optimize returns the best position and the convergence curve;
        # the best fitness is the last entry of the curve
        best_position, convergence = gwo.optimize(True)
        best_fitness = convergence[-1]
        bfs.append(best_fitness)
        print(f"#{i:02d}: {best_fitness}")
    print(f"\nMean: {np.mean(bfs)} | +- {np.std(bfs)}")
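Assuming the four files (wolf.py, sphere.py, GWO.py and gwo_main.py) sit in the same directory, running python gwo_main.py reproduces the experiment; each simulation also writes its convergence curve to a GWO_simulation_<id>_best_fitness_through_iterations.txt file.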

A single-file version is available at https://github.com/rodrigoclira/gwo_nutshell

The text is adapted from our paper 'Modelling the Social Interactions in Grey Wolf Optimizer', available at https://ieeexplore.ieee.org/document/9769781/

Rodrigo Lira
Professor

Rodrigo Lira is a professor at IFPE with research interests in swarm intelligence, machine learning and IoT.
