pycellga.problems.single_objective.continuous package
Submodules
pycellga.problems.single_objective.continuous.ackley module
- class pycellga.problems.single_objective.continuous.ackley.Ackley[source]
Bases:
AbstractProblem
Ackley function implementation for optimization problems.
The Ackley function is widely used for testing optimization algorithms. It has a nearly flat outer region and a large hole at the center. The function is usually evaluated on the hypercube x_i ∈ [-32.768, 32.768], for all i = 1, 2, …, d.
Notes
-32.768 ≤ xi ≤ 32.768 for i = 1,…,d Global minimum at f(0,…,0) = 0
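The textbook Ackley formula can be sketched as a standalone function (the name `ackley` is illustrative and independent of the pycellga class API):

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    # Nearly flat outer region with a deep hole at the origin.
    d = len(x)
    sum_sq = sum(xi ** 2 for xi in x)
    sum_cos = sum(math.cos(c * xi) for xi in x)
    return (-a * math.exp(-b * math.sqrt(sum_sq / d))
            - math.exp(sum_cos / d) + a + math.e)
```

At the origin both exponentials equal 1, so the constants a and e cancel and the value is 0.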
pycellga.problems.single_objective.continuous.bentcigar module
- class pycellga.problems.single_objective.continuous.bentcigar.Bentcigar[source]
Bases:
AbstractProblem
Bentcigar function implementation for optimization problems.
The Bentcigar function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.
Notes
-100 ≤ xi ≤ 100 for i = 1,…,n Global minimum at f(0,…,0) = 0
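The standard Bent Cigar formula is a minimal sketch away (the function name is illustrative, not the pycellga API):

```python
def bentcigar(x):
    # Ill-conditioned quadratic: all axes but the first are scaled by 10^6.
    return x[0] ** 2 + 1e6 * sum(xi ** 2 for xi in x[1:])
```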
pycellga.problems.single_objective.continuous.bohachevsky module
- class pycellga.problems.single_objective.continuous.bohachevsky.Bohachevsky[source]
Bases:
AbstractProblem
Bohachevsky function implementation for optimization problems.
The Bohachevsky function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-15, 15], for all i = 1, 2, …, n.
Notes
-15 ≤ xi ≤ 15 for i = 1,…,n Global minimum at f(0,…,0) = 0
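A sketch of the classic two-variable Bohachevsky No. 1 term, summed over consecutive pairs; the pairwise generalization to n dimensions is an assumption here (the textbook form is two-dimensional):

```python
import math

def bohachevsky(x):
    # Sum the 2-D Bohachevsky No. 1 term over consecutive coordinate pairs.
    return sum(x[i] ** 2 + 2 * x[i + 1] ** 2
               - 0.3 * math.cos(3 * math.pi * x[i])
               - 0.4 * math.cos(4 * math.pi * x[i + 1]) + 0.7
               for i in range(len(x) - 1))
```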
pycellga.problems.single_objective.continuous.chichinadze module
- class pycellga.problems.single_objective.continuous.chichinadze.Chichinadze[source]
Bases:
AbstractProblem
Chichinadze function implementation for optimization problems.
The Chichinadze function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x, y ∈ [-30, 30].
Notes
-30 ≤ x, y ≤ 30 Global minimum at f(5.90133, 0.5) = −43.3159
pycellga.problems.single_objective.continuous.dropwave module
- class pycellga.problems.single_objective.continuous.dropwave.Dropwave[source]
Bases:
AbstractProblem
Dropwave function for optimization problems.
The Dropwave function is a multimodal function commonly used as a performance test problem for optimization algorithms. It is defined within the bounds -5.12 ≤ xi ≤ 5.12 for i = 1, 2, and has a global minimum at f(0, 0) = -1.
- f(x: list) float [source]
Evaluate the Dropwave function at a given point.
- Parameters:
x (list) – A list of two floats representing the coordinates [x1, x2].
- Returns:
The value of the Dropwave function rounded to three decimal places.
- Return type:
float
Notes
- The Dropwave function is defined as:
f(x1, x2) = - (1 + cos(12 * sqrt(x1^2 + x2^2))) / (0.5 * (x1^2 + x2^2) + 2)
where x1 and x2 are the input variables.
Examples
>>> dropwave = Dropwave()
>>> dropwave.f([0, 0])
-1.0
>>> dropwave.f([1, 1])
-0.028
pycellga.problems.single_objective.continuous.fms module
- class pycellga.problems.single_objective.continuous.fms.Fms[source]
Bases:
AbstractProblem
Fms function implementation for optimization problems.
The Fms function is used for testing optimization algorithms, specifically those dealing with frequency modulation sound.
Notes
-6.4 ≤ xi ≤ 6.35. Length of chromosomes = 6. Maximum fitness value = 0.01. Maximum fitness value error = 10^-2.
pycellga.problems.single_objective.continuous.griewank module
- class pycellga.problems.single_objective.continuous.griewank.Griewank[source]
Bases:
AbstractProblem
Griewank function implementation for optimization problems.
The Griewank function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-600, 600], for all i = 1, 2, …, n.
Notes
-600 ≤ xi ≤ 600 for i = 1,…,n Global minimum at f(0,…,0) = 0
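The textbook Griewank formula (quadratic bowl minus an oscillatory product term) can be sketched as follows; the function name is illustrative:

```python
import math

def griewank(x):
    # Note the sqrt(i) scaling inside the cosine product (1-based index).
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, start=1))
    return s - p + 1.0
```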
pycellga.problems.single_objective.continuous.holzman module
- class pycellga.problems.single_objective.continuous.holzman.Holzman[source]
Bases:
AbstractProblem
Holzman function implementation for optimization problems.
The Holzman function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-10, 10], for all i = 1, 2, …, n.
Notes
-10 ≤ xi ≤ 10 for i = 1,…,n Global minimum at f(0,…,0) = 0
pycellga.problems.single_objective.continuous.levy module
- class pycellga.problems.single_objective.continuous.levy.Levy[source]
Bases:
AbstractProblem
Levy function implementation for optimization problems.
The Levy function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-10, 10], for all i = 1, 2, …, n.
Notes
-10 ≤ xi ≤ 10 for i = 1,…,n Global minimum at f(1,…,1) = 0
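The textbook Levy formula can be sketched as a standalone function (illustrative name, independent of the pycellga class):

```python
import math

def levy(x):
    # Change of variables: w_i = 1 + (x_i - 1) / 4.
    w = [1 + (xi - 1) / 4.0 for xi in x]
    term1 = math.sin(math.pi * w[0]) ** 2
    middle = sum((wi - 1) ** 2 * (1 + 10 * math.sin(math.pi * wi + 1) ** 2)
                 for wi in w[:-1])
    term3 = (w[-1] - 1) ** 2 * (1 + math.sin(2 * math.pi * w[-1]) ** 2)
    return term1 + middle + term3
```

At x = (1,…,1) every w_i = 1, so all three terms vanish.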
pycellga.problems.single_objective.continuous.matyas module
- class pycellga.problems.single_objective.continuous.matyas.Matyas[source]
Bases:
AbstractProblem
Matyas function implementation for optimization problems.
The Matyas function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-10, 10], for all i = 1, 2, …, n.
Notes
-10 ≤ xi ≤ 10 for i = 1,…,n Global minimum at f(0,…,0) = 0
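The Matyas function is two-dimensional; the textbook formula is a one-liner (illustrative name):

```python
def matyas(x):
    # Plate-shaped function: 0.26(x1^2 + x2^2) - 0.48 x1 x2.
    x1, x2 = x
    return 0.26 * (x1 ** 2 + x2 ** 2) - 0.48 * x1 * x2
```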
pycellga.problems.single_objective.continuous.pow module
- class pycellga.problems.single_objective.continuous.pow.Pow[source]
Bases:
AbstractProblem
Pow function implementation for optimization problems.
The Pow function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5.0, 15.0].
Notes
-5.0 ≤ xi ≤ 15.0 Global minimum at f(5, 7, 9, 3, 2) = 0
pycellga.problems.single_objective.continuous.powell module
- class pycellga.problems.single_objective.continuous.powell.Powell[source]
Bases:
AbstractProblem
Powell function implementation for optimization problems.
The Powell function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-4, 5], for all i = 1, 2, …, n.
Notes
-4 ≤ xi ≤ 5 for i = 1,…,n Global minimum at f(0,…,0) = 0
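The textbook Powell formula processes variables in groups of four; this sketch assumes the dimension is a multiple of 4 (the name is illustrative):

```python
def powell(x):
    # Sum the four-term Powell expression over each group of four variables.
    total = 0.0
    for k in range(len(x) // 4):
        a, b, c, d = x[4 * k: 4 * k + 4]
        total += ((a + 10 * b) ** 2 + 5 * (c - d) ** 2
                  + (b - 2 * c) ** 4 + 10 * (a - d) ** 4)
    return total
```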
pycellga.problems.single_objective.continuous.rastrigin module
- class pycellga.problems.single_objective.continuous.rastrigin.Rastrigin[source]
Bases:
AbstractProblem
Rastrigin function implementation for optimization problems.
The Rastrigin function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5.12, 5.12], for all i = 1, 2, …, n.
Notes
-5.12 ≤ xi ≤ 5.12 for i = 1,…,n Global minimum at f(0,…,0) = 0
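The textbook Rastrigin formula can be sketched directly (illustrative name, not the pycellga API):

```python
import math

def rastrigin(x):
    # Highly multimodal: a quadratic bowl with a cosine modulation per axis.
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)
```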
pycellga.problems.single_objective.continuous.rosenbrock module
- class pycellga.problems.single_objective.continuous.rosenbrock.Rosenbrock[source]
Bases:
AbstractProblem
Rosenbrock function implementation for optimization problems.
The Rosenbrock function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 10], for all i = 1, 2, …, n.
Notes
-5 ≤ xi ≤ 10 for i = 1,…,n Global minimum at f(1,…,1) = 0
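The textbook Rosenbrock (banana valley) formula, sketched as a standalone function with an illustrative name:

```python
def rosenbrock(x):
    # Sum 100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 over consecutive pairs.
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))
```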
pycellga.problems.single_objective.continuous.rothellipsoid module
- class pycellga.problems.single_objective.continuous.rothellipsoid.Rothellipsoid[source]
Bases:
AbstractProblem
Rotated Hyper-Ellipsoid function implementation for optimization problems.
The Rotated Hyper-Ellipsoid function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.
- f(x: list) float [source]
Calculates the Rotated Hyper-Ellipsoid function value for a given list of variables.
Notes
-100 ≤ xi ≤ 100 for i = 1,…,n Global minimum at f(0,…,0) = 0
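The textbook Rotated Hyper-Ellipsoid formula sums squared coordinate prefixes; a minimal sketch (illustrative name):

```python
def rothellipsoid(x):
    # Double sum: for each i, add the squares of the first i+1 coordinates.
    # Equivalent to sum_j (n - j + 1) * x_j^2.
    return sum(sum(xj ** 2 for xj in x[:i + 1]) for i in range(len(x)))
```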
pycellga.problems.single_objective.continuous.schaffer module
- class pycellga.problems.single_objective.continuous.schaffer.Schaffer[source]
Bases:
AbstractProblem
Schaffer’s Function.
This class implements Schaffer’s function, a common benchmark problem for optimization algorithms. The function is defined over a multidimensional input and is used to test the performance of optimization methods.
- f(X: list) float [source]
Evaluate the Schaffer’s function at a given point.
- Parameters:
X (list) – A list of input variables (continuous values). The length of X should be at least 2.
- Returns:
The value of the Schaffer’s function evaluated at X, rounded to three decimal places.
- Return type:
float
Notes
- The Schaffer’s function is defined as:
f(X) = sum_{i=1}^{n-1} [ 0.5 + (sin(x_i^2 + x_{i+1}^2)^2 - 0.5)^2 / (1 + 0.001 * (x_i^2 + x_{i+1}^2))^2 ]
where n is the number of elements in X.
Examples
>>> schaffer = Schaffer()
>>> schaffer.f([1.0, 2.0])
0.554
pycellga.problems.single_objective.continuous.schaffer2 module
- class pycellga.problems.single_objective.continuous.schaffer2.Schaffer2[source]
Bases:
AbstractProblem
Modified Schaffer function #2 implementation for optimization problems.
The Modified Schaffer function #2 is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.
- f(X: list) float [source]
Calculates the Modified Schaffer function #2 value for a given list of variables.
Notes
-100 ≤ xi ≤ 100 for i = 1,…,n Global minimum at f(0,…,0) = 0
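A sketch of the textbook Modified Schaffer function #2 in two dimensions (illustrative name; the pycellga implementation may differ in details such as rounding):

```python
import math

def schaffer2(x):
    # 0.5 + (sin^2(x1^2 - x2^2) - 0.5) / (1 + 0.001(x1^2 + x2^2))^2
    x1, x2 = x
    num = math.sin(x1 ** 2 - x2 ** 2) ** 2 - 0.5
    den = (1 + 0.001 * (x1 ** 2 + x2 ** 2)) ** 2
    return 0.5 + num / den
```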
pycellga.problems.single_objective.continuous.schwefel module
- class pycellga.problems.single_objective.continuous.schwefel.Schwefel[source]
Bases:
AbstractProblem
Schwefel function implementation for optimization problems.
The Schwefel function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-500, 500], for all i = 1, 2, …, n.
Notes
-500 ≤ xi ≤ 500 for i = 1,…,n Global minimum at f(420.9687,…,420.9687) = 0
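The textbook Schwefel formula uses the rounded constant 418.9829 per dimension, so the minimum is only approximately zero; a sketch with an illustrative name:

```python
import math

def schwefel(x):
    # Deep, widely separated basins; the best one is near x_i = 420.9687.
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi)))
                                   for xi in x)
```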
pycellga.problems.single_objective.continuous.sphere module
- class pycellga.problems.single_objective.continuous.sphere.Sphere[source]
Bases:
AbstractProblem
Sphere function implementation for optimization problems.
The Sphere function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5.12, 5.12], for all i = 1, 2, …, n.
Notes
-5.12 ≤ xi ≤ 5.12 for i = 1,…,n Global minimum at f(0,…,0) = 0
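The Sphere function is the simplest of these benchmarks, a plain sum of squares (illustrative name):

```python
def sphere(x):
    # Convex, unimodal baseline benchmark: f(x) = sum_i x_i^2.
    return sum(xi ** 2 for xi in x)
```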
pycellga.problems.single_objective.continuous.styblinskitang module
- class pycellga.problems.single_objective.continuous.styblinskitang.StyblinskiTang[source]
Bases:
AbstractProblem
Styblinski-Tang function implementation for optimization problems.
The Styblinski-Tang function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 5], for all i = 1, 2, …, n.
- f(x: list) float [source]
Calculates the Styblinski-Tang function value for a given list of variables.
Notes
-5 ≤ xi ≤ 5 for i = 1,…,n Global minimum at x_i = −2.903534 for all i; for n = 2, f(−2.903534, −2.903534) ≈ −78.332
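The textbook Styblinski-Tang formula can be sketched directly (illustrative name, independent of the pycellga class):

```python
def styblinski_tang(x):
    # Half the sum of x_i^4 - 16 x_i^2 + 5 x_i; minimum ~ -39.166 per dimension.
    return 0.5 * sum(xi ** 4 - 16 * xi ** 2 + 5 * xi for xi in x)
```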
pycellga.problems.single_objective.continuous.sumofdifferentpowers module
pycellga.problems.single_objective.continuous.threehumps module
- class pycellga.problems.single_objective.continuous.threehumps.Threehumps[source]
Bases:
AbstractProblem
Three Hump Camel function implementation for optimization problems.
The Three Hump Camel function is widely used for testing optimization algorithms. It is a two-dimensional function, usually evaluated on the hypercube x_i ∈ [-5, 5], for i = 1, 2.
- f(x: list) float [source]
Calculates the Three Hump Camel function value for a given list of variables.
Notes
-5 ≤ xi ≤ 5 for i = 1, 2 Global minimum at f(0, 0) = 0
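The textbook two-dimensional Three Hump Camel formula, sketched with an illustrative name:

```python
def three_hump_camel(x):
    # 2x1^2 - 1.05x1^4 + x1^6/6 + x1*x2 + x2^2; three local minima.
    x1, x2 = x
    return 2 * x1 ** 2 - 1.05 * x1 ** 4 + x1 ** 6 / 6.0 + x1 * x2 + x2 ** 2
```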
pycellga.problems.single_objective.continuous.zakharov module
- class pycellga.problems.single_objective.continuous.zakharov.Zakharov[source]
Bases:
AbstractProblem
Zakharov function implementation for optimization problems.
The Zakharov function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 10], for all i = 1, 2, …, n.
Notes
-5 ≤ xi ≤ 10 for i = 1,…,n Global minimum at f(0,…,0) = 0
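The textbook Zakharov formula combines a sum of squares with powers of a weighted linear sum (illustrative name):

```python
def zakharov(x):
    # s2 = sum of 0.5 * i * x_i with 1-based index i.
    s1 = sum(xi ** 2 for xi in x)
    s2 = sum(0.5 * i * xi for i, xi in enumerate(x, start=1))
    return s1 + s2 ** 2 + s2 ** 4
```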
pycellga.problems.single_objective.continuous.zettle module
- class pycellga.problems.single_objective.continuous.zettle.Zettle[source]
Bases:
AbstractProblem
Zettle function implementation for optimization problems.
The Zettle function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 5], for all i = 1, 2, …, n.
Notes
-5 ≤ xi ≤ 5 for i = 1,…,n Global minimum at f(−0.0299, 0) = −0.003791
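The Zettle function is two-dimensional; a sketch of the textbook formula (illustrative name), consistent with the stated minimum f(−0.0299, 0) ≈ −0.003791:

```python
def zettle(x):
    # (x1^2 + x2^2 - 2x1)^2 + x1/4
    x1, x2 = x
    return (x1 ** 2 + x2 ** 2 - 2 * x1) ** 2 + 0.25 * x1
```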