Continuous Optimization Problems
The pycellga.problems.single_objective.continuous package offers a range of continuous, single-objective benchmark functions. These functions are commonly used to evaluate the performance of optimization algorithms in terms of convergence accuracy, robustness, and computation speed. Below is a list of available benchmark functions in this package, each addressing unique aspects of optimization.
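As a quick orientation, here is a minimal usage sketch. The import path below is an assumption inferred from the package name, and the f method matches the per-class signatures documented on this page; verify both against your installed version.

    # Minimal sketch: instantiate a benchmark and evaluate one candidate solution.
    # The module path is assumed from the package layout shown above; check your
    # installed pycellga version for the actual module names.
    from pycellga.problems.single_objective.continuous.ackley import Ackley

    problem = Ackley(n_var=10)      # 10-dimensional Ackley instance
    x = [0.0] * problem.n_var       # candidate at the known global minimum
    fitness = problem.f(x)          # single-solution evaluation, documented below
    print(fitness)                  # expected to be ~0.0 at the origin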
Ackley Function
A multimodal function known for its large number of local minima. Used to evaluate an algorithm’s ability to escape local optima.
- class Ackley(n_var)[source]
Bases:
AbstractProblem
Ackley function implementation for optimization problems.
The Ackley function is widely used for testing optimization algorithms. It is characterized by a nearly flat outer region and a large hole at the center. The function is usually evaluated on the hypercube x_i ∈ [-32.768, 32.768], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bound for each variable (fixed to -32.768).
- Type:
float
- xu
Upper bound for each variable (fixed to 32.768).
- Type:
float
- f(x: List[float]) → float [source]
Compute the Ackley function value for a single solution.
- Parameters:
x (list or numpy.ndarray) – Array of input variables.
- Returns:
The computed fitness value for the given solution.
- Return type:
float
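For reference, the standard Ackley definition from the benchmark literature, which this implementation is assumed to follow (constants a = 20, b = 0.2, c = 2π):

    f(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e

Global minimum at f(0, …, 0) = 0.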
Bent Cigar Function
A unimodal, badly conditioned function with a narrow ridge, used to test convergence speed and robustness.
- class Bentcigar(n_var: int)[source]
Bases:
AbstractProblem
Bentcigar function implementation for optimization problems.
The Bentcigar function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bound for each variable (fixed to -100).
- Type:
float
- xu
Upper bound for each variable (fixed to 100).
- Type:
float
Notes
-100 ≤ x_i ≤ 100 for i = 1, …, n
Global minimum at f(0, …, 0) = 0
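The standard Bent Cigar form, assumed here, scales every coordinate but the first by 10^6, which produces the function's characteristic ill-conditioning:

    f(\mathbf{x}) = x_1^2 + 10^6 \sum_{i=2}^{n} x_i^2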
Bohachevsky Function
Characterized by its simple structure with some local minima, making it ideal for testing fine-tuning capabilities.
- class Bohachevsky(n_var: int)[source]
Bases:
AbstractProblem
Bohachevsky function implementation for optimization problems.
The Bohachevsky function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x_i ∈ [-15, 15], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bound for each variable (fixed to -15).
- Type:
float
- xu
Upper bound for each variable (fixed to 15).
- Type:
float
Notes
-15 ≤ x_i ≤ 15 for i = 1, …, n
Global minimum at f(0, …, 0) = 0
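The widely used generalized Bohachevsky form is shown below; whether pycellga implements exactly this variant (the classic two-variable Bohachevsky #1 is its n = 2 case) is an assumption:

    f(\mathbf{x}) = \sum_{i=1}^{n-1}\left[x_i^2 + 2x_{i+1}^2 - 0.3\cos(3\pi x_i) - 0.4\cos(4\pi x_{i+1}) + 0.7\right]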
Chichinadze Function
A complex landscape with both smooth and steep regions, suitable for testing algorithms on challenging landscapes.
- class Chichinadze[source]
Bases:
AbstractProblem
Chichinadze function implementation for optimization problems.
The Chichinadze function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x, y ∈ [-30, 30].
- n_var
Number of variables (fixed to 2 for x and y).
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bound for the variables (fixed to -30).
- Type:
float
- xu
Upper bound for the variables (fixed to 30).
- Type:
float
Notes
-30 ≤ x, y ≤ 30
Global minimum at f(5.90133, 0.5) = −43.3159
Drop Wave Function
A multimodal function often used to evaluate the balance between exploration and exploitation.
- class Dropwave[source]
Bases:
AbstractProblem
Dropwave function for optimization problems.
The Dropwave function is a multimodal function commonly used as a performance test problem for optimization algorithms. It is defined within the bounds -5.12 ≤ xi ≤ 5.12 for i = 1, 2, and has a global minimum at f(0, 0) = -1.
- n_var
Number of variables (dimensions) in the problem (fixed to 2).
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bound for the variables (fixed to -5.12).
- Type:
float
- xu
Upper bound for the variables (fixed to 5.12).
- Type:
float
Notes
-5.12 ≤ x_i ≤ 5.12 for i = 1, 2
Global minimum at f(0, 0) = -1
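The standard Drop-Wave definition, assumed here:

    f(x_1, x_2) = -\frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5\,(x_1^2 + x_2^2) + 2}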
Frequency Modulation Sound Function (FMS)
A complex, multimodal function commonly used to test the robustness of optimization algorithms.
- class Fms[source]
Bases:
AbstractProblem
Fms function implementation for optimization problems.
The Fms function is a parameter-identification benchmark derived from frequency-modulated (FM) sound synthesis: six parameters of a synthesized sound wave are tuned to match a target wave.
- n_var
Number of variables (dimensions) in the problem (fixed to 6).
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bound for the variables (fixed to -6.4).
- Type:
float
- xu
Upper bound for the variables (fixed to 6.35).
- Type:
float
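In the classic FMS benchmark the six variables are the parameters (a_1, ω_1, a_2, ω_2, a_3, ω_3) of a frequency-modulated wave, and fitness is the squared error against a target wave y_0 generated with parameters (1, 5, 1.5, 4.8, 2, 4.9); whether this implementation matches every detail of that formulation is an assumption:

    y(t) = a_1 \sin\left(\omega_1 t\theta + a_2 \sin\left(\omega_2 t\theta + a_3 \sin(\omega_3 t\theta)\right)\right), \quad \theta = \frac{2\pi}{100}

    f(\mathbf{x}) = \sum_{t=0}^{100} \left(y(t) - y_0(t)\right)^2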
Griewank Function
A continuous, nonlinear function with numerous local minima, commonly used to test an algorithm’s global search capability.
- class Griewank(n_var: int = 10)[source]
Bases:
AbstractProblem
Griewank function implementation for optimization problems.
The Griewank function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube xi ∈ [-600, 600], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bound for the variables (fixed to -600).
- Type:
float
- xu
Upper bound for the variables (fixed to 600).
- Type:
float
Notes
-600 ≤ x_i ≤ 600 for i = 1, …, n
Global minimum at f(0, …, 0) = 0
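The standard Griewank definition, assumed here:

    f(\mathbf{x}) = \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1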
Holzman Function
A scalable benchmark function that presents varying levels of difficulty to different optimization approaches.
- class Holzman(n_var: int = 2)[source]
Bases:
AbstractProblem
Holzman function implementation for optimization problems.
The Holzman function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube xi ∈ [-10, 10], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bound for the variables (fixed to -10).
- Type:
float
- xu
Upper bound for the variables (fixed to 10).
- Type:
float
Notes
-10 ≤ x_i ≤ 10 for i = 1, …, n
Global minimum at f(0, …, 0) = 0
Levy Function
A multimodal function with many local minima, used to test an algorithm's global search ability.
- class Levy(n_var: int = 2)[source]
Bases:
AbstractProblem
Levy function implementation for optimization problems.
The Levy function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x_i ∈ [-10, 10], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (REAL).
- Type:
GeneType
- xl
Lower bounds for the variables, fixed to -10.
- Type:
float
- xu
Upper bounds for the variables, fixed to 10.
- Type:
float
- evaluate(x: List[float], out: dict, *args, **kwargs) → None [source]
Pymoo-compatible evaluation method for batch processing.
- __init__(n_var: int = 2)[source]
Initialize the Levy problem.
- Parameters:
n_var (int, optional) – Number of variables (dimensions) for the problem, by default 2.
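The standard Levy definition, assumed here, uses the substitution w_i = 1 + (x_i - 1)/4:

    f(\mathbf{x}) = \sin^2(\pi w_1) + \sum_{i=1}^{n-1} (w_i - 1)^2\left[1 + 10\sin^2(\pi w_i + 1)\right] + (w_n - 1)^2\left[1 + \sin^2(2\pi w_n)\right]

Global minimum at f(1, …, 1) = 0.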
Matyas Function
A smooth, convex two-variable function with a shallow valley, used to test convergence accuracy on simple landscapes.
- class Matyas[source]
Bases:
AbstractProblem
Matyas function implementation for optimization problems.
The Matyas function is commonly used to evaluate the performance of optimization algorithms. It is a simple, continuous, convex function that has a global minimum at the origin.
- gen_type
The type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- n_var
Number of variables (dimensions) in the problem (fixed to 2).
- Type:
int
- xl
Lower bound for the variables (fixed to -10).
- Type:
float
- xu
Upper bound for the variables (fixed to 10).
- Type:
float
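The standard Matyas definition, assumed here:

    f(x_1, x_2) = 0.26\,(x_1^2 + x_2^2) - 0.48\,x_1 x_2

Global minimum at f(0, 0) = 0.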
Pow Function
A simple function with its global optimum shifted away from the origin, used to test convergence toward non-centered minima.
- class Pow(n_var: int = 5)[source]
Bases:
AbstractProblem
Pow function implementation for optimization problems.
The Pow function is typically used for testing optimization algorithms. It is evaluated on the hypercube x_i ∈ [-5.0, 15.0] with the goal of reaching the global minimum at f(5, 7, 9, 3, 2) = 0.
- gen_type
The type of genes used in the problem (REAL).
- Type:
GeneType
- n_var
The number of design variables.
- Type:
int
- xl
The lower bound for the variables (-5.0).
- Type:
float
- xu
The upper bound for the variables (15.0).
- Type:
float
Powell Function
A unimodal function whose Hessian is singular at the optimum, used to test convergence on ill-conditioned, smooth landscapes.
- class Powell(n_var: int = 4)[source]
Bases:
AbstractProblem
Powell function implementation for optimization problems.
The Powell function is widely used for testing optimization algorithms. It is typically evaluated on the hypercube x_i ∈ [-4, 5], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (REAL for this implementation).
- Type:
GeneType
- xl
Lower bound for the variables (fixed to -4).
- Type:
float
- xu
Upper bound for the variables (fixed to 5).
- Type:
float
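The standard Powell singular function, assumed here (n is taken as a multiple of 4):

    f(\mathbf{x}) = \sum_{i=1}^{n/4}\left[(x_{4i-3} + 10x_{4i-2})^2 + 5\,(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10\,(x_{4i-3} - x_{4i})^4\right]

Global minimum at f(0, …, 0) = 0.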
Rastrigin Function
A highly multimodal function with a regular lattice of local minima, a classic stress test of global search ability.
- class Rastrigin(n_var: int = 2)[source]
Bases:
AbstractProblem
Rastrigin function implementation for optimization problems.
The Rastrigin function is widely used for testing optimization algorithms. It is typically evaluated on the hypercube x_i ∈ [-5.12, 5.12], for all i = 1, 2, …, n.
- gen_type
The type of genes used in the problem, set to REAL.
- Type:
GeneType
- n_var
The number of variables (dimensions) in the problem.
- Type:
int
- xl
The lower bound for each variable, set to -5.12.
- Type:
float
- xu
The upper bound for each variable, set to 5.12.
- Type:
float
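The standard Rastrigin definition, assumed here:

    f(\mathbf{x}) = 10n + \sum_{i=1}^{n}\left[x_i^2 - 10\cos(2\pi x_i)\right]

Global minimum at f(0, …, 0) = 0.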
Rosenbrock Function
A benchmark whose global minimum lies in a long, narrow, parabolic valley; finding the valley is easy, but converging along it is hard.
- class Rosenbrock(n_var: int = 2)[source]
Bases:
AbstractProblem
Rosenbrock function implementation for optimization problems.
The Rosenbrock function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 10], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bounds for the variables (fixed to -5).
- Type:
float
- xu
Upper bounds for the variables (fixed to 10).
- Type:
float
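The standard Rosenbrock definition, assumed here:

    f(\mathbf{x}) = \sum_{i=1}^{n-1}\left[100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\right]

Global minimum at f(1, …, 1) = 0.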
Rotated Hyper-Ellipsoid Function
A convex, unimodal extension of the Sphere function, used to test convergence speed as dimensionality grows.
- class Rothellipsoid(n_var: int = 3)[source]
Bases:
AbstractProblem
Rotated Hyper-Ellipsoid function implementation for optimization problems.
This function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.
- n_var
Number of variables (dimensions) for the problem.
- Type:
int
- gen_type
The type of genes used in the problem, set to REAL.
- Type:
GeneType
- xl
Lower bound for the variables, set to -100.
- Type:
float
- xu
Upper bound for the variables, set to 100.
- Type:
float
- f(x: list) → float [source]
Compute the Rotated Hyper-Ellipsoid function value for a given list of variables.
- __init__(n_var: int = 3)[source]
Initialize the Rothellipsoid problem.
- Parameters:
n_var (int, optional) – Number of variables (dimensions) for the problem, by default 3.
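The standard Rotated Hyper-Ellipsoid definition, assumed here (a sum of cumulative squared partial sums):

    f(\mathbf{x}) = \sum_{i=1}^{n}\sum_{j=1}^{i} x_j^2

Global minimum at f(0, …, 0) = 0.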
Modified Schaffer Function #1
A multimodal function with oscillating ridges, used to test an algorithm's precision near the global optimum.
- class Schaffer(n_var=2)[source]
Bases:
AbstractProblem
Modified Schaffer function #1 for optimization problems.
This class implements the modified Schaffer function #1, a common benchmark problem for optimization algorithms. The function is defined over a multidimensional input and is used to test the performance of optimization methods.
- gen_type
The type of gene, set to REAL.
- Type:
GeneType
- n_var
The number of design variables.
- Type:
int
- xl
The lower bound for the design variables, set to -100.
- Type:
float
- xu
The upper bound for the design variables, set to 100.
- Type:
float
- evaluate(x, out, *args, **kwargs)
Wrapper for pymoo compatibility to calculate the fitness value.
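Since evaluate is documented as a pymoo-compatibility wrapper, the sketch below shows the typical calling convention; pymoo stores fitness under the "F" key of the output dictionary, and it is assumed (not confirmed by this page) that the wrapper follows that convention.

    # Hedged sketch of the pymoo-style evaluation contract.
    problem = Schaffer(n_var=2)
    out = {}                            # pymoo passes a dict to collect outputs
    problem.evaluate([1.0, -1.0], out)  # wrapper fills in the fitness value(s)
    print(out.get("F"))                 # "F" is pymoo's conventional fitness key (assumed)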
Modified Schaffer Function #2
A multimodal function similar to Schaffer #1, with rippling local optima that test resistance to premature convergence.
- class Schaffer2(n_var: int = 2)[source]
Bases:
AbstractProblem
Modified Schaffer function #2 implementation for optimization problems.
The Modified Schaffer function #2 is widely used for testing optimization algorithms. The function is evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.
- n_var
The number of variables (dimensions) for the problem.
- Type:
int
- gen_type
Type of genes used in the problem, fixed to REAL.
- Type:
GeneType
- xl
Lower bounds for the variables, fixed to -100.
- Type:
float
- xu
Upper bounds for the variables, fixed to 100.
- Type:
float
- evaluate(x, out, *args, **kwargs)
Compute the fitness value(s) for pymoo’s optimization framework.
Schwefel Function
A deceptive multimodal function whose global minimum lies far from the next-best local minima, used to test resistance to premature convergence.
- class Schwefel(n_var: int = 2)[source]
Bases:
AbstractProblem
Schwefel function implementation for optimization problems.
The Schwefel function is commonly used for testing optimization algorithms. It is evaluated on the range [-500, 500] for each variable and has a global minimum at f(420.9687,…,420.9687) = 0.
- n_var
The number of variables (dimensions) for the problem.
- Type:
int
- gen_type
The type of genes used in the problem, fixed to REAL.
- Type:
GeneType
- xl
The lower bounds for the variables, fixed to -500.
- Type:
float
- xu
The upper bounds for the variables, fixed to 500.
- Type:
float
Notes
-500 ≤ x_i ≤ 500 for i = 1, …, n
Global minimum at f(420.9687, …, 420.9687) = 0
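The standard Schwefel definition, assumed here:

    f(\mathbf{x}) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)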
Sphere Function
The simplest convex, unimodal benchmark, commonly used as a baseline for convergence speed.
- class Sphere(n_var: int = 10)[source]
Bases:
AbstractProblem
Sphere function implementation for optimization problems.
The Sphere function is a simple and commonly used benchmark for optimization algorithms. It is defined on a hypercube where each variable typically lies within [-5.12, 5.12].
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bounds for the variables (fixed to -5.12).
- Type:
float
- xu
Upper bounds for the variables (fixed to 5.12).
- Type:
float
Notes
-5.12 ≤ x_i ≤ 5.12 for i = 1, …, n
Global minimum at f(0, …, 0) = 0
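The Sphere function is simply the squared Euclidean norm:

    f(\mathbf{x}) = \sum_{i=1}^{n} x_i^2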
Styblinski-Tang Function
A multimodal function with a global minimum hidden among many local minima, used to test global search on moderately rugged landscapes.
- class StyblinskiTang(n_var: int = 2)[source]
Bases:
AbstractProblem
Styblinski-Tang function implementation for optimization problems.
The Styblinski-Tang function is commonly used to test optimization algorithms. It is defined over the range [-5, 5] for each variable, with its global minimum at x_i ≈ -2.903534 in every coordinate.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bounds for the variables (fixed to -5).
- Type:
float
- xu
Upper bounds for the variables (fixed to 5).
- Type:
float
Notes
-5 ≤ x_i ≤ 5 for i = 1, …, n
Global minimum at f(-2.903534, …, -2.903534) ≈ -39.16599 · n_var
- __init__(n_var: int = 2)[source]
Initialize the Styblinski-Tang function with the specified number of variables.
- Parameters:
n_var (int, optional) – Number of variables (dimensions) in the problem, by default 2.
- evaluate(x, out, *args, **kwargs)[source]
Evaluate function for compatibility with pymoo’s optimizer.
This method wraps the f method and allows pymoo to handle batch evaluations by storing the computed fitness values in the output dictionary.
- Parameters:
x (numpy.ndarray) – Array of input variables.
out (dict) – Dictionary to store the output fitness values.
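The standard Styblinski-Tang definition, assumed here:

    f(\mathbf{x}) = \frac{1}{2}\sum_{i=1}^{n}\left(x_i^4 - 16x_i^2 + 5x_i\right)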
Sum of Different Powers Function
A convex, unimodal function whose sensitivity varies sharply by coordinate, used to test convergence precision.
- class Sumofdifferentpowers(n_var: int = 2)[source]
Bases:
AbstractProblem
Sum of Different Powers function implementation for optimization problems.
The Sum of Different Powers function is commonly used to test optimization algorithms. It is defined over the range [-1, 1] for each variable, with a global minimum of 0 at the origin.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (fixed to REAL).
- Type:
GeneType
- xl
Lower bounds for the variables (fixed to -1).
- Type:
float
- xu
Upper bounds for the variables (fixed to 1).
- Type:
float
Notes
-1 ≤ x_i ≤ 1 for all i.
Global minimum at f(0, …, 0) = 0.
- __init__(n_var: int = 2)[source]
Initialize the Sum of Different Powers problem.
- Parameters:
n_var (int, optional) – Number of variables (dimensions) in the problem, by default 2.
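The standard Sum of Different Powers definition, assumed here:

    f(\mathbf{x}) = \sum_{i=1}^{n} |x_i|^{\,i+1}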
Three Hump Camel Function
A two-variable function with three local minima, only one of which is global, used to test escape from nearby local optima.
- class Threehumps[source]
Bases:
AbstractProblem
Three Hump Camel function implementation for optimization problems.
The Three Hump Camel function is commonly used for testing optimization algorithms. It is defined for two variables within the bounds [-5, 5].
- n_var
Number of variables (dimensions) for the problem, fixed to 2.
- Type:
int
- gen_type
Type of genes used in the problem (REAL).
- Type:
GeneType
- xl
Lower bounds for the variables, fixed to -5.
- Type:
float
- xu
Upper bounds for the variables, fixed to 5.
- Type:
float
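The standard Three-Hump Camel definition, assumed here:

    f(x_1, x_2) = 2x_1^2 - 1.05\,x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2

Global minimum at f(0, 0) = 0.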
Zakharov Function
A unimodal function with a plate-shaped surface near its minimum, used to test convergence on flat, ill-conditioned regions.
- class Zakharov(n_var: int = 2)[source]
Bases:
AbstractProblem
Zakharov function implementation for optimization problems.
The Zakharov function is widely used for testing optimization algorithms. It is evaluated on the hypercube x_i ∈ [-5, 10] for all variables.
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (REAL).
- Type:
GeneType
- xl
Lower bounds for the variables, fixed to -5.
- Type:
float
- xu
Upper bounds for the variables, fixed to 10.
- Type:
float
- evaluate(x: list, out: dict, *args, **kwargs) → None [source]
Pymoo-compatible evaluation method for batch processing.
- __init__(n_var: int = 2)[source]
Initialize the Zakharov problem.
- Parameters:
n_var (int, optional) – Number of variables (dimensions) for the problem, by default 2.
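The standard Zakharov definition, assumed here:

    f(\mathbf{x}) = \sum_{i=1}^{n} x_i^2 + \left(\sum_{i=1}^{n} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{n} 0.5\,i\,x_i\right)^4

Global minimum at f(0, …, 0) = 0.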
Zettle Function
A two-variable benchmark with a shallow global minimum slightly off the origin, used to test convergence precision.
- class Zettle(n_var: int = 2)[source]
Bases:
AbstractProblem
Zettle function implementation for optimization problems.
The Zettle function is widely used for testing optimization algorithms. It is typically evaluated on the hypercube x_i ∈ [-5, 5].
- n_var
Number of variables (dimensions) in the problem.
- Type:
int
- gen_type
Type of genes used in the problem (REAL).
- Type:
GeneType
- xl
Lower bounds for the variables, fixed to -5.
- Type:
float
- xu
Upper bounds for the variables, fixed to 5.
- Type:
float
- evaluate(x: list, out: dict, *args, **kwargs) → None [source]
Pymoo-compatible evaluation method for batch processing.
- __init__(n_var: int = 2)[source]
Initialize the Zettle problem.
- Parameters:
n_var (int, optional) – Number of variables (dimensions) for the problem, by default 2.
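The Zettl function from the benchmark literature, which this implementation is assumed to follow:

    f(x_1, x_2) = \left(x_1^2 + x_2^2 - 2x_1\right)^2 + 0.25\,x_1

Global minimum at approximately f(-0.0299, 0) ≈ -0.003791.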