Continuous Optimization Problems

The pycellga.problems.single_objective.continuous package offers a range of continuous, single-objective benchmark functions. These functions are commonly used to evaluate the performance of optimization algorithms in terms of convergence accuracy, robustness, and computation speed. Below is a list of available benchmark functions in this package, each addressing unique aspects of optimization.
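All problems in this package share the same interface: construct an instance (most constructors take an n_var argument for the dimensionality), then call f on a list of floats to obtain a fitness value. The sketch below illustrates this pattern; the import path assumes each class lives in a module named after the function inside this package, which may need adjusting to your installed layout.

from pycellga.problems.single_objective.continuous.ackley import Ackley

# Construct a 10-dimensional Ackley instance and evaluate one candidate.
problem = Ackley(n_var=10)
x = [0.5] * problem.n_var          # a point inside the bounds [xl, xu]
fitness = problem.f(x)             # single-solution fitness value
print(problem.xl, problem.xu, fitness)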

Ackley Function

A multimodal function known for its large number of local minima. Used to evaluate an algorithm’s ability to escape local optima.

class Ackley(n_var)[source]

Bases: AbstractProblem

Ackley function implementation for optimization problems.

The Ackley function is widely used for testing optimization algorithms. It is characterized by a nearly flat outer region and a large hole at the center. The function is usually evaluated on the hypercube x_i ∈ [-32.768, 32.768], for all i = 1, 2, …, d.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bound for each variable (fixed to -32.768).

Type:

float

xu

Upper bound for each variable (fixed to 32.768).

Type:

float

f(x: List[float]) float[source]

Compute the Ackley function value for a single solution.

__init__(n_var)[source]

Initialize the Ackley problem.

Parameters:

n_var (int) – Number of variables (dimensions) in the problem.

f(x: List[float]) float[source]

Compute the Ackley function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float
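As a quick sanity check against the documented optimum, f at the origin should be numerically zero. Floating-point evaluation of the standard Ackley formula returns a value on the order of 1e-16 at the origin rather than exactly 0, so the comparison below uses a tolerance (import path assumed as in the sketch above).

from pycellga.problems.single_objective.continuous.ackley import Ackley

problem = Ackley(n_var=5)
value = problem.f([0.0] * 5)
# Ackley at the origin evaluates to ~4.44e-16 in floating point, not 0.
assert abs(value) < 1e-9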

Bent Cigar Function

A unimodal function that is rotationally invariant, used to test convergence speed and robustness.

class Bentcigar(n_var: int)[source]

Bases: AbstractProblem

Bentcigar function implementation for optimization problems.

The Bentcigar function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bound for each variable (fixed to -100).

Type:

float

xu

Upper bound for each variable (fixed to 100).

Type:

float

f(x: List[float]) float[source]

Compute the Bentcigar function value for a single solution.

Notes

-100 ≤ xi ≤ 100 for i = 1,…,n
Global minimum at f(0,…,0) = 0

__init__(n_var: int)[source]

Initialize the Bentcigar problem.

Parameters:

n_var (int) – Number of variables (dimensions) in the problem.

f(x: List[float]) float[source]

Compute the Bentcigar function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Bohachevsky Function

Characterized by its simple structure with some local minima, making it ideal for testing fine-tuning capabilities.

class Bohachevsky(n_var: int)[source]

Bases: AbstractProblem

Bohachevsky function implementation for optimization problems.

The Bohachevsky function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x_i ∈ [-15, 15], for all i = 1, 2, …, n.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bound for each variable (fixed to -15).

Type:

float

xu

Upper bound for each variable (fixed to 15).

Type:

float

f(x: List[float]) float[source]

Compute the Bohachevsky function value for a single solution.

Notes

-15 ≤ xi ≤ 15 for i = 1,…,n
Global minimum at f(0,…,0) = 0

__init__(n_var: int)[source]

Initialize the Bohachevsky problem.

Parameters:

n_var (int) – Number of variables (dimensions) in the problem.

f(x: List[float]) float[source]

Compute the Bohachevsky function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Chichinadze Function

A complex landscape with both smooth and steep regions, suitable for testing algorithms on challenging landscapes.

class Chichinadze[source]

Bases: AbstractProblem

Chichinadze function implementation for optimization problems.

The Chichinadze function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x, y ∈ [-30, 30].

n_var

Number of variables (fixed to 2 for x and y).

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bound for the variables (fixed to -30).

Type:

float

xu

Upper bound for the variables (fixed to 30).

Type:

float

f(x: List[float]) float[source]

Compute the Chichinadze function value for a single solution.

Notes

-30 ≤ x, y ≤ 30
Global minimum at f(5.90133, 0.5) = -43.3159

__init__()[source]

Initialize the Chichinadze problem.

f(x: List[float]) float[source]

Compute the Chichinadze function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float
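Chichinadze is fixed to two variables, so its constructor takes no arguments. Using the minimizer reported in the Notes above, a hedged numeric check (the tolerance is loose because -43.3159 and (5.90133, 0.5) are themselves rounded values):

from pycellga.problems.single_objective.continuous.chichinadze import Chichinadze

problem = Chichinadze()                  # always 2-dimensional
value = problem.f([5.90133, 0.5])        # minimizer reported in the Notes
assert abs(value - (-43.3159)) < 1e-3    # loose tolerance for rounded constants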

Drop Wave Function

A multimodal function often used to evaluate the balance between exploration and exploitation.

class Dropwave[source]

Bases: AbstractProblem

Dropwave function for optimization problems.

The Dropwave function is a multimodal function commonly used as a performance test problem for optimization algorithms. It is defined within the bounds -5.12 ≤ xi ≤ 5.12 for i = 1, 2, and has a global minimum at f(0, 0) = -1.

n_var

Number of variables (dimensions) in the problem (fixed to 2).

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bound for the variables (fixed to -5.12).

Type:

float

xu

Upper bound for the variables (fixed to 5.12).

Type:

float

f(x: List[float]) float[source]

Compute the Dropwave function value for a single solution.

Notes

-5.12 ≤ xi ≤ 5.12 for i = 1, 2
Global minimum at f(0, 0) = -1

__init__()[source]

Initialize the Dropwave problem.

f(x: List[float]) float[source]

Compute the Dropwave function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Frequency Modulation Sound Function (FMS)

A complex, multimodal function commonly used to test the robustness of optimization algorithms.

class Fms[source]

Bases: AbstractProblem

Fms function implementation for optimization problems.

The Fms function is used for testing optimization algorithms, specifically those dealing with frequency modulation sound.

n_var

Number of variables (dimensions) in the problem (fixed to 6).

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bound for the variables (fixed to -6.4).

Type:

float

xu

Upper bound for the variables (fixed to 6.35).

Type:

float

f(x: List[float]) float[source]

Compute the Fms function value for a single solution.

__init__()[source]

Initialize the Fms problem.

f(x: List[float]) float[source]

Compute the Fms function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float
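Fms is likewise fixed-dimensional: six variables, one per frequency-modulation parameter. A minimal sketch that evaluates a random candidate drawn from the documented bounds (import path assumed as before):

import random
from pycellga.problems.single_objective.continuous.fms import Fms

problem = Fms()                                     # always 6-dimensional
x = [random.uniform(-6.4, 6.35) for _ in range(6)]
print(problem.f(x))                                 # lower is better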

Griewank Function

A continuous, nonlinear function with numerous local minima, commonly used to test an algorithm’s global search capability.

class Griewank(n_var: int = 10)[source]

Bases: AbstractProblem

Griewank function implementation for optimization problems.

The Griewank function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube xi ∈ [-600, 600], for all i = 1, 2, …, n.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bound for the variables (fixed to -600).

Type:

float

xu

Upper bound for the variables (fixed to 600).

Type:

float

f(x: List[float]) float[source]

Compute the Griewank function value for a single solution.

Notes

-600 ≤ xi ≤ 600 for i = 1,…,n
Global minimum at f(0,…,0) = 0

__init__(n_var: int = 10)[source]

Initialize the Griewank problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) in the problem, by default 10.

f(x: List[float]) float[source]

Compute the Griewank function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float
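Griewank's constructor defaults to ten dimensions, and the documented global minimum sits at the origin, which gives a convenient smoke test:

from pycellga.problems.single_objective.continuous.griewank import Griewank

problem = Griewank()                      # n_var defaults to 10
assert problem.n_var == 10
value = problem.f([0.0] * problem.n_var)
assert abs(value) < 1e-9                  # documented minimum f(0,…,0) = 0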

Holzman Function

A scalable benchmark whose difficulty grows with the number of variables, used to compare convergence accuracy across optimization approaches.

class Holzman(n_var: int = 2)[source]

Bases: AbstractProblem

Holzman function implementation for optimization problems.

The Holzman function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube xi ∈ [-10, 10], for all i = 1, 2, …, n.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bound for the variables (fixed to -10).

Type:

float

xu

Upper bound for the variables (fixed to 10).

Type:

float

f(x: List[float]) float[source]

Compute the Holzman function value for a single solution.

Notes

-10 ≤ xi ≤ 10 for i = 1,…,n
Global minimum at f(0,…,0) = 0

__init__(n_var: int = 2)[source]

Initialize the Holzman problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) in the problem, by default 2.

f(x: List[float]) float[source]

Compute the Holzman function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Levy Function

A multimodal function with many local minima, used to test an algorithm’s ability to locate the global optimum on a smooth landscape.

class Levy(n_var: int = 2)[source]

Bases: AbstractProblem

Levy function implementation for optimization problems.

The Levy function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x_i ∈ [-10, 10].

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (REAL).

Type:

GeneType

xl

Lower bounds for the variables, fixed to -10.

Type:

float

xu

Upper bounds for the variables, fixed to 10.

Type:

float

f(x: List[float]) float[source]

Compute the Levy function value for a given solution.

evaluate(x: List[float], out: dict, *args, **kwargs) None[source]

Pymoo-compatible evaluation method for batch processing.

__init__(n_var: int = 2)[source]

Initialize the Levy problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) for the problem, by default 2.

evaluate(x: List[float], out: dict, *args, **kwargs) None[source]

Evaluate method for compatibility with pymoo’s framework.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: List[float]) float[source]

Compute the Levy function value for a given solution.

Parameters:

x (list) – A list of float variables.

Returns:

The Levy function value.

Return type:

float
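Levy also documents a pymoo-compatible evaluate method that writes the fitness into an output dictionary instead of returning it. A minimal sketch of that calling convention; storing the value under the key "F" is an assumption based on pymoo's convention and the note in the Rotated Hyper-Ellipsoid section below.

import numpy as np
from pycellga.problems.single_objective.continuous.levy import Levy

problem = Levy(n_var=2)
out = {}
problem.evaluate(np.array([1.0, 1.0]), out)   # fitness is written into `out`
print(out)                                    # expected under the "F" key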

Matyas Function

A simple, convex, plate-shaped function with a single global minimum at the origin, used to test convergence speed on smooth, differentiable landscapes.

class Matyas[source]

Bases: AbstractProblem

Matyas function implementation for optimization problems.

The Matyas function is commonly used to evaluate the performance of optimization algorithms. It is a simple, continuous, convex function that has a global minimum at the origin.

gen_type

The type of genes used in the problem (fixed to REAL).

Type:

GeneType

n_var

Number of variables (dimensions) in the problem (fixed to 2).

Type:

int

xl

Lower bound for the variables (fixed to -10).

Type:

float

xu

Upper bound for the variables (fixed to 10).

Type:

float

f(x: list) float[source]

Computes the Matyas function value for a given solution.

__init__()[source]

Initialize the Matyas problem.

This problem is defined for exactly 2 variables with bounds [-10, 10].

Parameters:

None

f(x)[source]

Compute the Matyas function value for a given solution.

Parameters:

x (list of float) – A list of float variables representing a point in the solution space.

Returns:

The computed fitness value for the given solution.

Return type:

float

Pow Function

A smooth function whose global optimum is shifted away from the origin, used to test convergence accuracy toward a known target point.

class Pow(n_var: int = 5)[source]

Bases: AbstractProblem

Pow function implementation for optimization problems.

The Pow function is typically used for testing optimization algorithms. It is evaluated on the hypercube x_i ∈ [-5.0, 15.0] with the goal of reaching the global minimum at f(5, 7, 9, 3, 2) = 0.

gen_type

The type of genes used in the problem (REAL).

Type:

GeneType

n_var

The number of design variables.

Type:

int

xl

The lower bound for the variables (-5.0).

Type:

float

xu

The upper bound for the variables (15.0).

Type:

float

f(x: List[float]) float[source]

Compute the Pow function value for a given solution.

__init__(n_var: int = 5)[source]

Initialize the Pow problem.

Parameters:

n_var (int, optional) – The number of variables (dimensions) in the problem, by default 5.

f(x: List[float]) float[source]

Compute the Pow function value for a given solution.

Parameters:

x (list of float) – A list of float variables representing a point in the solution space.

Returns:

The computed fitness value for the given solution.

Return type:

float
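The documented optimum of Pow is at the point (5, 7, 9, 3, 2) with value 0, so a direct check is straightforward (import path assumed as before):

from pycellga.problems.single_objective.continuous.pow import Pow

problem = Pow(n_var=5)
value = problem.f([5.0, 7.0, 9.0, 3.0, 2.0])   # documented minimizer
assert abs(value) < 1e-9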

Powell Function

A unimodal, non-separable function, used to test convergence on smooth but ill-conditioned landscapes.

class Powell(n_var: int = 4)[source]

Bases: AbstractProblem

Powell function implementation for optimization problems.

The Powell function is widely used for testing optimization algorithms. It is typically evaluated on the hypercube x_i ∈ [-4, 5], for all i = 1, 2, …, n.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (REAL for this implementation).

Type:

GeneType

xl

Lower bound for the variables (fixed to -4).

Type:

float

xu

Upper bound for the variables (fixed to 5).

Type:

float

f(x: List[float]) float[source]

Compute the Powell function value for a given solution.

__init__(n_var: int = 4)[source]

Initialize the Powell problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) in the problem, by default 4.

f(x: List[float]) float[source]

Compute the Powell function value for a given solution.

Parameters:

x (list of float) – A list of float variables representing a point in the solution space.

Returns:

The computed fitness value for the given solution.

Return type:

float

Rastrigin Function

A highly multimodal function with a regular lattice of local minima, used to test an algorithm’s global search capability.

class Rastrigin(n_var: int = 2)[source]

Bases: AbstractProblem

Rastrigin function implementation for optimization problems.

The Rastrigin function is widely used for testing optimization algorithms. It is typically evaluated on the hypercube x_i ∈ [-5.12, 5.12], for all i = 1, 2, …, n.

gen_type

The type of genes used in the problem, set to REAL.

Type:

GeneType

n_var

The number of variables (dimensions) in the problem.

Type:

int

xl

The lower bound for each variable, set to -5.12.

Type:

float

xu

The upper bound for each variable, set to 5.12.

Type:

float

f(x: List[float]) float[source]

Computes the Rastrigin function value for a given solution.

__init__(n_var: int = 2)[source]

Initialize the Rastrigin problem with the specified number of variables.

Parameters:

n_var (int, optional) – The number of design variables (dimensions), by default 2.

f(x: List[float]) float[source]

Calculate the Rastrigin function value for a given list of variables.

Parameters:

x (list) – A list of float variables.

Returns:

The computed Rastrigin function value.

Return type:

float

Rosenbrock Function

A unimodal function whose global minimum lies in a long, narrow, curved valley; reaching the valley is easy, but converging along it to the minimum is difficult.

class Rosenbrock(n_var: int = 2)[source]

Bases: AbstractProblem

Rosenbrock function implementation for optimization problems.

The Rosenbrock function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 10], for all i = 1, 2, …, n.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bounds for the variables (fixed to -5).

Type:

float

xu

Upper bounds for the variables (fixed to 10).

Type:

float

f(x: List[float]) float[source]

Compute the Rosenbrock function value for a single solution.

__init__(n_var: int = 2)[source]

Initialize the Rosenbrock problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) in the problem, by default 2.

f(x: List[float]) float[source]

Compute the Rosenbrock function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Rotated Hyper-Ellipsoid Function

A continuous, convex, unimodal function, used to test convergence speed on smooth, ill-conditioned landscapes.

class Rothellipsoid(n_var: int = 3)[source]

Bases: AbstractProblem

Rotated Hyper-Ellipsoid function implementation for optimization problems.

This function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.

n_var

Number of variables (dimensions) for the problem.

Type:

int

gen_type

The type of genes used in the problem, set to REAL.

Type:

GeneType

xl

Lower bound for the variables, set to -100.

Type:

float

xu

Upper bound for the variables, set to 100.

Type:

float

f(x: list) float[source]

Compute the Rotated Hyper-Ellipsoid function value for a given list of variables.

evaluate(x, out, *args, **kwargs)[source]

Computes the fitness value for pymoo compatibility.

__init__(n_var: int = 3)[source]

Initialize the Rothellipsoid problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) for the problem, by default 3.

evaluate(x, out, *args, **kwargs)[source]

Evaluate the function for pymoo compatibility.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the computed fitness value.

Notes

Stores the computed fitness value in out[“F”].

f(x)[source]

Compute the Rotated Hyper-Ellipsoid function value for a given solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float
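Per the note above, evaluate stores the computed fitness in out["F"]. Assuming evaluate simply wraps f (as the documentation suggests), the two entry points should agree:

from pycellga.problems.single_objective.continuous.rothellipsoid import Rothellipsoid

problem = Rothellipsoid(n_var=3)
out = {}
problem.evaluate([1.0, 2.0, 3.0], out)
assert out["F"] == problem.f([1.0, 2.0, 3.0])   # same value via both entry points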

Modified Schaffer Function #1

A multimodal function with concentric ripples, used to test an algorithm’s ability to escape local optima.

class Schaffer(n_var=2)[source]

Bases: AbstractProblem

Modified Schaffer function #1 for optimization problems.

This class implements Schaffer’s function, a common benchmark problem for optimization algorithms. The function is defined over a multidimensional input and is used to test the performance of optimization methods.

gen_type

The type of gene, set to REAL.

Type:

GeneType

n_var

The number of design variables.

Type:

int

xl

The lower bound for the design variables, set to -100.

Type:

float

xu

The upper bound for the design variables, set to 100.

Type:

float

f(x: list) float[source]

Compute Schaffer’s function value for a given list of variables.

evaluate(x, out, *args, **kwargs)

Wrapper for pymoo compatibility to calculate the fitness value.

__init__(n_var=2)[source]

Initialize the Schaffer problem.

Parameters:

n_var (int, optional) – The number of design variables, by default 2.

f(x)[source]

Compute Schaffer’s function value for a given solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The calculated fitness value.

Return type:

float

Modified Schaffer Function #2

A multimodal function from the Schaffer family with many local optima, used to test the robustness of global search.

class Schaffer2(n_var: int = 2)[source]

Bases: AbstractProblem

Modified Schaffer function #2 implementation for optimization problems.

The Modified Schaffer function #2 is widely used for testing optimization algorithms. The function is evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.

n_var

The number of variables (dimensions) for the problem.

Type:

int

gen_type

Type of genes used in the problem, fixed to REAL.

Type:

GeneType

xl

Lower bounds for the variables, fixed to -100.

Type:

float

xu

Upper bounds for the variables, fixed to 100.

Type:

float

f(x: list) float[source]

Compute the Modified Schaffer function #2 value for a single solution.

evaluate(x, out, *args, **kwargs)

Compute the fitness value(s) for pymoo’s optimization framework.

__init__(n_var: int = 2)[source]

Initialize the Modified Schaffer function #2.

Parameters:

n_var (int, optional) – Number of variables (dimensions) for the problem, by default 2.

f(x: list) float[source]

Compute the Modified Schaffer function #2 value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Schwefel Function

A deceptive multimodal function whose global minimum lies far from the center of the domain, used to test global exploration.

class Schwefel(n_var: int = 2)[source]

Bases: AbstractProblem

Schwefel function implementation for optimization problems.

The Schwefel function is commonly used for testing optimization algorithms. It is evaluated on the range [-500, 500] for each variable and has a global minimum at f(420.9687,…,420.9687) = 0.

n_var

The number of variables (dimensions) for the problem.

Type:

int

gen_type

The type of genes used in the problem, fixed to REAL.

Type:

GeneType

xl

The lower bounds for the variables, fixed to -500.

Type:

float

xu

The upper bounds for the variables, fixed to 500.

Type:

float

f(x: list) float[source]

Compute the Schwefel function value for a single solution.

Notes

-500 ≤ xi ≤ 500 for i = 1,…,n
Global minimum at f(420.9687,…,420.9687) = 0

__init__(n_var: int = 2)[source]

Initialize the Schwefel function with the specified number of variables.

Parameters:

n_var (int, optional) – The number of variables (dimensions) for the problem, by default 2.

f(x: list) float[source]

Compute the Schwefel function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float
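Schwefel’s documented minimizer sits at 420.9687 in every coordinate, far from the domain’s center, which is exactly what makes the function deceptive. A numeric check (loose tolerance, since 420.9687 is itself a rounded constant):

from pycellga.problems.single_objective.continuous.schwefel import Schwefel

problem = Schwefel(n_var=2)
value = problem.f([420.9687, 420.9687])
assert abs(value) < 1e-3   # ≈ 0 at the documented minimizer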

Sphere Function

The simplest unimodal benchmark: continuous, convex, and separable. Commonly used as a baseline for convergence speed.

class Sphere(n_var: int = 10)[source]

Bases: AbstractProblem

Sphere function implementation for optimization problems.

The Sphere function is a simple and commonly used benchmark for optimization algorithms. It is defined on a hypercube where each variable typically lies within [-5.12, 5.12].

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bounds for the variables (fixed to -5.12).

Type:

float

xu

Upper bounds for the variables (fixed to 5.12).

Type:

float

f(x: list) float[source]

Compute the Sphere function value for a single solution.

Notes

-5.12 ≤ xi ≤ 5.12 for i = 1,…,n
Global minimum at f(0,…,0) = 0

__init__(n_var: int = 10)[source]

Initialize the Sphere function with the specified number of variables.

Parameters:

n_var (int, optional) – Number of variables (dimensions) in the problem, by default 10.

f(x: list) float[source]

Compute the Sphere function value for a single solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Styblinski-Tang Function

A multimodal function with several local minima, used to test the balance between exploration and fine-grained convergence.

class StyblinskiTang(n_var: int = 2)[source]

Bases: AbstractProblem

Styblinski-Tang function implementation for optimization problems.

The Styblinski-Tang function is commonly used to test optimization algorithms. It is defined over the range [-5, 5] for each variable and has a global minimum of approximately -39.16599 * n_var at x_i ≈ -2.903534.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bounds for the variables (fixed to -5).

Type:

float

xu

Upper bounds for the variables (fixed to 5).

Type:

float

f(x: list) float[source]

Compute the Styblinski-Tang function value for a given solution.

Notes

-5 ≤ xi ≤ 5 for i = 1,…,n
Global minimum at f(-2.903534, …, -2.903534) ≈ -39.16599 * n_var

__init__(n_var: int = 2)[source]

Initialize the Styblinski-Tang function with the specified number of variables.

Parameters:

n_var (int, optional) – Number of variables (dimensions) in the problem, by default 2.

evaluate(x, out, *args, **kwargs)[source]

Evaluate function for compatibility with pymoo’s optimizer.

This method wraps the f method and allows pymoo to handle batch evaluations by storing the computed fitness values in the output dictionary.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: list) float[source]

Compute the Styblinski-Tang function value for a given solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float
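Because each coordinate contributes about -39.16599 at x_i ≈ -2.903534, the Styblinski-Tang minimum scales linearly with n_var: for n_var = 2 the expected optimum is about -78.332. A check under the module-path assumption used throughout:

from pycellga.problems.single_objective.continuous.styblinskitang import StyblinskiTang

problem = StyblinskiTang(n_var=2)
value = problem.f([-2.903534, -2.903534])
assert abs(value - (-39.16599 * 2)) < 1e-3   # minimum scales with n_var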

Sum of Different Powers Function

A simple unimodal function, used to test convergence accuracy on smooth, differentiable landscapes.

class Sumofdifferentpowers(n_var: int = 2)[source]

Bases: AbstractProblem

Sum of Different Powers function implementation for optimization problems.

The Sum of Different Powers function is commonly used to test optimization algorithms. It is defined over the range [-1, 1] for each variable, with a global minimum of 0 at the origin.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (fixed to REAL).

Type:

GeneType

xl

Lower bounds for the variables (fixed to -1).

Type:

float

xu

Upper bounds for the variables (fixed to 1).

Type:

float

f(x: list) float[source]

Compute the Sum of Different Powers function value for a given solution.

Notes

-1 ≤ xi ≤ 1 for all i.
Global minimum at f(0,…,0) = 0.

__init__(n_var: int = 2)[source]

Initialize the Sum of Different Powers problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) in the problem, by default 2.

evaluate(x, out, *args, **kwargs)[source]

Evaluate function for compatibility with pymoo’s optimizer.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: list) float[source]

Compute the Sum of Different Powers function value for a given solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Three Hump Camel Function

A two-dimensional function with three local minima (hence the name), used to test an algorithm’s ability to escape shallow local optima.

class Threehumps[source]

Bases: AbstractProblem

Three Hump Camel function implementation for optimization problems.

The Three Hump Camel function is commonly used for testing optimization algorithms. It is defined for two variables within the bounds [-5, 5].

n_var

Number of variables (dimensions) for the problem, fixed to 2.

Type:

int

gen_type

Type of genes used in the problem (REAL).

Type:

GeneType

xl

Lower bounds for the variables, fixed to -5.

Type:

float

xu

Upper bounds for the variables, fixed to 5.

Type:

float

f(x: list) float[source]

Compute the Three Hump Camel function value for a given solution.

__init__()[source]

Initialize the Three Hump Camel problem.

evaluate(x, out, *args, **kwargs)[source]

Evaluate method for compatibility with pymoo’s framework.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: list) float[source]

Compute the Three Hump Camel function value for a given solution.

Parameters:

x (list or numpy.ndarray) – Array of input variables.

Returns:

The computed fitness value for the given solution.

Return type:

float

Zakharov Function

A unimodal function with no local minima other than the global one, used to test convergence on smooth landscapes.

class Zakharov(n_var: int = 2)[source]

Bases: AbstractProblem

Zakharov function implementation for optimization problems.

The Zakharov function is widely used for testing optimization algorithms. It is evaluated on the hypercube x_i ∈ [-5, 10] for all variables.

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (REAL).

Type:

GeneType

xl

Lower bounds for the variables, fixed to -5.

Type:

float

xu

Upper bounds for the variables, fixed to 10.

Type:

float

f(x: list) float[source]

Compute the Zakharov function value for a given solution.

evaluate(x: list, out: dict, *args, **kwargs) None[source]

Pymoo-compatible evaluation method for batch processing.

__init__(n_var: int = 2)[source]

Initialize the Zakharov problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) for the problem, by default 2.

evaluate(x: List[float], out: dict, *args, **kwargs) None[source]

Evaluate method for compatibility with pymoo’s framework.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: List[float]) float[source]

Compute the Zakharov function value for a given solution.

Parameters:

x (list) – A list of float variables.

Returns:

The Zakharov function value.

Return type:

float

Zettle Function

A two-dimensional function with a shallow global minimum near the origin, used to test fine-tuning accuracy.

class Zettle(n_var: int = 2)[source]

Bases: AbstractProblem

Zettle function implementation for optimization problems.

The Zettle function is widely used for testing optimization algorithms. It is typically evaluated on the hypercube x_i ∈ [-5, 5].

n_var

Number of variables (dimensions) in the problem.

Type:

int

gen_type

Type of genes used in the problem (REAL).

Type:

GeneType

xl

Lower bounds for the variables, fixed to -5.

Type:

float

xu

Upper bounds for the variables, fixed to 5.

Type:

float

f(x: list) float[source]

Compute the Zettle function value for a given solution.

evaluate(x: list, out: dict, *args, **kwargs) None[source]

Pymoo-compatible evaluation method for batch processing.

__init__(n_var: int = 2)[source]

Initialize the Zettle problem.

Parameters:

n_var (int, optional) – Number of variables (dimensions) for the problem, by default 2.

evaluate(x: List[float], out: dict, *args, **kwargs) None[source]

Evaluate method for compatibility with pymoo’s framework.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: List[float]) float[source]

Compute the Zettle function value for a given solution.

Parameters:

x (list) – A list of float variables.

Returns:

The Zettle function value, rounded to six decimal places.

Return type:

float