Continuous Optimization Problems

The pycellga.problems.single_objective.continuous package offers a range of continuous, single-objective benchmark functions. These functions are commonly used to evaluate the performance of optimization algorithms in terms of convergence accuracy, robustness, and computation speed. Below is a list of available benchmark functions in this package, each addressing unique aspects of optimization.
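
A typical problem can be instantiated and evaluated as in the following sketch. The exact module path is an assumption inferred from the package name, and the output key "F" follows the pymoo convention mentioned elsewhere on this page:

# Minimal usage sketch; the import path below is assumed, not confirmed.
from pycellga.problems.single_objective.continuous.ackley import Ackley

problem = Ackley(dimension=2)

out = {}
problem.evaluate([0.0, 0.0], out)   # pymoo-style evaluation; fitness is stored in `out`
print(out)                          # expected to hold the fitness under a key such as "F"

print(problem.f([0.0, 0.0]))        # f() is the alias that returns the fitness directly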

Ackley Function

A multimodal function with a large number of local minima, used to evaluate an algorithm's ability to escape local optima.

class Ackley(dimension: int)[source]

Bases: AbstractProblem

Ackley function implementation for optimization problems.

The Ackley function is widely used for testing optimization algorithms. It has a nearly flat outer region and a large hole at the center. The function is usually evaluated on the hypercube x_i ∈ [-32.768, 32.768], for all i = 1, 2, …, d.

design_variables

List of variable names.

Type:

List[str]

bounds

Bounds for each variable.

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization.

Type:

List[str]

constraints

Any constraints for the problem.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Calculates the Ackley function value for given variables.

f(x)[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Notes

-32.768 ≤ xi ≤ 32.768 for i = 1, 2, …, d. Global minimum at f(0, …, 0) = 0.

__init__(dimension: int)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

dimension (int) – The number of design variables (dimensions) for the problem.

evaluate(x, out, *args, **kwargs)[source]

Calculate the Ackley function value for a given list of variables.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.
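
For reference, the textbook Ackley definition can be sketched as follows. This is the standard form from the benchmark literature and is assumed, not guaranteed, to match pycellga's implementation exactly:

import numpy as np

def ackley_reference(x, a=20.0, b=0.2, c=2 * np.pi):
    # Textbook Ackley: -a*exp(-b*sqrt(mean(x^2))) - exp(mean(cos(c*x))) + a + e
    x = np.asarray(x, dtype=float)
    d = len(x)
    return float(-a * np.exp(-b * np.sqrt(np.sum(x**2) / d))
                 - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)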

Bent Cigar Function

A unimodal, highly ill-conditioned function, used to test convergence speed and robustness.

class Bentcigar(dimension: int)[source]

Bases: AbstractProblem

Bentcigar function implementation for optimization problems.

The Bentcigar function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.

design_variables

List of variable names.

Type:

List[str]

bounds

Bounds for each variable.

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization.

Type:

List[str]

constraints

Any constraints for the problem.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Calculates the Bentcigar function value for given variables.

f(x)[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Notes

-100 ≤ xi ≤ 100 for i = 1, …, n. Global minimum at f(0, …, 0) = 0.

__init__(dimension: int)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

dimension (int) – The number of design variables (dimensions) for the problem.

evaluate(x, out, *args, **kwargs)[source]

Calculate the Bentcigar function value for a given list of variables.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.
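
For reference, the standard Bent Cigar form can be sketched as below. It is taken from the common benchmark literature, not from pycellga's source, so treat it as an assumption:

import numpy as np

def bent_cigar_reference(x):
    # Standard Bent Cigar: x_1^2 + 10^6 * sum(x_i^2) over i >= 2
    x = np.asarray(x, dtype=float)
    return float(x[0]**2 + 1e6 * np.sum(x[1:]**2))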

Bohachevsky Function

Characterized by its simple structure with some local minima, making it ideal for testing fine-tuning capabilities.

class Bohachevsky(dimension: int)[source]

Bases: AbstractProblem

Bohachevsky function implementation for optimization problems.

The Bohachevsky function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-15, 15], for all i = 1, 2, …, n.

design_variables

List of variable names.

Type:

List[str]

bounds

Bounds for each variable.

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization.

Type:

List[str]

constraints

Any constraints for the problem.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Calculates the Bohachevsky function value for given variables.

f(x)[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Notes

-15 ≤ xi ≤ 15 for i = 1, …, n. Global minimum at f(0, …, 0) = 0.

__init__(dimension: int)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

dimension (int) – The number of design variables (dimensions) for the problem.

evaluate(x: List[float], out: Dict[str, Any], *args, **kwargs)[source]

Calculate the Bohachevsky function value for a given list of variables.

Parameters:
  • x (list) – A list of float variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: List[float]) → float[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.
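
The n-dimensional Bohachevsky variant commonly used in benchmarks sums the two-variable form over consecutive variable pairs. The sketch below assumes that convention and may differ from pycellga's exact form:

import numpy as np

def bohachevsky_reference(x):
    # Common n-dimensional generalization of Bohachevsky #1,
    # summed over consecutive variable pairs (x_i, x_{i+1}).
    x = np.asarray(x, dtype=float)
    xi, xj = x[:-1], x[1:]
    return float(np.sum(xi**2 + 2 * xj**2
                        - 0.3 * np.cos(3 * np.pi * xi)
                        - 0.4 * np.cos(4 * np.pi * xj) + 0.7))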

Chichinadze Function

A complex landscape with both smooth and steep regions, suitable for testing algorithms on challenging landscapes.

class Chichinadze[source]

Bases: AbstractProblem

Chichinadze function implementation for optimization problems.

The Chichinadze function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x, y ∈ [-30, 30].

design_variables

List of variable names.

Type:

List[str]

bounds

Bounds for each variable.

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization.

Type:

List[str]

constraints

Any constraints for the problem.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Calculates the Chichinadze function value for given variables.

f(x)[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Notes

-30 ≤ x, y ≤ 30. Global minimum at f(5.90133, 0.5) = −43.3159.

__init__()[source]

Initialize the problem with variables, bounds, objectives, and constraints.

This initializer takes no parameters; the problem is fixed at two design variables (x, y) with bounds (-30, 30).

evaluate(x, out, *args, **kwargs)[source]

Calculate the Chichinadze function value for a given list of variables.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: List[float]) → float[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Drop Wave Function

A multimodal function often used to evaluate the balance between exploration and exploitation.

class Dropwave[source]

Bases: AbstractProblem

Dropwave function for optimization problems.

The Dropwave function is a multimodal function commonly used as a performance test problem for optimization algorithms. It is defined within the bounds -5.12 ≤ xi ≤ 5.12 for i = 1, 2, and has a global minimum at f(0, 0) = -1.

design_variables

The names of the variables, in this case [“x1”, “x2”].

Type:

list

bounds

The lower and upper bounds for each variable, [-5.12, 5.12] for both x1 and x2.

Type:

list of tuples

objectives

List defining the optimization objective, which is to “minimize” for this function.

Type:

list

num_variables

The number of variables (dimensions) for the function, which is 2 in this case.

Type:

int

evaluate(x, out, *args, **kwargs)[source]

Calculates the value of the Dropwave function at a given point x.

f(x)[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

__init__()[source]

Initialize the problem with variables, bounds, objectives, and constraints.

This initializer takes no parameters; the problem is fixed at two design variables (x1, x2) with bounds (-5.12, 5.12).

evaluate(x, out, *args, **kwargs)[source]

Calculate the Dropwave function value for a given list of variables.

Parameters:
  • x (list) – A list of two floats representing the coordinates [x1, x2].

  • out (dict) – Dictionary to store the output fitness values.

Notes

The Dropwave function is defined as:

f(x1, x2) = - (1 + cos(12 * sqrt(x1^2 + x2^2))) / (0.5 * (x1^2 + x2^2) + 2)

where x1 and x2 are the input variables.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.
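
Since the definition is given in the notes above, a direct transcription is straightforward. This is for reference only; pycellga's own evaluate is the authoritative implementation:

import math

def dropwave_reference(x1, x2):
    # Direct transcription of the formula in the notes above.
    r2 = x1**2 + x2**2
    return -(1 + math.cos(12 * math.sqrt(r2))) / (0.5 * r2 + 2)

dropwave_reference(0.0, 0.0)  # -1.0, the documented global minimum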

Frequency Modulation Sound Function (FMS)

A complex, multimodal function commonly used to test the robustness of optimization algorithms.

class Fms[source]

Bases: AbstractProblem

Fms function implementation for optimization problems.

The Fms function is used for testing optimization algorithms, specifically those dealing with frequency modulation sound.

design_variables

List of variable names.

Type:

List[str]

bounds

Bounds for each variable.

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization.

Type:

List[str]

constraints

Any constraints for the problem.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Calculates the Fms function value for given variables.

f(x)[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

__init__()[source]

Initialize the problem with variables, bounds, objectives, and constraints.

This initializer takes no parameters; the design variables, bounds, objectives, and constraints are fixed by the problem definition.

evaluate(x, out, *args, **kwargs)[source]

Calculate the Fms function value for a given list of variables.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Griewank Function

A continuous, nonlinear function with numerous local minima, commonly used to test an algorithm’s global search capability.

class Griewank(dimensions=10)[source]

Bases: AbstractProblem

Griewank function implementation for optimization problems.

The Griewank function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-600, 600], for all i = 1, 2, …, n.

design_variables

Names of the design variables.

Type:

List[str]

bounds

Bounds for each design variable.

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization, typically [“minimize”].

Type:

List[str]

constraints

Any constraints for the optimization problem.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Evaluates the Griewank function value for a given list of variables.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Notes

-600 ≤ xi ≤ 600 for i = 1, …, n. Global minimum at f(0, …, 0) = 0.

__init__(dimensions=10)[source]

Initialize the Griewank function with the specified number of dimensions.

Parameters:

dimensions (int, optional) – The number of dimensions (design variables) for the Griewank function, by default 10.

evaluate(x: List[float], out, *args, **kwargs)[source]

Calculate the Griewank function value for a given list of variables.

Parameters:
  • x (list) – A list of float variables.

  • out (dict) – Dictionary to store the output fitness value.

f(x: List[float]) → float[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Parameters:

x (list) – A list of float variables.

Returns:

The Griewank function value.

Return type:

float
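
For reference, the standard Griewank definition can be sketched as follows (an assumption from the benchmark literature; pycellga's implementation may differ in details):

import numpy as np

def griewank_reference(x):
    # Standard Griewank: 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i)))
    x = np.asarray(x, dtype=float)
    i = np.arange(1, len(x) + 1)
    return float(1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))))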

Holzman Function

A scalable test function offering varying levels of difficulty across dimensions, used to compare different optimization approaches.

class Holzman(design_variables: int = 2)[source]

Bases: AbstractProblem

Holzman function implementation for optimization problems.

The Holzman function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-10, 10], for all i = 1, 2, …, n.

design_variables

Names of the design variables.

Type:

List[str]

bounds

Bounds for each variable.

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization.

Type:

List[str]

constraints

Any constraints for the problem.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Calculates the Holzman function value for given variables.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Notes

-10 ≤ xi ≤ 10 for i = 1, …, n. Global minimum at f(0, …, 0) = 0.

__init__(design_variables: int = 2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

evaluate(x: List[float], out, *args, **kwargs)[source]

Calculate the Holzman function value for a given list of variables.

Parameters:
  • x (list) – A list of float variables.

  • out (dict) – Dictionary to store the output fitness value.

f(x: List[float]) → float[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Levy Function

A smooth but multimodal function with many local minima, used to test an algorithm's ability to avoid premature convergence.

class Levy(dimension: int = 2)[source]

Bases: AbstractProblem

Levy function implementation for optimization problems.

The Levy function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-10, 10], for all i = 1, 2, …, n.

design_variables

The names of the design variables.

Type:

List[str]

bounds

The bounds for each variable, typically [(-10, 10), (-10, 10), …].

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization, usually “minimize” for single-objective functions.

Type:

List[str]

constraints

Any constraints for the optimization problem.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Calculates the Levy function value for a given list of variables and stores in out.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Notes

-10 ≤ xi ≤ 10 for i = 1, …, n. Global minimum at f(1, …, 1) = 0.

__init__(dimension: int = 2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

dimension (int, optional) – The number of design variables, by default 2.

evaluate(x: List[float], out, *args, **kwargs)[source]

Evaluate the Levy function at a given point.

Parameters:
  • x (list) – A list of float variables.

  • out (dict) – Dictionary to store the output fitness value.

f(x: List[float]) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Parameters:

x (list) – A list of float variables.

Returns:

The calculated Levy function value.

Return type:

float
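
For reference, the standard Levy definition uses the substitution w_i = 1 + (x_i - 1)/4. The sketch below follows that textbook form, which is assumed, not confirmed, to match pycellga's code:

import numpy as np

def levy_reference(x):
    # Standard Levy definition with w_i = 1 + (x_i - 1)/4.
    x = np.asarray(x, dtype=float)
    w = 1 + (x - 1) / 4
    head = np.sin(np.pi * w[0])**2
    body = np.sum((w[:-1] - 1)**2 * (1 + 10 * np.sin(np.pi * w[:-1] + 1)**2))
    tail = (w[-1] - 1)**2 * (1 + np.sin(2 * np.pi * w[-1])**2)
    return float(head + body + tail)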

Matyas Function

A smooth, convex, unimodal function, useful as a baseline for testing convergence on differentiable landscapes.

class Matyas[source]

Bases: AbstractProblem

Matyas function implementation for optimization problems.

The Matyas function is commonly used to evaluate the performance of optimization algorithms. It is a simple, continuous, convex function that has a global minimum at the origin.

design_variables

The names of the design variables, typically [“x1”, “x2”] for 2 variables.

Type:

list of str

bounds

The bounds for each variable, typically [(-10, 10), (-10, 10)].

Type:

list of tuple

objectives

The objectives for optimization, set to [“minimize”].

Type:

list of str

evaluate(x, out, *args, **kwargs)[source]

Computes the Matyas function value for a given list of variables.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

__init__()[source]

Initialize the problem with variables, bounds, objectives, and constraints.

This initializer takes no parameters; the problem is fixed at two design variables (x1, x2) with bounds (-10, 10).

evaluate(x, out, *args, **kwargs)[source]

Calculate the Matyas function value for a given list of variables.

Parameters:
  • x (list of float) – A list of float variables representing a point in the solution space.

  • out (dict) – Dictionary to store the output fitness value with key “F”.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.
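
The standard Matyas definition is short enough to state directly (a reference sketch from the benchmark literature, assumed to match pycellga's form):

def matyas_reference(x1, x2):
    # Standard Matyas: 0.26*(x1^2 + x2^2) - 0.48*x1*x2; global minimum f(0, 0) = 0.
    return 0.26 * (x1**2 + x2**2) - 0.48 * x1 * x2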

Pow Function

A smooth, unimodal function whose global minimum lies away from the origin, used to test convergence toward non-trivial optima.

class Pow(design_variables=5)[source]

Bases: AbstractProblem

Pow function implementation for optimization problems.

The Pow function is typically used for testing optimization algorithms. It is evaluated on the hypercube x_i ∈ [-5.0, 15.0] with the goal of reaching the global minimum at f(5, 7, 9, 3, 2) = 0.

design_variables

The names of the design variables.

Type:

List[str]

bounds

The bounds for each variable, typically [(-5.0, 15.0) for each dimension].

Type:

List[Tuple[float, float]]

objectives

Objectives for optimization, e.g., “minimize”.

Type:

List[str]

evaluate(x, out, *args, **kwargs)[source]

Calculates the Pow function value for compatibility with Pymoo’s optimizer.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

__init__(design_variables=5)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 5.

evaluate(x, out, *args, **kwargs)[source]

Calculate the Pow function value for a given list of variables.

Parameters:
  • x (list) – A list of float variables representing the point in the solution space.

  • out (dict) – Dictionary to store the output fitness values.

f(x: List[float]) → float[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Parameters:

x (list) – A list of float variables.

Returns:

The Pow function value.

Return type:

float

Powell Function

A smooth, unimodal function with an ill-conditioned region around its optimum, a standard test of convergence in nearly flat valleys.

class Powell(design_variables=4)[source]

Bases: AbstractProblem

Powell function implementation for optimization problems.

The Powell function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-4, 5], for all i = 1, 2, …, n.

design_variables

The number of variables for the problem.

Type:

int

bounds

The bounds for each variable, typically [(-4, 5), (-4, 5), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

evaluate(x, out, *args, **kwargs) → None[source]

Calculates the Powell function value and stores in the output dictionary.

f(x: list) → float[source]

Wrapper for evaluate to maintain compatibility with the rest of the codebase.

__init__(design_variables=4)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 4.

evaluate(x, out, *args, **kwargs)[source]

Evaluate the Powell function at a given point.

Parameters:
  • x (list or numpy array) – Input variables.

  • out (dict) – Output dictionary to store the function value.

f(x)[source]

Wrapper for the evaluate method to maintain compatibility.

Parameters:

x (list) – A list of float variables.

Returns:

The computed Powell function value.

Return type:

float
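
The classic Powell singular function operates on groups of four variables. The sketch below assumes that standard form (and a dimension divisible by four), which may differ from pycellga's exact implementation:

import numpy as np

def powell_reference(x):
    # Classic Powell singular function; assumes len(x) is a multiple of 4.
    x = np.asarray(x, dtype=float)
    a, b, c, d = x[0::4], x[1::4], x[2::4], x[3::4]
    return float(np.sum((a + 10 * b)**2 + 5 * (c - d)**2
                        + (b - 2 * c)**4 + 10 * (a - d)**4))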

Rastrigin Function

A highly multimodal function with a regular lattice of local minima, widely used to test global search capability.

class Rastrigin(design_variables=2)[source]

Bases: AbstractProblem

Rastrigin function implementation for optimization problems.

The Rastrigin function is widely used for testing optimization algorithms. It is typically evaluated on the hypercube x_i ∈ [-5.12, 5.12], for all i = 1, 2, …, n.

design_variables

The number of variables for the problem.

Type:

int

bounds

The bounds for each variable, typically [(-5.12, 5.12), (-5.12, 5.12), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

f(x: list) → float[source]

Calculates the Rastrigin function value for a given list of variables.

Notes

-5.12 ≤ xi ≤ 5.12 for i = 1, …, n. Global minimum at f(0, …, 0) = 0.

__init__(design_variables=2)[source]

Initialize the Rastrigin problem with the specified number of variables.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

f(x: list) → float[source]

Calculate the Rastrigin function value for a given list of variables.

Parameters:

x (list) – A list of float variables.

Returns:

The Rastrigin function value.

Return type:

float
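
For reference, the standard Rastrigin definition can be sketched as follows (the textbook form, assumed rather than confirmed to match pycellga's code):

import numpy as np

def rastrigin_reference(x):
    # Standard Rastrigin: 10*n + sum(x_i^2 - 10*cos(2*pi*x_i))
    x = np.asarray(x, dtype=float)
    return float(10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))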

Rosenbrock Function

A unimodal function whose narrow, curved valley makes progress toward the optimum slow, a classic test of an algorithm's ability to follow curved ridges.

class Rosenbrock(design_variables=2)[source]

Bases: AbstractProblem

Rosenbrock function implementation for optimization problems.

The Rosenbrock function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 10], for all i = 1, 2, …, n.

design_variables

Number of variables for the problem.

Type:

int

bounds

The bounds for each variable, typically [(-5, 10), (-5, 10), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

evaluate(x, out, *args, **kwargs)[source]

Evaluates the Rosenbrock function value for a given list of variables.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

Notes

-5 ≤ xi ≤ 10 for i = 1, …, n. Global minimum at f(1, …, 1) = 0.

__init__(design_variables=2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

evaluate(x, out, *args, **kwargs)[source]

Calculate the Rosenbrock function value for a given list of variables.

Parameters:
  • x (list) – A list of float variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: list) → float[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Parameters:

x (list) – A list of float variables.

Returns:

The Rosenbrock function value.

Return type:

float
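
For reference, the standard Rosenbrock definition can be sketched as follows (the textbook form, assumed to match pycellga's implementation):

import numpy as np

def rosenbrock_reference(x):
    # Standard Rosenbrock: sum over i of 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2
    x = np.asarray(x, dtype=float)
    return float(np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2))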

Rothellipsoid Function

A convex, unimodal quadratic function that extends the sphere function, used to test performance on smooth but ill-conditioned landscapes.

class Rothellipsoid(design_variables=3)[source]

Bases: AbstractProblem

Rotated Hyper-Ellipsoid function implementation for optimization problems.

This function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.

design_variables

Number of variables (dimensions) for the problem.

Type:

int

bounds

The bounds for each variable, typically [(-100, 100), (-100, 100), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

f(x: list) → float[source]

Alias for evaluate, calculates the Rotated Hyper-Ellipsoid function value for a given list of variables.

evaluate(x, out, *args, **kwargs)[source]

Pymoo-compatible function for calculating the fitness values and storing them in the out dictionary.

__init__(design_variables=3)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 3.

evaluate(x, out, *args, **kwargs)[source]

Calculate the Rotated Hyper-Ellipsoid function value for pymoo compatibility.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Parameters:

x (list) – A list of float variables.

Returns:

The Rotated Hyper-Ellipsoid function value.

Return type:

float
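
For reference, the standard Rotated Hyper-Ellipsoid definition sums the running partial sums of squares. The sketch below follows that textbook form (assumed, not taken from pycellga's source):

import numpy as np

def rotated_hyper_ellipsoid_reference(x):
    # Standard form: sum over i of (x_1^2 + ... + x_i^2),
    # i.e. the sum of a cumulative sum of squares.
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.cumsum(x**2)))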

Schaffer Function

A highly oscillatory, multimodal function, used to test behavior on rugged landscapes with many local optima.

class Schaffer(design_variables=2)[source]

Bases: AbstractProblem

Schaffer’s Function for optimization problems.

This class implements Schaffer's function, a common benchmark problem for optimization algorithms. The function is defined over a multidimensional input and is used to test the performance of optimization methods.

design_variables

The number of variables for the problem.

Type:

int

bounds

The bounds for each variable, typically set to [(-100, 100), (-100, 100), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

evaluate(x, out, *args, **kwargs)[source]

Calculates Schaffer's function value for compatibility with pymoo.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

__init__(design_variables=2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

evaluate(x, out, *args, **kwargs)[source]

Evaluate Schaffer's function at a given point, in a pymoo-compatible way.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness value.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Schaffer2 Function

A highly oscillatory, multimodal function (Modified Schaffer #2), used to test resistance to entrapment in local optima.

class Schaffer2(design_variables=2)[source]

Bases: AbstractProblem

Modified Schaffer function #2 implementation for optimization problems.

The Modified Schaffer function #2 is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-100, 100], for all i = 1, 2, …, n.

design_variables

The number of design variables.

Type:

int

bounds

The bounds for each variable, typically [(-100, 100), (-100, 100), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

evaluate(x, out, *args, **kwargs)[source]

Evaluates the Modified Schaffer function #2 value for a given list of variables.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

__init__(design_variables=2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

evaluate(x, out, *args, **kwargs)[source]

Evaluate the Modified Schaffer function #2 value for a given list of variables.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Schwefel Function

A deceptive multimodal function whose global minimum lies far from the next-best local minima, used to test global exploration.

class Schwefel(design_variables=2)[source]

Bases: AbstractProblem

Schwefel function implementation for optimization problems.

This function is commonly used for testing optimization algorithms and is evaluated on the range [-500, 500] for each variable.

design_variables

The number of variables in the problem.

Type:

int

bounds

The bounds for each variable, typically [(-500, 500), (-500, 500), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

evaluate(x, out, *args, **kwargs)[source]

Calculates the Schwefel function value for a given list of variables, compatible with pymoo.

f(x: list) → float[source]

Alias for evaluate to maintain compatibility with the rest of the codebase.

__init__(design_variables=2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

evaluate(x, out, *args, **kwargs)[source]

Calculate the Schwefel function value for a given list of variables.

Parameters:
  • x (numpy.ndarray) – A numpy array of float variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x)[source]

Alias for the evaluate method to maintain compatibility with the rest of the codebase.

Parameters:

x (list) – A list of float variables.

Returns:

The Schwefel function value.

Return type:

float
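
For reference, the standard Schwefel definition can be sketched as follows (the textbook form, assumed to match pycellga's implementation):

import numpy as np

def schwefel_reference(x):
    # Standard Schwefel: 418.9829*n - sum(x_i * sin(sqrt(|x_i|)));
    # the minimum lies near x_i ≈ 420.9687 in every dimension.
    x = np.asarray(x, dtype=float)
    return float(418.9829 * len(x) - np.sum(x * np.sin(np.sqrt(np.abs(x)))))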

Sphere Function

The simplest convex, unimodal benchmark, used as a baseline for convergence speed on smooth landscapes.

class Sphere(design_variables=10)[source]

Bases: AbstractProblem

Sphere function implementation for optimization problems.

The Sphere function is commonly used for testing optimization algorithms. It is defined on a hypercube where each variable typically lies within [-5.12, 5.12].

__init__(design_variables=10)[source]

Initializes the Sphere function with specified design variables and bounds.

Parameters:

design_variables (int, optional) – Number of variables for the function, by default 10.

evaluate(x, out, *args, **kwargs)[source]

Evaluate function for compatibility with pymoo’s optimizer.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: list) → float[source]

Calculate the Sphere function value for a given list of variables.

Parameters:

x (list) – A list of float variables.

Returns:

The Sphere function value.

Return type:

float
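
The Sphere definition is simply a sum of squares; a reference sketch (assumed to match pycellga's form):

import numpy as np

def sphere_reference(x):
    # Standard Sphere: sum of squares; global minimum f(0, ..., 0) = 0.
    return float(np.sum(np.asarray(x, dtype=float)**2))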

Styblinskitang Function

A multimodal function with one global and several local minima, used to test escape from local optima.

class StyblinskiTang(design_variables=2)[source]

Bases: AbstractProblem

Styblinski-Tang function implementation for optimization problems.

The Styblinski-Tang function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 5], for all i = 1, 2, …, n.

__init__(design_variables=2)[source]

Initializes the Styblinski-Tang function with specified design variables and bounds.

Parameters:

design_variables (int, optional) – Number of variables for the function, by default 2.

evaluate(x, out, *args, **kwargs)[source]

Evaluate function for compatibility with pymoo’s optimizer.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: list) → float[source]

Calculate the Styblinski-Tang function value for a given list of variables.

Parameters:

x (list) – A list of float variables.

Returns:

The Styblinski-Tang function value.

Return type:

float
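
For reference, the standard Styblinski-Tang definition can be sketched as follows (the textbook form, assumed rather than confirmed to match pycellga's code):

import numpy as np

def styblinski_tang_reference(x):
    # Standard Styblinski-Tang: 0.5 * sum(x_i^4 - 16*x_i^2 + 5*x_i);
    # minimum of about -39.166*n near x_i ≈ -2.9035 in every dimension.
    x = np.asarray(x, dtype=float)
    return float(0.5 * np.sum(x**4 - 16 * x**2 + 5 * x))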

Sumofdifferentpowers Function

A unimodal function whose sensitivity to each variable grows with the variable's index, used to test the precision of convergence.

class Sumofdifferentpowers(design_variables=2)[source]

Bases: AbstractProblem

Sum of Different Powers function implementation for optimization problems.

The Sum of Different Powers function is often used for testing optimization algorithms. It is usually evaluated within the bounds x_i ∈ [-1, 1] for each variable.

n_var

The number of variables for the problem.

Type:

int

bounds

The bounds for each variable, typically [(-1, 1), (-1, 1), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

f(x: list) → float[source]

Calculates the Sum of Different Powers function value for a given list of variables.

Notes

-1 ≤ xi ≤ 1 for all i. Global minimum at f(0,…,0) = 0.

__init__(design_variables=2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

evaluate(x, out, *args, **kwargs)[source]

Evaluate method for compatibility with pymoo’s framework.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: list) → float[source]

Calculate the Sum of Different Powers function value for a given list of variables.

Parameters:

x (list) – A list of float variables.

Returns:

The Sum of Different Powers function value.

Return type:

float
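
For reference, the standard Sum of Different Powers definition can be sketched as follows (an assumption from the benchmark literature):

import numpy as np

def sum_of_different_powers_reference(x):
    # Standard form: sum over i of |x_i|^(i+1)
    x = np.asarray(x, dtype=float)
    i = np.arange(1, len(x) + 1)
    return float(np.sum(np.abs(x)**(i + 1)))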

Threehumps Function

A multimodal function with three local minima, only one of which is global, used to test escape from shallow local optima.

class Threehumps[source]

Bases: AbstractProblem

Three Hump Camel function implementation for optimization problems.

The Three Hump Camel function is widely used for testing optimization algorithms. The function is usually evaluated on the hypercube x_i ∈ [-5, 5], for all i = 1, 2, …, n.

bounds

Bounds for each variable, set to [(-5, 5), (-5, 5)] for this function.

Type:

list of tuple

design_variables

Number of variables for this problem, which is 2.

Type:

int

objectives

Number of objectives, which is 1 for single-objective optimization.

Type:

int

f(x: list) → float[source]

Calculates the Three Hump Camel function value for a given list of variables.

__init__()[source]

Initialize the problem with variables, bounds, objectives, and constraints.

This initializer takes no parameters; the problem is fixed at two design variables with bounds (-5, 5).

evaluate(x, out, *args, **kwargs)[source]

Evaluate method for compatibility with pymoo’s framework.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: list) → float[source]

Calculate the Three Hump Camel function value for a given list of variables.

Parameters:

x (list) – A list of float variables.

Returns:

The Three Hump Camel function value.

Return type:

float
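
The standard Three Hump Camel definition is short enough to state directly (a reference sketch, assumed to match pycellga's form):

def three_hump_camel_reference(x1, x2):
    # Standard Three Hump Camel: 2*x1^2 - 1.05*x1^4 + x1^6/6 + x1*x2 + x2^2
    return 2 * x1**2 - 1.05 * x1**4 + x1**6 / 6 + x1 * x2 + x2**2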

Zakharov Function

A unimodal, plate-shaped function with no local minima other than the global one, used to test convergence on smooth landscapes.

class Zakharov(design_variables=2)[source]

Bases: AbstractProblem

Zakharov function implementation for optimization problems.

The Zakharov function is commonly used to test optimization algorithms. It evaluates inputs over the hypercube x_i ∈ [-5, 10].

design_variables

The number of variables for the problem.

Type:

int

bounds

The bounds for each variable, typically [(-5, 10), (-5, 10), …].

Type:

list of tuple

objectives

Objectives for the problem, set to [“minimize”] for single-objective optimization.

Type:

list

f(x: list) → float[source]

Calculates the Zakharov function value for a given list of variables.

__init__(design_variables=2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

evaluate(x: List[float], out: dict, *args: Any, **kwargs: Any) → None[source]

Evaluate function for compatibility with pymoo’s optimizer.

Parameters:
  • x (numpy.ndarray) – Array of input variables.

  • out (dict) – Dictionary to store the output fitness values.

f(x: List[float]) → float[source]

Calculate the Zakharov function value for a given list of variables.

Parameters:

x (list) – A list of float variables.

Returns:

The Zakharov function value.

Return type:

float
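
For reference, the standard Zakharov definition can be sketched as follows (the textbook form, assumed to match pycellga's implementation):

import numpy as np

def zakharov_reference(x):
    # Standard Zakharov: sum(x_i^2) + (sum(0.5*i*x_i))^2 + (sum(0.5*i*x_i))^4
    x = np.asarray(x, dtype=float)
    s = np.sum(0.5 * np.arange(1, len(x) + 1) * x)
    return float(np.sum(x**2) + s**2 + s**4)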

Zettle Function

A two-dimensional function whose shallow global minimum lies slightly off the origin, used to test fine-grained convergence.

class Zettle(design_variables=2)[source]

Bases: AbstractProblem

Zettle function implementation for optimization problems.

The Zettle function is widely used for testing optimization algorithms. It is usually evaluated on the hypercube x_i ∈ [-5, 5] for all i = 1, 2, …, n.

design_variables

The number of variables for the problem.

Type:

int

bounds

The bounds for each variable, typically [(-5, 5), (-5, 5), …].

Type:

list of tuple

objectives

Number of objectives, set to 1 for single-objective optimization.

Type:

int

f(x: list) → float[source]

Calculates the Zettle function value for a given list of variables.

Notes

-5 ≤ xi ≤ 5 for i = 1, …, n. Global minimum at f(-0.0299, 0) = -0.003791.

__init__(design_variables=2)[source]

Initialize the problem with variables, bounds, objectives, and constraints.

Parameters:

design_variables (int, optional) – The number of design variables, by default 2.

f(x: list) → float[source]

Calculate the Zettle function value for a given list of variables.

Parameters:

x (list) – A list of float variables.

Returns:

The Zettle function value, rounded to six decimal places.

Return type:

float
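
A sketch of the standard two-variable Zettle definition, consistent with the minimum quoted in the notes above (an assumption from the benchmark literature, not pycellga's source):

def zettle_reference(x1, x2):
    # Standard Zettle: (x1^2 + x2^2 - 2*x1)^2 + x1/4;
    # reproduces the documented minimum f(-0.0299, 0) ≈ -0.003791.
    return (x1**2 + x2**2 - 2 * x1)**2 + 0.25 * x1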