Algorithms

Functions implementing the randomized optimization and search algorithms.

hill_climb(problem, max_iters=inf, restarts=0, init_state=None, curve=False, random_state=None)

Use standard hill climbing to find the optimum for a given optimization problem.

Parameters:
  • problem (optimization object) – Object containing the fitness function and optimization problem to be solved. For example, DiscreteOpt(), ContinuousOpt() or TSPOpt().
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm for each restart.
  • restarts (int, default: 0) – Number of random restarts.
  • init_state (array, default: None) – 1-D Numpy array containing the starting state for the algorithm. If None, then a random state is used.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.
Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array containing the fitness at every iteration. Only returned if input argument curve is True.

References

Russell, S. and P. Norvig (2010). Artificial Intelligence: A Modern Approach, 3rd edition. Prentice Hall, New Jersey, USA.
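The standard hill climbing loop can be illustrated with a plain-Python sketch on a OneMax-style bit string (maximize the number of ones). This is an illustration of the algorithm, not the library's implementation; `one_max` and `hill_climb_sketch` are hypothetical names introduced here.

```python
import random

def one_max(state):
    """Fitness = number of ones in the bit string."""
    return sum(state)

def hill_climb_sketch(length=8, restarts=3, seed=0):
    rng = random.Random(seed)
    best_state, best_fitness = None, float("-inf")
    for _ in range(restarts + 1):
        # Random starting state for each restart.
        state = [rng.randint(0, 1) for _ in range(length)]
        while True:
            # Evaluate all single-bit-flip neighbors and move to the best one.
            neighbors = []
            for i in range(length):
                nbr = state.copy()
                nbr[i] = 1 - nbr[i]
                neighbors.append(nbr)
            best_nbr = max(neighbors, key=one_max)
            if one_max(best_nbr) <= one_max(state):
                break  # local optimum reached
            state = best_nbr
        if one_max(state) > best_fitness:
            best_state, best_fitness = state, one_max(state)
    return best_state, best_fitness

# On OneMax every bit flip from 0 to 1 improves fitness, so hill
# climbing always reaches the all-ones optimum.
state, fitness = hill_climb_sketch()
```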

random_hill_climb(problem, max_attempts=10, max_iters=inf, restarts=0, init_state=None, curve=False, random_state=None)

Use randomized hill climbing to find the optimum for a given optimization problem.

Parameters:
  • problem (optimization object) – Object containing the fitness function and optimization problem to be solved. For example, DiscreteOpt(), ContinuousOpt() or TSPOpt().
  • max_attempts (int, default: 10) – Maximum number of attempts to find a better neighbor at each step.
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm.
  • restarts (int, default: 0) – Number of random restarts.
  • init_state (array, default: None) – 1-D Numpy array containing the starting state for the algorithm. If None, then a random state is used.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.
Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array containing the fitness at every iteration. Only returned if input argument curve is True.

References

Brownlee, J (2011). Clever Algorithms: Nature-Inspired Programming Recipes. http://www.cleveralgorithms.com.
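Randomized hill climbing differs from the standard version in that it samples one random neighbor per step and gives up after max_attempts consecutive non-improving moves. A plain-Python sketch on the same OneMax problem (an illustration with hypothetical helper names, not the library's implementation):

```python
import random

def one_max(state):
    """Fitness = number of ones in the bit string."""
    return sum(state)

def random_hill_climb_sketch(length=8, max_attempts=10, restarts=2, seed=0):
    rng = random.Random(seed)
    best_state, best_fitness = None, float("-inf")
    for _ in range(restarts + 1):
        state = [rng.randint(0, 1) for _ in range(length)]
        attempts = 0
        while attempts < max_attempts:
            # Flip one randomly chosen bit and keep the change if it helps.
            nbr = state.copy()
            i = rng.randrange(length)
            nbr[i] = 1 - nbr[i]
            if one_max(nbr) > one_max(state):
                state, attempts = nbr, 0  # improvement: reset the counter
            else:
                attempts += 1
        if one_max(state) > best_fitness:
            best_state, best_fitness = state, one_max(state)
    return best_state, best_fitness

state, fitness = random_hill_climb_sketch()
```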

simulated_annealing(problem, schedule=mlrose.GeomDecay(), max_attempts=10, max_iters=inf, init_state=None, curve=False, random_state=None)

Use simulated annealing to find the optimum for a given optimization problem.

Parameters:
  • problem (optimization object) – Object containing the fitness function and optimization problem to be solved. For example, DiscreteOpt(), ContinuousOpt() or TSPOpt().
  • schedule (schedule object, default: mlrose.GeomDecay()) – Schedule used to determine the value of the temperature parameter.
  • max_attempts (int, default: 10) – Maximum number of attempts to find a better neighbor at each step.
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm.
  • init_state (array, default: None) – 1-D Numpy array containing the starting state for the algorithm. If None, then a random state is used.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.
Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array containing the fitness at every iteration. Only returned if input argument curve is True.

References

Russell, S. and P. Norvig (2010). Artificial Intelligence: A Modern Approach, 3rd edition. Prentice Hall, New Jersey, USA.
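The core of simulated annealing is the acceptance rule: improvements are always accepted, while worse moves are accepted with probability exp(delta/T), where T decays over time. A plain-Python sketch with a geometric decay schedule (analogous to the GeomDecay default); the names and the specific decay constants here are illustrative, not the library's implementation:

```python
import math
import random

def one_max(state):
    """Fitness = number of ones in the bit string."""
    return sum(state)

def simulated_annealing_sketch(length=8, init_temp=1.0, decay=0.99,
                               min_temp=0.001, max_iters=2000, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(length)]
    temp = init_temp
    for _ in range(max_iters):
        temp = max(temp * decay, min_temp)  # geometric decay schedule
        # Propose a random single-bit-flip neighbor.
        nbr = state.copy()
        i = rng.randrange(length)
        nbr[i] = 1 - nbr[i]
        delta = one_max(nbr) - one_max(state)
        # Always accept improvements; accept worse moves with
        # probability e^(delta/T), which shrinks as T cools.
        if delta > 0 or rng.random() < math.exp(delta / temp):
            state = nbr
    return state, one_max(state)

state, fitness = simulated_annealing_sketch()
```

At high temperatures the walk is nearly random, which helps escape local optima; as T approaches min_temp the rule degenerates into pure hill climbing.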

genetic_alg(problem, pop_size=200, mutation_prob=0.1, max_attempts=10, max_iters=inf, curve=False, random_state=None)

Use a standard genetic algorithm to find the optimum for a given optimization problem.

Parameters:
  • problem (optimization object) – Object containing the fitness function and optimization problem to be solved. For example, DiscreteOpt(), ContinuousOpt() or TSPOpt().
  • pop_size (int, default: 200) – Size of population to be used in genetic algorithm.
  • mutation_prob (float, default: 0.1) – Probability of a mutation at each element of the state vector during reproduction, expressed as a value between 0 and 1.
  • max_attempts (int, default: 10) – Maximum number of attempts to find a better state at each step.
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.
Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array of arrays containing the fitness of the entire population at every iteration. Only returned if input argument curve is True.

References

Russell, S. and P. Norvig (2010). Artificial Intelligence: A Modern Approach, 3rd edition. Prentice Hall, New Jersey, USA.
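A standard genetic algorithm evolves a whole population: parents are selected with probability proportional to fitness, children are produced by crossover, and each element of a child mutates with probability mutation_prob. A plain-Python sketch on OneMax (the selection and crossover operators here are common textbook choices, not necessarily the library's):

```python
import random

def one_max(state):
    """Fitness = number of ones in the bit string."""
    return sum(state)

def genetic_alg_sketch(length=8, pop_size=20, mutation_prob=0.1,
                       max_iters=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(max_iters):
        # Fitness-proportional (roulette-wheel) parent selection;
        # the +1 keeps all-zero states selectable.
        weights = [one_max(p) + 1 for p in pop]
        next_pop = []
        for _ in range(pop_size):
            mum, dad = rng.choices(pop, weights=weights, k=2)
            cut = rng.randrange(1, length)   # one-point crossover
            child = mum[:cut] + dad[cut:]
            for i in range(length):          # per-element mutation
                if rng.random() < mutation_prob:
                    child[i] = 1 - child[i]
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=one_max)
    return best, one_max(best)

best, fitness = genetic_alg_sketch()
```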

mimic(problem, pop_size=200, keep_pct=0.2, max_attempts=10, max_iters=inf, curve=False, random_state=None, fast_mimic=False)

Use MIMIC to find the optimum for a given optimization problem.

Parameters:
  • problem (optimization object) – Object containing the fitness function and optimization problem to be solved. For example, DiscreteOpt() or TSPOpt().
  • pop_size (int, default: 200) – Size of population to be used in algorithm.
  • keep_pct (float, default: 0.2) – Proportion of samples to keep at each iteration of the algorithm, expressed as a value between 0 and 1.
  • max_attempts (int, default: 10) – Maximum number of attempts to find a better neighbor at each step.
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.
  • fast_mimic (bool, default: False) – Activate fast MIMIC mode, which computes the mutual information in vectorized form. This is faster but requires more memory.
Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array containing the fitness at every iteration. Only returned if input argument curve is True.

References

De Bonet, J., C. Isbell, and P. Viola (1997). MIMIC: Finding Optima by Estimating Probability Densities. In Advances in Neural Information Processing Systems (NIPS) 9, pp. 424–430.

Note

MIMIC cannot be used for solving continuous-state optimization problems.
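MIMIC's outer loop keeps the top keep_pct of samples, fits a probability model to them, and samples a fresh population from that model. The sketch below uses independent per-bit marginals to keep it short, which makes it closer to a UMDA-style simplification than true MIMIC; full MIMIC fits a chain of pairwise dependencies chosen by mutual information (De Bonet et al., 1997). All names here are illustrative.

```python
import random

def one_max(state):
    """Fitness = number of ones in the bit string."""
    return sum(state)

def mimic_sketch(length=8, pop_size=100, keep_pct=0.2, max_iters=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(max_iters):
        # Keep the top keep_pct of samples by fitness.
        pop.sort(key=one_max, reverse=True)
        kept = pop[: max(1, int(keep_pct * pop_size))]
        # Estimate independent per-bit probabilities from the kept samples.
        # (Full MIMIC instead fits pairwise dependencies between bits,
        # selected by mutual information.)
        probs = [sum(s[i] for s in kept) / len(kept) for i in range(length)]
        # Sample a fresh population from the fitted distribution.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
    best = max(pop, key=one_max)
    return best, one_max(best)

best, fitness = mimic_sketch()
```

Because the model is fitted to discrete samples, the keep-and-resample loop only makes sense for discrete state spaces, which is why MIMIC excludes ContinuousOpt() problems.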