[Figure: plot of the Rastrigin function of two variables in 3D]

In mathematical optimization, the Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It is a typical example of a non-linear multimodal function. It was first proposed in 1974 by Rastrigin as a 2-dimensional function and has been generalized by Rudolph. The generalized version was popularized by Hoffmeister & Bäck and by Mühlenbein et al. Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima.

On an n-dimensional domain it is defined by:

f(x) = A·n + ∑_{i=1}^{n} [x_i² − A cos(2π x_i)]

where A = 10 and x_i ∈ [−5.12, 5.12]. There are many extrema:

  • The global minimum is at x = 0, where f(x) = 0.
  • The maximum function value for x_i ∈ [−5.12, 5.12] is located at x = (±4.52299366..., ..., ±4.52299366...):
Number of dimensions   Maximum value at ±4.52299366
1                       40.35329019
2                       80.70658039
3                      121.0598706
4                      161.4131608
5                      201.7664509
6                      242.1197412
7                      282.4730314
8                      322.8263216
9                      363.1796117
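The definition above translates directly into code. The following is a minimal sketch of the function (the name `rastrigin` and the default A = 10 follow the definition given here), which reproduces the global minimum and the maximum values tabulated above:

```python
import math

def rastrigin(x, A=10.0):
    """Rastrigin function f(x) = A*n + sum(x_i^2 - A*cos(2*pi*x_i)) for an n-dimensional point x."""
    n = len(x)
    return A * n + sum(xi * xi - A * math.cos(2 * math.pi * xi) for xi in x)

# Global minimum at the origin:
print(rastrigin([0.0, 0.0]))                  # 0.0
# Maximum on [-5.12, 5.12]^n, here n = 2 (second row of the table above):
print(rastrigin([4.52299366, 4.52299366]))    # ≈ 80.70658039
```

Each additional dimension adds the same per-coordinate contribution, which is why the tabulated maxima are integer multiples of the 1-dimensional value 40.35329019.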

Here are all the values at 0.5 intervals for the 2D Rastrigin function with x_i ∈ [−5.12, 5.12]:

f(x1, x2):

x2 \ x1    0      ±0.5   ±1     ±1.5   ±2     ±2.5   ±3     ±3.5   ±4     ±4.5   ±5     ±5.12
0          0      20.25  1      22.25  4      26.25  9      32.25  16     40.25  25     28.92
±0.5       20.25  40.5   21.25  42.5   24.25  46.5   29.25  52.5   36.25  60.5   45.25  49.17
±1         1      21.25  2      23.25  5      27.25  10     33.25  17     41.25  26     29.92
±1.5       22.25  42.5   23.25  44.5   26.25  48.5   31.25  54.5   38.25  62.5   47.25  51.17
±2         4      24.25  5      26.25  8      30.25  13     36.25  20     44.25  29     32.92
±2.5       26.25  46.5   27.25  48.5   30.25  52.5   35.25  58.5   42.25  66.5   51.25  55.17
±3         9      29.25  10     31.25  13     35.25  18     41.25  25     49.25  34     37.92
±3.5       32.25  52.5   33.25  54.5   36.25  58.5   41.25  64.5   48.25  72.5   57.25  61.17
±4         16     36.25  17     38.25  20     42.25  25     48.25  32     56.25  41     44.92
±4.5       40.25  60.5   41.25  62.5   44.25  66.5   49.25  72.5   56.25  80.5   65.25  69.17
±5         25     45.25  26     47.25  29     51.25  34     57.25  41     65.25  50     53.92
±5.12      28.92  49.17  29.92  51.17  32.92  55.17  37.92  61.17  44.92  69.17  53.92  57.85
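The table entries need not be computed from scratch: the Rastrigin function is separable, since f(x) = ∑ (x_i² − A cos(2π x_i) + A), so every 2D value is the sum of two independent per-coordinate contributions. A short check of this (the helper name `contrib` is just illustrative):

```python
import math

def contrib(x, A=10.0):
    """Per-coordinate contribution x^2 - A*cos(2*pi*x) + A; zero at x = 0."""
    return x * x - A * math.cos(2 * math.pi * x) + A

# f(x1, x2) = contrib(x1) + contrib(x2); e.g. the table entry at (±0.5, ±0.5):
print(contrib(0.5) + contrib(0.5))   # 40.5
```

Separability is one reason the function is a popular benchmark: its minima can be analyzed one coordinate at a time, even though the landscape as a whole is highly multimodal.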

The abundance of local minima underlines the necessity of a global optimization algorithm when the global minimum must be found: local optimization algorithms are likely to get stuck in a local minimum.
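This behavior is easy to demonstrate in one dimension. The following is a minimal sketch using fixed-step gradient descent on the 1-D Rastrigin function (the step size, iteration count, and number of random restarts are illustrative choices, not prescribed by any source): a single local run stalls in a nearby basin, while restarting from many random points recovers the global minimum.

```python
import math
import random

def rastrigin1d(x, A=10.0):
    """1-D Rastrigin function: f(x) = A + x^2 - A*cos(2*pi*x)."""
    return A + x * x - A * math.cos(2 * math.pi * x)

def grad_descent(x, lr=0.001, steps=2000):
    """Plain gradient descent; f'(x) = 2x + 2*pi*A*sin(2*pi*x) with A = 10."""
    for _ in range(steps):
        g = 2 * x + 2 * math.pi * 10.0 * math.sin(2 * math.pi * x)
        x -= lr * g
    return x

# A single local run started at x = 3.2 stalls in the basin near x ≈ 3 (f ≈ 9):
x_local = grad_descent(3.2)
print(x_local, rastrigin1d(x_local))

# Restarting from many random points in [-5.12, 5.12] recovers the basin of
# the global minimum at x = 0:
random.seed(0)
starts = [random.uniform(-5.12, 5.12) for _ in range(200)]
x_best = min((grad_descent(s) for s in starts), key=rastrigin1d)
print(x_best, rastrigin1d(x_best))
```

Random restarts are only one of many global strategies; population-based methods such as evolutionary algorithms, for which the Rastrigin function was popularized as a benchmark, address the same difficulty.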
