Title
A comparison of predictive measures of problem difficulty in evolutionary algorithms
Author
Abstract
This paper studies a number of predictive measures of problem difficulty, among which epistasis variance and fitness distance correlation are the most widely known. Our approach is based on comparing the reference class of a measure to a number of known easy function classes. First, we generalize the reference classes of fitness distance correlation and epistasis variance, and construct a new predictive measure that is insensitive to nonlinear fitness scaling. We then investigate the relations between the reference classes of the measures and a number of intuitively easy classes, such as the steepest ascent optimizable functions. Within the latter class, functions that fool the predictive quality of all of the measures are easily found. This points out the need to further identify which functions are easy for a given class of evolutionary algorithms in order to design more efficient hardness indicators for them. Finally, we restrict attention to the genetic algorithm (GA), consider both GA-easy and GA-hard fitness functions, and give experimental evidence that the values of the measures, based on random samples, can be completely unreliable and entirely uncorrelated to the convergence quality and convergence speed of GA instances using either proportional or ranking selection.
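Of the measures named in the abstract, fitness distance correlation is the simplest to state: it is the sample correlation between the fitness of a point and its distance to the nearest global optimum, estimated from a random sample. The sketch below is illustrative only and is not taken from the paper; it assumes binary strings, Hamming distance, a single known optimum, and a maximization setting.

```python
import numpy as np

def fitness_distance_correlation(bitstrings, fitnesses, optimum):
    """Sample fitness distance correlation (FDC) for binary strings.

    bitstrings: array of shape (n, L) with 0/1 entries (random sample)
    fitnesses:  array of shape (n,) with the corresponding fitness values
    optimum:    array of shape (L,) -- a known global optimum (assumed here)
    """
    bitstrings = np.asarray(bitstrings)
    fitnesses = np.asarray(fitnesses, dtype=float)
    # Hamming distance of each sampled point to the assumed optimum
    distances = np.sum(bitstrings != optimum, axis=1).astype(float)
    # FDC = cov(f, d) / (std(f) * std(d)); for maximization, values near -1
    # are conventionally read as "easy" and values near +1 as "misleading"
    return np.corrcoef(fitnesses, distances)[0, 1]

# Example: estimate FDC for OneMax on 20-bit strings from a random sample
rng = np.random.default_rng(0)
sample = rng.integers(0, 2, size=(1000, 20))
fitness = sample.sum(axis=1)                # OneMax fitness
fdc = fitness_distance_correlation(sample, fitness, np.ones(20, dtype=int))
print(f"estimated FDC for OneMax: {fdc:.3f}")   # -1, i.e. "easy"
```

The paper's experimental point is precisely that such sample-based values can fail to track actual GA behavior, so this sketch should be read as a definition of the statistic, not as a reliable hardness test.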
Language
English
Source (journal)
IEEE transactions on evolutionary computation / IEEE Neural Networks Council. - New York, N.Y.
Publication
New York, N.Y., 2000
ISSN
1089-778X
DOI
10.1109/4235.843491
Volume/pages
4:1 (2000), p. 1-15
ISI
000087431900001