A comparison of predictive measures of problem difficulty in evolutionary algorithms
University of Antwerp, Faculty of Sciences, Mathematics and Computer Science
Published in: IEEE Transactions on Evolutionary Computation (IEEE Neural Networks Council), New York, N.Y., p. 1-15
This paper studies a number of predictive measures of problem difficulty, among which epistasis variance and fitness distance correlation are the most widely known. Our approach is based on comparing the reference class of a measure to a number of known easy function classes. First, we generalize the reference classes of fitness distance correlation and epistasis variance, and construct a new predictive measure that is insensitive to nonlinear fitness scaling. We then investigate the relations between the reference classes of the measures and a number of intuitively easy classes, such as the steepest ascent optimizable functions. Within the latter class, functions that fool the predictive quality of all of the measures are easily found. This points out the need to further identify which functions are easy for a given class of evolutionary algorithms in order to design more efficient hardness indicators for them. Finally, we restrict attention to the genetic algorithm (GA), consider both GA-easy and GA-hard fitness functions, and give experimental evidence that the values of the measures, based on random samples, can be completely unreliable and entirely uncorrelated with the convergence quality and convergence speed of GA instances using either proportional or ranking selection.