Abstract
Today, many complex multiobjective problems are dealt with using genetic algorithms (GAs). They apply the evolution mechanism of a natural population to a "numerical" population of solutions in order to optimize a fitness function. GA implementations must strike a compromise between the breadth of the search (to avoid being trapped in local minima) and its depth (to avoid a coarse approximation of the optimal solution). Most algorithms use "elitism", which preserves some of the current best solutions across successive generations. If the initial population is randomly selected, as in many GA packages, the elite may concentrate in a limited region of the Pareto frontier, preventing the frontier from being spanned completely. A full view of the frontier becomes possible if one first solves the single-objective problems corresponding to the extremes of the Pareto boundary and then uses those solutions as elite members of the initial population. The paper compares this approach with more conventional initializations on classical test problems with a variable number of objectives and known analytical solutions. We then present the results of the proposed algorithm on the optimization of a real-world system, contrasting its performance with that of standard packages.
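As a rough illustration of the seeding idea described above (not the paper's actual implementation), the following minimal Python sketch builds an initial population whose first members are the optima of the individual objectives, here for a toy bi-objective problem; the function names, the random-search solver, and the problem itself are assumptions chosen only to keep the example self-contained and runnable.

```python
import random

# Toy bi-objective problem (Schaffer-like, both objectives minimized).
# Its single-objective optima (x = 0 and x = 2) mark the extremes of
# the Pareto front; these are the points used as elite seeds.
def f1(x):
    return x * x

def f2(x):
    return (x - 2.0) ** 2

def single_objective_minimize(f, lo, hi, evals=5000):
    """Crude random-search stand-in for any single-objective solver."""
    return min((random.uniform(lo, hi) for _ in range(evals)), key=f)

def seeded_initial_population(size, lo=-5.0, hi=5.0):
    """Random initial population whose first members are the
    single-objective optima, i.e. the Pareto-frontier extremes
    inserted as elite individuals."""
    elite = [single_objective_minimize(f, lo, hi) for f in (f1, f2)]
    rest = [random.uniform(lo, hi) for _ in range(size - len(elite))]
    return elite + rest

if __name__ == "__main__":
    population = seeded_initial_population(20)
    print("elite seeds:", population[:2])  # close to 0.0 and 2.0
```

In a full implementation, the seeded population would simply replace the purely random initialization of whatever multiobjective GA is being used, so that elitism carries the frontier extremes forward from the very first generation.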