Published: 1993-11-01
Pages: 285-319
DOI: https://doi.org/10.1007/bf00993046
License: https://scigraph.springernature.com/explorer/license/
Abstract: What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that a partitioning of a single large population into a number of smaller independent populations seemed to improve performance, and that hill climbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs. We begin by reviewing schema processing in GAs. We then give an informal description of how Walsh analysis and Bethke's Walsh-schema transform relate to GA performance, and we discuss the relevance of this analysis for GA applications in optimization and machine learning. We then describe Tanese's surprising results, examine them experimentally and theoretically, and propose and evaluate some explanations. These explanations lead to a more fundamental question about GAs: what are the features of problems that determine the likelihood of successful GA performance?
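As a minimal illustration of the kind of fitness function the abstract refers to (a sketch, not code or coefficients from the article): a Walsh polynomial over length-l bit strings is a weighted sum of Walsh basis functions, where each basis function psi_j(x) is +1 or -1 depending on the parity of the bit positions shared by the index j and the string x. The coefficients below are hypothetical.

```python
# Sketch (not from the article): a fitness function expressed as a
# Walsh polynomial over bit strings, f(x) = sum_j w_j * psi_j(x),
# where psi_j(x) = (-1)^(number of positions where both j and x have a 1).

def walsh_basis(j: int, x: int) -> int:
    """Walsh basis function psi_j evaluated at bit string x (both as ints)."""
    return -1 if bin(j & x).count("1") % 2 else 1

def walsh_fitness(coeffs: dict, x: int) -> float:
    """Fitness as a Walsh polynomial with coefficients {j: w_j}."""
    return sum(w * walsh_basis(j, x) for j, w in coeffs.items())

# Hypothetical coefficients for l = 3 bits: f(x) = 2 - psi_5(x)
coeffs = {0: 2.0, 5: -1.0}
values = [walsh_fitness(coeffs, x) for x in range(8)]
# values == [1.0, 3.0, 1.0, 3.0, 3.0, 1.0, 3.0, 1.0]
```

The Tanese functions studied in the article are drawn from a particular subclass of such polynomials; the point of the sketch is only the representation, in which the magnitude and order of the nonzero coefficients w_j determine how "deceptive" the landscape is for schema processing.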
Title: What makes a problem hard for a genetic algorithm? Some anomalous results and their explanation
Authors:
Stephanie Forrest (Department of Computer Science, University of New Mexico, Albuquerque, NM 87181-1386)
Melanie Mitchell (Artificial Intelligence Laboratory, University of Michigan, Ann Arbor, MI 48109-2110)
Journal: Machine Learning, vol. 13, issue 2-3, Springer Nature
ISSN: 0885-6125 (print), 1573-0565 (electronic)
DOI: 10.1007/bf00993046 (Dimensions ID: pub.1048698842)
Subjects: Psychology and Cognitive Sciences; Psychology
Source: Springer Nature - SN SciGraph project