A Gentle Introduction to Theory (for Non-Theoreticians)
Host: Benjamin Doerr (École Polytechnique, France)
This tutorial addresses students, researchers, and practitioners who do not regularly use theoretical methods in their research. For this audience, we give a smooth introduction to the theory of evolutionary computation.
Complementing other introductory theory tutorials, we do not discuss mathematical methods or particular results, but explain what the theory of evolutionary algorithms aims at, how theoretical research in evolutionary computation is conducted, how to interpret results from the theory literature, what the most important theory contributions are, and what the theory community is trying to understand most at the moment.
Evolutionary Algorithms and Hyper-Heuristics
Host: Nelishia Pillay (University of Pretoria, South Africa)
Hyper-heuristics is a rapidly developing domain which has proven effective at providing generalized solutions to problems, both within and across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics and continue to do so. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided, including the assessment of hyper-heuristic performance. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic.
A case study will be presented for each type of hyper-heuristic to provide researchers with a foundation to start their own research in this area. The EvoHyp library will be used to demonstrate the implementation of evolutionary algorithm hyper-heuristics. A theoretical understanding of evolutionary algorithm hyper-heuristics will be provided. A new measure to assess hyper-heuristic performance will also be presented. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted.
The tutorial will also examine recent trends in evolutionary algorithm hyper-heuristics such as transfer learning and automated design. The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
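To make the selection hyper-heuristic idea concrete, the following minimal sketch (illustrative only; the tutorial itself uses the EvoHyp library) applies a selection perturbative hyper-heuristic to a toy continuous problem: two hypothetical low-level heuristics are chosen by roulette wheel in proportion to the credit they have earned from past improvements.

```python
import random

# Toy setup: minimise f(x) = sum(x_i^2) with two made-up low-level
# perturbative heuristics; the credit-assignment scheme is illustrative.

def f(x):
    return sum(v * v for v in x)

def small_step(x, rng):
    y = list(x)
    i = rng.randrange(len(y))
    y[i] += rng.uniform(-0.1, 0.1)
    return y

def large_step(x, rng):
    y = list(x)
    i = rng.randrange(len(y))
    y[i] += rng.uniform(-1.0, 1.0)
    return y

def selection_hyper_heuristic(x, iters=2000, seed=0):
    rng = random.Random(seed)
    heuristics = [small_step, large_step]
    scores = [1.0, 1.0]                      # adaptive credit per heuristic
    best, best_f = list(x), f(x)
    for _ in range(iters):
        # roulette-wheel choice proportional to accumulated credit
        h = rng.choices(range(len(heuristics)), weights=scores)[0]
        cand = heuristics[h](best, rng)
        cand_f = f(cand)
        if cand_f < best_f:                  # accept improvements only
            best, best_f = cand, cand_f
            scores[h] += 1.0                 # reward the successful heuristic
    return best, best_f
```

The same high-level loop carries over to combinatorial domains: only the solution representation and the pool of low-level heuristics change.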
Exploratory Landscape Analysis
Hosts: Pascal Kerschke (TU Dresden, Germany), Mike Preuss (Universiteit Leiden, Netherlands)
Whenever one performs optimization on a (continuous) problem, having at least a vague idea of its landscape’s structure is usually very beneficial, as this, for instance, allows one to select and/or configure a suitable, well-performing optimization algorithm. However, characterizing problem landscapes poses (at least) two challenges:
(i) one wants to avoid spending major parts of the overall budget on the characterization of the landscape (as this would reduce the budget that is left for the optimizer), and (ii) the properties should be automatically measurable/computable (otherwise one would always depend on the availability of some expert, i.e., a real person).
As a result, over the last decade an entire research area – denoted Exploratory Landscape Analysis (ELA) – has developed around the topic of automated and feature-based problem characterization for continuous optimization problems.
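As a small illustration of the feature-based characterization that ELA builds on (a hand-rolled sketch, not one of the established ELA feature sets), the snippet below estimates the fitness-distance correlation of a problem from a cheap random sample:

```python
import math
import random

# Sampling-based landscape feature: fitness-distance correlation,
# estimated from a small random sample of the search space.
# Sample size and search bounds are illustrative choices.

def fitness_distance_correlation(f, dim, n=200, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    ys = [f(x) for x in xs]
    best = xs[ys.index(min(ys))]             # best point in the sample
    ds = [math.dist(x, best) for x in xs]    # distance of each point to it
    my, md = sum(ys) / n, sum(ds) / n
    cov = sum((y - my) * (d - md) for y, d in zip(ys, ds)) / n
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    sd = math.sqrt(sum((d - md) ** 2 for d in ds) / n)
    return cov / (sy * sd)

sphere = lambda x: sum(v * v for v in x)
```

On the sphere function this feature is strongly positive, indicating a globally unimodal structure; low or negative values would hint at a more rugged or deceptive landscape. Note that the entire characterization cost is the n sample evaluations, addressing challenge (i) above.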
Decomposition Multi-Objective Optimization: Current Developments and Future Opportunities
Host: Ke Li (University of Exeter, United Kingdom)
Evolutionary multi-objective optimization (EMO) has been a major research topic in the field of evolutionary computation for three decades. It has been generally accepted that a combination of evolutionary algorithms and traditional optimization methods could form the next generation of multi-objective optimization solvers. As the name suggests, the basic idea of the decomposition-based technique is to transform the original complex problem into simplified subproblem(s) so as to facilitate the optimization.
Decomposition methods have been well used and studied in traditional multi-objective optimization. Multi-objective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multi-objective problem into a number of subtasks, and then solves them in a collaborative manner. MOEA/D provides a very natural bridge between multi-objective evolutionary algorithms and traditional decomposition methods. It has been a commonly used evolutionary algorithmic framework in recent years.
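The decomposition step at the core of MOEA/D can be sketched in a few lines. The Tchebycheff approach below is one of the standard scalarizing functions; the weight-generation scheme shown is a simple illustrative choice for the bi-objective case.

```python
# Decomposition sketch: a multi-objective problem is turned into scalar
# subproblems, one per weight vector, via the Tchebycheff approach.

def tchebycheff(fx, weight, ideal):
    # g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|
    return max(w * abs(f - z) for w, f, z in zip(weight, fx, ideal))

def uniform_weights(n):
    # n evenly spread weight vectors for a bi-objective problem
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]
```

In MOEA/D, each weight vector defines one subtask; neighbouring subtasks (similar weight vectors) exchange solutions, which is the collaborative element of the framework.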
Real‐world Applications of Dynamic Parameter Control in Evolutionary Computation
Host: Gregor Papa (Jožef Stefan Institute, Slovenia)
While solving optimization problems with evolutionary algorithms, one basic challenge is the selection of the proper control parameters, which adjust the behaviour of the algorithms. Several control parameters can be set when searching for the optimum of an objective function. Suitable control parameter values need to be found for the population size, the mutation strength, the crossover rate, the selective pressure, etc. The choice of these parameters can have a significant impact on the performance of the algorithm and thus needs to be made with care. With the parameter control approach, no prior training of parameters is needed. It also accounts for the fact that the optimal parameter values typically change during the optimization process: for example, at the beginning of an optimization process we typically aim for exploration, while in the later stages we want the algorithm to converge and to focus its search on the most promising regions in the search space.
The ambition of this tutorial is to inform participants about different parameter control techniques by discussing both theoretical and experimental results that demonstrate the unexploited potential of non‐static parameter choices.
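As a concrete example of a non-static parameter choice, the sketch below adapts the mutation strength of a simple (1+1)-ES with Rechenberg's classical 1/5-th success rule. The constants and the adaptation window are illustrative choices, not prescriptions from the tutorial.

```python
import random

# (1+1)-ES on the sphere function with the 1/5-th success rule:
# if more than 1/5 of recent mutations succeeded, increase sigma
# (explore more); otherwise decrease it (converge).

def one_plus_one_es(dim=5, iters=500, seed=2):
    rng = random.Random(seed)
    f = lambda v: sum(c * c for c in v)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    f0 = fx = f(x)
    sigma, successes = 1.0, 0
    for t in range(1, iters + 1):
        y = [c + rng.gauss(0, sigma) for c in x]
        fy = f(y)
        if fy < fx:                           # elitist acceptance
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                       # adapt every 20 generations
            rate = successes / 20
            sigma *= 1 / 0.82 if rate > 0.2 else 0.82
            successes = 0
    return f0, fx
```

The step size thus shrinks automatically as the search closes in on the optimum, realizing the exploration-to-exploitation shift described above without any prior tuning.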
Statistical Analyses for Single-Objective Stochastic Optimization Algorithms
Hosts: Tome Eftimov, Peter Korošec (Jožef Stefan Institute, Slovenia)
Moving to the era of explainable AI, a comprehensive comparison of the performance of single-objective stochastic optimization algorithms has become an increasingly important task. One of the most common ways to compare the performance of stochastic optimization algorithms is to apply statistical analyses. However, for performing them, there are still caveats that need to be addressed for acquiring relevant and valid conclusions.
First of all, the selection of the performance measures should be done with great care, since some measures can be correlated, which then feeds redundant data into the statistical analyses. Further, the statistical analyses require good knowledge from the user to be applied properly; this knowledge is often lacking and leads to incorrect conclusions. Next, the standard approaches can be influenced by outliers (e.g., poor runs) or by statistically insignificant differences (solutions within some ε-neighborhood) that exist in the data.
This tutorial will provide an overview of the current approaches for analyzing algorithm performance, with special emphasis on caveats that are often overlooked. We will show how these can easily be avoided by applying simple principles that lead to Deep Statistical Comparison. The tutorial will not be based on equations, but mainly on examples through which a deeper understanding of statistics will be achieved. Examples will be based on various comparison scenarios for single-objective optimization algorithms. The tutorial will end with a demonstration of a web-service-based framework (i.e. DSCTool) for statistical comparison of single-objective stochastic optimization algorithms. In addition, R and Python clients for performing the analyses will also be presented.
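To illustrate one principle behind Deep Statistical Comparison (a hand-written sketch, not the DSCTool API), the snippet below ranks algorithm results while treating values that differ by less than a practical-significance threshold ε as equal, so negligible differences within an ε-neighborhood cannot drive the ranking:

```python
# Epsilon-aware average ranking: values closer than eps to the first
# member of a tie group share that group's average rank. The grouping
# rule is a simplification chosen for illustration.

def eps_ranks(values, eps=1e-3):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend the tie group while neighbours stay within eps of its anchor
        while j + 1 < len(order) and values[order[j + 1]] - values[order[i]] < eps:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks
```

With ε = 0, this reduces to ordinary ranking; with a problem-appropriate ε, two runs whose best fitness differs only in irrelevant decimal places receive the same rank before any rank-based test is applied.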
Large-Scale Optimization and Learning
Hosts: Mohammad Nabi Omidvar (University of Leeds, United Kingdom), Yuan Sun (Monash University, Australia), Xiaodong Li (RMIT University, Australia)
Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years, and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning tasks. Recent advances in machine learning have likewise brought very large-scale optimization problems, encountered in training deep neural network architectures (so-called deep learning), some of which have over a billion decision variables.
It is this “curse-of-dimensionality” that has made large-scale optimization an exceedingly difficult task. Current optimization methods are often ill-equipped in dealing with such problems. It is this research gap in both theory and practice that has attracted much research interest, making large-scale optimization an active field in recent years.
We are currently witnessing a wide range of mathematical, metaheuristic, and learning-based optimization algorithms being developed to overcome this scalability issue. This tutorial is dedicated to exploring the recent advances in this field and is divided into two parts:
(I) large-scale black-box continuous optimization and (II) large-scale combinatorial optimization. In Part I, we focus on methods that learn and exploit problem structure to tackle large-scale black-box optimization problems, such as decomposition methods based on variable interaction analysis. The opportunities and challenges of applying Evolutionary Algorithms (EAs) to deep learning are also discussed. In Part II, we introduce traditional problem reduction and decomposition techniques to solve large-scale combinatorial optimization problems, with a special focus on the emerging topic of leveraging machine learning and data mining for combinatorial optimization.
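The variable interaction analysis mentioned for Part I can be illustrated with a minimal check in the spirit of differential grouping (probe points and threshold below are illustrative): two variables interact if the effect of perturbing one depends on the value of the other, and non-interacting variables can then be assigned to separate, smaller subproblems.

```python
# Pairwise interaction check: compare the effect of moving x_i at two
# different settings of x_j. If the effects differ, i and j interact
# and must be optimized together.

def interact(f, i, j, dim, delta=1.0, eps=1e-9):
    base = [0.0] * dim
    def probe(xi, xj):
        x = list(base)
        x[i], x[j] = xi, xj
        return f(x)
    d1 = probe(delta, 0.0) - probe(0.0, 0.0)      # effect of x_i with x_j = 0
    d2 = probe(delta, delta) - probe(0.0, delta)  # effect of x_i with x_j = delta
    return abs(d1 - d2) > eps

separable = lambda x: x[0] ** 2 + x[1] ** 2          # no interaction
nonseparable = lambda x: x[0] * x[1] + x[0] ** 2     # x0 and x1 interact
```

Grouping all pairwise results partitions the n variables into independent blocks, which is what makes divide-and-conquer (cooperative co-evolution) applicable to large-scale problems.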
Adversarial Deep Learning by Using Distributed Coevolutionary Computation
Hosts: Jamal Toutouh (University of Malaga, Spain), Una-May O’Reilly (Massachusetts Institute of Technology, USA)
In recent years, Generative Adversarial Networks (GANs) have been recognized as powerful methods for generative modeling. Generative modeling is the problem of estimating the underlying distribution of a set of samples. GANs accomplish this using unsupervised learning (some extensions also apply semi-supervised and fully supervised learning). GANs have been successfully applied to many domains. They can generate novel images (e.g., image colorization or super-resolution), sound (e.g., voice translation and music generation), and video (e.g., video-to-video translation, deep fake generation), finding applications in domains of multimedia information, engineering, science, design, art, and games.
GANs apply an adversarial paradigm. GAN training can be seen as a two-player game in which two neural networks compete with each other, using antagonistic loss functions to train their parameters with gradient descent. This connects them to evolution, because evolution also exhibits adversarial engagements and competitive co-evolution. In fact, co-evolutionary algorithms offer a means of solving the convergence impasses often encountered in GAN training.
In this tutorial we will explain:
– Main concepts of generative modeling and adversarial learning; GAN gradient-based training and the main pathologies that prevent ideal convergence. We will explain mode collapse, oscillation, and vanishing gradients.
– Co-evolutionary algorithms and how they can be applied to train GANs.
Specifically, we will explain how algorithmic enhancements address non-ideal convergence.
– We will draw upon the open-source Lipizzaner framework (http://lipizzaner.csail.mit.edu/) for demonstrations. This framework is easy to use and extend. It sets up a spatial grid of communicating populations of GANs.
– Students will be given the opportunity to set up and use the Lipizzaner framework during the tutorial, employing a Jupyter notebook expressly developed for teaching purposes.
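The convergence pathologies that motivate co-evolutionary training can be illustrated on a toy two-player game (a sketch with scalar players standing in for neural networks): simultaneous gradient descent/ascent on the bilinear game f(x, y) = x·y spirals away from the equilibrium at the origin instead of converging.

```python
# Oscillation/divergence in adversarial training, in miniature:
# player x minimises f(x, y) = x*y, player y maximises it.
# The unique equilibrium is (0, 0), yet simultaneous updates
# multiply the squared distance to it by (1 + lr^2) every step.

def simultaneous_updates(x, y, lr=0.1, steps=100):
    for _ in range(steps):
        gx, gy = y, x                     # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy   # both use the same old (x, y)
    return x, y
```

Co-evolutionary approaches such as Lipizzaner's spatial grid counter dynamics like this by evolving and mixing multiple competing generator and discriminator populations rather than a single gradient-coupled pair.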
Runtime Analysis of Population-based Evolutionary Algorithms
Host: Per Kristian Lehre (University of Birmingham, United Kingdom)
Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis (also called time-complexity analysis) of EAs has been developed over the last 20 years. The goal of this theory is to show, via rigorous mathematical means, how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Initially, runtime analysis of EAs was mostly restricted to simplified EAs that do not employ large populations, such as the (1+1) EA. This tutorial introduces more recent techniques that enable runtime analysis of EAs with realistic population sizes.
The tutorial begins with a brief overview of the population-based EAs that are covered by the techniques. We recall the common stochastic selection mechanisms and how to measure the selection pressure they induce. The main part of the tutorial covers in detail widely applicable techniques tailored to the analysis of populations. We discuss random family trees and branching processes, drift and concentration of measure in populations, and level-based analyses.
To illustrate how these techniques can be applied, we consider several fundamental questions: When are populations necessary for efficient optimisation with EAs? What is the appropriate balance between exploration and exploitation, and how does this depend on relationships between mutation and selection rates? What determines an EA’s tolerance for uncertainty, e.g. in the form of noisy or partially available fitness?
The tutorial has previously been given at PPSN 2019 and 2020, CEC 2015, and GECCO 2016, 2018, and 2019, where it was very well received, attracting an audience of approximately 50 participants.
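As a reference point for the kind of statement runtime analysis delivers, the classical (population-free) result for the (1+1) EA on the OneMax benchmark can be written as follows:

```latex
% OneMax: maximise the number of one-bits in a length-n bit string
\mathrm{OneMax}(x) = \sum_{i=1}^{n} x_i, \qquad x \in \{0,1\}^n

% Expected number of fitness evaluations of the (1+1) EA with
% standard bit mutation (rate 1/n) until the optimum is found:
\mathbb{E}[T] \le e\, n \ln n + O(n)
```

The population-based techniques covered in the tutorial aim at statements of exactly this type, but for algorithms with non-trivial parent and offspring populations.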
Key points in the tutorial will be complemented by web-based visualisations of population-based EAs, allowing participants to interact with and observe the impacts of algorithmic parameter settings and problem characteristics. For an example, please see http://www.cs.bham.ac.uk/~lehrepk/selfadapt/
Automated Algorithm Configuration and Selection with Sparkle
Hosts: Koen van der Blom (Leiden University, Netherlands), Jeroen Rook (University of Twente, Netherlands)
The primary aim when designing and evaluating algorithms to solve computational problems is to find high-quality (or valid) solutions to given problem instances, and to do so as efficiently as possible. The performance of most cutting-edge algorithms for challenging problems is significantly influenced by the choice of their components and parameter settings. Fortunately, the tedious job of making these choices can be automated through the use of automated algorithm configuration. Orthogonally to this, one (configured) algorithm may have the best performance on one subset of problem instances, while another algorithm may be the best on a different subset. As a result, there is often no single best algorithm, but rather a set of algorithms that are each best on one or more instances. To take advantage of this, automated algorithm selection can be used to construct a portfolio selector that predicts which algorithm from the portfolio is most likely to perform best on each instance, achieving improved performance over using the single algorithm that is best on average.
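The portfolio-selector idea can be sketched with a deliberately simple per-instance selector (a hypothetical 1-nearest-neighbour rule over made-up instance features; real selectors, including those built with Sparkle, use far richer models and feature sets):

```python
import math

# Per-instance algorithm selection in miniature: for a new instance,
# find the most similar training instance (Euclidean distance over
# feature vectors) and run whichever portfolio member was best there.

def select(train_feats, train_best, feats):
    i = min(range(len(train_feats)),
            key=lambda k: math.dist(train_feats[k], feats))
    return train_best[i]
```

Even this toy rule shows why a portfolio can beat the single best algorithm on average: the selector exploits the fact that different instances have different winners.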
In this tutorial, we will introduce the ideas behind automated algorithm configuration and selection, the advantages of using them, and we will discuss how these techniques can be used more easily and successfully with a new software environment we have recently developed:
Sparkle. Currently, algorithm configuration and selection tools are primarily designed to facilitate the advancement of configuration and selection techniques, rather than their use in practice by algorithm designers. The key aim of Sparkle is to simplify access to meta-algorithms, and to automated configuration and selection in particular, so that they become available to a broader audience, with the necessary support for the correct use of these techniques, including the adoption of best practices and the avoidance of pitfalls. In addition, an important part of using automated algorithm configuration and selection in research is communicating the experimental procedure and results. We will discuss how to do this, and how Sparkle makes it easier.