COMPETITIONS

General Video Game AI: Single Player Learning Competition – Hao Tong, Yan Tao, Jialin Liu

The General Video Game AI (GVGAI) Learning Competition explores the problem of transferring and reusing the knowledge learnt on given levels of single-player games to play unseen levels without access to any forward model or explicit game rules. More about this competition can be found on the GVGAI website and the competition webpage.
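For readers unfamiliar with the setup, the sketch below shows the episode loop a learning agent faces, assuming the GVGAI-GYM wrapper and its OpenAI Gym-style interface; the environment id and the random policy are illustrative placeholders, not competition settings.

    import gym
    import gym_gvgai  # registers the GVGAI environments with Gym on import

    # Illustrative environment id; the competition supplies its own games and
    # levels, with ids following the pattern gvgai-<game>-lvl<n>-v0.
    env = gym.make("gvgai-aliens-lvl0-v0")

    state = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        # A learning agent would choose actions from the screen observation;
        # random sampling stands in here as a placeholder policy.
        action = env.action_space.sample()
        state, reward, done, info = env.step(action)
        total_reward += reward
    print("episode return:", total_reward)

Note that the agent only receives observations and rewards: there is no forward model to plan with, which is exactly the restriction the learning track studies.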

Open Optimization Competition 2020 – Carola Doerr, Olivier Teytaud, Jérémy Rapin, Thomas Bäck

The Open Optimization Competition aims at fostering a collective, community-driven effort towards reproducible, open-access, and conveniently interpretable comparisons of different optimization techniques, with the goal of supporting users in choosing the best algorithms and the best configurations for their problem at hand.

The competition has two tracks:

  1. A performance-oriented track, which welcomes contributions of efficient optimization algorithms for the following categories:
    • One-shot optimization
    • Low budget optimization
    • Multi-objective optimization
    • Discrete optimization, in particular self-adaptation
    • Structured optimization
    • Constraint handling
    • Algorithm selection and combination
  2. Contributions towards "better" benchmarking, e.g.,
    • Performance criteria (for example: how to measure robustness over large families of problems?)
    • Visualization of data
    • New benchmark problems (e.g., structured optimization problems)
    • Cross-validation issues in optimization benchmarking
    • Performance statistics
    • Software contributions (e.g., efficient distribution over clusters or grids, software contribution in general)
    • Mathematically justified improvement, i.e., algorithms or configurations with proven performance guarantees

While the performance track is hosted within Nevergrad, the contributions track welcomes contributions to both Nevergrad and IOHanalyzer, the analytical and visualization module of IOHprofiler. Both tools, Nevergrad and IOHprofiler, are open-source platforms, available on GitHub at https://github.com/facebookresearch/nevergrad and https://github.com/IOHprofiler/IOHanalyzer, respectively. The two tools are linked in that performance data files produced by Nevergrad can be conveniently loaded and analyzed by IOHprofiler, through its web-based interface at http://iohprofiler.liacs.nl/.
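As a rough illustration of the kind of entry the performance track expects, the sketch below runs one of Nevergrad's built-in optimizers on a toy objective; the objective, dimension, and budget are placeholders chosen for the example, not competition settings.

    import nevergrad as ng

    # Toy objective: squared distance to 0.5 in each coordinate. Competition
    # entries are scored on Nevergrad's benchmark suites, not on a
    # hand-picked function like this one.
    def loss(x):
        return float(sum((xi - 0.5) ** 2 for xi in x))

    # OnePlusOne is one of Nevergrad's standard optimizers; parametrization=2
    # requests a 2-dimensional continuous search space, and budget caps the
    # number of function evaluations.
    optimizer = ng.optimizers.OnePlusOne(parametrization=2, budget=100)
    recommendation = optimizer.minimize(loss)
    print(recommendation.value)  # best point found within the budget

A new algorithm contributed to the performance track would be registered alongside optimizers like OnePlusOne and benchmarked by the same machinery, with the resulting data files loadable into IOHanalyzer as described above.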

More information is available at https://github.com/facebookresearch/nevergrad/blob/master/docs/opencompetition2020.md.

Game Benchmark Competition – Tea Tušar, Boris Naujoks, Vanessa Volz

The Game Benchmark for Evolutionary Algorithms (GBEA) is a collection of single- and multi-objective optimisation tasks that occur in applications to games research. Games motivate a large body of research and have repeatedly been suggested as testbeds for AI algorithms. Key features of games are controllability, safety, and repeatability, as well as the ability to simulate properties of real-world problems such as measurement noise, uncertainty, and the existence of multiple objectives. We therefore propose this competition as an alternative to competitions based purely on mathematical test functions, in order to ensure the applicability of the developed optimisation algorithms to real-world problems.

We are proposing a competition with multiple tracks that address two different research questions, all featuring continuous search spaces. An additional aim is to collect more data to serve as comparison baselines in the benchmark. The GBEA uses the COCO (COmparing Continuous Optimisers) framework for ease of integration.

The competition is further available in single- and bi-objective versions, resulting in four different tracks. The winners for the above research questions will be determined independently.
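To sketch the benchmarking workflow, the example below iterates over a suite through COCO's cocoex Python module, with a placeholder random search standing in for a submitted optimiser; the suite and observer identifiers are assumptions based on the competition material, so consult the GBEA page for the exact ones.

    import numpy as np
    import cocoex  # COCO experimentation module

    # Suite and observer names are assumptions; the GBEA page lists the
    # exact identifiers for each track.
    suite = cocoex.Suite("rw-gan-mario", "", "")
    observer = cocoex.Observer("bbob", "result_folder: gbea_results")

    for problem in suite:
        problem.observe_with(observer)  # log evaluations for post-processing
        # Placeholder strategy: uniform random search inside the box
        # constraints. A competition entry would plug in its own optimiser.
        for _ in range(100):
            x = problem.lower_bounds + np.random.rand(problem.dimension) * (
                problem.upper_bounds - problem.lower_bounds
            )
            problem(x)  # each call is recorded by the observer

The logged data can then be post-processed with COCO's standard tools to compare algorithms across the tracks.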

So please submit your best optimisation algorithms! 

More information: http://www.gm.fh-koeln.de/~naujoks/gbea/