An intriguing observation in the study of combinatorial games is the so-called random graph intuition, which says that for many two-player games the outcome of two optimal strategies playing against each other (`clever' vs `clever') is identical to the outcome of two players who just choose their moves randomly (`random' vs `random'). Indeed, there is a strong connection between games on graphs and games on random structures. The goal of this project is to enhance the methodical foundations for dealing with such probabilistic games. In particular, we aim at attacking some of the most interesting and challenging problems within the area of Games on Random Graphs. More precisely, we will study (i) Maker-Breaker games, (ii) Ramsey games, and (iii) Achlioptas processes. In an Achlioptas process, the player's goal can also be to avoid certain substructures or, for example, to avoid or create large connected components. Research around all three topics is very active and has led to many beautiful and surprising results in the last decades.
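
To make the Achlioptas setting concrete, the following is a minimal sketch (our own illustration, not part of the project) of an Achlioptas process with the well-known product rule: in each round two random candidate edges are offered, and the player keeps the one whose endpoints lie in components with the smaller product of sizes, a rule known to delay the emergence of a giant component. The function name and parameters are hypothetical.

```python
import random

class DSU:
    """Union-find structure tracking component sizes."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def achlioptas_product_rule(n, rounds, seed=0):
    """Run `rounds` steps of the product rule on n vertices; return the
    size of the largest connected component at the end."""
    rng = random.Random(seed)
    dsu = DSU(n)
    for _ in range(rounds):
        e1 = rng.sample(range(n), 2)
        e2 = rng.sample(range(n), 2)
        # Product rule: keep the edge minimizing the product of the
        # sizes of the components it would connect.
        cost1 = dsu.size[dsu.find(e1[0])] * dsu.size[dsu.find(e1[1])]
        cost2 = dsu.size[dsu.find(e2[0])] * dsu.size[dsu.find(e2[1])]
        u, v = e1 if cost1 <= cost2 else e2
        dsu.union(u, v)
    return max(dsu.size[dsu.find(v)] for v in range(n))
```

Comparing this against always taking the first offered edge (the ordinary Erdős–Rényi evolution) illustrates how much the power of choosing between two edges can postpone the giant component.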

How do our brains memorize, recall, reason, and interact with their environment? There are two lines of research approaching this question: one from the side of biological knowledge of the anatomy and physiology of neural populations in the cortex, and one from the side of theoretical neural networks that can exhibit the desired emergent behavior. Finding points of contact between these two directions is one of the major current challenges in theoretical neuroscience. In this project we aim at developing a connection between biologically faithful neural network simulations and the learning and reasoning abilities that we know our brain has. By taking a sequence of slow and steady steps, our work plan directly targets the expansion of our understanding of how to construct neurally realistic simulations that exhibit the learning and reasoning characteristics necessary for eventually scaling up to larger-scale inference systems.

The aim of this project is to develop in silico simulations of possible dynamic interplays among areas of the visual pathway. Cortical processing of visual input is known to use a distributed representation, with different areas encoding different aspects of the visual interpretation. In this project we will test the general applicability of this type of model to visual processing tasks (such as depth perception, three-dimensional information processing, and the incorporation of other sensory inputs) and thereby develop a set of principles of inter-areal communication that can be used not only in systems of our own design, but also to make quantitative predictions about the structure and dynamics of the mammalian visual system.

Stochastic processes play an increasingly important role in modern graph theory, appearing in many seemingly diverse settings, for example (i) games in which one or more players try to build graphs with specific properties, usually with a random component; (ii) bootstrap percolation phenomena that model the spread of ‘infection’ in certain networks; (iii) (pseudo)-random algorithmic constructions. In many such settings there is a strong interplay between determinism and randomness, and the goal of this project is to explore this connection by focusing on selected problems in probabilistic and extremal graph theory.
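
As a concrete illustration of setting (ii), here is a small, self-contained sketch of r-neighbour bootstrap percolation (our own toy code, not from the project): starting from an initially infected set, a vertex becomes infected once at least r of its neighbours are infected, and this rule is iterated until a fixed point is reached.

```python
def bootstrap_percolation(adj, initially_infected, r):
    """adj maps each vertex to its set of neighbours; returns the set of
    vertices that end up infected under the r-neighbour rule."""
    infected = set(initially_infected)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in infected and \
                    sum(u in infected for u in adj[v]) >= r:
                infected.add(v)
                changed = True
    return infected

# On a path 0-1-2-3 with threshold r = 1, infecting vertex 0 spreads the
# infection to the whole path; with threshold r = 2 it cannot spread.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

The central probabilistic question in this model is, of course, how large a random initial set must be for the infection to percolate to the entire graph.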

Large and complex networks such as the Internet, the world wide web, peer-to-peer systems, wireless networks, or different kinds of social networks have become increasingly important in recent years. Most of these networks are organized in a completely decentralized way. As a result, such networks are inherently dynamic: nodes appear and disappear, communication links are established or removed, and nodes might even be mobile and move, as in wireless ad hoc networks. In addition to understanding the static structural properties of large and complex networks and knowing the complexity of distributed computations in large, decentralized networks, it is therefore important to also understand the dynamic nature of such networks. The aim of this project is to develop novel models, tools, and methods to better understand fundamental aspects of the dynamism of networks and distributed computations. In particular, we want to study dynamic network models that are both realistic and tractable. Apart from considering the effects of network dynamism from a standard worst-case perspective, we will develop dynamic network models based on random graphs that allow us to analyze average-case scenarios. In addition to (and also based on) the study of models for network dynamism, we will explore the possibilities and complexity of distributed algorithms that run on dynamic networks and constantly have to adapt to network changes. Techniques developed to understand dynamic networks and computations on such networks will also help to more generally understand the dynamic nature of distributed computations in networks.
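
One toy instance of such an average-case model (our own illustration with made-up parameters, not the project's actual model) is an edge-churn process: in each round a uniformly random vertex pair is toggled, inserting the edge if absent and deleting it if present, and one tracks how often the evolving graph satisfies a property of interest, such as connectivity.

```python
import random
from collections import deque

def is_connected(n, edges):
    """BFS connectivity check on vertices 0..n-1 with a set of edges."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == n

def churn_simulation(n, rounds, seed=0):
    """Toggle a uniformly random vertex pair each round; return the
    fraction of rounds in which the graph was connected."""
    rng = random.Random(seed)
    edges, connected_rounds = set(), 0
    for _ in range(rounds):
        u, v = sorted(rng.sample(range(n), 2))
        edges.symmetric_difference_update({(u, v)})  # toggle the edge
        connected_rounds += is_connected(n, edges)
    return connected_rounds / rounds
```

Recomputing connectivity from scratch each round is deliberately naive; maintaining such properties efficiently under insertions and deletions is exactly the kind of algorithmic question a dynamic-network analysis has to address.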

Our brains have an amazing ability to interpret and react to the world around them. Many models have been proposed in an attempt to reproduce some aspect of the brain's abilities. We can categorize existing models into two broad classes: top-down and bottom-up. Top-down models aim to reproduce high-level behaviors, without worrying about whether the model matches the implementation in the brain. Bottom-up models, on the other hand, focus on using elements and architectures that are directly taken from neurophysiological and neuroanatomical knowledge, without worrying about reproducing high-level behaviors. So far, no model has been able to successfully bridge this gap between top-down and bottom-up.

The project focuses on merging the impressive computational abilities of top-down approaches with the biological plausibility of bottom-up approaches. Specifically, the project tries to form a connection between factor graphs, a powerful top-down model, and cortical fields, a potent class of bottom-up models. The idea for this project arose from the observation that the low-level operations performed by factor graphs are in many ways quite similar to the low-level operations performed in cortical field models. Here we see, for the first time, a high-level system whose computational primitives seem to correspond to the traditional neural network primitives found in bottom-up networks.
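
To make this correspondence tangible, here is a deliberately tiny sketch of the sum-product computation on a factor graph with two binary variables, p(x1, x2) ∝ f1(x1) · f12(x1, x2); all factor values below are made up purely for illustration.

```python
# Unary factor on x1 and pairwise factor on (x1, x2); the numbers are
# arbitrary illustrative values, not data from any model.
f1 = {0: 0.3, 1: 0.7}
f12 = {(0, 0): 0.9, (0, 1): 0.1,
       (1, 0): 0.2, (1, 1): 0.8}

def marginal_x2():
    """Sum-product message from f12 to x2: sum out x1, then normalize."""
    msg = {x2: sum(f1[x1] * f12[x1, x2] for x1 in (0, 1)) for x2 in (0, 1)}
    z = sum(msg.values())
    return {x2: m / z for x2, m in msg.items()}
```

On tree-structured factor graphs, iterating such messages yields exact marginals; it is this local multiply-and-sum primitive that resembles the weighted summation performed by units in cortical field models.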

One of the most celebrated results in modern graph theory is Szemerédi's regularity lemma, which states, roughly speaking, that every sufficiently large graph contains a large subgraph with a certain regular structure. In this project we are concerned with a version of Szemerédi's regularity lemma which applies to graphs on n vertices and o(n^2) edges and which again guarantees a large regular substructure. Our specific interest is in whether a given fixed graph is contained in (almost all) substructures guaranteed by the regularity lemma. A 1997 conjecture to this effect by Kohayakawa, Luczak and Rödl has known implications for extremal properties of random graphs, and in the book 'Random graphs' by Janson, Luczak and Rucinski this conjecture is called "the main probabilistic problem concerning extremal properties of random graphs (and we believe, one of the most important open questions in the theory of random graphs)" (Conjecture 8.35, p. 232). We are also interested in algorithmic aspects of the sparse version of the regularity lemma. In the dense case it is well known that there exists a polynomial-time algorithm that finds a partition guaranteed by Szemerédi's regularity lemma, and, for example, approximation algorithms for the MAX-CUT problem are based on this algorithm. In the sparse case nothing is known: neither that there exists a polynomial-time algorithm that finds a regular structure guaranteed by the sparse regularity lemma, nor, for example, that this problem is NP-hard.
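
For intuition, recall the notion at the heart of the lemma: a pair (A, B) is eps-regular if every pair of subsets A' ⊆ A, B' ⊆ B with |A'| ≥ eps|A| and |B'| ≥ eps|B| has edge density within eps of d(A, B). The brute-force check below (our own illustration, feasible only for tiny sets and exponential in general) simply restates this definition in code.

```python
import math
from itertools import combinations

def density(edges, A, B):
    """Edge density between vertex sets A and B for a bipartite edge set."""
    if not A or not B:
        return 0.0
    return sum((a, b) in edges for a in A for b in B) / (len(A) * len(B))

def is_eps_regular(edges, A, B, eps):
    """Check eps-regularity of (A, B) by enumerating all large subsets."""
    d = density(edges, A, B)
    min_a = max(1, math.ceil(eps * len(A)))
    min_b = max(1, math.ceil(eps * len(B)))
    for ka in range(min_a, len(A) + 1):
        for Ap in combinations(A, ka):
            for kb in range(min_b, len(B) + 1):
                for Bp in combinations(B, kb):
                    if abs(density(edges, Ap, Bp) - d) > eps:
                        return False
    return True
```

Note the contrast with the algorithmic question in the dense case: this exhaustive check takes exponential time, whereas a regular partition of a dense graph can be found in polynomial time.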

One of the central questions in theoretical computer science is the analysis of algorithms. Here one distinguishes between worst-case analysis, which allows statements about the behaviour of the algorithm for the worst possible input, and average-case analysis, which considers the average behaviour of the algorithm. From a practical point of view the latter is particularly important if the worst-case analysis does not provide good bounds: it is possible that the algorithm provides empirically good results, even though it has a bad worst-case behaviour. The sorting algorithm QuickSort is a well-known example of this phenomenon. While its running time is quadratic in the worst case, its average-case behaviour is much better. Unfortunately, this algorithm is also one of the few examples where such a phenomenon has been rigorously analysed so far. The aim of this project is therefore to develop new models and analysis methods for the average-case analysis of algorithms.
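
The QuickSort gap between worst case and average case is easy to demonstrate empirically. The sketch below (a simple list-based variant, not a claim about any particular production implementation) counts comparisons: with the first element as pivot, an already-sorted input forces exactly n(n-1)/2 comparisons, while a random pivot brings the count down to O(n log n) on average.

```python
import random

def quicksort_comparisons(a, pivot_first=True, seed=0):
    """Sort a copy of `a` with QuickSort and return the number of
    element comparisons performed during partitioning."""
    rng = random.Random(seed)
    count = 0

    def qs(xs):
        nonlocal count
        if len(xs) <= 1:
            return xs
        i = 0 if pivot_first else rng.randrange(len(xs))
        pivot = xs[i]
        rest = xs[:i] + xs[i + 1:]
        count += len(rest)  # each remaining element is compared to the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return qs(left) + [pivot] + qs(right)

    qs(list(a))
    return count
```

On `list(range(50))` with the first-element pivot this yields 50·49/2 = 1225 comparisons, the quadratic worst case; switching to a random pivot on the same input gives roughly 2n ln n comparisons, illustrating the average-case behaviour that the project aims to analyse rigorously for other algorithms.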