Designing better algorithms by testing them with hard problems


A new research project led by Luca Gambardella, professor at the Dalle Molle Institute for Artificial Intelligence IDSIA (USI-SUPSI) and Pro-Rector for Innovation and Corporate Relations, has been approved by the Swiss National Science Foundation (SNSF). The study, entitled ’Computational methods for integrality gaps analysis’, approaches the optimisation of algorithms for dealing with complex problems from an innovative and original perspective.

The project, with a planned duration of four years and a team including one postdoctoral researcher and two PhD students, is conducted in collaboration with Professor Monaldo Mastrolilli of SUPSI, who is also a member of IDSIA (USI-SUPSI). The research is inspired by recent advances in algorithms that combine artificial intelligence techniques with approximate and exact methods. Rather than studying the algorithms themselves, however, it reverses the perspective: it aims to put them under stress by systematically generating mathematically sound, hard-to-solve problem instances.

Professor Gambardella, what is meant by complex problems? 

’Complexity theory’, as defined by Oxford Languages, studies systems made up of a very large number of elements that interact through defined rules and are subject to certain constraints. The aim is to understand the global behaviour of these systems and to predict their evolution. In mathematics, one tries to determine the minimum number of operations required to solve such problems. A research topic in this area concerns measuring the complexity of a problem by evaluating the number of operations needed to solve an instance of it in relation to the amount of input data.

A number of operations which I imagine could be very high. 

There are ’easy’ problems, i.e. those solvable with a number of operations that grows polynomially with the number of inputs, and ’hard’ problems, which we will discuss later. An example of an ’easy’ problem is computing the shortest path between two points in a weighted graph; think of a Google Maps route, for example. Unfortunately, there are ’hard’ problems for which no polynomial-time algorithm is known, and it is even doubtful whether such an algorithm can exist. Solving these problems requires algorithms whose number of operations grows exponentially with the input data.
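As a minimal illustration of such an ’easy’ problem, the sketch below implements Dijkstra’s algorithm for shortest paths in a weighted graph; the road network and distances are made up purely for the example and are not taken from the project.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path in a weighted graph with non-negative edge weights.

    `graph` maps each node to a dict {neighbour: edge_weight}.
    The number of operations grows polynomially with the input size.
    """
    dist = {source: 0}
    queue = [(0, source)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        if node == target:
            return d
        visited.add(node)
        for neighbour, weight in graph[node].items():
            new_dist = d + weight
            if new_dist < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return float("inf")  # target unreachable

# Toy road network with hypothetical distances in km
roads = {
    "Lugano": {"Bellinzona": 30, "Como": 28},
    "Bellinzona": {"Lugano": 30, "Zurich": 170},
    "Como": {"Lugano": 28, "Zurich": 210},
    "Zurich": {"Bellinzona": 170, "Como": 210},
}
print(dijkstra(roads, "Lugano", "Zurich"))  # 200, via Bellinzona
```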

Can you give an example of a ’hard’ problem? 

Instead of finding the route between two points on a map, think about calculating the shortest route that starts in a given city, visits a number of given cities and returns to the starting city. This is the well-known ’travelling salesman problem’, and to solve it exactly one must enumerate all possible tours and measure their length. This requires examining a factorial number of tours which, by Stirling’s approximation, grows at least exponentially with the number of cities. Unfortunately, when the number of cities grows beyond a few hundred, these exact methods become impractical and we have to resort to methods that no longer guarantee the shortest route but only approximate it.
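To make the factorial growth concrete, here is a small brute-force sketch with made-up city coordinates; it enumerates every tour, which is exactly why the approach breaks down well before a few hundred cities.

```python
from itertools import permutations
from math import dist, factorial

def brute_force_tsp(cities):
    """Exact travelling salesman solution by full enumeration.

    `cities` maps a city name to (x, y) coordinates. The first city is
    fixed as start/end point, so (n-1)! orderings are evaluated.
    """
    names = list(cities)
    start, rest = names[0], names[1:]
    best_tour, best_length = None, float("inf")
    for order in permutations(rest):
        tour = (start, *order, start)
        length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))
        if length < best_length:
            best_tour, best_length = tour, length
    return best_tour, best_length

# Hypothetical coordinates, for illustration only
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 3), "D": (6, 1), "E": (3, 0)}
tour, length = brute_force_tsp(cities)
print(tour, round(length, 2))
print("tours to enumerate for 60 cities:", factorial(59))  # about 1.4e80
```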

How do we address these problems? 

The traditional strategy to understand whether these exact or approximate algorithms work in practice is to take a set of instances of the problem under investigation and test them on a computer, studying the actual computation time needed to find a solution and the quality of the solutions as the size of the problem increases and its structure changes. In parallel, on a theoretical level, we study the maximum ratio between the optimal value of an integer linear programme that models the problem and that of its linear relaxation (the integrality gap).
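As a small worked illustration of an integrality gap (a textbook example, not taken from the project), consider minimum vertex cover on a triangle: the integer programme needs two vertices, while the linear relaxation can assign 0.5 to each vertex for a value of 1.5, giving a gap of 4/3. The sketch below computes both values, using scipy for the relaxation and brute force for the integer optimum.

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

# Minimum vertex cover on a triangle: minimise x1 + x2 + x3
# subject to x_u + x_v >= 1 for every edge, with x in {0,1} (ILP) or [0,1] (LP).
edges = [(0, 1), (1, 2), (0, 2)]
c = np.ones(3)
A_ub = np.array([[-1 if i in e else 0 for i in range(3)] for e in edges])
b_ub = -np.ones(len(edges))

# Linear relaxation: optimal value 1.5 (x = 0.5 on every vertex)
lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)

# Integer optimum by brute force: any two vertices cover all three edges
ilp = min(sum(x) for x in product([0, 1], repeat=3)
          if all(x[u] + x[v] >= 1 for u, v in edges))

print("LP relaxation:  ", lp.fun)         # 1.5
print("ILP optimum:    ", ilp)            # 2
print("integrality gap:", ilp / lp.fun)   # 1.333...
```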

To support these studies, the literature offers reference libraries containing instances of different problems on which researchers compare their methods and sometimes challenge each other.

Our research, instead, is based on a different approach: the systematic, mathematically robust generation of hard-to-solve instances, with the aim of understanding the limitations of existing algorithms in order to design better ones.