Time | Content |
---|---|
8:30 | Registration and Coffee |
10:00 | Words of welcome by Martin Vetterli |
10:10 | Session (Chair: Pierre Vandergheynst) |
10:10 | Pierre Dillenbourg |
10:30 | Gunnar Karlsson |
10:50 | Marta Martinez-Camara and Miranda Kreković |
11:10 | Andrea Ridolfi |
12:00 | Standing Lunch |
13:30 | Session (Chair: Rüdiger Urbanke) |
13:30 | Ivan Dokmanić |
13:50 | Pier Luigi Dragotti |
14:10 | Patrick Vandewalle |
14:30 | Thierry Blu |
14:50 | Michael Gastpar |
15:30 | Coffee Break |
16:00 | Session (Chair: Karen Adam) |
16:00 | Michalina Pacholska |
16:20 | Loïc Baboulaz |
16:40 | Juri Ranieri |
17:00 | Laurent Daudet |
17:20 | Olivier Roy |
18:00 | Cocktail and Music by Cellier-Duperrex |

Time | Content |
---|---|
9:00 | Opening and Coffee |
9:50 | Opening remarks by Martin Vetterli |
10:00 | Session (Chair: Patrick Thiran) |
10:00 | Frederike Dümbgen |
10:20 | Vivek Goyal |
10:40 | Minh N. Do |
11:00 | Antonio Ortega |
11:20 | Yue M. Lu |
11:40 | Amin Karbasi |
12:30 | Standing Lunch |
14:00 | Session (Chair: Jan Hesthaven) |
14:00 | Robert-Jan Smits |
14:20 | Hyungju Alan Park |
14:40 | Angelika Kalt |
15:00 | Sabine Süsstrunk |
15:30 | Coffee Break |
16:00 | Session (Chair: Michael Unser) |
16:00 | Stéphane Mallat |
17:00 | Closing remarks by Martin Vetterli |
17:30 | Standing Dinner |
19:00 | Concert by Uptown Big Band |
When doing research in education, it takes a few months to identify an interesting problem, one that can be tackled with local interventions. I will present 10 problems and the digital solutions we built, one problem and one solution per minute (dual eye tracking, learning analytics, interactive furniture, tangible interfaces, swarm robotics, AR/VR). Some of these solutions may look like gadgets, but they have been tested in school contexts through formal experiments. While many of the tools can be described as innovative, the starting point was not a quest for innovation. However, when teachers spontaneously state "I did not know one could do that", it means the project will have impact beyond the product and the publications.
This talk explores the concept of university education beyond the traditional campus setting. It discusses the limitations of online learning and presents alternative formats such as boot camps and study circles. The boot camp format, inspired by the Swedish armed forces, offers intensive and structured training for fast reskilling. On the other hand, study circles draw inspiration from the 19th century Scandinavian tradition, providing accessible education for those in need. The goal is to extend university education to a wider audience and not just those on campus.
Join Marta Martinez-Camara & Miranda Kreković as they share their experiences and adventures in different countries, working to encourage girls to explore and learn computer science. They will discuss their efforts in promoting gender diversity in technology and highlight the importance of empowering girls to pursue careers in the field. Through their work, they aim to break down barriers and create inclusive opportunities for girls to thrive in the world of coding.
Mid-life crises come in all shapes and sizes. Mine was characterized by a mad desire to learn to ski, and by that I mean to learn to ski 'seriously', like a pro. So what's the best way to learn something 'seriously'? Well, the best way is to teach it. The ski instructors' course and a full season as a ski teacher not only introduced me to the beauty of ski carving, but more importantly to the captivating nature of teaching people to ski. Yes indeed, captivating, just like every complex problem which has a beautiful tool for its solution. The complexity comes from the fact that pre-set classes cannot be used, and teaching has to cope with the learning style of the 'student' (visual, auditory, kinesthetic, tactile, and any possible combination of them!). I would love to share with you a very personal journey in teaching with failures (and painful physical falls), successes (and nice smooth turns), and key notions I was able to apply to my role as a lecturer (and skier).
An LCAViste who flirts with non-anthropoid intelligence may experience a unique sense of shame. But there is a prize for the shameless. I will talk about (possible) examples in hearing the shapes of planets, understanding the mechanics of graph neural networks, and learning to extrapolate analog models of the world from digital signals.
In this talk, I will briefly look at key achievements of signal processing research that I have witnessed over the past 25 years. These achievements range from the utilization of wavelets for image compression to the emergence of sparse signal processing and compressive sensing. Additionally, I will discuss how these findings have been overshadowed by the recent surge of data-driven methods. Specifically, I will focus on the lifting scheme and its evolution from a tool in wavelet construction to becoming the fundamental building block for invertible neural networks. This discussion will inspire a reflection on how to integrate theoretical findings in signal processing with machine learning. Within this context, I will explore applications in computational imaging, including neuroscience and art investigation.
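The lifting scheme mentioned above splits a signal into even and odd samples, predicts one half from the other, and then updates; because each step is trivially invertible, the whole transform is too, which is exactly the property invertible neural networks inherit. A minimal sketch of one Haar lifting level (my own toy illustration, not code from the talk):

```python
import numpy as np

def haar_lift_forward(x):
    """One level of Haar lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict: even samples predict odd ones
    approx = even + detail / 2   # update: approx becomes the pairwise mean
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Invert by undoing the lifting steps in reverse order."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 10.0, 12.0])
a, d = haar_lift_forward(x)
assert np.allclose(haar_lift_inverse(a, d), x)  # perfect reconstruction
```

Replacing the simple predict/update maps with learned networks gives an additive coupling layer, which stays exactly invertible by the same argument.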
This talk will cover several methods for estimating missing pixel information. First, I will discuss super-resolution imaging, where the goal is to interpolate intermediate pixels to generate a higher-resolution image. Next, we shift our attention to 3D, adding depth information to images. This enables us to create novel views from a different perspective, allowing a viewer to perceive images in 3D. However, novel view synthesis introduces new pixels to be filled in: occlusion areas. Occlusion inpainting is the technique of filling in pixels that were invisible from the original point of view, resulting in more naturally perceived 3D images.
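As a baseline for the "interpolate intermediate pixels" step, here is a toy 2x upsampler that fills the new pixels by averaging their known neighbours. This is a hedged sketch of the simplest possible approach, not the methods from the talk:

```python
import numpy as np

def upsample2x_linear(img):
    """Double the resolution of a 2D image via bilinear interpolation."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1))
    out[0::2, 0::2] = img                              # keep original pixels
    out[0::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2   # new columns
    out[1::2, 0::2] = (img[:-1, :] + img[1:, :]) / 2   # new rows
    out[1::2, 1::2] = (img[:-1, :-1] + img[:-1, 1:] +
                       img[1:, :-1] + img[1:, 1:]) / 4  # new centers
    return out

img = np.array([[0.0, 2.0], [4.0, 6.0]])
hi = upsample2x_linear(img)  # 3x3 image with interpolated in-between pixels
```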
We focus on 1D time-samples collected by a mobile sensor (e.g., temperature, pressure, magnetic field, etc.). Using a very efficient high-resolution FRI algorithm, we show that tracking the dynamic frequency contents of these samples reveals the 2D motion of the sensor, making it possible to reconstruct its trajectory (up to an affine transformation). Although this theory is developed under the assumption that the 2D field sampled by the sensor satisfies a parametric model (sum of 2D sinusoids), successful empirical results with real textured images suggest a broader validity.
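The parametric model behind FRI-style recovery can be illustrated on a 1D toy case: samples of a sum of complex exponentials are annihilated by a short filter whose roots encode the frequencies. The sketch below is my own simplification; the high-resolution algorithm and its 2D trajectory reconstruction from the talk are considerably more involved:

```python
import numpy as np

def annihilating_filter_freqs(x, K):
    """Recover K normalized frequencies from samples of K complex exponentials."""
    N = len(x)
    # Annihilation: x[n] + h[1]x[n-1] + ... + h[K]x[n-K] = 0 for all valid n
    A = np.array([[x[n - i] for i in range(1, K + 1)] for n in range(K, N)])
    b = -np.array([x[n] for n in range(K, N)])
    h = np.linalg.lstsq(A, b, rcond=None)[0]
    roots = np.roots(np.concatenate(([1.0], h)))   # roots are e^{2*pi*i*f_k}
    return np.sort(np.angle(roots) / (2 * np.pi))

f_true = np.array([0.11, 0.31])
n = np.arange(32)
x = sum(np.exp(2j * np.pi * f * n) for f in f_true)
f_est = annihilating_filter_freqs(x, K=2)  # recovers 0.11 and 0.31
```

In the noiseless case the linear system is exactly consistent, so the two frequencies are recovered to machine precision from only a handful of samples.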
In a federation, numerous autonomous entities operate independently of one another while sharing a common objective: collaboratively addressing a larger problem. However, each entity possesses only a partial, and potentially distorted, view of the overall situation. This inherent limitation significantly hampers their ability to resolve the problem, even if all entities genuinely and diligently attempt to do so, and the resulting fundamental performance constraint is remarkably stringent. We examine scenarios where each entity's observations are affected by Gaussian noise and the overarching problem is framed in terms of quadratic loss, and we demonstrate that the cost of pursuing a federated approach grows without bound as the number of participating entities increases.
In 1908, Gabriel Lippmann received the Nobel Prize in Physics for his invention of interference-based colour photography. Yet today we use three-colour photography, and Lippmann photographs, the first hyperspectral images, can be found only in museums. Why? In this talk, I will present work done by a large team consisting of researchers from LCAV and Galatea and curators from the museum Photo Elysée. I will explain the history and the principle behind Lippmann photography and show how colour distortions in Lippmann plates can be explained via signal processing phenomena. I will also talk about our attempts to make Lippmann photography available to the general public: by printing digital Lippmann photographs and by helping Photo Elysée showcase the historical plates.
For decades, representing a physical artwork on a screen simply came down to showing its photograph, which greatly reduces its true materiality and visual richness. In 2012, the eFacsimile research project tackled the problem of how to capture and faithfully represent on digital screens the complicated nature of a physical work of art. This led to the creation of ARTMYN, a technology company that provides unique solutions for digitizing visual artworks. From custom imaging hardware to advanced imaging algorithms and the latest web visualization applications, ARTMYN brings the most advanced digital experience to the art world.
Over the last decade, machine learning (ML) has become a powerful tool in security and counter abuse programs. With vast datasets, advanced algorithms, and increased computing power, ML presented transformative opportunities. However, aligning ML theory with practical applications in counter abuse poses challenges that affect its effectiveness. In this talk, we will explore the obstacles in feature engineering, data collection, and model training, while also highlighting emerging opportunities brought about by the ongoing artificial intelligence revolution.
This talk narrates the journey of an unconventional scientific collaboration in computational imaging that evolved into a startup focusing on optical computing. The story takes a recent twist, leading to the exploration of generative AI and large language models. Discover how these diverse fields intersect and drive innovation.
SoftVue is a 3D whole-breast ultrasound tomography imaging device recently cleared by the FDA as an adjunct screening modality to mammography for women with dense breasts. SoftVue is radiation-free, operator-independent, and does not involve breast compression. For the target population, it has been shown to increase both sensitivity and specificity, hence detecting more cancers at an early stage while decreasing callbacks. In this talk, I will present an overview of the technology and discuss the many technical, clinical, and operational challenges that we have faced on our long journey to bring this medical innovation to the market.
Today’s go-to solvers for non-convex optimization problems are iterative in nature: they update an initial guess until convergence to a local optimum is reached. Widespread assumptions are that you cannot draw conclusions about the global optimality of estimates found this way, and that globally optimal solutions would be too expensive to obtain. In this talk, I will present recent advances that challenge these assumptions and show that for a wide variety of optimization problems, in particular many commonly encountered problems in robotics, global optimality can be achieved in reasonable time, either by "certifying" the output of iterative solvers or by solving tight semidefinite relaxations. I will describe our recent efforts to make these methods both more efficient and more accessible, and present what we believe to be interesting open questions in this exciting area of research.
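To make the "certify the output of an iterative solver" idea concrete, consider the simplest nonconvex problem with a known optimality certificate: minimizing a quadratic form over the unit sphere. A candidate from a local solver is globally optimal exactly when the Lagrangian-based matrix A - lam*I is positive semidefinite. This is a generic toy sketch, not one of the robotics problems from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2  # random symmetric cost matrix

# Nonconvex problem: minimize x^T A x subject to ||x|| = 1.
# A local iterative solver: projected gradient descent.
x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for _ in range(5000):
    x -= 0.05 * (A @ x)       # gradient step (the gradient is 2Ax, up to scale)
    x /= np.linalg.norm(x)    # project back onto the sphere

# A posteriori certificate: with Lagrange multiplier lam = x^T A x, the
# candidate x is globally optimal iff A - lam * I is positive semidefinite.
lam = x @ A @ x
cert_ok = np.linalg.eigvalsh(A - lam * np.eye(5)).min() >= -1e-8
```

Here the certificate is a single eigenvalue check; for the structured problems discussed in the talk, the analogous certificates come from the dual of a semidefinite relaxation.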
A particle beam microscope uses a focused beam of ions or electrons to cause the emission of secondary electrons (SEs) from a sample. A micrograph is formed from the SEs detected during the dwell time of the beam at each raster scan location. It seems innocuous to analogize the microscope to an ordinary digital camera with serialized pixel-by-pixel data collection. This is valid in some ways, but it precludes maximum information extraction. Time resolution within each pixel dwell time enables significant improvements without changes to the basic microscope hardware. Several fun and easy theoretical results will be shared, along with progress on the quest to fully demonstrate the merit of this idea.
Motion ability strongly affects the quality of life, especially in the aging population. In this talk, we will explore signal processing research efforts for modeling and analyzing human motion. These efforts aim to enable precise, predictive, personalized, and participatory treatments in the fields of neurology and orthopedics.
Decades before graph signal processing (GSP) became an active research area, graphs were used in signal and image processing applications. In this talk, I start by providing several examples of these applications, where graphs are (implicit or explicit) models for processing data rather than representations of a physical network (e.g., a road network). Then, using our recent work on graph filterbanks as motivation, I will argue that progress in GSP can open the way for new methods that can be applied to conventional signals.
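A minimal example of the graph-filtering viewpoint: diagonalize the graph Laplacian to obtain a graph Fourier basis, then shape a signal's spectrum with a scalar function of the eigenvalues. This is a generic GSP sketch of my own, not the filterbank constructions from the talk:

```python
import numpy as np

# Path graph on 6 nodes: adjacency W, combinatorial Laplacian L = D - W
N = 6
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Graph Fourier basis: eigenvectors of L, ordered by graph frequency
evals, U = np.linalg.eigh(L)

def graph_filter(x, h):
    """Transform x to the spectral domain, scale by h(lambda), transform back."""
    return U @ (h(evals) * (U.T @ x))

lowpass = lambda lam: 1.0 / (1.0 + 2.0 * lam)
x = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])  # highly oscillatory signal
x_smooth = graph_filter(x, lowpass)               # attenuates high graph frequencies
```

When the graph is a cycle, the eigenbasis reduces to the DFT, which is one way the GSP machinery folds back onto conventional signals.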
Universality is a fascinating high-dimensional phenomenon. It points to the existence of universal laws that govern the macroscopic behavior of wide classes of large and complex systems, despite their differences in microscopic details. In this talk, I will present some recent progress in rigorously understanding and exploiting the universality phenomenon in the context of statistical estimation and learning on high-dimensional data. Examples include spectral methods for high-dimensional projection pursuit, statistical learning based on kernel and random feature models, approximate message passing algorithms, structured random dimension reduction maps for efficient sketching, and regularized linear regression on highly structured, strongly correlated, and even (nearly) deterministic design matrices. Together, they demonstrate the robustness and wide applicability of the universality phenomenon.
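A quick numerical illustration of the universality phenomenon (my own toy experiment, not a result from the talk): the largest singular value of a large random matrix concentrates at the same limiting edge whether the entries are Gaussian or random signs, even though the microscopic entry distributions differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 400

def top_singular_value(draw):
    """Largest singular value of an n x d matrix with i.i.d. entries, variance 1/n."""
    X = draw((n, d)) / np.sqrt(n)
    return np.linalg.svd(X, compute_uv=False)[0]

s_gauss = top_singular_value(rng.standard_normal)
s_signs = top_singular_value(lambda shape: rng.choice([-1.0, 1.0], size=shape))
edge = 1 + np.sqrt(d / n)  # the common (universal) limiting edge value
```

Both draws land within a few hundredths of the same edge, despite one distribution being continuous and the other supported on two points.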
Lack of replicability in experiments has been a major issue, usually referred to as the reproducibility crisis, in many scientific areas such as biology, chemistry, and artificial intelligence. In this talk, we investigate the notion of replicability in the context of machine learning and characterize the existence of statistically indistinguishable learning algorithms for certain learning problems.
Europe’s universities are going through turbulent times. Governments are demanding knowledge security, less internationalization, and greater national impact. Students want more democratic processes as well as a greater focus on sustainability and student well-being. And if this weren't enough, AI, and notably ChatGPT, is stirring things up big time. No wonder university presidents are finding it hard to get their 8 hours of sleep at night.
Academic culture plays a crucial role in shaping research outcomes. This talk explores the importance of promoting multidisciplinary collaborations and the challenges faced in implementing institution-wide policies in academia. The complex nature of different disciplines and their traditions is discussed, highlighting the need for leaders in academia to consider policy changes that foster collective efforts.
Funders used to be organizations that received and evaluated proposals and made funding decisions. This is still their core business, but many other things, such as adequate policies and science diplomacy, are now expected of them and are becoming increasingly important. Funders are part of movements such as Open Science and New Research Culture and often incite or drive change at the national and international level. The current issues are the ethical self-regulation of science but also, in the recent geopolitical context, the increasing steering of funders by (at least some) governments in the western world, where funders have enjoyed ample autonomy up to now. I am happy to share some thoughts on these topics with you.
I will explain the workings of the Swiss Science Council (SSC), whose president I became in 2021. The SSC is an extra-parliamentary commission that advises the Federal Council on issues related to research, innovation, and higher education.
Compression, communication, detection, inverse problems, source separation: all the pet problems of signal processing are being invaded by machine learning. Better solutions seem to emerge from stochastic gradient descent in deep networks than from beautifully crafted algorithms. Will hardware be the ultimate retreat of signal processing? Why not be megalomaniacs instead and try to absorb machine learning? Learning is about discovering structures, which is at the core of information theory and harmonic analysis. Entropy, scale separation, random coding, and sparsity are conceptual pillars of learning. There is still lots to do for signal processing and friends.