Lecturers

Each Lecturer will hold up to four lectures on one or more research topics.


Gabriel Barth-Maron

Topics

Foundation Models, Large Language Models, Multimodal Foundation Models, AI, DL, Variational Inference, RL

Biography

I am a Staff Research Engineer at Google DeepMind in London. My research interests span reinforcement learning, data-efficient learning, multimodal modeling, and training large-scale models. I’m also interested in building tools that accelerate the pace of research in machine learning and AI.

Among other contributions, I pioneered Distributed Reinforcement Learning at DeepMind and in the broader academic community. Our papers Distributed Prioritized Experience Replay and Distributed Distributional Deterministic Policy Gradients (D4PG) helped demonstrate the effectiveness of Distributed Reinforcement Learning. We developed and open-sourced Acme, Reverb, and Launchpad to make Distributed RL easier.

Recently I have been working on extending transformers to multiple modalities. One example of this is Gato, a multi-modal, multi-task, multi-embodiment generalist policy. As part of Google DeepMind’s Gemini team I am working on the next generation of large-scale multimodal transformer models.

I hold a BA in mathematical economics and a ScM in computer science from Brown University.

 

Lectures



Sergiy Butenko
 

Topics

Data Science, Networks/Graphs

Biography

Dr. Butenko’s research concentrates mainly on global and discrete optimization and their applications. In particular, he is interested in theoretical and computational aspects of continuous global optimization approaches for solving discrete optimization problems on graphs. Applications of interest include network-based data mining, analysis of biological and social networks, wireless ad hoc and sensor networks, energy, and sports analytics.

Lectures



Sven Giesselbach

Topics

Foundation Models, Large Language Models, Natural Language Understanding, Deep Learning

Biography

Sven Giesselbach leads the Natural Language Understanding (NLU) team at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS). His team develops solutions in the areas of medical, legal, and general document understanding, which at their core build upon (large) pre-trained language models. He is also part of the Lamarr Institute and the OpenGPT-X project, in which he investigates various aspects of Foundation Models. Drawing on experience from more than 25 natural language understanding projects, he studies how Foundation Models change the execution of NLU projects and the novel challenges and requirements that arise with them. He has published several papers on Natural Language Processing and Understanding, focusing on the creation of application-ready NLU systems and the integration of expert knowledge at various stages of the solution design. Most recently, he co-authored the book “Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media”, published by Springer Nature.

Gerhard Paaß, Sven Giesselbach, Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media, Springer, May 2023

https://link.springer.com/book/9783031231896

Lectures



Ramin Hasani

Topics

Liquid Neural Networks, Machine Learning, Generalist Artificial Intelligence

Biography

Ramin Hasani is an AI Research Affiliate at the Computer Science and Artificial Intelligence Lab (CSAIL) at the Massachusetts Institute of Technology (MIT). Previously, he was jointly appointed as a Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at MIT CSAIL. Ramin’s research focuses on robust deep learning and decision-making in complex dynamical systems. Before that, he was a Postdoctoral Associate at MIT CSAIL, leading research on modeling intelligence and sequential decision-making with Prof. Daniela Rus. He received his Ph.D. with distinction in Computer Science from the Vienna University of Technology (TU Wien), Austria, in May 2020. His Ph.D. dissertation and continued research on Liquid Neural Networks have been recognized internationally with numerous nominations and awards, such as a nomination for the TÜV Austria Dissertation Award in 2020 and the HPC Innovation Excellence Award in 2022. He is a frequent TEDx speaker.

http://www.raminhasani.com/

Lectures



Nora Kassner

Topics

LLMs, NLP, Knowledge & Reasoning in Deep Learning Models

Biography

I am Nora Kassner, a Research Scientist at DeepMind working on Natural Language Processing. My research focuses on knowledge and reasoning in deep learning models.

I also serve as SIGREP Secretary for 2022-2024, am co-organizing the Repl4NLP and BigPicture workshops in 2023, and am a member of KI macht Schule (“AI for schools”).

Before that, I was a Research Scientist at Meta AI and a PhD student at the University of Munich supervised by Hinrich Schütze and supported by the Munich Center for Machine Learning.

During my PhD, I interned with the Allen Institute for AI (AI2) and Meta AI. I received AI2’s Outstanding Intern of the Year Award in 2021.

Lectures



Petros Koumoutsakos
 

Topics

AI for science

Biography

Petros Koumoutsakos is the Herbert S. Winokur, Jr. Professor of Engineering and Applied Sciences, Faculty Director of the Institute for Applied Computational Science (IACS), and Area Chair of Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). He studied Naval Architecture (Diploma, NTU of Athens; M.Eng., U. of Michigan) and Aeronautics and Applied Mathematics (PhD, Caltech). He conducted post-doctoral studies at the Center for Parallel Computing at Caltech and at the Center for Turbulence Research at Stanford University and NASA Ames. He served as the Chair of Computational Science at ETH Zurich (1997-2020), has held visiting fellow positions at Caltech, the University of Tokyo, MIT, and the Radcliffe Institute for Advanced Study at Harvard University, and is a Distinguished Affiliated Professor at TU Munich.

Petros is an elected Fellow of the American Society of Mechanical Engineers (ASME), the American Physical Society (APS), the Society for Industrial and Applied Mathematics (SIAM), and the Collegium Helveticum. He is a recipient of the Advanced Investigator Award from the European Research Council and the ACM Gordon Bell Prize in supercomputing. He is an elected International Member of the US National Academy of Engineering (NAE).

His research interests are in the fundamentals and applications of computing and artificial intelligence to understand, predict, and optimize fluid flows in engineering, nanotechnology, and medicine.

https://scholar.google.com/citations?user=IaDP3mkAAAAJ&hl=en

 

Lectures



Maria Liakata

Topics

Natural Language Processing, BioNLP, Machine Learning

Biography

Maria is a Professor in Natural Language Processing (NLP) at Queen Mary University of London. She holds an EPSRC/UKRI Turing AI Fellowship on Creating Time Sensitive Sensors from Language & Heterogeneous User-Generated Content (2019-2025): https://www.turing.ac.uk/research/research-projects/time-sensitive-sensing-language-and-user-generated-content

At the Alan Turing Institute she co-leads the NLP and data science for mental health interest groups and supervises PhD students. She co-leads projects on language sensing for dementia monitoring & diagnosis (https://www.dcs.warwick.ac.uk/langsensing/), opinion summarisation from social media, and an AI evidence-based framework during pandemics (https://panacea2020.github.io/index.html).

Maria has a DPhil from the University of Oxford on learning pragmatic knowledge from text. Her work has contributed to advances in knowledge discovery from corpora, automation of scientific experimentation, and automatic extraction of information from the scientific literature. She has published widely both in NLP and interdisciplinary venues. Past awards include an IBM Faculty Award for work on emotion sensing from heterogeneous mobile phone data, a co-investigator role on the EU project PHEME, which studied the spread of rumours in social media (2014-2017), and an Early Career Fellowship from the Leverhulme Trust (2010-2013) on reasoning with scientific articles.

https://scholar.google.co.uk/citations?user=eys5GB4AAAAJ&hl=en

Lectures



Panos Pardalos

Topics

Data Science, Global Optimization, Mathematical Modeling, Financial Applications, AI

Biography

Panos Pardalos was born in Drosato (Mezilo) Argitheas in 1954 and graduated from Athens University (Department of Mathematics). He received his PhD (Computer and Information Sciences) from the University of Minnesota. He is a Distinguished Emeritus Professor in the Department of Industrial and Systems Engineering at the University of Florida, and an affiliated faculty member of the Biomedical Engineering and Computer & Information Science & Engineering departments.

Panos Pardalos is a world-renowned leader in Global Optimization, Mathematical Modeling, Energy Systems, Financial Applications, and Data Sciences. He is a Fellow of AAAS, AAIA, AIMBE, EUROPT, and INFORMS and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, Panos Pardalos has been awarded the 2013 EURO Gold Medal prize bestowed by the Association of European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.”

Panos Pardalos has been awarded a prestigious Humboldt Research Award (2018-2019). The Humboldt Research Award is granted in recognition of a researcher’s entire achievements to date – fundamental discoveries, new theories, insights that have had significant impact on their discipline.

Panos Pardalos is also a Member of several Academies of Sciences, and he holds several honorary PhD degrees and affiliations. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the International Journal of Global Optimization, Computational Management Science, and Springer Nature Operations Research Forum. He has published over 600 journal papers and edited/authored over 200 books. He is one of the most cited authors and has graduated 71 PhD students so far. Details can be found at www.ise.ufl.edu/pardalos

Panos Pardalos has lectured and given invited keynote addresses worldwide in countries including Austria, Australia, Azerbaijan, Belgium, Brazil, Canada, Chile, China, Czech Republic, Denmark, Egypt, England, France, Finland, Germany, Greece, Holland, Hong Kong, Hungary, Iceland, Ireland, Italy, Japan, Lithuania, Mexico, Mongolia, Montenegro, New Zealand, Norway, Peru, Portugal, Russia, South Korea, Singapore, Serbia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey, Ukraine, United Arab Emirates, and the USA.

https://scholar.google.com/citations?user=4e_KEdUAAAAJ&hl=en

Lectures



Joseph Sifakis

Topics

AI, Autonomous Systems, Applications to Self-Driving Vehicles, Autonomous Networks

Biography

Joseph Sifakis is Emeritus Research Director at Verimag, a laboratory in the area of safety-critical systems that he directed for 13 years. He was a full professor at Ecole Polytechnique Fédérale de Lausanne (EPFL) from 2011 to 2016.

Joseph Sifakis has made significant contributions to the design of reliable systems in many application areas, including avionics and space systems, telecommunications, and production systems. His current research focuses on autonomous systems, in particular self-driving cars and autonomous telecommunication systems.

In 2007, he received the Turing Award for his contribution to the theory and application of model checking, the most widely used system verification technique.

Joseph Sifakis is a member of six academies and a frequent speaker in international scientific and technical events.

Recent book: Joseph Sifakis, “Understanding and Changing the World: From Information to Knowledge and Intelligence”, Springer, 2022.

Awards
Turing Award, 2007
Leonardo da Vinci Medal, 2012
Grand Officer of the National Order of Merit, France, 2008
Commander of the Legion of Honor, France, 2011
Member of the French Academy of Sciences, 2010
Member of Academia Europaea, 2008
Member of the French Academy of Engineering, 2008
Member of the American Academy of Arts and Sciences, 2015
Member of the National Academy of Engineering, 2017
Foreign member of the Chinese Academy of Sciences, 2019

https://en.wikipedia.org/wiki/Joseph_Sifakis

Lectures



Jakub Tomczak

Topics

Generative AI, Machine Learning, Deep Learning, Deep Generative Models

Biography

Dr. Jakub (J.M.) Tomczak is an associate professor at Eindhoven University of Technology, and a PI leading the Generative Artificial Intelligence team.

Expertise: Jakub’s research is focused on Generative Artificial Intelligence that aims at combining deep learning and probabilistic modeling.

(International) leadership: Jakub serves as an area chair at top AI conferences (NeurIPS, AISTATS, UAI) and as an action editor for Transactions of Machine Learning Research (TMLR). He is a member of ELLIS. He has supervised a total of 7 PhD candidates (3 completed). Jakub is regularly invited as an international keynote speaker to conferences, summer/winter schools, and companies.

Grants/prizes/awards: Jakub has obtained a total of 480k€ in external funding. This includes about 180k€ for the prestigious Marie Sklodowska-Curie Individual Fellowship (MSC-IF) carried out at the University of Amsterdam (2016-2018). Recently, he obtained 280k€ as a personal grant from Qualcomm. He has also received several smaller grants and individual awards, including the Network Institute Academy Assistant program (as a co-PI) and individual scholarships & grants from the Wroclaw University of Technology.

Outputs: Jakub is the author of the first comprehensive book on generative AI (“Deep Generative Modeling”, Springer, Cham, 2022) and has (co-)authored 25 peer-reviewed journal articles and 21 peer-reviewed conference publications, including at NeurIPS, ICML, ICLR, AISTATS, UAI, ICCV, and CVPR. His publications have been cited more than 4,300 times according to Google Scholar (as of February 2023).

Lectures



Topics

Diffusion Models, AI, Deep Learning

Biography

I am a Principal Researcher at Microsoft Research Amsterdam, where I work at the intersection of deep learning and computational chemistry and physics for molecular simulation. My research has spanned a range of topics, from generative modeling, variational inference, source compression, and graph-structured learning to condensed matter physics. Before joining MSR, I was a Research Scientist at Google Brain. I received my PhD in theoretical condensed-matter physics in 2016 at the University of Amsterdam, where I also worked as a postdoctoral researcher in the Amsterdam Machine Learning Lab (AMLAB). In 2019 I won the Faculty of Science Lecturer of the Year award at the University of Amsterdam for teaching a machine learning course in the Master of AI programme.

Lectures



Pascal Van Hentenryck

Topics

Artificial Intelligence, Data Science

Biography

Pascal Van Hentenryck is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. Prior to this appointment, he was a professor of Computer Science at Brown University for about 20 years, led the optimization research group (about 70 people) at National ICT Australia (NICTA) until its merger with CSIRO, and was the Seth Bonder Collegiate Professor of Engineering at the University of Michigan. He is also an Honorary Professor at the Australian National University.

Van Hentenryck is a Fellow of AAAI (the Association for the Advancement of Artificial Intelligence) and INFORMS (the Institute for Operations Research and the Management Sciences). He has been awarded two honorary doctoral degrees from the University of Louvain and the University of Nantes, the IFORS Distinguished Lecturer Award, the Philip J. Bray Award for teaching excellence in the physical sciences at Brown University, the ACP Award for Research Excellence in Constraint Programming, the ICS INFORMS Prize for Research Excellence at the Intersection of Computer Science and Operations Research, and an NSF National Young Investigator Award. He received a Test of Time Award (20 years) from the Association for Logic Programming and numerous best paper awards, including at IJCAI and AAAI. He has given plenary/semi-plenary talks at the International Joint Conference on Artificial Intelligence (twice), the International Symposium on Mathematical Programming, the SIAM Optimization Conference, the Annual INFORMS Conference, NIPS, and many other conferences, and was program co-chair of the AAAI’19 conference, a premier conference in Artificial Intelligence.

Van Hentenryck’s research focuses on Artificial Intelligence, Data Science, and Operations Research. His current focus is to develop methodologies, algorithms, and systems for addressing challenging problems in mobility, energy systems, resilience, and privacy. In the past, his research focused on optimization and the design and implementation of innovative optimization systems, including the CHIP programming system (a Cosytec product), the foundation of all modern constraint programming systems, and the optimization programming language OPL (now an IBM product). He has also worked on computational biology, numerical analysis, and programming languages, publishing in premier journals in these areas.

https://scholar.google.com/citations?user=GxFQz-4AAAAJ&hl=en

Lectures



Max Welling

Topics

Machine Learning, Artificial Intelligence, Statistics

Biography

Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Distinguished Scientist at MSR. He is a fellow at the Canadian Institute for Advanced Research (CIFAR) and the European Lab for Learning and Intelligent Systems (ELLIS), where he also serves on the founding board. His previous appointments include VP at Qualcomm Technologies, professor at UC Irvine, postdoc at U. Toronto and UCL under the supervision of Prof. Geoffrey Hinton, and postdoc at Caltech under the supervision of Prof. Pietro Perona. He finished his PhD in theoretical high energy physics under the supervision of Nobel laureate Prof. Gerard ‘t Hooft.

Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015, has served on the advisory board of the NeurIPS Foundation since 2015, and was program chair and general chair of NeurIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. Max Welling is the recipient of the ECCV Koenderink Prize in 2010 and the ICML Test of Time Award in 2021. He directs the Amsterdam Machine Learning Lab (AMLAB) and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA).

https://scholar.google.com/citations?hl=en&user=8200InoAAAAJ&view_op=list_works&sortby=pubdate

Lectures