Lecturers
Each lecturer will give up to four lectures on one or more research topics.
Topics
Foundation Models, Large Language Models, Multimodal Foundation Models, AI, DL, Variational Inference, RL
Biography
I am a Staff Research Engineer at Google DeepMind in London. My research interests span reinforcement learning, data-efficient learning, multimodal modeling, and training large-scale models. I’m also interested in building tools that accelerate the pace of research in machine learning and AI.
Among other contributions, I pioneered Distributed Reinforcement Learning at DeepMind and in the greater academic community. Our papers Distributed Prioritized Experience Replay and Distributed Distributional Deterministic Policy Gradients (D4PG) helped demonstrate the effectiveness of Distributed Reinforcement Learning. We developed and open-sourced Acme, Reverb, and Launchpad to make Distributed RL easier.
Recently I have been working on extending transformers to multiple modalities. One example of this is Gato, a multi-modal, multi-task, multi-embodiment generalist policy. As part of Google DeepMind’s Gemini team I am working on the next generation of large-scale multimodal transformer models.
I hold a BA in mathematical economics and a ScM in computer science from Brown University.
Lectures
Abstract TBA
Abstract TBA
Abstract TBA
Abstract TBA
Topics
Data Science, Networks/Graphs
Biography
Dr. Butenko’s research concentrates mainly on global and discrete optimization and their applications. In particular, he is interested in theoretical and computational aspects of continuous global optimization approaches for solving discrete optimization problems on graphs. Applications of interest include network-based data mining, analysis of biological and social networks, wireless ad hoc and sensor networks, energy, and sports analytics.
Lectures
Cluster analysis is an important task arising in network-based data analysis. Perhaps the most natural model of a cluster in a network is given by a clique, which is a subset of pairwise-adjacent nodes. However, the clique model appears to be overly restrictive in practice, which has led to the introduction of numerous models relaxing various properties of cliques, known as clique relaxations. This talk focuses on a systematic cluster analysis framework based on clique relaxation models.
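To make the clique model concrete, here is a minimal sketch (not part of the lecture materials) that checks the clique property and finds a maximum clique by brute force on a tiny graph; the graph, helper names, and adjacency-set representation are illustrative:

```python
from itertools import combinations

def is_clique(nodes, adj):
    """A clique is a set of pairwise-adjacent nodes: check every pair."""
    return all(v in adj[u] for u, v in combinations(nodes, 2))

def max_clique_brute_force(adj):
    """Return one maximum clique by exhaustive search (tiny graphs only;
    the problem is NP-hard in general)."""
    nodes = list(adj)
    for k in range(len(nodes), 0, -1):
        for cand in combinations(nodes, k):
            if is_clique(cand, adj):
                return list(cand)
    return []

# A small network: {1, 2, 3} form a clique; node 4 hangs off node 3.
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(max_clique_brute_force(graph))  # → [1, 2, 3]
```

Clique relaxations weaken the pairwise-adjacency test above, e.g. by allowing a bounded number of missing edges or a larger internal diameter.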
We discuss continuous formulations for several important cluster-detection problems in networks. More specifically, the problems of interest are formulated as quadratic, cubic, or higher-degree polynomial optimization problems subject to linear constraints. The proposed formulations are used to develop analytical bounds as well as effective algorithms for some of the problems. Moreover, a novel hierarchy of nonconvex continuous reformulations of optimization problems on networks is discussed.
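A classical example of such a quadratic formulation (included here as an illustration; the lecture's specific formulations may differ) is the Motzkin–Straus program, which ties the clique number $\omega(G)$ of a graph with adjacency matrix $A_G$ to a quadratic optimization problem over the standard simplex:

```latex
\max_{x \in \Delta_n} \; x^{\top} A_G\, x \;=\; 1 - \frac{1}{\omega(G)},
\qquad
\Delta_n = \Big\{ x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i = 1,\; x_i \ge 0 \Big\}.
```

A maximizer supported uniformly (weight $1/\omega(G)$) on the vertices of a maximum clique attains the bound, so solving this continuous problem recovers the discrete clique number.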
Topics
Foundation Models, Large Language Models, Natural Language Understanding, Deep Learning
Biography
Sven Giesselbach leads the Natural Language Understanding (NLU) team at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS). His team develops solutions in the areas of medical, legal, and general document understanding, which at their core build upon (large) pre-trained language models. He is also part of the Lamarr Institute and the OpenGPT-X project, in which he investigates various aspects of Foundation Models. Drawing on experience from more than 25 natural language understanding projects, he studies the effect of Foundation Models on the execution of NLU projects and the novel challenges and requirements that arise with them. He has published several papers on Natural Language Processing and Understanding, focusing on the creation of application-ready NLU systems and the integration of expert knowledge at various stages of the solution design. Most recently, he co-authored the book “Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media”, published by Springer Nature.
Gerhard Paaß and Sven Giesselbach, Foundation Models for Natural Language Processing – Pre-trained Language Models Integrating Media, Springer, May 2023.
Lectures
Abstract TBA
Abstract TBA
Abstract TBA
Abstract TBA
Topics
Liquid Neural Networks, Machine Learning, Generalist Artificial Intelligence
Biography
Ramin Hasani is an AI Research Affiliate at the Computer Science and Artificial Intelligence Lab (CSAIL), Massachusetts Institute of Technology (MIT). Previously, he was jointly appointed as a Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at CSAIL MIT. Ramin’s research focuses on robust deep learning and decision-making in complex dynamical systems. Prior to that, he was a Postdoctoral Associate at CSAIL MIT, leading research on modeling intelligence and sequential decision-making with Prof. Daniela Rus. He received his Ph.D. degree with distinction in Computer Science from the Vienna University of Technology (TU Wien), Austria (May 2020). His Ph.D. dissertation and continued research on Liquid Neural Networks have been recognized internationally with numerous nominations and awards, such as a TÜV Austria Dissertation Award nomination in 2020 and an HPC Innovation Excellence Award in 2022. He is a frequent TEDx speaker.
http://www.raminhasani.com/
Lectures
Abstract TBA
Abstract TBA
Topics
Diffusion Models, AI, Machine Learning
Biography
I am a senior researcher at MSR Amsterdam, working on machine learning for molecular simulation. I did my PhD in computer science at Mila (University of Montreal) with Aaron Courville. I have worked on a wide variety of topics in core ML, including generative models, variational inference, and Bayesian deep learning. I received a Google PhD fellowship in the category of Machine Learning in 2020. Throughout my PhD, I also spent some time interning at Google and Element AI (acquired by ServiceNow), and helped organize INNF+, a workshop on invertible flows and other likelihood-based models, from 2019 to 2021. Prior to my PhD, I obtained my Bachelor’s degree in chemical engineering at National Taiwan University (NTU).
Lectures
Abstract TBA
Abstract TBA
Abstract TBA
Topics
LLMs, NLP, Knowledge & Reasoning in Deep Learning Models
Biography
I am Nora Kassner, a Research Scientist at DeepMind working on Natural Language Processing. My research focuses on knowledge and reasoning in deep learning models.
Also, I am SIGREP Secretary for 2022-2024; I am co-organizing the Repl4NLP and BigPicture workshops for 2023; and am a member of KI macht Schule (“AI for schools”).
Before that, I was a Research Scientist at Meta AI and a PhD student at the University of Munich supervised by Hinrich Schütze and supported by the Munich Center for Machine Learning.
During my PhD, I interned with the Allen Institute for AI (AI2) and Meta AI. I received AI2’s Outstanding Intern of the Year Award in 2021.
Lectures
Abstract TBA
Abstract TBA
Abstract TBA
Topics
AI for science
Biography
Petros Koumoutsakos is the Herbert S. Winokur, Jr. Professor of Engineering and Applied Sciences, Faculty Director of the Institute for Applied Computational Science (IACS), and Area Chair of Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). He studied Naval Architecture (Diploma, NTU of Athens; M.Eng., U. of Michigan) and Aeronautics and Applied Mathematics (PhD, Caltech). He conducted post-doctoral studies at the Center for Parallel Computing at Caltech and at the Center for Turbulence Research at Stanford University and NASA Ames. He served as the Chair of Computational Science at ETH Zurich (1997-2020), has held visiting fellow positions at Caltech, the University of Tokyo, MIT, and the Radcliffe Institute for Advanced Study at Harvard University, and is Distinguished Affiliated Professor at TU Munich.
Petros is elected Fellow of the American Society of Mechanical Engineers (ASME), the American Physical Society (APS), the Society of Industrial and Applied Mathematics (SIAM) and the Collegium Helveticum. He is recipient of the Advanced Investigator Award by the European Research Council and the ACM Gordon Bell prize in Supercomputing. He is elected International Member to the US National Academy of Engineering (NAE).
His research interests are on the fundamentals and applications of computing and artificial intelligence to understand, predict and optimize fluid flows in engineering, nanotechnology, and medicine.
https://scholar.google.com/citations?user=IaDP3mkAAAAJ&hl=en
Lectures
Abstract TBA
Topics
Natural Language Processing, BioNLP, Machine Learning
Biography
Maria is a Professor in Natural Language Processing (NLP) at Queen Mary University of London. She holds an EPSRC/UKRI Turing AI Fellowship on Creating Time Sensitive Sensors from Language & Heterogeneous User-Generated Content (2019-2025): https://www.turing.ac.uk/research/research-projects/time-sensitive-sensing-language-and-user-generated-content
At the Alan Turing Institute she co-leads the NLP and data science for mental health interest groups and supervises PhD students. She co-leads projects on language sensing for dementia monitoring and diagnosis (https://www.dcs.warwick.ac.uk/langsensing/), opinion summarisation from social media, and an AI evidence-based framework during pandemics (https://panacea2020.github.io/index.html).
Maria has a DPhil from the University of Oxford on learning pragmatic knowledge from text. Her work has contributed to advances in knowledge discovery from corpora, automation of scientific experimentation and automatic extraction of information from the scientific literature. She has published widely both in NLP and interdisciplinary venues. Past awards include an IBM Faculty Award for work on emotion sensing from heterogeneous mobile phone data, being a co-investigator on the EU Project PHEME, which studied the spread of rumours in social media (2014-2017) and an Early Career Fellowship from the Leverhulme Trust (2010-2013) on reasoning with scientific articles.
https://scholar.google.co.uk/citations?user=eys5GB4AAAAJ&hl=en
Lectures
Abstract TBA
Abstract TBA
Abstract TBA
Topics
Data Science, Global Optimization, Mathematical Modeling, Financial Applications, AI
Biography
Panos Pardalos was born in Drosato (Mezilo), Argitheas, in 1954 and graduated from Athens University (Department of Mathematics). He received his PhD (Computer and Information Sciences) from the University of Minnesota. He is a Distinguished Emeritus Professor in the Department of Industrial and Systems Engineering at the University of Florida, and an affiliated faculty member of the Biomedical Engineering and Computer & Information Science & Engineering departments.
Panos Pardalos is a world-renowned leader in Global Optimization, Mathematical Modeling, Energy Systems, Financial applications, and Data Sciences. He is a Fellow of AAAS, AAIA, AIMBE, EUROPT, and INFORMS and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, Panos Pardalos has been awarded the 2013 EURO Gold Medal prize bestowed by the Association for European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.”
Panos Pardalos has been awarded a prestigious Humboldt Research Award (2018-2019). The Humboldt Research Award is granted in recognition of a researcher’s entire achievements to date – fundamental discoveries, new theories, insights that have had significant impact on their discipline.
Panos Pardalos is also a member of several Academies of Sciences, and he holds several honorary PhD degrees and affiliations. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the International Journal of Global Optimization, Computational Management Science, and Springer Nature Operations Research Forum. He has published over 600 journal papers and edited/authored over 200 books. He is one of the most cited authors and has graduated 71 PhD students so far. Details can be found at www.ise.ufl.edu/pardalos
Panos Pardalos has lectured and given invited keynote addresses worldwide in countries including Austria, Australia, Azerbaijan, Belgium, Brazil, Canada, Chile, China, Czech Republic, Denmark, Egypt, England, France, Finland, Germany, Greece, Holland, Hong Kong, Hungary, Iceland, Ireland, Italy, Japan, Lithuania, Mexico, Mongolia, Montenegro, New Zealand, Norway, Peru, Portugal, Russia, South Korea, Singapore, Serbia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey, Ukraine, United Arab Emirates, and the USA.
https://scholar.google.com/citations?user=4e_KEdUAAAAJ&hl=en
Lectures
The Twin Support Vector Machine (TWSVM) is a powerful extension of the conventional Support Vector Machine (SVM) algorithm, designed to address classification tasks on real-world data sets. Developed as an enhancement to traditional SVMs, TWSVM offers improved robustness and efficiency, making it a compelling choice for various machine learning applications.

In this lecture, we delve into the theoretical foundations and practical implications of the Twin Support Vector Machine. We begin by elucidating the fundamental concepts behind SVMs and the motivation for the development of TWSVM. We explore the key principles underpinning TWSVM, including the formulation of the twin optimization problems and the incorporation of twin constraints for enhanced classification performance.

Furthermore, we examine the algorithmic intricacies of TWSVM, elucidating its training procedure, kernelization techniques, and model evaluation methods. We highlight how TWSVM effectively addresses the challenges posed by high-dimensional datasets, thereby enhancing its applicability across diverse real-world scenarios. Moreover, we investigate recent advancements and extensions of TWSVM, particularly focusing on optimization techniques developed to further improve its performance and scalability.

Through this lecture, participants will gain a comprehensive understanding of the Twin Support Vector Machine and its significance in modern machine learning research and applications. We aim to equip attendees with the knowledge and insights necessary to leverage TWSVM effectively in their data analysis endeavors, fostering innovation and advancement in the field of computational intelligence.
References:
1. Moosaei, Hossein, Fatemeh Bazikar, Milan Hladík, and Panos M. Pardalos. "Sparse least-squares Universum twin bounded support vector machine with adaptive Lp-norms and feature selection." Expert Systems with Applications (2024): 123378. https://doi.org/10.1016/j.eswa.2024.123378
2. Moosaei, Hossein, Fatemeh Bazikar, and Panos M. Pardalos. "An improved multi-task least squares twin support vector machine." Annals of Mathematics and Artificial Intelligence (2023): 1-21. https://link.springer.com/article/10.1007/s10472-023-09877-8
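To make the twin-hyperplane idea concrete, here is a minimal sketch of a least-squares twin SVM in plain Python. It follows the closed-form solution used in least-squares TWSVM variants (each class gets its own hyperplane, fitted close to its own points and pushed away from the other class); the toy data, parameter choices, and helper names are illustrative, and a serious implementation would use a linear-algebra library and kernelization:

```python
def solve(M, v):
    """Solve M u = v by Gaussian elimination with partial pivoting (small systems)."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (A[r][n] - sum(A[r][c] * u[c] for c in range(r + 1, n))) / A[r][r]
    return u

def fit_twin_lssvm(X_pos, X_neg, c=1.0):
    """Least-squares twin SVM: one hyperplane per class. Each minimizes
    (1/2)||E u||^2 + (c/2)||F u + e||^2, where E = [X_own 1], F = [X_other 1],
    so it hugs its own class and keeps the other class near the -1 side."""
    def plane(own, other):
        E = [x + [1.0] for x in own]
        F = [x + [1.0] for x in other]
        d = len(E[0])
        # Normal equations: (E^T E + c F^T F) u = -c F^T e
        M = [[sum(e[i] * e[j] for e in E) + c * sum(f[i] * f[j] for f in F)
              for j in range(d)] for i in range(d)]
        v = [-c * sum(f[i] for f in F) for i in range(d)]
        return solve(M, v)  # u = [w..., b]
    return plane(X_pos, X_neg), plane(X_neg, X_pos)

def predict(u_pos, u_neg, x):
    """Assign x to the class whose hyperplane is nearer (normalized distance)."""
    def dist(u):
        w, b = u[:-1], u[-1]
        norm = sum(wi * wi for wi in w) ** 0.5
        return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm
    return +1 if dist(u_pos) <= dist(u_neg) else -1

# Toy 2-D data: two well-separated clusters.
pos = [[0.0, 0.0], [0.5, 0.2], [0.2, 0.6], [0.4, 0.4]]
neg = [[4.0, 4.0], [4.5, 3.8], [3.8, 4.4], [4.2, 4.1]]
u_pos, u_neg = fit_twin_lssvm(pos, neg)
print(predict(u_pos, u_neg, [0.3, 0.3]))  # point near the positive cluster
print(predict(u_pos, u_neg, [4.1, 4.0]))  # point near the negative cluster
```

Unlike a standard SVM, which solves one large quadratic program for a single separating hyperplane, the twin formulation solves two smaller problems, which is the source of the efficiency gains discussed in the lecture.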
Topics
AI, Autonomous Systems, Applications to Self-Driving Vehicles, Autonomous Networks
Biography
Joseph Sifakis is Emeritus Research Director at Verimag, a laboratory in the area of safety-critical systems, which he directed for 13 years. He was a full professor at Ecole Polytechnique Fédérale de Lausanne (EPFL) from 2011 to 2016.
Joseph Sifakis has made significant contributions to the design of reliable systems in many application areas, including avionics and space systems, telecommunications, and production systems. His current research focuses on autonomous systems, in particular self-driving cars and autonomous telecommunication systems.
In 2007, he received the Turing Award for his contribution to the theory and application of model checking, the most widely used system verification technique.
Joseph Sifakis is a member of six academies and a frequent speaker in international scientific and technical events.
Recent book: Joseph Sifakis, “Understanding and Changing the World: From Information to Knowledge and Intelligence”, Springer, 2022.
Awards
Turing Award, 2007
Leonardo da Vinci Medal, 2012
Grand Officer of the National Order of Merit, France, 2008
Commander of the Legion of Honor, France, 2011
Member of the French Academy of Sciences, 2010
Member of Academia Europaea, 2008
Member of the French Academy of Engineering, 2008
Member of the American Academy of Arts and Sciences, 2015
Member of the National Academy of Engineering, 2017
Foreign member of the Chinese Academy of Sciences, 2019
Lectures
At present, there is a great deal of confusion as to the final objective of AI. Some see Artificial General Intelligence as the ultimate and imminent goal, suggesting that it can be achieved through machine learning and its further developments.
We argue that despite the spectacular rise of AI, we still have weak AI that only provides building blocks for intelligent systems, mainly intelligent assistants that interact with users in question-answer mode.
A bold step toward human-level intelligence would be the advent of autonomous systems resulting from the marriage between AI and ICT envisaged in particular by the IoT. In this evolution, the ability to guarantee the trustworthiness of AI systems – reputed to be “black boxes” very different from traditional digital systems – will determine their degree of acceptance and integration in critical applications.
We review the current state of the art in AI and its possible evolution, including:
- Avenues for the development of future intelligent systems, in particular autonomous systems as the result of the convergence between AI and ICT;
- The inherent limitations of the validation of AI systems due to their lack of explainability, and the case for new theoretical foundations to extend existing rigorous validation methods;
- Complementarity between human and machine intelligence, which can lead to a multitude of intelligence concepts reflecting the ability to combine data-based and symbolic knowledge to varying degrees.
In light of this analysis, we conclude with a discussion of AI-induced risks, their assessment and regulation.
Topics
Artificial Intelligence, Data Science
Biography
Pascal Van Hentenryck is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. Prior to this appointment, he was a professor of Computer Science at Brown University for about 20 years, led the optimization research group (about 70 people) at National ICT Australia (NICTA) until its merger with CSIRO, and was the Seth Bonder Collegiate Professor of Engineering at the University of Michigan. He is also an Honorary Professor at the Australian National University. Van Hentenryck is a Fellow of AAAI (the Association for the Advancement of Artificial Intelligence) and INFORMS (the Institute for Operations Research and the Management Sciences). He has been awarded two honorary doctoral degrees, from the University of Louvain and the University of Nantes, as well as the IFORS Distinguished Lecturer Award, the Philip J. Bray Award for teaching excellence in the physical sciences at Brown University, the ACP Award for Research Excellence in Constraint Programming, the ICS INFORMS Prize for Research Excellence at the Intersection of Computer Science and Operations Research, and an NSF National Young Investigator Award. He received a Test of Time Award (20 years) from the Association for Logic Programming and numerous best paper awards, including at IJCAI and AAAI. Van Hentenryck has given plenary and semi-plenary talks at the International Joint Conference on Artificial Intelligence (twice), the International Symposium on Mathematical Programming, the SIAM Optimization Conference, the annual INFORMS conference, NIPS, and many other conferences. He was program co-chair of the AAAI-19 conference, a premier conference in Artificial Intelligence. Van Hentenryck’s research focuses on Artificial Intelligence, Data Science, and Operations Research.
His current focus is to develop methodologies, algorithms, and systems for addressing challenging problems in mobility, energy systems, resilience, and privacy. In the past, his research focused on optimization and the design and implementation of innovative optimization systems, including the CHIP programming system (a Cosytec product), the foundation of all modern constraint programming systems and the optimization programming language OPL (now an IBM Product). Van Hentenryck has also worked on computational biology, numerical analysis, and programming languages, publishing in premier journals in these areas.
https://scholar.google.com/citations?user=GxFQz-4AAAAJ&hl=en
Lectures
Topics
Machine Learning, Artificial Intelligence, Statistics
Biography
Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Distinguished Scientist at MSR. He is a fellow at the Canadian Institute for Advanced Research (CIFAR) and the European Lab for Learning and Intelligent Systems (ELLIS), where he also serves on the founding board. His previous appointments include VP at Qualcomm Technologies, professor at UC Irvine, postdoc at U. Toronto and UCL under the supervision of Prof. Geoffrey Hinton, and postdoc at Caltech under the supervision of Prof. Pietro Perona. He finished his PhD in theoretical high-energy physics under the supervision of Nobel laureate Prof. Gerard ’t Hooft.
Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015, has served on the advisory board of the NeurIPS Foundation since 2015, and was program chair and general chair of NeurIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. He is the recipient of the ECCV Koenderink Prize in 2010 and the ICML Test of Time Award in 2021. He directs the Amsterdam Machine Learning Lab (AMLAB) and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA).
https://scholar.google.com/citations?hl=en&user=8200InoAAAAJ&view_op=list_works&sortby=pubdate