Google Tech Talk
Presented by: Matthew P. Walker, Ph.D.
Abstract:
We spend one third of our lives asleep, yet doctors and scientists still have no complete understanding as to why. It is one of the last great scientific mysteries. This talk will describe new discoveries suggesting that, far from being a time when the brain is dormant, sleep is a highly active process critical for a constellation of different functions. These include the importance of sleep for learning, memory and brain plasticity. Furthermore, a role for sleep in intelligently synthesizing new memories together will be examined, the result of which is next-day creative insights. Finally, a new role for sleep in regulating emotional brain networks will be discussed, optimally preparing us for next day social and psychological challenges.
Bio:
Matthew Walker earned his PhD in neurophysiology from the Medical Research Council in London, UK, and subsequently became an Assistant Professor of Psychology at Harvard Medical School in 2004. He is currently an Associate Professor of Psychology and Neuroscience at the University of California, Berkeley. He is the recipient of funding awards from the National Science Foundation and the National Institutes of Health. In 2006 he became a Kavli Fellow of the National Academy of Sciences. His research examines the impact of sleep on human brain function in healthy and clinical populations.
A Google TechTalk about the book "AI Snake Oil", presented by its coauthor Sayash Kapoor, 2024-11-21
ABSTRACT: AI needs to be categorized into two primary branches—predictive and generative—to critically examine their societal impacts, risks, and potential benefits. Predictive AI, which uses algorithmic forecasting to make decisions in domains such as criminal justice, hiring, and healthcare, presents significant ethical concerns. Evidence shows that predictive AI can unjustly restrict individuals’ life opportunities, owing to fundamental limitations in forecasting human behavior. These limitations are grounded in sociological principles that underscore the inherent unpredictability of human actions. In contrast, generative AI is posited as a transformative tool for knowledge work, with far-reaching positive implications if carefully managed. Despite its disruptive initial rollout and prevalent misuse, generative AI holds the potential to enhance productivity and creativity across numerous fields. However, the rapid and unregulated dissemination of this technology—comparable to providing the public with an unrestricted power tool—suggests a pressing need for structured, ethical frameworks to ensure its responsible application.
ABOUT THE SPEAKER: Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is listed in TIME's 100 Most Influential People in AI. His research focuses on the societal impact of AI. He previously worked on AI in industry and academia, at Facebook, Columbia University, and EPFL in Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW. https://www.cs.princeton.edu/~sayashk/
Organizer: Jan Matusiewicz (Google) https://medium.com/@jan.matusiewicz
AI Snake Oil: https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil?srsltid=AfmBOoo2w8r0_MAuHpALx18d4Zst3KQJhuKt0N-lXaC8PoR9DK4i9kds
A Google TechTalk, presented by Tom Payne, 2024-07-03
2024 Zürich Go Meetup.
If you’re interested in attending a Zürich Go meetup, please sign up at meetup.com
A Google TechTalk, presented by Jordan Frery, 2024-05-08
ABSTRACT: In the rapidly evolving field of artificial intelligence, the commitment to data privacy and intellectual property protection during Machine Learning operations has become a foundational necessity for society and businesses handling sensitive data. This is especially critical in sectors such as healthcare and finance, where ensuring confidentiality and safeguarding proprietary information are not just ethical imperatives but essential business requirements.
This presentation examines the role of Fully Homomorphic Encryption (FHE), based on the open-source library Concrete ML, in advancing secure and privacy-preserving ML applications.
We begin with an overview of Concrete ML, emphasizing how practical FHE for ML was made possible. This sets the stage for discussing how FHE is applied to ML inference, demonstrating its capability to perform secure inference on encrypted data across various models. After inference, we turn to another important FHE application, FHE training, and show how encrypted data from multiple sources can be used for training without compromising individual users' privacy.
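To give a feel for the inference flow described above, here is a deliberately simplified sketch. Real FHE, as used in Concrete ML, relies on lattice-based cryptography; the toy additive blinding below only mimics the shape of the protocol (the server evaluates a linear model without ever seeing the plaintext features, and the client removes the mask afterward). All function names here are illustrative, not part of any real library.

```python
import random

def blind(x, rng):
    """Client side: mask each feature with a random value (a stand-in for encryption)."""
    masks = [rng.random() for _ in x]
    return [xi + mi for xi, mi in zip(x, masks)], masks

def server_linear(weights, blinded_x, bias):
    """Server side: evaluates w.x + b on masked data; linear ops commute with the mask."""
    return sum(w * bx for w, bx in zip(weights, blinded_x)) + bias

def unblind(masked_score, weights, masks):
    """Client side: subtracts the known mask contribution w.masks to recover w.x + b."""
    return masked_score - sum(w * m for w, m in zip(weights, masks))

rng = random.Random(0)
x = [1.0, 2.0, 3.0]            # private client features
w, b = [0.5, -1.0, 2.0], 0.25  # server's model parameters
blinded, masks = blind(x, rng)
score = unblind(server_linear(w, blinded, b), w, masks)
# score equals w.x + b, computed as if on plaintext
```

The key property, shared with real FHE at a much stronger security level, is that the server's computation is oblivious: it operates only on masked values, yet the client recovers the exact result.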
FHE has strong synergies with other technologies, in particular Federated Learning: we show how this integration strengthens the privacy-preserving properties of ML models across the full pipeline, both training and inference.
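The multi-party privacy idea behind combining encryption with Federated Learning can be illustrated with additive secret sharing, a classic building block for secure aggregation. This is a conceptual sketch, not Zama's actual pipeline: each client splits its (quantized) model update into random shares, and only the sum over all clients is ever reconstructed.

```python
import random

MOD = 2**31

def share(value, n_parties, rng):
    """Split a value into n additive shares that sum to it modulo MOD.
    Any n-1 shares look uniformly random and reveal nothing."""
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

rng = random.Random(42)
client_updates = [7, 11, 5]   # e.g. quantized gradient values, one per client
n = len(client_updates)

# Each client splits its update into shares, one per aggregator.
all_shares = [share(u, n, rng) for u in client_updates]

# Each aggregator sums the shares it receives; no single party sees a raw update.
partial_sums = [sum(col) % MOD for col in zip(*all_shares)]
total = sum(partial_sums) % MOD   # the global sum of all client updates
```

Only the aggregate (here 7 + 11 + 5) is reconstructed, which is exactly the quantity a federated training round needs; individual contributions stay hidden.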
Finally, we address the application of FHE in generative AI and the development of Hybrid FHE models (which are the subject of our RSA 2024 presentation). This approach represents a strategic balance between intellectual property protection, user privacy and computational performance, offering solutions to the challenges of securing one of the most important AI applications of our times.
SPEAKERS:
Jordan Frery, Concrete ML Tech Lead and Research at Zama
Benoit Chevallier-Mames, VP Cloud and ML at Zama
DATE:
May 8 2024
A Google Algorithms Seminar TechTalk, presented by Ziming Liu, 2024-06-04
ABSTRACT: Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
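The core architectural change described in the abstract, learnable univariate functions on edges instead of scalar weights, can be sketched in a few lines. This toy uses a linear combination of Gaussian bumps as each edge function; the paper's KANs use B-splines plus a SiLU base function, so treat this purely as an illustration of the structure.

```python
import math
import random

def make_kan_layer(n_in, n_out, n_basis=8, seed=0):
    """KAN-style layer: each edge (i, j) gets its own learnable univariate
    function phi_ij, parametrized here by coefficients over Gaussian bumps
    (the paper uses B-splines). There are no linear weights at all."""
    rng = random.Random(seed)
    centers = [-2 + 4 * k / (n_basis - 1) for k in range(n_basis)]
    coef = [[[rng.gauss(0, 0.1) for _ in range(n_basis)]
             for _ in range(n_out)] for _ in range(n_in)]

    def phi(i, j, x):
        # The learnable activation function living on edge (i, j).
        return sum(c * math.exp(-(x - t) ** 2)
                   for c, t in zip(coef[i][j], centers))

    def forward(x):
        # Output node j is a plain sum of its incoming edge functions.
        return [sum(phi(i, j, x[i]) for i in range(n_in))
                for j in range(n_out)]
    return forward

layer = make_kan_layer(n_in=3, n_out=2)
y = layer([0.0, 0.5, -1.0])   # a 2-dimensional output
```

Training would adjust the per-edge coefficients by gradient descent; because each phi_ij is a one-dimensional function, it can be plotted directly, which is the source of the interpretability claims above.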
ABOUT THE SPEAKER: Ziming Liu is a fourth-year PhD student at MIT & IAIFI, advised by Prof. Max Tegmark. His research interests lie in the intersection of AI and physics (science in general):
Physics of AI: “AI as simple as physics”
Physics for AI: “AI as natural as physics”
AI for physics: “AI as powerful as physicists”
He publishes papers both in top physics journals and AI conferences. He serves as a reviewer for Physical Review, NeurIPS, ICLR, IEEE, etc. He co-organized the AI4Science workshops. His research has a strongly interdisciplinary nature, e.g., Kolmogorov-Arnold networks (Math for AI), Poisson Flow Generative Models (Physics for AI), Brain-inspired modular training (Neuroscience for AI), understanding Grokking (physics of AI), conservation laws and symmetries (AI for physics).
A Google TechTalk, presented by Xinyi Wu, 2024-01-18
A Google Algorithm Seminar. ABSTRACT: Oversmoothing in Graph Neural Networks (GNNs) refers to the phenomenon where increasing network depth leads to homogeneous node representations. Over the last few years, it has remained one of the central challenges in building more powerful GNNs. In this talk, I will discuss two recent papers on this phenomenon and provide some new insights.
The first work studies why oversmoothing happens at a relatively shallow depth in GNNs. By carefully analyzing the oversmoothing mechanisms in a stylized formulation, we distinguish between adverse mixing that homogenizes nodes across different classes and beneficial denoising within the same class. We quantify these two effects on random graphs sampled from the Contextual Stochastic Block Model (CSBM) and show that oversmoothing occurs once the mixing effect starts to dominate the denoising effect. We establish that the number of layers required for this transition is O(log N / log log N) for sufficiently dense graphs with N nodes. We also extend our analysis to study the effects of Personalized PageRank (PPR), or equivalently, the effects of initial residual connections on oversmoothing, and shed light on when and why they might not be an ideal solution to the problem.
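The basic phenomenon under study is easy to reproduce numerically. The sketch below (a toy, not the paper's CSBM analysis) stacks mean-aggregation layers on a small graph and tracks the spread of node features: repeated neighborhood averaging drives all nodes toward the same value, which is exactly the homogenization the abstract calls oversmoothing.

```python
import statistics

# A small connected path graph given as adjacency lists (with self-loops).
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}
features = {0: 4.0, 1: -2.0, 2: 1.0, 3: 7.0}  # scalar node features

def propagate(feats):
    """One GNN-style layer: replace each node's feature by its neighborhood mean."""
    return {v: sum(feats[u] for u in neighbors[v]) / len(neighbors[v])
            for v in neighbors}

spread = []
f = dict(features)
for depth in range(30):
    spread.append(statistics.pstdev(f.values()))
    f = propagate(f)
# spread shrinks toward 0 with depth: the representations homogenize
```

The decay rate is governed by the second eigenvalue of the (row-normalized) propagation matrix, which is why connectivity and depth jointly determine how quickly the distinguishing information in the features is lost.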
In the second work, we study oversmoothing in attention-based GNNs, such as Graph Attention Networks (GATs) and transformers. Treating attention-based GNNs as dynamical systems, our study demonstrates that the graph attention mechanism cannot prevent oversmoothing and loses expressive power exponentially. From a technical point of view, the proposed framework significantly extends the existing results on oversmoothing, and can account for asymmetric, state-dependent and time-varying aggregation operators and a wide range of common nonlinear activation functions, such as ReLU, LeakyReLU, GELU and SiLU.
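The dynamical-systems view of attention-based aggregation can also be illustrated directly. In the toy below (a GAT-flavored caricature, not the paper's framework), attention weights are state-dependent, computed by a softmax over neighbor similarity, yet each row of weights still sums to one, so repeated layers still pull the features together.

```python
import math
import statistics

neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}
feats = {0: 4.0, 1: -2.0, 2: 1.0, 3: 7.0}

def attention_layer(f, temperature=5.0):
    """Attention-style update: softmax over a (toy) similarity score, then a
    weighted mean of neighbor features. Weights change with the state, but
    every row is stochastic: nonnegative and summing to 1."""
    out = {}
    for v in neighbors:
        scores = [-abs(f[v] - f[u]) / temperature for u in neighbors[v]]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]   # stable softmax numerator
        z = sum(w)
        out[v] = sum(wi / z * f[u] for wi, u in zip(w, neighbors[v]))
    return out

f = dict(feats)
before = statistics.pstdev(f.values())
for _ in range(30):
    f = attention_layer(f)
after = statistics.pstdev(f.values())
# after << before: state-dependent attention still contracts toward consensus
```

Each update is a convex combination of neighbor values with strictly positive self-weight on a connected graph, so the features converge to a common value, mirroring (in miniature) the talk's claim that graph attention cannot by itself prevent oversmoothing.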
The talk is based on the following papers: https://arxiv.org/abs/2212.10701, https://arxiv.org/abs/2305.16102. Joint works with Amir Ajorlou (MIT), Zhengdao Chen (NYU/Google), William Wang (MIT), Zihui Wu (Caltech) and Ali Jadbabaie (MIT).
ABOUT THE SPEAKER: Xinyi Wu is a fourth-year Ph.D. student in the Institute for Data, Systems, and Society (IDSS) at the Massachusetts Institute of Technology (MIT), advised by Professor Ali Jadbabaie. She is affiliated with the Laboratory for Information and Decision Systems (LIDS). She is a recipient of the MIT Michael Hammer Fellowship. She is interested in applied graph theory, dynamical systems, networks, and machine learning on graphs. Her work on oversmoothing in GNNs was selected as a Spotlight paper at NeurIPS 2023.