A Google TechTalk about the book "AI Snake Oil", presented by its coauthor Sayash Kapoor, 2024-11-21
ABSTRACT: AI needs to be categorized into two primary branches—predictive and generative—to critically examine their societal impacts, risks, and potential benefits. Predictive AI, which uses algorithmic forecasting to make decisions in domains such as criminal justice, hiring, and healthcare, presents significant ethical concerns. Evidence shows that predictive AI can unjustly restrict individuals’ life opportunities, owing to fundamental limitations in forecasting human behavior. These limitations are grounded in sociological principles that underscore the inherent unpredictability of human actions. In contrast, generative AI is posited as a transformative tool for knowledge work, with far-reaching positive implications if carefully managed. Despite its disruptive initial rollout and prevalent misuse, generative AI holds the potential to enhance productivity and creativity across numerous fields. However, the rapid and unregulated dissemination of this technology—comparable to providing the public with an unrestricted power tool—suggests a pressing need for structured, ethical frameworks to ensure its responsible application.
ABOUT THE SPEAKER: Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is listed in TIME's 100 Most Influential People in AI. His research focuses on the societal impact of AI. He previously worked on AI in industry and academia, at Facebook, Columbia University, and EPFL in Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW. https://www.cs.princeton.edu/~sayashk/
Organizer: Jan Matusiewicz (Google) https://medium.com/@jan.matusiewicz
AI Snake Oil: https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil?srsltid=AfmBOoo2w8r0_MAuHpALx18d4Zst3KQJhuKt0N-lXaC8PoR9DK4i9kds
A Google TechTalk, presented by Tom Payne, 2024-07-03
2024 Zürich Go Meetup.
If you’re interested in attending a Zürich Go meetup, please sign up at meetup.com
A Google TechTalk, presented by Jordan Frery, 2024-05-08
ABSTRACT: In the rapidly evolving field of artificial intelligence, the commitment to data privacy and intellectual property protection during Machine Learning operations has become a foundational necessity for society and businesses handling sensitive data. This is especially critical in sectors such as healthcare and finance, where ensuring confidentiality and safeguarding proprietary information are not just ethical imperatives but essential business requirements.
This presentation examines the role of Fully Homomorphic Encryption (FHE), via the open-source library Concrete ML, in advancing secure and privacy-preserving ML applications.
We begin with an overview of Concrete ML, emphasizing how practical FHE for ML was made possible. This sets the stage for discussing how FHE is applied to ML inference, demonstrating its capability to perform secure inference on encrypted data across various models. After inference, we turn to another important FHE application, FHE training, and show how encrypted data from multiple sources can be used for training without compromising individual users' privacy.
FHE has strong synergies with other technologies, in particular Federated Learning: we show how this integration strengthens the privacy-preserving features of ML models across the full pipeline, both training and inference.
Finally, we address the application of FHE in generative AI and the development of Hybrid FHE models (which are the subject of our RSA 2024 presentation). This approach represents a strategic balance between intellectual property protection, user privacy and computational performance, offering solutions to the challenges of securing one of the most important AI applications of our times.
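As context for the homomorphic-computation idea above: a toy illustration of the homomorphic property that FHE generalizes, using the Paillier cryptosystem (additively but not fully homomorphic) with deliberately tiny, insecure parameters. This is a didactic sketch, not Concrete ML's API:

```python
# Paillier toy demo: multiplying ciphertexts adds the underlying plaintexts,
# so a server can compute on data it cannot read. Parameters here are
# insecure toy values; real keys use ~1024-bit or larger primes.
import math
import random

def keygen():
    p, q = 11, 13                      # toy primes
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                          # standard choice g = n + 1
    mu = pow(lam, -1, n)               # decryption helper, valid for g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 5), encrypt(pk, 7)
# Homomorphic addition: multiply the ciphertexts, then decrypt the sum.
assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 12
```

FHE schemes additionally support multiplication on ciphertexts (at much greater cost), which is what lets Concrete ML evaluate whole ML models on encrypted inputs.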
SPEAKERS:
Jordan Frery, Concrete ML Tech Lead and Research at Zama
Benoit Chevallier-Mames, VP Cloud and ML at Zama
DATE:
May 8 2024
A Google Algorithms Seminar TechTalk, presented by Ziming Liu, 2024-06-04
ABSTRACT: Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
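A minimal numpy sketch of a single KAN layer as described above (an illustration of the idea only, not the authors' implementation; Gaussian bumps stand in for the B-spline basis):

```python
# Each edge (i, j) carries its own learnable univariate function
# phi_{j,i}(x_i) = sum_b coef[j, i, b] * basis_b(x_i); node j sums its
# incoming edge outputs. There are no linear weight matrices anywhere.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_basis = 3, 2, 8
grid = np.linspace(-2, 2, n_basis)                # basis centers on a 1-D grid
coef = rng.normal(size=(n_out, n_in, n_basis))    # learnable "spline" coefficients

def kan_layer(x):
    """x: (batch, n_in) -> (batch, n_out)."""
    # basis evaluations: (batch, n_in, n_basis)
    basis = np.exp(-(x[:, :, None] - grid) ** 2)
    # sum over input index i and basis index b
    return np.einsum("nib,jib->nj", basis, coef)

x = rng.normal(size=(4, n_in))
y = kan_layer(x)
print(y.shape)  # (4, 2)
```

Training would fit `coef` by gradient descent, exactly as one fits the weight matrices of an MLP.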
ABOUT THE SPEAKER: Ziming Liu is a fourth-year PhD student at MIT & IAIFI, advised by Prof. Max Tegmark. His research interests lie in the intersection of AI and physics (science in general):
Physics of AI: “AI as simple as physics”
Physics for AI: “AI as natural as physics”
AI for physics: “AI as powerful as physicists”
He publishes papers both in top physics journals and AI conferences. He serves as a reviewer for Physical Review, NeurIPS, ICLR, IEEE, etc. He co-organized the AI4Science workshops. His research has a strongly interdisciplinary nature, e.g., Kolmogorov-Arnold networks (Math for AI), Poisson Flow Generative Models (Physics for AI), Brain-inspired modular training (Neuroscience for AI), understanding Grokking (physics of AI), conservation laws and symmetries (AI for physics).
A Google TechTalk, presented by Xinyi Wu, 2024-01-18
A Google Algorithm Seminar. ABSTRACT: Oversmoothing in Graph Neural Networks (GNNs) refers to the phenomenon where increasing network depth leads to homogeneous node representations. Over the last few years, it has remained one of the central challenges in building more powerful GNNs. In this talk, I will discuss two recent papers on this phenomenon and provide some new insights.
The first work studies why oversmoothing happens at a relatively shallow depth in GNNs. By carefully analyzing the oversmoothing mechanisms in a stylized formulation, we distinguish between adverse mixing that homogenizes nodes across different classes and beneficial denoising within the same class. We quantify these two effects on random graphs sampled from the Contextual Stochastic Block Model (CSBM) and show that oversmoothing occurs once the mixing effect starts to dominate the denoising effect. We establish that the number of layers required for this transition is O(log N / log log N) for sufficiently dense graphs with N nodes. We also extend our analysis to study the effects of Personalized PageRank (PPR), or equivalently, the effects of initial residual connections on oversmoothing, and shed light on when and why they might not be an ideal solution to the problem.
In the second work, we study oversmoothing in attention-based GNNs, such as Graph Attention Networks (GATs) and transformers. Treating attention-based GNNs as dynamical systems, our study demonstrates that the graph attention mechanism cannot prevent oversmoothing and loses expressive power exponentially. From a technical point of view, the proposed framework significantly extends the existing results on oversmoothing, and can account for asymmetric, state-dependent and time-varying aggregation operators and a wide range of common nonlinear activation functions, such as ReLU, LeakyReLU, GELU and SiLU.
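The homogenization phenomenon above is easy to reproduce numerically. A toy sketch (plain row-normalized neighborhood averaging on a ring graph, not the papers' CSBM or attention settings):

```python
# Repeatedly applying degree-normalized averaging on a connected graph
# drives all node features toward a common value: oversmoothing.
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = np.zeros((n, n))
for i in range(n):
    for d in (-2, -1, 0, 1, 2):       # ring graph: self-loop + 2 hops each way
        A[i, (i + d) % n] = 1.0
P = A / A.sum(axis=1, keepdims=True)  # row-normalized propagation (all degrees 5)

X = rng.normal(size=(n, 4))           # initial node features
spread = lambda Z: np.linalg.norm(Z - Z.mean(axis=0))
before = spread(X)
for _ in range(30):                   # 30 rounds of message passing
    X = P @ X
after = spread(X)
print(after < 0.1 * before)  # True: node features have homogenized
```

The contraction rate is governed by the second-largest eigenvalue modulus of the propagation matrix, which is what the depth analysis above quantifies on CSBM graphs.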
The talk is based on the following papers: https://arxiv.org/abs/2212.10701, https://arxiv.org/abs/2305.16102. Joint works with Amir Ajorlou (MIT), Zhengdao Chen (NYU/Google), William Wang (MIT), Zihui Wu (Caltech) and Ali Jadbabaie (MIT).
ABOUT THE SPEAKER: Xinyi Wu is a fourth-year Ph.D. student in the Institute for Data, Systems, and Society (IDSS) at Massachusetts Institute of Technology (MIT), advised by Professor Ali Jadbabaie. She is affiliated with the Laboratory for Information and Decision Systems (LIDS). She is a recipient of the MIT Michael Hammer Fellowship. She is interested in applied graph theory, dynamical systems, networks, and machine learning on graphs. Her work on oversmoothing in GNNs has been awarded as Spotlight paper in NeurIPS 2023.
A Software Design Tech Talk presented by Matthias Felleisen on 2023-02-23. Hosted by Google's Software Design Education team.
ABSTRACT: Software is a message from one developer to other developers across time. As such, developing software incurs a social debt to all those developers who will touch this software in the future, be that an older version of the original creator or someone who isn't even born yet. Understood this way, software development poses two challenges: (1) companies must learn to identify people who understand this idea, because being able to "grind leetcode" doesn't qualify; (2) colleges must create alternative programming curricula to turn students into apprentice developers, because the traditional curriculum doesn't.
This talk will present my answer to the second challenge. I have spent the last 25 years creating undergraduate programming courses that are all about software-as-a-message, and the talk will provide an overview of this alternative curriculum approach. The first challenge remains yours to overcome.
About the Speaker: Matthias Felleisen
Matthias Felleisen is Trustee Professor of Computer Science at Northeastern University. He is also a Fellow of the ACM, received the organization's Karl Karlstrom Award for his work on curriculum development, and was honored with the ACM SIGPLAN Lifetime Award for his research on programming languages.
A Google TechTalk, presented by Gowthami Somepalli, 2023-08-09
Abstract: Cutting-edge diffusion models produce high-quality and customizable images, enabling their use for commercial art and graphic design purposes. First, I will discuss our study of various frameworks to detect replication. Then, I will show how we used these frameworks to identify memorization in Stable Diffusion 1.4 model. In the second part, I will discuss various factors contributing to memorization in diffusion models. While it is widely believed that duplicated images in the training set are responsible for content replication at inference time, I will show results on how the text conditioning of the model also plays an important role. Lastly, I will discuss several techniques we proposed based on these findings for reducing data replication in both training and inference times.
A Google TechTalk, presented by Georgy Noarov, 2023-11-16
Google Algorithms Seminar - ABSTRACT: In predictions-to-decisions pipelines, statistical forecasts are useful insofar as they are a trustworthy guide to downstream rational decision making.
Towards this, we study the problem of making online predictions of an adversarially chosen high-dimensional state that are unbiased subject to an arbitrary collection of conditioning events, with the goal of tailoring these events to downstream decision makers who will use these predictions to inform their actions. We give efficient general algorithms for solving this problem, and a number of applications that stem from choosing an appropriate set of conditioning events, including:
(1) tractable algorithms with strong no-regret guarantees over large action spaces, (2) a high-dimensional best-in-class (omniprediction) result, (3) fairness guarantees of various flavors, and (4) a novel framework for uncertainty quantification in multiclass settings.
For example, we can efficiently produce predictions targeted at any polynomially many decision makers, offering each of them optimal swap regret if they simply best respond to our predictions. Generalizing this, in the online combinatorial optimization setting we obtain the first efficient algorithms that guarantee, for up to polynomially many decision makers, no regret on any polynomial number of subsequences that can depend on their actions as well as any external context. As an illustration, for the online routing problem this easily implies, for the first time, efficiently obtainable no-swap-regret guarantees over all exponentially many paths that make up an agent's action space (and this can be obtained for multiple agents at once); by contrast, prior no-swap-regret algorithms such as Blum-Mansour would be intractable in this setting as they need to enumerate the exponentially large action space. Moreover, our results imply novel regret guarantees for extensive-form games.
Turning to uncertainty quantification in ML, we show how our methods let us estimate (in the online adversarial setting) multiclass/multilabel probability vectors in a transparent and trustworthy fashion: in particular, downstream prediction set algorithms (i.e., models that predict multiple labels at once rather than a single one) will be incentivized to simply use our estimated probabilities as if they were the true conditional class probabilities, and their predictions will be guaranteed to satisfy multigroup fairness and other "conditional coverage" guarantees. This gives a powerful new alternative to well-known set-valued prediction paradigms such as conformal and top-K prediction. Moreover, our predictions can be guaranteed to be "best-in-class", i.e., to beat any polynomial collection of other (e.g. NN-based) multiclass vector predictors, simultaneously as measured by all Lipschitz Bregman loss functions (including L2 loss, cross-entropy loss, etc.). This can be interpreted as a high-dimensional omniprediction result.
ABOUT THE SPEAKER: Georgy Noarov is a CS PhD student at the University of Pennsylvania, advised by Michael Kearns and Aaron Roth. He is broadly interested in theoretical CS and ML, with particular focus on fair/robust/trustworthy ML, online learning, algorithmic game theory, and uncertainty quantification. He obtained his B.A. in Mathematics from Princeton University, where his advisors were Mark Braverman and Matt Weinberg. He has received several academic awards, and his research has been supported by an Amazon Gift for Research in Trustworthy AI.
A Google TechTalk, presented by Clayton Sanford, 2023-07-18
Google Algorithms Seminar - ABSTRACT: Attention layers, as commonly used in transformers, form the backbone of modern deep learning, yet there is little mathematical work detailing their benefits and deficiencies as compared with other architectures. In this talk, I'll present both positive and negative results on the representation power of attention layers, with a focus on intrinsic complexity parameters such as width, depth, and embedding dimension. On the positive side, I'll present a sparse averaging task, where both recurrent networks and feedforward networks have complexity scaling polynomially in the input size, whereas transformers scale merely logarithmically in the input size. On the negative side, I'll present a triple detection task, where attention layers in turn have complexity scaling linearly in the input size. I'll discuss these results and some of our proof techniques, which emphasize the value of communication complexity in the analysis of transformers. Based on joint work with Daniel Hsu and Matus Telgarsky.
Bio: Clayton Sanford is an incoming 5th (and final) year PhD student at Columbia studying machine learning theory. His work focuses primarily on the representational properties and inductive biases of neural networks. He has additionally worked on learning combinatorial algorithms with transformers (as a Microsoft Research intern this summer) and on climate modeling with ML (as an Allen Institute for AI intern in summer 2022).
A Google TechTalk, presented by Sarath Shekkizhar, 2023-07-10
Google Algorithms Seminar ABSTRACT: Neighborhood and graph construction is fundamental in data analysis and machine learning. k-nearest neighbor (kNN) and epsilon-neighborhood methods are the most commonly used methods for this purpose due to their computational simplicity. However, the interpretation and the choice of the parameters k and epsilon, though receiving much attention over the years, still remain ad hoc.
In this talk, I will present an alternative view of neighborhoods where I demonstrate that neighborhood definitions are sparse signal approximation problems. Specifically, we will see that (1) kNN and epsilon-neighborhood approaches are sub-optimal thresholding-based representations; (2) an improved and efficient definition based on basis pursuits exists, namely, non-negative kernel regression (NNK); and (3) selecting orthogonal signals for sparse approximation corresponds to the selection of neighbors that are not geometrically redundant. NNK neighborhoods are adaptive, sparse, and exhibit superior performance in graph-based signal processing and machine learning.
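The thresholding-vs-basis-pursuit contrast above can be illustrated numerically. A sketch under simplifying assumptions (Gaussian kernel, non-negative least squares via scipy; a toy stand-in for the speaker's NNK formulation):

```python
# kNN with k=3 keeps all three candidates below, even though two of them
# are nearly duplicates. Solving the non-negative kernel regression
#   min_{theta >= 0} ||phi(q) - sum_i theta_i phi(x_i)||^2
# instead assigns the geometrically redundant neighbor weight zero.
import numpy as np
from scipy.optimize import nnls

def gaussian_kernel(a, b):
    return np.exp(-np.sum((a - b) ** 2))

query = np.array([0.0, 0.0])
cands = np.array([[1.0, 0.0],      # neighbor A
                  [1.05, 0.05],    # near-duplicate of A: geometrically redundant
                  [-1.0, 0.0]])    # neighbor B, opposite direction

K = np.array([[gaussian_kernel(x, y) for y in cands] for x in cands])
k_q = np.array([gaussian_kernel(query, x) for x in cands])

# The kernel objective reduces to an NNLS problem after a Cholesky
# factorization of K (K = L L^T).
L = np.linalg.cholesky(K + 1e-10 * np.eye(3))
theta, _ = nnls(L.T, np.linalg.solve(L, k_q))
print(theta)  # the near-duplicate neighbor receives weight 0
```

The zero weights are exactly the "not geometrically redundant" selection the abstract describes: a neighbor adds nothing once a closer neighbor already covers its direction.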
We will then discuss a k-means-like algorithm where we leverage the polytope geometry and sparse coding view of NNK for data summarization and outlier detection. I will conclude by discussing a graph framework for an empirical understanding of deep neural networks (DNNs). The developed graph metrics characterize the input-output geometry of the embedding spaces induced in DNNs and provide insights into the similarities and differences between models, their invariances, and their generalization and transfer learning performances.
Bio: Sarath Shekkizhar received his bachelor's (Electronics and Communication) and double master's (Electrical Engineering, Computer Science) degrees from the National Institute of Technology, Tiruchirappalli, India, and the University of Southern California (USC), Los Angeles, USA, respectively. He recently graduated from Antonio Ortega's group with his doctoral degree in Electrical and Computer Engineering at USC. He is the recipient of the IEEE best student paper award at ICIP 2020 and was named a Rising Star in Signal Processing at ICASSP 2023. His research interests include graph signal processing, non-parametric methods, and machine learning.
A Google TechTalk, presented by Neel Nanda, 2023-06-20
Google Algorithms Seminar - ABSTRACT: Mechanistic Interpretability is the study of reverse engineering the learned algorithms in a trained neural network, in the hopes of applying this understanding to make powerful systems safer and more steerable. In this talk Neel will give an overview of the field, summarise some key works, and outline what he sees as the most promising areas of future work and open problems. This will touch on techniques in causal abstraction and mediation analysis, understanding superposition and distributed representations, model editing, and studying individual circuits and neurons.
About the Speaker: Neel works on the mechanistic interpretability team at Google DeepMind. He previously worked with Chris Olah at Anthropic on the transformer circuits agenda, and has done independent work on reverse-engineering modular addition and using this to understand grokking.
A Google TechTalk, presented by Ashok Cutkosky, 2023-02-15
ABSTRACT: Most algorithms for privacy preserving stochastic optimization fall into two camps. In the first camp, one applies some simple noisy preprocessing to privatize gradients that are then used with a standard non-private algorithm. This technique is relatively simple to implement and analyze, but may result in suboptimal convergence properties. In the second camp, one carefully designs a new optimization algorithm with privacy in mind in order to achieve the best possible convergence rates. This method can achieve optimal theoretical properties, but is difficult to generalize and employ in tandem with the heuristic improvements popular in optimization practice. The ideal solution would be a simple way to incorporate privacy into *any* non-private algorithm in such a way that optimal non-private convergence rates translate into optimal private convergence rates.
In this talk, I will discuss significant progress towards the ideal goal: a variation on the classical online-to-batch conversion that converts any online optimization algorithm into a private stochastic optimization algorithm for which optimal regret guarantees imply optimal convergence guarantees.
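The "first camp" above, noisy preprocessing of gradients, can be sketched in a few lines. A generic DP-SGD-style step (the function name and parameters are hypothetical illustrations, and this is not the speaker's online-to-batch conversion):

```python
# One private optimization step: clip each per-example gradient, average,
# and add Gaussian noise calibrated to the clipping bound, so the update
# can be fed to any downstream (non-private) optimizer rule.
import numpy as np

rng = np.random.default_rng(0)

def private_step(w, grads, clip=1.0, noise_mult=1.0, lr=0.1):
    """Hypothetical DP-SGD-style step on parameters w."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in grads]
    noisy = np.mean(clipped, axis=0) + rng.normal(
        scale=noise_mult * clip / len(grads), size=w.shape)
    return w - lr * noisy

w = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # per-example gradients
w = private_step(w, grads)
print(w.shape)  # (2,)
```

As the abstract notes, this recipe is simple but may converge suboptimally; the talk's online-to-batch conversion aims to privatize any online algorithm while preserving its optimal rates.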
A Google TechTalk, presented by Piotr Sankowski, 2023-03-30
ABSTRACT: The Shapley value -- a fundamental game-theoretic solution concept -- has recently become one of the main tools used to explain predictions of tree ensemble models. Another well-known game-theoretic solution concept is the Banzhaf value. Although the Banzhaf value is closely related to the Shapley value, its properties w.r.t. feature attribution have not been understood equally well. This paper shows that, for tree ensemble models, the Banzhaf value offers some crucial advantages over the Shapley value while providing similar feature attributions.
In particular, we first give an optimal O(TL+n) time algorithm for computing the Banzhaf value-based attribution of a tree ensemble model's output. Here, T is the number of trees, L is the maximum number of leaves in a tree, and n is the number of features. In comparison, the state-of-the-art Shapley value-based algorithm runs in O(TLD^2+n) time, where D denotes the maximum depth of a tree in the ensemble.
Next, we experimentally compare the Banzhaf and Shapley values for tree ensemble models. Both methods deliver essentially the same average importance scores for the studied datasets using two different tree ensemble models (the sklearn implementation of decision trees and the xgboost implementation of gradient-boosted decision trees). However, our results indicate that, on top of being computable faster, the Banzhaf value is more numerically robust than the Shapley value.
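The Banzhaf value referenced above averages a feature's marginal contribution uniformly over all 2^(n-1) subsets of the other features (unlike the Shapley value's size-dependent weights). A brute-force sketch of the definition, on a hypothetical toy value function rather than via the paper's O(TL+n) tree algorithm:

```python
# Exact Banzhaf values by exhaustive enumeration (exponential time; fine
# for tiny n, which is why the paper's polynomial tree algorithm matters).
from itertools import combinations

def banzhaf(v, n):
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                total += v(set(S) | {i}) - v(set(S))  # marginal contribution
        values.append(total / 2 ** (n - 1))           # uniform weighting
    return values

# Hypothetical "model": features 0 and 1 only pay off together, feature 2
# contributes independently.
def v(S):
    return 3.0 * (0 in S and 1 in S) + 1.0 * (2 in S)

print(banzhaf(v, 3))  # [1.5, 1.5, 1.0]
```

For tree ensembles, v(S) would be the model's expected output with only the features in S revealed; the paper's contribution is computing these attributions without enumerating subsets.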
Joint work with A. Karczmarz, A. Mukherjee, P. Wygocki and T. Michalak.
About the Speaker: Piotr Sankowski is a professor at the Institute of Informatics, University of Warsaw, where he received a doctorate in computer science in 2005 and his habilitation in 2009. His research interest focuses on practical applications of algorithms, ranging from economic applications, through learning data structures, to parallel algorithms for data science. In 2009, Piotr Sankowski also received a doctorate in physics in the field of solid state theory at the Polish Academy of Sciences. In 2010 he received an ERC Starting Independent Researcher Grant, in 2015 an ERC Proof of Concept Grant, and in 2017 an ERC Consolidator Grant. He is the president of IDEAS NCBR, a research and development centre operating in the field of artificial intelligence and digital economy. Piotr Sankowski is also a co-founder of the spin-off company MIM Solutions.
A Google Talk Series on Algorithms, Theory, and Optimization
Google Tech Talk
Presented by: Matthew P. Walker, Ph.D.
Abstract:
We spend one third of our lives asleep, yet doctors and scientists still have no complete understanding as to why. It is one of the last great scientific mysteries. This talk will describe new discoveries suggesting that, far from being a time when the brain is dormant, sleep is a highly active process critical for a constellation of different functions. These include the importance of sleep for learning, memory and brain plasticity. Furthermore, a role for sleep in intelligently synthesizing new memories together will be examined, the result of which is next-day creative insights. Finally, a new role for sleep in regulating emotional brain networks will be discussed, optimally preparing us for next day social and psychological challenges.
Bio:
Matthew Walker earned his PhD in neurophysiology from the Medical Research Council in London, UK, and subsequently became an Assistant Professor of Psychology at Harvard Medical School in 2004. He is currently an Associate Professor of Psychology and Neuroscience at the University of California Berkeley. He is the recipient of funding awards from the National Science Foundation and the National Institutes of Health. In 2006 he became a Kavli Fellow of the National Academy of Sciences. His research examines the impact of sleep on human brain function in healthy and disease populations.
Google Tech Talk
April 12, 2013
Presented by Randy McIntosh
ABSTRACT
The Virtual Brain (TVB, thevirtualbrain.org) is an international project that uses real neuroimaging data to construct a simulation of the human brain. Anatomical data set up the conduits for communication between different brain regions. The dynamics for each region are generated from a library of nonlinear models, and produce large-scale activity patterns that can be compared directly to empirical functional data, such as EEG/MEG or functional MRI. The talk will present the core of the platform and its applications to understanding the structure-function interplay that forms the basis of cognitive architectures. TVB's use of real data is also at the heart of a larger social neuroscience initiative, wherein small groups of people interact with TVB through wireless EEG headsets, modifying an immersive audiovisual environment that mimics a dream -- My Virtual Dream. The goal is to make use of individual brain signals to augment the group experience through TVB. The two avenues of development for TVB will inform neurally-inspired computing architectures and the evolution of interactive devices that can use a person's physiology to redesign their experience.
Speaker Info:
Randy McIntosh, PhD.
Professor, Department of Psychology, University of Toronto
Director, Rotman Research Institute, Baycrest Centre
Google Tech Talk
November 14, 2012
Presented by Leo A. Meyerovich
ABSTRACT
Why do some programming languages succeed and others fail? We have been tackling this basic question in two ways:
First, I will discuss theories from the social sciences about the adoption process and what that means for designing new languages. For example, the Haskell and type systems communities may be facing some of the same challenges that the public health community faced for safe sex advocacy in the early nineties. Second, informed by these studies, we gathered and quantitatively analyzed several large datasets, including over 200,000 SourceForge projects and multiple surveys of 1,000-13,000 programmers. We find that social factors usually outweigh intrinsic technical ones. In fact, the larger the organization, the more important social factors become. I'll report on additional surprises about the popularity, perception, and learning of programming languages.
Taken together, our results help explain the process by which languages become adopted or not.
Speaker Info:
Leo A. Meyerovich is a Ph.D. candidate at UC Berkeley researching browser parallelization, the Superconductor language for visualizing big data, and language adoption. Earlier, he worked on security extensions for JavaScript and the Flapjax language for functional reactive web programming (http://www.flapjax-lang.org).
Google Tech Talk
August 8, 2011
Presented by Luigi Rizzo, Universita` di Pisa
ABSTRACT
Software packet processing at line rate is problematic both in userspace and within the kernel, due to the cost of managing in-kernel metadata and the overhead of system calls and data copies.
We present a novel framework, called netmap, that solves these challenges by integrating and extending good ideas from existing proposals. With netmap, it takes as little as 70 clock cycles to move one packet between the wire and userspace processes -- more than 10 times faster than existing APIs. As an example, a single core running at 900MHz can generate the 14.8Mpps that saturate a 10GigE interface. This efficiency is an enabling factor for doing high speed packet processing within the safe and feature-rich user space environment provided by modern operating systems.
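A back-of-the-envelope check of the numbers above: a 900 MHz core emitting 14.8 Mpps (the minimum-size-frame packet rate that saturates 10 GigE) has a budget of roughly 60 clock cycles per packet, consistent with netmap's ~70-cycle per-packet cost:

```python
# Cycle budget per packet = core clock rate / packet rate.
clock_hz = 900e6   # 900 MHz core, as in the abstract
pps = 14.8e6       # 14.8 million packets per second

cycles_per_packet = clock_hz / pps
print(round(cycles_per_packet, 1))  # 60.8
```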
In the talk we will present netmap and its internals, explain why it is efficient yet safe and easy to use, and report our experience in developing and porting applications to the new API -- a task made easy by the existence of a pcap compatibility library.
netmap is available on FreeBSD -- work supported by EU FP7 Project "CHANGE"
URL http://info.iet.unipi.it/~luigi/netmap/
Google Tech Talk
January 6, 2011
Presented by Ron Garret.
ABSTRACT
Richard Feynman once famously quipped that no one understands quantum mechanics, and popular accounts continue to promulgate the view that QM is an intractable mystery (probably because that helps to sell books). QM is certainly unintuitive, but the idea that no one understands it is far from the truth. In fact, QM is no more difficult to understand than relativity. The problem is that the vast majority of popular accounts of QM are simply flat-out wrong. They are based on the so-called Copenhagen interpretation of QM, which has been thoroughly discredited for decades. It turns out that if Copenhagen were true then it would be possible to communicate faster than light, and hence send signals backwards in time. This talk describes an alternative interpretation based on quantum information theory (QIT) which is consistent with current scientific knowledge. It turns out that there is a simple intuition that makes almost all quantum mysteries simply evaporate, and replaces them with an easily understood (albeit strange) insight: measurement and entanglement are the same physical phenomenon, and you don't really exist.
Slides are available here:
https://docs.google.com/a/google.com/present/edit?id=0AelF4upZ7VjZZGM1eGttOGJfNDNkenFtNnFkaw&hl=en
Link to the paper:
http://www.flownet.com/ron/QM.pdf
About the speaker:
Dr. Ron Garret was an AI and robotics researcher at the NASA Jet Propulsion Lab for fifteen years before taking a year off to work at Google in June of 2000. He was the lead engineer on the first release of AdWords, and the original author of the Google Translation Console. Since leaving Google he has started a new career as an entrepreneur, angel investor and filmmaker. He has co-founded three startups, invested in a dozen others, and made a feature-length documentary about homelessness.
Google Tech Talk
March 29, 2010
ABSTRACT
Presented by Luigi Rizzo.
In this talk we will give an overview of some recent activity done at the Universita` di Pisa on link emulation and packet scheduling. We will cover two main topics:
- the "dummynet" link emulator and shaper at
http://info.iet.unipi.it/~luigi/dummynet/
which has recently been ported to Linux and Windows (in addition to FreeBSD and OSX), and extended with support for multiple scheduling algorithms. In the talk we will briefly describe the features of dummynet, discuss its performance and applicability, and describe the strategy used to build kernel modules for three very different systems starting from the same codebase.
- fast packet scheduling algorithms.
http://info.iet.unipi.it/~luigi/qfq/
We will present QFQ, a truly practical WFQ scheduler with O(1) complexity and very small constants (110ns per packet on a low-end workstation, 2.5 to 4 times faster than the best competitor). QFQ is available on all major platforms as part of dummynet.
The talk will briefly cover the features of QFQ, and compare it with other existing packet scheduling algorithms. (joint work with Paolo Valente and Fabio Checconi).
Luigi Rizzo is an associate Professor at the Universita` di Pisa, and a long time FreeBSD and Asterisk developer. He has worked on various networking topics including multicast congestion control, emulation, and operating system support for high performance networking.
In addition to the work presented here, Luigi and his colleagues are currently working on disk scheduling, and will be glad to discuss the topic with people interested. A description of this work is at
http://www.bsdcan.org/2009/schedule/attachments/100_gsched.pdf
http://algo.ing.unimo.it/people/paolo/disk_sched/
Google Tech Talks
March 7, 2008
ABSTRACT
The world has evolved a long way from the Stone Age to the Iron Age, and we are now in the new age of engineered materials. Today's discussion will focus on the realm of advanced ceramics, materials that aren't typically found in nature and can withstand extreme conditions. These materials are being used to stop bullets, enable diesel engines to run more efficiently, produce solar cells, and much more. We will focus on Boron Carbide, Silicon Nitride, Silicon Carbide, and a few other materials, where you will learn about how they came to be, how they are made, and where they are used.
Speaker: Peter Goldstein
Peter has been fascinated with the world around him and has wanted to develop new materials since he was a child. He earned B.S. and M.S. degrees in Materials Science and Engineering from the University of Florida, focusing on ceramic materials. He is currently employed at Ceradyne, Inc., a high-growth, cutting-edge advanced technical ceramics company, as a Sales Engineer. He works on developing new applications with his customers using the materials Ceradyne offers, and as a hobby enjoys making glass sculptures at the home glass-blowing studio he built, where he gets to create beyond the constraints of the ordinary world by shaping glass with jet-engine-like flames. He is also newly married to a wonderful woman, Jessica, and they live happily together in Huntington Beach, CA.
Google Tech Talks
February 28, 2007
ABSTRACT
Self-reconfigurable modular robots are metamorphic systems that can autonomously change their logical or physical configurations (such as shapes, sizes, or formations), as well as their locomotion and manipulation, based on the mission and the environment in hand. Because of their modularity, versatility, self-healing ability and low cost reproducibility, such robots provide a flexible approach for achieving complex tasks in unstructured and dynamic environments. They are well suited for applications such as search and rescue, reconnaissance, self-assembly, inspections in hazardous environments, and exploration in space and ocean. The construction and...