Episode #110
HUGE ANNOUNCEMENT, CHATGPT+WOLFRAM! You saw it HERE first!
Pod version: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/110-Dr--STEPHEN-WOLFRAM---HUGE-ChatGPTWolfram-announcement-e210qda
Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Stephen's announcement post: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
OpenAI's announcement post: https://openai.com/blog/chatgpt-plugins
In an era of technology and innovation, few individuals have left as indelible a mark on the fabric of modern science as our esteemed guest, Dr. Stephen Wolfram.
Dr. Wolfram is a renowned polymath who has made significant contributions to physics, computer science, and mathematics. A prodigy, he earned a Ph.D. in theoretical physics from the California Institute of Technology at the age of 20 and, at 21, became the youngest recipient of the prestigious MacArthur Fellowship.
Wolfram's groundbreaking computational tool, Mathematica, was launched in 1988 and has become a cornerstone for researchers and innovators worldwide. In 2002, he published "A New Kind of Science," a paradigm-shifting work that explores the foundations of science through the lens of computational systems.
In 2009, Wolfram created Wolfram Alpha, a computational knowledge engine utilized by millions of users worldwide. His current focus is on the Wolfram Language, a powerful programming language designed to democratize access to cutting-edge technology.
Wolfram's numerous accolades include honorary doctorates and fellowships from prestigious institutions. As an influential thinker, Dr. Wolfram has dedicated his life to unraveling the mysteries of the universe and making computation accessible to all.
First of all... we have an announcement to make. You heard it FIRST here on MLST!
[00:00] Intro
[02:57] Big announcement! Wolfram + ChatGPT!
[05:33] What does it mean to understand?
[13:48] Feeding information back into the model
[20:09] Semantics and cognitive categories
[23:50] Navigating the ruliad
[31:39] Computational irreducibility
[38:43] Conceivability and interestingness
[43:43] Human intelligible sciences
**ERRATA**: OpenAI/GPT-3 DOES NOT USE Microsoft's ZeRO/DeepSpeed for training
Discord: https://discord.gg/4H8xxDF
In this episode of Machine Learning Street Talk, Tim Scarfe, Yannic Kilcher and Connor Shorten discuss their takeaways from OpenAI’s GPT-3 language model. OpenAI trained a 175 BILLION parameter autoregressive language model. The paper demonstrates how self-supervised language modelling at this scale can perform many downstream tasks without fine-tuning.
00:00:00 Intro
00:00:54 ZeRO1+2 (model + data parallelism) [GPT-3 DOES *NOT* USE THIS] (Connor)
00:03:17 Recent history of NLP (Tim)
00:06:04 Yannic "Light-speed" Kilcher's brief overview of GPT-3
00:14:25 Reviewing Yannic's YT comments on his GPT-3 video (Tim)
00:20:26 Main show intro
00:23:03 Is GPT-3 reasoning?
00:28:15 Architecture discussion and autoregressive (GPT*) vs denoising autoencoder (BERT)
00:36:18 Utility of GPT-3 in industry
00:43:03 Can GPT-3 do math? (reasoning/system 1/system 2)
00:51:03 Generalisation
00:56:48 Esoterics of language models
00:58:46 Architectural trade-offs
01:07:37 Memorization machines and interpretability
01:17:16 Nearest neighbour probes / watermarks
01:20:03 YouTube comments on GPT-3 video
01:21:50 GPT-3 news article generation issue
01:27:36 Sampling data for language models / bias / fairness / politics
01:51:12 Outro
These paradigms of task adaptation are divided into zero-, one-, and few-shot learning. Zero-shot learning is the most extreme case: the language model is expected to perform a task such as sentiment classification or extractive question answering without any additional supervision. One- and few-shot learning provide some examples to the model. However, GPT-3's definition diverges somewhat from the conventional literature: instead of fine-tuning the model on a few examples, GPT-3 receives its one- or few-shot demonstrations via "in-context learning", so the model must infer the downstream task from its input alone. The GPT-3 transformer has an input sequence of 2,048 tokens, so demonstrations of a task such as Yelp sentiment reviews have to fit in this input sequence along with the new review.
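To make the in-context learning setup concrete, here is a minimal Python sketch of how a few-shot sentiment prompt might be assembled. The example reviews and the exact prompt format are illustrative assumptions, not taken from the GPT-3 paper:

```python
# A minimal sketch of "in-context learning": the task is specified entirely
# through the prompt, with no gradient updates to the model.
# (Reviews and prompt wording below are made up for illustration.)

few_shot_examples = [
    ("The food was cold and the service was slow.", "negative"),
    ("Absolutely loved the atmosphere and the staff!", "positive"),
]

new_review = "The pasta was decent but overpriced."

# Build one input sequence: task demonstrations followed by the query.
# GPT-3 must fit all of this (plus its completion) inside its
# 2,048-token context window.
prompt = "Classify the sentiment of each review.\n\n"
for review, label in few_shot_examples:
    prompt += f"Review: {review}\nSentiment: {label}\n\n"
prompt += f"Review: {new_review}\nSentiment:"

print(prompt)
# The model's next-token prediction after the final "Sentiment:" is
# read off as its answer; nothing about the task was learned via training.
```

With zero demonstrations in the loop above, the same format becomes a zero-shot prompt; with one, it is one-shot.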
**ERRATA-continued** It has come to our attention that there was a serious factual error in our video -- GPT-3 DOES NOT USE Microsoft's ZeRO/ZeRO2 or DeepSpeed for training and there is no reference to this in either their blog post or paper. We are really sorry about this mistake and will be more careful to fact-check in future.
Thanks for watching! Please Subscribe!
Paper Links:
GPT-3: https://arxiv.org/abs/2005.14165
#machinelearning #naturallanguageprocessing #deeplearning #gpt3