Watch the full conversation: 👇
Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122 https://www.youtube.com/watch?v=tYGMfd3_D1o
Immerse yourself in our #DigitalDystopia where basic social trust is fragmenting, and manipulation is rampant. In this eye-opening short, we explore how state-of-the-art systems are capable of orchestrating unimaginable levels of psyops. We reveal the ease with which social reality can be DDoSed and social trends can be manipulated, all at an alarmingly low cost. Witness the stark contrasts between dismissals of these threats and the urgent need for more research. Join us in uncovering the truth of our digital era.
Video Script 👇
Basic social trust on the net is going to break apart, like, obviously. This has already been the case, but the level of psyops you can run with these kinds of systems is unimaginable, because you can basically DDoS social reality. You can manipulate trends and social memetics to degrees that were not possible before. These things were always possible, but they were very costly. But people either dismissed it or were like, oh, we have to do more research about this.
Watch the full conversation: 👇
Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122 https://www.youtube.com/watch?v=tYGMfd3_D1o
Join us as we delve into the heart of AI development, highlighting top GitHub AI repositories and revealing their astonishing capabilities. From auto GPT to recursively improving systems, the scope of AI has exponentially expanded. We discuss the emergence of autonomous agents, sophisticated game-playing systems, and the extensive integration of AI tools with the internet. Despite all precautions, we pose a vital question: is it still unsafe? Unearth the complexities and philosophical dilemmas of AI development with us.
Video Script 👇
Look, just look at the top GitHub AI repositories and you'll see Auto-GPT. You're going to see recursively self-improving systems that spawn agents. Go on arXiv right now, right now, go to the top CS papers, the AI papers, and you'll see LLM autonomous agents, you'll see game-playing and simulation systems, you're going to see people hooking them up to the internet, to bash shells, to Wolfram Alpha, to every single tool in the world. So, if you wanted to, we could go into all the deep philosophical problems: okay, even if we sandbox it, and even if we were very careful, maybe it's still unsafe.
Watch the full conversation: 👇
Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122 https://www.youtube.com/watch?v=tYGMfd3_D1o
Dive deep into the captivating world of artificial intelligence in our latest video, "The Unseen Power of AI: Can We Control It?". We ponder and explore the potential and power of a smarter-than-human AI, and the real-world implications of such advancements. In this thought-provoking visual journey, we question the risks, the challenges, and the ethical dilemmas posed by artificial intelligence.
#AIControl #SmartAI #AIConsequences #ArtificialIntelligence #AIethics #AINarrative #AIAnimation
Video Script 👇
Even if you don't buy the, like, oh, the system becomes agentic and does something dangerous, fine. I think you're wrong, dead wrong, but we can get into that. But what world in which systems like this exist is stable in the current equilibrium? What world could simply look like the world we're living in right now, when you can pay one cent for a thousand John von Neumanns to do anything?
Watch the full conversation:
Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122 https://www.youtube.com/watch?v=tYGMfd3_D1o
Dive deep into the captivating world of artificial intelligence in our latest video, "The Unseen Power of AI: Can We Control It?". We ponder and explore the potential and power of a smarter-than-human AI, and the real-world implications of such advancements. In this thought-provoking visual journey, we question the risks, the challenges, and the ethical dilemmas posed by artificial intelligence.
#AIControl #SmartAI #AIConsequences #ArtificialIntelligence #AIethics #AINarrative #AIAnimation
Video Script 👇
And this is the kind of problem you can't solve iteratively, really. Because if you have a system that's smarter than you, hypothetically (we can argue about whether this is possible or when it will happen), assume such a system existed: a system which you know is smarter than you, smarter than all of your friends, smarter than the government, smarter than everybody. And you turn it on to check whether it will do a bad thing. If it does a bad thing, it's too late. It's smarter than you; you can't stop it; it'll trick you. Now, the interesting questions are: okay, why do you expect it to do a bad thing in the first place? Why do you expect it to be smarter? And why do you expect people to turn it on? Those are three very good questions that I'd be happy to get into if you're interested. Yeah.
Watch the full conversation: 👇
Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122 https://www.youtube.com/watch?v=tYGMfd3_D1o
Witness the intriguing world of AI in our new video, "The New AI Era: GPT-4's Surprising Abilities". In this eye-opening discussion, we delve into the extraordinary capabilities of GPT-4, questioning the stability of a world where such powerful AI exists. Is our current equilibrium at risk? What could the world look like when you can employ AI entities that mimic human genius for a mere cent? Join the discussion and explore the thrilling, yet terrifying potential of AI.
#GPT4 #AIpower #AIdebate #AIfuture #AIEthics #AINarrative
Video Script 👇
And I consider the burden of proof at this point to be on the skeptics. Look at what GPT-3 and GPT-4 can do; look at what these Auto-GPT systems can do. These systems can achieve agency, they can become intelligent, and they have abilities that humans do not have: extremely good memory, the ability to make copies of themselves, et cetera. Even if you don't buy the, like, oh, the system becomes agentic and does something dangerous, fine. I think you're wrong, dead wrong, but we can get into that. But what world in which systems like this exist is stable in the current equilibrium? What world could simply look like the world we're living in right now, when you can pay one cent for a thousand John von Neumanns to do anything?
Watch the full conversation:
Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122 https://www.youtube.com/watch?v=tYGMfd3_D1o
Join the conversation as we delve into the profound ethical implications and potential risks of powerful AI systems. In this thought-provoking snippet, we challenge the idea of 'iterative safety' and draw parallels between AI deployment and medicine testing, urging for careful deliberation in AI development and deployment. Let's approach AI advancement with clarity and conscientiousness.
Video Script 👇
So a lot of my work now is spent thinking about slowing down AI: how can we get regulators involved? How can we get the public involved? I'm like, hey, the public should be aware that there's a small number of techno-utopians over in Silicon Valley that, let's be very explicit here, want to be immortal. They want glory. They want trillions and trillions of dollars. And they're willing to risk everything on this. They're willing to risk building the most dangerous systems ever built and just releasing them on the Internet, to your friends, your family, your community, fully exposed to the full downsides of all these systems, with no regulatory input whatsoever. And this is what government is for: to stop that. This is such a clear-cut case of, hey, why is the public not being consulted here?
Watch the full conversation:
Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122 https://www.youtube.com/watch?v=tYGMfd3_D1o
Join the conversation as we delve into the profound ethical implications and potential risks of powerful AI systems. In this thought-provoking snippet, we challenge the idea of 'iterative safety' and draw parallels between AI deployment and medicine testing, urging for careful deliberation in AI development and deployment. Let's approach AI advancement with clarity and conscientiousness.
Video Script 👇
They are racing to systems that are extremely powerful, which they themselves know they cannot control. They of course have various reasons to downplay these risks, to pretend that, oh no, actually it's fine, we have to iterate. They have a story about iterative safety. They're like, oh, we have to deploy it for it to actually be safe. Now just think about that for three seconds. It sounds so nice when it comes out of Sam Altman's mouth. But think about it for ten seconds and you're going to see why that's insane. That's like saying: well, the only way we can test our new medicine is to give it to as many people in the general public as possible; actually, put it into the water supply, that's the only way we can know whether it's safe or not. Just put it in the water supply, give it to literally everybody as fast as possible. And then, before we get the results from the last one, make an even more potent drug and put that into the water supply as well, and do this as fast as possible. That is the alignment strategy that these people are pushing. And let's be very clear about this here.