Lately there’s been an explosion in news around AI, with some new groundbreaking revelation every other day. With chatbots like ChatGPT and image generators like Midjourney and DALL-E, the landscape, especially for content creation, has irrevocably changed. There’s a LOT of angles to this topic and no end to the hot takes. So I decided to take this topic on the best I could. It almost broke me.
A plane had hit the World Trade Center. Everybody on the news was talking about it. There were rumors that other planes had been hijacked, that this was some major attack, but nobody knew for sure. All kinds of crazy rumors were swirling around, but in that moment, it was just as likely that this was just a horrible accident. Just a thing that went wrong. And nothing more than that.
And then the second tower was hit.
That was the moment that we all, at the exact same time, realized… Oh, we’re in a different world now.
I think we’re kind-of having that moment right now with AI.
For the last 10 years or so, we’ve been hearing about how this technology is going to change the world, take all our jobs and leave society unrecognizable. There’s just been this background anxiety about what AI could unleash but for the most part it was just an abstract thing that we couldn’t really see or imagine.
But some recent advancements have made this abstract thing a lot more clear. AIs that can generate articles, music, images and video, and code – meaning they could create other AIs.
There are even people using AIs to build and run businesses, making thousands of dollars a day.
At the very least, these are the new tools of the trade, the kind that, in 5 years or so, you'll be at a disadvantage if you're not using.
And at the most, we’re seeing just the tip of the iceberg of a technology that will radically reshape our society in ways that we can’t possibly comprehend.
This is AI’s second-tower moment. Whichever way this goes… we’re in a different world now.
All right so this is a video about what’s going on with AI right now and the thing is, what’s going on with AI right now is it’s changing EXTREMELY fast.
Like there are entire channels dedicated to following this stuff that post multiple times a week and they’re struggling to keep up. So the idea that this is going to be timely is just laughable.
Things have happened since you started watching this video that already make it obsolete. And that's kind-of the whole point.
For years we’ve been hearing about the idea of the singularity, the moment that AI outpaces us and the unimaginable becomes real. I covered it a few times early on in this channel. And I’ve been making the argument for a while that when you scale things back and look at the big picture, we’re in the singularity. We’ve been in it for a couple hundred years.
I kinda got away from the idea that the singularity was a single event, just one moment when everything changed all at once. I started seeing it as an era of history that began with the birth of the industrial revolution when we learned to harness energy or even the invention of the printing press when we learned how to pass information on through space and time.
But what’s been happening lately is starting to feel like the classic version of the singularity. Like we are rolling downhill and just about to cross the event horizon of this thing where there’s no going back.
Hyperbole much Joe? I don’t know. Not if you listen to the hype around it.
Judging by the hype around AI, this could go in two very different ways.
On one hand, AI could solve all our problems, cure cancer, create new methods of propulsion and clean energy – all the issues we’re currently struggling with and fear will destroy us some day, AI could be an unimaginably powerful tool to solve those problems. That is one plausible outcome. (a beat) Then there’s the–
Other outcome. Where in an inconceivably short time, AI becomes smarter than us, and becomes an intelligence we can’t possibly fathom, leading to the extinction of the human race.
And to me anyway, no honest discussion of AI is possible without talking about both the good–
And the bad.
So that’s what I’m going to try to do here. (a beat) I’m trying really hard guys…
Actually, this script almost killed me.
Honestly, I had to restart this about 5 different times because every time I thought I knew what I wanted to say I’d see a new video or read a new article that would make me have to rethink the whole thing.
There’s just SO many takes on this subject and it’s not that any of them are wrong, it’s that they’re all kind-of right. And they’re all over the place, one of them got into eugenics, I didn’t see that coming.
I mean nobody even knows exactly how these things work, much less where they’re going, even the experts who have been working on this for decades, so what the hell can I add to this discussion?
Like what can I possibly say that hasn’t already been said?
And if I’m being honest, I really haven’t been keeping up with this stuff. There’s like fifty thousand AI apps and I’ve barely even looked at them because I’m always just trying to get the next video out.
So, not only am I trying to make an informative video, I need to get caught up on a technology that I’ve fallen way behind on, like I needed to feel even older and more out of touch.
And the greatest irony of all is I’m struggling with a script about a technology whose main draw is that it can WRITE THE SCRIPT FOR ME.
Maybe I should just pick up where my last video on AI left off.
My last video about AI was… in 2016!?
Aww, just a little baby YouTuber back then.
All right, the last time I talked about the levels of AI, so I might as well start there.
So yeah, big picture, when we talk about AI, there are three levels, artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.
Narrow AI is what we’ve been dealing with for a long time now, and this has taken all kinds of forms, from chatbots to spellcheck to image processing that helps with photography, astronomy, medical diagnostics, and so on.
Voice recognition is a type of AI, virtual assistants like Siri and Alexa, there’s all kinds of AI in gaming, aviation, search engines, the list goes on.
AI has helped make our jobs easier and our lives more efficient in a million different ways.
It’s also manipulated society in ways that we are just now beginning to come to grips with.
All the polarization, isolation, depression we’re seeing caused by social media, these are products of algorithms that are themselves a kind of AI.
I know some of the biggest algorithm experts at YouTube, and I can tell you, they're just trying to figure it out like the rest of us, only they have millions of channels' worth of data versus one or a few channels.
The algorithm is like an alien life form, nobody really knows how it works. It’s a black box that nobody can see inside of or understand, much less control. This is a point that will come up again later.
All of these AIs are amazing and powerful but they’re all examples of Narrow AI, they’re built and trained to do one specific thing. They do that one specific thing better than any person could, but that’s all they can do.
And already we’re seeing this bifurcation of scenarios, simultaneously incredible and amazing…
But also tearing apart the fabric of society.
Artificial general intelligence is an AI that can do all the things a narrow AI can do, but for everything. It’s a generalist, much more like a human being.
I want to be clear I’m not talking about consciousness or sentience at this point, this isn’t an artificial life form, we’re not talking about giving it personhood or rights or anything.
In fact the consciousness thing doesn’t really play into any of this conversation right now, I think it’s better to just set that aside, that muddies waters that are already pretty muddy.
A general AI is just an AI that has the same capabilities as a human brain. It can analyze images, but it can also make music. It can problem solve and strategize.
Maybe even deceive and manipulate.
Basically it can perform the same tasks and functions that a human can. And we haven’t seen this…
Here’s the thing about artificial general intelligence though. When it does happen… You’ll probably miss it.
Because if an AI has the same capacity as humans, one of the capacities that humans have is we can make AIs smarter. So if an AI that’s as smart as we are can create a smarter AI… That, by definition is artificial superintelligence.
This is what futurists have been saying for a long time, that pretty much the instant we reach artificial general intelligence, we will reach artificial superintelligence. AI that’s smarter than a human.
Smarter than all humans combined for that matter.
And this is where things really split in two different directions. On one hand, we could see a utopia where every disease has a cure, aging can be reversed, resources perfectly allocated for all, the climate in equilibrium, economic recessions forever eradicated, all our wants and needs fulfilled so we can spend our limitless lives in pursuit of knowledge and happiness.
Or in an effort to get rid of conflict it suppresses all of us into an authoritarian surveillance state devoid of free will, with all our wants and needs doled out according to its all-powerful insistence on order, creating a technoslave state.
Or it could decide that the common denominator in all the world’s problems is humanity itself and wipes us off the face of the Earth, replacing us with robots and machines that live on for a billion years, eventually spreading to distant stars and eradicating all potential life in the universe from existence. (pause, look around) I may have gone too dark with that.
There is, of course, another option. Which is that all of this is just hype. Just way overblown hype by companies who invested billions of dollars into this and now need to sell the product they made.
In fact there’s an argument that this is just yet another gold rush. AI is the new crypto.
And there definitely is a gold rush of sorts going on right now, with these AI tools suddenly being integrated into everything with a keyboard, some of which are awesome and some of which just don't make any bloody sense.
But really, this isn’t anything new. Companies have been using AI as a catchall buzzword for the last decade.
It’s just like this thing they say to convince you that their thing is better because it has “AI” in it. Everything from cars to HVAC units to washing machines to coffee makers.
In these cases the “AI” is usually just computer algorithms that allow the device to self-adjust or optimize according to the situation. And they’ve got their merits, as the kind of narrow AI I was just talking about. But it’s also just marketing.
So yeah, “AI” is nothing new, and it’s been used to sell us on products for a long time. What we’re dealing with now though is something different. This is generative AI.
Not to be confused with General AI like I was just talking about, generative AI is just what it sounds like, AI that generates something that didn’t exist before.
And this does feel like it just popped up out of nowhere over the last couple of years, but again it’s really just a culmination of a lot of things that we’ve been getting used to for a while now.
We saw the first sparks of this back in 2015 when videos like this from Google’s DeepDream project started making their way around the internet.
The AI would basically start with a picture and find patterns in the pixels that it recognized as other images, like us seeing faces or shapes in clouds. It would then amplify those patterns over and over, leading to these mind-bending visuals of animals and faces reminiscent of a heroic dose of mushrooms.
Or… So I’ve heard…
Following that, research continued into training AI models to recognize objects in images. That's pretty much what those CAPTCHAs are all about, finding the stop lights and cars in images.
You’ve been training AI models this whole time and didn’t even know it.
You also started to see tools in Photoshop like Content-Aware Fill and Content-Aware Scale that intelligently replace objects or extend a photo based on the space around it.
That got fully integrated into Photoshop in 2019, I use this all the time with my thumbnails actually.
Then there’s the AI video and image filters that started on Snapchat and then on TikTok, there was the trend of AI generated avatars on social media through apps like Lensa.
I’m leaving a lot out, the point is these tools and toys have been more and more popular for years.
At the same time that this was happening, there were text AIs that were learning natural language patterns.
The first autocorrect features came out as far back as 2003, but they became especially useful with the advent of smartphones.
Predictive text in phones and search engines soon followed, and voice recognition paired with natural language models led to Siri in 2011 and Alexa in 2014.
And then in 2021, OpenAI announced DALL-E, which put the image and text features together and had the ability to create entirely new images based off text prompts.
All that labeling of objects and text recognition came together so that when you enter the word "camel," it knows what a camel is, draws on the massive set of images it was trained on, and creates something resembling a camel.
The first DALL-E was kind-of under the radar though, it was more for research purposes, it was DALL-E 2 that came out last year that gave the public access to this technology.
Midjourney came out at about the same time. And people kinda lost their minds.
It was this public access to generative AI that sparked the whole ruckus, for the first time literally anybody could just enter some words into a prompt and get a pretty good quality image out of it.
And anybody who’s been following this even a little bit can attest to how quickly the AI art has evolved.
And then, by the time we all got used to that idea… Here comes ChatGPT.
Okay, so ChatGPT, in case you don’t already know, works off of what’s called a Large Language Model, or LLM.
LLMs are neural networks. A neural network is a computer system that works like a brain. Just as a brain has billions of connections between neurons, LLMs have billions of connections between mathematical functions that act like neurons.
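If you're curious what one of those "mathematical functions that act like neurons" actually looks like, here's a toy sketch in Python. All of the numbers are invented purely for illustration, not taken from any real model:

```python
import math

# One artificial "neuron": multiply each input by a learned weight,
# add them up, and squash the total into a 0-to-1 range.
# An LLM chains billions of these together; the numbers below
# are made up purely for illustration.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # the sigmoid "squashing" function

signal = neuron([0.5, 0.8], [0.9, -0.3], bias=0.1)
print(round(signal, 3))  # → 0.577, a value between 0 and 1
```

"Learning" is just the process of nudging those weights, billions of them, until the outputs get better.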
I don’t want to go too deep into how this works but I’ll quote Peter Yang of creatoreconomy.so, who wrote in a tweet thread:
“Imagine that you have a library with a huge collection of books. You want to learn everything you can from these books, such as how to write a good essay and how to speak a foreign language. How can you do that? A LLM is a program that can generate text based on the books in your library.”
So it does kinda work like predictive text on your phone. Your phone has a library of your past texts, so it can figure out what the most likely next word is going to be. If, say, you start to type your street address, it's learned that from watching your texts and autocompletes it.
Only an LLM has millions upon millions of books and articles to draw from, and they “weigh” what they read in terms of importance.
To be clear, this is a massive oversimplification, but you get the idea.
The process of feeding an LLM gobs of text is called pretraining. When programmers want LLMs to get better at specific tasks, they feed the LLM more focused text. This is called fine-tuning.
But it’s not really interested in giving a “right” answer, it’s programmed to give the most likely answer. A lot of times that answer is right. A lot of times it’s not.
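To make that "most likely answer" idea concrete, here's a toy next-word predictor in Python. To be clear, this is my own illustration, not how a real LLM is built (real models use neural networks, not raw word counts), but it shows why the output is the most probable continuation rather than the true one:

```python
from collections import Counter, defaultdict

# A tiny "library" to learn from. A real LLM trains on billions of
# words, but the core idea of predicting a likely continuation is the same.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Returns the single most frequent next word, whether or not
    # that continuation happens to be "true" in the real world.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat
```

Notice it will answer confidently no matter what. It has no concept of being wrong, only of being likely. That's the hallucination problem in miniature.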
For example, when I asked ChatGPT to give me 5 facts about the YouTuber Joe Scott, this is what I got.
I also asked it for the closest roller skating rink to where I live in East Dallas and it pointed me to a nice little skating rink at the corner of Gus Thomassen and Ferguson, but when I looked it up on Google, it’s actually way up in Plano, about 10 miles away.
So when people ask me if I use ChatGPT to write my scripts for me, my answer is not just no, but hell no.
In fact, back when I was doing my video on Smart Cities, I did try to save a little time and asked it to write a few paragraphs about the planned city of Telosa, Arizona, and it gave me this.
Yeah… This city does not exist. It’s literally just computer renderings right now.
Now I will say, I’ve used it to help brainstorm video titles, and it’s been pretty helpful with that.
Also, while I was just testing it out, I came up with a fake character for a fake book that I said I'm working on. I said he was a farmer who's about to lose the farm.
And this is pretty good! Like this is seriously helpful, I could use this.
Oh, I also asked it to recite the To be or not to be speech in the style of Snoop Dogg – I was hoping for a to be or not to bizzle – I didn’t get that but it did, in a matter of seconds, create a whole rap verse that honestly is pretty impressive.
Now, I am not a power user of ChatGPT and there’s a million videos out there with great advice on how to create the best prompts to get what you want, I would just direct you to those.
By the way, the word “prompts” is the word of the year for 2023. Get used to that word, you’re going to be hearing it a lot.
But in my experience anyway, it works really well as a kind of springboard or brainstorming partner, and less well as a source of accurate information.
Ironically, the computer program is better at creative thought than accurate information.
Which is fine, except Microsoft put it into Bing. A search engine. A thing people use to find accurate information.
And this is a problem.
As bombastic as the prognostications around AI have been, to me anyway, this is a much more immediate threat, the way AI could accelerate the already massive problem of misinformation online.
I keep hearing about people using this as a shortcut to generating content online, which is fine, like I just said, there’s some really helpful use cases for it, but you have to fact check it. And frankly, a lot of people aren’t going to do that.
Because for many people whether or not it’s accurate doesn’t matter, it’s just about giving people something to click on.
Combine this with the rapid advancement of the image generating apps that are already damn near photorealistic, and we are truly entering a post-truth age.
When literally any content can be faked, any image can be faked, any voice can be faked – I haven’t even gotten to that one yet – and pretty soon nobody’s going to know what to believe anymore. So they’ll just believe whatever they want to believe.
Only increasing the fragmentation and distrust in our society, which is pretty god-awful already.
Now, the AI bros are quick to point out that this is the worst AI will ever be, it will only get better from here. But on the other hand…
It will only get better from here.
Now there are other LLMs being developed, like LLaMA from Meta and Bard from Google, that have not been fully released to the public. Google especially is being extremely cautious about releasing Bard because their entire reputation is based on their accuracy.
Some say Microsoft was being really risky putting ChatGPT into Bing so early but let’s be honest, nobody was using Bing, they had nothing to lose.
But this does put pressure on Google to counter with their own AI search assistant and now we’ve got an AI arms race.
But I guess at least there's only a handful of these LLMs right now, because they take literally billions of dollars to train. Only that's not true anymore.
Researchers at Stanford recently fine-tuned a model called Alpaca on top of Meta's LLaMA for only a few hundred dollars. They did it to create an LLM that academic institutions can test without needing billions of dollars of funding. And according to their results, it performed about as well as OpenAI's GPT-3.5 on many tasks.
It didn't stay up long though. They took the public demo down on March 20th because they were concerned about "hallucinations." That's what they call it when computers tell confident lies.
You know, like ChatGPT does about half the time.
But the point had been made. The price of LLMs is going down fast and pretty soon people will be making these things in their garages.
And since GPT-4, which is only just starting to roll out to the public, has the ability to write code, you now have the ability for AI to write more AI and improve it.
Anybody else feeling like this is kinda spiraling out of control?
It’s no wonder over 1000 tech leaders signed a letter in March to put a pause on AI development beyond GPT-4.
It’s from the Future of Life Institute, which is controversial in its own way, and there’s a heated debate around their intentions, which is a rabbit hole I’m just gonna have to leave out of this video.
But even with the generative AI that has come out in the last year, we’ve seen a flurry of app creation, literally every other day a new crazy “world-changing” app makes the news.
Some of it is the same “AI” marketing I was talking about before that’s nothing new, but some of it is just bonkers.
Now, my theory is that these new AI tools are just that, tools. You still have to know what you’re doing to get the kind of results they’re talking about here.
It's like Photoshop: just because you have Photoshop doesn't mean you can do the same things a professional designer can do.
Actually there’s a great meme that’s going around that anybody who’s worked in a creative field will understand, it says, “To replace creatives with AI, clients will need to be able to accurately describe what they’re looking for. We’re safe.”
Sorry but that made me laugh quite a bit when I first saw it.
But anyway to test my theory, I made a whole separate video where I actually try to use some of these apps, and I’m uploading it along with this video to Nebula.
Because I’ve seen a ton of videos talking about these crazy features these apps can do but I’ve never seen anybody actually use them outside of the product demos so I gave it a try myself.
Some were garbage, but some, I really think I might be using from now on. And trust me, if a tech dolt like me can do cool things with them, they must be pretty powerful.
So if you’re on Nebula, you can watch that right after this video, if you’re not on Nebula, well, here’s why you should sign up for it.
Nebula is the premium streaming service that I helped start along with some of my friends who just happen to be some of the best educational YouTubers in the world. And I am completely unbiased in that statement.
On Nebula, you can watch our videos ad-free and earlier than anywhere else, and you can also see Nebula exclusive videos you can’t find anywhere else. Including Real Engineering’s Logistics of D-Day series, Real Science’s unbelievably good series Becoming Human, and for those with a morbid curiosity, you can find my Mysteries of the Human Body series and my ongoing Forgotten Atrocities series.
And something I don’t talk about nearly enough is Nebula Classes. This is like some of the other online learning platforms that you’ve heard of but the classes are run by educational YouTubers. Going full-on educational.
You can learn how to produce videos like Volksgeist, how to produce music like Adam Neely, and how to sue like a lawyer with Legal Eagle. And there’s new ones added all the time.
And the reason why we made this platform is because YouTube… can be a lot. With the ever-changing algorithm (that's AI once again), we're always chasing the content that will work here, and we're always having to be more clickbaity and sensational. Nebula's the place where we can let our hair down and make content we care about.
So if you ever thought about supporting a YouTuber, this is the biggest bang for your buck. Not only are you supporting hundreds of content creators, you get tons of content in return, and if you sign up using the link below, you can do all that for 40% off the annual plan. Which comes out to a little over two and a half bucks a month.
Which is probably worth it just so you don’t have to hear ad reads like this.
So click the link down in the description to check it out and to watch me test some of these AI apps. And I can tell you, if you were to test all these apps yourself, it would cost more than the Nebula subscription.
Boom. You’ve already saved money.
All right, to kinda wrap this thing up, there are some issues that need to at least be mentioned because so much of this topic gets overwhelmed by the existential dread stuff and while I know that’s kind-of my thing… there are some right here right now problems that are far more immediate threats.
Obviously there’s the issue of jobs being lost to AI, and yeah, this is something to be concerned about.
But there’s a bit of a debate around this because one could make the argument that these new tools could help employees be more productive and more creative.
There’s another line that tech bros say a lot which is, “your job isn’t going to be replaced by AI, it’s going to be replaced by someone else using AI.”
Look at Corridor Crew's anime short Rock Paper Scissors, or Ghostwriter977's fake Drake song Heart On My Sleeve.
Yes, they used AI to make these, but these are extremely talented people.
I mean, when Photoshop came around, a lot of people worried it would put photographers and illustrators out of a job, but really most professionals just adapted to it and started using it.
I personally know a few artists that are embracing DALL-E and Midjourney as jumping off points to kinda spark new ideas.
As a writer, I see this as a way to push through writer’s block.
I do worry a bit that it could all become an ouroboros, like a snake eating its own tail because people will use these AIs to create online content, then the AI will pull from that content to make new content, and the cycle just repeats itself.
There is also the whole debate around the fact that AI is using other people’s work as part of those large language models and image sets, much of which is copyrighted. A lot of people are up in arms about that.
Which I get but at the same time, it’s not completely copying it, it’s using it as inspiration. Which is what we all do isn’t it?
It's like how Ed Sheeran just won his copyright infringement lawsuit over claims that he copied the chord progression from the song Let's Get It On. A LOT of pop songs use that same chord progression. In fact, there were songs before Let's Get It On that used it too.
So this is an ongoing debate around art that started long before AI and will go on long after. But AI does make it more interesting.
The pessimist in me does worry that the money people will see AI as a cheap or even free content creator and that will squeeze creative jobs out of existence. But the optimist in me thinks that humans working with AI could unleash even more mind-bending ideas and that the cream will rise to the top.
One counter to that argument came from Justine Bateman on Twitter.
- Actors being scanned for AI-generated work
- Bruce Willis made headlines for this
- Films custom made for the viewer and special-ordered
- Viewers scanning themselves and having themselves inserted into films
- Training AI on old shows and creating new seasons
Is this something people want? Would you want to see that?
- It’s not possible now, but in 5 or 10 years, it absolutely will be
- Already there are programs like Nothing Forever, a never-ending AI-generated Seinfeld episode
- This is terrible. It will get better.
- Which was briefly taken down by Twitch when the AI started using homophobic slurs. So there’s that.
Computers making all the creative stuff while people work in ever lower-paying jobs is not the future I was hoping for.
As for administrative and legal jobs under threat by AI, that I don’t think is anything new. I don’t mean to minimize that as a threat, it definitely is, to a lot of people, but that’s a trend that’s been going on for some time now.
And again, I think the more familiar you are with these tools, the safer your job will be. But there’s a lot of takes on that subject.
One area of AI that I ran across that did put me on my heels was AI in warfare.
I talked a little about AI in my episode on the future of war, and that’s a debate that is only getting bigger and more confusing.
AI gets used in a lot of ways by the military, from image processing to logistics but what about AIs having the power to decide if a person lives or dies?
Just the fact that drones were conducting strikes controlled by an operator hundreds of miles away was iffy, but now the debate is whether or not an AI should be allowed to make that decision.
But even bigger than that: increasingly, there's talk of giving AI the ability to launch nuclear weapons.
I think I need a lighting change.
Back in the 80s, the Soviets instituted a Dead Hand mechanism on their nuclear arsenal, in the event that the decision makers might get wiped out in a nuclear strike on Moscow, it would automatically trigger all their nukes to fire at the US.
You could say this was an early algorithm or automation of the nuclear arsenal.
Similarly, the US has contingencies in place in the event of a first strike scenario, but that time horizon on a first strike has gotten smaller and smaller.
At first, ICBMs could reach the US in about 30 minutes. Then nuclear submarines cut that time in half.
Today we’re seeing the birth of hypersonic missiles that could cut that in half yet again.
In a nuclear scenario, military leaders may have just 7 minutes or so to make a decision and retaliate, and that’s led many to consider handing those decisions over to AI to save precious seconds.
When we talk about an AI arms race, this is the ultimate version of that. We could see a day in the very near future where the US, China and Russia all have their nuclear arsenals controlled by AI.
AI that we don’t fully understand or control. With the ability to wipe out all life on Earth.
You know… just something to think about.
Ultimately, I think what we're dealing with is an amplifier. It's a rapidly evolving set of tools that can make productive people more productive, creatives more creative, bad actors into worse actors, scammers more successful, and weapons more deadly.
It’s the wild west right now, and we don’t really know where this goes.
And I can hear all the comments coming in right now, “You forgot about this, you forgot about that.” Yeah there’s a lot I left out of this video because it’s just too much. And frankly it kinda broke my brain.
Any one of the topics I brought up here and the stuff that I left out for that matter could be entire videos of their own. And if this one does well and you want to see a video on any of those topics, let me know, maybe I’ll do them.
Also, there are a ton of videos and creators doing great work covering this topic. I'll put links down in the description and I encourage you guys to check them out, because we gotta stay on top of this. The genie's out of the bottle.
But I think I’ll leave things off with a quote from Geoffrey Hinton, who is often called the Godfather of AI. He’s been working on AI and neural nets since the 80s and was working with Google on their AI systems and recently left so that he can speak freely about this subject.