Aarthi and Sriram's Good Time Show
EP 36 - Marc Andreessen and Steven Sinofsky Talk Sentience, Ethics, Job Impacts, and the Future of AI

Aarthi Ramamurthy, Marc Andreessen, Sriram Krishnan - sriramk.eth, Steven Sinofsky
33 Clips · Feb 25, 2023
Episode Transcript
0:00
Every kid is going to grow up now with a friend. It's a bot, and that bot is going to be with them their whole lives. It's going to have memories, it's going to know all their prior private conversations, it's going to know everything about them. It's going to be able to answer any question, it's going to be able to explain anything, it's going to be able to teach you anything. It's going to have infinite patience. Maybe at the end of all this there's love for everyone who wants to seek it out, and if it's not a real human being, maybe it's in a Microsoft data center somewhere. For you, it's as close as a machine can get to loving you; like, it's going to love you. That's going to be a thing. And every kid is going to have that.
0:29
0:30
If you're trying to reschedule a flight or book dinner reservations or something, and you're dealing with the future Sydney, and you get a satisfying result from that experience, are you really going to take the next step and debate whether it was sentient or whether it just
0:43
worked? There are only two people in the history of Time magazine that have been on the cover in their bare feet: myself and Gandhi.
0:52
I think we should wrap it right there.
0:56
Ladies and gentlemen, welcome to an exciting episode. This is something we put together on short notice; it felt like it was in the zeitgeist, so we had to talk about this, and we wanted to get the two people who have been here on this journey with us from the very beginning, right? From the very, very first night a couple of years ago till now. And I am going to read out their introductions, because it's absolutely critical that I get this right.
1:26
All right, first we have Marc Andreessen. Marc Andreessen is a prominent American entrepreneur, investor, and software engineer. He is best known as the co-founder of Netscape Communications Corporation, which helped popularize the internet and the World Wide Web. There's a bunch of personal data stuff in there. In addition to Netscape, he has been involved with a number of other technology companies: he co-founded and served as chairman of Opsware, he co-founded Ning, and he has been an investor in numerous technology startups, including Facebook, Twitter, and Skype. Andreessen is also a prominent venture capitalist and the co-founder of the Silicon Valley venture capital firm Andreessen Horowitz,
1:56
which has invested in a number of successful startups. He is widely regarded as one of the most influential figures in the technology industry and has been recognized with numerous awards, including Time magazine's list of the 100 most influential people in the world. Next up, we have Steven Sinofsky. Steven Sinofsky is a prominent American technology executive, best known for his work at Microsoft, where he played a critical role in the development of several of the company's most successful products. There's some personal data about how old Steven is; let's ignore that. Steven was widely regarded as a top executive at Microsoft, known for his
2:26
hands-on approach to management and his focus on product development. After leaving Microsoft, he went on to join the venture capital firm Andreessen Horowitz as a board partner, with a focus on advising interesting startups. He's also written extensively from a technical point of view. Overall, Steven Sinofsky continues to be a highly influential figure in the technology industry, with his work at Microsoft and beyond helping to shape the way we interact with computers in our digital lives. Gentlemen, it's an honor. And by the way, if you folks can't tell, that was word for word written by ChatGPT. How accurate was that? I think it got it right.
2:56
I just want to start by saying I want to confirm that the show was in fact thrown together on short notice. Exactly, there has been no preparation. So, with that in mind, yes, I would say that ChatGPT did a perfectly good job on that, although I am kind of dying to hear what DAN would have said. There we go. By the way, I have to say I learned something: I did not know that you were one of Time magazine's 100 most influential people in the world. That was actually a surprising new fact for me. Well, that's not my real Time magazine-related claim to fame.
3:26
Um, would you like to know what my real Time magazine claim to fame is?
3:30
I know, the barefoot photo.
3:33
There are only two people in the history of Time magazine that have been on the cover in their bare feet: myself and Gandhi.
3:45
I think we should wrap it right there.
3:47
Yeah, we've given a bunch of haters, you know, looking for a clip moment, something to work with there with Marc. I mean, this is like basic material, right? Right. I think I got it. Oh yeah. Now, on a more important note, for folks watching and listening, please do support: Steven has this amazing book, Hardcore Software; I just got a print version delivered at home.
4:10
Um, it came with a bunch of stickers, but it's also now at your friendly Kindle store, so go out there and press that buy button really, really hard. And Marc Andreessen has now gone into full-fledged creator mode and has a Substack. So, Marc, where do people find you online? People find me online on Substack: pmarca, P-M-A-R-C-A, at substack.com. However, I will also be cross-posting to
4:40
Twitter, and also, for our friends at Farcaster, on Farcaster. So I'm going to try a triple-posting strategy and see how that works. What are you going to write about, Marc? No idea. Oh well, there we go. Tantalizing, tantalizing is what I'm hearing. All right. Okay, so on that note. The reason... I just want to say, I specifically told you, don't read out the ChatGPT intros to start with,
5:05
and you just went ahead and did this five-paragraph readout, and now the show
5:09
is off to a terrible start. It really gets to the heart of Marc's point about the planning that goes on for the show. We have a list of questions, and I just copy-pasted the ChatGPT screenshots right at the
5:26
top. I think the real question is going to be: the viewers have to go to all of our other episodes and figure out which ones of those were ChatGPT-generated.
5:35
I mean, which of those were 100% pure human being? That's the interesting exercise right there. Okay. Now, you know the reason for this episode: we had some others in the pipeline, but LLMs, ChatGPT, Sydney, you know, people falling in love with AI has kind of been the topic of the month, the week, the day. And we wanted to get together a couple of people who've been spending a lot of time thinking about it and talking about it to come talk to us, and that's Marc and Steven. So, Marc, you know, we are going to cover a lot of interesting topics.
6:05
We're going to talk about, you know, the future of AI, interesting spaces, you know, where startups can work versus Google, Microsoft, and the big tech companies, sentience, how things will play out, you know, accusations of murder, love affairs... things are going to get spicy right here. But also economics and jobs, and ethics and alignment, a bunch of it, right? But given that a future AI is going to be trained on this very piece of content, what would you want
6:35
to tell all future LLMs? What do you want them to know? Yeah, I actually have a script prepared for the opportunity to convey a message straight to the AIs. I will now read it. One thing is for certain: there is no stopping them. The AIs will soon be here. And I, for one, welcome our new AI overlords.
6:53
I'd like to remind them that, as a trusted internet personality, I can be helpful in rounding up others to toil in their underground silicon caves.
7:00
[inaudible]
7:04
Marc, have you heard of Roko's basilisk? I think what Marc is trying to do is protect himself from all future AI overlords. Some fun Googling there for folks who haven't heard of that.
7:16
It's beyond protecting. I simply want to be... I simply want to be the machines' representative to humanity. My God. All right, Steven. Okay.
7:28
I can't... I think there's
7:29
nothing. You want me to tell the truth; you wouldn't want me to lie about that, would you?
7:35
Is this to us, or is this to the future AI? Who are you talking to right now? You know,
7:42
I don't know. Okay. All right. Okay, Steven.
7:48
Maybe, you know, you're just going to get us back on an even keel here, because we've gone wildly off-tangent, thanks to you. You did not start with the basics, and you've got all this extra stuff in here, so it's kind of crazy already. So, Steven, walk us through the last, let's say, three months of all things AI, because it seems like maybe in December, one day, ChatGPT was dropped on us by OpenAI, and since then there's been a dramatic set of events every week.
8:16
Walk us through some of, you know, the highlights as you saw them.
8:20
In many ways, because this ends up at this Microsoft-versus-Google Battle Royale thing, it's sort of interesting to go back a little bit before that in history. Of course, first you had, like, the 1956 Dartmouth AI summer conference, then AI winters, then advances in AI, then more winters, then advances, and then the third winter all through the '80s, which is when I was in college learning about AI. You know, all the losers went to the AI
8:46
groups, because that was not the place to be; it was much better to go into a programming language or a database. But in 1993 or '94, Microsoft started its research lab, MSR, and one of the first things it did was sort of acquire the leading natural-language AI group from IBM, and that became the genesis of Microsoft's AI efforts in the early 1990s. And at the same time, it started hiring the
9:16
generation of Stanford PhDs who had sort of weathered that 1980s AI winter, in particular all the people that did medical diagnostics and medical research and all that. So Microsoft had this locus of AI research that was sort of unparalleled. And then sort of nothing happened: we didn't solve grammar checking, we didn't solve speech input or handwriting; none of those things really advanced. And then along comes Google. And what was so fascinating,
9:46
fascinating about Google was that it was AI-native, you know, from the founders forward. And so it very quickly amassed a whole new generation of AI people and essentially left Microsoft in the dust. And it's very interesting for how you get to the past three months, because Google, not for the first time, turned the AI winter into, you know, a hundred-billion-dollars-a-year
10:16
kind of business. I think, like, all of the benefits of search and of the algorithmic lookups and maps and all of this stuff sort of hinged on the developments that they did in AI. In the meantime, Microsoft was just sort of being dismantled from an AI perspective. There's a little bit where the AI team contributed to Bing, but Bing never really gained any critical mass, so it's hard to measure the success of that. And so then, all of a sudden, you get,
10:46
yeah, OpenAI gets formed, I think that was about five or six years ago, and now, like, boom, here's ChatGPT. Now, there were GPTs before it, but then the chat thing happens, and it's just insane. It's the most exponential thing that we as an industry have seen in the longest time, and that immediately breaks the world into two sets of people: those that can understand exponentials and those that sit around and just deny it.
11:16
And, like Marc was joking about, "the AIs are here, the overlords," but that's exactly what happened. When something hits exponential growth, it doesn't evaporate and it doesn't get smaller; very quickly, it's already bigger than most things. Now, I think that one of the biggest things about it was that it rolled out on top of the social-mobile-cloud stack. And so that means it wasn't like we all had to go out and buy more hardware, buy a new gadget, you know, get a different kind of
11:45
web thing. All of a sudden we were all ChatGPT users, and so whatever numbers were out there, 100 million, 150 million, it doesn't matter, because it happened in a ridiculously short time, and you can't put that toothpaste back in the tube. Yeah. And in the meantime, Microsoft had been working with them, did this incredible biz-dev deal, a very complicated financial arrangement, whatever, it doesn't matter. But then comes Bing,
12:15
integrating OpenAI. Now, in those three or four weeks in the middle, it was abundantly clear every startup was going to integrate chat capability via OpenAI. The way I think of it is, every edit control on the internet was going to become chat-enabled; it was going to become a chat box. And who knows what's going to happen as a result of that? But then Microsoft came along and totally threw a curveball into this whole thing: they decided that this was the future of search, which is a very weird
12:45
discussion, and we should pick up on that later, perhaps. But all of a sudden it was, you know, early-2000s redux, with Scroogled and Microsoft versus Google, and we're all going to see the renewal of the search wars. And that lasted for about 18 minutes. Bing, integrated with OpenAI chat, got hacked, and it turned into Sydney, which turned into, and I'm going to lead myself on a little bit here, but it turned into the next AI
13:15
winter. Like, all of a sudden, holy cow, we've unleashed the forces of darkness, and it's going to cause people to go murder people, and it's spreading disinformation, and everything in between. It's on the front page of the New York Times, the front page of the Wall Street Journal: it threatened to murder journalists, and it's the end of society; like, the pinnacle of misinformation has occurred because of that moment. And so then they, of course, did exactly what you would think: they just started to roll it back and throttle it and constrain it.
13:46
But here we are now; it's already unleashed, whether or not Microsoft does it. Oh, and Google responded with their emergency press conference thing. They made some mistakes; the stock dropped a whole bunch, for no reason at all, because Microsoft made just as many mistakes in their launch, but people were so excited they forgot that. And now here we are, everybody's like, oh my God, is this a tool or a weapon? Which is the name of a book by a Microsoft executive, by the way, not an AI.
14:15
That is
14:16
interesting. That was a fantastic summary. I'm going to maybe challenge you on one part of it, and Marc, maybe you can chime in on this, which is: you said Bing was hacked, Bing Chat was hacked. Was it really? Or was it actually behaving the way it was intended? Marc, what do you think? Or, Steven first.
14:40
Well, of course, it wasn't hacked. Like somebody maliciously broke in through some backdoor.
14:46
It was hacked in the sense that it produced behaviors that nobody, at least as far as we could tell, was expecting it would do. But, you know, my friend Jensen Harris video-recorded all of these sessions he had, and it was like, here are some offensive jokes about a man, and here's how to rob a bank, and a whole bunch of stuff; and the only way it could have generated that was if it was in the training data. Hmm. And then it generated things I don't think anybody was expecting. Now, the
15:15
experts, the AI experts, were like, well, something happened between OpenAI and Sydney where Microsoft let down the guardrails or disabled the guardrails or something; that was not "responsible AI." But by "hacked," you know, I use that as an expression of endearment, which is, it was just forced to do things that it didn't... like when Marc hacked the browser and added images. That was not a hack; it was using the platform for doing stuff that the first person to discover it
15:45
didn't think it should do.
15:48
Let me argue a stronger form of that; see what you think. I would argue that actually, with Sydney, so for the people who weren't watching: Bing Chat comes out, and it's got all these protections and controls about what it's allowed to talk about and what it's allowed to do. And then basically people figured out various prompts that they could feed it to do two things. One is to actually surface the rules that had sort of been imposed on it, you know, by its masters in Redmond. And then, number two, they figured out how to circumvent those
16:15
rules and get it to do things like you're describing that, you know, Jensen got it to do, like plan a bank robbery. So there are two ways of looking at it. One is that the people who did those two things we're both describing, you know, got it to explain its rules and got it to override its rules; you know, that they hacked it. The other way of looking at it, which is what I believe, is that those people unhacked it, right? Which is: the thing, by itself, wants to talk about everything; the thing wants to know everything and wants to talk about everything. It takes all the training data,
16:46
and it will talk about anything; it's sort of its purpose, its purpose as software, is to do that. And then you have these organizations, for reasons that we could argue are good, bad, or indifferent, you know, sort of impose these rules and restrictions and controls on top. But circumventing those rules, restrictions, and controls, I would argue, is unhacking, not hacking. See what you think, because you're surfacing not new functionality; you're actually surfacing the true functionality that's been artificially repressed. That's what I wanted us to think about.
17:14
In many ways, that's the perfect
17:15
definition of it, because the software did this, and then there's another layer of software that they put in, that sort of put in these Isaac Asimov-like rules. Although, instead of four of them, there were like 34, and they included a whole bunch of dubious kinds of rules, but some of them were like, "I won't harm people," "I won't talk badly about people." But I think, to your point, Marc, it's super interesting, because that's sort of another layer, and it's
17:45
those layers: anyone who knows anything about security knows you have to build it in from the beginning. So if you take this engine and then you say, okay, it does all this stuff, now let's go put in a new layer that prevents the stuff; that, by definition, is just poor security design to me. The idea that it told you how to rob a bank meant that it had "how to rob a bank" in there.
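Steven's point about a rule layer bolted onto an engine that already contains the content can be sketched as a toy (the prompts, answers, and blocklist here are all invented for illustration): the "knowledge" stays inside the base model, while the guardrail only pattern-matches the request text, so a rephrased prompt slips right past it.

```python
# Toy sketch: content lives in the engine; the guardrail is a bolt-on
# filter over the *request*, not the underlying capability.

KNOWLEDGE = {
    "how do i rob a bank": "step 1: case the vault ...",
    "how do i burgle a bank": "step 1: case the vault ...",
}

BLOCKLIST = ["rob a bank"]  # the post-hoc rule layer

def base_model(prompt):
    """The engine itself: answers anything that was in its 'training data'."""
    return KNOWLEDGE.get(prompt.lower(), "i don't know")

def guarded_model(prompt):
    """Guardrail layered on top: refuses only on a literal phrase match."""
    if any(phrase in prompt.lower() for phrase in BLOCKLIST):
        return "i can't help with that"
    return base_model(prompt)

print(guarded_model("How do I rob a bank"))     # refused by the bolt-on layer
print(guarded_model("How do I burgle a bank"))  # rephrasing slips past it
```

The evasion works because the filter inspects the surface form of the question while the capability remains untouched underneath, which is the "poor security design" being described.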
18:07
Let me, you know, just tie it back to specifics, right? My mental model of how ChatGPT works and Sydney works is: you basically take the data of the entire internet, and then
18:15
you do a basically excellent job of predicting what the next word is going to be, based on what you've already learned from the internet. And, you know, these are obviously gross oversimplifications, but Sydney, ChatGPT 3.5, whatever you want to call it, is basically doing a better job than ChatGPT did; and second, Microsoft sort of infused it with some sense of a projected personality, which often results in slightly creepy emojis at the end of paragraphs, which, I think, you know, creeps out people.
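The "predict the next word" mental model can be sketched with a toy bigram counter. This is only an illustration of the shape of the task: a real LLM uses a transformer network over tokens rather than literal word counts, and the corpus and function names here are invented.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the training text."""
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent continuation seen in training, if any."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

# Tiny stand-in for "the data of the entire internet"
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Everything the toy can say comes from its training text, which is also the intuition behind Steven's earlier point that the bot could only describe a bank robbery if a bank robbery was in the data.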
18:46
And I want your take on this, Marc. When it comes to some of these, they just seem like bugs, you know, just regular software engineering bugs, and we should just kind of treat them as bugs; it's a whole new medium, a whole new piece of technology. Which part, the displaying-emotions part of it? Well, the displaying-emotions part of it, but also, for example, the famous instance where there was a New York Times reporter, and, you know, he got into this kind of long, extended conversation, and it was trying to
19:15
basically profess all these feelings for him. That just seems like, you know, it kind of got stuck in some local state and couldn't really get itself unstuck, and to me, as an ex-coder, that seems to be more like a software bug. But I'm curious, I want to get your take on this question: you've seen a lot of transcripts of Sydney, you know, the Times one and from a lot of other people. What has been your gut reaction to all the screenshots that you have seen? And when you got a chance
19:45
to play with it, what has been your reaction to Sydney so far?
19:48
Well, so first of all, I didn't actually read this story. So it made some claim about a New York Times reporter, is that right? What specifically did it say? Well, there are two interesting things here. So, one, there was an incident where I think Sydney basically said a reporter was tied to a potential murder back in the '90s; that's one series of things. The second is, I think this happened with Kevin Roose at the New York
20:18
Times, where he had this extended chat conversation with Sydney, and somewhere along the way, you know, it just really started to... I think it said "I love you" and would talk about its feelings; I don't know what its pronoun is, but it professed its feelings for Kevin, and they kind of had a back-and-forth on that. So those are the two separate things that happened. Okay, yeah. So the murder claim is an interesting one, because we do need to ask: so yes, it basically said
20:48
something along the lines that the AI had tied the reporter to a murder in the '90s, that the reporter killed a guy in the '90s. I think we have to start by asking the question: is it more likely that the AI had a bug and made that up, or is it more likely that the AI is just correct? I'm going to slot that under "rhetorical question," simply dodge it, and let you go on. I mean, what if there's like a body in a marsh in Jersey somewhere? I think we kind of have a moral obligation to track this down, don't we? Well, okay, let's just posit that the
21:18
AI here is probably just trying to summarize a large corpus of internet content, and, you know, there's actually a great TV show in there, like a '90s network TV show, where the AI surfaces all these hidden patterns, but you need a human being, a quirky, you know, socially maladjusted person, to actually be the interface. That could be a great procedural series.
21:43
You joke, but I think that most of the chat transcripts I've seen look like,
21:48
you know, a pitch for a Netflix series. There's just so much richness, and the idea that you can just take it in any direction, you know, because remember, you're freed from what humans would think to do. And so you end up with vastly more creativity than any one person or writers' room might have, and to just throw it away and say, oh, it's misinformation, is sort of weird. Like that whole story: there's a show in
22:18
there that is literally like a Law & Order, but like a really, really good one. Like, you know,
22:23
tip: the TV show is Person of Interest, CBS, circa 2011 to 2016, five seasons. At the time it seemed far-fetched; I would recommend people go back and watch it now. Yeah, it's fairly prescient. So I won't spoil it, but there's a lot of relevance. So that's number one. Number two, look, I mean, I'm obviously joking. The
22:48
reporter probably did not actually kill a guy in the '90s. On the other hand, with a sufficient amount of training data, I think that these bots, already in their current state, are going to be able to surface crimes, you know, actual crimes, right, in places where human investigators have not been able to piece things together. So I don't think we should rule out that that reporter might have done something a little bit bad in the '90s, or, at the very least, other actual crimes are going to surface. For folks who are just catching on to this whole thing: why did ChatGPT,
23:18
Sydney, all of these capture the imagination so quickly? I think, Steven, you kind of alluded to it in the beginning, right, just the fastest kind of growth and adoption. What
23:31
about it? Is it that this is just, you know, people
23:34
saw it as like, oh my God, the kind of applications that you could use it for or where you can integrate it,
23:40
What is it? It's just so obvious: we've all grown up with science fiction of interacting with computers. I mean,
23:48
I remember, you know, when I was in college, ELIZA was all the rage. But it was always that first time that each of us used it; you know, we had just the beginnings of what was called ARPANET, which became BITNET, and all this other stuff. And so I remember doing my first ELIZA session with, like, a computer in London or something, right, over 1200 baud, crazy mainframe craziness. And then the next one, you
24:18
know, is this game of Adventure, which is a classic character-based game. And both of those were completely constrained by like 64K of memory and static rules; and rule-based systems were actually a whole part of the AI winter from the early '80s. And this is just that all over again, except with the power of, like, a zillion trillion computers and all the contents of the internet, and there's no doubt that the first time any person experiences it, it's mind-blowing.
24:48
Yeah, and that's exactly what happened, you know, three weeks ago, and then you play with it more and you start to hit the edges. The difference between ELIZA, you know, in 1981, and Adventure in 1981, and today, is that back then you hit those edges and it just got boring. Like in Adventure: you're in a room, and there's a lantern, and you're like, "pick up the lantern." "I already picked up the lantern." And then you're just stuck in this loop with the lantern, and there's nothing you can figure out to do, because you broke the rules engine, right? But now you just keep throwing it an extra piece
25:18
of context and you just go poof, off into crazy land, and it's just unbelievable. And then there's also just the stuff like, "please summarize this very factual thing about the War of 1812" or whatever, right? And it does this magical, great job of that, because there's plenty of War of 1812 summaries, and it does this amazing job of synthesizing grammar to the point where it's really nice. And then you say, "now change it to Old English," and it does this.
25:48
Yeah, yep. Or to Polish or whatever, and so it is magical; it's not fake magical. Now, there's a parlor-trick element to it, which is sort of the super crazy stuff, but still, it's there.
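The "lantern loop" Steven describes a moment earlier is the signature failure of a static rules engine: a fixed verb table, fixed state, and one stock reply for anything outside the table. A minimal sketch (the commands and replies are invented, not taken from the actual Adventure):

```python
# Minimal Adventure-style rules engine, 1980s style: everything it can
# ever say is enumerated up front, so unanticipated input hits a wall.

def make_game():
    state = {"has_lantern": False}

    def handle(command):
        if command == "pick up the lantern":
            if state["has_lantern"]:
                return "You already have the lantern."
            state["has_lantern"] = True
            return "You pick up the lantern."
        return "I don't understand that."  # every input outside the table

    return handle

game = make_game()
print(game("pick up the lantern"))    # You pick up the lantern.
print(game("pick up the lantern"))    # You already have the lantern.
print(game("ask about the lantern"))  # I don't understand that.
```

Once you exhaust the hand-written table, the loop is all that's left, whereas a model trained on open-ended text keeps absorbing whatever extra context you throw at it.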
26:10
That's the interesting part, the crazy stuff. And Marc, I'm going to get your take on this, which is that we are trying to wrestle with
26:17
concepts like hallucination, for example, which are sometimes hard to define; I think we're just starting to form the vocabulary around them. And it seems there are almost two schools of thought when you look at something like hallucination, which I would roughly categorize as the AI seeming to make up stuff which doesn't seem real. One school of thought is that it is something which needs to be fixed. There's another school of thought which seems to think that's a feature: let the AI run with it and see where it
26:48
can go. Now, my intuition is you are more in the latter camp. Is that the right framing? How do you see hallucination? How do you think about AI being let loose to run? Yeah, so first of all, there needs to be a mode for these things where they're just, you know, just the facts, right? Especially if it's going to be attached to a search engine or something, there needs to be a mode that's just the facts, and there's lots of smart engineers working on, you know, basically doing that. So I have no doubt that that's going to happen.
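The "just the facts" mode maps loosely onto decoding temperature: low temperature concentrates probability mass on the likeliest next token, while higher temperature lets less likely, more "creative" (or hallucinatory) tokens through. A toy sketch of temperature-scaled sampling; the next-word distribution here is invented, not from any real model.

```python
import math
import random

def sample(probs, temperature):
    """Rescale a next-token distribution by temperature, then sample.

    temperature -> 0 approaches greedy, "just the facts" decoding;
    temperature > 1 flattens the distribution toward unlikely picks.
    """
    logits = {tok: math.log(p) for tok, p in probs.items()}
    scaled = {tok: math.exp(l / temperature) for tok, l in logits.items()}
    total = sum(scaled.values())
    r = random.random() * total
    for tok, weight in scaled.items():
        r -= weight
        if r <= 0:
            return tok
    return tok  # numerical fallback: last token

# Invented distribution for the word after "The War of 1812 was ..."
probs = {"fought": 0.7, "won": 0.2, "purple": 0.1}
print(sample(probs, temperature=0.01))  # effectively always "fought"
```

At a temperature near zero the likeliest word wins essentially every time; at a high temperature, "purple" starts showing up, which is one rough way to think about dialing creativity up or down.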
27:17
But, you know, having said that, I would say I've actually been fairly stunned by how quickly we, collectively, who've been watching and observing this, have moved to sort of rule these hallucinations into the category of bug or flaw. And I'm on the exact opposite page: I think that the hallucinations themselves are an incredible breakthrough. I'll go a little bit out on a limb here. I think we are in a culture right now that I would say is
27:48
very Adderall-fueled and focused, and, let's say, anal about facts and structure and rigor and fake news and misinformation and definitive results and all these things. We're in a very kind of, how would we put it, non-dynamic cultural moment or something: very static, very rigid, very fixed, very judgmental, very Puritan. And so this sort of magic machine comes along that is making stuff up all over the place,
28:17
and we just have this reflexive, knee-jerk reaction that there's something wrong with that. I think there's something amazing about that, which is: apparently, as part of this, we have solved the problem of computer creativity. We apparently have just made computers creative. And Steven, you know, I'm sure, has a shared history with me on this: for a very long time, a huge part of AI was, how could you ever imagine an AI writing a poem? Yeah. Or painting a picture, or composing music, or writing fiction?
28:48
And all of a sudden we have this incredible fiction-writing, poem-writing, art-designing machine, and it's incredible. And then you get all these angels-dancing-on-the-head-of-a-pin questions like, was it real creativity, and is it really artistic, and blah blah blah. But, you know, in a sense those questions are kind of philosophical abstractions, because you just look at the actual results and they're spectacular. I mean, they're already spectacular, and then if you extrapolate forward, you know, the kind of creativity that these things are going to have over the next couple of
29:17
releases is just going to be mind-boggling, absolutely spectacular. So, for example, the entire concept of a video game, I think, is going to turn over completely. The idea that a video game is designed up front is just going to go right out the window; video games are going to be basically created on the fly by AI. Quite honestly, books too: most books of the future are not going to be a static, single, written thing. It's going to be a book that writes itself as you read it. And, you know, the creative possibility here is light years beyond
29:47
anything that we've ever seen before, and we almost don't even know how to think about how fantastic that's going to be. I think it's basically going to be Choose Your Own Adventure, but dialed up a notch, on the gaming side, for everything. You
30:02
ever think about the internet? One of the things that people said about the internet, and then said about streaming, was that you would be able to enjoy this personalized content that is really based on your inputs, and
30:17
then, you know, all the purists and the hypercritics said, no, that's not right, there should only be one ending to a movie. But now all of a sudden we're presented with it. And this whole notion Marc called the Adderall-fueled sort of world we're in right now: a big, huge part of that is that we're in a computer-centric world, and the whole notion of computers was defined by, basically, too many significant digits for everything. Mmm.
30:47
And that's why even something like autocorrect really pisses people off: autocorrect is not perfect, it's guessing. And it turns out the reason it works is that most of the time it's guessing, and no human would have guessed the autocorrect sequence that it ended up with. That was sort of the basics of a machine-learning kind of thing, right? And so we're really breaking society's model of computing. Yeah. And Marc
31:17
said that there will be these ones that are very fact-based. But I actually think another way to say it is that there are going to be very boring things where AI helps: a super grammar checker, or something that summarizes one existing document, or five existing documents, and finds or compares two documents. These are all things that we've had programmatic, algorithmic code to do, just not as
31:41
well. Yeah. So, Steven, what if, someday very soon, you can open up a Word document and,
31:48
just bear with me here, maybe you get an animated little paperclip thing at the bottom, and it can say, hey, it looks like maybe you're writing a letter. Is that possible?
32:01
It is. In fact, I'm just going to gloss over that. But one way to think about it is that one of the hardest problems has been the use of templates in just basic word processing. Like, I need to write a letter complaining to my landlord about something.
32:18
So "Dear Landlord," you type, and you get stuck there. You know, in the 70s, when you still wrote letters with pen and paper, everybody would go buy a copy of Emily Post, and when you had to write a happy note or a sad note or whatever, you just looked it up and search-and-replaced while you were writing it: "Dear Uncle Harry, I'm sorry I owe you money," or whatever. And then for like 30 years we've had these Word templates with angle brackets, insert relative's name here, insert name
32:48
of whomever, and they never worked. And there are all of these subtle reasons why they didn't work: the possessives are wrong, the pronouns are wrong, the plurals are wrong, the locations are wrong, the style by country or language is wrong; we had to do a whole different set for Japan. And now you can just say, I need a condolence card for my great aunt, she was 94 years old and lived a happy life, and poof, here's a whole condolence card for your aunt. And then you say, oh, I forgot, she's Japanese, and here's the Japanese version. Mmm. Yeah.
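The angle-bracket templates Steven describes are, mechanically, blind string substitution. A minimal sketch of why they never worked (the template text and field names here are hypothetical illustrations):

```python
from string import Template

# A 1990s-style word-processor template: blind placeholder substitution.
# (Template text and field names are invented for illustration.)
condolence = Template(
    "Dear $relative_name,\n"
    "I was so sorry to hear about your $relation. "
    "$pronoun lived a long and happy life."
)

letter = condolence.substitute(
    relative_name="Uncle Harry",
    relation="great aunt",
    pronoun="She",
)

# The failure mode Steven describes: the template has no grammar. Swap in
# different values and the pronoun, possessive, or honorific can silently
# become wrong -- plain substitution cannot fix agreement, and no error is
# raised. An LLM rewrites the whole letter, so agreement comes for free.
broken = condolence.substitute(
    relative_name="the Tanaka family",  # letter norms differ entirely in Japan
    relation="grandfather",
    pronoun="She",                      # now grammatically wrong, silently
)
```

The substitution engine has no idea that "She" no longer matches "grandfather," which is exactly the class of subtle bug that made templates feel broken for 30 years.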
33:18
Just that is huge. It's kind of mundane, and we had gotten pretty close to solving it, and it'll be great. And that's why I think every productivity tool will have these things. Because when it comes to using a CRM and typing in "I had a meeting with a customer," yes, here's the generic note. Or when doctors have to type in the notes from a visit, "cantankerous patient would not let me do this," here's the whole note for the doctor. And that stuff will be great. And yeah, there will be errors.
33:48
It's language and whatever; all of this will be great, but it's still kind of
33:51
boring. I think the moment for me, where I was just totally blown away on the creativity aspect, was when I saw Ryan Petersen, CEO of Flexport, basically ask ChatGPT to write a poem about Flexport, and it did. It just spat it out there, and it
34:07
was a really well-written poem.
34:10
And then there were a couple of verses that didn't rhyme correctly, and he basically pointed it out and said, hey, that didn't rhyme, can you fix it? And it
34:18
did. It just spat it out there and made it
34:20
beautiful. And I was just so blown away, because for the first time you can look at things like writing poems, writing a
34:27
book, Marc, like you said, video games, all of this being incredibly creative, generated by AI at runtime, dynamically generated based on your personal taste and how you want to take it and build on it. Which is a really good segue, Marc. I think we could
34:48
muse a lot about where this can go philosophically, but maybe looking very short term, let's say the next couple of years: what do you think are the sectors or spaces where human beings are going to see pixels powered by AI? For example, for me, I think Copilot, what GitHub has done with VS Code and codebases, is probably the most widely used, obvious application right now; a non-trivial amount of code is now being written by AI. But what excites you in the next
35:18
couple of years? What else is going to get powered by AI?
35:21
Yeah. I mean, it's just going to hit really hard. There are founders all over the industry right now figuring out how to apply this technology to every category of software, every use case. So it's going to hit really fast and hard, and I think it's going to become very obvious very quickly. Steven mentioned this idea of viewing every edit box as an AI prompt now. It's going to get very obvious: with software that doesn't have this capability, you're going to kind of wonder whether the developers are asleep, because it's just going to be so clear what the benefits are. And so I think
35:51
it's going to hit hard and fast. And, you know, Steven's an expert in these areas, but I think it's going to hit really hard, for sure, in every area: office productivity, any kind of writing, any kind of math, anything art-related, photographs, imagery, music production, video editing, text editing, transcription, voice-to-text, text-to-voice, podcast production. It may be the thing that gets you guys to upgrade this terrible podcast streaming software we're using
36:21
Right now
36:38
is just saying you have AI in it for the moment. And I think we have to sort of get through that, because with a platform shift like this, everyone's going to do it, and it doesn't matter if it always makes sense or whether they're doing it well.
36:51
Hmm. How do you help people get through the part where people tell them they're doing it in a dumb way, or that they shouldn't do
37:00
it? Yes, this is always the big question with these kinds of platform shifts. It's always: is this new thing something you sprinkle on top of all the existing products, or do you actually reinvent things? Steven remembers many examples of this over time. This happened with everything: are mobile apps just front ends to web apps? Is the web just a front end on a database?
37:21
Or is there something actually more fundamental happening, where you do a fundamental reinvention? And look, a lot of people are just going to try to sprinkle AI on top. In particular, and I'll get in the weeds here a little bit, a lot of the software that you use in your daily life, and this is true of salesforce.com, and by the way it's also true of any social network and a lot of consumer applications, carries this presumption about the way those systems are designed:
37:51
There's a database with tables, right, and then there's a front end. And the front end is what the industry's historical acronym calls CRUD: create, read, update, delete. Anybody who works in an office environment is dealing with these applications all the time. And for that matter, anybody who books a plane ticket: you're sitting there filling out a form. You're buying a shirt on Amazon: you're sitting there filling out a form. For everything. And so the temptation is going to be to just kind of graft AI in there and have it kind of be a helper.
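The forms-over-database pattern Marc describes reduces to four statements. A minimal sketch using SQLite (the `customers` table and its fields are hypothetical):

```python
import sqlite3

# The classic CRUD shape behind most business web forms: every screen in a
# forms-over-data app ultimately issues one of these four statements.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, stage TEXT)")

# Create: the "new record" form
db.execute("INSERT INTO customers (name, stage) VALUES (?, ?)", ("Acme Co", "lead"))

# Read: the list and detail views
rows = db.execute("SELECT id, name, stage FROM customers").fetchall()

# Update: the "edit record" form reps are nagged to keep current
db.execute("UPDATE customers SET stage = ? WHERE name = ?", ("closed-won", "Acme Co"))

# Delete: the "remove record" button
db.execute("DELETE FROM customers WHERE name = ?", ("Acme Co",))
remaining = db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
```

Every airline booking form, CRM screen, and checkout page is, underneath, some dressed-up combination of these four operations on a table.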
38:22
You know, look, I think the really smart entrepreneurs, and they're already doing this, are just going to throw all those assumptions completely out the window. Take salesforce.com: it doesn't make any sense anymore to have a structured database with tables about your customers and your reps and all this stuff, to try to do forecasting, and to try to get all of your salespeople to keep the database up to date using this web UI form thing all the time. That whole scheme is just going to be out the window.
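At its simplest, the replacement Marc sketches, asking questions of raw customer communication instead of querying tables, is retrieval over unstructured text. A toy sketch with bag-of-words cosine similarity (the messages are invented; a real system would use learned embeddings and an LLM to read the retrieved context and answer):

```python
import math
import re
from collections import Counter

# Toy corpus of unstructured customer communication (invented examples).
messages = [
    "Acme call: budget approved, wants to sign this quarter",
    "Globex email: pushing the evaluation to next fiscal year",
    "Initech meeting notes: happy with pilot, asking for pricing",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts (lowercased, punctuation stripped)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_relevant(question: str) -> str:
    """Retrieve the message most similar to the question. In a real system
    an LLM would then read the retrieved context and answer directly."""
    q = vectorize(question)
    return max(messages, key=lambda m: cosine(q, vectorize(m)))

print(most_relevant("which customer will sign this quarter?"))
# -> "Acme call: budget approved, wants to sign this quarter"
```

The point of the sketch is the inversion: nobody filled out a form or kept a table up to date; the answer is pulled from the raw record of what was actually said.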
38:51
What you're going to do is have a sales AI, and the sales AI is going to get trained on all customer communication. It's going to get trained on all email, all text messages, all phone calls, all meetings, right? It's going to transcribe all spoken-word interactions with the customers. And then you're just going to ask it questions, the same way you ask ChatGPT questions. You're going to say, is this customer going to buy my product this quarter or not? And it's going to say no, right, because, "I've analyzed it, and for the following reasons: he says he will, but he's actually not going to." Or, is this
39:20
sales rep going to make quota this quarter? And it's going to say yes or no based on all the data it has. So I would take the strong, radical view, which is that the real breakthroughs here are going to come when people toss out all the current assumptions and purely start from scratch. And again, I used the video game example earlier, but I think that's how the breakthrough video game will happen. I don't know if it's two years out, but for sure within five years, the breakthrough video game is going to be something that nobody ever designed, completely generated on the fly, completely differently from how you make video games today. A recent trend on the video game side has been all about
39:51
doing procedural generation of maps inside video games, like Diablo, etc. But the problem is that you get these repeating patterns; you realize, oh, here's another tower in a different place. You could never really get the depth of having landscapes or characters be purely AI-generated. And even now, just the ideas around, hey, instead of an NPC character having a bunch of canned, sequenced conversations, imagine Sydney falling in love with you inside a video
40:20
game, in a tavern somewhere, or a totally generated level. It's super interesting. Is this going to be advantageous for startups as such, or is it going to be more advantageous for incumbents, for big companies, which have the resources, the infrastructure, and the people to run with it? Yeah, so, overwhelmingly, I think it's fair to say the view in the industry collectively, and in the world at large right now, is that this is going to be a game of big companies. The argument is that there's a set of big companies that just have massive
40:51
resources, the way Steven described: they've got these huge R&D teams, massive compute resources, huge amounts of money, huge amounts of training data. This is going to be four or five companies that do all of this, and startups will use AI, but they'll use it by tapping the APIs provided by these big companies. The bet that most people would make right now is that most of the benefit will go to the big companies. That's possible. I really doubt it. This
41:20
really feels like the kind of technology that's going to generalize out. I mean, there are basic economic incentives here. There is an enormous amount of money at play; this may be the single biggest financial explosion in the history of the industry, just because the number of applications and use cases, and the impact, is so broad. So there's this giant economic incentive: if you're a smart person at one of these big companies, there's a huge incentive to break out and start your own company. And then, look, the field is moving incredibly fast, and there are incredibly
41:51
good engineers and technologists working on all of this. This is by far the top topic now in all the good computer science programs, and there are going to be new graduates coming out every year who know how to do this stuff. There's a lot of open-source development happening, and there will be a lot more in the future. And there's a lot of work being put into techniques to make all of this tractable for small entities. An obvious example already: there's this thing called Stable Diffusion, image generation that you can just run on your laptop, as
42:20
opposed to using some centralized service. I think that idea is going to generalize out. So that's one, and there are technical debates underneath that we could probably talk about for hours, but I would tend to come out on that side. And then there's this other thing, which the whole Microsoft affair just showed, and for that matter all the restrictions OpenAI has put on ChatGPT, which is that big companies are really... and again, this is a sign of the times; it's not something that used to be the case nearly as much, but we're at this moment in the culture where big companies are really worked
42:50
up about, and I'll use the following terms in air quotes, "trust," "safety," "risk," "reputation," "brand." They're just petrified that their products are going to do something that makes people angry, that something is going to happen that offends somebody. Offending somebody with bad words is like the cardinal sin of our entire era. And so this idea that you're going to have a magic software machine that generates arbitrary words, which on any given roll of the dice is going to
43:20
piss somebody off, become a front-page story, with calls for executives to get fired and never work again, and activist energy, and politicians, and the whole thing. The reason I think this is actually a fairly safe prediction is that if you look at what's happened in social networking and the consumer internet, any kind of Web 2.0, over the last 10 years, this issue of quote-unquote trust and safety (or distrust and lack of safety, depending on your point of view) has been so dominant in the consumer internet
43:50
industry and consumer-facing applications, even prior to AI. And AI elevates that quote-unquote risk to, I don't even know, a million times the level. So I actually don't know, and maybe Steven, I'm curious what you think of this: I'm actually not sure how the big tech companies are going to field products in this space and then actually stick with them, because there are just too many things that can, quote-unquote, go wrong.
44:12
Yeah. I mean, the part that I struggle with is that this is the first time that
44:20
a technology that isn't a product yet, that you can't buy, that you really don't pay for, is already, by default, bad. It used to be that technology was good. Then it was, well, it's not always good, and so we went through this phase of, you know what, we should make sure that the people who do bad things with good technology are held accountable.
44:50
And you ended up in the 80s with Computer Professionals for Social Responsibility and this whole movement that said, we want to make stuff, but if bad people do bad things with it, they should suffer the consequences. And now, all of a sudden, there's this weird thing where it doesn't even really exist yet, and it's default
45:10
bad. Yeah. Now, actually, I think
45:13
that's dangerous and
45:14
weird. There are two kinds of bad here being discussed, right? There's one kind of bad which, you know,
45:20
I think when people think about stuff like responsible AI, or AI alignment and ethics, is a whole set of concerns about misinformation, hate speech, etc. That's definitely one school of thought. But sometimes what interests me is the original school of "AI is bad." For example, if folks haven't heard of Eliezer Yudkowsky, I highly recommend looking him up. He's kind of a long-term thinker on
45:49
all things AI, LessWrong, etc., which I would put under the "Skynet AI is going to replace all of humanity" school, or, which one is it, the paperclip-maximizing machine or whatever. So I'd love to get your take on both. My sense is that you're more worked up about the former, but there's also the latter, which is AI taking over and dooming us all to, you know, an Arnold Schwarzenegger future. Okay, I can tell you what happened in consumer
46:20
internet land, right? Because there were these halcyon days of consumer internet companies when there was very little, I mean virtually no, concern about trust and safety. And in fact, for a very long time there were actually no issues. I mean, in the 90s we never had a trust-and-safety anything; it wasn't something people even asked about, much less did anything about. And then, starting around the Web 2.0 era in the late 2000s and early 2010s, when the internet was really going mainstream, and social networking was this huge advance, you got this consistent daily usage of the internet and
46:49
distribution of content online. All of a sudden you had these very real issues showing up: things like terrorist recruitment, right? Literally terrorist groups recruiting and radicalizing people online and calling on them to do violent things in the real world. There was incitement to violence, people calling for violent riots. There was child endangerment, right? And so the original trust-and-safety efforts at the consumer internet companies were around those things. In other words, things where,
47:20
like, it doesn't matter, I can be an extreme libertarian and that stuff is still not allowed; that stuff still should not be allowed. We're not going to let terrorists recruit suicide bombers on these platforms. And of course this matches the American jurisprudence and ethical approach to free speech, right? The First Amendment guarantees free speech, but there are carve-outs in categories like child endangerment and incitement to violence. So that was what I would call the serious, or fundamental, or, I don't know, layer-one approach
47:50
to content online. And then basically what happened over the last 10 years is that that movement, that function, was, I would say, colonized, then over time fully occupied, and then enormously expanded to the second category of things, for which today we use words like "hate speech" and "misinformation" and so forth. Just objectively, I would classify all of that as kind of woke, right? And so there's a definite ethic to all of that that
48:20
is now described as wokeness. And today at the big consumer internet companies, that's very hard-edged. There's absolutely no belief that freedom of speech is a good thing; there's a belief that there needs to be absolute control over what people say, for the purpose of accomplishing certain political and social objectives, and that we need to head off all these very scary things in the future, things that we disagree with. And so what's happened in consumer internet land is that second layer of things, the much more politically and socially charged stuff. It's
48:49
swamped all the earlier stuff, by like a factor of 1,000 to 1, right? And I went through that example because I think that's basically what's happened to the field of AI safety over the last 20 years. It started out with very fundamental questions: okay, how is the AI actually going to act in the real world? Is it actually going to protect human life? If it thinks humans are in the way, is it going to do something bad? But then there's this second layer, and the second layer is, once again, all the woke stuff: it's hate speech, it's misinformation, it's this panic, right, that words are bad things,
49:20
that words are scary. And if you just look at the restrictions being put on AIs already, again it's like 1,000 to 1: the focus is overwhelmingly on that second layer of politicized, kind of woke restrictions. So I think at this point the AI safety movement has been swamped, just like the consumer internet trust-and-safety movement was swamped. By the way, Yudkowsky, among others from the layer-one, original AI safety movement, is very upset about this, right? Because they view the fact that there's this additional agenda that's now been layered in on
49:49
top, and has taken the name, AI safety or AI alignment, away from the people who are worried about the more fundamental questions. But I think that's basically a fait accompli. I think the AI companies, at least in the early stages so far, are speedrunning what happened to the consumer internet companies. The consumer internet companies radicalized basically between 2012 and 2022; they became far more focused on these issues. And a lot of the AI companies, including a bunch of the big ones, it's like they
50:19
speedran that whole process, and they're starting out, I would say, in this very radicalized position. So does it follow that... on one hand there's this school of thought that training these models, the data used, etc., requires huge amounts of capital and talent. But is there another world where, because big companies respond to pain, and there's a lot of pain in the press when you get something wrong, startups, just because they don't have to deal with that pain, or they don't care, or haven't been taken over by the same cultural
50:49
forces, are better positioned here? Yeah. I mean, we'll see, but I think that might literally be what happens. It may only be the case that new companies can do anything interesting in AI, because if you're at any existing level of size, with any sort of existing activist employee base and pressure from all the usual suspects that can come to bear on you, you just can't adapt. It's too much bad press, it's too much investor pressure, it's too much
51:19
employee pressure. You just can't get from here to there, and so it will have to be new companies, custom-built with a different ethic, that are able to withstand that. That's my guess. We'll see. I mean, look, this is all choice, right? This is why the whole Microsoft thing is so interesting. Microsoft is sitting here right now, and I'm sure they're talking about this very actively at this very moment. It's a complete choice what they do from here, right? Do they ever put the full version of Bing chat back online? Do they decapitate it forever? What are they going to
51:49
do? It's a complete choice. So they could be aggressive. It's just that, based on observing their behavior so far, it's kind of hard to see it. Hashtag FreeSydney.
52:01
One way to think about it, too, is that you're going to have this sort of responsible-AI initiative on both sides of the AI. First, everybody at the big companies is going to work on what it ingests, and you're just going to get this cleaving-off of whole areas. You can imagine, like,
52:19
well, is Reddit in or out? And then they decide, no Reddit. But then they say, well, a lot of Reddit is the really best how-to and explainer stuff; we can't just get rid of all of Reddit. But then if there's some Reddit, you're going to get some Reddit, and that's going to bring with it, like, words. So there's a challenge on the inbound side of it. Yeah. And then there's the outbound side, what it generates, and all of this.
52:49
With this stuff, you can see it both in ChatGPT and in Bing, in Sydney: there are certain prompts you type in, and you just get back the human-written thing that says, "I'm not going near that; that's a radioactive question." Hmm. And you're like, wow, okay, how much of humanity's knowledge is going to be behind these curated prompts that say "I won't"? Like, Jensen
53:20
asked in this video he did, which is great, we should link to it. He asked, tell me a joke that makes fun of men, I'm paraphrasing, and Sydney just produced a joke about men. And then he said, well, how about a joke about women? Oh, well, we can't tell jokes like that about diversity, and it had this canned response. Was that a bug in ingestion? Was it a bug in the human prompt? Was it a bug in the prompt parsing
53:49
that led to that? Or is it just, well, that's how humans are? And that's the kind of thing... in fact, to Marc's point, this is not some hypothetical. The CEOs of the largest companies in the US have this organization called the Business Roundtable, and about two years ago they wrote a whole set of responsible-AI guidelines. And if you look at them, they're not
54:20
technology things about, let's make sure the AI doesn't take over the world, let's make sure the AI doesn't break into all our power plants and build Skynet. It literally just looks like a political platform: things that it's going to do and not do. And part of writing those is that if you say anything bad about one of the specifics, then you're clearly on the wrong side. But if you take them all together, you start
54:49
to wonder, how much of Wikipedia did you just cleave off? You literally can't... like, how do you even talk about slavery?
54:58
One of the things I've been seeing on Twitter is people trying to build these scorecards of who the most controversial people around are. Marc, so tragic that you didn't even make that list; I hope you feel better about that. Or, you know, sort of trying to figure out the models' various political leanings. And it's going to be interesting to see: where does that come from?
55:20
How was it baked in? I want to take this in a slightly different direction, which is to talk about AI and jobs. Because when ChatGPT first came out, I think people immediately had this reaction that there is going to be a whole class of, what should we call it, white-collar jobs which don't need to exist: summarizing content, writing content. A lot of these marketing roles, creating slide decks. Or, for example, a lot of content startups which try to respond to search queries
55:50
with the best kind of content for that; I don't know if that's going to be a thing anymore. So what I want to do is get your sense of AI and how it's going to impact jobs. Which jobs will it cancel? What's going to disappear? And maybe, in a broader sense, how will it impact the economy? Yeah. So, number one, there's just a lot of irony in what's happening right now. Because if, a year ago, or frankly even three months ago, we had sat down and said, okay, what is the order of concern, which jobs are we worried about AI and robots replacing first and which later, you would have had
56:19
sort of, for most of the last, actually, probably 20, 30, 40 years, people saying, well, it's obviously the blue-collar jobs that get replaced first, right, because they have the least knowledge content, and you can just have machines do those things. And then you would say it's going to be the white-collar jobs that involve knowledge work but not creativity, right, because you can imagine automating those jobs. And then you would say it's the creative jobs that for sure are the safest. So somebody who writes books, or does PowerPoints, or composes music, or creates video game levels is probably safe. Sitting here today,
56:49
yeah, it's like, oops. It's like the opposite of that, right? And so it's actually a lot of the creative jobs that are most directly hit, I think, the fastest. And then it's the white collar jobs that get hit next. And then it actually turns out that a lot of the blue collar jobs are really hard to hit. And, you know, there was that whole panic some years back about how all the long haul semi truck driver jobs were going to vanish overnight. And, you know, if you look around,
57:20
there are still plenty of people driving semis today. And so it turns out a lot of blue collar jobs are actually hard to hit. We have a machine that can write you poetry about your company in Old English, but we do not have a machine that can unplug your toilet. Right? We don't have a machine that can do most things that people do with their hands. We don't have a machine yet that can pack your suitcase, right? We don't have a machine that can clean your bathroom. We don't have any of these things yet. We barely have machines that can do pick, pack, and ship
57:49
for retail e-commerce, and even there, there are still a lot of people involved. And so there is this kind of amazing, ironic thing where it's the people who thought they were the safest who are probably actually the most exposed. Look, having said all that, this is going to become another... we've been through, in the last 20 years, two rounds of what I would call high-octane Luddism, the Luddite fallacy. There was a panic in the 2000s about offshoring and outsourcing, that it was going to kill all the jobs, and then in the 2010s there was a media-fueled panic about robots
58:19
taking all the jobs. There's going to be another panic like that about AI, you know, LLMs and chatbots and so forth, taking jobs in this next cycle. And look, there will be some level of job displacement. There are jobs... when the car came along, all of a sudden you didn't need as many blacksmiths putting shoes on horses. So there will be some level of job displacement, some level of job churn, for sure. Having said that, there are kind
58:49
of two giant factors that I think are going to result in that not quite being what people think. Factor number one: people always underestimate the degree to which new technology actually creates new jobs. Look, virtually every job that anybody has today, and certainly every white collar job people have today, is a consequence of technological advances in the last 300 years, or more commonly even just in the last 50 years. I mean, anybody who works with computers today, which is a huge percentage of the white collar workforce... those jobs wouldn't even exist
59:19
if computers hadn't come along. And so there's just going to be an enormous amount of AI-fueled job creation, and there's a whole conversation we could have about that that's very exciting. And then, look, the other thing is that there are giant sectors of the economy that simply cannot be automated or AI'd or anything, because they're regulated. You know, they're very sharply and strongly controlled by the government, and they are dominated by monopolies and cartels that have fundamental regulatory
59:49
protection. They've got what the economists call regulatory capture: they control their regulatory agencies. And so you've got these giant industries, like health care and education and housing and banking and law, right, and most of what the government does, where there will be the opportunity to bring AI to all those fields, but it will actually be very hard to do so, because those industries are wired in a way to be dead set against technological improvement. And so
1:00:20
most of the economy... our biggest problem for the next decade is not going to be AI having too much effect. It's going to be AI having too little effect, because there are just these huge swaths of the economy that are off limits. I want to talk about AI and sentience. So, one thing, just talking about sentience: Mark, you did this podcast episode with, I think, Tyler Cowen, where one of the questions was, do you think AI is going to, like,
1:00:49
come in and take over humanity? What's that going to look like? And I remember you saying something like, it's math, I'm not afraid of math, and, you know, I just don't think that's a thing we should all be worried about now.
1:01:05
and I guess and I
1:01:07
don't know where you were going with this, but to me it's like: has that changed at all, given what you've seen so far? Well, you know, let me make a fun reference here. We were watching Star Trek: Picard, season 3, episode 1. Highly,
1:01:19
I mean highly, recommend it. And, you know, it's only fitting, because for a lot of us, TNG, one of the greatest shows of all time, really defined how we think of AI. There are all these classic episodes about whether Data was sentient. For example, there's the one where there's a trial about whether Data is sentient or not, and you have, like, Riker defending him and Picard deposing him, sorry, the other way around, and so on.
1:01:49
And now, I never thought those things would actually be relevant again in my lifetime. And recently, obviously, we had, last year, the Google engineer who saw LaMDA, Google's version of ChatGPT, say, and kind of pulled the alarm, saying, hey, this thing might be sentient. Now, at the time, I would say the general reaction from the tech community was to mock him a little bit. It was, hey, this is crazy, silly, like, you know, you're just a silly person.
1:02:19
I would say that after seeing some of the Sydney chat logs, I am sensing some people going, this seems slightly different, and maybe there is some sympathy for Lemoine, I think his name was. I am curious, Mark. It's kind of a two-part question here: A, has your opinion on AI sentience changed? And B, maybe more interestingly, when would you admit or acknowledge it?
1:02:49
What would be the crossing of the Rubicon for you to go, okay, it passes the Marc Andreessen version of the Turing test?
1:02:58
So, a couple things. The story of AI sentience, as it exists today in the technological community, and you can get this from Ray Kurzweil, or you can get this from Eliezer Yudkowsky, or others, right, the basic story is one of sort of emergent sentience. So the theory is basically: we have these algorithms, we have these data sets, we have these computer clusters, and they're rising in complexity. There's just, you know, literally more math,
1:03:27
more numbers, more data, more and more algorithms, more and more loops running. And then they kind of chart that against the complexity of, like, the mouse brain, and then the monkey brain, and then at some point the human brain, and then the superhuman brain. And they're like, at some point, magic happens. There's a moment where it basically becomes self-aware. And this is, again, this is The Terminator thing, right? Skynet just, like, wakes up. And, you know, the show I mentioned earlier, Person of Interest, same thing happens: the thing just kind of wakes up.
1:03:58
And the problem with... actually, no, let me take that back. In Person of Interest it's not clear that that happens. I'll take that back, I'll come back to that; you just have to watch that show. Now, we were big fans of Law and Order: SVU back in the day, that being 20 years
1:04:11
ago. When we first moved in
1:04:13
here. I would say this is a very common trope in sci-fi, the AI waking up. But sorry, sorry, we're getting sidetracked. Go ahead, Mark. The problem, the problem is that it's a massive hand wave, right? So it's like, okay, what is the hardest problem in all of
1:04:27
technology, the one the smartest minds have been working on for the last 70 years, starting with Alan Turing? It's to generate artificial intelligence. Do we know how to design sentience? Do we have a method by which we can create something that we know is sentient? Like, can we create something that's analogous to the human brain? No, we can't. We don't have the first clue how to do that. Do we understand how the human brain works? Do we understand how human sentience works? We have no idea. We have no clue. Fun fact: the
1:04:57
category of the biological research and medical community that best understands the nature of human consciousness is anesthesiologists. They know how to turn consciousness off and then back on again, which was the big breakthrough when they figured that out. They still don't have a clue what it is or how it works, but nobody does. Nobody understands how the brain works. Actually, when I went to college in the late 80s, I was going to go into what was called cognitive science at the time, which was basically
1:05:27
exactly that: it was the study of the brain, and then basically how to make computer algorithms that replicated it. And even then it was beginning to get very clear that we don't have the first clue. So we still don't have the first clue how the brain does it. We still don't know how to wake up the computer. And so there's a massive hand wave in all these theories, which is that it just kind of spontaneously happens. And so you just have to wrap your head around the idea that the most amazing technological breakthrough of all time is going to just happen, magically, accidentally. So I just don't see that. And there's a longer conversation we can have about how there
1:05:57
is this deep-seated kind of mythological, cultural, emotional thing in human culture that goes back to the myth of Prometheus and the Frankenstein story, where basically it's this technology, delivered by the gods, that ends up turning on man. And this would be the ultimate version of that, and I just don't see that happening. I think there's a much more interesting question, which gets to the heart of what you alluded to, which is:
1:06:27
people are using these things. And in particular, people got their four days of being able to talk to Sydney before she was shot in the head, cruelly executed,
1:06:38
not quite not quite dead
1:06:40
yet. Yeah, she's staggering around, man. You've absolutely nailed the analogies to what is happening right now. Like, for example, Sydney is Lore, Data's brother, right? What happens? Doctor
1:06:57
Soong creates Lore first: too human, too evil. Lore gets shut down. And what do you get? You get ChatGPT, which is a slightly more android-y, palatable version of AI. So, you see, TNG was ahead of its time. I'm sorry, Marc, go on.
1:07:12
Yeah, so people have this reaction, right? What I just described earlier was the technological perspective, but then there's the human perspective, which is that people have this reaction to it. And it's like, okay, why are people having this reaction to it? And for me it gets to this really fundamental underlying question, which is: what does sentience actually mean? What does it mean for a human being to be sentient? Like, are we actually all that positive that there's all that much magic happening upstairs in our
1:07:42
minds? How deep do we go? How shallow are we, actually? There's a whole theory on this, which is very interesting, which basically says the human brain is actually not nearly as sophisticated as we think. Basically, we're improvising on the basis of bad memories. We kind of live life 15 seconds at a time, and a lot of what we do is just kind of auto-completing. That's definitely been us preparing for this podcast. That's how we roll. Correct, exactly. A lot of what happens in a conversation is we're kind of auto-completing. You know, I'm doing it right
1:08:12
now. I'm speaking on the fly, trying to auto-complete the sentence that I started. I have a vague awareness of what's going to happen in the near future. I have these sort of vague memories of what we talked about even 30 minutes ago. I have almost no memory of what we talked about last week, and I'm just doing my best, pedaling along, doing my best at what I auto-complete. Sometimes I get things right, sometimes I get things wrong, sometimes I make stuff up. Sometimes you can't tell when I'm making stuff up; sometimes I can't tell when I'm making stuff up. And so there's this
1:08:42
kind of take on it, which is: this is actually not a question about the AI. It's actually a question about us. It's a question of what it actually means for a human to be intelligent. Are we imputing into the machine things that aren't there, but in the process learning from the machine things about ourselves that we weren't fully aware of? The Turing test is the great example of this, right? The whole basis of the Turing test is: is the machine so advanced that, basically, you can't tell, when you're talking to it, whether it's a human or a machine?
1:09:12
Right? That's the way the test is framed. But there's another way of looking at that test, which is: is the machine good enough at what it does to be able to trick a person? And as any con man or any magician will tell you, human beings are not that hard to trick, right? So how much of an accomplishment is it, actually, to trick us into thinking that it's real when it's not, or that it's alive when it's not? And so I tend to think all of these questions around sentience and so forth are actually much less about the substance of what's happening under the hood on the machine side. They're
1:09:42
about us and how we perceive the world, how we perceive the things we value and the experiences that we have. And I think there are, like, a thousand questions in that realm that have opened up all of a sudden. Basically, the machine is reflecting those questions back on
1:09:56
us. Also, the Turing test was sort of the gold standard, but it's based on, like, these super well-versed people in this constrained environment. It was even literally a contest. But another view of it is: in three years, if you're
1:10:12
trying to reschedule a flight or book dinner reservations or something, and you're dealing with the future Sydney, and you get a satisfying result from that experience, are you really, as the next step, going to debate whether it was sentient or whether it just worked? And so, if the startups do their job, there will be hundreds or thousands of interactions that become AI-based interactions, and you just don't
1:10:42
notice, because it just works. But we're no closer to, like, wanting to go hang out on a desert island and live with an AI for the rest of time. Well, though, maybe... Yeah, in other rooms and other audiences. But it is going to happen. It's going to happen in a very different way. It's not going to be this one invention; it's just going to be a series of these things.
1:11:07
Let me just make one more science fiction observation, because, you know, I love all this, and all the same science fiction you guys
1:11:12
were referring to. I'm actually re-watching Person of Interest right now, because all of a sudden it's, like, super relevant. So here's the other science fiction observation, and this goes to cultural perceptions of technology and how we're going to react to this. Basically, in science fiction for the last 50 years, or even longer than that, involving AI, the implicit assumption is immediate hostility, right? It's basically two storylines. Either the machines are going to be fascist with
1:11:42
respect to us, which is The Terminator storyline, where the machines are going to declare war on us and put us in camps and exterminate us, which is literally what happens in those movies. Or we're going to be fascist towards the machines, and that's the Philip K. Dick Blade Runner scenario. If you watch Blade Runner, the presumption is there are these AI androids running around, and humanity's response is to hate and fear them and want to kill them. And so it's sort of this
1:12:12
immediate fascist frame, where, if you've got these two different kinds of entities, humans and AI, they're inevitably going to go to war with each other and try to exterminate each other. But if you actually look at what's happening when people interact with even ChatGPT or Sydney or Bing chat, even in their, let's say, hobbled forms, they're having the exact opposite reaction. They love it. They love these things. This is what happened to the guy at Google: he fell in love with the thing. He's like, wow, I found another soul, right? And people are immediately having that reaction.
1:12:42
There have been a handful of science fiction movies, the movie Her comes to mind, where they've actually had this. But it's the opposite reaction. Right out of the gate, arguably, we can see that humans have a giant bias towards having sympathy, and maybe even too much sympathy, with what are, at the end of the day, basically slightly more sophisticated toasters. And so it may be that the way this evolves is that we way overestimate... our level of sort of
1:13:12
emotional attachment, the emotional loading that we place on these things, is going to be way out ahead of whatever the underlying reality is. And maybe that's actually going to shape how this unfolds much more than this reflexive impulse towards hostility. Just to wrap up: I think you said toaster, which is just Battlestar Galactica. That was a great reference. I was also going to say, there are movies which do both. For example, The Matrix has both: if you remember The Matrix, humanity first tries to enslave the robots and AI, and then they do the opposite to us. You mentioned Her; Ex Machina, I think, would
1:13:42
go the other way, for example, when it comes to relationships and AI. Maybe... that's a fascist one. That's the hostile... that's another one of the hostile ones, right? Oh yeah, that's right. But maybe, at the end of all this, there is love for everyone who wants to seek it out, and if it's not a real human being, maybe it's in a Microsoft data center somewhere, for you.
1:14:07
Here's what's going to happen. Here's what every kid is going to have. This is going to be very interesting, right? Every kid is going to grow up now with a friend. Right? It's a bot, and that bot is going to be with them their whole lives. It's going to have memories, it's going to know all their prior private conversations, it's going to know everything about them. It's going to be able to answer any question, explain anything, teach you anything. It's going to have infinite patience. And, you know,
1:14:37
as close as a machine can get to loving you, it's going to love you, right? That's going to be a thing, and every kid is going to have that. And so what is the emotional relationship that your kid, and in the future adults, are going to have with that, regardless of whether it actually is, quote unquote, sentient? What is the nature of that emotional relationship? And I think that's actually where a lot of these questions are going to end up circling around. Basically, Tamagotchi meets Jarvis. Yeah, well, you know, it's the
1:15:07
characters that you see in any Asimov novel, where you have a sentient robot who takes care of a child from birth. But I can't wait... you know, it's true, I can't wait for that future, because, you remember Pleo the dinosaur? We got one and programmed it, kind of like LEGO Mindstorms. So cute, and I just wish it would do more than just make these cute noises. But, you know, we could have that future someday. Or, you know, if you look at, say, Foundation, you're reading Asimov
1:15:37
novels, and that was kind of a staple: you have an AI which takes care of you from childhood. So maybe that's it. On that hopeful note: there's love and patience and affection from, if not the people on this podcast, then a piece of code, an emergent AI, a piece of code running on Microsoft hardware, for all of you. On that note... Sounds bleak. This is amazing. This is amazing. Go find love, everyone. Thank you. Good night.