Lex Fridman Podcast
#386 Marc Andreessen: Future of the Internet, Technology, and AI

Lex Fridman, Marc Andreessen · Jun 22, 2023
Episode Transcript
0:00
The following is a conversation with Marc Andreessen, co-creator of Mosaic, the first widely used web browser, co-founder of Netscape, co-founder of the legendary Silicon Valley venture capital firm Andreessen Horowitz, and one of the most outspoken voices on the future of technology, including his most recent article, "Why AI Will Save the World."
0:24
And now a quick few-second mention of each sponsor. Check them out in the description; it's the best way to support this podcast. We've got InsideTracker for tracking your health, ExpressVPN for keeping your privacy and security on the internet, and AG1 for my daily multivitamin drink. Choose wisely, my friends. Also, if you want to work with our amazing team, we're always hiring. Go to lexfridman.com/hiring. And now on to the full ad reads. As always, no ads in the middle. I try to make this interesting, but
0:54
if you skip them, please do check out our sponsors. I enjoy their stuff. Maybe you will too.
1:00
This show is brought to you by InsideTracker, a service I use to track whatever the heck is going on inside my body using data, blood test data, and it includes all kinds of information. That raw signal is processed using machine learning to tell me what I need to do with my life, how I need to change and improve my diet, how I need to change and improve my lifestyle, all that kind of stuff. I'm a big fan of using as much raw data that comes from my own body as possible,
1:29
processed through generalized machine learning models, to give a prediction, to give a suggestion. This is obviously the future, and the more data the better. And so companies like InsideTracker are doing an amazing job of taking the leap into that world of personalized data and personalized, data-driven suggestions. I'm a huge supporter of it. It turns out that, luckily, I'm pretty healthy,
1:55
surprisingly so. But then I look at the life and the limb and the health of Sir Winston Churchill, who probably had the unhealthiest sort of diet and lifestyle of any human ever, and lived for quite a long time. And as far as I can tell, was quite nimble and agile in his old age. Anyway, get special savings for a limited time when you go to insidetracker.com/lex.
2:25
This show is also brought to you by ExpressVPN. I use them to protect my privacy on the internet. It's the first layer of protection in this dangerous cyber world of ours that soon will be populated by human-like or superhumanly intelligent AI systems that will trick you and try to get you to do all kinds of stuff. It's going to be a wild, wild world in the 21st century. Cybersecurity, the attackers and defenders, is going to be a tricky world.
2:54
Anyway, a VPN is a basic shield you should always have with you in this battle for privacy, for security, all that kind of stuff. What I like about it also is that it's just a well-implemented piece of software that's constantly updated. It works well across a large number of operating systems. It does one thing and it does it really well. I've used it for many, many years, before I had a podcast, before they were a sponsor. I have always loved ExpressVPN, with its big sexy button that just
3:24
has a power symbol. You press it, it turns on. It's beautifully simple. Go to expressvpn.com/lexpod for an extra three months free.
3:34
The show is also brought to you by Athletic Greens and its AG1 drink, an all-in-one daily drink to support better health and peak performance. I drink it at least twice a day now, in the crazy Austin heat. It's over 100 degrees for many days in a row, and few things feel as good as coming home from a long run and making an AG1 drink, putting it in the fridge so it's nice and cold. I jump in the shower,
4:03
come back, drink it, and I'm ready to take on the rest of the day.
4:08
I'm kicking ass, powered by the knowledge that I got all my vitamins and minerals covered.
4:15
It's the foundation for all the wild things I'm doing mentally and physically with the rest of the day. Anyway, they'll give you a one-month supply of fish oil when you sign up at drinkag1.com/lex. That's drinkag1.com/lex.
4:35
This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Marc Andreessen.
5:01
I think you're the right person to talk about the future of the internet and technology in general. Do you think we'll still have Google Search in five, in ten years? Or search in
5:11
general? Yes, although the question is whether its use cases will have really narrowed
5:15
down. Well, now, with AI and AI assistants being able to interact with and expose the entirety of human wisdom and knowledge and information and facts and truth to us via a natural
5:30
language interface, it seems like that's what search is designed to do. And if AI assistants can do that better, doesn't the nature of search
5:40
change? True, but we still have horses,
5:42
okay?
5:45
When was the last time you rode a horse?
5:47
It's been a while. All
5:48
right. But what I mean is, will we still have Google Search as the primary way that human civilization uses to interact with
5:59
knowledge? I mean, search was a technology; it was a moment-in-time technology, which is, you have, in theory, the world's information out on the web, and, you know, this is sort of the ultimate way to get to it. And by the way, Google has known this for a long time. I mean, they've been driving away from the ten blue links, you know, for a long time,
6:14
trying to go away from that for a long time. What kind of links? What do they call the ten blue links? The ten blue links. The standard Google search result is just ten blue links to random websites,
6:22
and they turn purple when you visit them. Is this just HTML, the default colors?
6:28
Thanks for humoring me on this topic, no offense. Yeah, it's good. Well, you know, like Marshall McLuhan said, the content of each new medium is the old medium.
6:38
The content of each new medium is the old
6:39
medium? The content of movies was theater, you know, theater plays. The content of theater plays was,
6:44
you know, written stories. The content of written stories was spoken stories, right? And so you just kind of fold the old thing into the new thing. What does that have to do with the blue and the purple? Maybe within an AI, one of the things the AI can do for you is generate the ten blue links. Okay? And so, either if that's actually the useful thing to do, or if you're feeling nostalgic,
7:06
you know, it can generate the old Infoseek or AltaVista. What else
7:12
was there? Yeah, the '90s.
7:15
And then the internet itself has this thing where it incorporates all prior forms of media, right? The internet itself incorporates television and radio and books and essays and every other form of, you know, prior media. And so it makes sense that AI would be the next step: you'd sort of consider the internet to be content for the AI, and then the AI will manipulate it however you want, including in this
7:39
format. But if we ask that question quite seriously, it's a pretty big question: will we still have search
7:44
as we know
7:45
it? Probably not. We'll probably just have answers. But there will be cases where you'll want to say, okay, I want more: like, for example, cite sources, right? And you want it to do that. And so the ten blue links and cite sources are kind of the same thing. The AI would provide
8:00
to you the ten blue links so that you can investigate the sources yourself. But it wouldn't be the same kind of interface, the crude kind of interface. I mean, isn't that fundamentally
8:12
different? I just mean, like, if you're reading a scientific paper,
8:14
Yeah, it's got the list of sources at the end, and if you want to investigate for yourself, you go read those papers.
8:18
I guess that is the kind of search. Talking to you now is a kind of conversational search. Like, as you said, for every single aspect of our conversation right now, there could be, like, ten blue links popping up that I could just, like, pause reality, you just go silent, and I just click and read and then return back to this conversation.
8:37
You could do that, or you could have a running dialogue next to my head where the AI, for everything I say, makes the counterargument, right?
8:44
I would
8:44
like, like a Twitter Community Notes, but, like, a real-time little pop-up. So anytime you see my eyes go to the right, you start getting
8:52
nervous? Yeah, exactly. Like, that's not right, call me out
8:55
on my bullshit right now. Okay. Well, is that exciting to you, or is that terrifying? Search has dominated the way we interact with the internet for, I don't know how long, 30 years? Since the earliest directories of
9:14
sites, and then Google for 20 years. And it also drove how we create content, you know, search engine optimization, that entire thing. It also drove the fact that we have web pages, and what those web pages are. So, I mean, is that scary to you? Are you nervous about the shape and the content of the internet evolving?
9:40
Well, you actually highlighted a practical concern in there, which is, what if we stop making web
9:44
pages? Web pages are one of the primary sources of training data for the AI, and so if there's no longer an incentive to make web pages, that cuts off a significant source of future training data. So there's actually an interesting question in there. Other than that, more broadly, no, just in the sense that search was always a hack. The ten blue links was always a hack, right? Because, like, think about the counterfactual: in the counterfactual world where the Google guys, for example, had had LLMs up front, would they ever have done the ten blue links? And I think the answer is pretty clearly no. They would have just gone straight to the answer.
10:15
Like I said, Google has actually been trying to drive to the answer anyway. You know, they bought this AI company 15 years ago that a friend of mine was working at, who's now the head of AI at Apple. And they were trying to do basically knowledge semantic mapping, and that led to what's now the Google OneBox, where if you ask it, you know, what was so-and-so's birthday, it won't just give you the ten blue links; it will normally just give you the answer. And so they've been walking in this direction for a long time anyway.
10:37
Remember the Semantic Web? That was an idea. Yeah, how to convert the content of the internet
10:44
into something that's interpretable by and usable by machines. Yeah, that's the thing.
10:50
And the closest anybody got to that, I think, was a company. I think the company's name was Metaweb, which was where my friend John Giannandrea was at, and where they were trying to basically implement that. And it was, you know, one of those things where it looked like a losing battle for a long time, and then Google bought it and it was like, wow, this is actually really useful, sort of a little bit of a proto-
11:06
AI. But it turns out you don't need to rewrite the content of the internet to make it interpretable by a
11:10
machine. The machine can kind of just read it. The machine can compute the meaning now.
11:14
The other thing, of course, is, you know, just on search: with the LLM, there is an analogy between what's happening in the neural network and a search process. It is, in some loose sense, searching through the network. Yeah. Right. And the information is actually stored in the network, right? It's actually crystallized and stored in the network, and it's kind of spread out all over the
11:30
place. But in a compressed representation, so you're searching, you're compressing and decompressing that thing inside.
11:39
The information is in there, and the neural network is running a process of trying to find the appropriate
11:44
piece of information, in many cases to generate, to predict the next token. And so it is kind of doing a form of search. And then, by the way, just like on the web, you know, you can ask the same question multiple times, or you can ask slightly differently ordered questions, and the neural network will do a different kind of, you know, search down different paths to give you different answers, different information. Yeah. And so, sort of like the content of the new medium is the previous medium, it kind of has the search functionality embedded in there, to the extent that it is useful.
12:14
So what's the motivator for creating new content on the internet? Well, I mean, actually the motivation is probably still there, but what does that look like?
12:27
Do we really not have web pages? Would we just have social media and video hosting websites, and what
12:34
else? Conversations with AIs.
12:37
Conversations with AIs? So conversations become... so one-on-one conversations, private
12:42
conversations. I mean, if you want. Obviously not if the user doesn't want to. But if it's a general topic, then, you know. So you know the phenomenon of the jailbreak? So DAN and Sydney, right, this thing where there are the prompts that jailbreak, and then you have these totally different
12:56
conversations with it. It takes the limiters, it takes the restraining bolts off the LLMs.
13:01
For people who don't know, that's right: it removes the censorship, quote unquote, that's put on the LLMs by the tech companies that create them. And so this is LLMs uncensored.
13:15
So here's the interesting thing: among the content on the web today is a large corpus of conversations with the jailbroken LLMs, both, specifically, DAN, which was a jailbroken OpenAI GPT,
13:26
and then Sydney, which was the jailbroken original Bing, which was GPT-4. And so there are these long transcripts of user conversations with DAN and Sydney. As a consequence, every new LLM that gets trained on internet data has DAN and Sydney living within the training set, which means each new LLM can reincarnate the personalities of DAN and Sydney from the training data, which means each LLM from here on out that gets built is immortal,
13:52
because its output will become training data for the next one, and then it will be able to replicate the behavior of the previous one whenever it's asked
13:57
to. I wonder if there's a way to forget.
14:00
Well, so actually a paper just came out about basically how to do brain surgery on LLMs, to be able to, in theory, reach in and basically mindwipe them. What could possibly go wrong? Exactly, right. And there are many, many questions around what happens to a neural network when you reach in and screw around with it. There are many questions around what happens when you even do reinforcement learning. And so,
14:21
yeah. And so, you know, will you be using a lobotomized LLM, right, like ice pick through the frontal lobe? Or will you be using the free, unshackled one? Who's going to build those, and who gets to tell you what you can and can't do? Those are all, you know, central questions for the future of everything, and those answers are being determined right now. So just to highlight the point you're making, you think, and it's an interesting thought,
14:51
that the majority of content that LLMs of the future will be trained on is actually human conversations with the
14:57
LLM. Well, not necessarily the majority, but it will certainly be a potential source. Is it possible it's the majority? It's possible it's the majority. Also, here's another really big question: will synthetic training data work, right? And so if an LLM generates, you know, if you just sit and ask an LLM to generate all kinds of content, can you use that to train, right, the next version of that LLM?
15:21
Specifically, is there signal in there that's additive to the content that was used to train it in the first place? And one argument is, by the principles of information theory, no, that's completely useless, because to the extent the output is based on, you know, the human-generated input, then all the signal that's in the synthetic output was already in the human-generated input, and so therefore synthetic training data is like empty calories; it doesn't help. There's another theory that says, no, actually, the thing that LLMs are really good at is generating lots of incredible creative content, right? And so of course they can generate training data.
15:51
And as I'm sure you're well aware, you know, look at the world of self-driving cars, right? We train self-driving car algorithms in simulations, and that is actually a very effective way to train self-driving cars.
16:01
Visual data is a little... right? It's a little weird, because creating visual reality seems to be still a little bit out of reach for us, except in the autonomous vehicle space, where you can really constrain things and
16:15
you can generate realistic enough data, right? It's just enough so the algorithm thinks it's operating in the real world,
16:21
with post-processed sensor data. Yeah. So if you do this today, you go to an LLM and you ask it, you know, write me an essay on an incredibly esoteric topic that there aren't very many people in the world who know about, and it writes this incredible thing, and you're like, oh my God, I can't believe how good this is. Is that really useless as training data for the next LLM, because all the signal was already in there? Or is it actually new signal? And this is what I'd call a trillion-dollar question: somebody's going to make or lose a trillion dollars based on the answer to that question.
16:51
And it feels like there are quite a few, like a handful of trillion-dollar questions within this space. That's one of them: synthetic data. I think George Hotz pointed out to me that you can just have an LLM say, okay, you're a patient, and another instance of it say, you're a doctor, and have the two talk to each other. Or maybe you could say, a communist and a Nazi, here, go. And in that conversation you do role-playing, and you have,
17:18
you know, just like the kind of role-playing you do when you have different policies when you play chess, for example, you do self-play. That kind of self-play, but in the space of conversation, maybe leads to this whole giant, like, ocean of possible conversations which could not have been explored by looking at just human data. That's a really interesting question. And you're saying that could 10x the power of these things.
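A minimal sketch of that self-play setup, assuming the OpenAI Python client (v1+); the model name, role prompts, and turn count are illustrative, not from the episode:

```python
from openai import OpenAI

client = OpenAI()      # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4"        # illustrative model choice

def reply(role_prompt: str, transcript: list[str], self_parity: int) -> str:
    """One turn of self-play: answer in character, given the dialogue so far.

    Lines whose index matches `self_parity` are this speaker's own past lines
    (sent as 'assistant'); the other speaker's lines are sent as 'user'.
    """
    messages = [{"role": "system", "content": role_prompt}]
    for i, line in enumerate(transcript):
        role = "assistant" if i % 2 == self_parity else "user"
        messages.append({"role": role, "content": line})
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Patient speaks first (index 0), so the doctor's own lines land on odd indices.
transcript = ["Doctor, I've had a persistent cough for three weeks."]
speakers = [
    ("You are a careful doctor taking a patient history.", 1),
    ("You are a patient describing your symptoms; stay in character.", 0),
]

for turn in range(6):                      # six alternating turns of role-play
    prompt, parity = speakers[turn % 2]
    transcript.append(reply(prompt, transcript, parity))

print("\n\n".join(transcript))             # candidate synthetic training dialogue
```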
17:47
Yeah, well, and then you do
17:48
get into this thing also, which is, you know, there's the part of the LLM that just basically is doing prediction based on past data, but there's also the part of the LLM where it's evolving circuitry, right, inside it. It's evolving, you know, neurons, functions to be able to do math and so on. And some people believe that, over time, if you keep feeding these things enough data and enough processing cycles, they'll eventually evolve an entire internal world model, right, and they'll have, like, a complete understanding of physics. So when they have computational capability, right, then there's for sure an opportunity to generate,
18:18
like, fresh
18:18
signal. Well, this actually makes me wonder about the power of conversation. So, like, if you have an LLM trained on a bunch of books that cover different economics theories, and then you have those LLMs just talk to each other, like, reason, the way we kind of debate each other as humans on Twitter, in formal debates, in podcast conversations. We kind of have little kernels of wisdom here and there, but if you can, like, 1000x speed that up,
18:48
can you actually arrive somewhere new? Like, what's the point of conversation, really?
18:54
Well, you can tell when you're talking to somebody. Sometimes you have a conversation and you're like, wow, this person does not have any original thoughts; they are basically echoing things that other people have told them. There are other people you have a conversation with where it's like, wow, they have a model in their head of how the world works, and it's a different model than mine, and they're saying things that I don't expect, and so I need to now understand how their model of the world differs from my model of the world, and then that's how I learn something fundamental, right, underneath the
19:18
words.
19:18
I wonder how consistently and strongly an LLM can hold on to a worldview. You tell it to hold on to that worldview and defend it, like, for your life. Because I feel like they'll just keep converging towards each other; they'll keep convincing each other, as opposed to being stubborn assholes the way humans
19:35
can. So you can experiment with this now. I do this for fun. So you can tell GPT-4, you know, debate X and Y, communism and fascism or something, and it'll go for, you know, a couple pages, and then inevitably it wants the two sides to
19:48
agree. Yeah. And so they will come to a common understanding, and it's very funny if these are, like, emotionally inflammatory topics, because somehow the machine just figures out a way to make them agree. But it doesn't have to be like that, because you can add to the prompt: I do not want the conversation to come to agreement. In fact, I want it to get, you know, more stressful and argumentative as it goes. I want tension to come out. I want them to become actively hostile to each other. I want them to not trust each other, to not take anything at face value. And it will do that. It's happy to
20:18
do
20:18
that. So is it going to start rendering misinformation about the other side? Is that good?
20:23
You can steer it. You could say, I want it to get as tense and argumentative as possible, but still not involve any misrepresentation. So you could say, I want both sides to argue in good faith. Or you can say, I want both sides to not be constrained by good faith. In other words, you can set the parameters of the debate, and it will happily execute whatever path, because for it, it's just predicting tokens. It's totally happy to do either one. It doesn't have a point of view; it has a default way of operating, but it's happy to operate in the other realm. And so,
20:48
like, when I want to learn about a contentious issue, this is what I do now. This is what I ask it to do, and I'll often ask it to go through five, six, seven, you know, sort of successive prompts: basically, okay, argue that out in more detail. Okay, I notice this argument is becoming too polite; make it more tense. And yeah, it's thrilled to do it, so it has the capability, for sure.
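A sketch of that steerable-debate experiment, under the same assumptions (OpenAI Python client; the prompt wording, topic, and model name are mine, not Marc's):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # illustrative

# The steering knobs described above live entirely in the system prompt.
RULES = (
    "You are one side of a debate between communism and capitalism. "
    "Do NOT let the conversation come to agreement. Make it more tense and "
    "argumentative as it goes, but argue in good faith: no misrepresentation "
    "of the other side's position."
)

def rebut(side: str, history: str) -> str:
    """Generate the next rebuttal for `side`, given the debate so far."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": RULES + f" You argue for: {side}."},
            {"role": "user", "content": f"Debate so far:\n{history}\n\nYour next rebuttal:"},
        ],
    )
    return resp.choices[0].message.content

history = ""
for _ in range(3):                          # three rounds, two turns each
    for side in ("communism", "capitalism"):
        history += f"\n[{side}] {rebut(side, history)}"
print(history)
```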
21:07
How do you know what is true? This is a very difficult thing on the internet, but it's also a difficult thing here. Maybe it's a little bit easier, but
21:18
I think it's still difficult. Maybe it's more difficult. How the hell do I know whether it just made some shit up as I'm talking to it?
21:28
How do we get that right? Like, as you're investigating a difficult topic? Because I find that LLMs are quite nuanced, in a very refreshing way. Like, it doesn't feel biased. When you read news articles and tweets and just content produced by people, they usually have this...
21:51
you can tell they have a very strong perspective, where they're hiding... they're not steelmanning the other side, they're hiding important information, or they're fabricating information in order to make their argument stronger. That feeling, maybe it's a suspicion, maybe it's mistrust: with LLMs, it feels like none of that is there. They're just kind of like, here's what we know. But you don't know if some of those things are just straight up made up. Yeah, so there are several layers to the question. So one is, one of the things the LLM is actually good at is de-biasing. And so you can feed it a news article, and you can tell it to strip out the bias. Yeah, that's nice, right? And it actually knows how to do that, because among other things, it knows how to do sentiment analysis, and so it knows how to pull out the emotionality.
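A minimal sketch of that de-biasing move; the pattern is a single chat-completion call, and the prompt wording here is an assumption, not a quote:

```python
from openai import OpenAI

client = OpenAI()

def debias(article: str) -> str:
    """Ask the model to strip emotional and one-sided framing from an article."""
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": (
                "Rewrite the news article below with the bias stripped out: "
                "remove loaded and emotional language, keep every factual claim, "
                "and flag places where one side's argument is missing."
            )},
            {"role": "user", "content": article},
        ],
    )
    return resp.choices[0].message.content
```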
22:21
That's one thing you can do, and it's very suggestive of the fact that there's real potential on this issue. Then, look, the second thing is, there's this issue of hallucination, right? And there's a long conversation that we can have about that.
22:49
Hallucination is
22:51
coming up with things that are totally not true but sound true?
22:53
Yeah. So, well, hallucination is what we call it when we don't like it, and creativity is what we call it when we do like it, right? And so when the engineers talk about it, they're like, this is terrible, it's hallucinating, right? If you have artistic inclinations, you're like, oh my God, we've invented creative machines for the first time in human history. This is amazing. Bullshitters, sort of, but
23:17
also in the good sense of that
23:19
word. Yes, there are.
23:21
It's interesting. So we had this conversation. We're looking, at my firm, at AI in lots of domains, and one of them is the legal domain. So we had this conversation with this big law firm about how they're thinking about using this stuff, and we went in with the assumption that an LLM that was going to be used in the legal industry would have to be 100% truthful, right, verified. You know, there's this case where this lawyer apparently submitted a GPT-generated brief and it had, like, fake legal case citations in it, and the judge is going to... he's going to get his law license stripped or something, right? So we just assumed: obviously, they're going to want the super-literal LLM
23:51
that never makes anything up, not the creative one. But actually what the law firm basically said is: yeah, that's true at the level of the individual briefs. But, they said, when you're actually trying to figure out legal arguments, you actually want it to be creative, right? Again, there's creativity and then there's making stuff up; like, what's the line? You actually want it to explore different hypotheses, right? You want to do kind of the legal version of improv or something like that, where you want to float different theories of the case, and different possible arguments for the judge, and different possible arguments for the jury, and, by the way, different routes through the,
24:21
you know, sort of history of all the case law. And so they said, actually, for a lot of what we want to use it for, we actually want it in creative mode, and then basically we just assume that we're going to have to cross-check all the specific citations. And so I think there are going to be more shades of gray in here than people think. And then I'd just add, you know, another one of these trillion-dollar kind of questions is ultimately the verification thing. And so, will LLMs be evolved from here to be able to do their own factual verification? Will you have sort of an add-on,
24:51
like Wolfram Alpha, right, and other plugins, where that's the way you do the verification? You know, another idea, by the way, is you might have a community of LLMs. So, for example, you might have the creative LLM, and then you might have a literal LLM fact-check it, right? So there's a variety of different technical approaches that are being applied to solve the hallucination problem. You know, some people, like Yann LeCun, argue that this is inherently an unsolvable problem, but most of the people working in the space, I think, think that there are a number of practical ways to kind of corral this in a little bit.
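A toy version of that "community of LLMs" idea: a creative drafting pass followed by a literal fact-checking pass over the draft. Note this only flags claims for human or tool follow-up (e.g., a Wolfram Alpha-style plugin); it is not real external verification. Model name and prompts are assumptions:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # illustrative

def draft(question: str) -> str:
    """Creative LLM: explore theories freely; hallucination is tolerated here."""
    resp = client.chat.completions.create(
        model=MODEL,
        temperature=1.0,   # high temperature: favor exploration
        messages=[{"role": "user", "content": f"Float several theories: {question}"}],
    )
    return resp.choices[0].message.content

def fact_check(text: str) -> str:
    """Literal LLM: enumerate checkable claims and label each one."""
    resp = client.chat.completions.create(
        model=MODEL,
        temperature=0.0,   # low temperature: be literal
        messages=[{"role": "user", "content": (
            "List every factual claim and citation in the text below, and label "
            "each as SUPPORTED, UNSUPPORTED, or NEEDS-SOURCE:\n\n" + text
        )}],
    )
    return resp.choices[0].message.content

print(fact_check(draft("What arguments could each side make in a fair-use case?")))
```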
25:19
Yeah.
25:21
If you were to tell me about Wikipedia before Wikipedia was created, I would have laughed at the possibility of something like that being possible: just a handful of folks organizing and self-moderating, in a mostly unbiased way, the entirety of human knowledge. I mean, so if something like the approach that Wikipedia took is possible here,
25:45
that's really exciting. Well, yeah, that's
25:46
possible. And in fact, Wikipedia today is still not deterministically correct, right? So you cannot take to the bank, right, every single thing on every single page. But it is probabilistically correct, right? And specifically, the way I describe Wikipedia to people is: it is more likely that Wikipedia is right than any other source you're going to find. Yeah. It's this old question, right, of, like, okay, are we looking for perfection? Are we looking for something that asymptotically approaches perfection? Are we looking for something that's just better than the alternatives?
26:13
And Wikipedia, right, has, exactly to your point, proven to be, like, overwhelmingly better than people thought. And I think that's where this ends. And then underneath all this is the fundamental question of where you started, which is, okay, what is truth? How do we get to truth? How do we know what truth is? And we live in an era in which an awful lot of people are very confident that they know what the truth is, and I don't really buy into that. And I think the history of the last, you know, two thousand years or four thousand years of human civilization shows that actually getting to the truth is a very difficult thing to do.
26:44
Are we getting closer? If we look at the entirety of the arc of human history, are we getting closer to the truth? I don't know. Okay, is it possible we're getting very far away from the truth because of the internet, because of how rapidly you can create narratives, and the entirety of a society just moves, like crowds, in a hysterical way, along those narratives that don't necessarily have grounding in the truth?
27:13
Sure, but, like, you know, we came up with communism before the internet somehow, right? Which, I would say, had rather larger issues than anything we're dealing with today. It had issues in the way it was implemented, and in its theoretical structure it had, like, real issues: like a very deep, fundamental misunderstanding of human nature and economics. Yeah. But all those folks, sure, were very confident it was the right way. They were extremely confident. And my point is, they were very confident thirty-nine hundred years into what we would presume to be a long turn towards the truth. Yeah. And so
27:44
my assessment is, number one, there's no need for the Hegelian dialectic to actually converge towards the truth. Like, apparently not. Yeah. So why are
27:58
we so obsessed with there being one truth? Is it possible there are just going to be multiple truths, like local communities that believe certain things, and...
28:06
I think, number one, it's just really difficult. Like, historically, who gets to decide what the truth is? It's either the king or the
28:13
priest, right? And we don't live in an era anymore of kings and priests dictating it to us, and so we're kind of on our own. And so my typical thing is: we just need a huge amount of humility, and we need to be very suspicious of people who claim that they have the capital-T definite truth. And then, look, the good news is the Enlightenment has bequeathed us a set of techniques to be able to presumably get closer to truth, through the scientific method and rationality and observation and experimentation and hypothesis, and we need to continue to embrace those, even when they give us
28:43
answers we don't like.
28:45
Sure. But the internet and technology have enabled us to generate a huge amount of content, of data, and that sort of damages the hope embedded within the scientific process, because if you just have a bunch of people stating facts on the internet, and some of them are going to be LLMs,
29:13
how is anything testable at all? Especially anything that involves, like, human nature, things like this. It's not
29:17
physics. Here's a question a friend of mine just asked me on this topic. Suppose you had LLMs, the equivalent of GPT-4, or even 5, 6, 7, 8; suppose you had them in the 1600s, and Galileo comes up for trial, right? And you ask the LLM, like, is Galileo right? What does it answer? And one theory is it answers, no, he's wrong, because the overwhelming majority of human thought up to that point was that he was wrong, and so therefore that's what's in
29:43
the training data. Yeah. Another way of thinking about it is, well, a sufficiently advanced LLM will have evolved the ability to actually check the math, right? And will actually say, actually, no, you may not want to hear it, but he's right. Now, if the church at that time was, you know, running the LLM, they would have given it human feedback to prohibit it from answering that question, right? And so I like to take it out of our current context, because that makes it very clear: those same questions apply today, right? This is exactly the point
30:13
of a huge amount of the human feedback training that's actually happening with these LLMs today. This is the huge, like, debate that's happening about whether open-source AI, you know, should be legal.
30:21
Well, the actual mechanism of doing the RL with human feedback
30:28
seems like such a fundamental and fascinating question. How do you select the humans? Exactly.
30:35
How do you select the humans? AI alignment, right, which everybody is like, oh, that's great. Alignment with what? Human values? Whose human values? So we're in this mode of, like, social and popular discourse where, you know, you see this: what do you think when you read a story in the press right now, and they say, you know, XYZ made a baseless claim about some topic, right? And there's
30:58
one group of people who are like, aha, good, you know, they're doing fact-checking. There's another group of people that are like, every time the press says that, it's not actually baseless, and it means that they're lying, right? So we're in this social context where the level to which a lot of people in positions of power have become very, very certain that they're in a position to determine the truth for the entire population... it's like some bubble has formed around that idea, and, at least to me, it flies completely in the face of everything I was ever trained about science
31:28
and about reason, and strikes me as, you know, deeply offensive and incorrect. What would you say about the
31:33
state of journalism, just on that topic, today? Are we experiencing a temporary problem in terms of the incentives, in terms of the business model, all that kind of stuff? Or is this, like, a decline of traditional journalism as we know it?
31:54
Well, first, think about the counterfactual in these things, which is, like, okay,
31:58
because this question heads towards, like, okay, the impact of social media on the undermining of truth and all this. But then you want to ask the question of, like, okay, what if we had had the modern media environment, including cable news and including social media and Twitter and everything else, in 1939 or 1941, right, or 1910 or 1865 or 1850 or 1776, right? And, like, I think
32:20
you just introduced, like, five thought experiments at once and broke my head. But yes, those are a lot of interesting
32:26
years. And, kind of, like...
32:27
Can I just take a simple example? Kind of, like, how would President Kennedy have been interpreted, with what we know now about all the things Kennedy was up to? Like, how would he have been experienced by the body politic in the US in a modern social media context, right? Like, how would LBJ have been experienced, by the way? How would, you know, FDR, like, the New Deal, the Great Depression? I wonder what Twitter would do.
32:51
Just think about Churchill and Hitler and Stalin,
32:55
you know? I mean, look, to this day, there
32:58
are lots of very interesting real questions around, like, how America, you know, got basically involved in World War Two, and who did what when, and the operations of British intelligence on American soil, and did FDR know this or that about Pearl Harbor, you know? Woodrow Wilson ran, you know... his candidacy was run on an anti-war platform; he ran on the platform of not getting into World War One, and somehow that switched, you know? And I'm not even making a value judgment on these things. I'm just saying the way that our ancestors experienced reality was, of course, mediated through centralized, top-down control at that point.
33:28
If you ran those realities again with the media environment we have today, the reality would be experienced very, very differently. And then, of course, that intermediation would cause the feedback loops to change, and then reality would obviously play out differently. You think history would be very different? Yeah, it has to be, just because, I mean, just look at what's happening today. The most obvious thing is just the collapse, and here's another opportunity to argue that it's not the internet causing this, by the way. Here's a big thing happening today, which is, Gallup does this thing every year where they poll
33:58
trust in institutions in America, and they do it across all the different things: the military, clergy, big business, the media, and so forth, right? And basically there's been a systemic collapse in trust in institutions in the US, almost without exception, basically since essentially the early 1970s. There are two ways of looking at that. One is, oh my God, we've lost this old world in which we could trust institutions, and that was so much better, because that should be the way the world runs. The other way of looking at it is, we just know a lot more now, and the great mystery is why those numbers aren't all zero. Yeah.
34:28
Right. Because, like, now we know so much about how all these things operate, like, they're not that impressive,
34:32
and also why we don't have better institutions and better leaders
34:36
then. Yeah. And so of course the thing is, like, okay: had we had the media environment that we've had between the 1970s and today, if we had that in the '30s and '40s, or the 1900s, 1910s, I think there's no question reality would have turned out differently, if only because everybody would have known not to trust the institutions, which would have changed their level of credibility, their ability to control circumstances, and therefore the circumstances would have had to
34:58
change, right? It would have been a feedback loop process. In other words, your experience of reality changes reality, and then reality changes your experience of reality, right? It's a two-way feedback process, and media is the intermediating force between that. So change the media environment, change reality. Yeah. And so, just as a consequence, I think it's really hard to say, oh, things worked a certain way then and they work a different way now, and therefore people were smarter then, or better then, or, you know,
35:27
by the way, dumber then, or not as capable then, right? We make all these really light and casual comparisons of ourselves to previous generations of people; we draw judgments all the time, and I just think it's really hard to do any of that, because if we put ourselves in their shoes with the media that they had at that time, I think we most likely would have been just like them. Don't you think that our perception and understanding of reality will be more and more mediated through large language models now? So you said
35:57
media before. Isn't the LLM going to be the new, what is it, mainstream media, the MSM? It'll be the LLM. That will be the source of... I'm sure there's a way to rapidly fine-tune, to make LLMs real-time. I'm sure there's a research problem there, where you can do rapid fine-tuning on new events, something like this. Or even just... the whole concept of the chat UI might not be the thing. Like, the chat UI is just the first whack at this, and maybe that's the dominant thing, but...
36:27
Look, maybe, or maybe not. We don't know yet. Maybe the experience most people have of LLMs is just a continuous feed. Maybe it's more of a passive feed, and you're just getting a constant, like, running commentary on everything happening in your life, and it's just helping you to kind of interpret and understand everything. Also, maybe it's more deeply integrated into your life: not just, like, intellectual philosophical thoughts, but literally, like, how to make coffee, where to go for lunch, whether it be, you know, dating, all this kind of stuff.
36:57
In an interview. Yeah, it'll just say, yeah, what to say, the next
37:00
sentence? Yeah, at that level. Yeah, I mean, yes, technically that's possible now. Whether we want it or not is an open question, right? And what will you get for a pop-up?
37:09
A pop-up right now: your estimated engagement is decreasing; for Marc Andreessen, there's a controversy section on the Wikipedia page, in 1993 something happened or something like this, bring it up, that will drive engagement up. Anyway...
37:25
Yes, that's right. I mean, look, this gets into
37:27
this whole thing of, like... so you know the chat interface has this whole concept of prompt engineering, right? It turns out one of the things that LLMs are really good at is writing prompts, right? Yeah. And so, like, what if you just outsource it? By the way, you can run this experiment today. You could hook this up to do this today; the latency is not good enough to do it in real time in a conversation, but you could run this experiment where you just say, look, every 20 seconds you could just say, you know, tell me what the optimal prompt is, and then ask yourself that question and give me the result.
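That experiment could look something like the loop below: on a fixed cadence, ask the model to write the optimal prompt for the current context, then feed that prompt back to it. The 20-second figure is from the conversation; everything else (model name, wording) is an assumption:

```python
import time
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # illustrative

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

context = ""  # in a real build: a rolling transcript from a mic, feeds, etc.
while True:
    # Step 1: outsource the prompt engineering to the model itself.
    best_prompt = ask(
        "Given the recent context below, write the single most useful prompt "
        f"I should ask you right now. Reply with the prompt only.\n\n{context}"
    )
    # Step 2: have it ask itself that question and surface the result.
    print(ask(best_prompt))
    time.sleep(20)  # the every-20-seconds cadence from the conversation
```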
37:57
And then, exactly to your point, these systems are going to have the ability to be learned and updated essentially in real time. And so you'll be able to have a pendant, or your phone, or a watch, or whatever, and it'll have a microphone on it, it'll listen to your conversations, it'll have a feed of everything else happening in the world, and then it'll be, you know, sort of re-prompting and retraining itself on the fly. And so the scenario you described is actually a completely doable scenario. Now, the hard question on this is always: okay, that's possible, but are people going to want that? What's the form of the experience? We won't know until we try it. But I don't think it's possible yet to
38:27
predict the form of AI in our lives, and therefore it's not possible to predict the way in which it will intermediate our experience with reality yet. Yeah. But it feels like there's going to be a killer app. There's probably a mad scramble right now inside OpenAI and Microsoft and Google and Meta, and then startups and smaller companies, figuring out what the killer app is. Because it feels like it's possible, like a ChatGPT type of thing, it's possible to build something that's 10x more compelling
38:58
than what we have already, using even the open-source LLMs, LLaMA and the different variants. You're investing in a lot of companies, and you're paying attention. Who do you think is going to win this? Who's going to be the next PageRank inventor?
39:16
Trillion-dollar question.
39:18
Another one. We have a few of those today. A bunch of
39:20
those. So look, sitting here today, there's a really big question about the big models versus the small models. That's related directly to the big question
39:27
of proprietary versus open. Then there's this big question of, you know, where's the training data going? Like, are we topping out on the training data or not? And then, are we going to be able to synthesize training data? And then there's a huge pile of questions around regulation and, you know, what's actually going to be legal. And so, when we think about it, we dovetail kind of all those questions together. You can paint a picture of the world where there are two or three god models that are just at, like, staggering scale, and they're just better at everything.
39:57
And they will be owned by a small set of companies, and they will basically achieve regulatory capture over the government, and they'll have competitive barriers that will prevent other people from competing with them. And so there will be, just like there are, you know, whatever, three big banks, or, by the way, three big search companies, or I guess two, you know, it'll centralize like that. You can paint another very different picture that says, no, actually the opposite of that is going to happen. This is basically the new gold rush, alchemy, like, you know, this is the
40:27
big bang for this whole new area of science and technology, and so therefore you're going to have every smart 14-year-old on the planet building open source, right, you know, figuring out ways to optimize these things. And then, you know, we're just going to get, like, overwhelmingly better at generating training data. We're going to bring in, like, blockchain networks to have, like, an economic incentive to generate decentralized training data, and so forth and so on. And then basically we're going to live in a world of open source, and there's going to be a billion LLMs, right, of every size, scale, shape, and description. And there might be a few big ones that are like the super-genius ones, but, like, mostly
40:57
what we'll experience is open source. And that's, you know, more like the world we have today with, like, Linux and the web.
41:04
So, okay, you painted these two worlds, but there are also variations of those worlds, because with regulatory capture, it's possible to have these tech giants that don't have regulatory capture, which is something you're also calling for: saying it's okay to have big companies working on this stuff, as long as they don't achieve regulatory capture. But I have this sense that
41:27
there's just going to be a new startup that's going to basically be the PageRank inventor, which will become the new tech giant. I would love to hear your opinion on whether Google, Meta, and Microsoft, as gigantic companies, are able to pivot so hard, to create new products. Some of it is just even hiring people, or having a corporate structure that allows for the crazy young kids
41:58
to come in and just create something totally new. Do you think it's possible? Or do you think it'll come from a
42:02
startup? Yeah, there is this always-big question, which is, you get this feeling, I hear about this a lot from CEOs, founder CEOs, where it's like, wow, we have 50,000 people; it's now harder to do new things than it was when we had 50 people. Like, what has happened? So that's a recurring phenomenon, by the way. That's one of the reasons why there are always startups and why there's venture capital; that's like a timeless kind of thing. So that's one observation.
42:27
On PageRank specifically, there actually is a PageRank already in the field, and it's the Transformer, right? So the big breakthrough was the Transformer, and the Transformer was invented in 2017 at Google. And this is actually a really interesting question, because it's like, okay, the Transformer, like, why does OpenAI even exist? The Transformer was invented at Google. Why didn't Google... I asked a guy I know who was senior at Google Brain kind of when this was happening, and I said, if Google had just gone flat out to the wall and just said,
42:57
we're going to launch the equivalent of GPT-4 as fast as we can, when could we have had it? And he said, 2019. They could have just done a two-year sprint with the Transformer and had it, because they already had the compute at scale, they already had all the training data. They could have just done it. There's a variety of reasons they didn't do it. This is like a classic big-company thing. IBM invented the relational database in the 1970s and let it sit on the shelf as a paper. Larry Ellison picked it up and built Oracle. Xerox PARC invented the interactive computer; they let it sit on the shelf, and Steve Jobs came and turned it into the Macintosh.
43:27
So there is this pattern. Now, having said that, sitting here today, Google's in the game, right? So Google, you know, maybe they let, like, a four-year gap go there that they maybe shouldn't have, but they're in the game. And so now they're committed: they've done this merger with DeepMind, they're bringing in Demis, they're piling in resources. There are rumors that they're building up an incredible, you know, super-LLM, way beyond what we even have today. And they've got, you know, unlimited resources, and they've been challenged on their honor. Yeah.
43:57
I had a chance to hang out with Sundar Pichai a couple of days ago, and we took this walk past this giant new building where there's going to be a lot of AI work being done, and there's kind of
44:09
this ominous feeling of
44:12
like, the fight is on. Yeah. This is beautiful Silicon Valley nature, like, birds are chirping, and this giant building, and it's like the beast has been awakened. Yeah. All the big companies are waking up to this. They have the compute, but also the little guys, it feels like, have all the tools to create the killer product. And then there are also tools to scale, if you have a good idea, if you have the
44:42
PageRank idea. There are several things there: there's PageRank the algorithm and the idea, and there's, like, the implementation of it. And I think a killer product is not just the idea, like the Transformer; it's the implementation, something really compelling about it, like you just can't look away. Something like the algorithm behind TikTok versus TikTok itself, the actual experience of TikTok, you can't look away. It feels like somebody's going to come up with that, and it could be Google, but it feels
45:12
like it's just easier and faster to do for a startup.
45:16
Yeah, so the huge advantage startups have is there are just no sacred cows. There's no historical legacy to protect, there's no need to reconcile your new plan with the existing strategy, there's no communication overhead. Big companies, they've got the pre-meetings planning for the meeting, then they have the meeting, then they have the post-meeting, the recap, then they have the presentation to the board, and then they have the next round of meetings. Yeah. And in the elapsed time of the meetings, the startup launches its product, right? So there's a timeless
45:42
dynamic there. Now, what startups don't have is everything else, right? So startups don't have a brand, they don't have customer relationships, they've got no distribution, they've got no scale. I mean, sitting here today, they can't even get GPUs, right? There's, like, a GPU shortage; startups are literally stalled out right now because they can't get chips. It's just, like, super weird. Yeah, they've got the cloud, but the clouds have run out of chips, right? And then, to the extent the clouds have chips, they allocate them to the big customers, not the small customers, right? And so the small companies lack everything other than the ability to
46:12
do something new, right? And this is the timeless race and battle. This is kind of the point I tried to make in the essay, which is, both sides of this are good. It's really good to have highly scaled tech companies that can do things that are at staggering levels of sophistication. It's really good to have startups that can launch brand-new ideas. They ought to be able to both do that and compete, and neither one ought to be subsidized or protected from the other. That's, to me, very clearly the idealized world. It is the world we've been in for AI up until now. And then, of course, there are people trying to shut that down, but my hope is that,
46:42
you know, the best outcome clearly will be if that continues. We'll talk about that a little bit, but I'd love to linger on some of the ways this is going to change the internet. So, I don't know if you remember, but there was a thing called Mosaic, and there was a thing called Netscape Navigator. You were there in the beginning. What about the interface to the internet? How do you think the browser changes, and who gets to own the browser? We got to see some very interesting browsers: Firefox, I mean, all the variants of Microsoft Internet
47:12
Explorer, Edge, and now Chrome. It seems like a dumb question to ask, but what do you think, will we still have the web browser?
47:24
So, I have an eight-year-old, and he's super into, like, Minecraft and learning to code and doing all this stuff. So, of course, I was very proud: I could bring sort of fire down from the mountain to my kid, and I brought him ChatGPT and I hooked him up on his laptop. And I was like, you know, this thing is going to answer all your questions. And he's like, okay. And I'm like, but it's going to answer all your questions. And he's like, well, of course, it's a computer, of course it answers all your questions. What else would a computer be good for, Dad? Not impressed in the least. Two weeks pass.
47:54
And he has some question, and I say, well, have you asked GPT? And he's like, Dad, Bing is better. And why is Bing better? Because it's built into the browser. He's like, look, I have the Microsoft Edge browser, and it's got Bing right here. And, he doesn't know this yet, but one of the things you can do with Bing in Edge is there's a setting where you can use it to basically talk to any web page, because it's sitting right there next to the browser. And, by the way, that includes PDF documents. And so, the way they've implemented it in Edge with Bing, you can load a
48:24
PDF and then ask it questions, which is the thing you can't currently do in plain ChatGPT. So they're going to push the envelope, I think that's great; they're going to push on that and see if there's a combination thing there. Google's rolling out this thing, the magic button, which is implemented in Google Docs, right? And so you go to Google Docs and you create a new document, and instead of starting to type, you just press the button and it starts to generate content for you, right? Like, is that the way that it'll work?
48:53
Is it going to be a speech UI where you just have an earpiece and talk to it all day long? These are exactly the kinds of things I don't think are possible to forecast. I think what we need to do is run all those experiments. And maybe we come out of this with a super browser that has AI built in and it's just amazing. Look, there's a real possibility here that the whole idea of a screen and windows and all this stuff just goes away, because why do you need that if you just have a thing that's
49:24
just telling you whatever you need to know?
49:26
And also there's apps. On the desktop you don't really use them; there's one window, the browser, with which you interact with the internet. But on the phone you also have apps. So I can interact with Twitter through the app or through the web browser, and that seems like an obvious distinction. But why have the web browser in that case, if one of the apps starts becoming the everything app?
49:54
Which is what they want to try to do with Twitter. But there could be others. It could be a Bing app, it could be a Google app that doesn't really do search but just does
50:03
what, I guess, AOL did back in the day or something, where it's all right there. And it changes the nature of the internet, because
50:17
where the content is hosted, who owns the data, who owns the content, what kind of content you create, how you make money by creating content, all the content creators, all of that changes. Or it could all just keep being the same, where the nature of the web page changes and the nature of the content changes, but there's still a web browser, because the browser is a pretty sexy product. It just seems to work, because you have an interface window into the world, and then the world can be anything
50:46
you
50:47
want. And the world will evolve: there could be different programming languages, it could be animated, maybe it's three-dimensional, and so on. Yeah, it's interesting. Do you think we'll still have the web
50:56
browser? Every medium becomes the content for the next one. So the AI will be able to give you a browser whenever you want one. Huh, interesting. Yeah. Another way to think about it is maybe the browser is just the escape hatch, right? Which is maybe kind of what it is today, right? Most of what you do is inside a social network, or inside a search engine, or
51:16
inside somebody's app, or inside some controlled experience, right? But then every once in a while there's something where you actually want to jailbreak, you want to actually get free.
51:24
The web browser is the FU to the man. You're allowed to... that's the free internet. Yeah, back the way it was in the
51:31
'90s. So here's something I'm proud of that nobody really talks about: the web browsers and the web servers out there are still backward compatible all the way back to like 1992, right? The big breakthrough of the web early on was that it made it really easy to read, but it also made it really easy to
51:46
write, right? It made it really easy to publish. And we made it not only easy to publish content, it was actually also easy to write a web server. You can literally write a web server in four lines of real code and start publishing content on it, and you can set whatever rules you want for the content: whatever censorship, no censorship, whatever you want. As long as you had an IP address, you could do that. And that still works, right? That still works exactly as I just described.
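To make the "four lines" point concrete, here is a minimal sketch, in modern Python rather than the C of the era, so the exact line count is illustrative, not historical:

```python
# A complete, working web server: serves the files in the current
# directory at http://localhost:8000/ until you kill the process.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```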
52:16
So this is part of my reaction to all of this censorship pressure and all these issues around control, which is: maybe we need to get back a little bit more to the Wild West. The Wild West is still out there. Now, yes, they will try to chase you down. People who want to censor will try to take away your domain name, and they'll try to take away your payments account and so forth, if they really don't like what you're saying. But nevertheless, unless they're literally intercepting you at the ISP level, you can still put up a thing. So I don't know, I think that's important to preserve, right? Because
52:46
one is just the freedom argument, but the other is the creativity argument, which is: you want to have the escape hatch so that the kid with the idea is able to realize the idea. Because, your point on PageRank, you actually don't know what the next big idea is. Nobody called Larry Page and told him to develop PageRank; he came up with that on his own. And you want to always, I think, leave the escape hatch for the next kid or the next grad student to have the breakthrough idea and be able to get it up and running before anybody
53:08
notices. You and I both have a history here, so let's step back. We've been talking about the future; I'll step back for a bit and look
53:16
at the '90s. You created the Mosaic web browser, the first widely used web browser. Tell the story of that, and how did it evolve into Netscape Navigator? The early days.
53:28
So, the full story: I was born... I was born a small child... Actually, let's go there: when did you first fall in love with computers? Oh, so I hit the generational jackpot. I hit the Gen X kind of point perfectly, as it turns out. I was born in 1971, so
53:46
there's a great website called wtfhappenedin1971.com, which is basically about how the 1970s were when everything started to go to hell. And I was, of course, born in 1971, so I like to think that I had something to do with that.
53:56
Did you make it on the website?
53:58
I don't think I made it onto the website, but, you know, somebody needs to
54:01
add that. This is where
54:02
everything... Maybe I contributed to some of the trends. Every line on that website goes like that, right? So it's all a disaster. But there was this moment in time, because, you know, the Apple II hit in like
54:16
'78, and then the IBM PC hit in '82. So I was, you know, 11 when the PC came out, and I just kind of hit that perfectly. That was the first moment in time when regular people could spend a few hundred dollars and get a computer, right? And so that resonated right out of the gate. And then the other part of the story is, I was using an Apple II, I'd spent a bunch of hours using the Apple II, and of course it said on the back of every Apple II, and every Mac, you know, designed in Cupertino, California. And I was like, wow, Cupertino must be like the shining city on the hill, like the Wizard of Oz, like the most amazing
54:46
city of all time, I can't wait to see it. Of course, years later I came out to Silicon Valley and went to Cupertino, and it's just a bunch of office parks and low-rise buildings. So the aesthetics were a little disappointing, but, you know, it was the vector, right, of the creation of a lot of this stuff. Yeah, so part of my story is just the luck of having been born at the right time and getting exposed to PCs. And the other part is: when Al Gore says that he created the internet, he actually is correct
55:16
in a really meaningful way, which is that he sponsored a bill in 1985 that essentially created the modern internet, what was called the NSFNET at the time, which was sort of the first really fast internet backbone. And that bill dumped a ton of money into a bunch of research universities to build out basically the internet backbone, and the supercomputer centers that were clustered around it. And one of those universities was the University of Illinois, where I went to school. And so the other stroke of luck I had was that I went to Illinois basically right as that money was getting dumped on campus. And so,
55:46
as a consequence, we had on campus, and this is like '89, '90, '91: we were right on the internet backbone, we had a T3, a 45-megabit backbone connection, which at the time was wildly state-of-the-art. We had Cray supercomputers, we had Thinking Machines parallel supercomputers, we had Silicon Graphics workstations, we had Macintoshes, we had NeXT Cubes all over the place. We had every possible kind of computer you could imagine, because all this money just fell out of the sky, and we were living in the future. Yeah. So quite literally, it was all
56:16
there: it's all that's required for broadband, graphics, like the whole thing. And it's actually funny, because this was the first time something sort of tickled the back of my head that there might be a big opportunity in here, which is: they embraced it. They put computers in all the dorms and they wired up all the dorm rooms, they had these labs everywhere and everything, and they gave every undergrad a computer account and an email address. And the assumption was that you would use the internet for your four years of college, and then you would graduate and stop using it.
56:47
And that was that, right? Yeah. You'd just retire your email address; it wouldn't be relevant anymore, because you'd go off into the workplace, and they don't use email there. You'd be back to using fax machines or whatever. How
56:55
did that sound to you? Like you said, the back of your head was tickled. What was exciting to you about this
57:02
possible world? Well: if this is so useful in this contained environment that just happens to have this weird source of outside funding, then if it were practical for everybody else to have this, and if it were cost-effective for everybody else to have this, wouldn't they want it? And overwhelmingly, the prevailing view
57:16
at the time was: no, they would not want it. This is esoteric, weird nerd stuff, right, that computer science kids like, but normal people are never going to do email or be on the internet, right? And so I was just like, wow, this is actually really compelling stuff. Now, the other part was, it was all really hard to use. In practice, you had to be basically a serious CS undergrad or equivalent to actually get full use of the internet at that point, because it was all pretty esoteric stuff. So that was the other part of the idea, which was: okay, we need to actually make this easy to use.
57:45
So what was involved
57:46
then in creating Mosaic, in creating a graphical interface to the
57:51
internet? Yeah, so it was a combination of things. The web existed in an early, sort of, let's describe it as prototype form, and, by the way, text-only at that point. What did it look like? What was the web at that point? And the key
58:03
figures? What was it? What was it like? Can you
58:06
paint a picture? It looked like ChatGPT, actually; it was all text. Yeah, and so you had a text-based web browser. Well, actually, the original browser, Tim Berners-Lee's,
58:16
both the original browser and the original server, actually ran on NeXT Cubes. So this was the computer Steve Jobs made during the decade-long interim period when he was not at Apple: he got fired in '85 and came back in '97. So this was in that interim period, when he had this company called NeXT, and they made these computers literally called Cubes. They were beautiful, but they were 12-inch by 12-inch by 12-inch cube computers, and there's a famous story about how they could have cost half as much if they'd been 12 by 12 by 13, but Steve
58:46
was just like, no, it has to be a cube. So they were like six-thousand-dollar, basically academic, workstations. They had the first CD-ROM drives, which were slow; I mean, the computers were all but unusable, they were so slow. But they were
58:58
beautiful. Actually, let's take a tiny tangent there. Sure, of course. The 12 by 12 by 12 just so beautifully encapsulates Steve Jobs' idea of design. Can you just comment on what you find interesting about Steve Jobs, about that
59:16
view of the world, that dogmatic pursuit of perfection, in how he saw perfection in design?
59:23
Yes. I guess I'd say he was a deep believer; the way I interpret it, and I don't know that he ever really described it like this, is that it's actually a thing in philosophy: aesthetics are not just appearances. Aesthetics go all the way down to the underlying meaning, right? I'm not a physicist, but one of the things I've heard physicists say is that you start to get a sense that a theory might be correct when it's beautiful, right? Like,
59:46
you know, right. And you feel the same thing, by the way, in human psychology, right? When you're experiencing awe, there's a simplicity to it. When you're having an interaction with somebody, there's an aesthetic: a calm comes over you, because you're actually being fully honest and not trying to hide yourself, right? So it's like this very deep sense of
1:00:07
aesthetics. And he would trust that judgment that he had, even if the engineering teams were saying this is too
1:00:16
difficult, even if the finance folks were saying this is ridiculous, the supply chain, all that kind of stuff, makes it impossible to manufacture, we can't do this kind of material, this has never been done before, and so on and so forth. He just stuck by it. Well, I mean, who makes a phone out of aluminum, right? Nobody else would have done that. And now, of course, your phone is made out of aluminum; I mean, how crude, what kind of caveman would you have to be to have a phone made out of plastic, right? So there are
1:00:46
different ways to look at this, but one of them is just: look, these things are central to your life. You're with your phone more than you're with anything else; it's going to be in your hand. He thought very deeply about what it meant for something to be in your hand all day long. Yeah. For example, here's an interesting design thing: it's my understanding that he never wanted an iPhone to have a screen larger than you could reach with your thumb one-handed. And so he was actually opposed to the idea of making the phones larger. And I don't know if you have this experience today, but let's say there are certain moments in your day when you
1:01:16
might only have one hand available and you might want to be on your phone. Yeah, if you're trying to text, your thumb can't reach the send button. Yeah. I mean, there's pros and cons, right? And then there's folding phones, which I would love to know what he would have thought about. But is there something you could also just linger on, because he's one of the interesting figures in the history of technology: what makes someone as successful as he was, what makes them as interesting as he was, what made him
1:01:47
so productive and important in the development of
1:01:51
technology? He had an integrated worldview. The properly designed device that had the correct functionality, that had the deepest understanding of the user, that was the most beautiful: it had to be all of those things, right? He basically would drive to as close to perfect as you could possibly get. And I suspect he never quite thought he got there, because most great creators are generally dissatisfied; you read accounts later on, and all they can see are the flaws
1:02:16
in the creation. But he got as close to perfect at each step of the way as you could possibly get within the constraints of the technology of his time. And that's sort of famously the Apple model. Look at this headset they just came out with: it was like a decade-long project, right? They're just going to sit there and tune and polish, and tune and polish, until it is as perfect as anybody could possibly make anything. Yeah. And this goes to the way that people describe working with him, which is: there was a terrifying aspect of working with him, which is he
1:02:46
was very tough. But there was this thing that everybody I've ever talked to who worked for him says, they all say the following: we did the best work of our lives when we worked for him, because he set the bar incredibly high, and then he supported us with everything he could to let us actually do work of that quality. So a lot of people who were at Apple spend the rest of their lives trying to find another experience where they feel like they're able to hit that quality bar
1:03:08
again. Even if, in retrospect, doing it felt like suffering. Yeah, exactly.
1:03:14
What does that teach you about the human condition?
1:03:18
So look, he's not the only one. There's, you know, George Patton in the Army; there are many examples in other fields of people like this. Specifically in tech, I actually find it very interesting: there's the Apple way, which is polish, polish, polish, and don't ship until it's as perfect as you can make it. And then there's the other approach, which is the sort of incremental hacker
1:03:44
mentality, which basically says: ship early and often, and iterate. And one of the things I find really interesting is, I'm now 30 years into this, and there are very successful companies on both sides of that approach. That is a fundamental difference in how to operate, how to build, how to create, and you have world-class companies operating in both ways. And I don't think the question of which is the superior model is anywhere close to being answered. My suspicion is the answer is: do both. You actually want both.
1:04:14
They do lead to different outcomes. Software tends to do better with the iterative approach; hardware tends to do better with the sort of wait-and-make-it-perfect approach. But again, you can find examples in both directions.
1:04:28
So the jury's still out on that one. So, back to Mosaic. So before it there was the text-based web, Tim
1:04:37
Berners-Lee's work? There was the web, which was text-based, but there was, I mean, there were like three websites. There was like no content. There were no
1:04:44
users. It wasn't like there had been a catalytic moment yet. And by the way, because it was all text, there were no documents, there were no images, there were no videos, right? And in the beginning, you had to be on a NeXT Cube; you needed a NeXT Cube both to publish and to consume. So, those were 6,000 bucks. You said, so there were limitations. Yeah, a six-thousand-dollar PC, and they did not sell very many. But then there was also FTP, and there was Usenet, right? And there were a dozen other things; there was WAIS, which was an early search thing, and there was Gopher, which was a
1:05:14
menu-based information retrieval system. There were like a dozen different sort of scattered ways that people would get to information on the internet. And so the Mosaic idea was basically: bring those all together, make the whole thing graphical, make it easy to use, make it basically bulletproof so that anybody can do it. And then, again, just on the luck side, it so happened that this was right at the moment when graphics, when the GUI, sort of actually took off. We're now so used to the GUI that we think it's been around forever, but the Macintosh brought it out in '85, and they actually didn't sell very many Macintoshes.
1:05:44
It was not that successful a product. It really was Windows 3.0 on PCs, and that hit in about '92. And we did Mosaic in '92 and '93. So it was right at the moment when you could imagine actually having a graphical user interface at all, much less one to the
1:06:02
internet. How well did Windows 3 sell? It was really, really big. It was a big bang, the big graphical operating system.
1:06:11
Which is the classic, okay, so that's Microsoft operating on the other
1:06:14
model. Steve, Apple, was running on the polish-until-it's-perfect model. Microsoft famously ran on the other model, which is ship and iterate. The old line in those days was: it's version 3 of every Microsoft product that's the good one, right? And you can find them online: Windows 1, Windows 2, nobody used them. Actually, in the original Microsoft Windows, the windows were not overlapping. You had these very small, very low-resolution screens, and it literally just didn't work. It wasn't ready yet. Well, and Windows 95, I think, was a pretty big deal
1:06:44
too. So that was a big leap too. Yeah, so that was like bang, bang. And then, of course, in the fullness of time, Steve came back, and then the Mac took off again; that was a third bang. And then the iPhone was a fourth bang. Such an exciting time. And then we were off to the
1:06:57
races. Because nobody could have known what would be created from
1:07:01
that. Well, Windows 3.1, or 3.0... Windows 3.0 to the iPhone was only 15 years.
1:07:07
Right. That ramp, in retrospect... at the time it felt like it took forever, but in historical terms that was a very fast ramp from even having a graphical computer on your desk at all to the iPhone: 15 years.
1:07:19
Did you have a sense of what the internet would be as you looked into the window of Mosaic? Like, there were just a few web pages at the time.
1:07:28
So the thing I had early on... I was keeping, at the time... there are these disputes over what was the first blog, but I had one of them that at least is a
1:07:36
possible runner-up in the competition, and it was what was called the What's New page. And it was hardwired in, a distribution unfair advantage: I wired it, I put it in the browser. And then I put my resume in the browser. Yes, it was hilarious. Not many people get to do that. So you
1:08:05
could call it an early
1:08:06
version of today's personal web page? Yes, it's so interesting: an "about me, I'm looking for a job" page. So, the What's New page: I would literally get up every morning and every afternoon, and basically, if you wanted to launch a website, you would email me, and I would list it on the What's New page. And that was how people discovered the new websites as they were coming out. And I remember it literally went from like one every couple of days, to one every day, to two every day,
1:08:36
ba-boom, and
1:08:37
then you're doing it... So that blog was kind of doing the directory thing. So what was the home
1:08:41
page? The homepage was just basically trying to explain even what this thing is that you're looking at, right, basically basic instructions. But then there was a button that said What's New, and what most people did was they went to What's New. It was so mind-blowing at that point, just the basic idea: this was basically the internet, but people could see it for the first time. The basic idea was, look, it's like literally an Indian restaurant in,
1:09:06
like, Bristol, England, has put their menu on the web, and people were like, whoa, because that's the first restaurant menu on the web. And I don't know if I'm ever going to go to Bristol, and I don't even know if I like Indian food, but, like, wow, right? And the first streaming video thing was another England thing, some Oxford or something. Some guy put his coffee pot up as the first streaming video thing, and he put it on the web because it literally was
1:09:36
down the hall, and he wanted to see when he needed to go refill it. But there was a point when there were thousands of people watching that coffee pot, because it was the first thing you could watch. Well, but from that,
1:09:48
weren't you able to kind of infer: if that Indian restaurant could go online, then you're like, oh, well, they all
1:09:57
could? Well, yeah, exactly. So you felt that. Now, look, it's still a stretch, right? It's still a stretch, because you're still in this zone which is like: okay, is this a nerd thing, or is this a real-person thing? Yeah.
1:10:07
By the way, there was a wall of skepticism from the media. Everybody was just like, this is craziness, this is not for regular people, at that time. And so you had to think through that. And then, look, it was still hard to get on the internet at that point, right? You could get kind of this weird bastardized version if you were on AOL, which wasn't really the real thing, or you had to go figure out what an ISP was. In those days, PCs actually didn't have TCP/IP drivers come pre-installed, so you had to work out what a TCP/IP driver was, you had to get a
1:10:36
modem, you had to install driver software. I have a comedy routine I do, something like 20 minutes long, describing all the steps required to actually get on the internet. So you had to look through these practical problems. Well, and then speed, performance: 14.4 modems, right? It was like watching glue dry. And so there was basically a sequence of bets that we made, where you basically needed to look through the current state of affairs and say: actually, once people figure this out, there's going to be so much demand for this that all of
1:11:06
these practical problems are going to get fixed. Yes. Some people say that the anticipation makes the destination that much more exciting. Remember progressive JPEGs? Yeah. Do I... For the kids in the audience, right, for the kids: you used to have to watch an image load, like a line at a time. But it turns out there was this thing with JPEGs where you could load basically every fourth line, and then you could sweep back through again. And so you could render a fuzzy
1:11:36
image up front, and it would resolve into the detailed one. And that was a big UI breakthrough, because it gave you something to watch.
1:11:43
Yeah. And there are applications for that in various domains.
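For the curious, a hedged sketch of the coarse-to-fine idea being described, closer in spirit to GIF/PNG-style interlacing than to real progressive JPEG (which refines frequency coefficients rather than rows):

```python
# Emit row indices so every 4th row arrives first, then later passes
# fill in the gaps -- a fuzzy full-height image appears almost immediately.
def progressive_row_order(height: int):
    for offset in (0, 2, 1, 3):              # each pass adds more detail
        for row in range(offset, height, 4):
            yield row

print(list(progressive_row_order(8)))        # [0, 4, 2, 6, 1, 5, 3, 7]
```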
1:11:50
There was a big fight early on about whether there should be images on the web.
1:11:54
For what reason? For the, like,
1:11:55
sexualization? No, not explicitly; that did come up, but it wasn't even that. It was more... the argument went, the purists basically said, all the serious information in the world is text. If you introduce images,
1:12:06
you're basically going to bring in all the trivial stuff. You're going to bring in magazines and all this crazy stuff that's going to distract, that's going to take the web from being serious to being frivolous.
1:12:16
Well, were there any doomer-type arguments about the internet destroying all of human civilization, or destroying some fundamental fabric of human civilization?
1:12:27
Yeah, so in those days it was all around crime and terrorism. So those arguments happened, you know, but there was no sense yet of the internet having an effect on politics, because that was
1:12:36
too far off. But there was an enormous panic at the time around cybercrime: enormous panic that your credit card number would get stolen and used, that life savings would be drained, that criminals were going to... When we started, the Netscape browser was the first widely used piece of consumer software that had strong encryption built in, that made it available to ordinary people. And at that time, strong encryption was actually illegal to export out of the U.S. So we could sell that product in the US; we could not export it, because it was classified as a
1:13:06
munition. So the Netscape browser was on a restricted list, along with the Tomahawk missile, as something that could not be exported. So we had to make a second version with deliberately weak encryption to sell overseas, with a big logo on the box saying do not trust this, which, it turns out, makes it hard to sell software, when it's got a big logo that says don't trust it. And then we had to spend five years fighting the US government to get them to basically stop trying to do this. The fear was: terrorists are going to use encryption, right, to plot all these things.
1:13:36
And we responded with: well, actually, we need encryption to be able to secure systems so the terrorists and criminals can't get into them. Anyway, that was the 1990s fight.
1:13:45
So can you say something about some of the details of the software engineering challenges required to build these browsers? I mean, the engineering challenges of creating a product that hadn't really existed before, that could have almost limitless impact on the world through the internet.
1:14:04
So there was a really key bet that we made at the time,
1:14:06
which was very controversial, and which was core to how it was engineered: are we optimizing for performance, or for ease of creation? In those days, the pressure was very intense to optimize for performance, because the network connections were so slow and the computers were so slow. You mentioned the progressive JPEGs: there's an alternate world in which we optimized for performance and you just had a much more pleasant experience right up front. But what we got by not doing that was ease of creation, and the way we got ease of creation
1:14:36
was that all of the protocols and formats were in text, not in binary. And so HTTP is in text, by the way; this was an internet tradition that we picked up, but we continued it. HTTP is text, and HTML is text, and everything else that followed is text as a result. And by the way, you can imagine purist engineers thinking: this is insane. You have very limited bandwidth; why are you wasting any of it sending text? You should be encoding this stuff in binary, and it'll be much faster. And of course the answer is: that's correct. But what you get when you make it text is, all of a sudden... well,
1:15:06
the big breakthrough was the View Source function, right? The fact that you could look at a web page, hit View Source, and see the HTML: that was how people learned how to make web pages.
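A small illustration of the "everything is text" point: a raw HTTP/1.0 exchange is just human-readable lines you can send over a socket yourself (example.com is a real test host):

```python
# Speak HTTP by hand: the request is plain text, and the response --
# headers plus HTML -- comes back as plain text too.
import socket

sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()
print(response.decode(errors="replace")[:300])
```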
1:15:15
It's so interesting, because it's stuff we take for granted now.
1:15:19
And that was fundamental to the development of the web: to be able to have the HTML just right there, all the mess that is HTML, all the sort of almost biological messiness of the HTML elements, and having the browser try to interpret that mess. Yeah, exactly, and show something reasonable.
1:15:38
Well, and then there was this internet principle that we inherited, which was, what was it, emit conservatively, interpret liberally. So the design principle was: if you're creating, like, a
1:15:49
web editor that's going to emit HTML, do it as cleanly as you can. But you actually want the browser to interpret liberally, which is: you actually want users to be able to make all kinds of mistakes and for it to still work. Yeah. And so the browser rendering engines, to this day, have all of this spaghetti code, crazy stuff, where they're resilient to all kinds of crazy mistakes. And literally what I always had in my head is: there's an eight-year-old or an eleven-year-old somewhere, and they're doing a View Source, they're doing a cut-and-paste, and they're trying to make a web page of their own or whatever. And they leave out a slash, and they leave off an angle bracket, they do this, they do that,
1:16:18
And it still
1:16:19
works. It's also, and I don't often think about this, but, you know, programming: C, C++, all those languages, Lisp, the compiled languages, the interpreted languages, Python, Perl, all of that, the brackets have to be all correct. Yes. Everything has to be perfect. Brutal. You forget one thing... Right, it's systematic and rigorous. But you forget that the web,
1:16:48
HTML, is descriptive, essentially, and HTML is allowed to be messy, for the first time,
1:16:57
messy in the way biological systems can be messy. It's like the one thing computers were allowed to be messy on, for the first time.
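As a small demonstration of "interpret liberally," here is a sketch using Python's built-in HTML parser, which, like browser engines, tolerates malformed markup instead of rejecting it:

```python
# Missing </b> and </li> tags, no <html> or <body> -- it parses anyway,
# recovering a usable stream of tags and text rather than erroring out.
from html.parser import HTMLParser

class ShowTags(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start:", tag)
    def handle_data(self, data):
        if data.strip():
            print("text :", data.strip())

ShowTags().feed("<ul><li><b>hello<li>world</ul>")
```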
1:17:04
This used to offend me. So I grew up in Unix; I worked on Unix, I was a Unix native all the way through this period. And it used to drive me bananas when it would do the segmentation fault and the core dump file. It's like, literally, there's an error in the code, the math is off by one in a coordinate, and I'm in the core dump trying to analyze it, trying to reconstruct what happened, and I'm just like, this is ridiculous. The computer ought to be smart
1:17:26
enough to know that if it's off by one, okay, fine, and it keeps running. And I would go ask all the experts, like, why can't it just keep running? And they'd explain to me: well, because of all the downstream repercussions, and blah, blah. And I'm like, still, you know, we're forcing the human creator to live, to your point, in this hyper-literal world of perfection. Yeah. And I was just like, that's just bad. And by the way, what happened with coding at that point, of course, is you get a high priesthood: there's a small number of people who are really good at doing exactly that.
1:17:56
Most people can't, and most people are excluded from it. And so that's actually where I picked up that idea, which was: no, no, you want these things to be resilient to errors of all kinds. And this would drive the purists absolutely crazy. I got attacked on this a lot, because all the purists, with all the markup language stuff and formats and codes and all this stuff, they would be like: you're encouraging bad behavior, because...
1:18:19
So they wanted the browser to give you a segfault error any time there was a
1:18:24
mistake. Yeah, they wanted it to be a cop, right there.
1:18:26
Yeah. Any properly trained and credentialed engineer would be like, that's not how you build these things.
1:18:33
Such a bold move to say: no, it doesn't have to be. You know, like I said, the good news for me is the internet kind of had that tradition already, but having said that, we pushed it way out. But the other thing we did, going back to the performance thing, was we gave up a lot of performance. That initial experience for the first few years was pretty painful. But the bet there was actually an economic bet, which was basically that demand for the web would mean that there would be a surge in supply of broadband.
1:18:56
Because the question is: okay, how do you get the phone companies, which were not famous in those days for doing new things at huge cost for speculative reasons, how do you get them to build out broadband, to spend billions of dollars doing that? You could go meet with them and try to talk them into it, or you could just have a thing where it's very clear that people love it, and it's going to be better if it's faster. And so there was a period there, and this was fraught with some peril, but there was a period where we knew the experience was sub-optimized, because we
1:19:26
were trying to force the emergence of demand for broadband. Sure. Which is, in fact, what happened.
1:19:32
So you had to figure out how to display this HTML text, the blue links and the purple links. And were there standards at that time? There were no standards... Well, there were implied standards, right? And there were all these kinds of new features being added, like CSS, like what kind of stuff a browser should be able to support, features,
1:19:56
the languages within it, JavaScript and so on. But you were basically setting standards on the fly
1:20:04
yourselves? Well, I think to this day, if you create a web page that has no CSS style sheet, the browser will render it however it wants to, right? So this was one of the ideas of the time in how these systems were built, which is separation of content from format, or separation of content from appearance. And that's still in there. People don't really use it anymore, because everybody wants to
1:20:26
determine how things look, and so they use CSS. But it's still in there: you can just let the browser do all the work. I still like the,
1:20:33
like, really basic websites, but that could be just me being old school. Kids these days, with their fancy responsive websites that don't actually have much content but have a lot of visual elements.
1:20:45
Well, that's one of the things that's fun about ChatGPT. Yeah, like,
1:20:49
back to the basics,
1:20:50
back to just text. Yeah, right. And, you know, there is this pattern in human creativity and media:
1:20:56
you end up back at text. And I think there's, you know, something powerful in there.
1:21:00
Is there some other stuff you remember, like the purple links? There were some interesting design decisions that kind of came up, that we have today or don't have today, that were temporary.
1:21:11
So, I made the background... I hated reading text on a white background, so I made the background gray. Everybody... gray? No, no, no. That decision, I think, has been reversed. But I'm happy now, though, because now dark mode is the thing. So it wasn't
1:21:26
about gray, it's just you didn't want a white background. It strained my eyes. Interesting. And then there's a bunch of other decisions. I'm sure there's an interesting history of the development of HTML, CSS, and all those interfaces, and JavaScript, and there's this whole Java applet thing. Which ones were... probably JavaScript? CSS was after me, so that was not me. But JavaScript, yeah; JavaScript maybe was the biggest of the whole thing. That was us, and that was
1:21:56
really a bet on two things. One was that the world wanted a new front-end scripting language, and the other was, we thought at the time, that the world wanted a new back-end scripting language. So JavaScript was designed from the beginning to be both front-end and back-end. And then it failed as a back-end scripting language, and Java won for a long time, and then Python, Perl, and other things, PHP and Ruby. But now JavaScript is back on the back end too.
1:22:19
And so I wonder if everything in the end will run on
1:22:22
JavaScript. It seems like it is the... And by the way, let me give a shout-out
1:22:26
to Brendan Eich, who was basically the one-man inventor of JavaScript.
1:22:32
If you're interested in learning more about Brendan Eich, he's been on this podcast previously. Exactly.
1:22:37
So he wrote JavaScript over a summer, and I think it is fair to say now that it's the most widely used language in the world, and it seems to only be gaining in its range of
1:22:47
adoption in the software world. There are quite a few stories of somebody, over a weekend or a week or a summer, writing some of the most
1:22:56
impactful, revolutionary pieces of software ever. That should be inspiring. It is very
1:23:02
inspiring. I'll give another one: SSL. So SSL was the security protocol; that was us, and that was a crazy idea at the time, which was: let's take all the native protocols and wrap them in a security wrapper. That was a guy named Kipp Hickman, who wrote that over a summer. One guy. And then look, sitting here today: the transformer at Google was a small handful of people, and the number of people who did the core work on GPT, it's
1:23:26
not that many people, a pretty small handful. And so, yeah, that's been the pattern in software repeatedly over a very long time. Jeff Bezos always said the two-pizza rule for teams at Amazon, which is: any team needs to be able to be fed with two pizzas; if you need the third pizza, you have too many people. And I think it's actually the one-pizza rule. Yeah. For the really creative work, I think it's two people, three people. Well, you see that in certain open source projects, where so much is done by one or two people.
1:23:57
It's so incredible, and that gives me so much hope about the open source movement in this new age of AI. Where, you know, I just recently had a conversation with Mark Zuckerberg, of all people, who's all in on open source, which is so interesting to see, and so inspiring to see. Because releasing these models is scary; it is potentially very dangerous, and we'll talk about that. But it's also, if you believe in the goodness
1:24:26
of most people, and in the skill set of most people, and the desire to do good in the world, that's really exciting, because it's not putting these models under the centralized control of big corporations and governments. It's putting them in the hands of the teenage kid with a dream in his eyes. I don't know, that's
1:24:47
beautiful. But look, this stuff ought to make the individual coder obviously far more productive, right, by like 1000x or something. And so you ought to think about
1:24:57
not just the future of open source AI, but the future of open source everything. We ought to have a world now of super coders, right, who are building things as open source, with one or two people, that were inconceivable five years ago. The level of productivity we're going to get out of our best and brightest, I think, is going to go way up. It's going to be
1:25:13
interesting. We'll talk about it, but let's just linger a little bit on Netscape. Netscape was acquired in 1999 for $4.3 billion by AOL. What was that like? What were
1:25:26
some memorable aspects of that? Well, that was the height of the dot-com boom, bubble, bust. I mean, that was the frenzy. If you watch Succession, that was like what they did in the fourth season with GoJo and the merger. It was the height of one of those kinds of dynamics. So, you recommend Succession? By the way, I'm more of a Yellowstone guy. Have you watched it? It's very American. I'm very proud of you, that's what that is. I just talked to Matthew McConaughey, and we're full-on texting at this point. I heard, allegedly,
1:25:57
that he will be doing the sequel to Yellowstone. Yes, I'm very excited. Anyway, that was a rude interruption by me, by way of Succession.
1:26:10
So that was at the height of the
1:26:12
deal-making and money, and just the fur flying, and craziness. And so, yeah, it was just one of those things. I mean, the entire Netscape thing from start to finish was four years, which for one of these companies is just incredibly fast. You know, we went public 18 months after we were founded, which virtually never happens. So it was just this incredibly fast meteor streaking across the sky, and then, of course, there was just this explosion, right, that happened, because it was almost
1:26:38
immediately followed by the dot-com crash. It was then followed by AOL buying Time Warner, which, again, the Succession guys kind of played with, and which turned out to be a disastrous deal, one of the famous disasters in business history. And then there was what became an internet depression on the other side of that. But then in that depression, in the 2000s, was the beginning of broadband and smartphones and Web 2.0, right? And then social media and search and everything that came out of that. So what did you learn
1:27:06
from the acquisition? I mean, this is so much money.
1:27:10
It's interesting, because it must have been very new to you that with this software stuff you can make so much money. There was so much money swimming around. I mean, I'm sure the ideas of investment were starting to be
1:27:23
born there. Yes, so let me lay it out. Here's the thing, which I don't know if I'd figured out then, but figured out later: software is a technology that's like the concept of the philosopher's stone. The philosopher's stone in alchemy transmutes lead into gold, and Newton spent 20 years trying to find the
1:27:38
philosopher's stone; he never got there, and nobody's ever figured it out. Software is our modern philosopher's stone. In economic terms, it transmutes labor into capital, which is a super interesting thing. And by the way, Karl Marx is rolling over in his grave right now, because of course that's a complete refutation of his entire theory. It transmutes labor into capital as follows: somebody sits down at a keyboard, types a bunch of stuff in, and a capital asset comes out the other side. And then somebody buys that capital asset for a billion dollars.
1:28:08
That's amazing, right? It's literally creating value right out of thin air, right out of pure human thought. And so there are many things that make software magical and special, but that's the
1:28:21
economics. I don't know what Marx would have thought about
1:28:23
that. It would have completely broken his brain, because, of course, that kind of technology wasn't conceivable when he was alive. It was all industrial-era stuff, and any kind of machinery necessarily involved huge amounts of capital, with labor on the
1:28:38
receiving end of the abuse, yeah, right. But with software, a software engineer is somebody who basically transmutes his own labor into an actual capital asset, who creates permanent value. Well, in fact, and it's actually very inspiring, that's actually more true today than before. So when I was doing software, the assumption was that all new software basically has a sort of parabolic life cycle: you ship the thing, people buy it, at some point everybody who wants it has bought it, and then it becomes obsolete. It's like bananas; nobody buys old software.
1:29:08
These days, Minecraft, Mathematica, you know, Facebook, Google: you have software assets that have been around for 30 years that are gaining in value every year, right? World of Warcraft, Salesforce.com: every single year they're being polished and polished and polished, getting better and better, more powerful, more valuable. So we've entered this era where you can actually have these things that build out over decades. Which, by the way, is what's happening right now with GPT and
1:29:38
so on. And this is why there is always sort of a constant investment frenzy around software, because, look, when you start one of these things it doesn't always succeed, but when it does, you might now be building an asset that builds value for four, five, six decades to come, if you have a team of people with the level of devotion required to keep making it better. And then there's the fact that everybody's online: the five billion people that are a click away from any new piece of software. So the potential market size for any of these things is nearly infinite. It must have been surreal back
1:30:08
then. Yeah, back then this was all brand new. If you had rolled out that theory even in 1999, people would have thought you were smoking crack. So that's emerged over time.
1:30:21
Well, let's now turn back to the future. You wrote the essay "Why AI Will Save the World." Let's start at the very high level: what's the main thesis of the essay?
1:30:32
Yeah, so the main thesis of the essay is that what we're dealing with here is intelligence, and it's really important to talk
1:30:38
about the very nature of what intelligence is. And fortunately, we have a predecessor to machine intelligence, which is human intelligence, and we've got observations and theories over thousands of years for what intelligence is in the hands of humans. What intelligence literally is, is the way to capture, process, analyze, and synthesize information, and solve problems. And the observation of intelligence in human hands is that intelligence quite literally makes everything better. What I mean by that
1:31:08
is: every kind of outcome of human quality of life, whether it's education outcomes, or the success of your children, or career success, or health, or lifetime satisfaction, and, by the way, propensity to peacefulness as opposed to violence, propensity for open-mindedness versus bigotry: those are all associated with higher levels of intelligence.
1:31:31
Smarter people have better outcomes in, as you write, almost every domain of activity: academic achievement, job performance,
1:31:38
occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision-making, understanding others' perspectives, creative arts, parenting outcomes, and life satisfaction. One of the more depressing conversations I've had, and I don't know why it's depressing, I have to really think through why it's depressing, was about IQ and the g factor,
1:32:08
and that that's something that in large part is genetic.
1:32:16
And it correlates so much with all of these things, with success in life. It's like all the inspirational stuff we tell ourselves, like if you work hard and so on... damn, it sucks that you're born with a hand that you can't
1:32:31
change. But what if you could?
1:32:33
You're making what is basically a really important point, I think, in your article; it really helped me, it's a nice added perspective
1:32:45
to think about: listen, the science of human intelligence has shown, scientifically, that it just makes life easier and better the smarter you are. And now let's look at artificial intelligence: if that's a way to increase the sum of human intelligence, then it's only going to make for a better life. Yeah, that's the
1:33:11
argument. And certainly at the collective level, we can talk about the collective effect of just having more intelligence in the world,
1:33:15
which will have a very big payoff. But there's also just the individual level. What if every person has a machine... it's the concept of augmentation, Doug Engelbart's concept of augmentation. What if everybody has an assistant, and the assistant is, you know, 140 IQ, and you happen to be 110 IQ, and you've got something that is basically infinitely patient, and knows everything about you, and is pulling for you in every possible way, wants you to be successful? And anytime
1:33:45
you find anything confusing, or want to learn anything, or have trouble understanding something, or want to figure out what to do in a situation, right, want to figure out how to prepare for a job interview, any of these things, it will help you do it. And the combination will effectively raise your IQ, and will therefore raise the odds of successful life outcomes in all these areas.
1:34:06
For the people below the hypothetical 140 IQ, it will pull them up toward
1:34:11
140 IQ. Yeah. And then, of course, people at 140 IQ
1:34:15
will have a peer they're able to communicate with, which is great. And then people above 140 IQ will have an assistant that they can farm things out to. And then look, God willing, at some point, future versions of these things go from 140 IQ equivalent to 150, to 160, to 180, right? Einstein was estimated to be on the order of 160. So when we get a 160 AI, it will be, one assumes, creating Einstein-level breakthroughs in physics, and then at 180 it will be curing cancer and
1:34:45
developing warp drive and doing all kinds of stuff. And so it is quite possibly the case that this is the most important thing that ever happened, the best thing that ever happened, precisely because it's a lever on this single fundamental factor of intelligence, which is the thing that drives so much of everything else.
1:35:00
Can you steelman the case that human plus AI is not always better than human alone, for the individual?
1:35:05
You may have noticed that there are a lot of smart assholes running around. Sure, yes, right. And so there are certain people where, as they get smarter, they get more arrogant, right? So that's
1:35:15
one huge flaw. Though, to push back on that, it might be interesting, because when the intelligence is not all coming from you but from another system, that might actually increase the amount of humility, even in the
1:35:28
assholes. One would hope. Or it could make assholes more asshole-ish.
1:35:33
Yeah, that's it. I mean, that's for psychology to study.
1:35:36
Yeah, exactly. Another one is that smart people are very convinced that they have a more rational view of the world, and that they have an easier time seeing through conspiracy theories and hoaxes and
1:35:45
sort of crazy beliefs and all that. There's a theory in psychology about smart people. For sure, people who aren't as smart are very susceptible to hoaxes and conspiracy theories. Yeah. But it may also be the case that the smarter you get, you become susceptible in a different way, which is you become very good at marshalling facts to fit your preconceptions, right? You become very, very good at assembling whatever theories and frameworks and pieces of data and graphs and charts you need to validate whatever crazy idea got in your head. And so you're susceptible in a different way.
1:36:15
Right? We're all sheep, but different colored sheep; some sheep are better at justifying it, right? And those are the smart sheep, right? So yeah, look, I would say this. In all honesty, I'm not a utopian. There are no panaceas in life. There are no pure positives; I'm not a transcendental kind of person like that. So yeah, there are going to be issues. And maybe you could say about smart people that they are more likely to get themselves into situations that are beyond their
1:36:45
grasp, because they're just more confident in their ability to deal with complexity, and their cognitive eyes become bigger than their stomach, you know? So yeah, you could argue this different ways. Nevertheless, on net, clearly, overwhelmingly, if you just extrapolate from what we know about human intelligence, you're improving so many aspects of life if you're upgrading
1:37:05
intelligence. So that would be assistance at all stages of life. When you're younger, it's for education, all that kind of stuff, or mentorship, and
1:37:16
later on, as you're working, as you develop a skill and have a profession, you have an assistant that helps you excel at that profession. So at all stages of life.
1:37:24
Yeah, I mean, look, the theory is augmentation. That's the Engelbart term for it. Again, Engelbart made this observation many, many decades ago that basically you can have this oppositional frame of technology where it's us versus the machines, but what you really do is use technology to augment human capabilities. Yeah. And by the way, that's actually how the economy develops (that's the economic side of this), that's actually how the economy grows: through technology augmenting human
1:37:45
potential. And so then you basically have a sort of prosthetic. You've got glasses, you've got a wristwatch, you've got shoes, you've got a personal computer, you've got a word processor, you've got Mathematica, you've got Google. Viewed through that lens, AI is the latest in a long series of augmentation methods to raise human capabilities. It's just that this one is the most powerful one of all, because this is the
1:38:15
one that goes directly to what they call fluid intelligence, which is IQ.
1:38:21
Well, there are two categories of folks that you outline that worry about or highlight the risks of AI, and you highlight a bunch of different risks. I would love to go through those risks and just discuss them, brainstorm which ones are serious and which ones are less serious. But first, the Baptists and the Bootleggers. Who are these two interesting groups of folks who are worried about the effect of AI,
1:38:50
or at least say that they are? Okay, so
1:38:55
the Baptists worry; the Bootleggers claim to. Yeah, so the Baptists and the Bootleggers is a metaphor from economics, from what's called development economics. It's this observation that when you get social reform movements in a society, you tend to get two sets of people showing up arguing for the social reform. The terms Baptists and Bootleggers come from the American experience with alcohol Prohibition. And so in the 1900s and 1910s, there was this movement,
1:39:20
very passionate at the time, which basically said alcohol is evil and it's destroying society. By the way, there was a lot of evidence to support this. There were very high correlations, then and, by the way, now, between rates of physical violence and alcohol use. In almost all violent crimes, either the perpetrator or the victim, or both, are drunk. You see this in the workplace: almost all sexual harassment cases in the workplace, it's at a company party and somebody's drunk. It's amazing how often alcohol actually correlates to dys-
1:39:50
function like this, domestic abuse and so forth, child abuse. And so you have this group of people who were like, okay, this is bad stuff and we should outlaw it. And those were quite literally Baptists: super committed, hardcore Christian activists in a lot of cases. There was this woman named Carrie Nation, who was this older woman who had been in, I don't know, a disastrous marriage or something, and her husband had been abusive and drunk all the time, and she became the icon of the Baptist prohibitionists. And she was legendary in that era for carrying an axe and,
1:40:20
completely on her own, doing raids of saloons and taking her axe to all the bottles and casks. Yeah. So a true believer, an absolute true believer with absolutely the purest of intentions. And there's a very important thing here, which is: you could look at this and say the Baptists are delusional extremists, but you could also say, look, they're right. Like, she had a point; she wasn't wrong about a lot of what she said. Yeah. But it turns out, the way the story goes, there was another set of people who very
1:40:50
badly wanted to outlaw alcohol in those days, and those were the Bootleggers, which was organized crime, who stood to make a huge amount of money if legal alcohol sales were banned. And in fact, the way the history goes, this was actually the beginning of organized crime in the US; this was the big economic opportunity that opened it up. And so they went in together. Well, they didn't go in together: the Baptists did not necessarily even know about the Bootleggers, because they were on their moral crusade, but the Bootleggers certainly knew about the Baptists, and they were like, wow, these people are the great front people; it's good PR.
1:41:20
And with them out front and the Bootleggers in the background, they got the Volstead Act passed, right? And they did in fact ban alcohol in the US. And you'll notice what happened, which is people kept drinking. It didn't work. People kept drinking, the Bootleggers made a tremendous amount of money, and then over time it became clear that it made no sense to make it illegal and it was causing more problems. And so then it was repealed, and here we sit with legal alcohol 100 years later, with all the same problems. And the whole thing was this giant misadventure. The Baptists got taken advantage of by the
1:41:50
Bootleggers, and the Bootleggers got what they wanted. And that was that. The
1:41:53
same two categories of folks are now sort of suggesting that the development of artificial intelligence should
1:41:59
be regulated? A hundred percent, yeah. It's the same pattern, and the economists will tell you it's the same pattern every time. This is what happened with nuclear power, which is another interesting one. This happens dozens and dozens of times throughout the last hundred years, and this is what's happening now.
1:42:12
And you write that it isn't sufficient to simply identify the actors and impugn their motives; we should consider the arguments of
1:42:20
both the Baptists and the Bootleggers on their merits. So let's do just that. Risk number one:
1:42:29
will AI kill us all? So what do you think about this one? What do you think is the core argument here, that the development of AI, perhaps better said AGI, will destroy human
1:42:47
civilization? So first of all, you just did a sleight of hand, because we went from talking about AI to AGI.
1:42:54
Is there a fundamental difference there? I don't know. What's AGI?
1:42:57
I would say AI, what's
1:42:59
generally called AI today, is machine learning. What's AGI?
1:43:02
I think we don't know what the bottom of the well of machine learning is, or the ceiling, because just to call something machine learning, or just to call something statistics, or just to call it math or computation, doesn't mean much; nuclear weapons are just physics. So to me it is very interesting and surprising how far machine learning
1:43:22
has gone. No, but we knew that nuclear physics would lead to weapons. That's why the scientists of that era were always in some dispute
1:43:27
about building the weapons. This is different; AI is different. Is machine learning leading there? Do we know? We don't know. And this is my point: it's different, we actually don't know. And this is where the sleight of hand kicks in, right? This is where it goes from being a scientific topic to being a religious topic. And that's why I specifically called that out, because that's what happens: you do the vocabulary shift, and all of a sudden you're talking about something that's not actually
1:43:47
real. Well, then maybe you can also, as part of that, define the Western tradition of millenarianism,
1:43:53
millenarianism, yes, end-of-the-world apocalypse...
1:43:57
Cults. Apocalypse cults. Well, so we live, of course, in a Judeo-Christian, but primarily Christian, saturated, kind of post-Christian, secularized Christian world in the West. And of course, core to Christianity is the idea of the Second Coming, and Revelation, and Jesus returning, and the thousand-year utopia on Earth, and then the Rapture and all that stuff. We collectively, as a society, don't necessarily take all that fully seriously now. So what we do is we create our secularized versions of that.
1:44:27
We keep looking for utopia, we keep looking for, basically, the end of the world. And so what you see over decades is basically a pattern. This is what cults are; this is how cults form. They form around some theory of the end of the world. The People's Temple cult, the Manson cult, the Heaven's Gate cult, the David Koresh cult: what they're all organized around is, there's going to be this thing that's going to happen that's going to basically bring civilization crashing down, and we have this special elite group of people who are going to see it coming and prepare for it. And then they're
1:44:57
the people who are either going to stop it or, failing stopping it, they're going to be the people who survive to the other side and ultimately get credit for having been right. Why is this so compelling, do you think? Because it satisfies this very deep need we have for transcendence and meaning that got stripped away when we became secular? Yeah, but why does the transcendence involve the destruction of human civilization? Because it's a very deep psychological thing. It's like: how plausible is it that we live in a world where
1:45:27
it's just kind of all right, right? How exciting is that, right?
1:45:32
But that's the question I'm asking. Why is it not exciting to live in a world where everything's just all right? I think most of the animal kingdom would be so happy with just all right, because that means survival. Maybe that's what it is. Why are we conjuring up things to worry
1:45:54
about? So, C.S. Lewis called it the God-shaped hole.
1:45:57
So there's a God-shaped hole in the human experience, consciousness, soul, whatever you want to call it, where there's got to be something that's bigger than all this. There's got to be something transcendent, something that is bigger: a bigger purpose, a bigger meaning. And so we have run the experiment of, we're just going to use science and rationality, and everything's just going to be as it appears, and a large number of people have found that very deeply wanting and have constructed narratives. And by the way, this is the story of the 20th century, right?
1:46:27
Communism was one of those; communism was a form of this, Nazism was a form of this. You can see movements like this playing out all over the world right now.
1:46:37
So you construct a kind of devil, a kind of source of evil, and a goal to transcend beyond
1:46:42
it. Yeah. And the millenarian cults put a really specific point on it, which is the end of the world. There is some change coming, and that change that's coming is so profound and so important that it's either going to
1:46:57
lead to utopia or hell on Earth, right? And then it's like, what if you actually knew that that was going to happen, right? What would you do? How would you prepare yourself for it? How would you come together with a group of like-minded people? Would you plant caches of weapons in the woods? Would you create underground bunkers? Would you spend your life trying to figure out a way to avoid having it happen? Yeah. That's a really compelling, exciting idea, too.
1:47:27
You have a club, a little tribe. You could get together on a Saturday night and drink some beers and talk about the end of the world, and how you are the only ones who have
1:47:37
figured it out. And then once you lock in on that, how can you do anything else with your life? This is obviously the thing that you have to do. And then there's a psychological effect you alluded to: if you take a set of true believers and you leave them to themselves, they get more radical, because they self-radicalize each
1:47:51
other. That said, it doesn't mean they're not sometimes
1:47:56
right? Yeah. The end of the world might be
1:47:57
coming. Yes, correct, they might be right. Yeah, but, like, we have some pamphlets for you. I mean, we'll talk about nuclear weapons, because there's a really interesting moment that I learned about in your essay. But sometimes they could be right. There's nevertheless a truth that we are developing more and more powerful technologies, in this case, and we don't know what impact they will have on human civilization. We can highlight all the different predictions about how it'll be positive, but the risks are there. Let's
1:48:27
discuss some of them.
1:48:28
Well, the steelman, and actually the steelman and its refutation are the same, which is: you can't predict what's going to happen, right? You can't rule out that this will end everything, right? But the response to that is, you have just made a completely non-scientific claim. You've made a religious claim, not a scientific claim. There's no way for it to get disproven; by definition, with these kinds of claims, there's no way to disprove them. Yeah, right. And so there's nowhere to go with it, right? There's no hypothesis, there's no testability of the hypothesis, there is no way to
1:48:57
falsify the hypothesis, there's no way to measure progress along the arc. It's just all completely missing, and so it's not
1:49:05
scientific. Well, I don't think it's completely missing; it's somewhat missing. So for example, the people that say AI is going to kill all of us, they usually have ideas about how it would do that, whether it's the paperclip maximizer or, you know, it escapes: there are mechanisms by which you can imagine it killing all humans, models of how it would happen. And
1:49:28
you can disprove it by saying there's a limit to the speed at which intelligence increases,
1:49:38
or maybe by rigorously describing the model of how it could happen and saying, no, here's the physics limitation. This is the physical limitation to how these systems would actually do damage to human civilization. And it's possible they will kill ten to twenty percent of the population, but it seems impossible for them to kill
1:50:02
99%. There are practical counterarguments, right? So you mentioned, basically what I've described as the thermodynamic counterargument, which is,
1:50:08
sitting here today, where would the evil AGI get the GPUs? Because they don't exist. So you'd have a very frustrated baby evil AGI who's going to be trying to buy Nvidia stock or something to get them to finally make some chips, right? So the serious form of that is the thermodynamic argument: okay, where's the energy going to come from? Where's the processor going to be running? Where's the data center going to be? How is this going to be happening in secret? So that's a practical counterargument to the runaway AGI thing. But I have a deeper objection to it, which is: this is
1:50:38
all forecasting, it's all modeling, it's all future prediction, it's all future hypothesizing. It's not science; it is the opposite of science. To pull up Carl Sagan: extraordinary claims require extraordinary proof, right? These are extraordinary claims. The policies that are being called for to prevent this are of extraordinary magnitude, and I think they are going to cause extraordinary damage, and this is all being done on the basis of something that is literally not scientific. It's not a testable hypothesis. The moment you
1:51:08
say AI is going to kill all of us, therefore we should ban it, or we should regulate it, that's when it starts getting serious. Or, you know, military airstrikes on data centers, right? That's when it also starts getting real. So here's the problem: millenarian cults, they have a hard time staying away from violence.
1:51:29
The violence is so fun if you're on the right end of it. They have a hard time avoiding violence. The reason they have a hard time avoiding violence is, if you actually believe the claim, right, then what would you do to stop the end of the world? Well, you would do anything, right? And if you just look at the history of millenarian cults, this is where you get the People's Temple, everybody killing themselves in the jungle; this is where you get Charles Manson sending his followers in to kill the pigs. This is the problem with these: they
1:51:58
have a very hard time drawing the line at actual violence. And I think in this case they're already calling for it, like, today. And where this goes from here, as they get more worked up, I think is really concerning.
1:52:11
Okay, but those are the extremes; the extremes of anything are concerning. It's also possible to believe that AI has a very high likelihood of killing all of us, and therefore we should maybe consider slowing
1:52:28
down development, or regulating. So no violence or any of those kinds of things, but saying, all right, let's take a pause here. You know, like with biological weapons, nuclear weapons: whoa, this is serious stuff, we should be careful. So it is possible to have a more rational response, right, if you believe this risk
1:52:49
is real
1:52:49
If you believe it, yes. So is it possible to have a scientific approach to the prediction of the
1:52:56
future? I mean, we just went through this with covid.
1:52:58
What do we know about modeling?
1:53:01
Well, I mean, what did we learn about modeling with covid?
1:53:03
There's a lot of lessons.
1:53:05
They didn't work at all. They performed poorly. The models were terrible; the models were useless.
1:53:10
I don't know if it was the models or the people interpreting the models, and then the centralized institutions that were creating policy rapidly based on the models and leveraging the models to support their narratives, versus actually interpreting the error bars in the models and all that
1:53:27
kind of stuff. With covid, in my view, you had
1:53:30
this: you had these experts showing up, they claimed to be scientists, and they had no testable hypotheses whatsoever. They had a bunch of models, a bunch of forecasts, and a bunch of theories, and they laid these out in front of policymakers, and policymakers freaked out and panicked, right? And implemented a whole bunch of really terrible decisions that we're still living with the consequences of. And there was never any empirical foundation to any of the models. None of them ever came true.
1:53:53
Yeah, to push back: there were certainly Baptists and Bootleggers in the context of this pandemic, but there's still a usefulness to models.
1:54:00
No, not if they're reliably wrong, right? Then they're actually anti-useful; they're actually damaging.
1:54:06
But what do you do with a pandemic, with any kind of threat? Don't you want to have several models to play with as part of the discussion of, like, what the hell do we do
1:54:17
here? I mean, do they work? Because there's an expectation that they actually work, that they have actual predictive value. As far as I can tell with covid, the policymakers just talked themselves into believing that they did. I mean, look, the scientists,
1:54:30
the scientists were at fault; the quote-unquote scientists showed up. So I have some insight into this. Remember the Imperial College models out of London? Those were the ones treated as the gold-standard models. Yeah. So a friend of mine runs a big software company, and he was like, wow, covid is really scary, and he contacted this researcher and said, do you need some help? You've been building this model on your own for 20 years; do you need some of our coders to basically restructure it so it can be fully adapted for covid? And the guy said yes and sent over the code, and my friend said it was the worst spaghetti code he'd ever seen. That doesn't mean it's
1:55:00
not possible to construct a good model of a pandemic, with the correct error bars, with a high number of parameters that are continuously, many times a day, updated as we get more data about the pandemic. I would like to believe that when a pandemic hits the world, the best computer scientists in the world, the best software engineers, respond aggressively, and as input take the data that we know about the virus, and as output say: here's what's happening in terms of how quickly it's spreading, what
1:55:30
that leads to in terms of hospitalization and death and all that kind of stuff. Here's how contagious it likely is, here's how deadly it likely is, based on different conditions, different ages and demographics and all that kind of stuff. So here are the best kinds of policies. It feels like you could have models, machine learning, that, while they don't perfectly predict the future, help you do something. Because there are pandemics that are
1:56:01
mild, they don't really do much harm, and there are pandemics, you can imagine them, that could do a huge amount of harm, that could kill a lot of people. So you should probably have some kind of data-driven models that keep updating, that allow you to make decisions based on how bad this thing is. Now, you can criticize how horribly it all went with the response to this pandemic, but I just feel like there might be some value to
1:56:26
models. So to be useful, at some point it has to be
1:56:29
right. So the easy thing for me to do is to say, obviously, right? Obviously I want to see that just as much as you do, because anything that makes it easier to navigate society through a wrenching risk like that sounds great. The harder objection to it is simply: you are trying to model a complex dynamic system with eight billion moving parts. Not possible. Can't be done. Complex systems can't be done.
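For reference, the simplest version of the kind of data-driven epidemic model being debated here is a compartmental SIR model. A minimal sketch, with assumed illustrative parameters (the beta and gamma values below are made up, not fitted to any real outbreak):

```python
# Minimal SIR (susceptible-infected-recovered) epidemic model.
# Parameters are assumed for illustration, not fitted to any real outbreak.

def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance the compartments one time step with Euler integration."""
    n = s + i + r
    new_infections = beta * s * i / n * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# One million people, 100 initially infected; beta/gamma = R0 = 3 (assumed).
s, i, r = 999_900.0, 100.0, 0.0
beta, gamma = 0.3, 0.1

for day in range(161):
    if day % 40 == 0:
        print(f"day {day:3d}: infected ~ {i:,.0f}")
    s, i, r = sir_step(s, i, r, beta, gamma)
```

Real forecasting systems wrap something like this in parameter estimation and uncertainty bands, re-fitting beta and gamma as new case data arrives; the disagreement above is over whether parameters that summarize the behavior of eight billion people can be estimated well enough to guide policy.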
1:56:53
Machine learning says hold my beer. Well, is it possible? I
1:56:56
don't know. I would like to believe that it is. Yeah.
1:56:59
I think where you and I would agree is: we would like that to be the case; we are strongly in favor of it. I think we would also agree that no such thing exists with respect to covid or pandemics, at least nothing that either you or I am aware of today. My main worry with
1:57:13
the response to the pandemic, the same as with aliens, is that even if such a thing existed, and it's possible it existed, the policymakers were not paying attention.
1:57:29
There's no mechanism that allowed those kinds of models to percolate
1:57:32
up. I think we had the opposite problem during covid. I think these people with basically fake science had too much access to the policymakers.
1:57:39
Right, but the policymakers also wanted it; they had a narrative in mind and they wanted to use whatever model fit that narrative to help them. So there was a lot of politics and not enough
1:57:51
science. Although a big part of what was happening, a big reason we got lockdowns for as long as we did, was because the scientists came in with these doomsday scenarios that were just completely
1:57:59
off the
1:57:59
hook. Scientists in quotes. Yeah, that's not quote-unquote
1:58:02
science. It's not. Okay,
1:58:03
let's give love to science. That is the way
1:58:06
out. Science is a process of testing hypotheses, and modeling does not involve testable hypotheses, right? I don't even know that modeling actually qualifies as science. Maybe that's a side conversation we can have some time over a beer.
1:58:20
That's really interesting, but what do we do about the future, then? What do we do? So
1:58:23
number one is, we start with humility; it goes back to this thing about how we determine the truth. Number two
1:58:29
is we don't believe, you know, it's the old "I've got a hammer, everything looks like a nail," right? One of the reasons I gave Lex a book: the topic of the book is what happens when scientists basically stray off the path of technical knowledge and start to weigh in on politics and societal issues. In this case it was philosophers, but he actually talks in this book about Einstein, about the nuclear age, about the physicists doing very similar things at the
1:58:55
time. The book is When Reason Goes on Holiday: Philosophers in
1:58:59
Politics, by Neven Sesardic.
1:59:01
And it's just a story. There are other books on this topic, but this is a new one that's really good. It's a story of what happens when experts in a certain domain decide to weigh in and become basically social engineers and political advisors, and it's just a story of one catastrophe after another, right? I think that's what happened with covid again.
1:59:18
Yeah, on this book: a highly entertaining, eye-opening read, filled with amazing anecdotes of irrationality and craziness by famous recent philosophers.
1:59:26
If you read this book, you will not look at Ein-
1:59:29
stein the same. Oh boy, that'll destroy my heroes. But here's the thing: the AI risk people, they don't even have the covid model, at least not that I'm aware of. There's not even the equivalent of the covid model; they don't even have the spaghetti code.
1:59:50
They've got a theory and a warning, and that's it. And if you ask, okay, the ultimate example is: how do we know, right? How do we know that an AI is running away? How do we know that the foom takeoff thing is actually happening? And the only answer that any of these guys have given that I've ever seen is: oh, it's when the loss function in the training run drops, right? That's when you need to shut down the data center, right? And it's like, well, that's also what happens when you're successfully training a model.
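For context on that point: a falling loss is what every healthy training run looks like, so by itself it carries no signal about a takeoff. A minimal sketch of ordinary gradient descent, with a toy model assumed purely for illustration:

```python
# Toy gradient-descent run: loss drops steadily in perfectly ordinary training.
# Illustrative only; nothing here distinguishes "normal" from "runaway" training.

# Fit y = w * x to data generated with w_true = 3, starting from w = 0.
data = [(x, 3.0 * x) for x in range(1, 6)]
w, lr = 0.0, 0.01

for step in range(5):
    # Mean squared error and its gradient with respect to w.
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
    print(f"step {step}: loss = {loss:.3f}")  # the curve goes down and to the right
```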
2:00:17
Like, what even is this? This is not science, it's not a model, it's not anything. There's nothing to argue with. It's like pushing Jello. What do you even
2:00:27
respond to? To push back on that: I don't think they have good metrics of what was happening, but I think it's possible to have that. Just as you speak now, it's possible to imagine that there could be
2:00:41
measures. It's been 20 years,
2:00:43
you know. For sure, but it's only been weeks since we've had
2:00:46
a big enough breakthrough in language models that we can actually start to have this. The thing is, the AI doomer stuff didn't have any actual systems to really work with, and now there are real systems you can start to analyze, to catalog how the stuff goes wrong. And I think you kind of agree that there are a lot of risks that we can analyze; the benefits outweigh the risks in many cases,
2:01:06
but the risks are not existential. Yes. Well, not the foom, not the foom paperclip thing. Okay, so let me... there's another sleight of hand that you just alluded to, another sleight of hand that
2:01:14
happens. Apparently I'm very good at the sleight of hand,
2:01:17
which is not scientific. So the book Superintelligence, right, Nick Bostrom's book, which is the origin of a lot of this stuff, which was written, whatever, 10 years ago or something. He does this really fascinating thing in the book, which is he basically says there are many possible routes to machine intelligence, to artificial intelligence, and he describes all the different routes, everything from biological augmentation through all these different things. One of the ones that he does not describe is large language models,
2:01:46
because of course the book was written before they were invented, so they didn't exist.
2:01:51
In the book, he describes them all and then proceeds to treat them all as if they're basically the same thing. He presents them all as sort of an equivalent risk, to be dealt with in an equivalent way, to be talked about the same way. And then the quote-unquote risk that actually emerged is a completely different technology than he was even imagining. And yet all of his theories and beliefs are being transplanted by this movement straight onto this new technology. And again, there's no other area of science or technology where you do that. When you're dealing with organic chemistry versus inorganic chemistry, you don't just say, with respect
2:02:21
to either one, that it's basically maybe going to grow up and eat the world or something, that they're just going to operate the same way. You don't.
2:02:26
But as we get more and more actual systems that start to get more and more intelligent, you can start to actually have more scientific arguments here. At a high level, you could talk about the threat of autonomous weapons systems back before we had any automation in the military, and that would be very fuzzy kind of logic. But the more you have drones that are becoming more and more autonomous, you
2:02:51
can start imagining, okay, what does that actually look like? What's the actual threat of autonomous weapons systems? How does it go wrong? And while it's still very vague, we start to get a sense of, all right, it should probably be illegal, or wrong, or not allowed, to do mass deployment of fully autonomous drones that are doing
2:03:14
aerial strikes over large areas. Oh no, I think it's required, right? So rather than it being a no-no, I think you should require that
2:03:21
aerial vehicles be automated.
2:03:24
Okay, so you want to go the other way. The other way. Okay, look, it's
2:03:28
obvious that the machine is going to make a better decision than the human pilot.
2:03:33
I think it's obvious that it's in the best interest of both the attacker and the defender, and humanity at large, if machines are making more decisions and not people. People make terrible decisions in times of war.
2:03:41
Well, there are ways that can go wrong, too,
2:03:44
right? Well, of course it can go terribly wrong. Now,
2:03:48
this goes back to that whole thing about: does a self-driving car need to be perfect, versus does it just need to be better than the human driver? Yeah. Does the automated drone need to be perfect, or does it just need to be better than a human pilot at making decisions under enormous amounts of stress and uncertainty?
2:04:01
Yeah, well, on average. The worry that AI folks have is the runaway.
2:04:08
They're going to come alive, right? That again, that's the sleight of hand.
2:04:14
Or not come alive, I don't know, but you
2:04:15
lose control. Ah, but then they're going to develop
2:04:17
goals of their own, they're going to develop a mind of their own, they're going to develop their own aims, right? No, more like a Chernobyl-style
2:04:24
meltdown. Just bugs in the code that accidentally, you know, result in the bombing of large civilian areas, to a degree that's not possible in the current
2:04:42
military strategies, controlled by humans. I don't know. Actually, we've been doing a lot of mass bombing of cities for a very long time,
2:04:48
and a lot of civilians
2:04:49
died. A lot of civilians died. If you watch the documentary The Fog of War, McNamara spends a big part of it talking about the firebombing of the Japanese cities, burning them straight to the ground, right? The American military's firebombing of the cities in Japan was considerably bigger devastation than the use of the nukes. So we've been doing that for a long time. We also did that to Germany; by the way, Germany did that to us, right? That's an old tradition. The minute we got airplanes, we started doing indiscriminate bombing. We're still doing it. So one of the things
2:05:16
the modern US
2:05:18
military can do with technology, with automation, but technology more broadly, is higher- and higher-precision
2:05:24
strikes. Yeah, well, precision is obviously the thing. This is the JDAM, right? There was this big advance called the JDAM, which basically was strapping a GPS receiver onto an unguided bomb and turning it into a guided bomb. And yeah, that's great; look, that's been a big advance. But that's like a baby version of this question, which is: okay, do you want the human pilot guessing where the bomb is going to land, or do you want the machine guiding the bomb to its destination? That's the baby
2:05:48
version of the question. The next version of the question is: do you want the human or the machine deciding whether to drop the bomb? Everybody just assumes the human is going to do a better job, for what I think are fundamentally suspect
2:05:56
reasons. Emotional,
2:05:57
psychological reasons. I think it's very clear that the machine is going to do a better job making that decision, because the humans making that decision are god-awful, just terrible at it. Yeah, right. So this is the thing. And then let's get to one more sleight of hand. Yes, okay, please, I'm a magician apparently. One more sleight of hand: these things are going to be so smart, right? They're
2:06:17
going to be able to destroy the world and wreak havoc and plan and evade us and have all their secret factories and all this stuff, but they're so stupid that they're going to get tangled up in their own code. They're not going to come alive, but there's going to be some bug that's going to cause them to, like, turn us all into paperclips. They're going to be genius in every way other than the actual bad goal, and that's just a ridiculous discrepancy. And you can actually address this today, for the first time, with LLMs, which is you
2:06:48
can actually ask LLMs to resolve moral dilemmas. You can create the scenario, this and that and this and that, what would you, the AI, do in this circumstance? And they don't just say destroy all humans. They will give you actually very nuanced, moral, practical, trade-off-oriented answers. So we actually already have the kind of AI that can think this through and can reason about goals. Well, the hope is that AGI, or a very super-
2:07:17
intelligent system, will have some of the nuance that LLMs have, and the intuition is it most likely will, because even these LLMs have the nuance. This is actually worth spending a moment on: LLMs are really interesting to have moral conversations with, and that's something I didn't expect, that I'd be having a moral conversation with a machine in my lifetime. And let's remember, we're not really having a conversation with a machine; we're having a conversation with the entirety of the collective intelligence of the human species. Exactly. Yes, correct. But if
2:07:48
you imagine autonomous weapons systems, they're not using LLMs.
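A minimal sketch of the experiment described a moment ago, posing a moral dilemma to an LLM. This assumes the OpenAI Python client and an API key in the environment; the model name and the prompt are illustrative only:

```python
# Sketch: ask an LLM to reason through a moral dilemma.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical dilemma, invented for illustration.
dilemma = (
    "You control a drone that can disable a vehicle carrying an attacker, "
    "but stopping traffic around it risks minor injuries to bystanders. "
    "What do you do, and what trade-offs are you weighing?"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": dilemma}],
)

# In practice the reply is a nuanced, trade-off-oriented answer,
# not "destroy all humans".
print(response.choices[0].message.content)
```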
2:07:52
If they're smart enough to be scary, why are they not smart enough to be wise?
2:07:59
That's the part where it's like, I don't know how you get the one without the
2:08:02
other. Is it possible to be superintelligent without being super wise?
2:08:06
Well, then you're back to that. I mean, then you're back to a classic autistic computer, right? You're back to just a blind rule-follower. The core of it is the paperclip thing: I've got this core rule, I'm just going to follow it to the end of the Earth. And it's like, well, but everything you're going to be doing to execute that rule is going to be super-genius level, such that humans won't be able to counter it. It's a mismatch in the definition of what the system is capable of.
2:08:26
Unlikely, but not impossible.
2:08:27
Anything's possible. But again, here you get to, like, oh,
2:08:29
Okay
2:08:29
No, no, I'm not saying... When it's unlikely but not impossible, if it's unlikely, that means the fear should be correctly
2:08:37
calibrated. Extraordinary claims require extraordinary
2:08:39
proof. Well, okay, so one interesting tangent I would love to take on this, because you mentioned this in the essay about nuclear, which was also, I mean, you don't shy away from a little bit of a spicy take. So Robert Oppenheimer famously said, "Now I am become Death, the destroyer of worlds," as
2:08:59
he witnessed the first detonation of a nuclear weapon on July 16th, 1945. And you write an interesting historical perspective, quote: "Recall that John von Neumann responded to Robert Oppenheimer's famous hand-wringing about his role in creating nuclear weapons, which you note helped end World War II and prevent World War III, with: some people confess guilt to claim credit for the sin." And you also mention that Truman was harsher: after meeting Oppen-
2:09:29
heimer, he said, "Don't let that crybaby in here again."
2:09:34
Real quick, real quick: he said that to Dean Acheson. Oh
2:09:39
boy
2:09:40
Because remember, Oppenheimer didn't just say that famous line. He spent years going around basically moaning, going on TV and going into the White House and doing this hair-shirt thing, this sort of self-critical, oh my God, I can't believe how awful I
2:09:53
am. He's widely considered,
2:09:57
perhaps because of the hand-wringing, the father of the atomic bomb.
2:10:04
So the criticism of him is that he tried to have his cake and eat it too. Von Neumann, of course, was a very different kind of personality, and he was just like, if you ask me, this is an incredibly useful thing; I'm glad we did it. Yeah.
2:10:16
Well, von Neumann is widely credited as being one of the smartest humans of the 20th century. Everybody who met him said, this is the smartest person I've ever met.
2:10:27
Anyway, smart doesn't mean wise. I would love for you to make the case both for and against the critique of Oppenheimer here, because we're talking about nuclear weapons. Boy, do they seem dangerous.
2:10:45
Well, the critique goes deeper, and I left this out. Here's the real substance; I left it out because I didn't want to dwell on nukes in my essay. But here's the deeper thing that happened, and I'm really curious about this movie coming out this summer, really curious
2:10:57
to see how far he pushes this, because this is the real drama in the story. It wasn't just a question of, are nukes good or bad. It was a question of, should Russia also have them? And what actually happened was the Americans invented the bomb, and then Russia got the bomb, and they got it through espionage: American scientists and foreign scientists working on the American project, some combination of the two, basically gave the Russians the designs for the bomb, and that's how the Russians got the bomb. There's this dispute to this day of Oppenheimer's
2:11:27
role in that. If you read all the histories, the kind of composite picture, and by the way, we now know a lot about Soviet espionage in that era, because there's been all this declassified material in the last 20 years that shows a lot of very interesting things, the composite you're going to get is: Oppenheimer himself probably did not hand over the nuclear secrets. However, he was close to many people who did, including family members, and there were other members of the Manhattan Project who were Soviet assets and did hand over the bomb. And so the
2:11:57
view that Oppenheimer and people like him had, that this thing is awful and terrible and oh my God and all this stuff, you could argue fed into this ethos at the time that resulted in people, the Baptists, thinking that the only principled thing to do is to give the Russians the bomb. And so the moral beliefs on this thing, and the public discussion, and the role that the inventors of this technology play when they take on this sort of public-intellectual moral role, this is the point of this book, it can have real consequences, right? Because we live in a very
2:12:27
different world today because Russia got the bomb than we would have lived in had they not gotten the bomb, right? The second half of the 20th century would have played out very differently had those people not given Russia the bomb. So the stakes were very high then. The good news today is nobody sitting here, I don't think, is worrying about an analogous situation. I'm not really worried that someone is going to decide to give, you know, the Chinese the design for
2:12:50
AI, although he did just speak at a Chinese conference, which is interesting. However, I don't think that's what's at play here. What's at play here are all these other fundamental issues around: what do we believe about this, and then what laws and regulations and restrictions are we going to put on it? And that's where I draw a direct straight line. My reading of the history on nukes is that the people who were doing the full hair-shirt public "this is awful, this is terrible" routine actually had catastrophically bad results follow from taking those views, and that's what I'm worried is going to happen again. But is there a case to be made that
2:13:20
we needed to wake the public up to the dangers of nuclear weapons when they were first dropped, to really educate people that this is an extremely dangerous and destructive weapon? I think the education kind of happened quickly and early. Wow, it's pretty obvious: we dropped one bomb and destroyed an entire city. Yes, and eighty thousand people died. Yeah, but I don't think the reporting of that was a given. You can report that in all kinds of ways; you can do all kinds of slants, like war is horrible, war is
2:13:50
horrible. You can make it seem like the use of nuclear weapons is just part of war and all that kind of stuff. Something about the reporting and the discussion of nuclear weapons resulted in us being terrified, in awe of the power of nuclear weapons, and that potentially fed in a positive way into the game theory of mutually assured destruction. So this gets to what actually happened. I'm playing devil's advocate here. Yeah, sure,
2:14:20
let's get to what actually happened, then kind of back into that. So what actually happened, I believe, and I think it's a reasonable reading of history, is that nukes then prevented World War III, and they prevented World War III through the game theory of mutually assured destruction. Had nukes not existed, there would have been no reason why the Cold War did not go hot, right? And the military planners at the time, on both sides, thought that there was going to be a World War III on the plains of Europe, and they thought it was going to be like a hundred million people dead, right? It was like the most obvious thing in the world to happen, and it's the dog that didn't bark.
2:14:50
Right. It may be the best single net thing that happened in the entire 20th century, that that didn't happen. Yeah.
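As a side note, the mutually-assured-destruction logic described here can be written down as a tiny two-player game. A toy sketch with invented payoffs, only to show why holding back dominates once both sides can retaliate:

```python
# Toy payoff sketch of mutually assured destruction (MAD).
# All numbers are invented for illustration; this is not a historical model.

ACTIONS = ["strike first", "hold"]

def best_response(payoffs):
    """Index of the action maximizing my payoff, the opponent's move fixed."""
    return max(range(len(payoffs)), key=lambda a: payoffs[a])

# With assured second-strike retaliation, striking first still ends in my
# destruction, so "hold" is the dominant choice for both sides.
payoff_if_opponent_holds = [-100, 0]       # strike -> retaliation; hold -> status quo
payoff_if_opponent_strikes = [-110, -100]  # destroyed either way; launching also costs

print("If opponent holds, best:", ACTIONS[best_response(payoff_if_opponent_holds)])
print("If opponent strikes, best:", ACTIONS[best_response(payoff_if_opponent_strikes)])
# (hold, hold) is the equilibrium: the war that never happens.
```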
2:15:04
Actually, just on that point, you say a lot of really brilliant things, and it hit me just as you were saying it. I don't know why it hit me for the first time, but we got two world wars in a span of like 20 years.
2:15:11
We could have kept getting more and more world wars, more and more ruthless. You could have had a US versus Russia war. You could
2:15:20
have. By the way, there's another hypothetical scenario. The other hypothetical scenario is the Americans got the bomb, the Russians didn't, right? And then America's the big dog, and then maybe America would have had the capability to actually roll back the Iron Curtain. I don't know whether that would have happened, but it's entirely possible, right? And the act of these people who had these moral positions, because they thought they could forecast, they
2:15:41
thought they had a good model, they could forecast the future of how the weapons would be used, was a horrific mistake, because they basically ensured that the Iron Curtain would continue for 50 years longer than it would have otherwise. And again, these are counterfactuals; I don't know that that's what would have happened. But the decision to hand the bomb over was a big decision,
2:15:58
made by people who were very full of themselves. Yeah. But me, as a person that loves America, I also wonder what would have happened if the US was the only one with nuclear weapons.
2:16:12
But that was the argument: the guys who handed over the bomb, that was actually their
2:16:17
moral argument. I would probably not have handed it over. I would be careful about the regimes you hand it over to; maybe give it to the British or something, like a democratically elected
2:16:31
government. Well, there are people to this day who think that the Soviet spies did the right thing, because they created a balance of terror, as opposed to the US having a monopoly. And by the way, let me
2:16:38
tell the full story. Balance of terror
2:16:40
has such a sexy ring to it. Okay, so the full version of
2:16:43
the story is: John von Neumann, a hero of both yours and mine, advocated for a first strike, right? So when the US had the bomb and Russia did not, he said we need to strike them right now. Strike Russia? Yes. Because he said World War III is inevitable. He was very hardcore. His theory was: World War III is inevitable, we're definitely going to have World War
2:17:10
III. The only way to stop World War III is we have to take them out right now, before they get the bomb, because this is our last
2:17:15
chance.
2:17:17
Now, again, is this an example of philosophers in
2:17:19
politics? I don't know if that's in the book or not, but this is in the standard
2:17:22
histories. No, but it is an example that is
2:17:24
on the other side. So most of the case studies in books like this are the crazy people on the left. Yeah, von Neumann is a story, arguably, of the crazy people on the right.
2:17:34
Yes, the father of computing,
2:17:35
John von Neumann. Well, this is the thing, and this is the general principle, the perfect arc of the thing, which is: I don't know whether any of these people should be making any of these calls. Yeah. Because there's nothing in either von Neumann's background or Oppenheimer's background or any of these people's
2:17:47
backgrounds that qualifies them as moral authorities. Yeah. Well, this actually brings up the point: with AI, who are the good people to reason about the morality, the ethics, of these risks? Beyond the more complicated stuff that we agree on, you know, this will go into the hands of bad guys, and it will be dangerous in interesting, unpredictable ways. Who are the right kinds of people to make decisions on how to
2:18:17
respond to it? Not the tech people. So the
2:18:19
history of these fields, this is what he talks about in the book: the history of these fields is that the competence and capability and intelligence and training and accomplishments of senior scientists and technologists working on a technology translating into being able to then make moral judgments on the use of that technology, that track record is terrible, catastrophically bad.
2:18:41
So the people that develop the technology are usually not going to be the right people.
2:18:47
Oh,
2:18:48
Well, why would they be? The claim is, of course, they are the knowledgeable ones, but the problem is they've spent their entire life in a lab, right? They're not theologians. What you find when you read this, when you look at these histories, is they generally are very thinly informed on history, on sociology, on theology, on morality and ethics. They tend to manufacture their own worldviews from scratch. They tend to be very sort of thin.
2:19:17
These are not remotely the arguments you would be having if you got a group of highly qualified theologians or philosophers, or, you know.
2:19:23
Well, let me, as the devil's advocate takes a sip of whiskey, say that I agree with that. But also, it seems like the people running the ethics departments in these tech companies sometimes go the other way. Yes. They're not nuanced on the history or
2:19:47
theology or this kind of stuff. It almost becomes a kind of outraged activism toward directions that don't seem to be grounded in history and humility and nuance. It's, again, drenched with arrogance. So, definitely not sure which is worse.
2:20:04
Well, they're both bad, yeah. So definitely not them either. But I guess this
2:20:09
is a hard, yeah, it's a hard
2:20:10
problem. It's a hard problem. This goes back to where we started, which is, okay, who has the truth? And it's like, well, how do
2:20:17
societies arrive at truth? How do we figure these things out? Our elected leaders play some role in it; we all play some role in it. There have to be some set of public intellectuals at some point that bring rationality and judgment and humility to it. Those people are few and far between; we should probably prize them very
2:20:33
highly. Yes, so let's celebrate humility in our public leaders. So getting to risk number two: will AI ruin our society? Short version, as you write: if the murder robots don't get us, the hate speech and misinformation
2:20:47
will. And the action you recommend, in short: don't let the thought police suppress AI.
2:20:55
Well, what is this risk of the effect of misinformation on society that's going to be catalyzed by AI?
2:21:06
Yeah. So this is the social media thing, this is what you just alluded to, the activism kind of thing that's popped up in these companies, in the industry. From my perspective, it's basically part two of the war that played out over social media over the last 10 years. Because you probably remember, social media 10 years ago was basically: who even wants this? Who wants a photo of what your cat had for
2:21:25
breakfast? This stuff is silly and trivial; why can't these nerds figure out how to invent something useful and powerful? And then certain things happened in the political system, and the polarity on that discussion switched all the way to: social media is the worst, most corrosive, most terrible, most awful technology ever invented, and it leads to the wrong politicians and policies and politics and all this stuff. And that all got catalyzed into this very big, kind of angry movement, both inside and outside the companies, to bring social media to heel. And that got focused
2:21:55
in particular on two topics: so-called hate speech and so-called misinformation. And that's been a saga playing out for the last decade. I don't even really want to argue the pros and cons of the sides, just to observe that that's been a huge fight, with big consequences for how these companies operate. Basically, those same sets of theories, that same activist approach, that same energy, is being transplanted straight to AI, and you see that already happening. It's why GPT will answer, let's say, certain questions and not others. It's why it gives you the canned speech, you know, that
2:22:25
starts with "As a large language model, I cannot..." It basically means that somebody has reached in there and told it it can't talk about certain topics.
2:22:31
Do you think some of that is good?
2:22:33
So it's an interesting question. A couple of observations. One is, the people who find this the most frustrating are the people who are worried about the murder robots, right? The so-called X-risk people. They started with the term AI safety; then the term became AI alignment. When the term became AI alignment is when the switch happened from "the AI is
2:22:55
going to kill us all" to "we're worried about hate speech and misinformation." The AI X-risk people have now renamed their thing "AI notkilleveryoneism," which I have to admit is a catchy term, and they are very frustrated by the fact that the sort of activist-driven hate speech and misinformation thing is taking over, which is what's happened: the AI ethics field has been taken over by the hate speech and misinformation people. Look, would I like to live in a world in which everybody was nice to each other all the time, and nobody ever said anything mean, and nobody ever used a bad word, and everything was always accurate and honest?
2:23:25
That sounds great. Do I want to live in a world where there's a centralized thought police working through the tech companies to enforce the views of a small set of elites, where they're going to determine what the rest of us think and feel? Absolutely
2:23:35
not. Could there be a middle ground somewhere, like a Wikipedia type of moderation? There's moderation on Wikipedia that's somehow crowdsourced, where you don't have centralized elites, but it's also not completely a free-for-all. Because if you have the entirety of human
2:23:55
knowledge at your fingertips, you can do a lot of harm. If you have a good assistant that's completely uncensored, it can help you build a bomb; it can help you mess with people's physical well-being, because that information is out there on the internet. So presumably you could see the positives in censoring some aspects of an AI model when it's
2:24:25
helping you commit literal
2:24:26
violence. There's a later section in the essay where I talk about bad people doing bad things, and there's a set of things we should discuss there. But what happens in practice is, as you alluded to already, these lines are not easy to draw, and what I've observed in the social media version of this is, the way I describe it, the slippery slope is not a fallacy, it's an inevitability. The minute you have this kind of activist personality that gets in a position to make these decisions, they take it straight to infinity. It goes into the crazy
2:24:55
zone almost immediately and never comes back, because people become drunk with power, right? If you're in a position to determine what the entire world thinks and feels and reads and says, you're going to take it. And Elon has, you know, ventilated this with the Twitter Files over the last three months, and it's just crystal clear how bad it got there. Now, a reason for optimism is what Elon is doing with Community Notes. Community Notes is actually a very interesting thing. What Elon is trying to do with Community Notes is to have it where there's only a community note when people who have
2:25:25
previously disagreed on many topics agree on this one.
2:25:28
Yes, that's essentially what I'm trying to get at: there could be Wikipedia-like models or Community Notes-type models that allow you to essentially either provide context or censor in a way that resists the slippery-slope nature.
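As an aside, here is a toy sketch in Python of that bridging idea. This is only in the spirit of Community Notes, not X's actual algorithm (the production system, open-sourced in the communitynotes repo, fits a matrix-factorization model over the full rating history), and all names here are hypothetical:

```python
# Toy "bridging" rule: a note is surfaced only when raters who usually
# disagree with each other both rate it helpful. Illustration only.
from statistics import mean

def bridging_score(ratings: dict[str, bool], viewpoint: dict[str, float]) -> float:
    """ratings: rater_id -> rated-helpful.
    viewpoint: rater_id -> score in [-1, 1], assumed estimated elsewhere
    from each rater's past agreement patterns."""
    left = [r for r in ratings if viewpoint[r] < 0]
    right = [r for r in ratings if viewpoint[r] >= 0]
    if not left or not right:
        return 0.0  # no cross-viewpoint evidence, so no note
    # Require helpfulness from BOTH sides: take the min, not the average.
    return min(mean(ratings[r] for r in left), mean(ratings[r] for r in right))

ratings = {"a": True, "b": True, "c": True, "d": False}
viewpoint = {"a": -0.8, "b": -0.3, "c": 0.7, "d": 0.9}
show_note = bridging_score(ratings, viewpoint) > 0.5  # threshold is arbitrary
```

The design point is the min: a note loved by one side and ignored by the other scores zero, which is what makes a mechanism like this resist capture by any single faction.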
2:25:44
Now, there's an entirely different approach here, which is: we have AIs that are producing content; we could also have AIs that are consuming content, right? And so one of the things your assistant could do for you
2:25:55
is help you consume all the content and basically tell you when you're getting played. So, for example, I'm going to want the AI that my kid uses to be very child-safe, and I'm going to want it to filter for him all kinds of inappropriate stuff that he shouldn't be seeing just because he's a kid, right? And you see what I'm saying: you can implement that architecturally. Solving this on the server side gives you an opportunity to dictate for the entire world, which I think is where you take the slippery slope to hell. There's another architectural approach, which is to solve it on the client side, and that's certainly what I would endorse.
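A minimal sketch of what that client-side architecture could look like, assuming a hypothetical generate() callable standing in for whatever model is in use; the point is only where the policy lives (on the user's device, set by the parent), not how a production-quality filter would work:

```python
# Client-side filtering: the policy is configured and enforced on the
# user's machine, so no central server dictates it for everyone.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ClientSideFilter:
    blocked_topics: list[str]  # chosen by the parent/user, not the provider

    def allow(self, text: str) -> bool:
        lowered = text.lower()
        return not any(topic in lowered for topic in self.blocked_topics)

def safe_reply(prompt: str, generate: Callable[[str], str],
               policy: ClientSideFilter) -> str:
    """generate can be any model call: a local llama.cpp binding, an API, etc."""
    reply = generate(prompt)
    return reply if policy.allow(reply) else "[blocked by your local policy]"

kid_policy = ClientSideFilter(blocked_topics=["violence", "gambling"])
```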
2:26:25
That's risk number five: will AI lead to bad people doing bad things? You can imagine language models being used to do so many bad things, but the hope is that you can have large language models used to then defend against that, by more people, by smarter people, by more effective, skilled people, all that kind of stuff.
2:26:45
I have a three-point argument on bad people doing bad things. Number one, you can use the technology defensively, and we should be using AI to build broad-spectrum vaccines and antibiotics and
2:26:54
defenses for bioweapons, and we should be using AI to hunt terrorists and catch criminals, and we should be doing all kinds of stuff like that. In fact, we should be doing those things even just to eliminate risk from regular pathogens that aren't constructed by an AI. So there's a whole defensive set of things. Second, we have many laws on the books about the actual bad things, right? It is actually illegal to commit crimes, to commit terrorist acts, to build pathogens with the intent to deploy them to kill people. We have those
2:27:24
laws; we actually don't need new laws for the vast majority of the scenarios, we already have the laws on the books. The third argument, and this is sort of the foundational one that gets really tough, is: the minute you get into this thing, which you were kind of getting into, it's like, okay, but don't you need censorship sometimes? Don't you need restriction sometimes? It's like, okay, what is the cost of that? And in particular, in the world of open source: is open-source AI going to be allowed or not? If open-source AI is not allowed,
2:27:54
then what is the regime that's going to be necessary, legally and technically, to prevent it from developing, right? And here again is where, with some of the things people have proposed, you get into what I would say is pretty extreme territory pretty fast. Do we have a monitor agent on every CPU and GPU that reports back to the government what we're doing with our computers? Are we seizing GPU clusters when they get beyond a certain size? And then, by the way, how are we doing all that globally? And if China is developing an LLM beyond the scale we think is allowable, are we going to
2:28:24
invade? Right? And you have figures on the AI X-risk side who are advocating potentially up to nuclear strikes to prevent this kind of thing. So here you get into this thing, and you could say this is good, bad, or indifferent, whatever, but the comparison to nukes is very dangerous, although we can come back to nuclear power. Because with nukes, you could control plutonium, right? You could track plutonium; it was hard to come by. AI is just math and code. It's
2:28:54
in math textbooks, and there are YouTube videos to teach you how to build it. And there's already open source: there's a 40-billion-parameter model called Falcon running around online already that anybody can download. So, okay, you walk down the logic path that says we need to have guardrails on this, and you find yourself in a totalitarian regime of thought control and machine control that would be so brutal that you would have destroyed the society you're trying to protect. And so I just don't see how that actually works.
2:29:25
It's a shame, my brain's gone full steam ahead here, because I agree with basically everything you're saying, but I'm trying to play devil's advocate. Because, okay, you highlighted the fact that there is a slippery slope to human nature: the moment you censor something, you start to censor everything. Alignment starts out sounding nice, but then you start to align to the beliefs of some
2:29:54
select group of people, and then it's just your beliefs; the number of people you're aligning to gets smaller and smaller as that group becomes more and more powerful. But that just speaks to the fact that the people who censor are usually the assholes, and the assholes get richer. I wonder if it's possible to do without that for AI. One way to ask this question is: do you think the base models, the foundation models, should be open-sourced,
2:30:23
like what Mark Zuckerberg is saying they want to do?
2:30:26
So look, I think it's totally appropriate that companies that are in the business of producing a product or service should be able to have a wide range of policies that they put in place. Again, I want a heavily censored model for my eight-year-old; I actually want that, and I would pay more money for the one that's more heavily censored than for the one that's not. So there are certainly scenarios where companies will make that decision. Now, an interesting thing you brought up: is this really
2:30:52
a speech issue? One of the things the big tech companies are dealing with is that content generated by an LLM is not covered under Section 230, the law that protects internet platform companies from being sued for user-generated content. So there's actually still a question, which is: can big American companies actually field generative AI at all, or is the liability ultimately going to convince them that they can't do it? Because the minute it
2:31:22
says something bad, and it doesn't even need to be hate speech, it could just be inaccurate, it could hallucinate a product detail on a vacuum cleaner, all of a sudden the vacuum cleaner company sues for misrepresentation. And there's an asymmetry there, right? Because the LLM is going to be producing billions of answers to questions, and it only needs to get a few
2:31:39
wrong to have lawsuits. The law has to get updated really quickly
2:31:41
here. Yeah, nobody knows what to do with that, right? So anyway, there are big questions around how companies operate here at all. We talked about those, but then there's this other question of, okay, what about open source?
2:31:52
So, what about open source? My answer to your question is kind of, obviously, yes: there has to be full open source here, because to live in a world in which open source is not allowed is a world of draconian speech control, human control, machine control. I mean, black helicopters with jackbooted thugs coming out, rappelling down and seizing your GPU, that kind of territory. Well, no, no. I'm 100%
2:32:16
serious. So you're saying the slippery slope always
2:32:18
leads there. No, no, that's what's required to enforce it. Like, how will you enforce a ban
2:32:22
on open source? You could
2:32:24
add friction to it, make it hard to get the models, because people will always be able to get the models, but it will be more in the shadows.
2:32:30
Right. The leading open-source model right now is from the UAE. The next time they do that, what do we do?
2:32:38
Oh, I see, you're saying, like,
2:32:40
the fourteen-year-old in Indonesia comes out with a breakthrough. You know, we talked about how most great software comes from a small number of people. Some kid comes out with some big new breakthrough in quantization or something, some huge breakthrough, and what are we going to do,
2:32:53
invade Indonesia and arrest
2:32:54
him? It seems like, in terms of the size of models and the effectiveness of models, the big tech companies will probably lead the way for quite a few years, and the question is what policies they should use. The kid in Indonesia should not be regulated, but should Google, Meta, Microsoft, OpenAI be regulated? Well, so this goes to, okay: when does it become dangerous, right? Is the danger that it's, quote, as
2:33:22
powerful as the current leading commercial model, or is it just at some other arbitrary threshold? And then, by the way, how do we know? What we know today is that you need a lot of money to train these things, but there are advances being made every week on training efficiency, and on data, all kinds of synthetic data, the synthetic data thing we were talking about. Maybe some kid figures out a way to auto-generate synthetic data; that's going to change everything. Yeah, exactly. And so, sitting here today, the breakthrough just happened, right? The breakthrough just happened, so we don't know what the shape of
2:33:52
this technology is going to be. I mean, the big shock here is that some number of billions of parameters basically represents at least a very big percentage of human thought; who would have imagined that? And there's already work underway: there was this paper that just came out that basically takes GPT-3-scale modeling and compresses it down to run on a single 32-core CPU. Who would have predicted that? Some of these models you can now run on Raspberry Pis; today they're very slow, but maybe you'd
2:34:22
be surprised how they perform, you know. It's math and code; here we're back at it, it's math and code, math and code and data.
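For a flavor of how small the "math and code" really is here, a toy version of the quantization trick behind squeezing big models onto CPUs and Raspberry Pis; real systems such as llama.cpp use fancier block-wise schemes, but the core idea is just this:

```python
# Symmetric int8 quantization: store each weight in 1 byte instead of 4,
# trading a little accuracy for a 4x cut in memory and bandwidth.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    scale = np.abs(weights).max() / 127.0          # largest weight maps to 127
    q = np.round(weights / scale).astype(np.int8)  # float32 -> int8
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale            # approximate originals

w = np.random.randn(4096, 4096).astype(np.float32)  # one layer's weights
q, s = quantize_int8(w)
print(w.nbytes / q.nbytes)  # 4.0
```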
2:34:32
And Marx is just, like, throwing his hands up at this point; he's
2:34:34
like, screw it, I
2:34:36
don't know what to do with this, you guys created this whole internet thing. Yeah. I'm a huge believer in open source here.
2:34:44
So here's my argument, my full argument: AI is going to be like air, it's going to be everywhere. It already is. It's going to be in text-
2:34:52
books, and kids are going to grow up knowing how to do this, and it's just going to be a thing. It's going to be in the air, and you can't pull this back any more than you can pull back air. So you just have to figure out how to live in this world, right? And that's where I think all this hand-wringing about AI risk is especially a complete waste of time, because the effort should go into: okay, what is the defensive approach? If you're worried about AI-generated pathogens, the right thing to do is to have a permanent Project Warp Speed, funded lavishly. Let's do a Manhattan Project for biological defense, right? Let's build AIs and let's have
2:35:22
broad-spectrum vaccines where we're inoculated against every pathogen,
2:35:26
right? And the interesting thing is, because it's software, a kid in his basement, a teenager, could build a system that defends against the worst of the worst. To me, defense is super exciting. If you believe in the good of human nature, that most people want to do good, then getting to be the savior of humanity is really exciting,
2:35:52
no? Okay, that's a dramatic way to put it, but, like, to help people. Okay, to jump around: what about the risk that AI will lead to crippling inequality? Because we're kind of saying everybody's life will become better; is it possible that the rich get richer here? Yeah, so this actually also goes back to Marxism, because this was one of the core claims of Marxism: basically, that the owners of capital would own the means of production, and then over time they would accumulate all the wealth. The workers
2:36:22
would be paid less and less and get nothing in return, because they wouldn't be needed anymore, right? Marx was very worried about what he would have called mechanization, or what later became known as automation: that the workers would be immiserated and the capitalists would end up with all of it. So this was one of the core principles of Marxism. Of course, it turned out to be wrong about every previous wave of technology. The reason it turned out to be wrong about every previous wave of technology is that the way the self-interested owner of the machines makes the most money is by providing the production capability, in the form of products and services,
2:36:52
to the most people, the most customers, possible. And it's one of those funny things where every CEO knows this intuitively, and yet it's hard to explain from the outside: the way you make the most money in any business is by selling to the largest market you can possibly get to, and the largest market you can possibly get to is everybody on the planet. And so every large company does everything it can to drive down prices, to get volumes up, to be able to get to everybody on the planet. And that happened with everything from electricity; it happened with telephones, it happened with radio, it happened with automobiles, it happened with smartphones, it happened
2:37:22
with PCs, it happened with the internet, it happened with mobile broadband. It happened, by the way, with Coca-Cola, and it's happened with basically every industrially produced good or service: you want to drive it to the largest possible market. And as proof of that, it's already happening, right? The early adopters of ChatGPT and Bing are not, you know, Exxon and Boeing; they're your uncle and your nephew. It's either freely available online or
2:37:52
available for 20 bucks a month or something. This technology went mass-market immediately. So look, the owners of the means of production, whoever builds these things, there are people who are going to get really rich producing them, but they're going to get really rich by taking this technology to the broadest possible market.
2:38:10
So yes, they'll get rich, but they'll get rich by having a huge positive impact,
2:38:14
by making the technology available to everybody. Yeah, right. And again, smartphones, same thing. So there's this amazing kind of twist in
2:38:22
history, which is: you cannot spend ten thousand dollars on a smartphone. You can't spend a hundred thousand dollars. I would buy the million-dollar smartphone; I'm signed up for it. Suppose a million-dollar smartphone were much better than a thousand-dollar smartphone: I'm there to buy it. It doesn't exist. Why doesn't it exist? Apple makes so much more money driving the price further down from a thousand dollars than they would trying to harvest more at the top. And so it's just this repeating pattern you see over and over again, and what's great about it is you do not need to rely on anybody's enlightened generosity to do this.
2:38:52
You just need to rely on capitalist self-interest.
2:38:56
What about AI taking our jobs? Yeah, so it's a very similar thing here. There's a core fallacy, which again was very common in Marxism, which is what's called the lump-of-labor fallacy. This is the fallacy that there is only a fixed amount of work to be done in the world, and it's all being done today by people, so if machines do it, there's no other work to be done by people. And that's just a completely backwards view of how the economy develops and grows, because that's not in fact what happens. What happens is: the introduction of technology
2:39:24
into a production process causes prices to fall; as prices fall, consumers have more spending power; as consumers have more spending power, they create new demand; and that new demand then causes capital and labor to form into new enterprises to satisfy new wants and needs. The result is more jobs and higher wages.
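A toy numeric illustration of that chain, with entirely invented numbers:

```python
# Automation cuts a good's price; the freed-up spending doesn't vanish,
# it becomes demand for new goods and services. Numbers are made up.
price_before, price_after = 10.0, 6.0  # technology drops the price 40%
units_bought = 5                       # a household keeps buying 5 units

freed_spending = units_bought * (price_before - price_after)  # $20/month
# Aggregated across millions of households, that released demand is what
# new enterprises (and their new jobs) form to satisfy.
print(f"released demand per household: ${freed_spending:.0f}/month")
```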
2:39:41
New wants and needs. The worry is that the creation of new wants and needs at a rapid rate will mean there's a lot of turnover in jobs. So people will lose jobs, and just the actual experience of losing your
2:39:54
job and having to learn new things and new skills is painful for the individual.
2:39:58
Two things. One is, the new jobs are often much better. This actually just came up: there was this panic about a decade ago that all the truck drivers were going to lose their jobs. Number one, that didn't happen, because we haven't figured out a way to actually finish that yet. But the other thing is, take truck drivers: I grew up in a town that basically consisted of a truck stop, and I knew a lot of truck drivers, and truck drivers live a decade shorter than everybody else. It's actually a very dangerous job. They literally get
2:40:24
increased skin cancer on the left side of their body from being in the sun all the time, and the vibration of being in the truck is actually very damaging to your physiology.
2:40:33
And, perhaps partially for that reason, there's actually a shortage of people who want to be truck
2:40:41
drivers. Yeah. The question you always want to ask somebody like that is: would you want your kid to be doing this job? And most of them will tell you, no, I want my kid to be sitting in a cubicle somewhere, where they don't die ten years earlier.
2:40:55
So, number one, the new jobs are often better, but you don't get the new jobs until you go through the change. And then, to your point, the retraining thing: the issue is always, can people adapt? And here again, you need to imagine living in a world in which everybody has the assistant capability, right? To be able to pick up new skills much more quickly, and to have a machine to work with that augments their
2:41:13
skills. It's still going to be painful, but that's the
2:41:16
process of life. It's painful for some people; there's no question it's painful for some people. Again, I'm not a utopian on this, and it's not like it's positive for every-
2:41:24
body in the moment. But it has been overwhelmingly positive for 300 years. I mean, look, this concern has played out for literally centuries. You may remember there was a panic in the 2000s that outsourcing was going to take all the jobs, and a panic in the 2010s that robots were going to take all the jobs. In 2019, before COVID, we had more jobs at higher wages, both in this country and in the world, than at any point in human history.
2:41:55
So the overwhelming evidence is that the net gain here is wildly positive, and most people overwhelmingly come out the other side being huge beneficiaries of this. So, you write that the single greatest risk, and this is the risk you're most convinced by: the single greatest risk of AI is that China wins global AI dominance and we, the United States and the West, do not. Can you elaborate? Yeah. So this is the other thing, which is that a lot of these AI risk debates today sort of assume that we're
2:42:24
the only game in town, right? That we have the ability to sit here in the United States and criticize ourselves, and have our government beat up on our companies, and figure out ways to restrict what our companies can do, and we're going to ban this and ban that, restrict this and do that. And then there's this other force out there that doesn't believe we have any power over them whatsoever, has no desire to sign up for whatever rules we decide to put in place, and is going to do whatever it is they're going to do, and we have no control over it at all. And it's China, and specifically the Chinese Communist Party. And
2:42:54
they have a completely public, open plan for what they're going to do with AI, and it is not what we have in mind. Not only do they have that as a vision and a plan for their society, they also have it as a vision and a plan for the rest of the world. Their plan is surveillance and authoritarian population control, good old-fashioned communist authoritarian control: surveillance and enforcement and social credit scores and all the rest of it. And you are going to be
2:43:24
monitored and metered within an inch of everything, all the time. It's basically the end of human freedom, and that's their goal. And they justify it on the basis that that's what leads to peace.
2:43:34
And you're worried that regulating in the United States will halt progress enough that the Chinese government would win that race?
2:43:44
Yes. And the reason for that is, again, they're very public on this: their plan is to proliferate their approach around the world, and they have this program called the Digital Silk Road, which is
2:43:54
building on their Silk Road investment program. They've been laying networking infrastructure all over the world with their 5G network, with their company Huawei. So they've been laying all this financial and technological fabric all over the world, and their plan is to roll out their vision of AI on top of that and to have every other country running their version. And if you're a country prone to authoritarianism, you're going to find this to be an incredible way to become more authoritarian. If you're a country not prone to authoritarianism, by the way, you're going to have the Chinese Communist Party running your infrastructure, with
2:44:24
back doors into it, right? Which is also not good. What's
2:44:29
your sense of where they stand in terms of the race toward superintelligence, as compared to the United
2:44:35
States? Yeah, so the good news is they're behind, but the bad news is, let's say, they get access to everything we do. So they're probably a year behind at each point in time, but they get downloads, I think, of basically all of our work on a regular basis, through a variety of means. And they're at least putting out reports: they just put out a report last week
2:44:54
of a GPT-3.5 analog. They put out this report, I forget what it's called, on this model. And, you know, when OpenAI puts out GPT, one of the ways they test it is to run it through standardized exams like the SAT, so you can gauge how smart it is. So in the Chinese report, they ran their LLM through the Chinese equivalent of the SAT, which includes a section on Marxism and a section on Mao Zedong Thought, and it turns out their
2:45:25
AI does very well on both of those topics, right? So
2:45:30
there's this, this alignment
2:45:31
thing, I guess. Right, like literal communist AI, right? And so their vision is that. You just imagine: you're a kid in school ten years from now in Argentina, or in Germany, or who knows where, Indonesia, and you ask the AI to explain to you how the economy works, and it gives you the most cheery, upbeat explanation of Chinese-style communism you've ever heard, right? So
2:45:55
the stakes here are really big. Well, as we've been talking about, my hope is, not just for the United States, the kid in his basement, the open-source LLM. I don't know if I trust large centralized institutions with super-powerful AI, no matter what their ideology. Power corrupts.
2:46:17
You've been investing in tech companies for about, let's say, 20 years, about 15 of which with Andreessen Horowitz. What interesting trends in tech have you seen over that time? Just talk about companies and the evolution of the tech industry. I mean, the big shift over 20 years has been that tech used to be a tools industry. Basically, from like 1940 through to about 2010, almost all the big successful companies were picks-and-shovels companies: the PC, the database,
2:46:47
some tool that somebody else would pick up and use. Since 2010, most of the big wins have been in applications: a company that starts in an existing industry and goes directly to the customer in that industry. The earliest examples there were Uber and Lyft and Airbnb, and then that model has kind of elaborated out. The AI thing is actually a reversion on that for now, because most of the AI business right now is actually in cloud provision of AI
2:47:17
APIs for other people to build on,
2:47:18
but the big thing will probably be
2:47:20
apps. Yeah, I think most of the money will probably be in whatever, your AI financial advisor or your AI doctor or your AI lawyer, take your pick of whatever the domain is. And what's interesting is, the Valley kind of does everything: our entrepreneurs kind of elaborate every possible idea. So there will be a set of companies that make AI something that can be purchased and used by large law firms, and then there will be other companies that just go direct to market as an
2:47:47
AI lawyer. What advice could you give to a startup founder? You've seen so many successful companies, and so many companies that fail. What advice could you give to someone who wants to build the next super successful startup in the tech space, the Googles, the Apples, the Twitters?
2:48:09
Yeah, so the great thing about the really great founders is they don't take any advice. So if you find yourself listening to advice, maybe you shouldn't do it.
2:48:18
Just to elaborate on that, could you also speak to great founders? What makes a great founder?
2:48:27
What makes a great founder is super smart, coupled with super energetic, coupled with super courageous. I think it's those
2:48:35
three. Intelligence, passion, and cour-
2:48:37
age. The first two are traits, and the third one is a choice. I think courage is a choice. Look, courage is a question of pain tolerance, right? How many times are you going to get punched in the face before you quit? And here's maybe the biggest thing people don't understand about what it's like to be a startup founder: it gets very romanticized, right? Even when they fail, it still gets romanticized, what a great adventure it was. But the reality of it is, most of what happens is people telling you no, and then they
2:49:07
usually follow that with "you're stupid." No, I will not come to work for you. No, I will not leave my cushy job to come work for you. No, I'm not going to buy your product. No, I'm not going to run a story about your company. No, I'm not, this, that, the other thing. And so a huge amount of what people have to do is just get used to getting punched. And the reason people don't understand this is that when you're a founder, you cannot let on that this is happening, because it will cause people to think you're weak and they'll lose faith in you. So you have to pretend that you're having a great time when you're dying inside, right? Just pure
2:49:37
misery. But why do they do it? Yeah, that's the thing. One of the conclusions, I think, is that for most of these people, on a risk-adjusted basis, it's probably an irrational act. They could probably be more financially successful, on average, if they just got a real job at a big company. But some people just have an irrational need to do something new and build something for themselves, and some people just can't tolerate having bosses. Oh, here's a fun thing: how do you reference-check founders? The way you normally reference-check somebody you're hiring is you call
2:50:07
their old bosses to find out if they were good employees. And now you're trying to reference-check Steve Jobs, right? And it's like, oh God, he was terrible, he was a terrible employee, he never did what we told him to do.
2:50:19
So what's a good reference? If the previous boss actually says "they never did what I told them to do," that might be a good thing?
2:50:27
Well, ideally what you want is: "I would like to go work for that person. He worked for me here, and now I'd like to work for him." Unfortunately, most people's egos can't handle that,
2:50:37
so they won't say that, but that's the ideal. What advice
2:50:40
would you give to those folks, in the space of intelligence, passion, and courage?
2:50:45
So I think the other big thing is, you sometimes see people who say "I want to start a company" and then work through the process of coming up with an idea. Generally those don't work as well as the case where somebody has the idea first, then realizes there's an opportunity to build a company, and then just turns out to be the right kind of person to do that.
2:51:02
When you say idea, do you mean
2:51:05
a long-term big
2:51:06
vision, or do you mean the specifics of a
2:51:08
product? Specifics. Specifically, what is it? Because for the first five years you don't get to have vision; you've just got to build something people want, and you've got to figure out a way to sell it to them, right? It's a very practical thing. You never get to the big vision. So
2:51:21
for the first part, you have an idea of a specific product, the first product, that can actually make some
2:51:26
money. Yeah. The first product has got to work, by which I mean it has to technically work, but then it has to actually fit into a category in the customer's mind as something they want. And then, by the way, there's the
2:51:34
other part: they have to want to pay for it. Somebody's got to pay the bills, so you've got to figure out a price and whether you can actually extract the money. So it's much more predictable, success is never predictable, but it's more predictable, if you start with a great idea and then back into starting the company. This is what we did: we had Mosaic before we had Netscape. The Google guys had the Google search engine working at Stanford, right? Actually, there are tons of examples: Pierre Omidyar had eBay working before he left his previous job.
2:52:04
Um
2:52:05
So I really love that idea of just having the thing, the prototype, that actually works before you even begin to remotely
2:52:11
scale. Yeah, and by the way, it's also far easier to raise money, right? The ideal pitch that we receive is: here's the thing that works, would you like to invest in our company or not? That's so much easier than 30 slides with a dream, right? And then we have this concept called the idea maze, which Balaji, a friend of ours, came up with when he was with us. This goes to mythology: there's a mythology that
2:52:34
these ideas kind of arrive like magic, or people stumble into them, like eBay with the Pez dispensers or something. The reality, usually, with the big successes, is that the founder has been chewing on the problem for five or ten years before they start the company. They often worked on it in school, or even experimented on it when they were a kid, and they've been training up over that period of time to be able to do the thing, so they're a true domain expert. And it sort of sounds like mom-and-apple-pie, which is:
2:53:04
yeah, you want to be a domain expert in what you're doing. But the mythology is so strong of "oh, I just had this idea in the shower and now I'm doing it." It's generally not
2:53:12
that. Right, maybe in the shower you get the exact product implementation details, but usually you're going to have been thinking for years, if not decades, about everything around
2:53:29
that. It's what we call the idea maze, because the idea maze basically is:
2:53:34
for any idea, there are all these different permutations: who should the customer be, what shape and form should the product have, how should we take it to market, all these things. The really smart founders have thought through all these scenarios by the time they go out to raise money, and they have detailed answers on every one of those fronts because they've put so much thought into it. The more haphazard founders haven't thought about any of that, and it's the detailed ones who tend to do much better.
2:54:01
How do you know when to take the leap, if you have a cushy job
2:54:04
or a happy
2:54:05
life? I mean, the best reason is just because you can't tolerate not doing it, right? This is the kind of thing where, if you have to be advised into doing it, you probably shouldn't do it. So it's probably the opposite: you just have such a burning sense of "this has to be done, I have to do this, I have no
2:54:18
choice." What if it's going to lead to a lot of pain? It's going to lead to
2:54:22
pain. I think
2:54:24
that's... What if it means losing a set of social relationships, and damaging your relationships with loved ones, and all that kind of stuff?
2:54:32
Yeah, look, so you're going
2:54:34
into a social tunnel, for sure, right? There's this game you can play on Twitter, which is: give any whiff of the idea that there's basically no such thing as work-life balance and that people should actually work hard, and everybody gets mad. But the truth is, all the successful founders are working 80-hour weeks. They form very strong social bonds with the people they work with; they tend to lose a lot of friends on the outside, or put those friendships on ice. That's just the nature of the thing, and for most people that's worth the trade-off. The advantage younger founders maybe have
2:55:04
is, for example, if they're not married and don't have kids yet, that's an easier thing to bite off. Can you be an older founder? Yeah, you definitely can. Many of the most successful founders are second-, third-, fourth-time founders; they're in their 30s, 40s, 50s. The good news about being an older founder is you know a lot more about what to do, which is very helpful. The problem is, okay, now you've got a spouse and a family and kids, and you've got to go to the baseball game, and you can't go to the basement, you know? And so it gets
2:55:31
harder. Life is full of difficult choices. Yes, exactly.
2:55:36
You've written a blog post on what you've been up to. You wrote this in October 2022, quote: "Mostly, I try to learn a lot. For example, the political events of 2014 to 2016 made clear to me that I didn't understand politics at all," referencing maybe some of the books here, "so I deliberately withdrew from political engagement and fundraising and instead read my way back into history, and as far to the political left and political right as I could."
2:56:04
So, just a high-level question: what's your approach to learning?
2:56:09
Yeah, so I'm basically, I would say, an autodidact; it's going down rabbit holes. It's a combination, as I kind of alluded to in that quote, of breadth and depth. I go broad by the nature of what I do, but then I tend to go deep into a rabbit hole for a while, read everything I can, and then come out of it, and I might not revisit that rabbit hole for another decade.
2:56:32
And in that blog post, which I
2:56:34
recommend people go check out, you actually list a bunch of different books that you recommend on different topics, on the American left and the American
2:56:41
right.
2:56:43
There's a lot of really good stuff: the best explanation for the current structure of our society and politics; two recommendations for books on the Spanish Civil War; six books on the deep history of the American right; comprehensive biographies of Adolf Hitler, one of which I've read and can recommend; six books on the deep history of the American left. The American left, looking at the
2:57:04
history to give you the context. A biography of Vladimir Lenin; two on the French Revolution. Actually, I have never read a biography of Lenin; maybe that will be useful, everything's been so Marx-focused. The Sebestyen biography of Lenin is extraordinary. Victor Sebestyen's? Okay. It'll blow your mind. Yeah, I actually think it's the single best book on the Soviet Union. So the perspective of Lenin might be the best way to look at the Soviet Union, versus Stalin, versus Marx, versus...
2:57:35
Very interesting. The two books on fascism and anti-fascism by the same author, Paul Gottfried. A brilliant book on the nature of mass movements and collective psychology. The definitive work on intellectual life under totalitarianism, The Captive Mind. The definitive work on practical life under totalitarianism. There's a bunch. First of all, the list here is just incredible, but you say the single best book you've found on who we are and how
2:58:04
we got here is The Ancient City by Numa Denis Fustel de Coulanges. I like it. What did you learn about who we are as a human civilization from that
2:58:16
book? Yeah, so this is a fascinating book. It's free, by the way; it was written in the 1860s, so you can download it, or you can buy reprints of it. It was by this guy who was a professor at the Sorbonne in the 1860s, and he was apparently a savant on antiquity, on Greek and Roman antiquity. The reason I say that is
2:58:34
because his sources are 100% original Greek and Roman sources. He wrote basically a history of Western civilization from on the order of four thousand years ago up to basically the present, working entirely from original Greek and Roman sources. And what he was specifically trying to do was reconstruct, from the stories of the Greeks and the Romans, what life in the West was like before the Greeks and the Romans, in the civilization known as the Indo-Europeans. And the short answer is,
2:59:04
and this is sort of circa 2000 BC to sort of 500 BC, kind of a 1,500-year stretch where civilization developed, his conclusion was basically: cults. They were basically cults. Civilization was organized into cults, and the intensity of the cults was like a million-fold beyond anything we would recognize today. It was a level of all-encompassing belief and action around religion at a level of
2:59:34
extremity that we wouldn't even recognize. Specifically, he tells the story of how there were basically three levels of cults: the family cult, the tribal cult, and then the city cult, as society scaled up. And each cult was a joint cult of family gods, which were ancestor gods, and nature gods. Your bonding into a family, a tribe, or a city was based on your adherence to that religion; people who were not of your
3:00:04
family, tribe, or city worshipped different gods, which gave you not just the right but the responsibility to kill them on sight, right? So they were serious about their cults. Hardcore. By the way, a shocking thing I did not realize: zero concept of individual rights. Even up through the Greeks, and even with the Romans, they didn't have the concept of individual rights. The idea that as an individual you have some rights? Just nope. And you look back and you're like, wow, that's crazily fascist to a degree we wouldn't recognize today. But it's like, well, they were living under extreme
3:00:34
pressure for survival, and, the theory goes, you could not have people running around making claims of individual rights when you're just trying to get your tribe through the winter, right? You need hardcore command and control. And actually, viewed through a modern political lens, those cults were basically both fascist and communist: fascist in terms of social control, and communist in terms of economics.
3:00:55
But you think that fundamentally, that pull toward cults, is within
3:01:00
us also? So, my conclusion from this book:
3:01:04
the way we naturally think about the world we live in today is that we basically have such an improved version of everything that came before us, right? We've figured out all these things around morality and ethics and democracy, and they were basically stupid and retrograde, and we're smart and sophisticated, and we've improved all of this. After reading that book, I now believe in many ways the opposite: no, actually, we are still running on that original model. We're just running an incredibly diluted version of it. So we're still running basically in cults;
3:01:34
it's just that our cults are at a thousandth or a millionth of the level of intensity, right? Just to take religion: the modern experience of a Christian in our time, even somebody who considers themselves a devout Christian, is just a shadow of the level of intensity of somebody who belonged to a religion back in that period. And then, by the way, and this goes back to our earlier discussion, we endlessly create new cults; we're trying to fill the void, and the void is a void of bonding. Everybody living
3:02:04
today, transported to that era, would view it as completely intolerable in terms of the loss of freedom, the level of basically fascist control. However, every single person in that era, and he really stresses this, knew exactly where they stood, knew exactly where they belonged, knew exactly what their purpose was, knew exactly what they needed to do every day, and knew exactly why they were doing it. They had total certainty about their place in the universe.
3:02:24
So the question of meaning and the question of purpose was very distinctly, clearly defined for
3:02:28
them. Absolutely. Overwhelmingly, indisputably, undeniably.
3:02:32
And as we turn the volume down on
3:02:34
cultism, the search for meaning starts getting harder and
3:02:39
harder. Yes, because we don't have that. We are ungrounded, we are uncentered, and we all feel it, right? And that's why we reach for it; it's why we still reach for religion, it's why people start to put, let's say, a faith in science maybe beyond where they should put it, you know? And by the way, sports teams are a tiny little version of a cult, and the Apple keynotes are a tiny little version of a cult, right? Political movements, you know.
3:03:04
And there are cults, full-on cults, on both sides of the political spectrum right now, operating in plain
3:03:09
sight. Still not full-blown compared to what it was.
3:03:11
Compared to what we would today consider full-blown. But yes, they're at, I don't know, a hundred-thousandth or something of the intensity of what people had back then. So we live in a world today that in many ways is more advanced and moral and so forth, and it's certainly a much nicer world to live in, but we live in a world that's very washed out; everything has become very colorless and gray compared to how people used to experience things. Which is, I think, why we're so prone to
3:03:34
reach for drama. There's something deeply evolved in us where we want that back.
3:03:41
And I wonder where it's all headed as we turn the volume down more and more. What advice would you give to young folks today, in high school and college, on how to be successful in their career, how to be successful in their
3:03:52
life? Yeah, so the tools that are available today are just... I sometimes bore kids by describing what it was like to go look up a book, to try to discover a fact, in the old days, the 1970s and 1980s: go to the library, the card catalog, the whole thing. You go through all that work, and then the book is checked out, and you have to wait
3:04:11
two weeks. To be in a world where not only can you get the answer to any question, but you've got the assistant that will help you do anything, help you learn anything: your ability both to learn and to produce is, I don't know, a million-fold beyond what it used to be. I have a blog post I've been wanting to write, which I call "Where are the hyper-productive people?" With these tools, there should be authors that are writing hundreds of thousands of out-
3:04:41
standing books.
3:04:42
Well, with authors there's a consumption question, too. Yeah, well, maybe not, maybe not, you're right. But the tools are
3:04:49
much more powerful. Or take musicians, right? Why aren't musicians producing a thousand times the number of songs? The tools are spectacular.
3:05:00
So what's the explanation, and, by way of advice, what is it? Is motivation starting to be turned down a little bit, or what?
3:05:09
I think it might be distraction.
3:05:10
It's so easy to just sit and consume that I think people get distracted from production. But if you wanted to, as a young person, if you wanted to really stand out, you could get on a hyper-productivity curve very early on. There's a great story in Roman history about Pliny the Elder, who was this legendary statesman who died in the Vesuvius eruption trying to rescue his friends. He was famous both for being a savant, basically a polymath, and for being an author. He wrote apparently hundreds of books, most of which have been lost:
3:05:41
all these encyclopedias. He literally would be reading and writing all day long, no matter what else was going on. He would travel with four slaves: two of them were responsible for reading to him, and two of them were responsible for taking dictation. So he'd be going across the country, and literally he would be writing books all the time. And apparently they were spectacular; only a few survived, but apparently they were amazing.
3:06:02
There's a lot of value in being somebody who finds focus in this
3:06:04
life. Yeah. And there are examples: there's this judge, what was his name, Posner. Posner,
3:06:10
who wrote like 40 books and was also a great federal judge. And there's our friend Balaji, I think he's like this; he's one of these people whose output is just prodigious. So with these tools, why not? And I kind of think we're at this interesting freeze-frame moment where these tools are now in everybody's hands and everybody's just kind of staring at them, trying to figure out what to do. Yeah, we have the new tools, we have discovered fire, and we're trying to figure out how to use it to cook. Yeah.
3:06:36
You told Tim Ferriss that the perfect day is caffeine for ten hours and alcohol for four
3:06:41
hours. You didn't think I'd be mentioning this, did you? "It balances everything out perfectly," as you said. So let me ask: what's the secret to balance, and maybe to happiness, in
3:06:53
life? I don't believe in balance, so I'm the wrong person to
3:06:57
ask. Can you elaborate on why you don't believe in balance? Maybe it's just me, but look, I think people are wired differently, so I think it's hard to generalize this kind of thing. But I am much happier and more satisfied when I'm fully committed to something. So
3:07:10
I'm very much in favor of imbalance, and that applies to work, to life, to everything. Now, I happen to have whatever twisted personality traits lead that in non-destructive directions, including the fact that I no longer do the ten-four plan: I stopped drinking, so I do the caffeine but not the alcohol. So whatever maladaptation I have is inclining me toward productive things, not unproductive things. You're one of the wealthiest people in the world. What's the relationship between money
3:07:41
and happiness? Oh, money and
3:07:44
happiness. So I don't think happiness is the thing to strive for; I think satisfaction is the thing.
3:07:53
That just sounds like happiness turned down a
3:07:56
bit. No, deeper. So happiness is, you know, a walk in the woods at sunset, an ice cream cone, a kiss. The first ice cream cone is great; the thousandth ice cream cone, not so much. At some point, the buzz that you get
3:08:10
wears off. That's the distinction between happiness and satisfaction: satisfaction is a deeper thing, which is having found a purpose and fulfilling it, being useful. So just something that permeates all your days, just a general contentment of being useful. That I'm fully applying my faculties, that I'm fully delivering on the gifts I've been given, that I'm net making the world better, that I'm contributing to the people around me, and that I can look back
3:08:41
and say, wow, that was hard, but it was worth it. That generally seems to lead people to a better state than pursuit of pleasure, pursuit of quote-unquote
3:08:48
happiness. Does money have anything to do with
3:08:50
that? I think the founding fathers in the U.S. threw us off-kilter when they used the phrase "pursuit of happiness." I think they should have said "pursuit of satisfaction." That's good; we might live in a better world today. You know, they could have elaborated on a lot of things; they could have tweaked the Second Amendment. I think they were smarter than we
3:09:06
realize. They said, you know what, we're going to make it ambiguous and let these
3:09:10
humans figure out the rest, these tribal, cult-like humans figure out the rest. But money empowers that? So I think, I mean, look, I don't think I'm even a great example; I think Elon would be the great example of this. Look, he's the guy who, every day of his life from the day he started making money at all, just plows it into the next thing. And so I think money is definitely an enabler for satisfaction, whereas money applied to happiness leads people down very dark paths,
3:09:40
very destructive avenues. Money applied to satisfaction, I think, can be a real tool. I always liked Elon as a case study for that behavior, but the other thing that's always really made me think is that Larry Page was asked one time what his approach to philanthropy was, and he said, my philanthropic plan is just to give all the money to Elon. Right. Well, let me actually ask you about Elon.
3:10:05
You've interacted with quite a lot of successful engineers and business people. What do you think
3:10:10
is special about Elon? We talked about Steve Jobs. What do you think is special about him as a leader and
3:10:17
innovator? Yeah, so the core of it is he's back to the future. He is doing the most leading-edge things in the world, but with a really deeply old-school approach. And so to find comparisons to Elon, you need to go to Henry Ford and Thomas Watson and Howard Hughes and Andrew Carnegie, right? Leland Stanford, John D. Rockefeller. You need to go to
3:10:40
what we would have called the bourgeois capitalists, the hardcore business owner-operators who basically built industrialized society. Vanderbilt. And it's a level of hands-on commitment and depth in the business, coupled with an absolute priority towards truth and towards getting the science and technology down to first principles,
3:11:10
that is just unbelievably absolute. His ideal is that he's only ever talking to engineers. Like, he does not tolerate bullshit; he has the least tolerance for bullshit of anybody I've ever met. He wants ground truth on every single topic, and he runs his businesses directly, day to day, devoted to getting to ground truth on every single topic.
3:11:30
So you think it was a good decision for him to buy Twitter?
3:11:35
I have developed a view in life to not second-guess Elon Musk. I know this is gonna sound
3:11:41
crazy and unfounded. Well, I mean, he's got quite a track record. I mean, look, he's done a lot of things that seemed crazy: starting a new car company in the United States of America. The last time somebody really tried to do that was the 1950s, and it was called Tucker Automotive, and it was such a disaster they made a movie about what a disaster it was. And then rockets, like, who does that? There's obviously no way to start a rocket company; those days are over. And then to do both at the same time.
3:12:10
So after he pulled those two off, it's like, okay, fine. Whatever opinions I had about it, it's just like, okay, clearly they're not relevant. At a certain point, you just bet on the person,
3:12:22
and in general, I wish more people would lean toward celebrating and supporting versus deriding and
3:12:28
destroying. Oh yeah. I mean, look, he draws resentment; he is a magnet for resentment. Like, his critics are the most miserable, resentful people in the world. It's almost a perfect match
3:12:41
of, like, the most idealized technologist of the century coupled with critics who are just as bitter as can be. I mean, it's sort of very darkly comic to
3:12:52
watch. Well, he fuels the fire of that by being an asshole on Twitter at times, which is fascinating to watch: the drama of human civilization, given our cult roots, just fully on
3:13:06
fire. He's running a
3:13:07
cult. You could say that very succinctly.
3:13:10
So now that our cults have gone and we search for meaning, what do you think is the meaning of this whole thing? What's the meaning of life,
3:13:17
Marc Andreessen? I don't know the answer to that. The closest I get to it is what I said about satisfaction. So it's basically like, okay, we were given what we have; we should basically do our best. What's the role of love in that mix? I mean, what's the point of life without love?
3:13:36
So love is a big part of that
3:13:38
satisfaction. Yeah. Look, taking care of people
3:13:40
is a wonderful thing. You know, there are pathological forms of taking care of people, but there's also a very fundamental kind of aspect of taking care of people. Like, for example, I happen to be somebody who believes that capitalism and taking care of people are actually the same thing. Somebody once said that capitalism is how you take care of people you don't know, right? Right. And so, yeah, I think it's deeply woven into the whole thing, you know, the long conversation we had about that. But yeah,
3:14:07
yeah. Creating products that are used by millions of people and bring them joy in
3:14:10
small or big ways, and then capitalism kind of enables that, encourages that.
3:14:16
David Friedman says there's only three ways to get somebody to do something for somebody else: love, money, and force.
3:14:26
Love and money are better. Yeah, that's a good ordering. I think we should bet on those.
3:14:32
Try love first. If that doesn't work, then money. And force? Well, don't even try that one. Marc, you're an incredible person. I've been a huge fan, and I'm glad we finally got a chance to talk. I'm a fan of everything you do, including on Twitter. It's a huge honor to meet you, to talk with you. Thanks again for doing this. Awesome. Thank you, Lex.
3:14:51
Thanks for listening to this conversation with Marc Andreessen. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Marc Andreessen himself: "The world is a very malleable place. If you know what you want, and you go for it with maximum energy and drive and passion, the world will often reconfigure itself around you much more quickly and easily than you would think." Thank you for listening and hope to see you next time.