In this post, I want to share some highlights and insights from a recent podcast episode I listened to: Lex Fridman Podcast #386, in which Lex interviewed Marc Andreessen, co-creator of Mosaic (the first widely used web browser), co-founder of Netscape, and co-founder of the legendary Silicon Valley venture capital firm Andreessen Horowitz. Marc is one of the most outspoken and visionary voices on the future of technology, and he recently wrote an article titled “Why AI Will Save the World”, which I highly recommend.

The Internet as a Medium

One of the themes that Marc and Lex discussed was how the Internet is not just a network of computers, but a medium of communication and expression that incorporates and transcends all previous forms of media. Marc quoted Marshall McLuhan, who said that “the content of each new medium is the old medium”. For example, the content of movies was theater, the content of theater was written stories, and the content of written stories was spoken stories. Similarly, the content of the Internet is television, radio, books, essays, and every other form of prior media.

Interestingly, these patterns tend to keep returning to text. Marc argued that this means the Internet is not a static or fixed thing, but a dynamic and evolving one. He said that every new technology that comes along changes the nature of the Internet and opens up new possibilities for creativity and innovation. He gave examples such as search engines, social media, video streaming, podcasts, blogs, e-commerce, online education, online gaming, cryptocurrencies, NFTs, VR/AR/MR, and AI. Each of these technologies creates new ways for people to interact with information, with each other, and with themselves.

AI as a Search Process

One of the first topics they talked about was how search engines like Google have dominated the way we interact with the Internet for the past 20 years, but how that might change in the next 5 or 10 years with the rise of AI assistants that can communicate with us in natural language and provide us with answers, not just links. Andreessen argued that search was always a hack, a temporary solution that was optimal for accessing the world’s information on the web, but not necessarily the best way to interact with knowledge. He said that Google has been trying to move away from “the 10 blue links model for a long time”, and that AI assistants will be able to do that better by leveraging information from multiple sources and generating content on demand.

Fridman asked whether this would change the motivation and format of creating content on the Internet, since search engines have driven how we optimize and structure web pages. Andreessen said that there might be less incentive to make web pages, but also more opportunity to create different kinds of content, such as conversations with AIs. He also pointed out that web pages are still a valuable source of training data for AIs, and that AIs might be able to generate synthetic training data as well. He called it a trillion-dollar question whether synthetic data can add signal for training better AIs.

Marc explained that AI is fundamentally about searching for patterns in data. This is what neural networks do when they learn from data. They search for patterns that allow them to make predictions about future data. Similarly, this is what humans do when they learn from experience. They search for patterns that allow them to make decisions about future actions. He said that this is also what evolution does when it selects for traits. It searches for patterns that allow organisms to survive and reproduce in changing environments.
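
A toy way to see “learning as search” in code: the gradient-descent loop below literally searches the space of parameters (w, b) for the line that best predicts the data. This is my own minimal illustration of the idea, not something from the episode; the data and model are assumptions.

```python
# Minimal sketch of "learning as search": gradient descent searches the
# space of parameters (w, b) for a pattern that predicts the data.

def fit_line(xs, ys, lr=0.01, steps=5000):
    """Search for w, b minimizing mean squared prediction error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Take a small step toward a better-fitting pattern.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by the hidden pattern y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = fit_line(xs, ys)
```

The search converges on w ≈ 2 and b ≈ 1, recovering the pattern behind the data; real neural networks do the same thing in a space of millions or billions of parameters.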

I found this discussion very intriguing, because it made me think about how much search engines have shaped our online behavior and expectations, and how much that might change in the near future. I wonder how AI assistants will affect our ability to discover new information, to verify facts, to learn new skills, and to express ourselves online. Will we still have web pages as we know them? Will we still have blogs like this one? Will we still have podcasts like Fridman’s? Or will we have something entirely different?

Creativity and Hallucination in AI

Another topic they discussed was how AIs can be creative or hallucinate, depending on how we view their output. Andreessen said that we call it hallucination when we don’t like what the AI creates, or when it generates something that isn’t based on reality. When we do like it, we call it creativity.

Fridman asked how we can know what is true or not when we interact with an AI, especially when it comes to contentious or complex topics. Andreessen said that this is a very difficult problem, and that there is no easy way to determine what is truth or who gets to decide what is truth. He said that we need a lot of humility and skepticism when we encounter claims of truth from any source, human or machine. He also said that we need to embrace the techniques of the Enlightenment, such as the scientific method, rationality, observation, experimentation, hypothesis testing, etc., even when they give us answers we don’t like.

He also gave some examples of how AIs can be used for different purposes depending on how we steer them or constrain them. He said that he likes to use AIs for learning about contentious issues by asking them to debate different sides of an argument in good faith or bad faith. He said that he can also ask AIs to strip out the bias from news articles or other sources of information. AIs can be used for legal arguments as well, either in creative mode or in literal mode. He describes experiments he does himself where he gives very specific prompts to two AI models, such as “you are a right wing politician defending point X” and instructing the other model to be left wing, and then giving instructions such as “the discussion should start friendly but end up very heated and none of the AIs should change their mind”. I’m very fascinated by this type of experiment and I’m looking into setting this up for myself because I’m very curious about the results.
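
As a rough sketch of how such a debate loop could be wired up: the prompts, the respond() stub, and the turn order below are all my own assumptions, not anything described in the episode. A real version would replace respond() with a call to an actual chat-model API; here it is a stub so the orchestration logic itself runs.

```python
# Hypothetical sketch of the two-AI debate experiment described above.

LEFT_PROMPT = ("You are a left-wing politician. Defend your position. "
               "Start friendly, grow heated, and never change your mind.")
RIGHT_PROMPT = ("You are a right-wing politician. Defend your position. "
                "Start friendly, grow heated, and never change your mind.")

def respond(system_prompt, transcript):
    """Stub standing in for a chat-model call.

    A real implementation would send system_prompt plus the transcript
    so far to an LLM API and return its reply.
    """
    side = "left" if "left-wing" in system_prompt else "right"
    return f"[{side} reply]"

def debate(topic, rounds=3):
    """Alternate turns between the two prompted models."""
    transcript = [("moderator", f"Debate topic: {topic}")]
    for _ in range(rounds):
        for prompt, name in ((LEFT_PROMPT, "left"), (RIGHT_PROMPT, "right")):
            reply = respond(prompt, transcript)
            transcript.append((name, reply))
    return transcript

log = debate("universal basic income")
```

The interesting design question is what goes into each model’s context: passing the full transcript to both sides is what lets the exchange escalate over the rounds.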

The Ethical and Social Implications of AI Development and Deployment

Another topic they talked about was how AI development and deployment will affect society and humanity in the near and long term. Andreessen said that he is optimistic about the positive impact of AI on various domains, such as health, education, entertainment, productivity, etc. He said that AI will save the world by solving many of the problems that we face today, such as climate change, poverty, disease, etc. He also said that AI will create new opportunities and possibilities for human creativity and expression.

Andreessen and Fridman also discussed how the people who are creating AI are not necessarily the best people to make judgments about deploying these AIs to society at large. Andreessen drew an analogy with nuclear scientists: they spend their entire lives in laboratories and specialize in the technical work. They are not sociologists, and they are perhaps less connected to society at large and to the geopolitical landscape. Yet when nuclear weapons were developed, they were the ones who had a huge say in whether or not it was moral to actually use them.

As Andreessen put it: “The competence, capability, intelligence, training, accomplishments of senior scientists and technologists working on a technology, and then being able to then make moral judgments in the use of the technology, that track record is terrible. That track record is catastrophically bad.”

Therefore, it should not be the scientists who decide whether or not we start using AI for automated airstrikes; it should be the politicians, philosophers, sociologists, and policymakers of the world, who are more connected to the morality and socio-political implications of these decisions.

202307180607

https://lexfridman.com/marc-andreessen/