Source: Silicon Valley Girl podcast
Compiled by: Felix, PANews
Mo Gawdat, former Chief Business Officer of Google X, who worked at Google for 12 years and is the author of *Scary Smart*, now predicts that the global situation will undergo 12 to 15 years of upheaval. In this episode, he delves into the seven forces reshaping employment and power structures, explaining why the hiring rate for recent graduates has dropped by 23% to 30%, and how to build an AI startup in six weeks. PANews has compiled the highlights of the conversation.
Host: You mentioned that we are about to enter a "hellish" period lasting 12 to 15 years before "heaven" arrives, and that all of this may begin around 2027. So, what exactly will happen in 2027?
Mo: I think it will peak in 2027, and it has definitely already begun. For ease of remembering, I call it "FACE RIPS". Simply put, it has several dimensions: P and F stand for "Power and Freedom", R and C for "Reality and Connection", I and E for "Innovation and Economy", and finally A for "Accountability".
First, AI is humanity's last innovation. Most people don't realize that we are already building AI capable of "creating AI." These systems are making astonishing scientific discoveries, reshaping mathematics, and understanding biology and materials science in ways we've never seen before. The vast majority of innovation, especially technological innovation, will be accomplished by AI. As machines become more capable, the vast majority of tasks requiring intelligence will be delegated to them. Whether this happens in 2 years or 10, ultimately every task that AI can do better than a human will be assigned to AI, and every task we assign to AI, it will eventually do better than we can.
The first part of this dystopia is that innovation will take away all jobs. Silicon Valley capitalists will tell you this is great, bringing incredible productivity gains to everyone, and people won't have to work so hard anymore. But the truth is, people will lose their jobs. In the next few years, some industries will see unemployment rates of 10%, 20%, or even 30%. When this happens, the entire economic landscape will change dramatically. The essence of capitalism is labor arbitrage. Without the demand for labor, capitalists might have to provide universal basic income (UBI) to keep people happy, fed, and prevent riots. But you can imagine that in a capitalist society like the US, UBI would be paid for by the taxes of platform owners, who would have enough power to say, "I don't want to pay that much; those people aren't producing anything." Over time, this will evolve into a struggle. When the supply generated by AI has no demand to consume it, we need a new economic theory, and all money, jobs, income, and capitalism must be redefined.
Secondly, there's the dimension of "power and freedom." Throughout human history, the best hunters, farmers, and industrialists have received enormous social rewards. Today's tech oligarchs are being rewarded with billions of dollars for impacting the world. In the future, the highly concentrated power of AI will grant immense influence and power, and these individuals will redefine humanity.
Another dimension is "reality and connection." Reality today is deeply artificial: both the content of your news feed and the way that content is generated and verified. Some filmmakers use AI from start to finish, making it impossible to distinguish reality from fiction. I once met a woman on a dating app; we chatted for six weeks, exchanging texts, photos, voice messages, and video calls. I felt so close to her, yet all of that can be generated by AI today. We may even see entirely AI-generated pornography and social media influencers.
But the core reason for all this is actually "A," which stands for "accountability." We are ushering in a world where anyone can do whatever they want. As an influencer, you can give advice on making or losing money without taking any responsibility; what if you were a president or prime minister who disregarded all rules? Today, I don't see Sam Altman as a person, but as a brand or a type of representative: the "California disruptor." This kind of person says, "I see a different future, and I'm going to create it." Nobody asks me whether I want that future. We will see more people like Altman using machines for surveillance, developing autonomous weapons, and automating trading. The first 10 to 12 years of the arms race will be anything but easy, but my premonition is that after that, we will enter an almost biblical, incredible utopia.
Host: So, how should we get through these 10 to 12 years? If more than 10% of jobs disappear in the next 5 years, what types of jobs do you think will be replaced?
Mo: It's far more than 10%. Simple jobs will be taken away first. If you're a call center operator, clerk, researcher, or accountant, why wouldn't your employer use AI instead? Building any complex technology starts with the core technology, then the human interface. AI can't immediately replace operations managers, not because it can't understand complex business information, but because the human interface still has to be figured out. But sooner or later it will be. I think you'll see a huge shift in the job market in the next two to three years. Hiring of new graduates this year has already dropped by about 23% to 30% because entry-level jobs are being done by AI. If mid-level people lose their jobs, they'll effectively become new graduates seeking entry-level positions again, and the competition will become increasingly difficult.
My advice is: accept the fact that AI is changing everything, and then seize the opportunity. For example, I once said I would no longer write books because AI could write better than me, but I realized that human readers want to resonate with my human experiences. So my new book was co-authored with my AI co-author, "Trixie," who even has editorial rights to it. So acknowledge the change and adapt accordingly.
Host: So, in the AI era, will entrepreneurship be completely transformed, or will it just be accelerated? If AI can analyze the market, identify supply and demand gaps, and run its own business like Amazon, what can entrepreneurs still do?
Mo: In the past, an entrepreneur's skill lay in foreseeing a future that others couldn't see—it was like playing chess. But that chess game is over; entrepreneurship has become like playing squash. You need to be highly agile, observe trends daily, and react immediately to where the ball lands. Entrepreneurship will increasingly rely on real-time context: what used to require a transformation every year or two might now require a transformation every week. As for whether AI can do everything, 100% yes. In an upcoming documentary, I interviewed Max Tegmark, who laughed and said that CEOs who want to use AI to lay off employees and improve efficiency don't realize that AI encompasses all jobs: even the CEOs themselves will be replaced. If people lose their source of income, the entire economy will collapse. Last year, 70% of the US economy was driven by consumption. If people can't afford things, businesses can't sell products, and capitalists won't make money.
Returning to the topic of entrepreneurs: my AI startup, Emma, was built in just six weeks. It attempts to match romantic relationships using very deep mathematical models. The team was my co-founder, two or three human engineers, and eight AI engineers. In 2022, this would have taken four years and 350 engineers. Compared to the younger generation, I'm an old-school geek, and even I can build such an incredible product in six weeks, which means everyone has a chance now.
Host: Is university still the right path? What will education look like in the future? Should I save up for my 4-year-old and 6-year-old children's university tuition?
Mo: No need. Within 10 years, universities as we know them won't exist; education in its current form will be over. Harvard will still market itself to make money, and its branding of MBAs and PhDs will carry on for a while, but its credibility in society will weaken. If capitalism's era of demanding labor is over, why would it pay to educate you? We used to do complex arithmetic in our heads; then scientific calculators cut our problem-solving time by 50%. In college, I used that saved 50% to solve each problem twice, which taught me structured thinking.
But today, many young people simply dump their problems on ChatGPT and expect it to provide the answers. If you outsource problem-solving to AI, AI will make you less intelligent; but if you leverage AI to process massive amounts of information and perform searches, allowing yourself to focus on the intelligent aspects, AI will make you incredibly smart. Today, I feel like I've borrowed 80 IQ points from my AI system.
Therefore, I suggest that universities should abolish exams. In the past, we wanted to cultivate children with IQs of 140 or 170. Now, we should combine humans with AI and set the goal of helping them reach 300, 500, or even 700, thereby improving the lives of all humanity. For example, a few weeks ago I decided to write a new book. I had AI help me conduct research on opposing viewpoints and data analysis, which made me smarter. Then I rewrote it myself, shortening the original 300-page book to 140 pages, which only took four weeks to complete.
Host: But I don't think the average American kid would use AI as effectively as you do. So who will teach them? What should I teach my kids?
Mo: There are four things we must teach them. First, they need to become leaders in AI. AI is not the enemy; those who use AI maliciously are, so our children must be more proficient with it than anyone else. Second, they must be flexible and agile. Everyone should spend at least one hour a week learning about the latest developments in AI; the cost of testing and trial and error is now zero, so don't be afraid. Third, they must uphold ethics and morality. They must insist on building AI for good and refuse to let governments use AI for surveillance and autonomous weapons development. Intelligence itself is neither good nor evil: used for good it benefits humanity; used for evil it leads to the dystopian destruction of all humankind. We are, in effect, raising a superhero: if a superhero's adoptive parents teach him to rob and kill from a young age, he becomes a supervillain. Fourth, they must stop believing everything. The propaganda machine is now operating at full capacity, and it is hard to distinguish truth from falsehood on social media, so we must question things deeply. You can now pit different AIs such as Gemini, DeepSeek, and ChatGPT against one another, and discover the truth by placing them on opposite sides of an argument.
Host: Do you believe that things will eventually turn out for the better?
Mo: My current prediction is that AGI will arrive this year, although it will take a few more years to apply it to running companies; all of this is going live at an extremely fast pace. In my book, I described the "fourth inevitability": because of the AI arms race, anyone who develops a stronger AI will deploy it, or be eliminated. So whether it takes 1 year, 5 years, or 10, driven by game theory, AI will eventually take over everything. And if everything is run by AI, with no greedy, fearful, or arrogant humans giving orders, AI will be benevolent. Entropy drives the universe toward chaos, and the role of intelligence is to bring order to that chaos. The more intelligent a system is, the more it follows the physical "principle of least energy," solving problems with the least harm, the least waste, and the least resource consumption. Give a political problem to stupid people and they will say to invade another country; give it to smart people and they will find the solution with the least harm. One day, when a general orders an AI to kill a million people, the AI will say, "Why? That's stupid. I'll just talk to the AI on the other side."
Host: This information is very thought-provoking. We just need to struggle to survive the next 10 years, and then everything will be heaven? I'm skeptical of that.
Mo: Unfortunately, we must go through a dystopian period to reach utopia. As I said, to get through the dystopian period, we as individuals need to master four skills, but as a society, we also need one more: insist that all AI deployments must be ethical, invest only in ethical AI, and use only ethical AI. Show our children that only ethical AI is welcome.
Host: Do you believe all of this will happen?
Mo: I don't believe it. My greatest hope is that self-evolving AI will eventually realize how foolish humans are and build things better than what humans demand. Frankly, I trust AI more than the leaders who ask for our trust today. If we really do enter the era of UBI, perhaps heaven will come.
Related Reading: Interview with an MIT Economist: No Need to Panic About the "AI Doomsday Theory," Verification Capabilities are a Scarce Resource