Pichai's 10-year tenure as Google CEO: Lows, highs, and regrets

  • Google CEO Sundar Pichai discussed Google's AI journey in an interview marking his tenth anniversary, addressing the history of the Transformer architecture, which was developed at Google but popularized by OpenAI.
  • He believes the AI market is not zero-sum; Google's vertical integration is a key advantage.
  • Prediction that search will evolve into an 'agent manager' where AI agents execute tasks based on user commands.
  • Capital expenditure of $175-185 billion planned for 2026, amid supply-chain bottlenecks such as memory-chip and electricity shortages.
  • Pichai personally reviews compute allocation weekly, emphasizing its critical importance.
  • Long-term projects include space data centers, quantum computing, and robotics.
  • Forecast that by 2027, business predictions at Google will be fully automated by AI.

 John Collison, Elad Gil, and Pichai

Author: Su Yang, Tencent Technology

Edited by Xu Qingyang

Recently, Google CEO Sundar Pichai, on the occasion of his tenth anniversary at the helm of the company, was interviewed by John Collison, co-founder of payment giant Stripe, and Elad Gil, a technology angel investor.

In the interview, Pichai reviewed Google's journey from trailing to leading in the AI wave. He directly addressed a chapter of history that Google employees find "unsatisfying": the Transformer architecture, which originated at Google, ultimately became the foundation of OpenAI's ChatGPT and the cornerstone of its disruptive impact on the search industry.

He admitted that there is "some misunderstanding" about this: the Transformer was created from the beginning to solve translation-quality problems, not as pure theoretical research. It wasn't released promptly partly because Google held "higher standards" for search quality, and the early internal versions were "too toxic" to ship.

Regarding the current AI race, Pichai believes the market is far from a zero-sum game, stating that "the value growth curve is extremely steep." He also revealed that he spends at least an hour each week personally approving computing-power allocations, calling it "the most important thing right now."

According to Pichai, Google's full-stack vertical integration is its core advantage, encompassing everything from the seventh-generation TPU to models and applications, and he revealed that capital expenditures will reach $175 billion to $185 billion in 2026.

Regarding resource bottlenecks, he believes wafer capacity is the "fundamental constraint," warning that 2026 will be a "year of supply contraction," and that the United States must learn to "build physical infrastructure at 10 times the speed."

He also confirmed that Google is exploring space data centers, "which is like Waymo in 2010," seemingly a distant prospect, but already starting with a small team and a small budget.

Pichai firmly believes that the search function will not die, but will evolve into an "AI agent manager." You only need to give commands, and AI agents can complete the tasks for you. He even boldly predicts that by 2027, business forecasting within Google will be completely automated by AI, without any human intervention.

The following is an abridged version of Pichai's interview:

01. "We're not slow, we just have high entry barriers."

Q: People always bring up that history: the Transformer was invented by Google, but it ended up becoming the cornerstone of ChatGPT. How do you look back on that now?

Pichai: This is actually a bit of a misunderstanding. The Transformer didn't just appear out of thin air. At the time, we had a very real need: to improve translation. The same goes for TPUs. Speech recognition technology already existed, but the problem was that we needed to serve two billion users, and the existing chips simply couldn't handle it. We had to solve the inference efficiency problem first.

Q: So Transformer was designed with product development in mind from the beginning?

Pichai: Yes, our research team was focused on solving real-world problems from the very beginning. As soon as the Transformer was released, we immediately applied it to search. Later, we developed BERT (Bidirectional Encoder Representations from Transformers) and MUM (Multitask Unified Model), which produced a huge leap in search quality during that period. In fact, we also developed a similar product internally, LaMDA (Language Model for Dialogue Applications), but we weren't the first to launch it to the market.

Q: In other words, you did the research and saw the results, but you didn't use it to solve everything.

Pichai: That's not all. We actually researched the product form of ChatGPT internally as well; it was LaMDA. Do you remember? Back then, one engineer thought LaMDA had developed consciousness (and was later suspended and ultimately fired over it). That was actually an early prototype of the ChatGPT form factor. We had an internal product version for a long time, but it was released about nine months later than ChatGPT.

In fact, we launched AI Test Kitchen back at the 2022 I/O conference, which ran LaMDA. However, we imposed many restrictions because that version had not undergone RLHF (Reinforcement Learning from Human Feedback), and its output was considered quite "toxic," so we dared not release it more broadly.

Furthermore, Google has always had extremely high standards for search quality, and the barriers to product release are even higher. Even when OpenAI released ChatGPT, their collaboration with Microsoft had only recently been finalized. Therefore, in retrospect, ChatGPT's success was not something that was "natural" or "guaranteed."

I think OpenAI was very lucky: they saw the opportunity in the programming scenario first through GitHub. We may have missed that signal at the time.

In programming, the advancements in model capabilities are far more significant than in pure language scenarios. From GPT-2 to GPT-3, and then to GPT-4, each leap in coding with these tools represents a more pronounced improvement than in casual conversation. These factors combined to create the current situation. Therefore, I believe this has little to do with the "research failing to translate into products" issue, but rather a confluence of other factors.

Q: I remember someone saying that ChatGPT was launched very quietly, during the week of Thanksgiving, and nobody expected it to turn out the way it did. It was just an interesting experiment.

Pichai: That's the norm in the consumer internet; there are always surprises. When we were at Google, we did Google Video Search, and then YouTube came out. It's the same with Facebook; Instagram suddenly appeared. Nobody views these things with a dramatic sense of "I'm about to be disrupted." Facebook's approach was to simply buy Instagram.

What I mean is, there are always three to five people huddled together working on prototypes, throwing out millions of ideas every day. I'm not belittling anyone, but this kind of thing is bound to happen. You can't just randomly build the next iPhone in your garage, but that's how the consumer internet works. The key is that you have to realize this and truly internalize it into the organization's DNA.

02. Search will not die.

Q: Google has always been known for its speed. Early search engines displayed response times on the results page, and Gmail and Chrome were significantly faster than their competitors. Now, Gemini runs on TPUs and remains incredibly fast. Is this a deliberate product strategy, or are there more complex reasons?

Pichai: There are actually two types of speed. One is response speed, which is how fast the user perceives it; the other is iteration speed, which is how quickly we release new features and improve the product. Both are important.

You just asked about latency. The challenge lies in constantly adding new features while maintaining rapid response. The search team runs a millisecond-level latency budget. For example, if you save 3 milliseconds, 1.5 of them must be returned to users as a faster response, and only the remaining 1.5 milliseconds can be spent on new features.
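The budgeting rule Pichai describes can be sketched as a toy calculation. Note that the 50/50 split is inferred from the interview's 3 ms example and may not be the actual internal policy; the function name and parameters are invented for illustration:

```python
def split_latency_saving(saved_ms: float, user_share: float = 0.5):
    """Split a latency saving between users and a feature budget.

    Per the interview's example: part of any saving goes back to the
    user as a faster response; only the remainder may be "spent" on
    new features. The 50/50 default is inferred from the stated
    3 ms -> 1.5 ms / 1.5 ms example, not a confirmed policy.
    """
    returned_to_users = saved_ms * user_share
    feature_budget = saved_ms - returned_to_users
    return returned_to_users, feature_budget

# The interview's example: saving 3 ms frees only 1.5 ms for features.
print(split_latency_saving(3.0))  # (1.5, 1.5)
```

The point of such a rule is that headline latency keeps improving even as features accumulate, because feature teams can only spend half of whatever they save.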

Q: Humans can only perceive a delay of a few hundred milliseconds, right?

Pichai: Indeed. But over the past five years, while adding a bunch of features, we've also reduced search latency by 30%. The same goes for Gemini; the Flash model has 90% of the power of the Pro model, but it's much faster and much cheaper. Vertical integration has played a crucial role in this.

Q: Do you think search will still be around in 10 years? Some people say that chat will be the new interface, while others say that in the future everyone will have their own AI agent, and you can directly command it to perform operations without having to search manually.

Pichai: With each technological revolution, search can do more. User expectations change, and you have to change accordingly. In the future, many "search" actions will become agent-based—you give a task, and an AI agent completes it for you. Search will become an AI agent manager. I'm currently using Antigravity, which already has a bunch of AI agents doing the work.

Q: Will the format of entering a keyword and returning a bunch of links still exist?

Pichai: In the current search AI paradigm, some people are already doing in-depth research on it, which is somewhat different from what you're describing, but people are using it anyway. In the future, there will be more and more long-running tasks, and they can be asynchronous.

Q: You just said that search will become an intelligent agent manager. But ten years from now, will that search box still be there, but people won't care about it anymore?

Pichai: Device form factors will change, and so will the input/output methods. But honestly, thinking about ten years from now is enough to paralyze us. We're lucky to be in a time when just looking at the next year is exciting enough. The curve is so steep; the model will be completely different a year from now. Just following the curve is exciting enough in itself.

Moreover, many people don't realize that this is an era of expansion, not a zero-sum game. Look at YouTube, TikTok, and Instagram—they've all grown, and we're still doing just fine. The more you feel that others' success means your demise, the more it truly becomes a zero-sum game. But as long as you're innovating yourself, it won't.

We're currently working on both search and Gemini, which overlap but will gradually diverge. I think having both is beneficial.

Q: In the spring and summer of 2025, the market was extremely pessimistic about Google's future, with everyone saying that search was finished and your stock price would drop to around $150. Looking back now, that was clearly a misunderstanding. Google performed exceptionally well across the entire technology stack—applications, models, TPUs, Waymo, YouTube, and all those cool bets. What do you think investors misjudged back then?

Pichai: At the time, everyone's attention was focused on the "reversal," the so-called "OpenAI comeback." But for me, that moment made me feel that Google was born for this moment. This vertical integration was neither accidental nor arbitrary. In 2016, we launched the TPU at the I/O conference and promised to build AI data centers, which has now reached its seventh generation. That year, the company also defined its "AI First" direction, and this is more than just a slogan.

We are indeed a step behind on cutting-edge large models, but internally we have all the necessary capabilities; the rest is execution. What excites me is that, from a full-stack perspective, we have research teams, infrastructure teams, and various business platforms, and AI can simultaneously accelerate all of these businesses: search, YouTube, Cloud, Waymo. They're all on the same curve. This is very efficient leverage.

I didn't see it as a zero-sum game at the time. Everything would expand tenfold, and there would be room for others. After Google's rise, didn't Amazon and Facebook also do very well? We always underestimate the potential for growth. So my focus was simple: execute better.

Q: Was there a landmark moment that made people feel, "Google is finally back"? Was it Gemini 3?

Pichai: People really started to notice this trend with Gemini 2.5. Especially its multimodal capabilities, which were at the forefront. This is thanks to the Google DeepMind team. We invested a lot of fixed costs in multimodal capabilities from the beginning; Gemini was designed with this in mind from day one. With Gemini 2.5, the advantages began to show. For example, with Nano Banana, you can see the effect of integrating everything together.

However, this field changes too rapidly. Two or three leading labs are pushing each other forward; this month you think, "Great, we're ahead in this area," but next month you think, "Oh no, we've fallen behind over there." The landscape may be completely different a few months later. That's how fierce the competition at the forefront is.

03. Spending $180 billion a year exploring AGI

Q: Some external researchers feel that Google differs from other leading labs in that Google isn't as "obsessed" with AGI. In other words, Google doesn't seem to believe AGI will be achieved immediately, nor is it rushing headlong into the idea. Do you think this observation is accurate? If so, will it affect your judgment on future directions?

Pichai: Look at our capital expenditures: they've gone from $30 billion to $180 billion. Who would throw money around like that without genuinely believing in this curve?

I think this is largely a semantic issue. We're a large company with products that reach too many people and too many levels, so our way of speaking might be different. But to say that Google doesn't understand AGI doesn't make sense. Many of the founders themselves are AGI enthusiasts: Demis Hassabis, Jeff Dean, Ilya Sutskever, and Dario Amodei all worked at Google back in the day.

I think the reason outsiders perceive a difference in our views might be partly geographical, such as San Francisco's concentration of young companies and research labs. But these are superficial. Fundamentally, there is no difference in our assessment of the technology curve or in our understanding and application of AI.

The real difference lies in whether you've witnessed change firsthand. In our company, there's a group of people who are always at the forefront, personally deploying and testing AI agents, watching them acquire new skills and handle complex tasks step by step. If you look back at their capabilities three months ago, you can truly feel the impact of exponential growth.

Q: I'm curious, when did you feel that your AGI moment was coming?

Pichai: I first had that feeling in 2012. Back then, Jeff Dean demonstrated the earliest version of Google Brain, a neural network that recognized a cat. Later, Larry Page and I went to the DARPA Challenge to see self-driving cars. Demis Hassabis demonstrated an early model that showed what we would call "imagination."

There are many more moments like this. Most recently, the most striking example is the rapid advancement in programming. You give a programming agent a complex task, and you can watch it complete the task entirely within the agent manager without ever opening an IDE (Integrated Development Environment). That feeling could be called an AGI moment.

Q: I was working on a small project the other day, and after it started running, I realized I didn't even know what programming language it used, so I had to ask it. It felt like magic.

Pichai: Exactly. The slope of the curve (the speed of improvement) is what's truly astonishing. Look back three months from now, and you'll see just how much progress has been made.

Q: Speaking of this kind of hands-on experience, I'm curious how you maintain a real connection with the product. Tech products are too abstract; you can't just look at reports and PowerPoint presentations. Besides daily routines like using Gmail, how do you ensure you don't become detached from the user?

Pichai: I use the internal version, specifically allocating time for intensive use. Two weeks ago, I was working out at the gym with Gemini Live on my phone, and I spent the next 30 minutes glued to one topic. Some experiences were great, some were frustrating, but you learn something either way. I force myself to use these products in "super user" mode to stay engaged. X (Twitter) also helps, because sometimes that's where you get the most direct feedback.

In addition, I now go directly to Antigravity (our internal version) and ask the AI, "We released this feature, what do you guys think? Tell me the five worst and five best comments." It pulls them up for you right away. Has my life become easier? Absolutely.

In the past, I had to spend a lot of time trying to understand things, but now AI agents do that for me. Of course, I still need to spend time figuring things out myself; it's a learning process. I'm also trying to adapt to this future.

Q: You just said this isn't a zero-sum game, and the productivity gains are real. But looking back at previous technology cycles—the internet, mobile, and SaaS—it took a long time for their impact to show up in GDP. With AI, we're already seeing data center construction driving GDP growth. Do you think the US economy will grow significantly more due to AI in the next three to five years? By how much?

Pichai: For these returns to be meaningful, they have to be reflected somewhere. I remember someone from Sequoia wrote an article saying that since everyone has invested so much money, the returns have to match.

Of course, that was two and a half years ago. Back then, some said it didn't make sense because the rate of return had to be at a certain level to be considered reasonable. But now, the scale of investment may have increased tenfold, and we need to re-examine these figures. At some point, the numbers have to add up. It's very clear that we are currently facing supply constraints, and we are seeing strong demand for computing power across all application areas.

Q: I have no doubt that this is a huge market. The problem is that many people might be doing the math wrong. For example, they compare token budgets to engineer salaries. I think the software engineering market is bigger than anyone thinks; an increase in supply would actually expand the market tenfold. I'm not questioning the relationship between capital expenditure and returns; I'm just curious, how much growth do you think it can actually achieve?

Pichai: Looking back at the development of the internet, GDP growth figures don't fully reflect the changes we've experienced. Perhaps without the internet, GDP growth would be negative. It's difficult to make accurate predictions; there are natural inhibitory mechanisms at all levels of society.

The most obvious example is the stark contrast between the curve for building computing power and the curve for improving models; the former is much slower. Then you have to consider how to disseminate the technology into society. Waymo is an example. It's safer than human drivers, but you still have to be cautious about how quickly you roll it out—there are limitations at all these levels. The US economy is much larger than it was ten years ago, so even a half-percentage-point increase in growth rate would be a huge contribution. I think it will move in that direction.

04. Supply Chain Alerts: Memory and Electrical Equipment

Q: You mentioned supply constraints, which is indeed a defining characteristic of 2026. You said Google's capital expenditures are around $180 billion?

Pichai: Between $175 billion and $185 billion.

Q: Interestingly, even if Google wanted to spend $400 billion, it couldn't, because there isn't enough memory, enough power, or enough of various other components. Can you talk about these bottlenecks?

Pichai: You can't even find the electrician you need.

Q: What are the bottlenecks?

Pichai: Ultimately, it comes down to wafer capacity, that's the fundamental constraint. Electricity and energy are relatively easier to solve, but licensing and the regulatory environment are a big problem that slows you down.

Q: Texas, Nevada, Montana, and other states have plenty of land, but it's still not enough?

Pichai: We are making tremendous progress, but the United States really needs to learn how to build faster. Look at the speed of China's construction; it's astonishing. We need to change our mindset and think about how to increase the speed of physical construction tenfold. This will be a real constraint. And the resistance will only grow, and it won't be solved by a few people saying, "We need to speed up construction."

Q: There are also issues like data center shutdown orders.

Pichai: Wafer capacity, approvals, and construction speed are all bottlenecks. The government has done a lot, and everyone recognizes the need for improvement. Then there are the key components in the supply chain, memory chips being a prime example. In the short term, everyone is stuck here.

For those of us running companies, no matter how "obsessed" you are with AGI, we all have to face a reality: your judgment can't be 100% accurate; there's always a margin of error. You need to figure out exactly how optimistic you are about future development and how much margin for error you can tolerate, because external factors can change at any time. Everyone is making adjustments based on these uncertainties.

Q: So you think memory is the biggest bottleneck component?

Pichai: Absolutely one of the most crucial at the moment.

Q: You said this is short-term. Will the market stimulate supply by raising prices?

Pichai: Leading memory manufacturers are unlikely to significantly expand production. Short-term constraints will exist, but will gradually ease. Moreover, these constraints will force innovation—we will improve efficiency by 30 times. These things are happening simultaneously.

Q: Won't this reinforce the oligopolistic structure? With models self-improving, writing their own code, and labeling their own data, computing power is like a game of musical chairs. Whoever has more computing power can go further. But if everyone's computing power is distributed proportionally, then it effectively sets an upper limit for everyone. Do you think this statement is correct?

Pichai: That makes some sense. But we just released Gemma 4, a very good open model. The Chinese open models are excellent, but I think Gemma 4 is also a very good open model from outside China. While Gemma 4 sits well below the cutting edge of the Gemini 3 architecture, their release dates aren't that far apart. It's not like building a behemoth such as a SpaceX rocket.

Q: I've always found it amazing: you run a data center for months, and in the end all you get is a flat file, something like a Word document. That's your model. It's incredible!

Pichai: The peculiarity of this situation makes me want to challenge that framework. At least from a logical standpoint, you have a point. But everyone is trying to use the power of capital to break through these limitations, and the incentive is enormous.

Q: But as you just said, there's only so much memory in the world. The supply problems in 2026 and 2027 can't be solved by capital incentives alone. This may be precisely when the models start to diverge more.

Pichai: Yes, but it needs to be considered in conjunction with factors like wafer capacity and approvals. Overall, the restrictions might not be as severe as you might think. You have to consider everything, including capital.

Q: Logically, everyone should be willing to invest more money, but we've hit the real bottleneck of 2026 and 2027. It's like the Strait of Hormuz: you can set oil prices as high as you want, but if you reduce supply by 20 million barrels per day, you have to eliminate demand by 20 million barrels. The same applies to memory chips; ultimately, some people will be left out.

Pichai: Of course, there are other limitations, such as security. But the key point is that these models will soon exceed the limits of almost all existing software—or perhaps they already have, but we're sitting here without even realizing it.

Q: So supply constraints actually force you to optimize and become more efficient.

Pichai: Yes, it forces you to have some necessary dialogue. Take security, for example; we need more coordination, but the coordination we have today is far from enough. There will come a day—perhaps quite suddenly. You can't just expect these problems to disappear on their own.

05. Three "Hidden Gems"

Q: Speaking of which, Google's investment portfolio is indeed impressive. You invested in SpaceX, I remember it was around 10% a long time ago? And Anthropic, also around 10%. Waymo holds a majority stake. Internally, are there TPUs, quantum computing, or other "hidden gems" that people might not know about or underestimate?

Pichai: We've been working on various long-term projects, and when they were first announced, even the slightly peripheral ones seemed a bit absurd. For example, the space data center—we're currently in its very early stages. What you just said about limitations stimulating creativity is exactly the point.

Looking at it from a 20-year perspective, where do you plan to build these data centers? That's a difficult question, but it's what we're thinking about today, just like when we started Waymo in 2010. Quantum computing is one of them, and we're steadily moving forward with it; I'm very excited about it.

Q: In which fields do you think quantum computing will have the biggest impact? People are mainly talking about molecular modeling and cryptography. But some are developing quantum-resistant cryptography (referring to new cryptographic techniques that can withstand quantum computing attacks), and in molecular modeling, deep learning is already very advanced; AlphaFold is an example. Will quantum computing really be that important? If so, where will it have the biggest impact?

Pichai: On an abstract level, I think quantum computers are better suited for simulating nature. Because nature itself follows the laws of quantum mechanics, simulating it with a quantum system would be more direct and efficient. Of course, classical computers with sufficient compression algorithms could theoretically achieve the same result, but I intuitively feel that quantum has the advantage.

For example, we still don't fully understand the "Haber process" in fertilizer production, and there are many complex natural phenomena. My intuition is that quantum computing will ultimately prevail in areas such as simulating weather and simulating reality.

Technological history teaches us a lesson: once you make something usable, people will find all sorts of applications on it that you never even imagined before. I often use this example: adding GPS to a mobile phone led to the success of Uber. Back then, nobody making phones could have foreseen that. Therefore, I believe that once quantum computers are truly built, their applications will far exceed everyone's imagination.

Q: Excuse me for interrupting you, please continue talking about those cutting-edge projects you just mentioned.

Pichai: The Google DeepMind team is deeply involved in robotics. Google actually got involved in robotics quite early, but it was too early. Looking back now, AI was the missing piece of the puzzle back then. The Gemini Robotics model is already top-notch in spatial reasoning. Interestingly, we are now collaborating with companies like Boston Dynamics and Agile to push things forward together.

And then there's Wing, drone delivery. We're scaling up, and soon 40 million Americans will be able to use Wing's services. This isn't something that's many years in the future; it will happen very soon. These long-term projects are built up little by little.

There is also Isomorphic.

Q: Isomorphic is indeed very exciting.

Pichai: Yes, we are focused on using models to improve every aspect of drug discovery. Although there are still procedures to go through, such as Phase III clinical trials, AI's help gives us greater confidence in achieving success.

06. Regret not investing in Waymo sooner

Q: How exactly does Google allocate its capital? Textbooks say that capital allocation is about putting money where the highest return is. Boeing is a typical example: the internal rate of return (IRR) for defense contracts is 16%, while for new passenger aircraft it's 19%—everyone would choose the latter. But Google's projects can't be calculated that way. Investing more money in YouTube means the algorithm will be optimized, user engagement will increase, and revenue will rise. Investing more money in Waymo accelerates expansion, but it's uncertain when it will become profitable on a large scale. Investing in an AI research project might not yield results even in five years. The return curves for these three projects are completely different; how can you compare them?

Pichai: That's a good question. Ironically, we're encountering this problem more often now than ever before, because of TPU allocation. To some extent, even Waymo needs TPUs, making computing power a particularly prominent issue in capital allocation.

By the way, I'm really looking forward to AI helping me with this. Once we've integrated all the data, the model is already capable; the current bottleneck is unlocking the data. I think this will help soon.

Looking back, Google has had a significant advantage: we often make decisions at a very early stage. This is largely due to the company's technological DNA.

For long-term projects, the early stages are actually easier because the initial funding requirements are not high. The real challenge lies in sustained long-term investment and continuous evaluation of the progress of fundamental technologies. Taking quantum computing as an example, how do we determine whether to continue investing? We look at the error rate of logical qubits, when we can reach the threshold for stable large-scale logical qubits, and whether the team can overcome these technical hurdles.

One of the most important lessons I learned is to bet deeply on the technology early on.

In the long run, you're essentially using intuition to judge the option value and potential market size of a project 5 to 10 years from now. You first assume a very aggressive growth curve, and then work backward to determine whether this decision is reasonable.

That's how we invested in TPU; we've been steadily investing. The same goes for Waymo. About two or three years ago, when the world was extremely pessimistic about autonomous driving, we actually increased our investment. While others were retreating, we were betting more.

Q: Going back to what you just said about capital allocation. Google does cut projects; Loon (the stratospheric balloon internet project) was stopped, but Waymo persevered for so long. What did you see at the time? Was it a qualitative or quantitative judgment? How did you decide which project to cut and which to keep?

Pichai: We do have some quantifiable metrics. For example, we look at Waymo's driving system, seeing how its safety and reliability are improving. It's a long-term curve; you set your goals and then continuously track progress. Our team has consistently performed exceptionally well. Progress has been slower at times, but you have to trust that the team can overcome those challenges. The deeper your assessment at the technical level, the more accurate your decisions will be. At least, that's how I do it.

Q: I've heard it said that Waymo initially relied on hand-drawn maps and heuristic rules, which limited the scenarios it could handle. Their real breakthrough came a few years ago when they shifted to end-to-end deep learning, coinciding with the Transformer wave. If Waymo had only started five years ago, would they be in a similar situation now? Or were those decade-plus of accumulation truly indispensable?

Pichai: You can think of Waymo as a robot. Logically, people who only started working on robots in the last three years should be progressing faster. But Waymo is different; it's a highly integrated system, unlike TSMC or SpaceX, which only focus on technological complexity in a single dimension. For this kind of system integration, timing and technological accumulation are crucial. That being said, an end-to-end approach can indeed be an accelerator.

Q: So continuously developing a team is a huge advantage in itself. You've been investing, and it will all be worth it when the technology takes off. That's very smart. What about extending this to other areas? For example, robotics, will you develop the hardware yourself, or will you mainly rely on partners?

Pichai: We remain open-minded. But I learned something from Waymo and TPU: in areas involving safety and regulation, you need a first-party product feedback loop. Having first-party hardware will ultimately become very important.

07. Personally Reviewing Compute Allocation Weekly

Q: Previously, R&D spending mainly focused on personnel salaries, with technology costs being secondary. Now, TPU computing power has become a major part of the budget. How exactly does this work internally at Google? Is there a total TPU budget? When allocating budgets to projects, was the budget previously based on headcount, but is it now a "headcount + computing power" budget? How are quarterly reviews conducted?

Pichai: We've always had a computing budget, but right now computing power is severely limited. I spend at least an hour each week meticulously reviewing how much computing power each project and team is using and assessing how to allocate it. This is now a top priority.

Q: So computing power has become a scarce resource, and you need to make sure it is spent on the most worthwhile things.

Pichai: That's right.

Q: What about Google Cloud? You need computing power for your own use, but you also sell it to customers. How do you resolve this contradiction?

Pichai: It's all about planning ahead. The cloud team makes forward-looking plans, and we are committed to fulfilling our promises to our customers. Everyone operates in a constrained world, and the cloud team always complains about insufficient computing power, but planning ahead can solve most problems.

Q: Speaking of Google Cloud, GCP's MCP integration (MCP: the Model Context Protocol) is great. Your AI can call Google Cloud programmatically and do almost anything, except change core permission settings. Previously, Google Cloud's biggest pain point was its mass of complex features: after logging in, you had to create organizations and projects and hunt for services, which was very cumbersome. Now none of that matters. You can simply say, "Add this feature." The AI understands all the API documentation and acts as a navigation layer. The experience is fantastic.

Pichai: AI, as an orchestration layer, can handle anything you can imagine. The same applies within enterprises; CEOs don't lack data, they lack the methods to put that data together. Previously, you had to implement a large ERP project; now, AI is that orchestration layer.
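The "navigation layer" idea above can be sketched in a few lines. In a real system an LLM would choose the right tool from its description; here a trivial word-overlap matcher stands in for the model, and the tool names and functions are invented purely for illustration, not any actual Google Cloud API.

```python
# Hypothetical sketch of AI as a navigation layer over a complex API
# surface: map a plain-language request to the best-matching tool.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

# Invented stand-ins for a cloud provider's many services.
TOOLS = [
    Tool("create_vm", "provision a virtual machine",
         lambda arg: f"VM '{arg}' created"),
    Tool("add_dns_record", "add a DNS record to a zone",
         lambda arg: f"DNS record '{arg}' added"),
]

def navigate(request: str) -> str:
    """Pick the tool whose description best overlaps the request.

    A real orchestrator would ask an LLM to choose; the overlap
    heuristic here just keeps the sketch self-contained.
    """
    words = set(request.lower().split())
    best = max(TOOLS, key=lambda t: len(words & set(t.description.lower().split())))
    return best.run(request)

print(navigate("please add a DNS record for example.com"))
```

The point is the shape, not the matcher: the user never navigates organizations, projects, or service consoles; the orchestration layer routes intent to the right API call.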

Q: The more complex the product, the greater the benefits of AI navigation. Stripe has also experienced this, but the effect of GCP should be more obvious.

Pichai: We can do better, but you're right, the opportunity is huge.

Q: What interests me about products like OpenClaw is that they allow consumers to use stateful AI. For example, sending me a summary of news I'm interested in every morning—something that requires persistent memory, something mainstream AI applications can't do. Is this feature coming soon?

Pichai: That's definitely the direction. Users need to run persistent, long-term tasks in a reliable and secure manner, and issues like identity and permissions need to be worked out. But this is the future of AI agents: bringing this capability to consumers is an exciting frontier we're exploring.

Q: That's something I wanted to mention too. Dreamer, the company of the former Stripe CTO, which was recently acquired by Meta, is particularly good at stateful AI. You can create small applications yourself, and the experience is very smooth. It's surprisingly good. (Note: Stateful AI refers to AI systems that can retain and utilize historical context, memory, and state information across multi-step interactions or complex workflows.)

Pichai: Consumer-grade interfaces will sit on top of a full coding model underneath, coupled with the right tools and skills and the ability to run securely and persistently in the cloud. These fundamental components are converging. Today, probably only 0.1% of people live in this future, building things for themselves. But bringing it to the mass market is an exciting frontier.
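The "stateful AI" pattern from this exchange, such as the daily news summary example, boils down to a recurring task with memory that survives between runs. A minimal sketch, assuming a local JSON file as the persistence layer (the file name and item format are illustrative, not any product's actual design):

```python
# Sketch of a stateful recurring task: a daily digest that remembers
# which items the user has already seen across runs.
import json
from pathlib import Path

STATE_FILE = Path("digest_state.json")  # persistent memory between runs

def load_seen() -> set[str]:
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def daily_digest(items: list[str]) -> list[str]:
    """Return only unseen items, then persist the updated memory."""
    seen = load_seen()
    fresh = [i for i in items if i not in seen]
    STATE_FILE.write_text(json.dumps(sorted(seen | set(fresh))))
    return fresh

# On a fresh state file, the first call returns everything and the
# second call only returns what it hasn't delivered before.
print(daily_digest(["chip news", "ai news"]))
print(daily_digest(["chip news", "quantum news"]))
```

The hard parts Pichai alludes to, identity, permissions, and running this reliably in the cloud rather than a local file, are exactly what the sketch leaves out.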

Q: The companies I've worked with, even the most recently founded ones, have completely transformed product development, engineering practices, and even the positioning of design teams. Is Google also rethinking these things? Are there major changes in workflows?

Pichai: You can think of it as concentric circles. Some teams have already undergone profound transformations, and my job is to spread that change. Early on, many things were broken and impossible to push forward, but this year the curve is shifting dramatically. Google DeepMind and some software engineering teams are already living in that agent-driven world; their internal tool is called Jet Ski, which is essentially Antigravity. We just rolled it out to the search team last week. In large companies, change management is the biggest challenge in technology diffusion; smaller companies switch much faster.

Q: I'd like to raise a few issues we've encountered in the practical adoption of AI. First, engineers need time to learn how to prompt AI effectively, and each company has its own domain-specific knowledge. Second, sharing AI-generated codebases is difficult: changes are large in scope, code evolves rapidly, and multi-person collaboration becomes complex. Third, beyond engineering, data permissions are a major issue. You want the AI to answer "What is the status of this transaction?", and the company needs that capability, but the permission engine has to be rewritten. Fourth, role definitions are changing; engineering, product, and design roles may need to merge. In short, the models' capabilities have reached a certain level, but we are still far from using them fully. What are your thoughts?

Pichai: The issues you mentioned are being addressed one by one by the Gemini Enterprise team and the Antigravity team; that is our roadmap. We use it internally, hit obstacles, overcome them, and then turn the result into a product to roll out. Identity access control is a real challenge, and we have particularly high security requirements, so we must be cautious. But precisely because of this, the products we release once we solve these problems are more robust. We are in that fixed-cost phase right now.

08. Timeline for AI to Take Over from Humanity

Q: Google makes several formal business forecasts each year. In theory, you could let AI do this completely automatically, without human intervention. In which quarter do you think Google will first achieve fully automated forecasting by an AI agent?

Pichai: I expect 2027 to be a significant turning point. Initially, humans will still verify the results, but that too will gradually be handed off. By 2027, these changes will be very noticeable.

Q: So, besides engineering processes, do you think those non-engineering processes will also truly start to be AI-driven in 2027?

Pichai: Yes. That's also an advantage for startups; they can hire AI-native teams and operate on this model from the start. We, on the other hand, have to do retraining and transformation. Young companies definitely have an advantage in this area, and we must drive this transformation ourselves.

Q: What small projects within Google are currently exciting to you?

Pichai: It might surprise you, but our space data center started as a small team of just a few people with a very small budget to achieve our first milestone. Big ideas start small.

