Podcast source: Y Combinator
Compiled & translated by: Deep Tide TechFlow
Host: Gary Tan
Guest: Demis Hassabis (Founder of DeepMind, 2024 Nobel Laureate in Chemistry, Head of Google DeepMind)
Air date: April 29, 2026
Editor's Note
Google DeepMind CEO and Nobel laureate in Chemistry, Demis Hassabis, appeared on Y Combinator to discuss key advancements towards AGI, offer advice to entrepreneurs on staying ahead of the curve, and explore where the next major scientific breakthrough might occur. His most practical advice for deep tech entrepreneurs is that if you're launching a ten-year deep tech project today, you must include the emergence of AGI in your planning. He also revealed that Isomorphic Labs (the AI pharmaceutical company spun off from DeepMind) will soon have a major announcement.

Essential Quotes
AGI Route and Timeline
"These existing technology components will almost certainly become part of AGI's final architecture."
"There are still some issues with continuous learning, long-range reasoning, and memory; AGI needs to address them all."
"If your AGI timeline is around 2030 like mine, and you start a deep tech project today, then you have to take into account the possibility that AGI might appear along the way."
Memory and Context Window
"The context window is roughly equivalent to working memory. The average human working memory only has seven digits, while we have a context window with millions or even tens of millions of tokens. But the problem is that we cram everything into it, including unimportant and incorrect information. The current approach is quite crude."
"If you want to process live video streams and store all the tokens, one million tokens are only enough for about 20 minutes."
Flaws in Reasoning
"I like playing chess with Gemini. Sometimes it realizes it's a bad move, but can't find a better one, so it ends up making the same bad move again. But a precise reasoning system shouldn't do that."
"On the one hand, it can solve IMO gold medal-level problems, but on the other hand, it makes elementary school math mistakes when asked a different question. It seems to be lacking something in its introspection on its own thinking process."
Agents and Creativity
"To achieve AGI, you need a system that can proactively solve problems for you. Agents are that path, and I think we're just getting started."
"I haven't seen anyone use vibe coding to create an AAA game that tops the app store charts. Given the current level of effort invested, it should be possible, but it hasn't happened yet. This suggests that something is missing in the tools or processes."
Distillation and Small Models
"Our hypothesis is that six months to a year after the release of a cutting-edge Pro model, its capabilities can be compressed into a very small model that can run on edge devices. We haven't hit the theoretical limit of information density yet."
Scientific Discoveries and the "Einstein Test"
"I sometimes call it the 'Einstein test,' which is whether you can train a system using knowledge from 1901 and then let it independently derive the results Einstein made in 1905, including the theory of special relativity. Once you can do that, these systems are not far from truly inventing something entirely new."
"Solving a Millennium Prize problem is already remarkable. But what's even more difficult is proposing a new set of Millennium Prize problems that are considered equally profound and worth a lifetime of study by top mathematicians."
Deep Technology Entrepreneurship Advice
"Pursuing hard problems and pursuing easy problems are about equally difficult, just in different ways. Life is short, so why not focus your energy on things that no one else will do if you don't?"
AGI Implementation Path
Gary Tan: You've been thinking about AGI longer than almost anyone else. Looking at the current paradigm, how much of the final AGI architecture do you think we already have? What's fundamentally missing right now?
Demis Hassabis: Large-scale pre-training, RLHF, chain-of-thought reasoning, and so on: I'm pretty sure they'll be part of the final AGI architecture. These techniques have already proven too much for us to discover they're a dead end two years from now; that doesn't make sense to me. But on top of what's already there, one or two things may still be missing. Continuous learning, long-range reasoning, certain aspects of memory: there are still problems to solve, and AGI needs all of them sorted out. Maybe existing techniques plus incremental innovation can scale to that level, but there may still be one or two major breakthroughs to be made. I don't think there will be more than one or two. My personal assessment is that the odds of such an unsolved breakthrough existing are about 50/50. So at Google DeepMind, we're pushing forward on both lines.
Gary Tan: I've dealt with a lot of agent systems, and what strikes me most is that the underlying weights are always the same. So the concept of continuous learning is particularly interesting, because right now we're basically patching things together with duct tape, like those "nighttime dream cycles" and similar tricks.
Demis Hassabis: Yes, those dream cycles are pretty cool. We've thought about this before in the context of episodic memory consolidation. My PhD research focused on how the hippocampus elegantly integrates new knowledge into existing knowledge systems. The brain does this exceptionally well. It completes the process during sleep, especially REM sleep, replaying important experiences to learn from them. One key technique behind our earliest Atari program, DQN (DeepMind's Deep Q-Network, published in 2013 and the first system to reach human-level performance on Atari games with deep reinforcement learning), was experience replay: replaying successful trajectories over and over, an idea taken from neuroscience. That was 2013, ancient history in AI, but it was crucial at the time.
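The experience-replay idea mentioned here can be sketched in a few lines. This is a minimal illustrative version of a replay buffer, not DeepMind's actual DQN implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions fall off the front

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between consecutive
        # frames, which is what made DQN's gradient updates stable.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):
    buf.push((t, 0, 0.0, t + 1, False))  # dummy transitions for illustration

batch = buf.sample(32)
print(len(buf), len(batch))  # 100 32: the buffer is capped, the batch is random
```

The agent replays these stored transitions many times during training, much like the sleep-time replay described above, instead of learning from each experience only once.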
I agree with you; right now we really are using duct tape, cramming everything into context windows. It doesn't feel right. Even if we're building a machine rather than a biological brain, one that can in theory hold a context of millions or tens of millions of tokens with perfect recall, the cost of search and retrieval still exists. At the moment a concrete decision has to be made, finding the truly relevant information isn't easy, even if you can store everything. So I think there's still a lot of room for innovation in memory.
Gary Tan: To be honest, a million-token context window is much larger than I expected, and it allows a lot to be done.
Demis Hassabis: It's large enough for most of the scenarios it's meant for. But think about it: a context window is roughly equivalent to working memory. The average human working memory holds only about seven items, while we have context windows of millions or even tens of millions of tokens. The problem is that we cram everything into them, including unimportant and incorrect information; the current approach is quite crude. And if you're processing a live video stream right now, naively storing all the tokens, a million tokens only lasts about 20 minutes. But if you want the system to understand your life over a month or two, that's nowhere near enough.
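The 20-minute figure is easy to sanity-check. The per-second token rate below is an assumed illustrative number chosen to match the estimate in the conversation, not a published Gemini specification:

```python
def context_minutes(context_tokens, tokens_per_second):
    """Minutes of a live stream that fit in a context window
    when every token is kept naively (no summarization or pruning)."""
    return context_tokens / (tokens_per_second * 60)

# Assumed rate: roughly 830 tokens/s of combined audio + video frames
# (an illustrative figure, not an official specification).
print(round(context_minutes(1_000_000, 830)))   # 20
print(round(context_minutes(10_000_000, 830)))  # 201: even 10M tokens is only ~3.3 hours
```

Even a tenfold larger window covers only a few hours of raw stream, which is why understanding "a month or two of your life" requires something smarter than storing every token.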
Gary Tan: DeepMind has always been deeply invested in reinforcement learning and search. How deeply is this philosophy embedded in your current work on building Gemini? Is reinforcement learning still being underestimated?
Demis Hassabis: It probably is underestimated. Attention to this area has fluctuated. We've been working on agent systems since DeepMind's inception. All the work on Atari and AlphaGo is essentially about reinforcement learning agents: systems capable of autonomously achieving goals, making decisions, and formulating plans. We initially chose games because the complexity was manageable, then gradually moved to more complex ones, such as AlphaStar after AlphaGo. Basically, we've done every game we could.
The next question is whether these models can be generalized to world models or language models, not just game models. We've been working on this for the past few years. The thinking patterns and reasoning chains of all leading models today are essentially a return to what AlphaGo pioneered. I think much of the work we did back then is highly relevant to today; we're re-examining those old ideas and doing them on a larger scale, in a more general way, including various reinforcement learning methods like Monte Carlo tree search. The ideas from AlphaGo and AlphaZero are extremely relevant to today's foundational models, and I believe a large part of the progress in the next few years will come from this.
Distillation and Small Models
Gary Tan: To make something smarter now, you need larger models, but at the same time, distillation technology is advancing, and smaller models can be made quite quickly. Your Flash models are very powerful, basically achieving 95% of the performance of cutting-edge models, but at only one-tenth the price. Is that right?
Demis Hassabis: I think this is one of our core strengths. You have to build the largest models first to gain cutting-edge capabilities. One of our biggest strengths is our ability to quickly distill and compress those capabilities into smaller and smaller models. We invented this distillation method, and we're still at the forefront of it. We also have a strong business incentive: we are probably the world's largest AI application platform. We have AI Overviews and AI Mode, as well as Gemini, and now every Google product, including Maps and YouTube, integrates Gemini or related technologies. That means billions of users, across a dozen or so products that each have enormous user bases. They must be extremely fast, extremely efficient, extremely low-cost, and extremely low-latency. This gives us a huge incentive to make Flash and the even smaller Flash-Lite models extremely efficient, and I hope that ultimately serves users' various tasks well.
Gary Tan: I'm curious to see just how smart these small models can be. Is there a limit to distillation? Can a 50B or 400B model be as smart as today's largest cutting-edge models?
Demis Hassabis: I don't think we've hit the limits of information theory; at least, nobody knows for sure yet. Perhaps one day we'll encounter some kind of ceiling on information density, but our current assumption is that once a cutting-edge Pro model is released, its capabilities can be compressed into a very small model, almost capable of running on edge devices, within six months to a year. You can see this in the Gemma model; our Gemma 4 model performs exceptionally well for its size. This is achieved using a lot of distillation techniques and small model efficiency optimization techniques. So I really don't see any theoretical limit; I think we're still very far from that limit.
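At its core, distillation trains the small model to match the large model's full output distribution rather than hard labels. Here is a minimal sketch of the classic temperature-softened KL distillation loss; this is a toy illustration of the general technique, not Google's actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher temperature flattens the distribution."""
    z = [x / temperature for x in logits]
    m = max(z)                                  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) between softened distributions.

    Matching the teacher's full distribution (not just its argmax) is what
    transfers the "dark knowledge" about how wrong each alternative is.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))          # 0.0: a perfect student
print(distillation_loss(teacher, [0.2, 1.0, 4.0]))  # positive: the student has drifted
```

Minimizing this loss over the teacher's outputs is what compresses a frontier model's behavior into a much smaller one; how close the student can get for a given parameter count is exactly the "information density ceiling" question raised above.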
Gary Tan: There's a really ridiculous phenomenon right now, where engineers are doing about 500 to 1000 times more work than they did six months ago. Some people in this room are doing about 1000 times the work of a Google engineer in the 2000s. Steve Yegge talked about this.
Demis Hassabis: I'm very excited. Small models have many uses. One is their low cost, and the speed also brings benefits. In writing code or other tasks, you can iterate much faster, especially when collaborating with systems. Even if a fast system isn't cutting-edge, say only 90% to 95% of the cutting edge, it's perfectly adequate, and the speed you gain in iteration far exceeds that 10%.
Another major trend is running these models on edge devices, not only for efficiency but also for privacy and security. Think about devices that handle highly personal information, and robots. For your home robot, you'd want a high-efficiency, powerful model running locally, only delegating tasks to a larger model in the cloud for specific scenarios. Audio and video streams are processed locally, and the data remains local—I can imagine this would be a wonderful ultimate state.
Memory and Reasoning
Gary Tan: Let's go back to context and memory. The model is currently stateless. What would the developer experience be like if it had continuous learning capabilities? How would you guide such a model?
Demis Hassabis: That's an interesting question. The lack of continuous learning is a key bottleneck preventing current agents from completing full tasks. Current agents are useful for specific parts of a task; you can piece them together to do some cool things, but they don't adapt well to your specific environment. That's why they can't truly be "fire-and-forget" yet; they need to learn from your specific context. This problem must be solved to achieve true general intelligence.
Gary Tan: How far have you progressed in terms of reasoning? The model's thought process is currently very strong, but it still stumbles on some mistakes that a bright undergraduate wouldn't make. What specific changes are needed? What progress do you expect in reasoning?
Demis Hassabis: There's still a lot of room for innovation in our thinking paradigms. What we're doing is still quite crude and brute-force. There are many areas for improvement, such as monitoring the thought process and intervening midway through thinking. I often feel that both our systems and our competitors' systems, to some extent, overthink and get stuck in a cycle.
I sometimes like to use Gemini chess as a tool for observation. It's interesting that all the leading foundational models are actually quite poor at chess. Observing their thought processes is valuable because chess is a well-understood domain, and I can quickly determine if it's going astray or if its reasoning is valid. What we see is that sometimes it considers a move, realizes it's a bad move, but can't find a better one, and ends up making the same bad move again. A precise reasoning system shouldn't exhibit this behavior.
This huge gap still exists, but fixing it might only require one or two adjustments. This is why you see so-called "jagged intelligence," which can solve IMO gold medal-level problems on the one hand, but makes elementary school math mistakes when asked a different question on the other. It seems to be lacking something in introspection about its own thought processes.
The Agent's True Abilities
Gary Tan: Agents are a big topic. Some say it's just hype. I personally think it's only just beginning. What is DeepMind's internal assessment of agent capabilities, and how different is it from the external hype?
Demis Hassabis: I agree with you, we're just getting started. To achieve AGI, you need a system that proactively solves problems for you. That's always been clear to us. Agents are that path, and I think we're just beginning. Everyone is exploring how to make agents work better; we've each done a lot of experimenting, and many of you here probably have too: how to integrate agents into workflows so they're not just icing on the cake but doing something fundamental. We're still in the experimental phase. It's probably only in the last two or three months that the technology has started reaching the point where it's no longer a toy demo but genuinely adds value to your time and efficiency.
I often see people start dozens of agents and let them run for dozens of hours, but I'm not sure if the output matches the investment.
We haven't seen anyone use vibe coding to create an AAA game that tops the app store charts. I've tried it myself, and many of you here have made some pretty good little demos. I can now create a prototype of Theme Park in half an hour, whereas it took me six months when I was 17. I have a feeling that if you spent a whole summer on it, you could create something truly incredible. But it still requires craftsmanship, a person's soul, and taste; you have to bring those qualities into anything you build. In fact, no kid has yet built a game that sells ten million copies, even though, with today's tools, it should theoretically be possible. So something is missing, maybe in the process, maybe in the tools. I expect to see that kind of result within the next 6 to 12 months.
Gary Tan: To what extent will it be fully automated? I don't think it will be fully automated from the start. The more likely path is for the people here to achieve 1000 times the efficiency first, then for someone to use these tools to create best-selling apps and games, and only then will more steps be automated.
Demis Hassabis: Yes, that's what you should see first.
Gary Tan: Part of the reason is that some people are indeed doing this, but they are unwilling to publicly say how much the agent helped.
Demis Hassabis: Perhaps. But I'd like to talk about creativity. I often cite AlphaGo as an example; everyone knows about Move 37 in the second game. For me, I'd been waiting for that moment before starting scientific projects like AlphaFold. We began working on AlphaFold the day after we returned from Seoul, ten years ago now. My recent trip to Korea was to celebrate AlphaGo's tenth anniversary.
But simply moving beyond Move 37 isn't enough. It's cool, it's useful. But could this system invent Go itself? If you give it a high-level description, like "a game whose rules can be learned in five minutes, but which is difficult to master even in a lifetime, aesthetically elegant, and a game can be finished in an afternoon," and the system returns Go, today's systems can't do that. The question is, why?
Gary Tan: Perhaps someone among us can do that.
Demis Hassabis: If someone has done it, then the answer isn't that the system is missing something, but that we've been using the system incorrectly. That may well be the right answer. Perhaps today's systems already have this capability, but they need a sufficiently talented creator to drive them, to provide the soul of the project, someone highly integrated with the tools, almost at one with them. If you immerse yourself in these tools day and night and possess deep creativity, you might be able to create something beyond imagination.
Open Source and Multimodal Models
Gary Tan: Let's change the subject and talk about open source. The recent release of Gemma allows very powerful models to run locally. What are your thoughts on this? Will AI become something users control, rather than primarily residing in the cloud? Will this change who can use these models to build products?
Demis Hassabis: We are staunch supporters of open source and open science. AlphaFold, which you mentioned, we've made completely free and open source. Our scientific work continues to be published in top journals. As for Gemma, we aim to create world-leading models at their size. Gemma has already been downloaded approximately 40 million times, and it was released only two and a half weeks ago.
I also believe a Western technology stack needs a presence in the open-source field. China's open-source models are excellent and currently lead the field, but we believe Gemma is very competitive at its size.
Another issue for us is resources; no one has the spare computing power to build two full-scale frontier models. So our current decision is: edge models are for Android, glasses, robots, and so on, and it's best to make them open, because once deployed on devices they're inherently exposed, so better to be completely open from the start. We've unified our open strategy at the nano level, which also makes strategic sense.
Gary Tan: Before coming on stage, I demonstrated the AI operating system I created. I can interact directly with Gemini using voice. I was quite nervous demonstrating it to you, but it actually worked. Gemini was built as a multimodal system from the very beginning. I've used many models, and currently, no model can compare to Gemini in terms of the depth of its direct voice-to-model interaction, tool invocation capabilities, and contextual understanding.
Demis Hassabis: Yes. One advantage of the Gemini series that hasn't been fully recognized is that we built it multimodally from the start. This made the initial stages more difficult than working with just text, but we believe we'll benefit in the long run, and it's already starting to pay off. For example, in terms of world models, we built Genie (a generative interactive environment model developed by DeepMind) on top of Gemini. The same applies to robotics; Gemini Robotics will be built on a multimodal foundational model, and our multimodal advantage will become a competitive moat. We're also increasingly using Gemini at Waymo (Alphabet's autonomous driving company).
Imagine a digital assistant that follows you into the real world, perhaps on your phone or glasses, needing to understand the physical world and environment around you. Our system excels in this area. We will continue to invest in this direction, and I believe our leading edge in these kinds of problems is significant.
Gary Tan: Inference costs are decreasing rapidly. When inference becomes essentially free, what becomes possible? Will your team's optimization direction change as a result?
Demis Hassabis: I'm not sure inference will truly be free; the Jevons Paradox (the phenomenon where increased efficiency leads to increased total consumption) is there. I think everyone will eventually use up all the computing power they have. Imagine a group of millions of agents working collaboratively, or a small group of agents thinking simultaneously in multiple directions and then integrating their findings. We're experimenting with these directions, and all of this will consume available inference resources.
In terms of energy, if we solve several problems related to controlled nuclear fusion, room-temperature superconductivity, and optimal batteries, I believe we can achieve near-zero energy costs through materials science. However, bottlenecks remain in areas such as the physical manufacturing of chips, at least for the next few decades. Therefore, there will still be quota restrictions on inference devices, and efficient use will still be necessary.
The Next Scientific Breakthrough
Gary Tan: Fortunately, the smaller models are getting smarter. Many founders here work in biology and biotech. AlphaFold 3 has already moved beyond proteins to a broader range of biomolecules. How far are we from modeling complete cellular systems? Isn't that a completely different level of difficulty?
Demis Hassabis: Isomorphic Labs is making excellent progress. AlphaFold is just one step in the drug discovery process; we are doing adjacent biochemical research, designing compounds with the right properties, and there will be major announcements soon.
Our ultimate goal is to create a complete virtual cell, a fully functional cell simulator that you can perturb, whose output is close enough to experimental results to be practically applicable. You can skip numerous search steps and generate large amounts of synthetic data to train other models to predict the behavior of real cells.
I estimate that a complete virtual cell is still about ten years away. At DeepMind Science, we're starting with the virtual cell nucleus because it's relatively self-contained. The key in this type of problem is whether you can slice out a subsystem of the right complexity, self-contained enough that you can reasonably approximate its inputs and outputs, and then focus on that subsystem. The cell nucleus is a good fit from this perspective.
Another problem is the lack of data. I've spoken with top scientists working on electron microscopy and other imaging techniques. Imaging living cells without killing them would be revolutionary, because it would transform the problem into a visual one, one we know how to solve. But as far as I know, there's currently no technology that can image living, dynamic cells at nanometer resolution without damaging them. You can take static images at that resolution, and they're already very detailed, which is exciting, but not enough to directly turn it into a visual problem.
Therefore, there are two paths: one is a hardware-driven, data-driven approach; the other is to build better learnable simulators to simulate these dynamic systems.
Gary Tan: You're not just looking at biology. Materials science, drug discovery, climate modeling, mathematics—if you had to rank them, which scientific field would be most radically transformed in the next five years?
Demis Hassabis: Every field is exciting, which is why this has always been my greatest passion and the reason I've been working in AI for over 30 years. I've always believed that AI will be the ultimate tool for science, advancing scientific understanding, scientific discovery, medicine, and our knowledge of the universe.
Our initial mission statement was a two-step process. First, solve intelligence, that is, build AGI; second, use it to solve everything else. We later had to adjust our wording because people would ask, "Are you really saying you're solving everything?" We definitely meant that. Now people are starting to understand what that means. Specifically, I'm referring to solving what I call "root node problems" in science—those areas where a breakthrough could unlock entirely new branches of discovery. AlphaFold is the prototype for what we want to do. Over three million researchers worldwide, almost every biologist, are now using AlphaFold. I've heard from friends who are executives at pharmaceutical companies that almost every drug discovered in the future will use AlphaFold at some point in the drug discovery process. We're proud of this, and this is the kind of impact we hope AI will have. But I think this is just the beginning.
I can't think of any scientific or engineering field that AI can't help with. The fields you mentioned, I think, are pretty much in the "AlphaFold 1 moment"—the results are promising, but the major challenges haven't been truly overcome yet. We'll have a lot to talk about in all these fields over the next two years, from materials science all the way to mathematics.
Gary Tan: It feels like Prometheus, giving humanity a completely new ability.
Demis Hassabis: That's right. Of course, just like the moral of the story of Prometheus, we must also be careful about how this ability is used, where it is used, and the risk of the same set of tools being abused.
Lessons from Success
Gary Tan: Many of you here are trying to start companies that apply AI to science. In your opinion, what's the difference between startups that are truly advancing the frontier and those that just wrap foundation models in an API and call themselves "AI for Science"?
Demis Hassabis: I'm thinking about what I would do if I were sitting in your seats today, looking at projects at Y Combinator. One thing is that you have to anticipate where AI technology is headed, which is inherently difficult. But I do believe there's a huge opportunity in combining the trajectory of AI with another deep tech field. That intersection, whether it's materials science, medicine, or another truly difficult scientific field, especially those touching the atomic world, won't have shortcuts in the foreseeable future. Those fields won't be crushed by the next foundation-model update. If you're looking for a defensible direction, this is what I'd recommend.
I've always had a penchant for deep tech. Nothing truly lasting and valuable comes easily. I'm always drawn to deep tech. When we started in 2010, AI was deep tech—investors told me, "We already know this won't work," and academia considered it a niche area that had been tried and failed in the 90s. But if you have conviction in your idea—why this time is different, what unique combination of backgrounds you have—ideally, you yourself are an expert in machine learning and its applications, or you can assemble such a founding team—then there's enormous impact and value that can be created.
Gary Tan: This message is important. Once something is accomplished, it may seem obvious, but before it's done, everyone is against you.
Demis Hassabis: Absolutely, so you have to do what you're truly passionate about. For me, no matter what, I'd do AI. I decided very young that it was the most impactful thing I could imagine. That has proven true, though it might not have; we could have been 50 years too early. It's also the most interesting thing I can imagine. Even if we were still stuck in a small garage today and AI hadn't been built yet, I'd still find a way to keep doing it. Maybe I'd go back to academia, but I'd find some way to continue.
Gary Tan: AlphaFold is an example of someone who followed a direction and made the right bet. What makes a scientific field suitable for producing AlphaFold-like breakthroughs? Are there any patterns, such as a certain objective function?
Demis Hassabis: I really should write this down sometime. The lesson I've learned from all the Alpha projects, including AlphaGo and AlphaFold, is that our current techniques work best under the following conditions. First, the problem has a huge combinatorial search space, the larger the better, so large that no brute-force or hand-crafted algorithm can cover it; the move space of Go and the conformational space of proteins both far exceed the number of atoms in the universe. Second, you can clearly define the objective function, such as minimizing a protein's free energy or winning the game in Go, so the system can hill-climb against it. Third, there is sufficient data, or a simulator that can generate large amounts of synthetic data within the right distribution.
If these three conditions hold, then today's methods can go very far in finding that needle in a haystack. Drug discovery follows the same logic: if a compound exists that can treat the disease without side effects, and the laws of physics allow it to exist, then the only problem is finding it efficiently and feasibly. I believe AlphaFold was the first to demonstrate that such systems can find this needle in a massive search space.
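The three conditions can be illustrated with a toy search problem: a combinatorial space of 2^200 bitstrings (condition one), a clearly defined objective (condition two), and a cheap evaluator standing in for a simulator (condition three). A simple hill-climber then finds the optimum without brute force. This is a deliberately simplified sketch of the recipe, not how AlphaGo or AlphaFold actually search:

```python
import random

random.seed(0)

N = 200                                   # search space of 2**200 bitstrings
TARGET = [random.randint(0, 1) for _ in range(N)]

def score(candidate):
    """A clearly defined objective: agreement with a hidden target string.
    (A stand-in for 'free energy of a fold' or 'win probability of a move'.)"""
    return sum(c == t for c, t in zip(candidate, TARGET))

def hill_climb(steps=5000):
    """Greedy local search: flip one bit, keep the flip only if it doesn't hurt."""
    x = [random.randint(0, 1) for _ in range(N)]
    best = score(x)
    for _ in range(steps):
        i = random.randrange(N)
        x[i] ^= 1
        s = score(x)
        if s >= best:
            best = s      # keep an improving (or neutral) move
        else:
            x[i] ^= 1     # revert a worsening move
    return best

print(hill_climb())  # 200: the optimum, found without enumerating 2**200 candidates
```

A few thousand objective-guided evaluations suffice where exhaustive enumeration of 2^200 states never could; remove any of the three conditions (the defined objective, the cheap evaluator, or a space worth searching) and the recipe breaks down.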
Gary Tan: I want to move to a higher level. We're talking about how humans used these methods to create AlphaFold, but there's another meta-level: humans using AI to explore a space of possible hypotheses. How far are we from AI systems being able to do real scientific reasoning (and not just pattern matching on data)?
Demis Hassabis: I think we're very close. We're working on general-purpose systems of this kind. We have a system called AI co-scientist, and algorithms like AlphaEvolve that can do things beyond base Gemini. All the leading labs are exploring this direction.
But so far, I personally haven't seen a truly significant scientific discovery made by these systems. I think it's coming soon. It might be related to the creativity we discussed earlier, truly pushing the boundaries of what we know. At that level, it's no longer pattern matching, because there's no pattern to match. It's not entirely extrapolation either, but rather some kind of analogical reasoning, which I think these systems don't yet possess, or rather, we haven't used them correctly.
One criterion I often mention for science is whether a system can propose a truly interesting hypothesis, rather than just verify one. Verifying a hypothesis can itself be a momentous event, such as proving the Riemann Hypothesis or solving a Millennium Prize problem, and we are perhaps only a few years away from that.
Even more difficult is proposing a new set of Millennium Prize problems that top mathematicians consider equally profound and worthy of a lifetime of study. I think that's an order of magnitude more difficult, and we don't yet know how to do it. But I don't believe it's magic; I believe these systems will eventually achieve it, perhaps missing just one or two things.
One way we can test this is what I sometimes call the "Einstein test": can you train a system with knowledge from 1901 and then let it independently derive the results Einstein made in 1905, including special relativity and his other papers from that year? I think we should actually run this test, try it repeatedly, and see when we can do it. Once we can, these systems are not far from truly inventing something entirely new.
Entrepreneurship Advice
Gary Tan: One last question. Many of you here have deep technical backgrounds and aspire to build something on the scale of what you've built, one of the world's largest AI research organizations. Coming from the forefront of AGI research, what do you know now that you wish you had known at 25?
Demis Hassabis: We've actually touched on part of that. You'll find that pursuing hard problems and pursuing easy problems are about equally difficult, just in different ways; different things present different kinds of difficulty. But life is short and energy is limited, so why not invest your life's energy in things that, if you don't do them, no one else will? Use that as a criterion for your choices.
Another point is that I think cross-disciplinary collaborations will become more common in the next few years, and AI will make cross-disciplinary collaborations easier.
The last point depends on your AGI timeline. Mine is around 2030. If you start a deep tech project today, that usually means a ten-year journey, so you have to factor the possible arrival of AGI midway into your planning. What does that mean? Not necessarily a bad thing, but you have to consider it. Can your project make use of AGI? How will AGI systems interact with your project?
Returning to the relationship between AlphaFold and general AI systems, one scenario I foresee is that general-purpose systems like Gemini, Claude, or similar systems will use specialized systems like AlphaFold as tools. I don't think we'll cram everything into a single, massive "brain." It's pointless to cram all the protein data into Gemini, as Gemini doesn't need to perform protein folding. Going back to your point about information efficiency, that protein data would definitely hinder its language capabilities. A better approach is to have very powerful general-purpose tool-using models that can call upon and even train those specialized tools, but the specialized tools are independent systems.
This line of thinking is worth considering deeply, as it influences what you build today, including the types of factories and financial systems you choose. You need to take the AGI timeline seriously, imagine what that world will be like, and then build something that will still be useful when that world arrives.


