a16z predicts the major trends for 2026 across four areas.

In its annual "Big Ideas 2026" report, venture capital firm a16z outlines key structural shifts predicted for 2026 across four core areas: infrastructure, growth, healthcare, and the interactive world. The overarching theme is that AI is evolving from a tool into an environment, a system, and an agent that works alongside humans.

Infrastructure Team Predictions

  • Startups will focus on taming unstructured, multimodal data (PDFs, videos, logs) to reduce "data entropy" and enable reliable AI workflows.
  • AI will automate repetitive cybersecurity tasks, breaking the cycle of talent shortages and allowing experts to focus on strategic threats.
  • Native infrastructure for intelligent agents will become standard to handle "agent-speed," recursive, and massive workloads, unlike current human-centric systems.
  • Creative tools will achieve true multimodality, allowing AI to generate and edit complex, coherent video scenes and characters.
  • The AI-native data stack will evolve, deeply integrating data flow into vector databases and enabling agents to maintain consistent context across systems.
  • Video will transform into an interactive "space" where users can "walk in," with models maintaining consistency, causality, and physics over time.

Growth Team Predictions

  • The central role of enterprise systems of record (like CRM) will decline as AI agents turn them into autonomous workflow engines.
  • Vertical AI software (in law, healthcare, etc.) will upgrade from information retrieval and reasoning to "multiplayer mode," coordinating tasks across multiple parties and AIs.
  • Content and software design will shift optimization from humans to intelligent agents, prioritizing machine readability over visual interfaces.
  • The "screen time" KPI will disappear in favor of outcome-based pricing, as AI delivers value without requiring user interaction time.

Bio + Health Team Prediction

  • A new core user group, "Healthy MAUs" (healthy, active monthly users), will emerge. AI-driven, preventative, and subscription-based health services will cater to this data-conscious, prevention-oriented population.

Speedrun Team (Interactive World) Predictions

  • AI world models will generate explorable 3D worlds from text, revolutionizing storytelling, gaming, and creating new digital economies.
  • Hyper-personalization will define products, with AI tailoring education, health, and media uniquely for each individual ("My Year").
  • The first truly AI-native university will emerge, built as an adaptive, self-optimizing learning system focused on teaching collaboration with AI.

Summary

Original title: Big Ideas 2026: Part 1

Original author: a16z New Media

Compiled by: Peggy, BlockBeats

Abstract: Over the past year, AI breakthroughs have shifted from model capabilities to system capabilities: understanding long time horizons, maintaining consistency, performing complex tasks, and collaborating with other intelligent agents. Consequently, the focus of industrial upgrading has shifted from single-point innovation to redefining infrastructure, workflows, and user interaction methods.

In its annual "Big Ideas 2026" report, a16z's four investment teams provided key insights for 2026 from four dimensions: infrastructure, growth, healthcare, and the interactive world.

Essentially, they all depict a trend: AI is no longer a tool, but an environment, a system, and an agent that acts alongside humans.

The following are the assessments of the four teams regarding structural changes in 2026:

As investors, our job is to delve into every corner of the technology industry, understand its workings, and predict its next direction of evolution. Therefore, every December, we invite investment teams to share what they believe to be a "big idea" that tech entrepreneurs will need to tackle in the coming year.

Today, we bring you perspectives from the Infrastructure, Growth, Bio + Health, and Speedrun teams. Views from other teams will be released tomorrow, so stay tuned.

Infrastructure Team

Jennifer Li: Startups will tame the "chaos" of multimodal data.

Unstructured, multimodal data has always been enterprises' biggest bottleneck, and also their greatest untapped treasure trove. Every company is drowning in PDFs, screenshots, videos, logs, emails, and all sorts of semi-structured "data sludge." Models are getting smarter, but their inputs are getting messier: RAG systems hallucinate, agents make subtle yet costly errors, and critical workflows stay heavily dependent on manual quality checks.

The real limiting factor for AI companies today is data entropy: the freshness, structure, and trustworthiness of the unstructured data that holds roughly 80% of a company's knowledge are constantly decaying.

This is why untangling the mess of unstructured data is becoming a generational entrepreneurial opportunity. Enterprises need a continuous way to cleanse, structure, validate, and govern their multimodal data so that downstream AI workloads can actually function. The applications are everywhere: contract analytics, onboarding, claims processing, compliance, customer service, procurement, engineering retrieval, sales enablement, analytics pipelines, and every agent workflow that depends on reliable context.

Platform startups that can extract structure from documents, images, and videos, reconcile conflicts, repair data pipelines, and keep data fresh and searchable will possess the "key to the kingdom" of enterprise knowledge and processes.
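
To make that continuous cleanse-structure-validate loop concrete, here is a minimal Python sketch of the pattern; the toy contract format, the field names, and the 30-day freshness threshold are our own illustration, not anything a16z prescribes:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
import re

@dataclass
class ContractRecord:
    source: str             # where the raw document came from
    counterparty: str       # field extracted from unstructured text
    renewal_date: datetime  # field extracted from unstructured text
    extracted_at: datetime  # timestamp used to track freshness

def extract(source: str, raw_text: str) -> ContractRecord:
    """Structure step: pull typed fields out of free-form contract text."""
    party = re.search(r"between .+? and (.+?)[,.]", raw_text)
    date = re.search(r"renews on (\d{4}-\d{2}-\d{2})", raw_text)
    if not (party and date):
        # Validate step: refuse to emit a half-structured record.
        raise ValueError(f"{source}: required fields missing")
    return ContractRecord(
        source=source,
        counterparty=party.group(1).strip(),
        renewal_date=datetime.fromisoformat(date.group(1)),
        extracted_at=datetime.now(),
    )

def is_stale(record: ContractRecord, max_age_days: int = 30) -> bool:
    """Freshness step: a record 'decays' unless it is re-validated."""
    return datetime.now() - record.extracted_at > timedelta(days=max_age_days)

doc = "This agreement between Acme Corp and Globex Ltd, renews on 2026-03-01."
record = extract("contracts/globex.pdf", doc)
print(record.counterparty, record.renewal_date.date(), is_stale(record))
```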

Joel de la Garza: AI will reshape cybersecurity hiring.

Over the past decade, hiring has been one of the biggest headaches for CISOs. From 2013 to 2021, the global shortage of cybersecurity workers surged from under 1 million to 3 million unfilled roles. The reason: security teams need highly specialized technical talent, yet that talent is asked to do exhausting Tier 1 security work, such as combing through logs, that almost no one wants to do.

The deeper root of the problem is that cybersecurity teams have created their own drudgery. They buy tools that indiscriminately detect everything, which forces teams to triage everything, which in turn manufactures an artificial "labor shortage": a vicious cycle.

In 2026, AI will break this cycle, significantly reducing the talent gap by automating the vast majority of repetitive and redundant tasks. Anyone who has worked on a large security team knows that half the work could be automated; the problem is, when you're overwhelmed with work every day, you simply can't take the time to think about what should be automated. Truly AI-native tools will do this for security teams, finally allowing them to focus on what they originally wanted to do: track attackers, build systems, and fix vulnerabilities.

Malika Aubakirova: Native infrastructure for intelligent agents will become "standard".

The biggest infrastructure upheaval in 2026 will not come from the outside, but from within. We are shifting from "human-speed, low-concurrency, predictable" traffic to "agent-speed, recursive, explosive, and massive" workloads.

Current enterprise backends are designed for a 1:1 "human action to system response" model. They are not built for the millisecond-scale recursive storms an agent can trigger, where a single "goal" fans out into 5,000 subtasks, database queries, and internal API calls. When an agent attempts to refactor a codebase or remediate security findings, it doesn't behave like a user; to a traditional database or rate limiter, it looks like a DDoS attack.

To build systems for agent workloads in 2026, the control plane must be redesigned, and agent-native infrastructure will begin to emerge. The next generation of systems must treat the thundering-herd effect as the default state: cold starts must shrink, latency jitter must drop, and concurrency limits must rise by orders of magnitude.

The real bottleneck will shift to coordination itself: routing, lock control, state management, and policy enforcement in massively parallel execution. Only platforms that can survive the deluge of tool calls will ultimately emerge victorious.
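
As a toy illustration of what treating the herd as the default might look like, the sketch below swaps per-request rate limiting (which rejects an agent's burst as abuse) for per-goal admission control: every subtask spawned by one agent goal shares a single concurrency budget and queues under backpressure instead of being rejected. The scheduler, the 100-slot budget, and the fake workload are all hypothetical:

```python
import asyncio

MAX_CONCURRENT_PER_GOAL = 100  # hypothetical policy knob

class GoalScheduler:
    """Toy admission control: one concurrency budget per top-level agent
    goal, so a 5,000-subtask fan-out queues up instead of looking like
    a DDoS to the backend."""

    def __init__(self) -> None:
        self._limits: dict[str, asyncio.Semaphore] = {}

    def _sem(self, goal_id: str) -> asyncio.Semaphore:
        # All subtasks of one goal share a single semaphore.
        if goal_id not in self._limits:
            self._limits[goal_id] = asyncio.Semaphore(MAX_CONCURRENT_PER_GOAL)
        return self._limits[goal_id]

    async def run(self, goal_id: str, subtask):
        async with self._sem(goal_id):  # backpressure: queue, don't reject
            return await subtask()

async def fake_db_query(i: int) -> int:
    await asyncio.sleep(0.001)  # stand-in for a database or API call
    return i

async def main() -> None:
    sched = GoalScheduler()
    # A single agent "goal" fanning out into 5,000 concurrent subtasks.
    results = await asyncio.gather(
        *(sched.run("refactor-codebase", lambda i=i: fake_db_query(i))
          for i in range(5000))
    )
    print(len(results), "subtasks completed")

asyncio.run(main())
```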

Justine Moore: Creative tools will go fully multimodal.

We already have the basic building blocks of AI storytelling: generated sound, music, images, and video. But for anything longer than a short clip, achieving director-level control remains time-consuming, painful, and sometimes outright impossible.

Why can't we let the model receive a 30-second video clip, create a new character using reference images and sound we provide, and then continue filming the same scene? Why can't we let the model "reshoot" from a new angle, or match the motion to the reference video?

2026 will be the year when AI truly enables multimodal creation. Users will be able to feed any form of reference content to the model and work together to generate new works or edit existing scenes.

We have already seen first-generation products emerge, such as Kling O1 and Runway Aleph, but this is just the beginning; new innovation is needed at both the model and application layers.

Content creation is one of the "killer applications" of AI, and I expect multiple successful products to emerge from various user groups—from meme creators to Hollywood directors.

Jason Cui: The AI-native data stack will continue to iterate.

Over the past year, the "modern data stack" has been visibly consolidating. Data companies are moving from modular services for collection, transformation, and computation toward bundled, unified platforms, as seen in the Fivetran/dbt merger and the expansion of Databricks.

While the ecosystem is becoming more mature, we are still in the early stages of achieving a truly AI-native data architecture. We are excited about how AI continues to transform multiple aspects of the data stack and are beginning to see data and AI infrastructure irreversibly moving towards deep integration.

We are particularly focused on the following areas:

- How will data keep flowing into high-performance vector databases, beyond traditional structured storage? (A toy sketch follows this list.)

- How will AI agents solve the "context problem": continuously accessing the correct data semantics and business definitions so that "talk to your data" applications maintain a consistent understanding across multiple systems?

- As data workflows become more intelligent and automated, how will traditional BI tools and spreadsheets evolve?
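
On the first question, here is a deliberately tiny sketch of the "rows flow into a vector store" pattern: structured records are rendered to text, embedded, and indexed for semantic lookup. The bag-of-words embedding and the numpy matrix are stand-ins for a real embedding model and a real vector database:

```python
import numpy as np

# Toy vocabulary; a real system would use a learned embedding model.
VOCAB = ["invoice", "overdue", "paid", "customer", "renewal", "contract"]

def embed(text: str) -> np.ndarray:
    """Bag-of-words 'embedding': counts over VOCAB, L2-normalized."""
    words = text.lower().split()
    vec = np.array([words.count(w) for w in VOCAB], dtype=float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Rows from a structured table, rendered to text and indexed.
rows = [
    "customer Acme invoice overdue",
    "customer Globex invoice paid",
    "contract renewal for customer Initech",
]
index = np.stack([embed(r) for r in rows])  # the "vector store"

query = embed("show every overdue invoice")
scores = index @ query                      # cosine similarity
print(rows[int(np.argmax(scores))])         # -> the overdue invoice row
```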

Yoko Li: We will truly "go inside the video".

In 2026, video will no longer be passive content to be watched; it will begin to become a place we can walk into. Video models will finally be able to understand time, remember what they have shown, and react to our actions, all while maintaining stability and coherence close to the real world, rather than outputting a few seconds of disconnected frames.

These systems will maintain characters, objects, and physical laws over extended periods, allowing actions to have real consequences and causality to unfold. Video thus transforms from a medium into a space where things can be built: robots can be trained, game mechanics can evolve, designers can prototype, and agents can learn by doing.

The world presented is no longer like a short video, but like a "living environment," beginning to narrow the gap between perception and action. This is the first time that humanity has been able to truly "dwell" in the videos it generates.

Growth Team

Sarah Wang: The enterprise "system of record" will begin to lose its central status.

In 2026, the real transformation of enterprise software will come from one core shift: the system of record will finally begin to lose its central role.

AI is bridging the gap between intention and execution: models can directly read, write, and reason over enterprise operational data, transforming ITSM, CRM, and other systems from passive databases into autonomous workflow engines.

With the rapid advancements in reasoning models and agent workflows, these systems are no longer just responding to demands, but are capable of predicting, coordinating, and executing end-to-end processes.

The interface will become a dynamic agent layer, while the traditional record layer gradually recedes into cheap persistent storage; strategic dominance passes to whoever controls the intelligent execution environment.

Alex Immerman: Vertical AI is upgrading from "information retrieval and reasoning" to "multiplayer mode".

AI is driving explosive growth in vertical industry software. Companies in the healthcare, legal, and housing sectors have quickly surpassed $100 million in ARR; finance and accounting are following closely behind.

The first revolution was information retrieval: finding, extracting, and summarizing information.

2025 brought reasoning: Hebbia analyzes financial statements, Basis reconciles trial balances across multiple systems, and EliseAI diagnoses maintenance issues and schedules suppliers.

Multiplayer mode will be unlocked in 2026.

Vertical software naturally possesses industry-specific interfaces, data, and integration capabilities, while vertical industry work is essentially a multi-party collaboration: buyers, sellers, tenants, consultants, and suppliers, each with different permissions, processes, and compliance requirements.

Today, each party's AI operates independently, making handoffs chaotic and leaving no authoritative record: the AI analyzing a contract cannot see the CFO's modeling preferences; the maintenance AI doesn't know what on-site staff have promised tenants.

Multiplayer AI will change this: automatically coordinating among parties, maintaining context, synchronizing changes, routing work to the right functional experts, letting counterparty AIs negotiate within set boundaries, and flagging asymmetries for human review.

When collaboration among many agents and many humans improves deal quality, switching costs skyrocket: this collaborative network becomes the moat that AI applications have long lacked.
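
To make "maintaining context" and "synchronizing changes" concrete, here is a deliberately simplified sketch of such a coordination layer; the party names, fields, and permission scopes are invented for illustration, not drawn from any of the companies above:

```python
from dataclasses import dataclass

@dataclass
class PartyAgent:
    name: str
    scope: set[str]  # fields this party is permitted to see

    def notify(self, update: dict) -> None:
        # Each party's AI receives only the slice it is allowed to see.
        visible = {k: v for k, v in update.items() if k in self.scope}
        if visible:
            print(f"[{self.name}] sees update: {visible}")

class DealCoordinator:
    """Routes every change to all parties while enforcing per-party scopes."""

    def __init__(self, parties: list[PartyAgent]) -> None:
        self.parties = parties
        self.context: dict = {}  # the shared, authoritative deal state

    def apply(self, update: dict) -> None:
        self.context.update(update)  # single source of truth
        for party in self.parties:   # synchronized, permission-aware handoff
            party.notify(update)

coordinator = DealCoordinator([
    PartyAgent("buyer-ai",  scope={"price", "close_date"}),
    PartyAgent("seller-ai", scope={"price", "close_date", "internal_margin"}),
    PartyAgent("lender-ai", scope={"price"}),
])
coordinator.apply({"price": 1_250_000, "internal_margin": 0.18})
```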

Stephenie Zhang: We will no longer create for humans, but for intelligent agents.

By 2026, people will interact with the web through intelligent agents, and optimizing content for human readers will lose its old importance.

We have spent years optimizing for predictable human behavior: Google rankings, top Amazon product listings, the 5W+1H structure and eye-catching openings of news articles.

Humans may overlook the deep insights buried on the fifth page, but intelligent agents will not.

Software will change accordingly. Applications used to be designed for human eyes and clicks, so optimization meant better UIs and flows. With intelligent agents taking over retrieval and interpretation, visual design matters less: engineers no longer need to stare at Grafana, because AI SREs will analyze telemetry and post insights in Slack; sales teams no longer need to page through CRMs, because agents will surface patterns and insights automatically.

We are no longer designing for humans, but for intelligent agents. The new optimization is no longer about visual hierarchy, but about machine readability. This will fundamentally change the way content is created and the tools available.
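
A trivial illustration of that shift: publish the same fact twice, once as prose arranged for human eyes and once as a self-describing structured record for agents. The field names and schema tag below are our own invention:

```python
import json

incident = {
    "service": "checkout",
    "severity": "high",
    "error_rate": 0.07,
    "started_at": "2026-01-15T09:32:00Z",
}

# Human-optimized: readable prose, but an agent has to parse it back out.
print("Checkout is degraded (7% errors) since 09:32 UTC.")

# Agent-optimized: machine-readable, self-describing, lossless.
print(json.dumps({"type": "incident", "schema": "v1", "data": incident}))
```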

Santiago Rodriguez: The "screen time" KPI will disappear.

For the past 15 years, "screen time" has been the gold standard for measuring product value: Netflix viewing time; mouse clicks in healthcare systems; minutes users spend on ChatGPT.

But in the coming era of outcome-based pricing, screen time will be completely phased out.

The benefits are already visible: ChatGPT's deep research queries require almost no screen time yet deliver immense value; Abridge automatically transcribes doctor-patient conversations and handles follow-up tasks, so doctors barely look at the screen; Cursor finishes building an application while the engineer is already planning the next phase; Hebbia generates pitch decks from piles of public documents, finally letting investment banking analysts sleep.

The challenge that follows is that companies need to find more sophisticated ways to measure ROI—doctor satisfaction, developer productivity, analyst well-being, user happiness… all of which are rising with AI.

The companies that can tell the clearest ROI story will continue to win.

Bio + Health Team (Biotechnology and Health)

Julie Yoo: "Healthy MAUs" will become the core user group.

In 2026, a new healthcare user group will take center stage: "Healthy MAUs" (healthy people who are active monthly but not sick).

Traditional medicine primarily serves three types of users:

- Sick MAUs: high-cost patients with recurring monthly needs

- Sick DAUs: for example, long-term critical-care patients

- Healthy YAUs: people who rarely seek medical care

Healthy YAUs can easily become Sick MAUs or DAUs, and preventative care could delay that transition. But because today's healthcare system is treatment-oriented, proactive testing and monitoring are almost entirely uncovered by insurance.

The emergence of Healthy MAUs changes this structure: they are not sick, but they are willing to monitor their health regularly, and they are the largest potential population.

We anticipate that both AI-native startups and "repackaged" traditional institutions will join in, offering recurring health services.

As AI lowers the cost of healthcare delivery, preventative insurance products emerge, and users prove willing to pay for subscription services, "Healthy MAUs" will become the most promising customer group for the next generation of health technology: continuously active, data-driven, and prevention-oriented.

Speedrun Team (Games, Interactive Media, and World Modeling)

Jon Lai: World models will reshape how stories are told.

In 2026, AI world models will revolutionize narratives through interactive virtual worlds and the digital economy. Technologies such as Marble (World Labs) and Genie 3 (DeepMind) can generate complete 3D worlds from text, allowing users to explore them like playing a game.

As creators adopt these tools, entirely new forms of storytelling will emerge; there may even be a "generative Minecraft" in which players co-create a vast, evolving universe.

These worlds will blur the boundaries between players and creators, forming a shared, dynamic reality. Different genres, such as fantasy, horror, and adventure, can coexist; the digital economy within them will flourish, allowing creators to earn income by creating assets, guiding players, and developing interactive tools.

These generated worlds will also become training grounds for AI agents, robots, and even potential AGIs. World models bring not only a new game genre, but also a completely new creative medium and economic frontier.

Josh Lu: "My Year"

2026 will be "My Year": products will no longer be mass-produced for the "average consumer," but will be tailor-made for "you."

In education, Alphaschool's AI tutors match each student's pace and interests.

In terms of health, AI can customize supplements, exercise plans, and diet schemes for you.

In media, AI allows content to be remixed in real time to suit your taste.

Giants of the past century won by finding the "average user"; giants of the next century will win by finding the individual within the average.

In 2026, the world will no longer be optimized for everyone, but for "you".

Emily Bennett: The first AI-native university is about to be born.

In 2026, we will see the first truly AI-native university—an institution built from scratch around an intelligent system. Traditional universities have already applied AI for grading, tutoring, and scheduling, but now a deeper transformation is emerging: an "adaptive academic organism" capable of learning and self-optimization in real time.

Imagine a university where courses, mentorship, research collaborations, and campus operations are all adjusted in real-time based on feedback; course schedules self-optimize; reading lists are dynamically updated as new research emerges; and each student's learning path changes in real time.

Precedents have already emerged: Arizona State University's collaboration with OpenAI has resulted in hundreds of AI projects; the State University of New York has incorporated AI literacy into its general education curriculum.

In AI-native universities:

- Professors become "architects of learning systems": curating data, tuning models, and teaching students how to examine machine reasoning.

- Assessment shifts to being "AI-aware": instead of asking students whether they used AI, the focus is on how they used it.

With various industries urgently needing talent capable of collaborating with intelligent systems, this university will become a "talent engine" for the new economy.

