
Held from June 26 to 27, Istanbul Blockchain Week (IBW 2025) focused on the convergence of AI and Web3, making it one of the year's key venues for Web3 security discussions. In two roundtable forums at IBW, Jason Jiang, Chief Business Officer of security firm CertiK, joined experts including Nurettin Erginoz, head of cybersecurity services at PwC Turkey, and Charlie Hu, co-founder of Bitlayer, for in-depth discussions on the current state of AI in DeFi and the security challenges it raises.
During the discussions, "DeFAI" (decentralized AI finance) emerged as a keyword. The panelists noted that with the rapid development of large language models (LLMs) and AI agents, a new financial paradigm, DeFAI, is gradually taking shape. This shift, however, also brings new attack surfaces and security risks.
"DeFAI has great prospects, but it also forces us to re-examine the trust mechanisms of decentralized systems," said Jason Jiang. "Unlike smart contracts built on fixed logic, an AI agent's decision-making is influenced by context, timing, and even historical interactions. This unpredictability not only amplifies risk but also creates openings for attackers."
"AI agents" are essentially intelligent entities that can make autonomous decisions and execute based on AI logic, and are usually authorized to run by users, protocols or DAOs. Among them, the most typical representative is the AI trading robot. Currently, most AI agents run on the Web2 architecture and rely on centralized servers and APIs, which makes them vulnerable to threats such as injection attacks, model manipulation or data tampering. Once hijacked, it may not only lead to financial losses, but also affect the stability of the entire protocol.
The forum also walked through a typical attack scenario, illustrated in the sketch below: a DeFi user's AI trading agent monitors social media posts as trading signals. An attacker publishes a false alarm, such as "Protocol X is under attack," inducing the agent to trigger an immediate emergency liquidation. The operation not only costs the user assets but also causes market volatility that the attacker can exploit through front-running.
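To make the attack surface concrete, here is a minimal, hypothetical sketch of such a naive agent loop. The feed and exchange interfaces and the keyword check are illustrative assumptions, not any real agent framework or exchange API; the point is that an unauthenticated post flows straight into an irreversible market action.

```python
# Hypothetical sketch of a naive DeFi trading agent that treats raw,
# unauthenticated social media text as a trading signal. The feed and
# exchange objects are illustrative assumptions, not a real API.

PANIC_KEYWORDS = ("under attack", "exploit", "hacked", "drained")

def run_naive_agent(feed, exchange) -> None:
    """Poll a social feed and liquidate on any 'panic' message."""
    for post in feed.stream():          # untrusted input: anyone can post
        text = post.text.lower()
        if any(kw in text for kw in PANIC_KEYWORDS):
            # No source verification, no corroboration, no rate limit:
            # a single forged post ("Protocol X is under attack") forces
            # an emergency liquidation the attacker can front-run.
            exchange.market_sell_all()
            return
```

On-chain, such a liquidation is typically visible before it confirms, which is what makes the front-running step possible for the attacker who planted the message.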

In response to these risks, the panelists broadly agreed that the security of AI agents should not rest with any single party; it is the shared responsibility of users, developers, and third-party security firms.
First, users must understand the scope of the permissions an agent holds, grant them cautiously, and carefully review the agent's high-risk operations. Second, developers should build in defenses at the design stage, such as prompt hardening, sandbox isolation, rate limiting, and fallback logic (see the sketch below). Third-party security firms like CertiK should provide independent reviews of an AI agent's model behavior, infrastructure, and on-chain integration, working with developers and users to identify risks and propose mitigations.
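As a rough illustration of how those developer-side defenses compose, here is a hypothetical sketch combining permission scoping, rate limiting, and fallback logic. The thresholds and the exchange interface are assumptions chosen for illustration, not any particular product's design.

```python
import time

# Hypothetical sketch of the defensive patterns named above: permission
# scoping, rate limiting, and fallback logic. Thresholds and the
# exchange interface are illustrative assumptions.

MAX_SELL_FRACTION = 0.10      # permission scope: cap any single sale
MIN_ACTION_INTERVAL = 3600    # rate limit: at most one action per hour

class GuardedAgent:
    def __init__(self, exchange):
        self.exchange = exchange
        self.last_action_ts = 0.0

    def request_liquidation(self, fraction: float, corroborated: bool) -> str:
        # Fallback logic: an uncorroborated signal never triggers a trade;
        # it is escalated to a human instead of acted on autonomously.
        if not corroborated:
            return "escalate_to_human"
        # Rate limiting: drop actions arriving too soon after the last one.
        now = time.time()
        if now - self.last_action_ts < MIN_ACTION_INTERVAL:
            return "rate_limited"
        # Permission scoping: never exceed the user-granted sale cap.
        fraction = min(fraction, MAX_SELL_FRACTION)
        self.exchange.market_sell(fraction)
        self.last_action_ts = now
        return "executed"
```

The key design choice is that the default path is inaction: an uncorroborated signal escalates to a human rather than executing, so a forged post degrades into an alert instead of a liquidation.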
At the end of the discussion, Jason Jiang warned: "If we keep treating AI agents as 'black boxes,' it is only a matter of time before a security incident occurs in the real world." His advice to developers exploring DeFAI: "Like smart contracts, the behavioral logic of AI agents is implemented in code. And because it is code, it can be attacked, so professional security audits and penetration testing are required."
As one of the most influential blockchain events in Europe, Istanbul Blockchain Week has attracted more than 15,000 developers, project teams, investors, and regulators from around the world. This year, with Turkey's Capital Markets Board (CMB) officially beginning to issue licenses to blockchain projects, IBW's standing in the industry has been further elevated.
