More than $200 million stolen: what lessons can we learn from the Cetus security incident?

Teams with purely technical backgrounds seriously lack basic "financial risk acumen".

After reading @CetusProtocol's post-mortem of the hack, you will notice an intriguing phenomenon: the technical details are disclosed quite transparently and the emergency response reads like a textbook case, yet on the most critical, soul-searching question of all, "why was it hacked?", the report turns evasive:

The report devotes a great deal of space to the faulty check in the `checked_shlw` function of the `integer-mate` library (the bound should have been ≤2^192, but the code effectively accepted anything ≤2^256) and characterizes it as a "semantic misunderstanding." That narrative is technically accurate, but it cleverly shifts the focus onto an external dependency, as if Cetus were just another innocent victim of this defect.
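To make the nature of that check error concrete, here is a minimal sketch in Rust, assuming only the general shape of the bug as reported; it is not the actual Move code from `integer-mate`, u128 stands in for the 256-bit type, and the constants are illustrative.

```rust
// A minimal Rust sketch of this class of bug (NOT the actual Move code from
// integer-mate; u128 stands in for the 256-bit type, constants are illustrative).
// For `x << 64` on u128 the correct rejection bound is x >= 2^64, analogous to
// x >= 2^192 for a 256-bit integer.

/// Buggy guard: the threshold is far too large (analogous to checking against
/// 2^256 instead of 2^192), so crafted values in the overflow range slip
/// through and the shift silently drops their high bits.
fn checked_shlw_buggy(x: u128) -> (u128, bool) {
    let too_large_mask: u128 = (u64::MAX as u128) << 64; // 2^128 - 2^64
    if x > too_large_mask {
        return (0, true);
    }
    (x << 64, false) // for x >= 2^64 the high bits are silently discarded
}

/// Fixed guard: reject any value whose bits would be shifted out.
fn checked_shlw_fixed(x: u128) -> (u128, bool) {
    if x >= (1u128 << 64) {
        return (0, true); // overflow flagged, caller must abort
    }
    (x << 64, false)
}

fn main() {
    let crafted = (1u128 << 100) + 1; // an "astronomical" input in the unchecked gap
    println!("buggy: {:?}", checked_shlw_buggy(crafted)); // wrapped value, overflow NOT flagged
    println!("fixed: {:?}", checked_shlw_fixed(crafted)); // (0, true)
}
```

A plain boundary test around the 2^192 threshold would have exposed the difference between the two versions immediately.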

The question is: if `integer-mate` is an open-source, widely used math library, how did relying on it lead to a mistake as absurd as handing out a sky-high liquidity position for a single token?

Tracing the attack path shows that the attacker needed four conditions to hold simultaneously for the exploit to work: a faulty overflow check, a large left-shift calculation, a round-up rule, and the absence of any economic sanity check.

Cetus was "careless" in every "trigger" condition, for example: accepting astronomical numbers such as 2^200 input from users, using extremely dangerous large-scale displacement operations, and completely trusting the inspection mechanism of external libraries. The most fatal thing is that when the system calculated the absurd result of "1 token for a sky-high price share", it was directly executed without any economic common sense check.

Therefore, the points that Cetus should really reflect on are as follows:

1) Why use a general-purpose external library without security testing it? The `integer-mate` library is open source, popular, and widely used, yet Cetus relied on it to manage hundreds of millions of dollars in assets without fully understanding its security boundaries or preparing suitable alternatives in case it failed. Cetus clearly lacked even basic supply-chain security awareness;

2) Why were astronomical inputs accepted without any bounds? DeFi protocols should pursue decentralization, but the more open a mature financial system is, the clearer its boundaries need to be.

When the system accepts astronomical numbers carefully constructed by an attacker, the team has clearly never asked whether such a liquidity demand could possibly be reasonable; even the world's largest hedge fund would never need a position that large. Evidently the Cetus team lacks risk-management talent with financial intuition (a sketch of the kind of sanity check this calls for follows this list);

3) Why did multiple rounds of security audits still fail to catch the problem in advance? The question itself exposes a fatal misconception: the project team outsourced security responsibility to audit firms and treated the audit report as a certificate of immunity. The reality is harsher: audit engineers are good at finding code bugs, but who among them would think to test whether the exchange ratio the system computes makes any economic sense?

This kind of cross-domain verification, spanning mathematics, cryptography, and economics, is the biggest blind spot in modern DeFi security. The audit firm will say "this is a design flaw in the economic model, not a code logic problem"; the project team will complain that "the audit didn't find anything"; and users only know that their money is gone!
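Returning to point 2): the kind of check that was missing is not complicated. Below is a hedged sketch assuming a governance-set plausibility bound; the function name, parameters, and threshold are hypothetical and not taken from Cetus's codebase.

```rust
// Hypothetical economic sanity check; names and thresholds are assumptions,
// not part of Cetus's actual code.

/// Reject liquidity additions whose implied "price" is economically absurd:
/// if the position credited per token actually deposited exceeds a plausible
/// ceiling, abort instead of trusting the math blindly.
fn economic_sanity_check(
    liquidity_credited: u128,
    tokens_deposited: u128,
    max_liquidity_per_token: u128, // governance-set plausibility bound (hypothetical)
) -> Result<(), &'static str> {
    if tokens_deposited == 0 {
        return Err("zero-cost liquidity is never legitimate");
    }
    let implied_ratio = liquidity_credited / tokens_deposited;
    if implied_ratio > max_liquidity_per_token {
        return Err("implied liquidity per token exceeds plausible bounds");
    }
    Ok(())
}

fn main() {
    // The exploit scenario: astronomical liquidity credited for 1 token.
    let result = economic_sanity_check(1u128 << 100, 1, 1_000_000_000);
    assert!(result.is_err()); // the absurd trade is rejected before execution
    println!("check result: {:?}", result);
}
```

A check like this does not require financial genius, only the habit of asking whether the number the code just produced could ever occur in a real market.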

You see, what this ultimately exposes is a systemic security shortcoming of the DeFi industry: teams with purely technical backgrounds seriously lack a basic "sense of financial risk."

Judging from its report, the Cetus team clearly has not reflected deeply enough.

Rather than fixating on the technical flaws behind this particular attack, I believe that, starting with Cetus, every DeFi team should break out of purely technical thinking and genuinely cultivate the security-risk awareness of a "financial engineer."

For example: bring in financial risk-control experts to fill the technical team's blind spots; adopt a multi-party audit and review mechanism that covers not only code audits but also the necessary economic-model audits; and cultivate a "financial sense" by simulating attack scenarios and their responses and staying alert to abnormal operations.

This reminds me of my earlier experience working at a security company, where industry security leaders @evilcos, @chiachih_wu, @yajinzhou, and @mikelee205 shared the same consensus:

As the industry matures, code-level technical bugs will become rarer and rarer, while business-logic "awareness bugs", with their unclear boundaries and fuzzy responsibilities, will become the biggest challenge.

Audit firms can only ensure that the code is free of bugs; drawing sound "logical boundaries" requires the project team itself to deeply understand the essence of its business and to stay in control of those boundaries. (This is the root cause of the many blame-shifting incidents in which projects were still hacked after passing security audits.)

The future of DeFi belongs to teams with strong coding skills and a deep understanding of business logic!


Author: 链上观

