A brief note on using Trae
This post was published on 2025-07-22; at the time of writing, Trae's feature completeness and performance remain poor. It may improve later, so feel free to try it yourself and trust your own experience.
As common sense suggests, a company's first employees shape its culture and products, forming a deep-rooted foundation that is hard to change and also somewhat intangible; my notes here are for reference only.
UI Design
Trae's interface looks great: it tweaks the layout, colors, and fonts of the VS Code base it is built on, and the result is visually polished. The layout logic is fairly clear as well; I have no suggestions to offer in this area.
Features
Missing Features
Compared with VS Code, many features provided by Microsoft and GitHub are absent; below are only the ones I'm aware of:
- settings sync
- settings profile
- tunnel
- extension marketplace
- first-party closed-source extensions
- the IDE itself only supports Windows and macOS; Web and Linux are missing
- Remote SSH only supports Linux targets; Windows and macOS are missing
The first-party closed-source extensions are particularly hard to replace. For the extension marketplace, open-vsx.org is used instead; many popular extensions are available there, not necessarily in their latest versions, but good enough.
Because the Remote features are incomplete, machines on other platforms have to be set aside for now.
Feature Parity
Compared with the more mature VS Code and Cursor, the AI-feature set has already reached parity.
The large-model integrations (Ask / Edit / Agent, and so on) are all there. CUE (Context Understanding Engine) is Trae's counterpart to NES (Next Edit Suggestion).
GitHub Copilot's completions use GPT-4o, Cursor's completions use its Fusion model; Trae has not yet disclosed which model powers its completions.
MCP, rules, Docs are all present.
Completion
In actual use CUE performs poorly; I reject at least 90% of its suggestions. With such a low acceptance rate it mostly gets in the way, so I have now disabled CUE entirely.
Copilot's GPT-4o is good at completing the next line, but its NES performs terribly, so I keep that turned off almost all the time.
Cursor's Fusion NES is superb; anyone who has used it will have been impressed. Its strength is limited to code completion, though; for non-code content it lags behind GPT-4o.
CUE is simply unusable.
An unscientific, subjective scoring on a 10-point scale:
| Model | Inline Code Completion | Next Edit Completion | Non-code Completion |
|---|---|---|---|
| Cursor | 10 | 10 | 6 |
| GitHub Copilot | 9 | 3 | 8 |
| Trae | 3 | 0 | 3 |
Agent
In every IDE the early-stage agents are reasonably capable, yet their real-world effectiveness steadily declines over time; this is not aimed at any one vendor, it is true of all of them.
Several concepts currently exist:
- RAG, Retrieval-Augmented Generation
- Prompt Engineering
- Context Engineering
The goal of all three is to help the large model better understand human intent. Supplying more context is not necessarily better: the context must reach a certain quality, and poor-quality context actively harms comprehension.
That said, after a great deal of effort some may find that simply passing the original source files to the model produces the best results; the intermediate layers of prompt and context engineering can turn out to be ineffective or even detrimental.
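To make the idea concrete, here is a minimal, purely illustrative sketch of RAG-style context assembly. The chunking, the naive token-overlap scoring, and the prompt layout are all assumptions for demonstration, not Trae's (or any vendor's) actual implementation:

```python
# Toy RAG-style context assembly (illustrative only; names and prompt
# layout are assumptions, not how Trae or any specific IDE does it).

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a source file into fixed-size blocks of lines."""
    lines = text.splitlines()
    return ["\n".join(lines[i:i + size]) for i in range(0, len(lines), size)]

def score(query: str, passage: str) -> int:
    """Naive relevance score: count tokens shared with the query."""
    query_tokens = set(query.lower().split())
    return sum(1 for token in passage.lower().split() if token in query_tokens)

def build_prompt(query: str, files: dict[str, str], top_k: int = 3) -> str:
    """Retrieve the top-k most relevant chunks and prepend them as context."""
    candidates = [(name, part) for name, text in files.items() for part in chunk(text)]
    ranked = sorted(candidates, key=lambda item: score(query, item[1]), reverse=True)
    context = "\n\n".join(f"### {name}\n{part}" for name, part in ranked[:top_k])
    return f"Context:\n{context}\n\nTask: {query}"

# Example: the retrieved context is only as good as the scoring above.
files = {"auth.py": "def login(user, password):\n    ...", "README.md": "Project notes"}
print(build_prompt("fix the login password check", files, top_k=2))
```

The point of the paragraph above is exactly that a retriever this naive can feed the model low-quality chunks; when that happens, skipping the retrieval layer and handing over the original files can beat the "engineered" context.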
Trae implements all three approaches, yet so far I have not felt any class-leading experience from them.
Performance Issues
Many people, myself included, have run into performance problems; Trae is clearly the outlier in the VS Code family. Although I praised its frontend design above, it stutters heavily in day-to-day use.
Trae may have modified VS Code so deeply that keeping up with upstream is unlikely, and its base may stay locked at some older VS Code release.
Some of my extensions run sluggishly in Trae, and some of their features no longer work correctly; this issue may persist.
Privacy Policy
Trae International provides its privacy policy here: https://www.trae.ai/privacy-policy
The Trae IDE's interface supports Chinese, English, and Japanese; its privacy policy is offered in nine languages, none of them Chinese.
In simple terms:
- Trae collects and shares data with third parties
- Trae provides no privacy settings; using it means accepting the policy
- Trae's data storage, protection, and sharing follow the laws of certain countries/regions, and China is not among them
Conclusion
Trae's marketing is heavy, which may be deeply tied to its corporate culture; going forward it may well become a very vocal IDE on social media. Because its capabilities do not match the noise, I will stop following it. ByteDance's in-house models are not the strongest, so they may need training data to make those models more competitive. The privacy policy is unfriendly and opens the door wide to data collection.
Based on my long-term experience with similar dev tooling, the real competitive edge lies in the model, not in anything else; in other words, a CLI is enough for vibe coding.
Trae's pricing is extremely cheap: you can repeatedly buy 600 Claude calls for $3, making it the cheapest tool on the market that offers Claude.
From this I infer that Trae is in fact a data-harvesting product launched to train ByteDance’s own models and to build its core competency.