This post summarizes practical experience using GitHub Copilot in Agent mode. One useful setup step is adding custom instructions for code and test generation in VS Code's settings.json:
```json
{
  "github.copilot.chat.codeGeneration.instructions": [
    {
      "text": "Only modify files under ./script/; leave others unchanged."
    },
    {
      "text": "If the target file exceeds 1,000 lines, place new functions in a new file and import them; if the change would make the file too long you may disregard this rule temporarily."
    }
  ],
  "github.copilot.chat.testGeneration.instructions": [
    {
      "text": "Generate test cases in the existing unit-test files."
    },
    {
      "text": "After any code changes, always run the tests to verify correctness."
    }
  ]
}
```
Break large tasks into small ones, one chat session per micro-task: a bloated context scatters the model's attention.
The right amount of context for a single chat is hard to judge; too little and too much both lead to misunderstanding.
DeepSeek's model reportedly avoids this attention problem, but it is available only in Cursor via the DeepSeek API, and its effectiveness there is unknown.
Understand the token mechanism: input tokens are cheap and fast, output tokens are expensive and slow.
If a single file is huge but only three lines need change, the extra context and output still consume many tokens and time.
Therefore keep files compact; avoid massive files and huge functions. Split large ones early and reference them.
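The split-and-import pattern can be sketched as follows. All file and function names here are illustrative, not taken from any real project; in practice the helper would live in its own small file (e.g. under ./script/) and the original file would import it:

```python
# In a real codebase these two functions would sit in two files:
# a new helper module (say script/report_helpers.py) and the original
# large script, which now just imports the helper instead of growing.

def format_row(row):
    """report_helpers.py: render one record as a tab-separated line."""
    return "\t".join(str(v) for v in row)


def build_report(rows):
    """report.py: would do `from report_helpers import format_row`."""
    return "\n".join(format_row(r) for r in rows)


print(build_report([(1, "ok"), (2, "fail")]))
```

Splitting this way keeps each file small enough that the Agent can load it fully into context without wasting tokens on unrelated code.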
Understanding relies on comments and test files. Supplement code with sufficient comments and test cases so Copilot Agent can grasp the business.
The code and comments produced by the Agent itself often act as a quick sanity check—read them to confirm expectations.
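As a small illustration of this, a documented function plus a matching test case gives the Agent both the intent (comments) and a checkable contract (the test). The pricing rule and all names below are invented for the example:

```python
import unittest


def net_price(gross, discount_rate):
    """Business rule (hypothetical): discounts above 50% are capped at 50%."""
    rate = min(discount_rate, 0.5)  # cap per the pricing policy
    return round(gross * (1 - rate), 2)


class TestNetPrice(unittest.TestCase):
    """Would live in the existing unit-test file, per the instruction above."""

    def test_discount_is_capped(self):
        self.assertEqual(net_price(100.0, 0.8), 50.0)

    def test_normal_discount(self):
        self.assertEqual(net_price(100.0, 0.1), 90.0)


if __name__ == "__main__":
    unittest.main()
```

With the comment and the tests in place, the Agent can both infer the business rule and verify its own changes against it.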
Generate baseline code for the feature, then its tests, then adjust the logic. The Agent can debug autonomously and self-validate.
It will ask permission to run tests, read the terminal output, determine correctness, and iterate on failures until tests pass.
In other words, what you need most is solid domain understanding; you rarely have to write much code by hand. Prolonged debugging occurs only when both the test code and the business code are wrong, so that the Agent can produce neither correct tests nor correct logic.
To restate the token cost model: input context is cheap, output code is costly, and even unchanged lines in a file may count toward output; you can see this in how slowly unmodified code streams back.
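A back-of-the-envelope calculation shows why this matters. The per-token prices below are made up purely for illustration, with output priced at four times input:

```python
# Hypothetical prices, chosen only to illustrate the asymmetry.
IN_PRICE = 0.000001   # $ per input token (illustrative)
OUT_PRICE = 0.000004  # $ per output token (illustrative)


def request_cost(input_tokens, output_tokens):
    """Total cost of one model request under the made-up prices above."""
    return input_tokens * IN_PRICE + output_tokens * OUT_PRICE


# Editing 3 lines in a large file: same input context either way,
# but re-emitting the whole file makes output dominate the cost.
emit_only_diff = request_cost(6000, 60)
emit_whole_file = request_cost(6000, 6000)
print(emit_only_diff, emit_whole_file)
```

Under these assumed prices, re-emitting the whole file costs several times more than emitting only the changed lines, which is exactly why small files make Agent interactions cheaper and faster.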
Keep individual files small where possible; as you use the Agent, you will clearly notice interactions speeding up or slowing down with file size.