The Annoying Things Copilot Still Inserts (And How to Kill Them Permanently)
I like GitHub Copilot. I genuinely do. It saves me time every single day. But there are moments — and they happen more often than I'd like to admit — where I want to reach through the screen and shake it.
You know the moments I'm talking about. You're in the zone, writing a function that you know exactly how to write, and Copilot helpfully suggests a twelve-line comment block explaining what a for loop does. Or it inserts an entire error handling framework around a three-line utility function. Or it decides that your plain JavaScript needs TypeScript-style interfaces expressed as JSDoc comments that are longer than the actual code.
After 30+ years of writing software and about two years of living with Copilot daily, I've cataloged the worst offenders and figured out how to shut each one down. Here's the field guide.
The Unnecessary Comment Epidemic
This is Copilot's most persistent sin. You write a function called getUserById and Copilot immediately wants to add:
/**
 * Gets a user by their ID
 * @param {string} id - The ID of the user
 * @returns {Object} The user object
 */
function getUserById(id) {
  return db.users.findOne({ _id: id });
}
The function is called getUserById. It takes an id. It returns a user. Every single character of that comment block is redundant information that will eventually drift out of sync with the actual code and become actively misleading.
Copilot does this constantly. It's been trained on millions of repositories where developers were taught that "good code has lots of comments," and it absorbed that lesson without absorbing the more important lesson: good code is self-documenting, and comments should explain why, not what.
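To make the why-not-what distinction concrete, here's a sketch (the function and the Gmail detail are illustrative, not from any particular codebase): the name carries the "what," and the only comment explains a "why" you couldn't recover from the code alone.

```javascript
// The function name says WHAT it does; no JSDoc needed.
function normalizeEmail(email) {
  var parts = email.toLowerCase().split("@");
  // WHY: Gmail ignores dots in the local part, so strip them to
  // avoid treating the same inbox as two different accounts.
  if (parts[1] === "gmail.com") {
    parts[0] = parts[0].split(".").join("");
  }
  return parts.join("@");
}

normalizeEmail("Jane.Doe@gmail.com"); // "janedoe@gmail.com"
```

Delete that one comment and the code still runs, but you'd lose the one piece of information the code itself can't tell you.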
The fix is two-fold. First, in your VS Code settings:
{
  "github.copilot.chat.codeGeneration.instructions": [
    {
      "text": "Do not add comments that merely restate what the code does. Only add comments when explaining non-obvious business logic or workarounds."
    }
  ]
}
Second — and this is the more powerful approach — create a .github/copilot-instructions.md file in your repository root:
## Code Style
- Do NOT add JSDoc comments to functions unless the function signature is genuinely ambiguous
- Do NOT add inline comments that restate what the code does
- Only add comments to explain WHY something is done, not WHAT is done
- Variable and function names should be self-documenting
That instruction file travels with the repo, which means everyone on your team gets the same Copilot behavior. It's one of the most underused features Copilot offers.
The Over-Engineering Instinct
Copilot has a deep bias toward complexity. Write a simple Express route handler, and it will suggest middleware patterns, error boundary classes, and dependency injection frameworks that belong in an enterprise Java codebase, not a Node.js API.
I was building a straightforward contact form endpoint the other day. I wrote:
router.post("/contact", function(req, res) {
And Copilot immediately suggested a 40-line function with input validation middleware, a dedicated error class, rate limiting logic, CSRF token verification, and a response envelope pattern. For a contact form.
The suggestion wasn't wrong per se — all of those are legitimate concerns. But they were concerns I was handling elsewhere in my middleware stack. Copilot doesn't know that. It sees a POST endpoint and assumes you need everything from scratch.
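For contrast, here's roughly what I actually wanted — a sketch with a hypothetical sendContactEmail helper standing in for the project's real mail module. Because upstream middleware already covers rate limiting, CSRF, and sanitization, the handler stays thin.

```javascript
// Hypothetical helper; in the real project this lives in a mail module.
function sendContactEmail(name, message, callback) {
  // Stand-in for the actual mail-service call.
  callback(null);
}

// The handler does business logic only; cross-cutting concerns are
// handled by middleware registered elsewhere in the stack.
function handleContact(req, res) {
  sendContactEmail(req.body.name, req.body.message, function (err) {
    if (err) {
      return res.status(500).json({ error: "Could not send message" });
    }
    res.json({ ok: true });
  });
}

// Wired up in the real app as: router.post("/contact", handleContact);
```

Twelve lines instead of forty, and every one of them is about the contact form.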
Here's what I add to my copilot-instructions.md to combat this:
## Architecture
- This project uses Express.js with middleware for cross-cutting concerns
- Rate limiting, CSRF protection, and input sanitization are handled at the middleware level
- Route handlers should focus on business logic, not infrastructure
- Prefer simple, flat functions over deeply nested abstractions
- Do not suggest design patterns unless explicitly asked
The key insight is that Copilot instruction files are where you encode your architectural decisions. Copilot doesn't know your architecture unless you tell it.
Wrong Patterns for Your Stack
This one drives me nuts. I work primarily in Node.js with CommonJS modules, using require() and module.exports. Copilot routinely suggests ESM syntax — import and export — in the middle of a CommonJS file. It's like it can't read the ten other require() calls at the top of the same file.
Similarly, I use var and function() declarations in my article code examples for maximum reader compatibility, and Copilot relentlessly "corrects" these to const, let, and arrow functions.
The settings fix:
{
  "github.copilot.chat.codeGeneration.instructions": [
    {
      "text": "Use CommonJS (require/module.exports), not ESM (import/export). Use var for variable declarations. Use function() declarations, not arrow functions."
    }
  ]
}
And in copilot-instructions.md:
## JavaScript Style
- Use CommonJS modules: require() and module.exports
- Use var for variable declarations
- Use function() declarations, not arrow functions
- Use function expressions for callbacks: function(err, result) { }
- Do not use async/await unless the existing code in the file uses it
That last rule is important. Copilot loves to introduce async/await into callback-based codebases. If your project uses callbacks or Promises with .then(), having Copilot randomly insert async functions creates inconsistency that's worse than either pattern alone.
The Phantom Import Problem
You're writing code in a file, and Copilot suggests a function call that uses a library you don't have installed. Not a typo — an actual npm package that it thinks you should be using based on what it's seen in training data.
I've had Copilot suggest lodash methods in projects where I intentionally avoid lodash. I've had it suggest moment for date formatting when I'm already using date-fns. I've had it suggest axios when the file already uses the built-in http module.
The instruction file fix:
## Dependencies
- Do not suggest imports from packages not already in package.json
- For HTTP requests, use the built-in http/https modules or the existing request library
- For dates, use native Date methods or date-fns (already installed)
- Do not suggest lodash — use native array/object methods
You're essentially building a "don't suggest these" list for your project. It feels tedious to set up, but once it's in place, the quality of suggestions jumps noticeably.
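As a sketch of what "use native array/object methods" means in practice, here are plain-JavaScript stand-ins for two common lodash calls (the names mirror the lodash ones purely for illustration):

```javascript
// Native replacement for _.uniq: keep the first occurrence of each value.
function uniq(list) {
  return list.filter(function (item, index) {
    return list.indexOf(item) === index;
  });
}

// Native replacement for _.groupBy: bucket items by a computed key.
function groupBy(list, keyFn) {
  return list.reduce(function (groups, item) {
    var key = keyFn(item);
    (groups[key] = groups[key] || []).push(item);
    return groups;
  }, {});
}
```

A handful of helpers like these covers most of what lodash gets pulled in for, without giving Copilot a dependency to keep suggesting.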
The Try-Catch Everything Approach
Copilot wraps everything in try-catch blocks. Everything. You write a function that reads a config value from an object, and Copilot wants to wrap it in try-catch with a custom error class, a logging call, and a fallback value.
// What I wrote
function getConfig(key) {
  return config[key];
}

// What Copilot suggested
function getConfig(key) {
  try {
    if (!config) {
      throw new ConfigurationError("Config not initialized");
    }
    if (!config.hasOwnProperty(key)) {
      throw new ConfigurationError("Missing config key: " + key);
    }
    return config[key];
  } catch (error) {
    logger.error("Failed to get config", { key: key, error: error });
    return undefined;
  }
}
Defensive programming has its place. But wrapping a property access in try-catch with a custom error class is the kind of over-engineering that makes codebases unreadable. If config is null, I want it to throw. That's a bug I need to know about, not silently swallow.
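For contrast, here's a sketch of try-catch where it actually earns its keep. JSON.parse is one of the few synchronous operations that genuinely throws on bad input (parseJsonSafe is a hypothetical helper name, not from any particular codebase):

```javascript
// A narrow try-catch around an operation that genuinely throws.
// Note the failure is reported, not silently swallowed.
function parseJsonSafe(text) {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}
```

The difference from the getConfig example above: the throw here comes from malformed external input, not from a programming bug you'd want to surface.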
Add this to your instructions:
## Error Handling
- Do not add try-catch blocks unless the operation genuinely can throw (network calls, file I/O, JSON parsing)
- Let programming errors (null references, type errors) throw naturally — they indicate bugs
- Do not create custom error classes unless the project already has them
- Do not add fallback/default values unless the function signature explicitly requires them
The Auto-Complete That Finishes Your Thought Wrong
This isn't a configuration problem — it's a workflow problem. You start typing a function name, and Copilot completes it with an implementation that's close to what you want but subtly wrong. You tab to accept it because the first three lines look right, and then you spend five minutes debugging line seven.
I've trained myself to deal with this through a simple habit: I never accept a Copilot suggestion longer than five lines without reading every line first. For longer suggestions, I use Copilot Chat to generate the code in a side panel where I can review it before inserting it.
The VS Code setting that helps most here:
{
  "github.copilot.editor.enableAutoCompletions": true,
  "editor.inlineSuggest.showToolbar": "always"
}
Keeping the inline suggest toolbar visible gives you a constant visual reminder that you're looking at a suggestion, not confirmed code. It's a small thing, but it helps me stay in review mode rather than acceptance mode.
The Test File Disaster
I'll say it plainly: Copilot generates terrible tests. It writes tests that check the implementation rather than the behavior, mocks things that shouldn't be mocked, and produces test names that are just the function name restated.
// Copilot's typical test suggestion
describe("getUserById", function() {
  it("should get user by id", function() {
    var result = getUserById("123");
    expect(result).toBeDefined();
  });
});
That test tells you nothing. What should the user object contain? What happens with an invalid ID? What about a user that doesn't exist?
My test instructions in copilot-instructions.md:
## Testing
- Test behavior, not implementation
- Test names should describe the expected outcome: "returns null when user does not exist"
- Do not mock database calls in integration tests — use a test database
- Each test should have a meaningful assertion, not just toBeDefined()
- Include edge cases: null inputs, empty strings, missing fields
This dramatically improves the quality of test suggestions. Copilot needs to know your testing philosophy because it's seen every testing philosophy in existence and defaults to the most superficial one.
The Complete Settings Blueprint
Here's my full VS Code settings.json block for taming Copilot:
{
  "github.copilot.editor.enableAutoCompletions": true,
  "github.copilot.chat.codeGeneration.instructions": [
    {
      "text": "Use CommonJS (require/module.exports). Use var and function(). No arrow functions. No unnecessary comments. No try-catch around simple operations."
    }
  ],
  "editor.inlineSuggest.showToolbar": "always",
  "github.copilot.chat.codeGeneration.useReferencedFiles": true
}
And here's a starter .github/copilot-instructions.md template that you can adapt:
# Copilot Instructions
## Code Style
- [Your language/module conventions]
- Do not add comments that restate what the code does
- Only comment to explain non-obvious logic or workarounds
## Architecture
- [Your framework and middleware setup]
- Prefer simple functions over deep abstractions
- Do not suggest design patterns unless asked
## Dependencies
- Do not suggest imports from packages not in package.json
- [List your preferred libraries for common tasks]
## Error Handling
- Only use try-catch for operations that genuinely throw
- Let programming errors surface naturally
- Do not add defensive null checks on every parameter
## Testing
- Test behavior, not implementation
- Test names describe expected outcomes
- Include edge cases
The Copilot Chat System Prompt Trick
One last thing most people don't know: you can set a persistent system prompt for Copilot Chat at the workspace level, so it applies to anyone who opens the project. Put the instructions in .vscode/settings.json in your repository (as opposed to your user-level settings):
{
  "github.copilot.chat.codeGeneration.instructions": [
    {
      "text": "You are assisting on a Node.js Express project using CommonJS modules, MongoDB, and Pug templates. Keep suggestions simple and aligned with existing code patterns in the project."
    }
  ]
}
This gives Copilot Chat ongoing context about your project that persists across conversations. Combined with the instruction file, you're essentially giving Copilot a project-specific personality that matches your coding style.
Is It Worth the Setup?
Absolutely. I spent maybe an hour total setting up my instruction files and VS Code settings, and the reduction in annoying suggestions has been dramatic. I'd estimate that before these configurations, I rejected about 60% of Copilot's inline suggestions. After? I reject maybe 25%.
That's not just about less annoyance — it's about less context-switching. Every time Copilot suggests something wrong, your brain has to shift from "writing code" mode to "reviewing someone else's code" mode. Reducing those shifts adds up to real productivity gains across a full working day.
Copilot is a genuinely useful tool. But like any tool, it works best when it's calibrated to the person using it. Take the time to set up your instruction files. Your future self, staring at a suggestion that actually matches your codebase, will thank you.
Shane Larson is a software engineer and technical author based in Caswell Lakes, Alaska. He builds things at Grizzly Peak Software and occasionally argues with his AI coding assistants. His book on training large language models is available on Amazon.