Building a VSCode Focus Extension: An AI-Assisted Coding Journey
Recently, I stumbled upon a tweet by Will McGugan that caught my attention. The tweet showcased iA Writer’s focus function—a clever feature that highlights the current “block” (or paragraph) while dimming everything else around it.
Since I’d never tackled VSCode extension development before, this seemed like the perfect opportunity for some experimental “vibe coding.” The concept was simple enough: create something similar for code editors, but instead of paragraphs, focus on code blocks and symbol hierarchies.
tl;dr
I built a VSCode extension that dims code based on cursor location using AI assistance. Key takeaways: Used “meta-prompting” (AI helping write prompts for AI), spent ~$0.50 for 291 lines of code, discovered the FoldingRangeProvider API, and learned that AI excels at boilerplate but struggles with integration challenges. The extension works by leveraging DocumentSymbolProvider to create progressive dimming effects based on symbol hierarchy.
The Meta-Prompt Approach
Rather than diving straight into code, I decided to craft a comprehensive prompt first. Eventually, ChatGPT came up with this prompt:
< BEGIN OF PROMPT >
Build a Visual Studio Code extension named “VSCode Focus” that enhances code readability by dynamically dimming unrelated code based on the cursor’s location in the document’s symbol hierarchy. The extension should implement the following behavior:
🔍 Behavior:
- Symbol Hierarchy Detection
  - Use the DocumentSymbolProvider to retrieve the hierarchical structure of the open document.
  - Parse the symbol tree to determine which symbols contain the current cursor position.
  - Identify the full path of nested symbols from the outermost to the innermost level where the cursor resides.
- Dynamic Opacity Calculation
  - Apply a progressive dimming effect:
    - Innermost symbol (cursor location): 100% opacity (no dimming).
    - Each parent level: Reduce opacity by a configurable opacityIncrement.
    - Code outside the symbol hierarchy: Dimmed to a configurable minimumOpacity.
- Optional Block Support
  - If vscode-focus.includeBlocks is true, include blocks like if, while, for, switch, etc., in the hierarchy analysis using AST or custom parsing.
- Live Updates
  - Automatically update dimming effects as the cursor moves or the file content changes.
⚙️ Configuration (via Settings UI):
Register the following settings under vscode-focus:
"contributes": {
"configuration": {
"title": "VSCode Focus",
"properties": {
"vscode-focus.enabled": {
"type": "boolean",
"default": true,
"description": "Enable/disable the focus highlighting effect."
},
"vscode-focus.minimumOpacity": {
"type": "number",
"default": 0.1,
"minimum": 0.01,
"maximum": 0.9,
"description": "Minimum opacity for the outermost dimmed code."
},
"vscode-focus.opacityIncrement": {
"type": "number",
"default": 0.2,
"minimum": 0.05,
"maximum": 0.5,
"description": "Opacity increment per hierarchy level."
},
"vscode-focus.includeBlocks": {
"type": "boolean",
"default": false,
"description": "Include control flow blocks like 'if', 'while', etc., in focus detection."
}
}
}
}
🎨 Implementation Notes:
- Use the TextEditorDecorationType API to apply dimming styles to ranges.
- Use debounce when handling cursor change or document edits for performance.
- Respect multiple cursors but apply dimming based on the first/main selection.
- Provide a command Toggle VSCode Focus to quickly enable/disable the effect.
< END OF PROMPT >
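Before getting into the results, it's worth seeing how those settings would be consumed on the extension side. Here's a minimal TypeScript sketch (my own illustration, not the generated code), assuming the vscode-focus configuration section from the JSON above:

```typescript
import * as vscode from 'vscode';

// Sketch: read the settings declared in the contributes block above.
// Defaults mirror the JSON; the generated extension may structure this differently.
function readSettings() {
  const cfg = vscode.workspace.getConfiguration('vscode-focus');
  return {
    enabled: cfg.get<boolean>('enabled', true),
    minimumOpacity: cfg.get<number>('minimumOpacity', 0.1),
    opacityIncrement: cfg.get<number>('opacityIncrement', 0.2),
    includeBlocks: cfg.get<boolean>('includeBlocks', false),
  };
}
```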
This prompt was itself generated by an LLM. After several iterations with ChatGPT (gpt-3.5), it produced this fairly comprehensive specification. I’m still debating whether to call this approach “vibe prompting” or “meta vibe coding”—but whatever the name, it felt like a natural way to bridge the gap between a vague idea and executable requirements.
From Vision to Specification
The inputs I fed into ChatGPT were refreshingly simple: a screenshot from a similar extension and some ASCII art sketching out my vision.
Here’s the ASCII art that captured what I had in mind:
```
// hierarchy 1 (inactive)
LEVEL 1 60% DIM = OPACITY 40%
LEVEL 1 60% DIM = OPACITY 40%

// hierarchy 2 (the active hierarchy)
LEVEL 1 60% DIM = OPACITY 40%
LEVEL 2 40% DIM = OPACITY 60%
LEVEL 3 20% DIM = OPACITY 80%
LEVEL 3 20% DIM = OPACITY 80%
LEVEL 4 NO DIM = OPACITY 100% ← Cursor is here
LEVEL 4 NO DIM = OPACITY 100%

// hierarchy 3 (inactive)
LEVEL 1 60% DIM = OPACITY 40%
```
The concept was straightforward: create a gradual dimming effect where the current code context (method, class, namespace) remains fully visible, while surrounding code fades progressively based on how far removed it is from the cursor’s current location.
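In code, that fade reduces to a clamped linear function of nesting depth. Here’s a minimal TypeScript sketch (my own illustration, not the generated code), using the minimumOpacity and opacityIncrement settings from the prompt:

```typescript
// Sketch: compute per-level opacity for an active hierarchy `depth` levels
// deep, where level depth - 1 holds the cursor. Defaults mirror the prompt's
// settings; the function name is illustrative.
function opacityForLevel(
  level: number, // 0 = outermost symbol, depth - 1 = the cursor's symbol
  depth: number, // total nesting depth of the active hierarchy
  minimumOpacity = 0.1,
  opacityIncrement = 0.2
): number {
  const stepsFromCursor = depth - 1 - level;
  // The innermost level stays fully visible; each step outward dims further,
  // but never below the configured floor.
  return Math.max(minimumOpacity, 1 - stepsFromCursor * opacityIncrement);
}

// A 4-level hierarchy yields [0.4, 0.6, 0.8, 1.0], matching the ASCII sketch.
console.log([0, 1, 2, 3].map(level => opacityForLevel(level, 4)));
```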
The Development Experience
Armed with this detailed prompt, I put it to work with both GitHub Copilot and Claude Sonnet 4. The results were… educational.
Getting Started: The Activation Struggle
The initial setup was surprisingly smooth—both AI assistants generated reasonable starting code that followed the specification. However, getting the extension to actually activate proved trickier than expected. It took several “still not working” follow-up prompts before the code became truly functional. There’s something humbling about watching an AI struggle with the same activation issues that trip up human developers.
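For anyone hitting the same wall: nothing renders unless package.json’s "main" points at the compiled entry file and an activation event actually fires. A minimal sketch of the entry point, folding in the debounce from the prompt’s implementation notes (assumptions: an "onStartupFinished" activation event and a hypothetical updateDecorations helper):

```typescript
import * as vscode from 'vscode';

// Sketch of the extension entry point. Only runs if package.json's "main"
// points at this file's compiled output and an activation event fires,
// e.g. "activationEvents": ["onStartupFinished"].
export function activate(context: vscode.ExtensionContext) {
  // Debounce per the prompt's implementation notes: coalesce rapid cursor
  // movements and edits into at most one decoration pass per 100 ms.
  let timer: ReturnType<typeof setTimeout> | undefined;
  const scheduleUpdate = (editor: vscode.TextEditor) => {
    clearTimeout(timer);
    timer = setTimeout(() => updateDecorations(editor), 100);
  };

  context.subscriptions.push(
    vscode.window.onDidChangeTextEditorSelection(e => scheduleUpdate(e.textEditor)),
    vscode.workspace.onDidChangeTextDocument(() => {
      const editor = vscode.window.activeTextEditor;
      if (editor) scheduleUpdate(editor);
    })
  );
}

function updateDecorations(editor: vscode.TextEditor) {
  // Hypothetical: compute the symbol path and apply dimming (sketched below).
}
```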
The Mess Factor
During development, Copilot developed some interesting habits. It generated numerous test files that weren’t actually needed, then forgot to configure the TypeScript compiler to ignore them. This created a cascade of build errors that took time to untangle. Even more amusing was its tendency to create an “incredible amount of markdown files”—apparently its way of keeping notes on the development process. These digital breadcrumbs painted a picture of an AI trying to maintain context across a complex task.
The Dimming Logic
Like an intern, the AI initially struggled with the core logic of applying the dimming effect. It got the logic right in one of the first prompts, but then applied the rendering from the inside (deepest symbol level) to the outside (outermost symbol level). While that order sounds logical, it meant each later-applied dimming effect overrode the earlier ones. And since symbols can nest deeply, this led to badly garbled visual results.
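The fix is to walk the symbol path from the outermost level inward, so the decoration applied last (the innermost, fully opaque one) wins over the dimmer ranges it sits inside. A sketch of that ordering, reusing the opacityForLevel helper from earlier (in real code the decoration types should be cached and disposed, not recreated per update):

```typescript
import * as vscode from 'vscode';

declare function opacityForLevel(level: number, depth: number): number; // from the earlier sketch

// Sketch of the ordering fix: iterate OUTERMOST-first so the innermost
// (brightest) decoration is applied last and overrides the broader,
// dimmer ranges it is nested inside.
function applyDimming(editor: vscode.TextEditor, symbolPath: vscode.DocumentSymbol[]) {
  symbolPath.forEach((symbol, level) => {
    const decoration = vscode.window.createTextEditorDecorationType({
      opacity: opacityForLevel(level, symbolPath.length).toString(),
    });
    editor.setDecorations(decoration, [symbol.range]);
  });
}
```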
Technical Discoveries
From an implementation standpoint, the AI made some smart architectural choices. As expected from the original prompt, it leveraged the DocumentSymbolProvider—essentially tapping into the Language Server Protocol’s understanding of code structure. This was the obvious foundation for any symbol-hierarchy-based dimming system.
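In practice that lookup goes through a built-in command rather than implementing a provider yourself. A minimal sketch (assuming the provider returns hierarchical DocumentSymbol objects; some language servers return flat SymbolInformation lists instead):

```typescript
import * as vscode from 'vscode';

// Sketch: ask the language server for the document's symbol tree, then walk
// it to collect the chain of symbols containing the cursor, outermost first.
async function symbolPathAtCursor(editor: vscode.TextEditor): Promise<vscode.DocumentSymbol[]> {
  const symbols = await vscode.commands.executeCommand<vscode.DocumentSymbol[]>(
    'vscode.executeDocumentSymbolProvider',
    editor.document.uri
  );
  const path: vscode.DocumentSymbol[] = [];
  let current = symbols ?? [];
  let match: vscode.DocumentSymbol | undefined;
  // Descend one level at a time, keeping whichever child spans the cursor.
  while ((match = current.find(s => s.range.contains(editor.selection.active)))) {
    path.push(match);
    current = match.children;
  }
  return path;
}
```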
The more interesting development came when I pushed to extend the dimming beyond just symbols (classes, functions, etc.) to include logical blocks like if, for, and switch statements. Copilot suggested investigating the FoldingRangeProvider, which was completely new to me. This was genuinely insightful—the folding provider already understands code structure in a way that could complement symbol analysis. Unfortunately, despite the solid theoretical foundation, we never quite got this extended functionality working reliably.
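For reference, querying it looks much like the symbol lookup. A sketch (note that folding ranges are line-based with no column information, which is part of what makes merging them with symbol ranges harder than it sounds):

```typescript
import * as vscode from 'vscode';

// Sketch: folding ranges cover if/for/switch bodies that the symbol tree
// ignores, so they could supply the extra nesting levels for includeBlocks.
async function foldingRangesAtCursor(editor: vscode.TextEditor): Promise<vscode.FoldingRange[]> {
  const ranges = await vscode.commands.executeCommand<vscode.FoldingRange[]>(
    'vscode.executeFoldingRangeProvider',
    editor.document.uri
  );
  const line = editor.selection.active.line;
  // Keep every folding range spanning the cursor's line, outermost first.
  return (ranges ?? [])
    .filter(r => r.start <= line && line <= r.end)
    .sort((a, b) => a.start - b.start);
}
```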
The Economics of AI-Assisted Development
Running this experiment provided some interesting data points on the practical costs of AI-assisted coding.
I kept everything within a single session, which meant that at some point the conversation history got compressed to manage context limits. According to the AI’s own accounting, the entire project consumed roughly 60,000 input tokens and 20,000 output tokens—translating to about $0.50 in total costs.
The final extension weighs in at 291 lines of code, which works out to approximately 0.17 cents per line. I’m not sure this metric means much in isolation, but it’s an interesting data point for anyone thinking about the economics of AI-assisted development. Of course, this doesn’t account for the value of learning, experimentation, or the time saved compared to building from scratch.
Reflections
This experiment highlighted both the potential and limitations of current AI coding assistants. They excel at generating boilerplate, suggesting architectural patterns, and even discovering APIs you might not know about. But they still struggle with the integration challenges that often make or break real-world projects.
The meta-prompting approach—using AI to help craft better specifications—feels like it has real potential. Starting with a well-structured prompt led to much better initial code than my usual approach of jumping straight into implementation details.
Most importantly, this wasn’t just about building a VSCode extension. It was about exploring a new way of working with AI tools, treating them as collaborative partners in both the design and implementation phases of software development.