First Impressions of GPT-5 Integration in Visual Studio Code Copilot
Jazzlike_Course_9895 shares initial experiences using GPT-5 in Visual Studio Code Copilot, highlighting differences from previous AI models and discussing how GPT-5’s analytic code feedback compares to alternatives like Claude 4 and GPT-4.1.
Author: Jazzlike_Course_9895
Overview
The post discusses the availability of GPT-5 within GitHub Copilot in Visual Studio Code, offering initial impressions and early user experience feedback. The author compares GPT-5’s code summarization and improvement suggestions to GPT-4.1 and Claude 4, evaluating output quality, analytic focus, and the practical impact on coding workflows.
Availability and Setup
- GPT-5 is now reportedly enabled for some users in Visual Studio Code’s Copilot integration.
- Users should check and enable the feature within their Copilot settings.
- Users on some other subscriptions (such as Windsurf) report seeing multiple GPT-5 variants (low, medium, and high reasoning levels), with some variants offered at no credit cost.
Model Comparison Table
| Model | Summary Style | Number of Points | Example Code | Focus |
|---|---|---|---|---|
| GPT-5 | Very analytic, long, less human | 10 (+ subpoints) | Yes | Mix of code & generic advice |
| GPT-4.1 | Short, limited analysis | 6 (with fluff) | Yes | Partial code focus |
| Claude 4 | Concise, code-centric | 5 | Yes | Strictly code improvements |
- GPT-5 produces the lengthiest, most analytic summaries but sometimes includes excessive generic advice that may not be directly actionable.
- GPT-4.1’s summaries are considered the weakest of the three, showing some improvement but still lacking code-centric feedback.
- Claude 4 is praised for its “grumpy senior dev” style: direct, practical, code-focused suggestions and concise output without filler.
User Observations
- GPT-5’s approach is seen as more academic, with detailed lists and analytic explanations, but sometimes loses the practical, human touch.
- Claude 4 offers precise code analysis and actionable improvements, favored by the author for this use case.
- Disagreement about GPT-5’s value at its “1 premium request” price: it costs the same as Claude 4 while delivering less perceived benefit.
- Additional feedback on different subscription plans (e.g., Windsurf) and the lack of multiple GPT-5 variants for some users.
- Speculation about model pricing and whether direct API access would be more cost-effective (see the rough cost sketch below).
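The cost-effectiveness question in the last bullet comes down to simple arithmetic. The Python sketch below compares paying per Copilot premium request against calling the model API directly; the post itself only mentions the “1 premium request” multiplier, so every price, multiplier, and token count here is an illustrative assumption and should be checked against current GitHub Copilot and OpenAI pricing pages.

```python
# Back-of-the-envelope comparison: Copilot premium requests vs. direct API use.
# All figures are illustrative assumptions, not confirmed pricing.

PREMIUM_REQUEST_OVERAGE_USD = 0.04   # assumed cost per extra premium request
GPT5_MULTIPLIER = 1.0                # post reports GPT-5 billed at "1 premium request"

API_INPUT_USD_PER_MTOK = 1.25        # assumed GPT-5 API input price per million tokens
API_OUTPUT_USD_PER_MTOK = 10.00      # assumed GPT-5 API output price per million tokens


def copilot_cost(requests: int, multiplier: float) -> float:
    """Cost of `requests` chat turns once the monthly allowance is exhausted."""
    return requests * multiplier * PREMIUM_REQUEST_OVERAGE_USD


def api_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Cost of sending the same turns straight to the API."""
    return requests * (
        in_tokens / 1e6 * API_INPUT_USD_PER_MTOK
        + out_tokens / 1e6 * API_OUTPUT_USD_PER_MTOK
    )


if __name__ == "__main__":
    n = 200  # hypothetical code-review prompts in a month beyond the allowance
    print(f"Copilot (GPT-5 @ 1x): ${copilot_cost(n, GPT5_MULTIPLIER):.2f}")
    print(f"Direct API (GPT-5):   ${api_cost(n, in_tokens=3000, out_tokens=1500):.2f}")
```

Whether the API route wins depends entirely on prompt size and how many requests exceed the plan’s included allowance, which is why the thread treats this as speculation rather than settled advice.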
Anecdotal Conclusion
- The author concludes that, for now, Claude 4 better meets coding feedback needs, while GPT-5 is promising but perhaps too verbose for practical development tasks.
- Further experimentation is planned, especially for agent usage and more complex reasoning prompts.
Community Sentiment
- Some frustration about pricing, expectation management, and the desire for more flexible or lower-cost access to advanced models in Copilot.
- Users note differences in reasoning quality, warmth of feedback, and realism between models.