Cursor Adds Subagents, Skills, and Image Generation - Major Agent Enhancements
Cursor implements subagents, agent skills, and image generation. Parallel execution, improved context management, and dynamic skill application enable handling of more complex, long-running tasks.
On February 4th, Cursor announced significant enhancements to its agent capabilities. The addition of subagents, skills, and image generation improves the system’s ability to handle increasingly complex, long-running tasks across codebases.
Subagents
Subagents are independent agents specialized to handle discrete parts of a parent agent’s task. They run in parallel, use their own context, and can be configured with custom prompts, tool access, and models.
This architecture results in faster overall execution, more focused context in the main conversation, and specialized expertise for each subtask. Cursor includes default subagents for researching codebases, running terminal commands, and executing parallel work streams, automatically improving conversation quality in both the editor and Cursor CLI.
Custom subagents can be defined as well. Learn more in the official documentation.
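As a rough sketch of what a custom subagent definition might look like, a markdown file with YAML frontmatter is a plausible shape (the frontmatter field names, file contents, and structure here are assumptions for illustration, not confirmed by this announcement; see the official documentation for the actual schema):

```markdown
---
# Hypothetical frontmatter — field names are assumptions
name: migration-reviewer
description: Reviews database migration files for risky operations
tools: read, grep
---

You are a specialist reviewer for database migrations. When handed a
migration file, check for dropped columns, missing indexes on new
foreign keys, and non-reversible operations, then report a concise
summary of findings back to the parent agent.
```

The custom prompt, tool access, and model mentioned above would each map to a field or section of a definition like this.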
Skills
Cursor now supports Agent Skills in the editor and CLI. Agents can discover and apply skills when domain-specific knowledge and workflows are relevant. Skills can also be invoked using the slash command menu.
Skills are defined in SKILL.md files, which can include custom commands, scripts, and instructions for specializing the agent’s capabilities based on the task at hand.
Compared to always-on, declarative rules, skills are better suited for dynamic context discovery and procedural “how-to” instructions. This gives agents more flexibility while keeping context focused.
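A minimal SKILL.md might look like the following sketch. The frontmatter fields shown (name, description) follow the common Agent Skills convention; the skill name and the steps themselves are invented for illustration:

```markdown
---
name: changelog-writer
description: Drafts a changelog entry from recent commits. Use when the
  user asks for release notes or a changelog update.
---

# Changelog Writer

1. Run `git log --oneline <last-tag>..HEAD` to collect recent commits.
2. Group the commits into Added / Changed / Fixed sections.
3. Append the entry to CHANGELOG.md in the project's existing format.
```

Because the agent discovers the skill from its description, the procedural "how-to" body is only loaded into context when the task actually calls for it.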
Image Generation
Users can now generate images directly from Cursor’s agent. Describe the image in text or upload a reference to guide the underlying image generation model (Google Nano Banana Pro).
Images are returned as an inline preview and saved to the project’s assets/ folder by default. This is useful for creating UI mockups and product assets, and for visualizing architecture diagrams.
Cursor Blame
On the Enterprise plan, Cursor Blame extends traditional git blame with AI attribution, showing exactly what was AI-generated versus human-written.
When reviewing code, each line links to a summary of the conversation that produced it, providing context and reasoning behind the change. Cursor Blame distinguishes between code from Tab completions, agent runs (broken down by model), and human edits, and tracks AI usage patterns across a team’s codebase.
Clarification Questions from Agents
The interactive Q&A tool previously limited to agents in Plan and Debug modes is now available in any conversation, letting agents ask clarifying questions whenever they need more information.
While waiting for a response, the agent can continue reading files, making edits, or running commands, then incorporate the answer as soon as it arrives. Custom subagents and skills can also use this tool by being instructed to “use the ask question tool.”
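Building on the quoted phrasing above, a custom skill could opt into clarification questions with an instruction like this hypothetical fragment (the skill name, description, and steps are invented for illustration):

```markdown
---
name: deploy-helper
description: Walks through a deployment, pausing for approval at risky steps.
---

Before running any command that modifies production infrastructure,
use the ask question tool to confirm with the user. While waiting for
the answer, continue preparing the remaining deployment steps.
```

The same one-line instruction ("use the ask question tool") would apply equally to a custom subagent's prompt.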