Agentic context fetching

Learn about agentic context fetching, a mini-agent that uses search and tools to retrieve context.

Cody's agentic context fetching experience evaluates the context it already has and fetches any additional context it needs (from MCP, OpenCtx, the terminal, and other sources) to provide enhanced, context-aware chat capabilities. It extends Cody's functionality by proactively understanding your coding environment and gathering relevant information based on your requests before responding, which yields noticeably higher-quality responses.

This experience aims to reduce the learning curve associated with traditional coding assistants by minimizing users' need to provide context manually. It achieves this through agentic context retrieval, where the AI autonomously gathers and analyzes context before generating a response.

Capabilities of agentic chat

The agentic context fetching experience leverages several key capabilities, including:

  • Proactive context gathering: Automatically gathers relevant context from your codebase, project structure, and current task
  • Agentic context reflection: Reviews the gathered context to ensure it is comprehensive and relevant to your query
  • Iterative context improvement: Performs multiple review loops to refine the context and ensure a thorough understanding
  • Enhanced response accuracy: Leverages comprehensive context to provide more accurate and relevant responses, reducing the risk of hallucinations

What can agentic context fetching do?

Agentic context fetching can help you with the following:

Tool Usage

It has access to a suite of tools for retrieving relevant context. These tools include:

  • Code Search: Performs code searches
  • Codebase File: Retrieves the full content from a file in your codebase
  • Terminal: Executes shell commands in your terminal
  • Web Browser: Searches the web for live context
  • MCP: Configure MCP and add servers to fetch external context
  • OpenCtx: Any OpenCtx provider can be used by the agent

It integrates seamlessly with external services, such as web content retrieval and issue tracking systems, using OpenCtx providers. To learn more, read the OpenCtx docs.
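As a rough sketch of what MCP configuration can look like in an editor's settings: MCP servers are commonly registered through an OpenCtx provider entry. The provider URL, setting keys, and file path below are illustrative assumptions, not a definitive reference; check the current MCP/OpenCtx documentation for the exact schema your version expects.

```json
{
  "openctx.providers": {
    "https://openctx.org/npm/@openctx/provider-modelcontextprotocol": {
      "nodeCommand": "node",
      "mcp.provider.uri": "file:///path/to/your/mcp-server/build/index.js"
    }
  }
}
```

Once a server is registered this way, the agent can call it as one more context source alongside code search and file retrieval.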

Terminal access is not supported on the web. It currently works only with the VS Code, JetBrains, and Visual Studio editor extensions.

Terminal access

Agentic context fetching can use the CLI Tool to request the execution of shell commands and gather context from your terminal. This enhances its context-gathering capabilities, but any information accessible via your terminal could potentially be shared with the LLM, so avoid requesting information you don't want to share. Here's what you should consider:

  • Requires user consent: Agentic context fetching will pause and ask for permission each time before executing any shell command.
  • Trusted workspaces only: Commands can only be executed within trusted workspaces with a valid shell
  • Potential data sharing: Any terminal-accessible information may be shared with the LLM

Commands are generated by the agent/LLM based on your request. Avoid asking it to execute destructive commands.
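To make the consent model concrete, here is an illustrative (hypothetical) example of the kind of read-only commands an agent might propose for context gathering; in practice each command runs only after you approve it. The sketch uses a throwaway directory so it is self-contained.

```shell
# Illustrative, read-only commands an agent might propose for context
# gathering. A throwaway directory stands in for your project.
workdir=$(mktemp -d)
printf 'TODO: handle timeout\n' > "$workdir/notes.txt"

# Inspect project layout
ls "$workdir"                 # prints: notes.txt

# Search for markers such as TODO comments
grep -rn "TODO" "$workdir"

rm -rf "$workdir"
```

Commands like these are safe to approve because they only read state; a command that writes, deletes, or sends data deserves much closer scrutiny before you grant consent.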

Use cases

Agentic context fetching can assist you with a wide range of tasks, including:

  • Improved response quality: Delivers better, more accurate responses, making the additional processing time spent on context gathering a non-issue
  • Error resolution: Automatically identifies error sources and suggests fixes by analyzing error logs
  • Better unit tests: Automatically includes imports and other missing context to generate better unit tests

Enable agentic context fetching

Getting agentic context fetching access for Pro users

Pro users can find the agentic context fetching option in the LLM selector drop-down.

agentic context fetching interface

Getting agentic context fetching access for Enterprise customers

Agentic context fetching uses smaller models from the Gemini, Claude, and GPT families for reflection steps and whichever model you choose from the model selector for the final response. This provides a good balance between quality and latency. If none of the smaller models are available on your instance, we fall back to the model chosen in the model selector for reflection. We use the latest versions of these models and can fall back to older versions when necessary. The default models may be changed to optimize for quality and/or latency.

Terminal access is disabled by default. To enable it, set the agentic-chat-cli-tool-experimental feature flag.
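On Sourcegraph Enterprise instances, feature flags are typically created by a site admin as boolean flags. A minimal sketch of the flag to create (the admin UI location and field names may vary by version):

```
Name:  agentic-chat-cli-tool-experimental
Type:  Boolean
Value: true
```

After the flag is enabled, users on that instance gain the terminal access behavior described above, subject to the per-command consent prompts.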
