Google Gemini 2.5 Pro arrives in JetBrains AI Assistant

JetBrains AI Assistant now supports Google’s latest and most intelligent AI model, Gemini 2.5 Pro. This integration makes your JetBrains IDE even smarter, providing enhanced accuracy and deep reasoning capabilities to streamline your coding experience.
At JetBrains, we aim to equip developers with AI tools that simplify complex tasks and make everyday coding more productive and enjoyable. By continually integrating cutting-edge AI models – such as our own LLM Mellum for code completion, as well as third-party LLMs – we’re committed to improving your workflow and helping you tackle every task.
Google Gemini 2.5 Pro is currently available in AI Assistant in experimental mode.
What’s special about Gemini 2.5 Pro
Google Gemini 2.5 Pro is the first “thinking” model from Google DeepMind. It is currently listed as a top performer across several key industry benchmarks for coding, mathematics, and science tasks, and it ranks highly on the community-driven LMArena Leaderboard. With this top-performing model integrated directly into your JetBrains IDE, you’ll be able to:
- Solve complex problems: Use Gemini 2.5 Pro’s advanced reasoning to complete intricate coding tasks.
- Improve code quality: Experience greater precision and contextual understanding, streamlining your coding process.
- Enhance your productivity: Receive accurate, context-aware suggestions directly in your IDE, helping you work more efficiently.
Gemini 2.5 Pro simplifies your workflow, as it minimizes guesswork, boosts code quality, and significantly reduces debugging time.
“We are excited to closely partner with JetBrains AI to deliver a state-of-the-art experience with Gemini 2.5. To achieve this, we focused on advancing our models’ reasoning abilities and code generation quality.
For JetBrains AI users, this means higher quality code suggestions and a richer understanding of project context by the AI. We are excited to see what the JetBrains AI community builds and we are looking forward to collaborating on future projects like AI-powered developer assistance capabilities and agents.”
Technical details and considerations
To help you get the most out of Gemini 2.5 Pro, here are a few technical points to be aware of:
- Supported context window: We currently support a 200,000 token context window for this model and are actively working to enable the full 1 million token input context as soon as possible.
- Usage cost: As with other high-performing models, requests to Gemini 2.5 Pro count toward your quota, so keep an eye on your credit usage.
- Rate of requests: Gemini 2.5 Pro is still a Preview model, so you may encounter lower request rate limits when using it.
- Feature support: We recommend carefully reviewing Google’s official documentation to determine whether specific features are applicable to your use case.
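If you work with large prompts, a quick back-of-the-envelope check can tell you whether a request is likely to fit the 200,000-token window. The sketch below uses the common rough heuristic of about four characters per token for English text and code; the exact count depends on the model's tokenizer, so treat the numbers as estimates, not guarantees. The function names and the headroom value are illustrative, not part of any JetBrains or Google API.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 characters/token heuristic.

    The true count depends on the model's tokenizer, so use this only
    as a quick sanity check, never as an exact measure.
    """
    return max(1, round(len(text) / chars_per_token))


def fits_context(prompt: str, limit: int = 200_000, headroom: int = 8_000) -> bool:
    """Check whether a prompt likely fits a 200k-token window.

    `headroom` reserves a slice of the window for the model's response;
    the 8,000-token default here is an arbitrary illustrative choice.
    """
    return estimate_tokens(prompt) + headroom <= limit
```

For example, a roughly 1 MB source dump comes out at about 250,000 estimated tokens, so it would not fit in a 200,000-token window even before reserving space for the response.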
Need help finding the perfect model for your needs? Check out this blog post for some helpful tips. If you’re interested in integrating JetBrains’ latest models and datasets into your automated evaluation process, take a look at our card on Hugging Face.
How to try it
To use the latest model, simply select Gemini 2.5 Pro (Experimental) from the AI chat’s drop-down menu in your JetBrains IDE.
The integration is available starting from version 2025.1 of JetBrains IDEs.
If you’re already using JetBrains AI Assistant, simply update your IDE to the latest version to explore the new features. If you haven’t yet given JetBrains AI Assistant a try, we invite you to get started today. It’s free.