Insights from Spring I/O 2025
Why I chose this session
With the rapid rise of large language models (LLMs), I’ve been exploring how to integrate them into real-world applications, especially in a secure and maintainable way. I was looking for a solid introduction to Spring AI, and this session delivered practical patterns, real-world examples, and a look at the limitations we should be aware of.
What is Spring AI?
Spring AI is a new framework from the Spring team that simplifies the integration of AI capabilities into Spring Boot applications. As of May 20, 2025, Spring AI 1.0 is officially generally available. It supports a wide range of AI providers, including OpenAI, Azure, and Anthropic, and offers features like chat completion, image generation, and audio transcription, all through familiar Spring idioms.
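To give a feel for those familiar Spring idioms, here is a minimal sketch of a chat-completion endpoint using Spring AI's `ChatClient`. It assumes a provider starter (for example, OpenAI) is on the classpath so that Spring Boot auto-configures the `ChatClient.Builder`; the controller and endpoint names are my own illustration, not from the session.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Minimal chat endpoint: the ChatClient.Builder is auto-configured
// once a Spring AI provider starter (e.g. OpenAI) is on the classpath.
@RestController
class ChatController {

    private final ChatClient chatClient;

    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/chat")
    String chat(@RequestParam String question) {
        // Send the user's question to the configured model and
        // return the completion text.
        return chatClient.prompt().user(question).call().content();
    }
}
```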
The challenge: secure and responsible LLM integration
In today’s tech landscape, LLMs are everywhere, but integrating them into production systems raises questions around data privacy, security, and control. This session focused on how to use Spring AI to build intelligent applications while maintaining guardrails and governance.
My top 3 takeaways from the session
- Spring AI guardrails for safer AI use
Spring AI includes pre- and post-processing guardrails that let developers inspect requests and responses for sensitive data such as personally identifiable information (PII). This is essential for building LLM-based applications that are secure and comply with privacy regulations.
- Use of local models for privacy
Instead of sending all data to cloud-based models, Spring AI supports using local models to scan for sensitive content. This hybrid approach enhances privacy and gives developers more control over data flow.
- MCP (Model Context Protocol) for application integration
MCP allows LLMs to interact with your application’s internal logic. This means the model can trigger business logic, such as reading from or writing to a database, based on natural language input. It’s a powerful way to build AI agents in Spring Boot.
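To make the pre-processing guardrail idea concrete: conceptually, a guardrail can scan an outgoing prompt for PII patterns before it ever reaches the model. The sketch below is plain Java and deliberately simplified; the class name and regexes are my own illustration, not Spring AI's actual advisor API.

```java
import java.util.regex.Pattern;

// Toy pre-processing guardrail: flag prompts that contain obvious PII
// before they are sent to an LLM. The patterns are illustrative only;
// a real guardrail would use a much richer detection strategy.
public class PiiGuardrail {

    private static final Pattern EMAIL =
            Pattern.compile("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
    private static final Pattern IBAN =
            Pattern.compile("\\b[A-Z]{2}\\d{2}[A-Z0-9]{10,30}\\b");

    /** Returns true if the prompt looks safe to forward to the model. */
    public static boolean isSafe(String prompt) {
        return !EMAIL.matcher(prompt).find()
                && !IBAN.matcher(prompt).find();
    }
}
```

In Spring AI the same check would sit in an advisor that intercepts the request; the point is simply that the inspection happens on your side, before any data leaves the application.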
A demo that stood out
One of the most impressive demos showed how an LLM could act as an intelligent assistant within a Spring application. Using Model Context Protocol (MCP), the model was able to trigger internal logic, such as querying a local database or writing new entries, based entirely on natural language. The frontend was built using Vaadin, which provided a clean and responsive UI to showcase the interaction between the user, the LLM, and the backend. While Vaadin wasn’t the focus, it played a key role in making the demo intuitive and accessible.
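MCP itself defines a protocol between the model and your tools, but the dispatch idea behind the demo can be sketched in plain Java: the application exposes named operations, and the model's chosen tool call is routed to the matching piece of business logic. The registry, tool names, and string-based arguments below are a toy illustration of the concept, not the MCP or Spring AI API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy tool registry: the model picks a tool by name and supplies an
// argument; the application routes the call to its own business logic.
public class ToolRegistry {

    private final Map<String, Function<String, String>> tools = new HashMap<>();

    public void register(String name, Function<String, String> tool) {
        tools.put(name, tool);
    }

    /** Dispatch a tool call chosen by the model. */
    public String call(String name, String argument) {
        Function<String, String> tool = tools.get(name);
        if (tool == null) {
            return "unknown tool: " + name;
        }
        return tool.apply(argument);
    }
}
```

In the real demo, the equivalent of `register` was handled by exposing annotated Spring beans as MCP tools, and the LLM decided, from natural language alone, which tool to invoke and with what arguments.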
From inspiration to application
While I’m still cautious about integrating external models like OpenAI’s into client environments, especially where sensitive data is involved, this session gave me a lot to think about. The idea of running a company-specific LLM locally, combined with Spring AI’s guardrails and MCP integration, opens up exciting possibilities. Imagine building intelligent agents that can safely interact with your application logic, all within the familiar Spring ecosystem.
The most inspiring insight? Using MCP to turn your application into a toolbox for AI. It’s a concept that could reshape how we design and build intelligent systems in Java.
Final thoughts
This session was a great blend of practical advice and forward-looking ideas. It showed that integrating LLMs into Spring Boot applications isn’t just a futuristic concept; it’s something we can start exploring today, with the right tools and a thoughtful approach.
If I had to sum it up in three words:
Integrate LLMs intelligently.