Hexagonal architecture & AI Workshop

In the workshop we focus on applying Hexagonal Architecture (Ports & Adapters) to AI-enabled systems.

Hexagonal Architecture for AI Integration: How to Build Maintainable LLM-Powered Enterprise Apps with Spring and LangChain4j

By Michal Boška · February 2026 · Based on an Ableneo Tech Workshop


As AI capabilities become a core requirement in enterprise software, development teams face a critical question: how do you integrate LLM-powered features into existing systems without creating unmaintainable spaghetti code?

This was the central theme of a recent Ableneo Tech Workshop, where I demonstrated how hexagonal architecture (also known as ports and adapters) provides a clean, scalable pattern for isolating AI integrations from business logic — and why this matters more than ever in 2026.

Drawing from hands-on experience on a large-scale project spanning seven countries and serving nearly 11 million users, I walked through a practical demo application that combines Spring Boot, Kotlin, and two different LLM integration approaches — Spring AI and LangChain4j — all wrapped inside a hexagonal architecture that keeps the domain layer completely untouched regardless of which AI framework runs underneath.


Why Hexagonal Architecture? The Real-World Enterprise Context

To understand why hexagonal architecture matters for AI integration, consider the kind of environment many enterprise teams work in daily.

The project I referenced operates across seven production environments (soon to be eight), each tailored to the regulatory and business nuances of its respective country. The team follows a Spotify-inspired organizational model with cross-functional squads and chapters, and the codebase is a microservices architecture with some remaining monoliths being gradually decomposed.

Key Challenges at Scale

  • Multiple production environments with country-specific configurations and requirements
  • Backward compatibility obligations, especially for mobile clients running different app versions
  • Staggered deployments where environments may be two release cycles apart
  • A large number of backend developers contributing to shared and separate codebases simultaneously
  • An API-first development approach using OpenAPI specifications as the single source of truth

In this kind of complex, distributed environment, every additional moving part — including AI integrations — adds risk. Hexagonal architecture emerged as a proven way to manage that complexity and keep the codebase maintainable over time.


What Is Hexagonal Architecture? A Quick Primer

Hexagonal architecture (also called the ports and adapters pattern) builds on the classic layered architecture many developers already know, but takes separation of concerns a step further. The core idea is a strict separation between three zones.

The Domain Core

This is the heart of the application. It contains only business logic, domain objects, and domain services. Crucially, it has zero dependencies on external frameworks, databases, messaging systems, or AI libraries.

If you showed the domain layer to a business analyst, they should be able to understand the objects and rules without knowing anything about the underlying technology. That’s the litmus test.

Ports (Interfaces)

Ports define the contracts between the domain and the outside world:

  • Inbound ports describe how external actors (like REST controllers) can invoke business logic
  • Outbound ports describe what external services the domain needs — persistence, LLM calls, distance calculations — without specifying how those services are implemented
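As a rough Kotlin sketch of what these ports might look like in the `core` module (the type and interface names below are illustrative assumptions, not the workshop's actual code):

```kotlin
import java.time.Instant

// Domain objects: a business analyst can read these without knowing Spring.
data class TripStop(val city: String, val arrival: Instant, val departure: Instant)
data class Trip(val vehicle: String, val stops: List<TripStop>)

// Inbound port: how external actors (e.g., a REST controller) invoke the domain.
interface CreateTripUseCase {
    fun createTripFromText(description: String): Trip
}

// Outbound ports: what the domain needs, with no hint of how it is provided.
interface TripParserPort {          // implemented by an LLM adapter
    fun parse(description: String): Trip
}
interface TripRepositoryPort {      // implemented by a persistence adapter
    fun save(trip: Trip): Trip
}
```

Note that nothing here imports a framework: the interfaces state *what* the domain needs, and the adapters in other modules decide *how*.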

Adapters (Implementations)

Adapters are the concrete implementations of ports. A persistence adapter might use PostgreSQL today and a cloud-native store tomorrow. An LLM adapter might use Spring AI now and LangChain4j later.

The key insight: swapping an adapter never requires changing the domain core.

💡 Key Principle: The hexagon’s six sides are purely aesthetic — the number of ports and adapters is unlimited. What matters is the strict dependency direction: adapters depend on ports, ports live in the domain, and the domain depends on nothing.


Enforcing Architecture Through Gradle Modules

One of the most practical takeaways from this approach is how to enforce hexagonal architecture at the build level. Rather than relying on developer discipline alone, I structure projects as separate Gradle modules:

  • core: domain objects, domain services, and port interfaces. Dependencies: none (an empty build.gradle)
  • persistence: the database adapter. Dependencies: core plus database drivers (e.g., PostgreSQL)
  • llm: the LLM integration adapter. Dependencies: core plus Spring AI or LangChain4j
  • app: wires everything together via dependency injection. Dependencies: all modules

This module-based approach makes architectural violations physically impossible. A developer working in the core module simply cannot import a Spring annotation or a database entity class — the compiler won’t allow it. This eliminates “shortcut temptations” and keeps the codebase honest over time.
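A minimal Gradle Kotlin DSL layout along these lines might look as follows (module names match the list above; the PostgreSQL driver version is illustrative):

```kotlin
// settings.gradle.kts — one Gradle module per zone of the hexagon
include("core", "persistence", "llm", "app")

// core/build.gradle.kts — no dependencies block at all: the domain sees only
// the Kotlin standard library, so importing Spring there cannot even compile.
plugins { kotlin("jvm") }

// persistence/build.gradle.kts — the adapter depends on the domain,
// never the other way around.
dependencies {
    implementation(project(":core"))
    runtimeOnly("org.postgresql:postgresql:42.7.3")
}
```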

Why This Matters for AI-Assisted Development

This modular structure also benefits AI code generation tools like Claude Code, Cursor, or GitHub Copilot. When an AI agent needs to make changes, it can focus on a single, well-scoped module rather than needing the entire codebase in its context window. Smaller context means better, more isolated changes — and a faster feedback loop when running tests.


Practical Demo: AI-Powered Business Trip Report Generation

To illustrate these concepts without revealing production code, I built a demo application around a relatable use case: generating business trip reports. You feed in natural language describing a trip (cities visited, vehicle used, dates), and the system produces structured trip data for expense reporting.

How the Flow Works

Step 1: Natural Language Parsing

A user submits unstructured text like:

“I drove my Škoda Octavia from Bratislava to Trnava, then to Nitra, and finally to Banská Bystrica.”

The system’s LLM adapter (either Spring AI or LangChain4j) parses this into structured domain objects: a vehicle identifier and a list of destinations with timestamps.

Step 2: Business Validation in the Domain Layer

The domain layer validates the parsed result against business rules. For example, it checks whether any two stops have overlapping time periods (which would be physically impossible). This validation lives entirely in the domain core and runs identically regardless of which LLM adapter produced the data.
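The overlap rule can be expressed as a small pure function in the domain core, with no framework involved. This is a sketch with illustrative names, assuming each visit carries an arrival and a departure time:

```kotlin
import java.time.Instant

// Domain rule: no two visits on a trip may overlap in time.
data class Visit(val city: String, val arrival: Instant, val departure: Instant)

fun hasOverlappingVisits(visits: List<Visit>): Boolean =
    visits.sortedBy { it.arrival }
        .zipWithNext()
        // After sorting by arrival, an overlap exists exactly when the next
        // visit starts before the previous one ends.
        .any { (earlier, later) -> later.arrival < earlier.departure }
```

Because the function takes plain domain objects, it runs unchanged no matter which LLM adapter produced the data, and it can be unit-tested without mocking anything.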

Step 3: Retry Logic for Non-Deterministic AI Outputs

Because LLM outputs can be unreliable, the domain service implements a retry pattern: it attempts parsing and validation up to three times. If the LLM returns invalid data, the system re-runs the parsing; only after three failed attempts does it give up with an error.

This defensive pattern is itself business logic — it belongs in the domain, not in the infrastructure layer.
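A generic sketch of this retry loop in the domain layer might look like the following. The three-attempt limit matches the article; the function and exception names are illustrative:

```kotlin
// Thrown when the LLM never produces data that passes domain validation.
class InvalidLlmOutputException(message: String) : RuntimeException(message)

fun <T> parseWithRetry(
    maxAttempts: Int = 3,
    parse: () -> T,          // delegates to the LLM adapter through a port
    isValid: (T) -> Boolean, // domain validation, e.g. the overlap check
): T {
    repeat(maxAttempts) {
        val candidate = parse()
        if (isValid(candidate)) return candidate
    }
    throw InvalidLlmOutputException("LLM produced invalid data in $maxAttempts attempts")
}
```

Note that the loop knows nothing about which LLM framework sits behind `parse`; that is exactly why it can live in the domain.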

Step 4: Persistence

Once validated, the trip is saved to a PostgreSQL database through the persistence adapter. If the team decided to switch to Elasticsearch or a cloud storage service, only the persistence adapter would change.


Domain Validation vs. Controller Validation: Where Do Business Rules Belong?

This distinction is crucial for any team integrating AI into enterprise applications.

Controller-level validation checks structural correctness: Does the request body match the API specification? Are required headers present? Is the content type correct? This is technical validation that belongs at the adapter boundary.

Domain-level validation checks business correctness: Does the trip make logical sense? Do the time periods overlap? Is the vehicle registered? These rules belong in the domain core because they represent business invariants.

Why does this separation matter practically?

If you move to a different input channel — a CLI tool, a Slack bot, an email-triggered workflow instead of REST — your business validation rules travel with the domain. They never get accidentally left behind in a controller that no longer exists. This is especially relevant when adding AI-powered interfaces to existing systems.


Spring AI vs. LangChain4j: Two LLM Adapters, One Architecture

Perhaps the most compelling part of the demo was the side-by-side comparison of two completely different LLM integration approaches, both plugged into the same hexagonal architecture through the same port interface.

Spring AI Adapter

The Spring AI integration was the simpler of the two. It uses the Spring AI framework to:

  • Define prompts with template variables
  • Specify structured output formats (so the LLM returns typed objects, not raw text)
  • Register tools that the LLM can invoke autonomously (e.g., a distance-calculation tool for travel times between cities)

The key advantage is tight integration with the Spring ecosystem: automatic serialization, minimal boilerplate, and a familiar programming model for Spring developers.

LangChain4j Adapter

The LangChain4j integration brought a more sophisticated, agent-based approach. I implemented two orchestration strategies, switchable via configuration:

Manual Orchestration — A coded pipeline where individual agents are called in sequence:

  1. An optimizer agent creates a route plan
  2. A cost-breakdown agent estimates expenses
  3. A scoring agent evaluates quality
  4. A refinement agent improves the plan if the score is below a threshold

The flow is explicit and fully controlled in code.
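The four steps above can be sketched as a plain Kotlin pipeline. The `Agent` interface, threshold, and refinement cap are illustrative stand-ins for the demo's LangChain4j services; the point is that the control flow sits entirely in ordinary code:

```kotlin
// A route plan as it moves through the pipeline.
data class Plan(val route: List<String>, val cost: Double = 0.0, val score: Int = 0)

// Each agent transforms the plan in some way (optimize, cost, score, refine).
fun interface Agent {
    fun apply(plan: Plan): Plan
}

class ManualOrchestrator(
    private val optimizer: Agent,
    private val costBreakdown: Agent,
    private val scorer: Agent,
    private val refiner: Agent,
    private val threshold: Int = 70,
    private val maxRefinements: Int = 3,
) {
    fun run(initial: Plan): Plan {
        // Fixed, explicit sequence: optimize, estimate costs, score.
        var plan = scorer.apply(costBreakdown.apply(optimizer.apply(initial)))
        var refinements = 0
        // Refine only while quality stays below the threshold.
        while (plan.score < threshold && refinements < maxRefinements) {
            plan = scorer.apply(refiner.apply(plan))
            refinements++
        }
        return plan
    }
}
```

With stub agents substituted for real LLM calls, this orchestrator can be unit-tested deterministically.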

Agentic Orchestration — A top-level orchestrator agent receives all sub-agents as tools and decides autonomously which to call and in what order. During the live demo, the orchestrator was observed making multiple refinement loops, calling the scoring agent repeatedly, and making independent decisions about when to stop iterating.

The Power of the Pattern: From the outside, both adapters expose exactly the same interface. A configuration switch is all it takes to swap between Spring AI and LangChain4j — or to A/B test which produces better results. No business logic changes. No domain code touched.
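A minimal sketch of that configuration switch, reduced to plain Kotlin so it stands alone (in the Spring demo the selection would go through Spring configuration; the class names, stub bodies, and property value here are assumptions):

```kotlin
// Both adapters implement the same simplified port; only the wiring layer
// reads the property and picks one.
interface TripParsingPort {
    fun parse(description: String): String
}

class SpringAiAdapter : TripParsingPort {
    override fun parse(description: String) = "spring-ai:$description"   // stub
}

class LangChain4jAdapter : TripParsingPort {
    override fun parse(description: String) = "langchain4j:$description" // stub
}

fun tripParserFor(provider: String): TripParsingPort = when (provider) {
    "spring-ai"   -> SpringAiAdapter()
    "langchain4j" -> LangChain4jAdapter()
    else          -> error("Unknown llm.provider value: $provider")
}
```

The caller only ever sees `TripParsingPort`, which is what makes A/B testing the two frameworks a one-line configuration change.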


Isolating LLM Quirks: A Real-World Example

When I initially passed domain objects (containing Java Timestamp fields) directly to LangChain4j, the model couldn’t process them correctly. The solution was to create adapter-specific helper classes that use strings instead of timestamps, with conversion logic isolated entirely within the LangChain4j module.

This is exactly the kind of problem hexagonal architecture is designed to contain. The LLM’s inability to handle certain data types is an adapter concern, not a domain concern. The workaround lives in one module and affects nothing else — not the domain model, not the persistence layer, not any other adapter.
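The workaround can be as small as an adapter-local DTO pair with conversion functions, confined to the LangChain4j module. The type names below are illustrative, not the workshop's actual classes:

```kotlin
import java.time.Instant

// Adapter-local shape: timestamps cross the LLM boundary as ISO-8601 strings.
data class StopDto(val city: String, val arrival: String, val departure: String)

// Domain-side shape: real temporal types, untouched by the LLM quirk.
data class StopDomain(val city: String, val arrival: Instant, val departure: Instant)

fun StopDto.toDomain() =
    StopDomain(city, Instant.parse(arrival), Instant.parse(departure))

fun StopDomain.toDto() =
    StopDto(city, arrival.toString(), departure.toString())
```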


When Should You Use Hexagonal Architecture for AI Integration?

Good Fit

  • Production-grade applications with real business logic that will be maintained long-term
  • Projects with incremental delivery cycles (new features every few months)
  • Systems requiring flexibility to swap infrastructure — databases, AI model providers, messaging systems
  • Teams with multiple developers working on the same codebase
  • Applications where AI-generated code needs to be well-scoped and testable

Less Appropriate

  • Simple AWS Lambda functions for one-off integrations between services
  • Throwaway proof-of-concept scripts that will never reach production
  • Very small microservices where the total code complexity is low enough to rewrite entirely

The AI Code Generation Advantage

With modern AI code generation tools, the overhead of setting up a hexagonal structure is lower than ever. The boilerplate that used to deter teams can now be generated quickly, making the investment worthwhile even for smaller projects with production ambitions.

There’s also a significant organizational benefit: teams are notoriously reluctant to refactor working production code. If you start with good architecture from day one, you avoid the painful (and often rejected) “we need a week to refactor” conversation later.


Honest Trade-Offs and Lessons Learned

No architectural pattern is without costs. Here’s what to expect:

Discipline required — The initial temptation to take shortcuts is strong. Module boundaries help enforce rules, but the team must buy into the approach from the start.

Domain modeling pressure — The architecture forces you to think carefully about your domain model upfront. Getting it wrong means more refactoring later — but this arguably produces better designs in the long run.

Over-engineering risk — If a project gets cancelled before reaching production, the extra structure might feel wasted. However, this cost is increasingly mitigated by AI-assisted code generation.

Learning curve — For developers used to placing all logic in controllers or service classes, the mental shift to strict separation takes time and mentoring.

On the flip side, the team observed clear benefits: changes to LLM integration never break business logic, business rule changes never accidentally affect infrastructure, and testing becomes dramatically simpler because each module can be tested in isolation with minimal mocking.


Key Takeaways

  1. Hexagonal architecture is language-agnostic. The demo used Kotlin and Spring, but the same principles apply to any language or framework.
  2. Gradle (or Maven) modules enforce architecture at compile time, not just in code reviews. This prevents shortcuts before they happen.
  3. LLM integrations are ideal candidates for adapter isolation. Their outputs are non-deterministic, their APIs change frequently, and their quirks should never leak into business logic.
  4. Always validate LLM outputs with business rules. The domain layer should never blindly trust AI-generated data — implement retry patterns and validation checks.
  5. The cost of good architecture is dropping. AI code generation tools reduce the boilerplate overhead, making hexagonal architecture more accessible and practical than ever before.
  6. Start with hexagonal architecture early. The path from blob to hexagon is a well-known refactoring journey, but it’s always cheaper to start clean than to restructure later — especially when multiple developers are involved.

Frequently Asked Questions

What is hexagonal architecture in simple terms?

Hexagonal architecture (ports and adapters) is a software design pattern that strictly separates your business logic from all external concerns like databases, APIs, and AI services. Your domain core defines what it needs through interfaces (ports), and concrete implementations (adapters) handle the technical details. This means you can swap technologies without touching your business rules.

Can I use hexagonal architecture with Spring Boot?

Yes. Spring Boot’s dependency injection mechanism works naturally with hexagonal architecture. You define port interfaces in your domain module, implement them as adapters in separate modules, and let Spring wire everything together. The workshop demo used Spring Boot with Kotlin as the primary framework.

How does hexagonal architecture help with LLM integration?

LLM integrations are non-deterministic — they can return unexpected formats, hallucinate data, or fail to handle certain data types. By isolating LLM calls behind adapter boundaries, these quirks are contained in a single module and never affect your business logic, persistence layer, or other integrations.

What is the difference between Spring AI and LangChain4j?

Spring AI provides tighter integration with the Spring ecosystem and is simpler for basic LLM interactions (prompts, structured output, tool calls). LangChain4j offers more advanced agent orchestration patterns, including multi-agent workflows where agents can call other agents as tools. Both can be used as adapters within the same hexagonal architecture.

Is hexagonal architecture the same as clean architecture?

They share the same core principle — dependency inversion and domain isolation — but differ in terminology and exact layering. Hexagonal architecture uses “ports and adapters,” while clean architecture (Robert C. Martin) uses “use cases” and “interface adapters.” In practice, they achieve very similar outcomes for enterprise applications.


This article is based on an internal Ableneo Tech Workshop held on February 26, 2026. Tech stack covered: Kotlin, Spring Boot, Spring AI, LangChain4j, PostgreSQL, Gradle, OpenAPI.
