Kotlin enhancements for LangChain4j, providing coroutine support and Flow-based streaming capabilities for chat language models.
See the discussion in the LangChain4j project.
ℹ️ This project is a playground for LangChain4j's Kotlin API. If accepted, some of this code may be adopted into the original LangChain4j project and removed from here. Meanwhile, enjoy it here.
- Kotlin coroutine support for ChatLanguageModels
- Kotlin asynchronous Flow support for StreamingChatLanguageModels
- External prompt template support: the basic implementation loads both system and user prompt templates from the classpath, while PromptTemplateSource provides an extension mechanism
- Async document processing extensions: parallel document processing with Kotlin coroutines for efficient I/O operations in LangChain4j
See the API docs for more details.
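To illustrate the idea behind the async document-processing extensions, here is a minimal, self-contained fan-out/fan-in sketch using plain Kotlin coroutines. `Doc` and `parseAll` are hypothetical names for illustration only; the real extensions operate on LangChain4j's `Document` type (see the API docs).

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.runBlocking

// Hypothetical stand-in for a parsed document; the real extensions
// work with LangChain4j's Document type.
data class Doc(val name: String, val text: String)

// Fan out one coroutine per source, then await all results in order.
suspend fun parseAll(sources: List<String>, parse: suspend (String) -> Doc): List<Doc> =
    coroutineScope {
        sources.map { source -> async { parse(source) } }.awaitAll()
    }

fun main() = runBlocking {
    val docs = parseAll(listOf("a.txt", "b.txt")) { path ->
        Doc(path, "contents of $path") // stand-in for real (suspending) file I/O
    }
    docs.forEach { println(it.name) }
}
```

Because `coroutineScope` waits for all children and cancels the rest if one fails, the parallel parse behaves like a single suspending call from the caller's point of view.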
Add the following dependencies to your pom.xml:
```xml
<dependencies>
    <!-- LangChain4j Kotlin Extensions -->
    <dependency>
        <groupId>me.kpavlov.langchain4j.kotlin</groupId>
        <artifactId>langchain4j-kotlin</artifactId>
        <version>[LATEST_VERSION]</version>
    </dependency>

    <!-- Extra Dependencies -->
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j</artifactId>
        <version>1.0.0-beta1</version>
    </dependency>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-open-ai</artifactId>
        <version>1.0.0-beta1</version>
    </dependency>
</dependencies>
```
Add the following to your build.gradle.kts:
```kotlin
dependencies {
    implementation("me.kpavlov.langchain4j.kotlin:langchain4j-kotlin:$LATEST_VERSION")
    implementation("dev.langchain4j:langchain4j-open-ai:1.0.0-beta1")
}
```
The extension can convert a ChatModel response into a Kotlin suspending function:
```kotlin
val model: ChatModel = OpenAiChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

// sync call
val response =
    model.chat(chatRequest {
        messages += systemMessage("You are a helpful assistant")
        messages += userMessage("Hello!")
    })
println(response.aiMessage().text())

// Using coroutines
CoroutineScope(Dispatchers.IO).launch {
    val response =
        model.chatAsync {
            messages += systemMessage("You are a helpful assistant")
            messages += userMessage("Say Hello")
            parameters(OpenAiChatRequestParameters.builder()) {
                temperature = 0.1
                builder.seed(42) // OpenAI-specific parameter
            }
        }
    println(response.aiMessage().text())
}
```
Sample code:
The extension can convert a StreamingChatModel response into a Kotlin asynchronous Flow:
```kotlin
val model: StreamingChatModel = OpenAiStreamingChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

model.chatFlow {
    messages += systemMessage("You are a helpful assistant")
    messages += userMessage("Hello!")
}.collect { reply ->
    when (reply) {
        is CompleteResponse ->
            println("Final response: ${reply.response.content().text()}")

        is PartialResponse ->
            println("Received token: ${reply.token}")

        else -> throw IllegalArgumentException("Unsupported event: $reply")
    }
}
```
The library adds support for coroutine-based async AI services through the AsyncAiServices class, which leverages Kotlin coroutines for efficient asynchronous operations:
```kotlin
// Define your service interface with a suspending function
interface Assistant {
    @UserMessage("Hello, my name is {{name}}. {{question}}")
    suspend fun chat(name: String, question: String): String
}

// Create the service using AsyncAiServicesFactory
val assistant = createAiService(
    serviceClass = Assistant::class.java,
    factory = AsyncAiServicesFactory(),
).chatModel(model)
    .build()

// Use with coroutines
runBlocking {
    val response = assistant.chat("John", "What is Kotlin?")
    println(response)
}
```
The AsyncAiServices implementation uses HybridVirtualThreadInvocationHandler under the hood, which supports multiple invocation patterns:
- Suspend Functions: Native Kotlin coroutines support
- CompletionStage/CompletableFuture: For Java-style async operations
- Blocking Operations: Automatically run on virtual threads (Java 21+)
Example with different return types:
```kotlin
interface AdvancedAssistant {

    // Suspend function
    @UserMessage("Summarize: {{text}}")
    suspend fun summarize(text: String): String

    // CompletionStage return type for Java interoperability
    @UserMessage("Analyze sentiment: {{text}}")
    fun analyzeSentiment(text: String): CompletionStage<String>

    // Blocking operation (runs on a virtual thread)
    @Blocking
    @UserMessage("Process document: {{document}}")
    fun processDocument(document: String): String
}
```
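As a rough illustration of the CompletionStage bridging pattern above (not the library's actual internals), the sketch below shows how a suspend caller can consume a Java-style future without blocking a thread. `fetchAnswer` and `awaitResult` are hypothetical names; in practice, kotlinx-coroutines already ships a cancellation-aware `CompletableFuture.await()` in its JDK integration module.

```kotlin
import java.util.concurrent.CompletableFuture
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.suspendCoroutine
import kotlinx.coroutines.runBlocking

// Stand-in for an async model call that returns a Java-style future.
fun fetchAnswer(prompt: String): CompletableFuture<String> =
    CompletableFuture.supplyAsync { "echo: $prompt" }

// Suspend until the future completes, without blocking the calling thread.
// (Simplified: no cancellation propagation, unlike the library's handler.)
suspend fun <T> CompletableFuture<T>.awaitResult(): T =
    suspendCoroutine { cont ->
        whenComplete { value, error ->
            if (error != null) cont.resumeWithException(error) else cont.resume(value)
        }
    }

fun main() = runBlocking {
    println(fetchAnswer("hi").awaitResult()) // prints "echo: hi"
}
```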
- Efficient Resource Usage: Suspending functions don't block threads during I/O or waiting
- Java Interoperability: Support for CompletionStage/CompletableFuture return types
- Virtual Thread Integration: Automatic handling of blocking operations on virtual threads
- Simplified Error Handling: Leverage Kotlin's structured concurrency for error propagation
- Reduced Boilerplate: No need for manual callback handling or future chaining
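The structured-concurrency point can be shown with a small, self-contained sketch (assumed names, no LangChain4j APIs): when one of two concurrent calls fails, its sibling is cancelled and the failure surfaces to the caller as an ordinary exception, with no manual future chaining.

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.runBlocking

// Two concurrent "model calls"; if either fails, coroutineScope cancels
// the sibling and rethrows, so the caller handles one ordinary exception.
suspend fun askBoth(ask: suspend (String) -> String): Pair<String, String> =
    coroutineScope {
        val first = async { ask("first") }
        val second = async { ask("second") }
        first.await() to second.await()
    }

fun main() = runBlocking {
    val result = runCatching {
        askBoth { question -> if (question == "second") error("model failure") else "ok" }
    }
    println(result.isFailure) // prints "true": the failure propagated to the caller
}
```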
The Kotlin Notebook environment allows you to:
- Experiment with LLM features in real-time
- Test different configurations and scenarios
- Visualize results directly in the notebook
- Share reproducible examples with others
You can easily get started with LangChain4j-Kotlin notebooks:
```kotlin
%useLatestDescriptors
%use coroutines

@file:DependsOn("dev.langchain4j:langchain4j:0.36.2")
@file:DependsOn("dev.langchain4j:langchain4j-open-ai:0.36.2")

// add maven dependency
@file:DependsOn("me.kpavlov.langchain4j.kotlin:langchain4j-kotlin:0.1.1")
// ... or add project's target/classes to classpath
// @file:DependsOn("../target/classes")

import dev.langchain4j.data.message.SystemMessage.systemMessage
import dev.langchain4j.data.message.UserMessage.userMessage
import dev.langchain4j.model.openai.OpenAiChatModel
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import me.kpavlov.langchain4j.kotlin.model.chat.chatAsync

val model = OpenAiChatModel.builder()
    .apiKey("demo")
    .modelName("gpt-4o-mini")
    .temperature(0.0)
    .maxTokens(1024)
    .build()

// Invoke using CoroutineScope
val scope = CoroutineScope(Dispatchers.IO)

runBlocking {
    val result = model.chatAsync {
        messages += systemMessage("You are a helpful assistant")
        messages += userMessage("Make a haiku about Kotlin, Langchain4j and LLM")
    }
    println(result.content().text())
}
```
Try this Kotlin Notebook yourself:

- Create a `.env` file in the root directory and add your API keys:

```shell
OPENAI_API_KEY=sk-xxxxx
```
Using Maven:

```shell
mvn clean verify
```
Using Make:

```shell
make build
```
Contributions are welcome! Please feel free to submit a pull request.

Before submitting your changes, run:

```shell
make lint
```
- LangChain4j — the core library this project enhances
- Training data from Project Gutenberg: