Issue Description:
After using the ChatModel in Spring AI to make an LLM request, subsequent calls to other endpoints become significantly slower. The only way to restore normal performance is to restart the service. If I never invoke the ChatModel and call the other endpoints directly, they respond quickly as expected.
Steps to Reproduce:
1. Call the /chat endpoint to invoke the ChatModel.
2. Immediately call another endpoint (e.g., /hi).
3. Notice that the response time for the subsequent endpoint is much slower than usual (see the timing sketch below).
4. Restart the service to restore normal performance.
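For reference, a minimal sketch of how the slowdown can be measured, assuming the service runs on localhost:8080 (the port and the harness itself are illustrative, not part of the reported code):

```java
import java.time.Duration;
import java.time.Instant;
import org.springframework.web.reactive.function.client.WebClient;

// Hypothetical harness: times /hi before and after triggering /chat.
public class LatencyCheck {

    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080"); // assumed port

        System.out.println("/hi before /chat: " + timeHi(client) + " ms");

        // Trigger the ChatModel stream and wait for it to finish.
        client.get().uri("/chat?message=hello")
                .retrieve()
                .bodyToFlux(String.class)
                .blockLast(Duration.ofMinutes(1));

        System.out.println("/hi after /chat: " + timeHi(client) + " ms");
    }

    private static long timeHi(WebClient client) {
        Instant start = Instant.now();
        client.get().uri("/hi")
                .retrieve()
                .bodyToMono(String.class)
                .block(Duration.ofSeconds(30));
        return Duration.between(start, Instant.now()).toMillis();
    }
}
```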
Environment:
Spring AI versions tested: 1.0.1 and 1.0.3
Both versions exhibit the same issue.
Additional Information:
If I implement the LLM request myself with a plain WebClient (as shown in the /chat2 endpoint), it works normally and does not affect the performance of other endpoints.
The issue only occurs when using the ChatModel provided by Spring AI.
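The chatModel referenced below is assumed to be the auto-configured Spring AI bean, injected roughly as follows (a sketch of the wiring, which the report does not show):

```java
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChatController {

    // Assumed to be auto-configured by the Spring AI model starter.
    private final ChatModel chatModel;

    public ChatController(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    // /hi, /chat, and /chat2 endpoints as shown under "Code Example" below.
}
```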
Code Example:
```java
@GetMapping("/hi")
public Mono<String> hi() {
    return Mono.just("ok");
}

// Streams through the Spring AI ChatModel; after this runs, other endpoints slow down.
@GetMapping(value = "/chat", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> chat(@RequestParam String message) {
    return chatModel.stream(message);
}

// Streams through a hand-rolled WebClient call; this does not affect other endpoints.
@GetMapping(value = "/chat2", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> chat2(@RequestParam String message) {
    return request(message);
}

private Flux<String> request(String msg) {
    WebClient webClient = WebClient.builder()
            .baseUrl("https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions")
            .defaultHeader(HttpHeaders.AUTHORIZATION, "Bearer " + "sk-xxxx")
            .defaultHeader(HttpHeaders.ACCEPT, MediaType.TEXT_EVENT_STREAM_VALUE)
            .build();
    Map<String, Object> map = Map.of(
            "model", "qwen-plus",
            "enable_thinking", false,
            "stream", true,
            "messages", List.of(Map.of("role", "user", "content", msg)));
    return webClient.post().bodyValue(map).retrieve().bodyToFlux(String.class);
}
```
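For completeness, the auto-configured ChatModel is assumed to target the same DashScope compatible-mode endpoint through the OpenAI starter, configured roughly like this (standard spring.ai.openai.* property names; the values mirror the /chat2 call and are assumptions about the actual setup):

```properties
# Assumed configuration; Spring AI appends /v1/chat/completions to the base URL.
spring.ai.openai.base-url=https://dashscope.aliyuncs.com/compatible-mode
spring.ai.openai.api-key=sk-xxxx
spring.ai.openai.chat.options.model=qwen-plus
```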