
Commit 554b1c3

Merge pull request #223 from tikikun/main
hotfix: important change to fix the issue of template formatting for upcoming release
2 parents 7d784c8 + bcc524d commit 554b1c3

File tree

1 file changed: +6 -2 lines changed


controllers/llamaCPP.cc

Lines changed: 6 additions & 2 deletions
@@ -191,7 +191,7 @@ void llamaCPP::chatCompletion(
       role = input_role;
     }
     std::string content = message["content"].asString();
-    formatted_output += role + content + "\n";
+    formatted_output += role + content;
   }
   formatted_output += ai_prompt;
 
@@ -205,7 +205,11 @@ void llamaCPP::chatCompletion(
   }
 
   bool is_streamed = data["stream"];
-
+  // Enable full message debugging
+#ifdef DEBUG
+  LOG_INFO << "Current completion text";
+  LOG_INFO << formatted_output ;
+#endif
   const int task_id = llama.request_completion(data, false, false);
   LOG_INFO << "Resolved request for task_id:" << task_id;
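The first hunk stops appending an extra "\n" after every chat turn, leaving message separation entirely to the role strings themselves. Below is a minimal standalone sketch of the effect; the role prefix values (with a leading newline) are invented for illustration and are not part of this diff.

#include <iostream>
#include <string>
#include <vector>

int main() {
  // Hypothetical role prefixes that carry their own leading newline;
  // the real values come from the server's prompt settings, not from this diff.
  struct Message {
    std::string role;
    std::string content;
  };
  std::vector<Message> messages = {{"\nUSER: ", "Hello"},
                                   {"\nASSISTANT: ", "Hi there."},
                                   {"\nUSER: ", "Tell me a joke."}};
  const std::string ai_prompt = "\nASSISTANT: ";

  std::string formatted_output;
  for (const auto &message : messages) {
    // Old behavior: formatted_output += message.role + message.content + "\n";
    // With prefixes that already start with '\n', that produced a blank
    // line between every turn. New behavior:
    formatted_output += message.role + message.content;
  }
  formatted_output += ai_prompt;
  std::cout << formatted_output << '\n';
}

If the role prefixes already begin with a newline, the old line doubled every separator, which prompt templates generally do not expect.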

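The second hunk wraps the new prompt dump in a compile-time guard: the two LOG_INFO statements (the logging macro already used in this file) are compiled only when the build defines DEBUG, so release builds never log the full, potentially large user prompt. A self-contained sketch of the same #ifdef pattern, using std::cout in place of LOG_INFO:

#include <iostream>
#include <string>

// The guarded block exists only in builds compiled with -DDEBUG
// (e.g. g++ -DDEBUG demo.cc); otherwise the preprocessor drops it,
// so release builds never pay for (or leak) the prompt dump.
void debug_dump([[maybe_unused]] const std::string &formatted_output) {
#ifdef DEBUG
  std::cout << "Current completion text\n";
  std::cout << formatted_output << '\n';
#endif
}

int main() {
  debug_dump("USER: Hello\nASSISTANT: ");
}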