
Commit ca32e01

Add prefills (assistant prefixes)
Closes #118.
1 parent eb8768b

File tree

2 files changed (+59 −3 lines)


README.md

Lines changed: 46 additions & 0 deletions

@@ -314,6 +314,52 @@ The returned value will be a string that matches the input `RegExp`. If the user

If a value that is neither a `RegExp` object nor a valid JSON schema object is given, the method will error with a `TypeError`.
### Constraining responses by providing a prefix

As discussed in [Customizing the role per prompt](#customizing-the-role-per-prompt), it is possible to prompt the language model to add a new `"assistant"`-role response in addition to a previous one. Usually it will elaborate on its previous messages. For example:

```js
const followup = await session.prompt([
  {
    role: "user",
    content: "I'm nervous about my presentation tomorrow"
  },
  {
    role: "assistant",
    content: "Presentations are tough!"
  }
]);

// `followup` might be something like "Here are some tips for staying calm.", or
// "I remember my first presentation, I was nervous too!" or...
```
In some cases, instead of asking for a new response message, you want to "prefill" part of the `"assistant"`-role response message. An example use case is to guide the language model toward specific response formats. To do this, add `prefix: true` to the trailing `"assistant"`-role message. For example:

```js
const characterSheet = await session.prompt([
  {
    role: "user",
    content: "Create a TOML character sheet for a gnome barbarian"
  },
  {
    role: "assistant",
    content: "```toml\n",
    prefix: true
  }
]);
```
(Such examples work best if we also support [stop sequences](https://github.com/webmachinelearning/prompt-api/issues/44); stay tuned for that!)
Without this continuation, the output might be something like "Sure! Here's a TOML character sheet...", whereas the prefix message sets the assistant on the right path immediately.
(Kudos to the [Standard Completions project](https://standardcompletions.org/) for [discussion](https://github.com/standardcompletions/rfcs/pull/8) of this functionality, as well as [the example](https://x.com/stdcompletions/status/1928565134080778414).)
If `prefix` is used in any message besides a final `"assistant"`-role one, a `"SyntaxError"` `DOMException` will occur.
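That validity rule can be sketched as a standalone check. The `validateMessages` helper below is purely illustrative (it is not part of the Prompt API surface; the browser performs the equivalent check internally):

```js
// Illustrative sketch of the "prefix only on the final assistant-role message"
// rule described above; not part of the Prompt API itself.
function validateMessages(messages) {
  messages.forEach((message, i) => {
    if (!message.prefix) return;
    const isLast = i === messages.length - 1;
    if (message.role !== "assistant" || !isLast) {
      // Mirrors the spec's "SyntaxError" DOMException.
      throw new DOMException(
        "prefix: true is only allowed on a final assistant-role message",
        "SyntaxError"
      );
    }
  });
}
```

So a trailing `{ role: "assistant", content: "...", prefix: true }` passes, while `prefix: true` on a `"user"` message, or on a non-final `"assistant"` message, is rejected.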
### Appending messages without prompting for a response

In some cases, you know which messages you'll want to use to populate the session, but not yet the final message before you prompt the model for a response. Because processing messages can take some time (especially for multimodal inputs), it's useful to be able to send such messages to the model ahead of time. This allows it to get a head-start on processing, while you wait for the right time to prompt for a response.

index.bs

Lines changed: 13 additions & 3 deletions

```diff
@@ -123,6 +123,8 @@ dictionary LanguageModelMessage {
   // The DOMString branch is shorthand for `[{ type: "text", value: providedValue }]`
   required (DOMString or sequence<LanguageModelMessageContent>) content;
+
+  boolean prefix = false;
 };

 dictionary LanguageModelMessageContent {
@@ -159,7 +161,8 @@ typedef (
       "{{LanguageModelMessageContent/type}}" → "{{LanguageModelMessageType/text}}",
       "{{LanguageModelMessageContent/value}}" → |input|&nbsp;&nbsp;&nbsp;&nbsp;<!-- https://github.com/speced/bikeshed/issues/3118 -->
-    »
+    »,
+    "{{LanguageModelMessage/prefix}}" → false
   »</span>.

@@ -178,8 +181,15 @@ typedef (
       "{{LanguageModelMessageContent/type}}" → "{{LanguageModelMessageType/text}}",
       "{{LanguageModelMessageContent/value}}" → |message|&nbsp;&nbsp;&nbsp;&nbsp;<!-- https://github.com/speced/bikeshed/issues/3118 -->
-    »
-  </span> to |messages|.
+    »,
+    "{{LanguageModelMessage/prefix}}" → |message|["{{LanguageModelMessage/prefix}}"]
+  </span>.
+
+  1. If |message|["{{LanguageModelMessage/prefix}}"] is true, then:
+
+    1. If |message|["{{LanguageModelMessage/role}}"] is not "{{LanguageModelMessageRole/assistant}}", then throw a "{{SyntaxError}}" {{DOMException}}.
+
+    1. If |message| is not the last item in |messages|, then throw a "{{SyntaxError}}" {{DOMException}}.

 1. [=list/For each=] |content| of |message|["{{LanguageModelMessage/content}}"]:
```

0 commit comments