If a value that is neither a `RegExp` object nor a valid JSON schema object is given, the method will error with a `TypeError`.
### Constraining responses by providing a prefix
As discussed in [Customizing the role per prompt](#customizing-the-role-per-prompt), it is possible to prompt the language model to add a new `"assistant"`-role response in addition to a previous one. Usually it will elaborate on its previous messages. For example:
```js
const followup = await session.prompt([
  {
    role: "user",
    content: "I'm nervous about my presentation tomorrow"
  },
  {
    role: "assistant",
    content: "Presentations are tough!"
  }
]);

// `followup` might be something like "Here are some tips for staying calm.", or
// "I remember my first presentation, I was nervous too!" or...
```
In some cases, instead of asking for a new response message, you want to "prefill" part of the `"assistant"`-role response message. An example use case is to guide the language model toward specific response formats. To do this, add `prefix: true` to the trailing `"assistant"`-role message. For example:
```js
const characterSheet = await session.prompt([
  {
    role: "user",
    content: "Create a TOML character sheet for a gnome barbarian"
  },
  {
    role: "assistant",
    content: "```toml\n",
    prefix: true
  }
]);
```
(Such examples work best if we also support [stop sequences](https://github.com/webmachinelearning/prompt-api/issues/44); stay tuned for that!)
Without this continuation, the output might be something like "Sure! Here's a TOML character sheet...", whereas the prefix message sets the assistant on the right path immediately.
(Kudos to the [Standard Completions project](https://standardcompletions.org/) for [discussion](https://github.com/standardcompletions/rfcs/pull/8) of this functionality, as well as [the example](https://x.com/stdcompletions/status/1928565134080778414).)
If `prefix` is used in any message besides a final `"assistant"`-role one, a `"SyntaxError"` `DOMException` will occur.
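To make the constraint concrete, the validation rule can be sketched as a small standalone function. This is an illustration only, not part of the API; `validatePrefixUsage` is a hypothetical helper, and the real implementation lives inside the `prompt()` machinery:

```javascript
// Hypothetical helper (not part of the Prompt API) illustrating the rule:
// `prefix: true` is only valid on the final message, and only when that
// message has role "assistant". The real API would reject with a
// "SyntaxError" DOMException; here we throw a plain SyntaxError.
function validatePrefixUsage(messages) {
  messages.forEach((message, i) => {
    const isFinal = i === messages.length - 1;
    if (message.prefix && !(isFinal && message.role === "assistant")) {
      throw new SyntaxError(
        "prefix is only allowed on a final assistant-role message"
      );
    }
  });
}
```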
### Appending messages without prompting for a response
In some cases, you know which messages you'll want to use to populate the session, but not yet the final message before you prompt the model for a response. Because processing messages can take some time (especially for multimodal inputs), it's useful to be able to send such messages to the model ahead of time. This allows it to get a head-start on processing, while you wait for the right time to prompt for a response.