
Conversation

@cpinn cpinn commented Dec 10, 2025

Run prompts in the sandboxed environment; debatable, but it seems recommended for untrusted templates.
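A minimal sketch (not the PR's actual code) of why a sandboxed environment is the recommended choice for untrusted templates: Jinja's `SandboxedEnvironment` refuses access to unsafe attributes such as `__class__`, which are the usual entry point for template-injection escapes.

```python
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

env = SandboxedEnvironment()

# Ordinary rendering works as usual.
greeting = env.from_string("Hello {{ name }}!").render(name="world")

# An attempted sandbox escape is rejected at render time instead of evaluated.
try:
    env.from_string("{{ ''.__class__() }}").render()
    escaped = True
except SecurityError:
    escaped = False
```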

"GitPython",
"requests",
"chevron",
"jinja2>=3.1.6",
Contributor Author
Jinja appeared to have some bad vulnerabilities prior to 3.1.5, so I chose 3.1.6 as the minimum version.


def render_templated_object(obj: Any, args: Any) -> Any:

class _JinjaSafeDict:
@cpinn cpinn Dec 10, 2025

I ran into Jinja resolving dotted access to built-in dict methods during evaluation (e.g. TypeError: 'builtin_function_or_method' object is not iterable). Wrapping the dictionary values prevents that. Some users may find this and the sandboxed environment restrictive, but it's better to start with safety.
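For context, a hypothetical reimplementation of the wrapper idea described above (class and method names are assumptions, not the PR's actual `_JinjaSafeDict`): route both item and attribute lookups to the dict's keys, so a template expression like `{{ input.items }}` returns `data["items"]` rather than the bound `dict.items` method.

```python
from typing import Any


class JinjaSafeDict:
    """Expose only a dict's keys to the template, never dict methods."""

    def __init__(self, data: dict):
        self._data = data

    @classmethod
    def wrap(cls, value: Any) -> Any:
        # Recursively wrap nested dicts (and dicts inside lists);
        # leave scalars and other values untouched.
        if isinstance(value, dict):
            return cls(value)
        if isinstance(value, list):
            return [cls.wrap(v) for v in value]
        return value

    def __getitem__(self, key: str) -> Any:
        return self.wrap(self._data[key])

    def __getattr__(self, key: str) -> Any:
        # Jinja tries attribute access first; send it to the data too,
        # so {{ input.items }} means data["items"], not dict.items.
        try:
            return self.wrap(self._data[key])
        except KeyError:
            raise AttributeError(key)

    def __contains__(self, key: str) -> bool:
        return key in self._data

    def __iter__(self):
        return iter(self._data)
```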

ibolmo commented Jan 2, 2026

@cpinn is this one still needed?

cpinn commented Jan 5, 2026

> @cpinn is this one still needed?
Yep, still needed; it just needed a small update. I also wanted to do a bit of testing with the API.

template_format: TemplateFormat | None = None

@classmethod
def from_dict_deep(cls, d: dict):
Contributor Author

I welcome input here.

I want the API to accept nunjucks, but I want users of the SDK to interact with jinja.

@ibolmo ibolmo left a comment
left a thinker

template_obj = env.from_string(template)
if isinstance(data, dict):
    wrapped_data = {k: _JinjaSafeDict.wrap(v) for k, v in data.items()}
    variables = {"input": _JinjaSafeDict(data), **wrapped_data}
Collaborator

#nit

May be useful to call out that in JS we expand the input inside the main variables, so we're just keeping similar behavior.
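A small illustration (variable names assumed) of the behavior being kept in sync with the JS SDK: the raw input is exposed both under `"input"` and expanded at the top level, so a template can reference either `{{ input.name }}` or `{{ name }}` interchangeably.

```python
# The same data reaches the template twice: once namespaced, once expanded.
data = {"name": "Ada", "lang": "python"}
variables = {"input": data, **data}
```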

    variables = {"input": data}
    return template_obj.render(**variables)
except jinja2.UndefinedError as e:
    raise ValueError(f"Template rendering failed: {str(e)}") from e
@ibolmo ibolmo Jan 8, 2026
#aside Pretty sure we don't do this, but in the future we should have custom error classes
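A sketch of what the custom error class floated in that aside could look like (names are assumptions): a dedicated exception keeps the `ValueError` contract while preserving the underlying error via `__cause__`, so callers can catch template failures specifically.

```python
class TemplateRenderError(ValueError):
    """Raised when template rendering fails; wraps the underlying error."""


def render_or_raise(render, template: str):
    # `render` stands in for the real Jinja render call; KeyError stands in
    # for jinja2.UndefinedError in this self-contained sketch.
    try:
        return render(template)
    except KeyError as e:
        raise TemplateRenderError(f"Template rendering failed: {e}") from e
```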

def from_dict_deep(cls, d: dict):
    d2 = dict(d)
    if d2.get("template_format") == "nunjucks":
        d2["template_format"] = "jinja"
Collaborator
I take it we're aliasing jinja <=> nunjucks; I'm not seeing something similar in TS. 🤔

Collaborator

Also curious whether template_format should be in the PromptData or in PromptOptions. I take it the latter would be saved in the prompt? Would that make the prompt interoperable in TS and Python, and would it also avoid hiding the template_format within the data itself?

@cpinn cpinn Jan 8, 2026

Do you remember how worried I got about the SDK release with nunjucks? That was because of PromptData vs. PromptOptions. I realized we may have wanted this in PromptOptions, but it already existed and went out as PromptData. Some people had already started using that on the API, so unfortunately I didn't feel I could roll back that choice. It sort of makes sense in PromptData, since PromptOptions currently appears to be more about options for the AI model, but I think there was a case for putting it in either location.

Yeah, I wanted to alias jinja <==> nunjucks for users of the Python SDK. The other option would be to accept and handle both in the API, but that felt like it messed up the zod type and how the data got saved and read, so I ended up just aliasing in the SDK.
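The SDK-side aliasing being described can be sketched as a pair of translation helpers (function names are assumptions, mirroring the `from_dict_deep` diff above): the API only ever stores `"nunjucks"`, while SDK users only ever see `"jinja"`.

```python
def from_api(d: dict) -> dict:
    # Incoming prompt data: translate the API's spelling to the SDK's.
    d2 = dict(d)
    if d2.get("template_format") == "nunjucks":
        d2["template_format"] = "jinja"
    return d2


def to_api(d: dict) -> dict:
    # Outgoing prompt data: translate the SDK's spelling back for the API.
    d2 = dict(d)
    if d2.get("template_format") == "jinja":
        d2["template_format"] = "nunjucks"
    return d2
```

Round-tripping a prompt through both helpers leaves it unchanged, which is what keeps the zod type and the saved data consistent on the API side.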

# Use the prompt's template_format
template_format = self.template_format

params = self.options.get("params") or {}
Collaborator

Yeah, if template_format were in the options we could grab it from here and avoid some of this state?

Contributor Author

If I did that, it would not match the API, since template_format is currently top-level on the prompt. Maybe that's fine in the Python SDK(?), but it wouldn't match the current modelling.

When I looked (and maybe it was not the best modelling choice), "options" appeared to really be options for the actual model being used (i.e. which model, what temperature) rather than options for prompt setup.

cpinn commented Jan 22, 2026

Holding off; we may have another plan for supporting advanced templating in other languages.

@cpinn cpinn marked this pull request as draft January 22, 2026 20:54