
Cancel/Interrupt active evaluation #3442


Open
mattsse opened this issue Oct 30, 2023 · 17 comments
Labels
API enhancement New feature or request

Comments

@mattsse
Contributor

mattsse commented Oct 30, 2023

feature
I would like to be able to cancel an evaluation (or JsValue::call) after a certain timeout.
Perhaps this is already possible, but I wasn't able to find it.

goja for example has interrupts:

https://github.com/dop251/goja/blob/b396bb4c349df65109dea3df00fb60f6a044950d/runtime.go#L1474C9-L1482

My motivation for this is to have the option to terminate a (malicious) evaluation, for example while(1) {}.

Example code
Give a code example that should work after the implementation of this feature.
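As a sketch of what the requested shape could look like (all names here are hypothetical; Boa exposes no interrupt hook yet), the idea is a shared flag that the engine checks between instructions while the embedder flips it after a timeout:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Stand-in for the engine's dispatch loop: a real `Context` would check the
/// flag between bytecode instructions. Hypothetical, not Boa API.
fn run_until_interrupted(interrupt: &AtomicBool) -> Result<(), &'static str> {
    loop {
        // Imagine one instruction of `while(1) {}` executing here.
        if interrupt.load(Ordering::Relaxed) {
            return Err("evaluation interrupted");
        }
    }
}

/// Embedder-side timeout: run the "evaluation" on one thread and flip the
/// shared flag from another after `timeout`.
fn eval_with_timeout(timeout: Duration) -> Result<(), &'static str> {
    let flag = Arc::new(AtomicBool::new(false));
    let worker = {
        let flag = Arc::clone(&flag);
        thread::spawn(move || run_until_interrupted(&flag))
    };
    thread::sleep(timeout);
    flag.store(true, Ordering::Relaxed);
    worker.join().expect("worker panicked")
}
```

With a real engine hook, `eval_with_timeout` would wrap `Context::eval` instead of the stand-in loop.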

@mattsse mattsse added the enhancement New feature or request label Oct 30, 2023
@jedel1043
Member

There isn't a hook to do this at the moment, but we have plans to support this. The simplest API I can think of is a way to set a flag for the Context from another thread that marks it as interrupted. Would that be enough to support your use case?

@mattsse
Contributor Author

mattsse commented Oct 30, 2023

cool.

The simplest API I can think of is a way to set a flag for the Context from another thread that marks it as interrupted

yeah, this is exactly what I'm looking for, like Interrupt(Arc<AtomicBool>) or equivalent.

@jedel1043
Member

yeah, this is exactly what I'm looking for, like Interrupt(Arc<AtomicBool>) or equivalent.

Alright, I'll put this on our backlog then.

@jedel1043 jedel1043 moved this to To do in Boa pre-v1 Oct 30, 2023
@jedel1043 jedel1043 added the API label Oct 30, 2023
@felipefdl

It's difficult to use Boa for running untrusted third-party code without a way to interrupt eval.

I appreciate the simplicity and effectiveness of rquickjs's set_interrupt_handler method for handling this issue. It's a great solution for many use cases.

Here's an example showing how a timeout could be implemented using set_interrupt_handler:

let timestamp_now = std::time::SystemTime::now();

rt.set_interrupt_handler(Some(Box::new(move || {
  if let Ok(elapsed) = timestamp_now.elapsed() {
    if elapsed.as_millis() > 1000 {
      return true;
    }
  }

  false
})));

@rib

rib commented May 1, 2025

It could be good if any solution to interrupting evaluation didn't depend on threads, in case you might want to embed the Boa runtime in a wasm/browser environment without threads.

Some interpreter runtimes (e.g. Wasmi: https://docs.rs/wasmi/latest/wasmi/struct.Config.html#method.consume_fuel, or Piccolo: https://github.com/kyren/piccolo?tab=readme-ov-file#executor-fuel-and-vm-memory-tracking) define some notion of "fuel" that can effectively count and limit the number of bytecode instructions evaluated, and maybe an approach like that could work for Boa too.
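The fuel idea can be sketched in isolation (illustrative names, not Boa or Wasmi API): each dispatched instruction costs one unit, and the loop bails out when the budget hits zero, letting the embedder decide whether to refill and resume:

```rust
/// Illustrative fuel-metered dispatch loop; `Vm`, `Step`, and `Outcome`
/// are made up for this sketch and are not Boa types.
struct Vm {
    fuel: u64,
}

enum Step {
    Continue,
    Done,
}

#[derive(Debug, PartialEq)]
enum Outcome {
    Completed,
    OutOfFuel,
}

impl Vm {
    /// Run `program` one "instruction" at a time until it finishes or the
    /// fuel budget is exhausted. An infinite loop (`while(1) {}`) returns
    /// `OutOfFuel` instead of hanging, with no threads required.
    fn run(&mut self, mut program: impl FnMut() -> Step) -> Outcome {
        loop {
            if self.fuel == 0 {
                return Outcome::OutOfFuel; // embedder may refill and call again
            }
            self.fuel -= 1;
            match program() {
                Step::Continue => {}
                Step::Done => return Outcome::Completed,
            }
        }
    }
}
```

Because the check is purely in-band, this works in single-threaded wasm/browser embeddings where an `Arc<AtomicBool>` flipped from another thread would not.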

I'd like to be able to use Boa for evaluating code that is generated by AI agents where I can't guarantee that they don't write nonsense code that might end up in some infinite loop. I'd also love to find an embeddable runtime that can be used in a browser.

@rib

rib commented May 1, 2025

Interesting; it vaguely looks like the Boa VM already has some support for evaluating opcodes with a limited budget:

    /// Runs the current frame to completion, yielding to the caller each time `budget`
    /// "clock cycles" have passed.
    #[allow(clippy::future_not_send)]
    pub(crate) async fn run_async_with_budget(&mut self, budget: u32) -> CompletionRecord {

@rib

rib commented May 7, 2025

Looking at this again, it seems like Boa already has some support, and examples, for running the engine async with a budget system that forces the engine to yield to the async runtime every X cycles.

There are some examples here:

Not sure if that could support your use case @mattsse / @felipefdl or if there are some notable limitations with this currently @jedel1043? It looks like it could be perfect for my use case.

@jedel1043
Member

@rib The biggest limitation right now is that the natively implemented functions don't have a way to "suspend", since they're implemented as sync code. This means things like Array.sort will run indefinitely without considering the budget.

I see two possible solutions for this:

  • Convert all sync code into async code or state machines. Really painful, since you cannot have async function pointers so we'd have to entirely re-architect our object vtables, but it offers the best runtime performance.
  • Translate all natively implemented functions into VM bytecode. Doesn't require redesigning anything, but it will require creating a "compile time" bytecode compiler and a lot of effort to migrate every function to that.

@rib

rib commented May 7, 2025

Ah, right, that makes sense - thanks for clarifying.

I was also half hoping it was going to be possible to implement async native functions (i.e. async on the host side but synchronous from the point of view of JavaScript), so yeah, I guess that's going to be tricky.

@rib

rib commented May 7, 2025

  • Really painful, since you cannot have async function pointers so we'd have to entirely re-architect our object vtables, but it offers the best runtime performance

Without being familiar with the constraints atm, I vaguely wonder what would stop Boa from having boxed futures à la Box::pin(future) to reduce a native function's future to a single pointer.

Imagining that maybe in addition to PointerFn(NativeFunctionPointer) there could be a BoxedFuture(Foo), and maybe there could be some way to avoid re-working the object vtable so __call__ could return some kind of JsError that represents a yield, whereby the function needs to be repeatedly called until it doesn't yield.

So unlike NativeFunction::from_async_fn that is a sync function that returns an enqueued JsPromise, the call implementation would poll the future and return an error if the future is not ready yet and somehow that would bubble up to the VM that would yield in response to the error and then repeat the operation... maybe?

I guess it's not really that simple, e.g. considering something then needs to reset the mutable state of the future once the function completes without an error.
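The repeated-call idea can be sketched with a stored boxed future that is polled with a no-op waker each time the VM retries the operation; completion clears the slot so the next invocation starts fresh. All types here are illustrative, not Boa's:

```rust
use std::future::Future;
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Illustrative result of one native `__call__` attempt.
enum CallResult<T> {
    Ready(T),
    Pending, // the VM would yield here and retry this same operation later
}

/// A native function whose in-flight future survives across VM retries.
struct AsyncNative<T> {
    in_flight: Option<Pin<Box<dyn Future<Output = T>>>>,
}

impl<T> AsyncNative<T> {
    /// On the first attempt, build the future; on every attempt, poll it.
    /// Once it completes, drop it so the function can be called again.
    fn call(
        &mut self,
        make: impl FnOnce() -> Pin<Box<dyn Future<Output = T>>>,
    ) -> CallResult<T> {
        let fut = self.in_flight.get_or_insert_with(make);
        let waker = noop_waker();
        let mut cx = Context::from_waker(&waker);
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => {
                self.in_flight = None; // reset for the next invocation
                CallResult::Ready(v)
            }
            Poll::Pending => CallResult::Pending,
        }
    }
}

/// A waker that does nothing; a real integration would wake the VM
/// instead of busy-retrying.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}

/// Toy future that stays `Pending` for a fixed number of polls.
struct ReadyAfter {
    polls_left: u32,
}

impl Future for ReadyAfter {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        if self.polls_left == 0 {
            Poll::Ready(42)
        } else {
            self.polls_left -= 1;
            Poll::Pending
        }
    }
}
```

The `Option` slot is what lets the same `__call__` be invoked repeatedly while "pending" without losing the future's state between attempts.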

@jedel1043
Member

Maybe that would be enough? We had to use &RefCell<&mut Context> in NativeFunction::from_async_fn and all other async functions to please the borrow checker, but maybe we just need to fully migrate to an internally mutable &Context instead, and change all the functions to be async.

@rib

rib commented May 7, 2025

Usage of CallValue::resolve() sort of looks like it could be adapted to propagate CallValue::Pending as a means to make the VM yield.

Maybe there could be a corresponding CompletionType::Pending that could be returned by Operation::execute and somehow Context::execute_one can be adapted to revert the program counter whenever an operator returns a Pending status.
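The pc handling could look something like this (illustrative names; Boa's real CompletionType has no Pending variant today): on Pending, the program counter is simply left untouched, so the next resume re-executes the same opcode.

```rust
/// Illustrative completion status; `Pending` is the hypothetical new variant.
#[derive(Debug, PartialEq)]
enum Completion {
    Normal,
    Pending,
    Return,
}

struct Frame {
    pc: usize,
}

/// Execute the opcode at `frame.pc`. On `Pending` the pc is not advanced,
/// so the VM spins on the same operation after yielding; on `Normal`,
/// control moves to the next opcode as usual.
fn execute_one(frame: &mut Frame, mut exec: impl FnMut(usize) -> Completion) -> Completion {
    match exec(frame.pc) {
        Completion::Pending => Completion::Pending, // pc untouched: retry later
        Completion::Return => Completion::Return,
        Completion::Normal => {
            frame.pc += 1;
            Completion::Normal
        }
    }
}
```

Not advancing the pc avoids having to "revert" it after the fact, which keeps the change local to the dispatch loop.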

@jedel1043
Member

Maybe there could be a corresponding CompletionType::Pending that could be returned by Operation::execute and somehow Context::execute_one can be adapted to revert the program counter whenever an operator returns a Pending status.

That's basically doing manual state machines, and if you have tried to convert a complex for loop into a state machine... yeah, it's not pretty.

@rib

rib commented May 7, 2025

hmm, not sure. The complex state machine would be in a regular Future that's boxed within a NativeFunction.

The thing about the program counter is just to make sure that the VM would keep spinning on the same operation until its Pending status is cleared (and for a NativeFunction that wraps a boxed Future, __call__ would keep returning a Pending status until poll returns Ready<T>).

@jedel1043
Member

Hmm yeah, that could work. The other missing piece is how to communicate to the VM that you want to call a function within another function.

We cannot call it directly since that would cause a double mutable borrow, so there must be some mechanism to make the VM suspend the currently running native function, then start executing the callee, then resume the caller with the callee's return value when the callee finishes executing.

@rib

rib commented May 7, 2025

I also don't know atm how to practically deal with creating a newly reset Future once it reaches completion since Rust doesn't expose the state machine in a way that can simply be reset.
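One possible workaround, sketched with hypothetical types: don't try to reset the future's state machine at all. Store a factory instead of the future, so "resetting" is just constructing a fresh state machine per invocation:

```rust
use std::future::Future;
use std::pin::Pin;

type BoxFuture = Pin<Box<dyn Future<Output = i32>>>;

/// Illustrative: the native function stores a *factory* rather than the
/// future itself, so each call gets a freshly built state machine.
struct NativeAsyncFn {
    factory: Box<dyn Fn() -> BoxFuture>,
    in_flight: Option<BoxFuture>,
}

impl NativeAsyncFn {
    /// Start (or restart) a call: any completed or abandoned future is
    /// replaced by a newly constructed one from the factory.
    fn begin_call(&mut self) {
        self.in_flight = Some((self.factory)());
    }
}
```

This trades an allocation per call for not having to reach into Rust's opaque generated state machine, which the language doesn't allow anyway.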

@rib

rib commented May 9, 2025

As a draft/prototype of some of my thoughts above, I experimented with this branch that adds basic support for async NativeFunctions that can yield to the application's async runtime but appear synchronous to JavaScript: #4237

I might be wrong but I don't think it would be too tricky to improve the Waker details so the implementation wouldn't lead to a busy yield loop when waiting for async host functions.

It's a lot harder for me to see a good way of extending it to support all the various Operations that themselves make JsObject::call()s with the potential to call an async NativeFunction. (Currently only the Call Operation supports async NativeFunctions; other operations that use __call__ would panic because they don't support being executed repeatedly while in a "pending" state.)
