
Commit 989b47a

Add cleanup guidance for WasmExecutor in secure code execution doc
1 parent 68b4627 commit 989b47a

File tree

1 file changed (+10 −6 lines)


docs/source/en/tutorials/secure_code_execution.md

Lines changed: 10 additions & 6 deletions
@@ -30,8 +30,8 @@ By default, the `CodeAgent` runs LLM-generated code in your environment.
 This is inherently risky, LLM-generated code could be harmful to your environment.
 
 Malicious code execution can occur in several ways:
-- **Plain LLM error:** LLMs are still far from perfect and may unintentionally generate harmful commands while attempting to be helpful. While this risk is low, instances have been observed where an LLM attempted to execute potentially dangerous code.
-- **Supply chain attack:** Running an untrusted or compromised LLM could expose a system to harmful code generation. While this risk is extremely low when using well-known models on secure inference infrastructure, it remains a theoretical possibility.
+- **Plain LLM error:** LLMs are still far from perfect and may unintentionally generate harmful commands while attempting to be helpful. While this risk is low, instances have been observed where an LLM attempted to execute potentially dangerous code.
+- **Supply chain attack:** Running an untrusted or compromised LLM could expose a system to harmful code generation. While this risk is extremely low when using well-known models on secure inference infrastructure, it remains a theoretical possibility.
 - **Prompt injection:** an agent browsing the web could arrive on a malicious website that contains harmful instructions, thus injecting an attack into the agent's memory
 - **Exploitation of publicly accessible agents:** Agents exposed to the public can be misused by malicious actors to execute harmful code. Attackers may craft adversarial inputs to exploit the agent's execution capabilities, leading to unintended consequences.
 Once malicious code is executed, whether accidentally or intentionally, it can damage the file system, exploit local or cloud-based resources, abuse API services, and even compromise network security.
@@ -102,10 +102,10 @@ These safeguards make out interpreter is safer.
 We have used it on a diversity of use cases, without ever observing any damage to the environment.
 
 > [!WARNING]
-> It's important to understand that no local python sandbox can ever be completely secure. While our interpreter provides significant safety improvements over the standard Python interpreter, it is still possible for a determined attacker or a fine-tuned malicious LLM to find vulnerabilities and potentially harm your environment.
->
+> It's important to understand that no local python sandbox can ever be completely secure. While our interpreter provides significant safety improvements over the standard Python interpreter, it is still possible for a determined attacker or a fine-tuned malicious LLM to find vulnerabilities and potentially harm your environment.
+>
 > For example, if you've allowed packages like `Pillow` to process images, the LLM could generate code that creates thousands of large image files to fill your hard drive. Other advanced escape techniques might exploit deeper vulnerabilities in authorized packages.
->
+>
 > Running LLM-generated code in your local environment always carries some inherent risk. The only way to run LLM-generated code with truly robust security isolation is to use remote execution options like E2B or Docker, as detailed below.
 
 The risk of a malicious attack is low when using well-known LLMs from trusted inference providers, but it is not zero.
@@ -454,6 +454,10 @@ agent = CodeAgent(model=InferenceClientModel(), tools=[], executor_type="wasm")
 agent.run("Can you give me the 100th Fibonacci number?")
 ```
 
+> [!TIP]
+> Using the agent as a context manager (with the `with` statement) ensures that the WebAssembly Deno sandbox is cleaned up immediately after the agent completes its task.
+> Alternatively, it is possible to manually call the agent's `cleanup()` method.
+
 ### Best practices for sandboxes
 
 These key practices apply to Blaxel, E2B, and Docker sandboxes:
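The cleanup contract described in the newly added TIP can be sketched independently of smolagents with a minimal context manager. Note that `SandboxedAgent` below is a hypothetical stand-in for illustration, not the real `CodeAgent` API; only the two usage shapes (the `with` statement, and a manual `cleanup()` call) come from the commit itself:

```python
class SandboxedAgent:
    """Hypothetical stand-in for an agent that owns a sandbox process.

    Illustrates the cleanup contract from the TIP above: __exit__
    guarantees cleanup() runs, even if run() raises an exception.
    """

    def __init__(self):
        self.sandbox_alive = True  # pretend a Deno sandbox was started

    def run(self, task):
        if not self.sandbox_alive:
            raise RuntimeError("sandbox already cleaned up")
        return f"ran: {task}"

    def cleanup(self):
        # Idempotent teardown of the (pretend) sandbox.
        self.sandbox_alive = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.cleanup()  # always runs, on success or failure
        return False    # do not swallow exceptions


# Preferred: the `with` statement tears the sandbox down automatically.
with SandboxedAgent() as agent:
    result = agent.run("Can you give me the 100th Fibonacci number?")
assert agent.sandbox_alive is False  # sandbox gone as soon as the block exits

# Alternative: manual cleanup(), guarded by try/finally.
agent = SandboxedAgent()
try:
    result = agent.run("Can you give me the 100th Fibonacci number?")
finally:
    agent.cleanup()
```

With the real `CodeAgent`, the same two shapes would apply: wrap the agent in a `with` block, or call `agent.cleanup()` once the task finishes.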
@@ -481,7 +485,7 @@ These key practices apply to Blaxel, E2B, and Docker sandboxes:
 As illustrated in the diagram earlier, both sandboxing approaches have different security implications:
 
 ### Approach 1: Running just the code snippets in a sandbox
-- **Pros**:
+- **Pros**:
   - Easier to set up with a simple parameter (`executor_type="blaxel"`, `executor_type="e2b"`, or `executor_type="docker"`)
   - No need to transfer API keys to the sandbox
   - Better protection for your local environment
