Debugging #266

Closed
wants to merge 29 commits into from

Conversation

AdamBelfki3
Collaborator

@AdamBelfki3 commented Oct 10, 2024

New Feature!

NNsight now supports more user-friendly error handling, which shows the exact line in the tracing context that caused the exception to be raised.

Simply set debug=True on your .trace(...) call to activate this setting.

from nnsight import LanguageModel

lm = LanguageModel("openai-community/gpt2")

with lm.trace("Hello World", debug=True) as tracer:
    lm.transformer.h[5].mlp.output[0][-1][100000].save()

Traceback (most recent call last):
  File "traceback_test.py", line 7, in <module>
    lm.transformer.h[5].mlp.output[0][-1][100000].save()

NNsightError: index 100000 is out of bounds for dimension 0 with size 768.

  • If you are using a Session, you just need to pass debug=True to the .session(...) call and it will propagate to all the graphs defined within.

  • You can make this debug mode your default setting by calling:

import nnsight
from nnsight import CONFIG

CONFIG.set_default_app_debug(True)

  • Or, if you want it to persist only for the current run, use:

import nnsight
from nnsight import CONFIG

CONFIG.APP.DEBUG = True
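The Session-level behavior described in the first bullet can be sketched as follows (the model name and prompts are just examples; the snippet assumes .session(...) accepts debug=True and propagates it, exactly as stated above):

```python
from nnsight import LanguageModel

lm = LanguageModel("openai-community/gpt2")

# debug=True on the session propagates to every graph defined within it,
# so neither .trace(...) call below needs its own debug flag.
with lm.session(debug=True) as session:
    with lm.trace("Hello World"):
        out_a = lm.transformer.h[5].mlp.output.save()
    with lm.trace("Hello again"):
        out_b = lm.transformer.h[5].mlp.output.save()
```

Any NNsightError raised inside either trace should then report the offending line, as in the traceback example above.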

Improvement: passing scan=True to a Tracer now propagates to all the Invokers defined within it.

with lm.trace(scan=True) as tracer:
    with tracer.invoke("The Eiffel Tower is in the city of"):
        print(lm.transformer.h[2].mlp.output.shape)

    with tracer.invoke("Buckingham Palace is in the city of"):
        print(lm.transformer.h[2].mlp.output.shape)
>>> torch.Size([1, 10, 768])
>>> torch.Size([1, 9, 768])

Butanium and others added 24 commits August 22, 2024 19:11
Fix logic in LanguageModel init. The meta model (from_config) was not …
Changed the /status page icon location in the website navbar
The model name for llama is `meta-llama/Meta-Llama-3.1-70B`, corrected it in the /about page.  Previously, it would report model not found.
Update the model name in the about page
Bug fixes to the activation patching tutorial
Fix tokenizer kwarg in unifiedTransformer
There is an inconsistency between how the tracing context is initialized in the large code block and the smaller inline one.
Add <3.10 compatibility to remote execution
Fix new syntax error in readme
@JadenFiotto-Kaufman
Member

@AdamBelfki3 Can you merge 0.4 into this branch and make the PR into 0.4?

@AdamBelfki3 changed the base branch from dev to 0.4 on October 22, 2024
@AdamBelfki3 closed this on November 7, 2024
@JadenFiotto-Kaufman deleted the debugging branch on December 22, 2024
7 participants