
Hej 👋

I'm Chrisanna, also known as Sanna.

πŸ‘©β€πŸŽ“ I completed my MSc in Data Science at ITU Copenhagen in late 2025, with a focus on explainability of LLMs, and machine learning models in general.

🧠 I am interested in knowing why. More specifically, why do machine learning models make the decisions they make and return the responses they return? How do we go about creating these explanations, and how reliable are they? What can we trust, and how much can we trust it? What do we even mean by that, anyway?

🎆 Fun fact: The name Xannadoo comes from Samuel Taylor Coleridge's poem Kubla Khan. Many years ago, a colleague of mine misheard my name and it stuck. I also used to live in Ottery St Mary, the birthplace of Coleridge, before moving to Denmark in 2020.

Pinned

  1. examining-faithfulness-COT-deepseekR1

     Examining the Faithfulness of Deepseek R1's Chain-of-Thought Reasoning

     Jupyter Notebook

  2. comparision-reasoning-skills-llm

     Investigating Comparison Reasoning Skills in a Large Language Model

     Jupyter Notebook

  3. carbonCostKaggle/carbon-cost-kaggle

     Investigating the carbon cost of machine learning competitions that use medical image datasets.

     Jupyter Notebook