docs fixes (#1075)
* remove spurious trulens_explain

* clear output

* formatting

* formatting

* nits

* formatting

* remove one more name

* format

* resaving problematic notebook

* more formatting and clearing

* fixing some docs links

* fixing docs links

* adding icons

* comment out utils index

---------

Co-authored-by: Aaron <[email protected]>
piotrm0 and arn-tru authored Apr 17, 2024
1 parent 5644b1a commit d03fa8b
Showing 11 changed files with 318 additions and 659 deletions.
72 changes: 44 additions & 28 deletions docs/trulens_eval/evaluation/feedback_functions/anatomy.md
@@ -1,12 +1,16 @@
# Anatomy of Feedback Functions
# 🦴 Anatomy of Feedback Functions

The `Feedback` class contains the starting point for feedback function
specification and evaluation. A typical use-case looks like this:
The [Feedback][trulens_eval.feedback.feedback.Feedback] class contains the
starting point for feedback function specification and evaluation. A typical
use-case looks like this:

```python
# Context relevance between question and each context chunk.
f_context_relevance = (
Feedback(provider.context_relevance_with_cot_reasons, name = "Context Relevance")
Feedback(
provider.context_relevance_with_cot_reasons,
name="Context Relevance"
)
.on(Select.RecordCalls.retrieve.args.query)
.on(Select.RecordCalls.retrieve.rets)
.aggregate(numpy.mean)
@@ -22,11 +26,12 @@ Multiple underlying models are available througheach provider, such as GPT-4 or
Llama-2. In many, but not all cases, the feedback implementation is shared
cross providers (such as with LLM-based evaluations).

Read more about [feedback providers](../../../api/provider/).
Read more about [feedback providers](../../api/providers.md).

## Feedback implementations

`openai.context_relevance` is an example of a feedback function implementation.
[OpenAI.context_relevance][trulens_eval.feedback.provider.openai.OpenAI.context_relevance]
is an example of a feedback function implementation.

Feedback implementations are simple callables that can be run
on any arguments matching their signatures. In the example, the implementation
@@ -36,11 +41,12 @@ has the following signature:
def context_relevance(self, prompt: str, context: str) -> float:
```

That is, `context_relevance` is a plain python method that accepts the prompt and
context, both strings, and produces a float (assumed to be between 0.0 and
1.0).
That is,
[context_relevance][trulens_eval.feedback.provider.openai.OpenAI.context_relevance]
is a plain python method that accepts the prompt and context, both strings, and
produces a float (assumed to be between 0.0 and 1.0).

Read more about [feedback implementations](../../feedback_implementations/)
Read more about [feedback implementations](../feedback_implementations/index.md)
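
Because the implementation is just a callable, it can also be invoked directly, outside of any `Feedback` pipeline. A minimal sketch, assuming an OpenAI provider configured via the `OPENAI_API_KEY` environment variable:

```python
from trulens_eval.feedback.provider.openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
provider = OpenAI()

# Call the implementation directly with arguments matching its signature.
score = provider.context_relevance(
    prompt="What is the capital of France?",
    context="Paris is the capital and largest city of France.",
)
print(score)  # a float, assumed to be between 0.0 and 1.0
```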

## Feedback constructor

@@ -49,24 +55,34 @@ Feedback object with a feedback implementation.

## Argument specification

The next line, `on_input_output`, specifies how
the `language_match` arguments are to be determined from an app record or app
definition. The general form of this specification is done using `on` but
several shorthands are provided. For example, `on_input_output` states that the first two
argument to `relevance` (`prompt` and `response`) are to be the main app input
and the main output, respectively.

Read more about [argument specification](../feedback_selectors/selecting_components.md) and [selector shortcuts](../feedback_selectors/selector_shortcuts.md).
The next line,
[on_input_output][trulens_eval.feedback.feedback.Feedback.on_input_output],
specifies how the
[context_relevance][trulens_eval.feedback.provider.openai.OpenAI.context_relevance]
arguments are to be determined from an app record or app definition. The general
form of this specification is done using
[on][trulens_eval.feedback.feedback.Feedback.on] but several shorthands are
provided. For example,
[on_input_output][trulens_eval.feedback.feedback.Feedback.on_input_output]
states that the first two argument to
[context_relevance][trulens_eval.feedback.provider.openai.OpenAI.context_relevance]
(`prompt` and `context`) are to be the main app input and the main output,
respectively.

Read more about [argument
specification](../feedback_selectors/selecting_components.md) and [selector
shortcuts](../feedback_selectors/selector_shortcuts.md).
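
As an illustrative sketch (assuming a `provider` such as the OpenAI provider above, and a custom app that defines a `retrieve(query)` method as in the opening example), the same argument specification can be written with explicit selectors or with the `on_input_output` shorthand:

```python
import numpy

from trulens_eval import Feedback, Select

# Explicit selectors: bind `prompt` to the query passed to the app's
# `retrieve` method and `context` to whatever that method returned.
f_context_relevance = (
    Feedback(
        provider.context_relevance_with_cot_reasons,
        name="Context Relevance"
    )
    .on(Select.RecordCalls.retrieve.args.query)
    .on(Select.RecordCalls.retrieve.rets)
    .aggregate(numpy.mean)
)

# Shorthand: bind the first two arguments to the app's main input and
# main output, respectively.
f_answer_relevance = (
    Feedback(provider.relevance, name="Answer Relevance")
    .on_input_output()
)
```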

## Aggregation specification

The last line `aggregate(numpy.mean)` specifies
how feedback outputs are to be aggregated. This only applies to cases where
the argument specification names more than one value for an input. The second
specification, for `statement` was of this type. The input to `aggregate` must
be a method which can be imported globally. This requirement is further
elaborated in the next section. This function is called on the `float` results
of feedback function evaluations to produce a single float. The default is
`numpy.mean`.

Read more about [feedback aggregation](../../feedback_aggregation/).
The last line `aggregate(numpy.mean)` specifies how feedback outputs are to be
aggregated. This only applies to cases where the argument specification names
more than one value for an input. The second specification, for `statement` was
of this type. The input to
[aggregate][trulens_eval.feedback.feedback.Feedback.aggregate] must be a method
which can be imported globally. This requirement is further elaborated in the
next section. This function is called on the `float` results of feedback
function evaluations to produce a single float. The default is
[numpy.mean][numpy.mean].

Read more about [feedback aggregation](../feedback_aggregation/index.md).
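
As a sketch of a non-default aggregation (assuming the same `provider` and `retrieve`-based app as above, and using `numpy.min` purely for illustration so the feedback reports the worst-scoring context chunk), the aggregation callable is swapped in the same place:

```python
import numpy

from trulens_eval import Feedback, Select

# Same selectors as before; only the aggregation changes. The callable
# passed to `aggregate` must be importable globally, as noted above.
f_context_relevance_strict = (
    Feedback(
        provider.context_relevance_with_cot_reasons,
        name="Context Relevance (min)"
    )
    .on(Select.RecordCalls.retrieve.args.query)
    .on(Select.RecordCalls.retrieve.rets)
    .aggregate(numpy.min)
)
```
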
2 changes: 1 addition & 1 deletion docs/trulens_explain/api/attribution.md
@@ -1,3 +1,3 @@
# Attribution Methods

::: trulens_explain.trulens.nn.attribution
::: trulens.nn.attribution
2 changes: 1 addition & 1 deletion docs/trulens_explain/api/distributions.md
@@ -1,3 +1,3 @@
# Distributions of Interest

::: trulens_explain.trulens.nn.distributions
::: trulens.nn.distributions
2 changes: 1 addition & 1 deletion docs/trulens_explain/api/model_wrappers.md
@@ -1,3 +1,3 @@
# Model Wrappers

::: trulens_explain.trulens.nn.models
::: trulens.nn.models
2 changes: 1 addition & 1 deletion docs/trulens_explain/api/quantities.md
@@ -1,3 +1,3 @@
# Quantities of Interest

::: trulens_explain.trulens.nn.quantities
::: trulens.nn.quantities
2 changes: 1 addition & 1 deletion docs/trulens_explain/api/slices.md
@@ -1,3 +1,3 @@
# Slices

::: trulens_explain.trulens.nn.slices
::: trulens.nn.slices
2 changes: 1 addition & 1 deletion docs/trulens_explain/api/visualizations.md
@@ -1,3 +1,3 @@
# Visualization Methods

::: trulens_explain.trulens.visualizations
::: trulens.visualizations
6 changes: 3 additions & 3 deletions mkdocs.yml
@@ -188,7 +188,7 @@ nav:
# PLACEHOLDER: - trulens_eval/evaluation/index.md
- ☔ Feedback Functions:
- trulens_eval/evaluation/feedback_functions/index.md
- Anatomy of a Feedback Function: trulens_eval/evaluation/feedback_functions/anatomy.md
- 🦴 Anatomy of a Feedback Function: trulens_eval/evaluation/feedback_functions/anatomy.md
- Feedback Implementations:
- trulens_eval/evaluation/feedback_implementations/index.md
- 🧰 Stock Feedback Functions: trulens_eval/evaluation/feedback_implementations/stock.md
@@ -240,7 +240,7 @@ nav:
- 🦙 TruLlama: trulens_eval/api/app/trullama.md
- TruRails: trulens_eval/api/app/trurails.md
- TruCustom: trulens_eval/api/app/trucustom.md
- TruVirtual: trulens_eval/api/app/truvirtual.md
- TruVirtual: trulens_eval/api/app/truvirtual.md
- Feedback: trulens_eval/api/feedback.md
- 💾 Record: trulens_eval/api/record.md
- Provider:
@@ -259,7 +259,7 @@ nav:
- 𝄢 Instruments: trulens_eval/api/instruments.md
- 🗄 Database: trulens_eval/api/db.md
- Utils:
- trulens_eval/api/utils/index.md
# - trulens_eval/api/utils/index.md
- trulens_eval/api/utils/python.md
- trulens_eval/api/utils/serial.md
- trulens_eval/api/utils/json.md
