
Commit 5cf4975

Context Recall (#96)
## What
Context recall estimation using annotated answers as ground truth.

## Why
Context recall was a highly requested feature, as it is one of the main pain points where pipeline errors occur in RAG systems.

## How
Introduced a simple paradigm similar to faithfulness.

---------

Co-authored-by: jjmachan <[email protected]>
1 parent ec2a34b commit 5cf4975
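The paradigm itself is not spelled out above, but the idea can be sketched roughly as follows: split the annotated answer into statements, check how many of them can be attributed to the retrieved context, and take that fraction as the recall. A minimal, hypothetical sketch (the function and the `classify` judge are illustrative stand-ins, not the code added in this commit):

```python
# Hypothetical sketch of the context-recall idea; NOT the implementation in this commit.
def context_recall(ground_truth: str, contexts: list[str], classify) -> float:
    """classify(statement, context) -> bool stands in for an LLM judgement."""
    # naive statement splitting on sentence boundaries
    statements = [s.strip() for s in ground_truth.split(".") if s.strip()]
    context = "\n".join(contexts)
    # count statements that the judge attributes to the retrieved context
    supported = sum(classify(s, context) for s in statements)
    return supported / len(statements) if statements else 0.0
```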

File tree

11 files changed (+803 -463 lines)


README.md

+4 -2
@@ -91,9 +91,11 @@ Ragas measures your pipeline's performance against different dimensions

 2. **Context Relevancy**: measures how relevant retrieved contexts are to the question. Ideally, the context should only contain information necessary to answer the question. The presence of redundant information in the context is penalized.

-3. **Answer Relevancy**: refers to the degree to which a response directly addresses and is appropriate for a given question or context. This does not take the factuality of the answer into consideration but rather penalizes the presence of redundant information or incomplete answers given a question.
+3. **Context Recall**: measures the recall of the retrieved context using the annotated answer as ground truth. The annotated answer is taken as a proxy for the ground-truth context.

-4. **Aspect Critiques**: Designed to judge the submission against defined aspects like harmlessness, correctness, etc. You can also define your own aspect and validate the submission against your desired aspect. The output of aspect critiques is always binary.
+4. **Answer Relevancy**: refers to the degree to which a response directly addresses and is appropriate for a given question or context. This does not take the factuality of the answer into consideration but rather penalizes the presence of redundant information or incomplete answers given a question.
+
+5. **Aspect Critiques**: Designed to judge the submission against defined aspects like harmlessness, correctness, etc. You can also define your own aspect and validate the submission against your desired aspect. The output of aspect critiques is always binary.

 The final `ragas_score` is the harmonic mean of individual metric scores.

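For reference, the harmonic-mean aggregation mentioned in the last line can be reproduced with the standard library; the scores below are purely illustrative, not real results:

```python
from statistics import harmonic_mean

# illustrative per-metric scores, not actual evaluation output
scores = {"faithfulness": 1.0, "context_relevancy": 0.8,
          "context_recall": 0.9, "answer_relevancy": 0.7}
ragas_score = harmonic_mean(list(scores.values()))  # ≈ 0.84
```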
docs/integrations/langchain.ipynb

+159 -29
@@ -25,6 +25,17 @@
 "nest_asyncio.apply()"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": 2,
+"id": "8333f65e",
+"metadata": {},
+"outputs": [],
+"source": [
+"%load_ext autoreload\n",
+"%autoreload 2"
+]
+},
 {
 "cell_type": "markdown",
 "id": "842e32dc",
@@ -35,7 +46,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": 3,
 "id": "4aa9a986",
 "metadata": {},
 "outputs": [],
@@ -51,23 +62,23 @@
 "\n",
 "llm = ChatOpenAI()\n",
 "qa_chain = RetrievalQA.from_chain_type(\n",
-" llm, retriever=index.vectorstore.as_retriever(), return_source_documents=True\n",
+" llm, retriever=index.vectorstore.as_retriever(), return_source_documents=True,\n",
 ")"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": 4,
 "id": "b0ebdf8d",
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
-"'New York City was named in honor of the Duke of York, who would become King James II of England. King Charles II appointed the Duke as proprietor of the former territory of New Netherland, including the city of New Amsterdam, when England seized it from Dutch control.'"
+"'New York City got its name in 1664 when it was renamed after the Duke of York, who later became King James II of England. The city was originally called New Amsterdam by Dutch colonists and was renamed New York when it came under British control.'"
 ]
 },
-"execution_count": 3,
+"execution_count": 4,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -90,7 +101,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 5,
 "id": "e67ce0e0",
 "metadata": {},
 "outputs": [],
@@ -103,7 +114,16 @@
 " \"What is the significance of the Statue of Liberty in New York City?\",\n",
 "]\n",
 "\n",
-"queries = [{\"query\": q} for q in eval_questions]"
+"eval_answers = [\n",
+" \"8,804,000\", # incorrect answer\n",
+" \"Queens\", # incorrect answer\n",
+" \"New York City's economic significance is vast, as it serves as the global financial capital, housing Wall Street and major financial institutions. Its diverse economy spans technology, media, healthcare, education, and more, making it resilient to economic fluctuations. NYC is a hub for international business, attracting global companies, and boasts a large, skilled labor force. Its real estate market, tourism, cultural industries, and educational institutions further fuel its economic prowess. The city's transportation network and global influence amplify its impact on the world stage, solidifying its status as a vital economic player and cultural epicenter.\",\n",
+" \"New York City got its name when it came under British control in 1664. King Charles II of England granted the lands to his brother, the Duke of York, who named the city New York in his own honor.\",\n",
+" 'The Statue of Liberty in New York City holds great significance as a symbol of the United States and its ideals of liberty and peace. It greeted millions of immigrants who arrived in the U.S. by ship in the late 19th and early 20th centuries, representing hope and freedom for those seeking a better life. It has since become an iconic landmark and a global symbol of cultural diversity and freedom.',\n",
+"]\n",
+"\n",
+"examples = [{\"query\": q, \"ground_truths\": [eval_answers[i]]} \n",
+" for i, q in enumerate(eval_questions)]"
 ]
 },
 {
@@ -126,18 +146,63 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": 10,
+"id": "8f89d719",
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"'The Statue of Liberty in New York City holds great significance as a symbol of the United States and its ideals of liberty and peace. It greeted millions of immigrants who arrived in the U.S. by ship in the late 19th and early 20th centuries, representing hope and freedom for those seeking a better life. It has since become an iconic landmark and a global symbol of cultural diversity and freedom.'"
+]
+},
+"execution_count": 10,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"result = qa_chain({\"query\": eval_questions[4]})\n",
+"result[\"result\"]"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 16,
+"id": "81fa9c47",
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"'The borough of Brooklyn (Kings County) has the highest population in New York City.'"
+]
+},
+"execution_count": 16,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"result = qa_chain(examples[1])\n",
+"result[\"result\"]"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 8,
 "id": "1d9266d4",
 "metadata": {},
 "outputs": [],
 "source": [
 "from ragas.langchain.evalchain import RagasEvaluatorChain\n",
-"from ragas.metrics import faithfulness, answer_relevancy, context_relevancy\n",
+"from ragas.metrics import faithfulness, answer_relevancy, context_relevancy, context_recall\n",
 "\n",
 "# create evaluation chains\n",
 "faithfulness_chain = RagasEvaluatorChain(metric=faithfulness)\n",
 "answer_rel_chain = RagasEvaluatorChain(metric=answer_relevancy)\n",
-"context_rel_chain = RagasEvaluatorChain(metric=context_relevancy)"
+"context_rel_chain = RagasEvaluatorChain(metric=context_relevancy)\n",
+"context_recall_chain = RagasEvaluatorChain(metric=context_recall)"
 ]
 },
 {
@@ -152,17 +217,17 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 17,
 "id": "5ede32cd",
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
-"1.0"
+"0.5"
 ]
 },
-"execution_count": 6,
+"execution_count": 17,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -172,6 +237,28 @@
 "eval_result[\"faithfulness_score\"]"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": 18,
+"id": "94b5544e",
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"0.0"
+]
+},
+"execution_count": 18,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"eval_result = context_recall_chain(result)\n",
+"eval_result[\"context_recall_score\"]"
+]
+},
 {
 "cell_type": "markdown",
 "id": "f11295b5",
@@ -184,7 +271,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": 24,
 "id": "1ce7bff1",
 "metadata": {},
 "outputs": [
@@ -199,31 +286,73 @@
 "name": "stderr",
 "output_type": "stream",
 "text": [
-"100%|█████████████████████████████████████████████████████████████| 1/1 [00:38<00:00, 38.77s/it]\n"
+"100%|█████████████████████████████████████████████████████████████| 1/1 [00:57<00:00, 57.41s/it]\n"
 ]
 },
 {
 "data": {
 "text/plain": [
 "[{'faithfulness_score': 1.0},\n",
 " {'faithfulness_score': 0.5},\n",
-" {'faithfulness_score': 0.75},\n",
+" {'faithfulness_score': 1.0},\n",
 " {'faithfulness_score': 1.0},\n",
 " {'faithfulness_score': 1.0}]"
 ]
 },
-"execution_count": 7,
+"execution_count": 24,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
 "# run the queries as a batch for efficiency\n",
-"predictions = qa_chain.batch(queries)\n",
+"predictions = qa_chain.batch(examples)\n",
 "\n",
 "# evaluate\n",
 "print(\"evaluating...\")\n",
-"r = faithfulness_chain.evaluate(queries, predictions)\n",
+"r = faithfulness_chain.evaluate(examples, predictions)\n",
+"r"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 25,
+"id": "55299f14",
+"metadata": {},
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"evaluating...\n"
+]
+},
+{
+"name": "stderr",
+"output_type": "stream",
+"text": [
+"100%|█████████████████████████████████████████████████████████████| 1/1 [00:54<00:00, 54.21s/it]\n"
+]
+},
+{
+"data": {
+"text/plain": [
+"[{'context_recall_score': 0.9333333333333333},\n",
+" {'context_recall_score': 0.0},\n",
+" {'context_recall_score': 1.0},\n",
+" {'context_recall_score': 1.0},\n",
+" {'context_recall_score': 1.0}]"
+]
+},
+"execution_count": 25,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"# evaluate context recall\n",
+"print(\"evaluating...\")\n",
+"r = context_recall_chain.evaluate(examples, predictions)\n",
 "r"
 ]
 },
@@ -244,15 +373,15 @@
 },
 {
 "cell_type": "code",
-"execution_count": 8,
+"execution_count": 48,
 "id": "e75144c5",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"using existing dataset: NYC test\n"
+"Created a new dataset: NYC test\n"
 ]
 }
 ],
@@ -274,9 +403,10 @@
 " dataset = client.create_dataset(\n",
 " dataset_name=dataset_name, description=\"NYC test dataset\"\n",
 " )\n",
-" for q in eval_questions:\n",
+" for e in examples:\n",
 " client.create_example(\n",
-" inputs={\"query\": q},\n",
+" inputs={\"query\": e[\"query\"]},\n",
+" outputs={\"ground_truths\": e[\"ground_truths\"]},\n",
 " dataset_id=dataset.id,\n",
 " )\n",
 "\n",
@@ -297,7 +427,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
+"execution_count": 27,
 "id": "3a6decc6",
 "metadata": {},
 "outputs": [],
@@ -322,27 +452,27 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": 49,
 "id": "25f7992f",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"View the evaluation results for project '2023-08-22-19-28-17-RetrievalQA' at:\n",
-"https://smith.langchain.com/projects/p/2133d672-b69a-4091-bc96-a4e39d150db5?eval=true\n"
+"View the evaluation results for project '2023-08-24-03-36-45-RetrievalQA' at:\n",
+"https://smith.langchain.com/projects/p/9fb78371-150e-49cc-a927-b1247fdb9e8d?eval=true\n"
 ]
 }
 ],
 "source": [
 "from langchain.smith import RunEvalConfig, run_on_dataset\n",
 "\n",
 "evaluation_config = RunEvalConfig(\n",
-" custom_evaluators=[faithfulness_chain, answer_rel_chain, context_rel_chain],\n",
+" custom_evaluators=[faithfulness_chain, answer_rel_chain, context_rel_chain, context_recall_chain],\n",
 " prediction_key=\"result\",\n",
 ")\n",
-"\n",
+" \n",
 "result = run_on_dataset(\n",
 " client,\n",
 " dataset_name,\n",

docs/metrics.md

+16
@@ -30,6 +30,22 @@ dataset: Dataset
 results = context_rel.score(dataset)
 ```

+### `Context Recall`
+Measures the recall of the retrieved context using the annotated answer as ground truth. The annotated answer is taken as a proxy for the ground-truth context.
+
+```python
+from ragas.metrics.context_recall import ContextRecall
+context_recall = ContextRecall()
+# Dataset({
+#     features: ['contexts','ground_truths'],
+#     num_rows: 25
+# })
+dataset: Dataset
+
+results = context_recall.score(dataset)
+```
+
+
 ### `AnswerRelevancy`

 This measures how relevant the generated answer is to the prompt. If the generated answer is incomplete or contains redundant information, the score will be low. This is quantified by working out the chance of an LLM generating the given question using the generated answer. Values range (0,1); the higher, the better.

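The `score` call above expects a Hugging Face `datasets.Dataset` with `contexts` and `ground_truths` columns; a minimal sketch of constructing one (row contents are made up for illustration, and the exact column requirements are inferred from the snippet above):

```python
from datasets import Dataset
from ragas.metrics.context_recall import ContextRecall

# one illustrative row: retrieved passages plus the annotated (ground-truth) answer
dataset = Dataset.from_dict({
    "contexts": [["New Amsterdam was renamed New York in 1664 after the Duke of York."]],
    "ground_truths": [["The city was renamed New York in honor of the Duke of York in 1664."]],
})

context_recall = ContextRecall()
results = context_recall.score(dataset)
```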