
Commit 7eec750

feat: Test generation for non-english corpus (#1734)
1 parent 99d9425 commit 7eec750

File tree: 8 files changed, +515 -24 lines

docs/howtos/customizations/index.md (+1 -1)

@@ -15,7 +15,7 @@ How to customize various aspects of Ragas to suit your needs.

 ## Testset Generation
-
+- [Generate test data from non-english corpus](testgenerator/_language_adaptation.md)
 - [Configure or automatically generate Personas](testgenerator/_persona_generator.md)
 - [Customize single-hop queries for RAG evaluation](testgenerator/_testgen-custom-single-hop.md)
 - [Create custom multi-hop queries for RAG evaluation](testgenerator/_testgen-customisation.md)

testgenerator/_language_adaptation.md (new file, +155 lines)

## Synthetic test generation from non-English corpus

In this notebook, you'll learn how to adapt synthetic test data generation to non-English corpus settings. For the sake of this tutorial, I am generating queries in Spanish from Spanish Wikipedia articles.

### Download and Load corpus

```python
! git clone https://huggingface.co/datasets/explodinggradients/Sample_non_english_corpus
```

    Cloning into 'Sample_non_english_corpus'...
    remote: Enumerating objects: 12, done.
    remote: Counting objects: 100% (8/8), done.
    remote: Compressing objects: 100% (8/8), done.
    remote: Total 12 (delta 0), reused 0 (delta 0), pack-reused 4 (from 1)
    Unpacking objects: 100% (12/12), 11.43 KiB | 780.00 KiB/s, done.

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader

path = "Sample_non_english_corpus/"
loader = DirectoryLoader(path, glob="**/*.txt")
docs = loader.load()
```

    /opt/homebrew/Caskroom/miniforge/base/envs/ragas/lib/python3.9/site-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.20) or chardet (5.2.0)/charset_normalizer (None) doesn't match a supported version!
      warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "

```python
len(docs)
```

    6

### Initialize required models

```python
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAIEmbeddings

generator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))
generator_embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())
```

    /opt/homebrew/Caskroom/miniforge/base/envs/ragas/lib/python3.9/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
      from .autonotebook import tqdm as notebook_tqdm

### Set up Persona and transforms

You may automatically create personas using this [notebook](./_persona_generator.md). For the sake of simplicity, I am using a single pre-defined persona, two basic transforms, and a simple single-hop specific query distribution.

```python
from ragas.testset.persona import Persona

personas = [
    Persona(
        name="curious student",
        role_description="A student who is curious about the world and wants to learn more about different cultures and languages",
    ),
]
```

```python
from ragas.testset.transforms.extractors.llm_based import NERExtractor
from ragas.testset.transforms.splitters import HeadlineSplitter

transforms = [HeadlineSplitter(), NERExtractor()]
```
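
Note that `HeadlineSplitter` can only split nodes that already carry a `headlines` property; with only the two transforms above it logs "unable to apply transformation" warnings during generation, as you can see in the output further down. If you want the splitter to actually take effect, you can run a headline extractor first. The snippet below is a minimal sketch of that idea; it assumes a `HeadlinesExtractor` is available in the same `llm_based` extractors module and accepts an `llm` argument, so verify the import against your installed Ragas version.

```python
# Illustrative only: extract a "headlines" property before splitting so that
# HeadlineSplitter has something to split on. HeadlinesExtractor and its llm
# argument are assumptions to verify against your Ragas version.
from ragas.testset.transforms.extractors.llm_based import HeadlinesExtractor

transforms_with_headlines = [
    HeadlinesExtractor(llm=generator_llm),  # adds a "headlines" property to each document node
    HeadlineSplitter(),                     # splits documents on the extracted headlines
    NERExtractor(),                         # tags named entities used during query synthesis
]
```
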
### Initialize test generator

```python
from ragas.testset import TestsetGenerator

generator = TestsetGenerator(
    llm=generator_llm, embedding_model=generator_embeddings, persona_list=personas
)
```

### Load and Adapt Queries

Here we load the required query types and adapt them to the target language.

```python
from ragas.testset.synthesizers.single_hop.specific import (
    SingleHopSpecificQuerySynthesizer,
)

distribution = [
    (SingleHopSpecificQuerySynthesizer(llm=generator_llm), 1.0),
]

for query, _ in distribution:
    prompts = await query.adapt_prompts("spanish", llm=generator_llm)
    query.set_prompts(**prompts)
```
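
Optionally, you can sanity-check the adaptation before generating. The sketch below assumes the synthesizer exposes its prompts via `get_prompts()` (the counterpart of the `set_prompts()` call above) and that each prompt object carries an `instruction` string; check both names against your Ragas version.

```python
# Illustrative sanity check: print the first line of each adapted prompt
# instruction. Assumes get_prompts() and an `instruction` attribute exist.
for query, _ in distribution:
    for name, prompt in query.get_prompts().items():
        first_line = prompt.instruction.splitlines()[0]
        print(f"{name}: {first_line}")
```
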
### Generate

```python
dataset = generator.generate_with_langchain_docs(
    docs[:],
    testset_size=5,
    transforms=transforms,
    query_distribution=distribution,
)
```

    Applying HeadlineSplitter: 0%| | 0/6 [00:00<?, ?it/s]unable to apply transformation: 'headlines' property not found in this node
    unable to apply transformation: 'headlines' property not found in this node
    unable to apply transformation: 'headlines' property not found in this node
    unable to apply transformation: 'headlines' property not found in this node
    unable to apply transformation: 'headlines' property not found in this node
    unable to apply transformation: 'headlines' property not found in this node
    Generating Scenarios: 100%|██████████| 1/1 [00:07<00:00, 7.75s/it]
    Generating Samples: 100%|██████████| 5/5 [00:03<00:00, 1.65it/s]

```python
eval_dataset = dataset.to_evaluation_dataset()
```

```python
print("Query:", eval_dataset[0].user_input)
print("Reference:", eval_dataset[0].reference)
```

    Query: Quelles sont les caractéristiques du Bronx en tant que borough de New York?
    Reference: Le Bronx est l'un des cinq arrondissements de New York, qui est la plus grande ville des États-Unis. Bien que le contexte ne fournisse pas de détails spécifiques sur le Bronx, il mentionne que New York est une ville cosmopolite avec de nombreux quartiers ethniques, ce qui pourrait inclure des caractéristiques culturelles variées présentes dans le Bronx.

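
To look at all generated samples at once, or to keep them for a later evaluation run, the testset can be turned into a pandas DataFrame. This is a minimal sketch assuming the generated dataset exposes `to_pandas()` (and that pandas is installed); the CSV filename is just an illustrative choice.

```python
# Illustrative: inspect every generated sample and persist it for later use.
# Assumes the dataset object provides to_pandas(); the output path is arbitrary.
df = dataset.to_pandas()
print(df.head())
df.to_csv("spanish_testset.csv", index=False)
```
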
That's it. You can customize the test generation process as per your requirements.
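
For example, here is one possible tweak that reuses only the APIs shown above: add a second persona and request a larger testset with the same adapted query distribution. The extra persona is purely illustrative.

```python
# Illustrative customization: a second persona and a larger testset,
# reusing the adapted single-hop query distribution from earlier.
personas.append(
    Persona(
        name="history teacher",
        role_description="A teacher who likes to quiz students about places, people and historical events",
    )
)

generator = TestsetGenerator(
    llm=generator_llm, embedding_model=generator_embeddings, persona_list=personas
)

larger_dataset = generator.generate_with_langchain_docs(
    docs,
    testset_size=10,
    transforms=transforms,
    query_distribution=distribution,
)
```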
