Dataset Viewer issue: DatasetGenerationError
The same issue shows up in two places: loading the dataset with load_dataset fails with the same error that the dataset viewer reports.
Issue 1: load_dataset fails
I get the following error when I try to load the dataset with:
fleursR = load_dataset("google/fleurs-r", cache_dir="fleurs-r_dataset")
Generating train split: 0 examples [00:00, ? examples/s]
Failed to read file '/root/.cache/huggingface/hub/datasets--google--fleurs-r/snapshots/c621c0b7b569dcebcd50273a187a35d1a1fc895f/data/am_et/train.tsv' with error <class 'pandas.errors.ParserError'>: Error tokenizing data. C error: EOF inside string starting at row 1917
Generating train split: 1024 examples [00:00, 12498.23 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1854, in _prepare_split_single
for _, table in generator:
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/csv/csv.py", line 190, in _generate_tables
for batch_idx, df in enumerate(csv_file_reader):
File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1843, in __next__
return self.get_chunk()
File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1985, in get_chunk
return self.read(nrows=size)
File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1923, in read
) = self._engine.read( # type: ignore[attr-defined]
File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/c_parser_wrapper.py", line 234, in read
chunks = self._reader.read_low_memory(nrows)
File "parsers.pyx", line 850, in pandas._libs.parsers.TextReader.read_low_memory
File "parsers.pyx", line 905, in pandas._libs.parsers.TextReader._read_rows
File "parsers.pyx", line 874, in pandas._libs.parsers.TextReader._tokenize_rows
File "parsers.pyx", line 891, in pandas._libs.parsers.TextReader._check_tokenize_status
File "parsers.pyx", line 2061, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 1917
Inside that file, data/am_et/train.tsv, there is no obviously bad sample. When I delete samples just to get load_dataset to run through, I get different errors about pandas columns not lining up.
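To locate the offending row, one option is to scan the TSV for lines with an unbalanced double quote. This is a diagnostic sketch, assuming the parser's default quotechar `"` is what swallows the rows: an unpaired `"` makes pandas' C parser treat everything up to the next quote as one field, which is exactly what produces "EOF inside string" at the end of the file.

```python
# Diagnostic sketch: report line numbers whose double-quote count is odd.
# A line with an unbalanced '"' starts a quoted field that runs across
# row boundaries until the parser hits EOF.
def find_unbalanced_quotes(path):
    bad_rows = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if line.count('"') % 2 == 1:
                bad_rows.append(lineno)
    return bad_rows
```

Note that a row can also be "bad" because of an earlier unbalanced quote, so the first line this reports is the one to inspect, not necessarily the row number pandas prints.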
Issue 2: The dataset viewer fails
The same issue shows up in the HF dataset viewer on the dataset page, which reports "The dataset viewer is not working." and "The full dataset viewer is not available. Only showing a preview of the rows."
Error details:
Error code: DatasetGenerationError
Exception: ParserError
Message: Error tokenizing data. C error: EOF inside string starting at row 1917
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
for _, table in generator:
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 797, in wrapped
for item in generator(*args, **kwargs):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/csv/csv.py", line 195, in _generate_tables
for batch_idx, df in enumerate(csv_file_reader):
File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1843, in __next__
return self.get_chunk()
File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1985, in get_chunk
return self.read(nrows=size)
File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1923, in read
) = self._engine.read( # type: ignore[attr-defined]
File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 234, in read
chunks = self._reader.read_low_memory(nrows)
File "parsers.pyx", line 850, in pandas._libs.parsers.TextReader.read_low_memory
File "parsers.pyx", line 905, in pandas._libs.parsers.TextReader._read_rows
File "parsers.pyx", line 874, in pandas._libs.parsers.TextReader._tokenize_rows
File "parsers.pyx", line 891, in pandas._libs.parsers.TextReader._check_tokenize_status
File "parsers.pyx", line 2061, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 1917
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1524, in compute_config_parquet_and_info_response
parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1099, in stream_convert_to_parquet
builder._prepare_split(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
The ParserError can be worked around by setting quoting=3 (csv.QUOTE_NONE, which disables quote handling), as in:
fleursR = load_dataset("google/fleurs-r", quoting=3, cache_dir="fleurs-r_dataset")
but that leads to a new error:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1870, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py", line 622, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 2292, in table_cast
return cast_table_to_schema(table, schema)
File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 2240, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
1427: int64
14962900514380077605.wav: string
α¨αα΅α₯ ααα¨α αα²α«αα ααα³α αα½ααα’ α₯αα° αα΅α³αα α«α ααααΆα½ αα£α αααα« αα°α¨αα£αΈα α¨ααα½α α α α ααα³ α α ααα ααΈαα’: string
α¨αα΅α₯ ααα¨α αα²α«αα ααα³α αα½αα α₯αα° αα΅α³αα α«α ααααΆα½ αα£α αααα« αα°α¨αα£αΈα α¨ααα½α α α α ααα³ α α ααα ααΈα: string
α¨ α α΅ α₯ | α α α¨ α | α α² α« α α | α α α³ α | α α½ α α | α₯ α α° | α α΅ α³ α α | α« α | α α α αΆ α½ | α α£ α | α α α α« | α α° α¨ α α£ αΈ α | α¨ α α α½ α | α α α | α α α³ | α α α α α | α αΈ α |: string
279360: int64
MALE: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 3808
to
{'331': Value(dtype='int64', id=None), '14614557722650670755.wav': Value(dtype='string', id=None), 'Niemand anders het ooit meer verskynings gemaak of meer doele as Bobek aangeteken nie.': Value(dtype='string', id=None), 'niemand anders het ooit meer verskynings gemaak of meer doele as bobek aangeteken nie': Value(dtype='string', id=None), 'n i e m a n d | a n d e r s | h e t | o o i t | m e e r | v e r s k y n i n g s | g e m a
a k | o f | m e e r | d o e l e | a s | b o b e k | a a n g e t e k e n | n i e |': Value(dtype='string', id=None), '79200': Value(dtype='int64', id=None), 'FEMALE': Value(dtype='string', id=None)}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 889, in incomplete_dir
yield tmp_dir
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 924, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1000, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1741, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1872, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 7 new columns ({'α¨αα΅α₯ ααα¨α αα²α«αα ααα³α αα½αα α₯αα° αα΅α³αα α«α ααααΆα½ αα£α αααα« αα°α¨αα£αΈα α¨ααα½α α α α ααα³ α α ααα ααΈα', 'MALE', '1427', 'α¨ α α΅ α₯ | α α α¨ α | α α² α« α α | α α α³ α | α α½ α α | α₯ α α° | α α΅ α³ α α | α« α | α α α αΆ α½ | α α£ α | α α α α« | α α° α¨ α α£ αΈ α | α¨ α α α½ α | α α α | α α α³ | α α α α α | α αΈ α |', 'α¨αα΅α₯ ααα¨α αα²α«αα ααα³α αα½ααα’ α₯αα° αα΅α³αα α«α ααααΆα½ αα£α αααα« αα°α¨αα£
αΈα α¨ααα½α α α α ααα³ α α ααα ααΈαα’', '279360', '14962900514380077605.wav'}) and 7 missing columns ({'n i e m a n d | a n d e r s | h e t | o o i t | m e e r | v e r s k y n i n g s | g e m a a k | o f | m e e r | d o e l e | a s | b o b e k | a a n g e t e k e n | n i e |', '331', '79200', 'FEMALE', 'Niemand anders het ooit meer verskynings gemaak of meer doele as Bobek aangeteken nie.', '14614557722650670755.wav', 'niemand anders het
ooit meer verskynings gemaak of meer doele as bobek aangeteken nie'}).
This happened while the csv dataset builder was generating data using
hf://datasets/google/fleurs-r/data/am_et/train.tsv (at revision c621c0b7b569dcebcd50273a187a35d1a1fc895f)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
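The cast error itself hints at the root cause: the clashing column "names" are actually data values (a sentence, a `.wav` filename, MALE/FEMALE), which suggests the TSV files have no header row, so pandas promotes the first data row of each file to column names, and those naturally differ per language. A sketch of reading one TSV directly with pandas under that assumption; the column names here are hypothetical, inferred from the values shown in the error, and the real FLEURS-R schema may differ:

```python
import csv
import pandas as pd

# Hypothetical column names guessed from the values in the cast error
# (id, filename, raw transcription, normalized transcription,
# char-level transcription, sample count, gender).
FLEURS_COLUMNS = ["id", "file_name", "raw_transcription",
                  "transcription", "chars", "num_samples", "gender"]

def read_fleurs_tsv(path):
    # header=None stops pandas from promoting the first data row to
    # column names; quoting=csv.QUOTE_NONE (i.e. quoting=3) disables
    # quote handling so a stray '"' cannot swallow rows.
    return pd.read_csv(path, sep="\t", header=None,
                       names=FLEURS_COLUMNS, quoting=csv.QUOTE_NONE)
```

With both header=None and quoting set, the two errors above should not occur, since no row is ever interpreted as a header and no quote can span rows.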
The solution for now: use git clone.
>> sudo apt install git-lfs
>> git lfs install
>> git clone https://huggingface.co/datasets/google/fleurs-r
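Once cloned, the TSVs can be parsed directly with pandas instead of going through the csv builder. A minimal sketch, assuming the data/<lang>/<split>.tsv layout visible in the tracebacks above; the column names are hypothetical, inferred from the data values that leaked into the cast error:

```python
import csv
import glob
import os
import pandas as pd

# Hypothetical column names inferred from the error messages; the
# official FLEURS-R schema may differ.
COLUMNS = ["id", "file_name", "raw_transcription",
           "transcription", "chars", "num_samples", "gender"]

def load_fleurs_split(repo_dir, split="train"):
    """Read every data/<lang>/<split>.tsv in the cloned repo into one frame."""
    frames = []
    pattern = os.path.join(repo_dir, "data", "*", f"{split}.tsv")
    for path in sorted(glob.glob(pattern)):
        # quoting=csv.QUOTE_NONE (quoting=3) avoids the ParserError;
        # header=None avoids the per-file column-name mismatch.
        df = pd.read_csv(path, sep="\t", header=None, names=COLUMNS,
                         quoting=csv.QUOTE_NONE)
        df["lang"] = os.path.basename(os.path.dirname(path))  # e.g. "am_et"
        frames.append(df)
    return pd.concat(frames, ignore_index=True)
```

The resulting frame can then be turned into a datasets.Dataset with Dataset.from_pandas if needed.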
Do you happen to have a modified version of the dataset that can be loaded without these issues? What would be needed to fix the data files?