
ASR-NL-benchmark

Description

ASR-NL-benchmark is a Python package to evaluate and compare the performance of speech-to-text for the Dutch language. Universities and Dutch media companies joined forces to develop this package, which makes it easier to compare the performance of various open-source or commercial speech-to-text solutions on Dutch broadcast media. The package wraps around the well-known sclite tool (part of SCTK), which has been used for decades in the speech-to-text benchmark evaluations organised by NIST in the US. Furthermore, the package contains several preprocessing files and connectors to databases.

How to use

How to: Create a reference file

Reference files can be created using tooling such as:

A full annotation protocol can be found here.

Please check the guidelines for the reference file in the section below.

How to: Install

  • Install Docker
  • Pull the Docker image: docker pull asrnlbenchmark/asr-nl-benchmark

How to: Run using the command line only

In order to run the benchmarking tool over a (set of) local hyp and ref file(s), we need Docker to mount the local directory where the input files are located. The output files of the benchmarking tool will appear in that same folder.

The following line runs the benchmarking tool over a local hyp and ref file. Use the absolute path of the mounted directory as the value for SOURCE, the filename of the hyp file for HYPFILENAME, and the filename of the ref file for REFFILENAME.

  • run: docker run -it --mount type=bind,source=SOURCE,target=/input asrnlbenchmark/asr-nl-benchmark:latest python ASR-NL-benchmark/src/app.py -hyp /input/HYPFILENAME ctm -ref /input/REFFILENAME stm
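For example, assuming a hypothetical input directory /home/user/benchmark that contains hyp_1.ctm and ref_1.stm, the call would look like:

  • run: docker run -it --mount type=bind,source=/home/user/benchmark,target=/input asrnlbenchmark/asr-nl-benchmark:latest python ASR-NL-benchmark/src/app.py -hyp /input/hyp_1.ctm ctm -ref /input/ref_1.stm stm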

The results (in .dtl, .spk, and .csv format) can be found inside a folder named 'results', created at the local 'SOURCE' location (see above).

How to: Use the User Interface

To open the User Interface, run the same command as above, but now with the optional argument -interactive set to True:

  • run: docker run -it --mount type=bind,source=SOURCE,target=/input asrnlbenchmark/asr-nl-benchmark:latest python ASR-NL-benchmark/src/app.py -interactive True
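For example, with the same hypothetical input directory /home/user/benchmark as above:

  • run: docker run -it --mount type=bind,source=/home/user/benchmark,target=/input asrnlbenchmark/asr-nl-benchmark:latest python ASR-NL-benchmark/src/app.py -interactive True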

Use a web browser to access the UI by navigating to "http://localhost:5000":

Within the tab Select folder, enter the paths to the hypothesis and reference files:

  • Enter the path of the hyp file or the path to a folder containing a set of hyp files (e.g. "hyp_file.ctm" or "hyp_folder")
  • Enter the path of the ref file or the path to a folder containing a set of ref files (e.g. "ref_file.stm" or "ref_folder")
  • Click submit

A progress bar will appear. As soon as the benchmarking is finished, you will be forwarded to the results. The results (in .dtl, .spk, and .csv format) can be found inside a folder named 'results' at the local 'SOURCE' location (see above).

How to: Interpret the results

The final results are saved in .csv format inside a folder named 'results' stored locally on the 'SOURCE' location (see above). Those results are based upon the .dtl and .spk output files as generated by sclite.

The different output files

  • .dtl files - Detailed Overall Report as returned by sclite
  • .spk files - Per-speaker scoring report as returned by sclite
  • .csv files - Overall results of the benchmarking as shown in the interface

More about the pipeline

Normalisation

Manual transcripts (used as reference files) sometimes contain abbreviations (e.g. "'n" instead of "een"), symbols (e.g. "&" instead of "en") and numbers ("4" instead of "vier"), while the hypothesis files often contain the fully written-out form of the words, or vice versa. Since we do not want to penalise the speech-to-text tooling or algorithm for such differences, we normalise both the reference and the hypothesis files.

Normalisation replacements:

Symbols:

  • '%' => " procent"
  • '°' => " graden"
  • '&' => " en"
  • '€' => " euro"

Double spaces:

  • '  ' => ' '

Numbers (i.a.):

  • 4 => "vier"
  • 4.5 => "vier punt vijf"
  • 4,3 => "vier komma drie"

Combinations (e.g.):

  • 12,3% => 'twaalf komma drie procent'
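The replacement rules above can be pictured as a simple substitution pass. The following is a minimal sketch in Python of how such a pass could work; it is not the package's actual normalisation code, and its toy digit table spells multi-digit numbers digit by digit instead of producing e.g. "twaalf":

import re

# Symbol replacements as listed above (illustrative subset).
SYMBOLS = {"%": " procent", "°": " graden", "&": " en", "€": " euro"}

# A tiny digit-to-word table; the real normalisation also handles
# multi-digit numbers ("12" -> "twaalf"), which this toy version does not.
DIGITS = {"0": "nul", "1": "een", "2": "twee", "3": "drie", "4": "vier",
          "5": "vijf", "6": "zes", "7": "zeven", "8": "acht", "9": "negen"}

def normalize(text):
    # Replace symbols with their spoken Dutch form.
    for symbol, word in SYMBOLS.items():
        text = text.replace(symbol, word)
    # Spell out decimal separators: "4.5" -> "4 punt 5", "4,3" -> "4 komma 3".
    text = re.sub(r"(\d)\.(\d)", r"\1 punt \2", text)
    text = re.sub(r"(\d),(\d)", r"\1 komma \2", text)
    # Spell out the remaining digits one by one.
    text = re.sub(r"\d", lambda m: DIGITS[m.group()], text)
    # Collapse double spaces into single spaces.
    return re.sub(r" {2,}", " ", text).strip()

print(normalize("4,3% van 4.5° & meer"))
# -> "vier komma drie procent van vier punt vijf graden en meer"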

Variation

In order to deal with spelling variations, this tool applies a .glm file to the reference and hypothesis files. This .glm file contains a list of words with their spelling variations and can be found here. Whereas the normalisation step is typically rule-based, the variations are not. Therefore, we invite you to adjust the .glm file and to create a pull request with your additions.

Guidelines

File Naming

In order for the benchmarking tool to match the reference and hypothesis files, both should have exactly the same name. The only two exceptions are:

  1. The file extension (.stm for the reference files and .ctm for the hypothesis files)
  2. In case you are using subcategories (See Benchmarking subcategories).

Benchmarking subcategories

[PLACEHOLDER]

Example:

Without subcategories:

  • program_1.stm
  • program_1.ctm
  • program_2.stm
  • program_2.ctm

With subcategories (sports vs. news):

  • program_1.stm
  • program_1-sports.ctm
  • program_2.stm
  • program_2-news.ctm
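The matching rule illustrated above can be sketched in a few lines of Python. This is purely illustrative; the pair_files helper and its directory arguments are hypothetical and not part of the package:

from pathlib import Path

def match_key(path):
    # Drop the extension (.stm / .ctm) and an optional "-subcategory" suffix,
    # e.g. "program_1-sports.ctm" -> "program_1".
    return path.stem.split("-", 1)[0]

def pair_files(ref_dir, hyp_dir):
    refs = {match_key(p): p for p in Path(ref_dir).glob("*.stm")}
    hyps = {match_key(p): p for p in Path(hyp_dir).glob("*.ctm")}
    # Only names that occur on both sides can be benchmarked.
    return [(refs[key], hyps[key]) for key in sorted(refs.keys() & hyps.keys())]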

Reference file

The reference file is used as the ground truth. To get the best results, the reference file should meet the following guidelines:

  • The reference file should be a Segment Time Mark file (STM), see description below.
  • Words should be written according to the modern Dutch spelling
  • No abbreviations (e.g. use "bijvoorbeeld" instead of "bv." or "bijv."; use "het" instead of "'t")
  • No symbols (use: "procent" instead of: "%")
  • No numbers (write out all numbers: "drie" instead of "3")
  • UTF-8 encoded

In order to create those reference files, we suggest using a transcription tool such as transcriber.

Segment Time Mark (STM)

The Segment Time Mark files, to be used as reference files, consist of a concatenation of time-marked text segment records. The segments are separated by a newline and follow the format: File_id Channel Speaker_id Begin_Time End_Time Transcript

To comment out a line, start the line with ';;'

Example STM

;; Some information you want to comment out like a description
;; More information you want to include and comment out
;; like the name of the transcriber, the version or explanation of labels
Your_favorite_tv_show_2021_S1_E1 A Speaker_01_Female_Native 0.000 1.527 <o, f1, female> The first line
Your_favorite_tv_show_2021_S1_E1 A Speaker_01_Female_Native 1.530 2.127 <o, f1, female> The second text segment
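Given that layout, reading an STM file reduces to splitting each non-comment line into its first five fields plus the transcript. The sketch below is a minimal reader, not the parser used by the package; it assumes the field order described above with an optional <...> label in front of the transcript:

def parse_stm(path):
    """Yield (file_id, channel, speaker_id, begin, end, transcript) per segment."""
    with open(path, encoding="utf-8") as stm:
        for line in stm:
            line = line.strip()
            if not line or line.startswith(";;"):  # skip blank lines and comments
                continue
            file_id, channel, speaker, begin, end, rest = line.split(maxsplit=5)
            # An optional "<...>" label may precede the transcript text.
            if rest.startswith("<"):
                rest = rest.split(">", 1)[1].strip()
            yield file_id, channel, speaker, float(begin), float(end), rest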

Hypothesis file

To get the best results, the hypothesis file (i.e. the output of a speech recognizer) should meet the following guidelines:

  • The hypothesis file should be a Time Marked Conversation file (CTM), see the description below.
  • UTF-8 encoded

CTM Format

The Time Marked Conversation files, to be used as hypothesis files, consist of a concatenation of time-marked word records. Each record is on its own line and follows the format:

File_id Channel Begin_time Duration Word Confidence

To comment out a line, start the line with ';;'

Example CTM

;; Some information you want to comment out like a description
;; More information you want to include and comment out
Your_favorite_tv_show_2021_S1_E1 A 0.000 0.482 The 0.95
Your_favorite_tv_show_2021_S1_E1 A 0.496 0.281 first 0.98
Your_favorite_tv_show_2021_S1_E1 A 1.216 0.311 line 0.88
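Reading a CTM file works the same way, with one word record per line. Again a sketch only, assuming the field order above and treating the confidence column as optional:

def parse_ctm(path):
    """Yield (file_id, channel, begin, duration, word, confidence) per word."""
    with open(path, encoding="utf-8") as ctm:
        for line in ctm:
            line = line.strip()
            if not line or line.startswith(";;"):  # skip blank lines and comments
                continue
            fields = line.split()
            file_id, channel, begin, duration, word = fields[:5]
            # Some recognizers omit the confidence column; treat it as optional.
            confidence = float(fields[5]) if len(fields) > 5 else None
            yield file_id, channel, float(begin), float(duration), word, confidence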

Related Documentation
