Diart is the official implementation of the paper *[Overlap-aware low-latency online speaker diarization based on end-to-end local segmentation](/paper.pdf)* by [Juan Manuel Coria](https://juanmc2005.github.io/), [Hervé Bredin](https://herve.niderb.fr), [Sahar Ghannay](https://saharghannay.github.io/) and [Sophie Rosset](https://perso.limsi.fr/rosset/).
> We propose to address online speaker diarization as a combination of incremental clustering and local diarization applied to a rolling buffer updated every 500ms. Every single step of the proposed pipeline is designed to take full advantage of the strong ability of a recently proposed end-to-end overlap-aware segmentation to detect and separate overlapping speakers. In particular, we propose a modified version of the statistics pooling layer (initially introduced in the x-vector architecture) to give less weight to frames where the segmentation model predicts simultaneous speakers. Furthermore, we derive cannot-link constraints from the initial segmentation step to prevent two local speakers from being wrongfully merged during the incremental clustering step. Finally, we show how the latency of the proposed approach can be adjusted between 500ms and 5s to match the requirements of a particular use case, and we provide a systematic analysis of the influence of latency on the overall performance (on AMI, DIHARD and VoxConverse).
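The weighted statistics pooling idea above can be illustrated with a short sketch. This is a minimal, illustrative version only: the function name and the exact weighting scheme are assumptions for demonstration, not the paper's implementation. The intent is the same, though: frames where the segmentation predicts several simultaneous speakers contribute less to each speaker's pooled embedding.

```python
import numpy as np

def weighted_stats_pooling(frames, speaker_probs):
    """Pool frame-level features into one vector per speaker, weighting
    each frame by its activity for that speaker times the probability
    that no *other* speaker is active there (illustrative sketch).

    frames: (T, D) frame-level features
    speaker_probs: (T, S) per-frame activity probability for S speakers
    returns: (S, 2*D) concatenated weighted mean and std per speaker
    """
    T, S = speaker_probs.shape
    pooled = []
    for s in range(S):
        # Probability that no other speaker overlaps on each frame
        others = np.delete(speaker_probs, s, axis=1)
        no_overlap = np.prod(1.0 - others, axis=1)
        w = speaker_probs[:, s] * no_overlap          # (T,)
        w = w / (w.sum() + 1e-8)                      # normalize weights
        mean = (w[:, None] * frames).sum(axis=0)
        var = (w[:, None] * (frames - mean) ** 2).sum(axis=0)
        pooled.append(np.concatenate([mean, np.sqrt(var + 1e-8)]))
    return np.stack(pooled)
```

With this weighting, a frame where two speakers are both highly active receives a near-zero weight for each of them, which is the overlap-aware behavior the abstract describes.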
<p align="center">
<img height="400" src="https://github.com/juanmc2005/diart/blob/main/figure1.png?raw=true" title="Visual explanation of the system" width="325" />
</p>
## 📗 Citation
If you found diart useful, please make sure to cite our paper:
Diart aims to be lightweight and capable of real-time streaming in practical scenarios.
Its performance is very close to what is reported in the paper (and sometimes even a bit better).
The `diart.benchmark` tool pre-calculates model outputs in batches, so it runs a lot faster than the streaming pipeline.
See `diart.benchmark -h` for more options.
For convenience and to facilitate future comparisons, we also provide the [expected outputs](/expected_outputs) of the paper implementation in RTTM format for every entry of Table 1 and Figure 5. This includes the VBx offline topline as well as our proposed online approach with latencies 500ms, 1s, 2s, 3s, 4s, and 5s.
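The expected outputs are distributed in the standard NIST RTTM format, where each `SPEAKER` record carries a file ID, an onset time, a duration, and a speaker label. For readers unfamiliar with the format, here is a minimal parser sketch (the function name is our own, not part of diart):

```python
def parse_rttm(lines):
    """Extract (file_id, onset, duration, speaker) tuples from the
    SPEAKER records of an RTTM file, skipping comments and other rows.

    RTTM SPEAKER fields: type, file ID, channel, onset, duration,
    <NA>, <NA>, speaker label, <NA>, <NA>.
    """
    segments = []
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue  # skip comments and non-SPEAKER records
        file_id = fields[1]
        onset, duration = float(fields[3]), float(fields[4])
        speaker = fields[7]
        segments.append((file_id, onset, duration, speaker))
    return segments
```

For example, the record `SPEAKER file1 1 0.50 1.20 <NA> <NA> spk0 <NA> <NA>` parses to `("file1", 0.5, 1.2, "spk0")`, i.e. speaker `spk0` talking from 0.5s to 1.7s in `file1`.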