
[Request]: Update NWBIO Class to allow for chunk-by-chunk conversion #1644

Open
@FrancescoNegri

Description

Describe the bug
When Neo is used to convert a proprietary format to .nwb, the source electrophysiological recording is fully loaded into memory, converted, and only then written to the output file.
This is particularly problematic when converting large files, where the process is interrupted once the available memory is exhausted.
Unlike Neo, the neuroconv package converts the original signal in smaller, sequential chunks, avoiding any memory-related issue.
Wouldn't it be appropriate to modify the NWBIO class so that it follows the same approach as neuroconv?
The same consideration applies to any output format, since loading an entire electrophysiological recording into memory is often infeasible.
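
For what it's worth, the read side of Neo already allows bounded-memory access through lazy loading and proxy objects, so the missing piece seems to be on the NWBIO write side. A minimal sketch of the lazy read path, assuming Plexon2IO is the reader used for the .pl2 file (filename and window size are purely illustrative):

```python
import quantities as pq
from neo.io import Plexon2IO

reader = Plexon2IO("recording.pl2")    # illustrative filename
block = reader.read_block(lazy=True)   # returns proxy objects, no signal data loaded yet

proxy = block.segments[0].analogsignals[0]
# load only a bounded time window instead of the whole signal
chunk = proxy.load(time_slice=(0 * pq.s, 60 * pq.s))
```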

To Reproduce
The issue arises when reading any sufficiently large recording and writing it to .nwb. In my case, I tried to export a Plexon .pl2 file of about 50 GB.
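
Below is roughly the conversion path that triggers the problem (filenames illustrative): `read_block()` materialises the entire recording in RAM before anything is written.

```python
from neo.io import Plexon2IO, NWBIO

reader = Plexon2IO("recording.pl2")   # ~50 GB .pl2 file in my case
block = reader.read_block()           # the whole recording is loaded into memory here

writer = NWBIO("recording.nwb", mode="w")
writer.write_block(block)             # writes the fully in-memory block to .nwb
```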

Expected behaviour
As stated above, the desired behaviour would be a chunk-by-chunk conversion that never loads the whole recording into memory, avoiding memory saturation and the resulting process interruption.
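
One possible direction, in the spirit of what neuroconv does: PyNWB/HDMF support iterative writes through `GenericDataChunkIterator`, and Neo's rawio layer already exposes chunk-wise reads via `get_analogsignal_chunk`, so NWBIO could wrap the reader in such an iterator instead of a fully loaded array. A rough sketch, not the actual NWBIO internals (class name, the fixed block/segment/stream indices and the dtype are my assumptions):

```python
import numpy as np
from hdmf.data_utils import GenericDataChunkIterator

class RawIOChunkIterator(GenericDataChunkIterator):
    """Hypothetical iterator serving HDF5-sized chunks straight from a Neo rawio reader."""

    def __init__(self, rawio_reader, stream_index=0, **kwargs):
        self.reader = rawio_reader
        self.stream_index = stream_index
        super().__init__(**kwargs)

    def _get_maxshape(self):
        n_samples = self.reader.get_signal_size(
            block_index=0, seg_index=0, stream_index=self.stream_index)
        n_channels = self.reader.signal_channels_count(self.stream_index)
        return (n_samples, n_channels)

    def _get_dtype(self):
        return np.dtype("float32")

    def _get_data(self, selection):
        # selection is a tuple of slices over the (time, channel) axes
        time_sel, channel_sel = selection
        raw = self.reader.get_analogsignal_chunk(
            block_index=0, seg_index=0, stream_index=self.stream_index,
            i_start=time_sel.start, i_stop=time_sel.stop)
        scaled = self.reader.rescale_signal_raw_to_float(
            raw, dtype="float32", stream_index=self.stream_index)
        return scaled[:, channel_sel]
```

The resulting iterator could then be passed as the `data` argument of an `ElectricalSeries`, so that `NWBHDF5IO.write()` pulls the signal chunk by chunk and memory use stays bounded regardless of the recording size.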

Environment:

  • OS: Ubuntu (Docker container on a Windows host)
  • Python version: 3.12
  • Neo version: 0.14.0
