ECG Analysis bug for sample rate of 500 #593

Open
sametdumankaya opened this issue Dec 10, 2021 · 12 comments
Labels: Data/code required 📅 Reproducibility, wontfix (This will not be worked on)

Comments

@sametdumankaya

I wanted to analyze my ECG data (sampling rate of 500), but I came across this bug and I don't know how to update the 'windows' parameter.

Code:
processed_data, info = nk.bio_process(ecg=df["V3"], sampling_rate=500)
results = nk.bio_analyze(processed_data, sampling_rate=500)

Error:
ValueError: NeuroKit error: the window cannot contain more data points than the time series. Decrease 'windows'.

@DominiqueMakowski
Member

Hi, could you attach a piece of reproducible code and data for us to see what's going on? Thanks

@Soumadip-Saha

Soumadip-Saha commented Feb 8, 2022

Hello @DominiqueMakowski, I am also facing the same issue at a 500 Hz sampling frequency. I have 88000 ECG data points from PhysioNet, recorded at various sampling frequencies. I upsampled or downsampled them so that they are all at 500 Hz. The problem arises when I try to use the NeuroKit ecg_analyze function on them.

df, info = nk.ecg_process(data[1], sampling_rate=500)
R_peaks = info['ECG_R_Peaks']
temp_df = nk.ecg_analyze(df, sampling_rate=500)

Here data contains the twelve leads of the ECG signal and I am passing only lead 2 as data[1]. It raises this error for some of the ECG recordings. I have pasted the error here:
NeuroKit error: the window cannot contain more data points than the time series. Decrease 'windows'.
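
For context, the pipeline described above can be sketched roughly as follows, with a synthetic ECG from nk.ecg_simulate standing in for the PhysioNet lead and 360 Hz used only as an assumed example of one of the original sampling frequencies:

import neurokit2 as nk

# Stand-in for one PhysioNet lead; 360 Hz is only an assumed example rate
raw = nk.ecg_simulate(duration=180, sampling_rate=360, heart_rate=70)

# Resample everything to a common 500 Hz, as described above
lead = nk.signal_resample(raw, sampling_rate=360, desired_sampling_rate=500)

# Same calls as in the comment
df, info = nk.ecg_process(lead, sampling_rate=500)
R_peaks = info['ECG_R_Peaks']
temp_df = nk.ecg_analyze(df, sampling_rate=500)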

@ZacDair

ZacDair commented Mar 20, 2022

Having the same issue utilising the WESAD dataset PPG data (64 Hz) and ECG data (700 Hz), windowed into 10-second segments.
After windowing on Subject 10, I end up with 549 windows of data. ECG has 7000 data points per window, PPG has 640.

Interestingly, the code runs and retrieves HRV measures from the peaks for two iterations on PPG and nearly seventy iterations on ECG, then fails.

I'm happy to supply code if needed.
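
For reference, a rough sketch of that kind of per-window loop (the 10-second windows and the 700 Hz ECG rate are taken from the comment above; the synthetic signal and the try/except are only there to illustrate how the failing window index could be pinned down):

import neurokit2 as nk

sampling_rate = 700                      # WESAD chest ECG rate, per the comment
win = 10 * sampling_rate                 # 10-second windows

ecg = nk.ecg_simulate(duration=120, sampling_rate=sampling_rate)  # stand-in for one subject's ECG

for i in range(len(ecg) // win):
    segment = ecg[i * win : (i + 1) * win]
    try:
        signals, info = nk.ecg_process(segment, sampling_rate=sampling_rate)
        hrv = nk.hrv(info['ECG_R_Peaks'], sampling_rate=sampling_rate)
    except ValueError as error:
        # Windows with too few detected beats raise the "Decrease 'windows'" error
        print(f"window {i} failed: {error}")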

@anshu-97
Collaborator

anshu-97 commented Apr 5, 2022

Hi @ZacDair, apologies for the late reply! Could you please attach your code as well as your data so that we can try to reproduce the error?

@ZaraNaSha

Hi,
I have the same issue. Is there any solution for it?
Best regards.

@ZaraNaSha

Hi everyone, @ZacDair, @Soumadip-Saha
I found the source of this error. It is raised when the number of RR intervals ('RRI') is not the same across the different windows. To work around it, you could replace NeuroKit's internal '_hrv_dfa' function with the following code:
def _hrv_dfa(peaks, rri, out, n_windows="default", **kwargs):

    if "dfa_windows" in kwargs:
        dfa_windows = kwargs["dfa_windows"]
    else:
        print(rri.shape)
        dfa_windows = [(4, rri.shape[0] - 1), (rri.shape[0], None)]
        print(dfa_windows)

    # Determine max beats
    if dfa_windows[1][1] is None:
        max_beats = len(peaks) / 10
    else:
        max_beats = dfa_windows[1][1]

    # No. of windows to compute for short and long term
    if n_windows == "default":
        n_windows_short = int(dfa_windows[0][1] - dfa_windows[0][0] + 1)
        n_windows_long = int(max_beats - dfa_windows[1][0] + 1)
    elif isinstance(n_windows, list):
        n_windows_short = n_windows[0]
        n_windows_long = n_windows[1]

    # Compute DFA alpha1
    short_window = np.linspace(dfa_windows[0][0], dfa_windows[0][1], n_windows_short).astype(int)
    # For monofractal
    print('fractal_dfa')
    out["DFA_alpha1"] = fractal_dfa(rri, multifractal=False, windows=short_window, **kwargs)[0]
    # For multifractal
    mdfa_alpha1 = fractal_dfa(
        rri, multifractal=True, q=np.arange(-5, 6), windows=short_window, **kwargs
    )[1]
    out["DFA_alpha1_ExpRange"] = mdfa_alpha1["ExpRange"]
    out["DFA_alpha1_ExpMean"] = mdfa_alpha1["ExpMean"]
    out["DFA_alpha1_DimRange"] = mdfa_alpha1["DimRange"]
    out["DFA_alpha1_DimMean"] = mdfa_alpha1["DimMean"]

    # Compute DFA alpha2
    # Sanitize max_beats
    if max_beats < dfa_windows[1][0] + 1:
        warn(
            "DFA_alpha2 related indices will not be calculated. "
            "The maximum duration of the windows provided for the long-term correlation is smaller "
            "than the minimum duration of windows. Refer to the `windows` argument in `nk.fractal_dfa()` "
            "for more information.",
            category=NeuroKitWarning,
        )
        return out
    else:
        long_window = np.linspace(dfa_windows[1][0], int(max_beats), n_windows_long).astype(int)
        # For monofractal
        out["DFA_alpha2"] = fractal_dfa(rri, multifractal=False, windows=long_window, **kwargs)[0]
        # For multifractal
        mdfa_alpha2 = fractal_dfa(
            rri, multifractal=True, q=np.arange(-5, 6), windows=long_window, **kwargs
        )[1]
        out["DFA_alpha2_ExpRange"] = mdfa_alpha2["ExpRange"]
        out["DFA_alpha2_ExpMean"] = mdfa_alpha2["ExpMean"]
        out["DFA_alpha2_DimRange"] = mdfa_alpha2["DimRange"]
        out["DFA_alpha2_DimMean"] = mdfa_alpha2["DimMean"]

    return out
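
If patching the library is not an option, a less invasive workaround is to catch the error per segment and skip or flag the affected windows. A minimal sketch, assuming the failure comes from segments with too few RR intervals for the DFA computation (safe_ecg_analyze is just a hypothetical helper name; the matched substring covers both the 'windows' and 'scale' wording seen in this thread):

import neurokit2 as nk

def safe_ecg_analyze(signals, sampling_rate):
    # Hypothetical wrapper: returns None for segments that are too short for the DFA indices
    try:
        return nk.ecg_analyze(signals, sampling_rate=sampling_rate)
    except ValueError as error:
        if "cannot contain more data points than the time series" in str(error):
            return None  # too few RR intervals in this segment; skip it
        raise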

@jarleholt

I have exactly the same problem:
analyze_df=Error during ECG processing: NeuroKit error: the window cannot contain more data points than the time series. Decrease 'scale'

I suggest the issue be reopened, as there is no fix without rewriting part of NeuroKit's functionality, as far as I understand. That defeats the purpose of using NeuroKit.

@DominiqueMakowski
Member

Hi, would you help us fix it by providing reproducible code / data?

@AnupKumarGupta

Hey, any update on this? I am facing a similar error.
