We provide several tools to encourage reproducibility and consistency of results reported in the field of automated seizure detection.
epilepsy2bids#
A Python library to convert EEG datasets of people with epilepsy to EEG-BIDS compatible datasets and to manipulate BIDS files. The converted datasets comply with the ILAE and IFCN minimum recording standards and provide annotations that are HED-SCORE compatible. They are formatted to be operated on by the SzCORE seizure validation framework.
The library provides tools to:
- Convert EEG datasets to BIDS.
- Load and manipulate EDF files.
- Load and manipulate seizure annotation files (a usage sketch follows this list).
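For example, loading a recording and its annotations from a converted dataset might look like the following minimal sketch. The module, class, and method names (Eeg.loadEdf, Annotations.loadTsv) are assumptions for illustration and should be checked against the library documentation.

from pathlib import Path

# NOTE: the module, class and method names below are assumptions used for
# illustration; consult the epilepsy2bids documentation for the actual API.
from epilepsy2bids.eeg import Eeg
from epilepsy2bids.annotations import Annotations

edf_path = Path("BIDS_CHB-MIT/sub-01/ses-01/eeg/sub-01_ses-01_task-szMonitoring_run-00_eeg.edf")
tsv_path = Path(str(edf_path).replace("_eeg.edf", "_events.tsv"))

eeg = Eeg.loadEdf(edf_path)            # assumed loader exposing data, channels and sampling frequency
print(eeg.channels, eeg.fs)

annotations = Annotations.loadTsv(tsv_path)  # assumed loader for HED-SCORE compatible TSV annotations
for event in annotations.events:             # assumed list of seizure events with onset/duration
    print(event)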
Currently, the following datasets are supported:
- PhysioNet CHB-MIT Scalp EEG Database v1.0.0
- KULeuven SeizeIT1
- Siena Scalp EEG Database v1.0.0
- TUH EEG Seizure Corpus
timescoring#
Library for event- and sample-based performance metrics
We built a library that provides different scoring methodologies to compare reference binary annotations (the ground-truth annotations of the neurologist) with hypothesis binary annotations (provided by a machine learning pipeline). These scoring methodologies provide a count of correctly identified events (True Positives), missed events (False Negatives), and wrongly marked events (False Positives).
In more detail, we measure performance at the level of:
- Samples: a performance metric that treats every labelled sample independently.
- Events (e.g. an epileptic seizure): classifies each event in the reference and the hypothesis based on the overlap between the two.
Both methods are illustrated in the following figures:
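Beyond the figures, the following minimal sketch shows how both scoring modes can be computed with the library. The class and parameter names follow the library's README and are best treated as assumptions to verify against the current documentation; the tolerance values mirror the SzCORE event-scoring defaults.

import numpy as np

# NOTE: class and parameter names are taken from the timescoring README and
# are assumptions; verify them against the current library documentation.
from timescoring.annotations import Annotation
from timescoring import scoring

fs = 1                       # one binary label per second
n = 600                      # ten minutes of labels
ref_mask = np.zeros(n, dtype=bool)
hyp_mask = np.zeros(n, dtype=bool)
ref_mask[120:180] = True     # reference seizure from 2:00 to 3:00
hyp_mask[130:170] = True     # detection overlapping the reference seizure
hyp_mask[400:410] = True     # spurious detection (false positive event)

ref = Annotation(ref_mask, fs)   # neurologist ground truth
hyp = Annotation(hyp_mask, fs)   # detector output

# Sample-based scoring: every one-second label is compared independently.
sample_scores = scoring.SampleScoring(ref, hyp)

# Event-based scoring: reference and hypothesis events are matched by overlap,
# with tolerances around event start and end.
param = scoring.EventScoring.Parameters(
    toleranceStart=30,
    toleranceEnd=60,
    minOverlap=0,
    maxEventDuration=5 * 60,
    minDurationBetweenEvents=90,
)
event_scores = scoring.EventScoring(ref, hyp, param)

print(sample_scores.sensitivity, sample_scores.precision, sample_scores.f1)
print(event_scores.sensitivity, event_scores.precision, event_scores.fpRate)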
szcore-evaluation#
Compare SzCORE-compliant annotations of EEG datasets of people with epilepsy
The library provides a single function to evaluate a set of annotations.
def evaluate_dataset(
    reference: Path, hypothesis: Path, outFile: Path, avg_per_subject=True
) -> dict:
    """
    Compares two sets of seizure annotations across a full dataset.

    Parameters:
        reference (Path): The path to the folder containing the reference TSV files.
        hypothesis (Path): The path to the folder containing the hypothesis TSV files.
        outFile (Path): The path to the output JSON file where the results are saved.
        avg_per_subject (bool): Whether to compute average scores per subject or
            average across the full dataset.

    Returns:
        dict: The evaluation result. The dictionary contains the following keys:
            {'sample_results': {'sensitivity', 'precision', 'f1', 'fpRate',
                                'sensitivity_std', 'precision_std', 'f1_std', 'fpRate_std'},
             'event_results': {...}
            }
    """
sz-validation-framework#
Framework for the validation of EEG-based automated seizure detection algorithms
Example code that uses the framework to validate EEG-based automated seizure detection algorithms.
The repository provides code to:
- Convert EDF files from most open scalp EEG datasets of people with epilepsy to a standardized format.
- Convert seizure annotations from these datasets to a standardized format (a sketch of this format follows the list).
- Evaluate the performance of seizure detection algorithms.
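As an illustration of the standardized annotation format, the sketch below writes a single seizure event to a TSV file with pandas. The column names follow the SzCORE/HED-SCORE annotation format and should be treated as assumptions here; refer to the format specification for the authoritative definition.

import pandas as pd

# NOTE: column names follow the SzCORE/HED-SCORE annotation format; treat the
# exact names and values as assumptions and refer to the format specification.
event = {
    "onset": 496.0,              # seizure start, in seconds from the start of the recording
    "duration": 62.0,            # seizure duration in seconds
    "eventType": "sz",           # HED-SCORE compatible event code ("bckg" when no seizure is present)
    "confidence": "n/a",
    "channels": "n/a",
    "dateTime": "2016-11-06 13:43:04",
    "recordingDuration": 3600.0,
}
pd.DataFrame([event]).to_csv(
    "sub-01_ses-01_task-szMonitoring_run-00_events.tsv", sep="\t", index=False
)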
szcore#
This repository hosts an open seizure detection benchmarking platform.
The repository implements a continuous integration pipeline for the evaluation of seizure detection algorithms and is used to populate the benchmark page.