Temesgen Mehari

Bio

I am a Machine Learning Researcher with a PhD in AI for Medicine, specializing in deep learning for ECG analysis. My work focuses on explainable AI, robustness, and performance metrics in biomedical signal processing. I am passionate about interdisciplinary research at the intersection of AI and healthcare, bridging theoretical advancements with real-world applications.

Research

Self-Supervised Learning Visualization

Self-supervised representation learning from 12-lead ECG data

Temesgen Mehari, Nils Strodthoff

Computers in Biology and Medicine 141, 105114

In this work, we explore self-supervised learning (SSL) for ECG analysis to learn meaningful representations without requiring labeled data. By pretraining on large unlabeled ECG datasets and fine-tuning on downstream classification tasks, our approach outperforms purely supervised training and achieves state-of-the-art performance. We demonstrate that SSL enhances generalization, robustness, and data efficiency in ECG classification, paving the way for more scalable and interpretable AI in cardiology.
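
A minimal sketch of the contrastive-pretraining idea is shown below; the encoder, augmentations, and hyperparameters are illustrative placeholders, not the architectures or SSL objectives actually evaluated in the paper.

```python
# Minimal sketch of contrastive self-supervised pretraining on 12-lead ECG
# (SimCLR-style). Encoder, augmentations, and hyperparameters are illustrative
# placeholders, not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECGEncoder(nn.Module):
    def __init__(self, n_leads=12, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_leads, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(128, dim)

    def forward(self, x):                # x: (batch, leads, time)
        h = self.conv(x).squeeze(-1)     # (batch, 128)
        return F.normalize(self.proj(h), dim=-1)

def nt_xent(z1, z2, temperature=0.1):
    """InfoNCE loss between two augmented views of the same ECG batch."""
    z = torch.cat([z1, z2], dim=0)       # (2B, dim), already L2-normalized
    sim = z @ z.t() / temperature        # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))    # mask self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

encoder = ECGEncoder()
x = torch.randn(32, 12, 1000)            # a batch of 10 s ECGs at 100 Hz
view1 = x + 0.01 * torch.randn_like(x)   # toy augmentation (additive noise)
view2 = x + 0.01 * torch.randn_like(x)
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()
```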

PTB-XL+ Visualization

PTB-XL+, a comprehensive electrocardiographic feature dataset

Nils Strodthoff, Temesgen Mehari, Claudia Nagel, Philip J Aston, Ashish Sundar, Claus Graff, Jørgen K Kanters, Wilhelm Haverkamp, Olaf Dössel, Axel Loewe, Markus Bär, Tobias Schaeffter

Scientific data 10 (1), 279

The PTB-XL+ dataset extends the PTB-XL ECG dataset, providing a comprehensive set of precomputed features derived from 12-lead electrocardiograms (ECGs). This dataset includes morphological, spectral, and deep learning-based representations, facilitating research in explainable AI, model robustness, and ECG classification. By offering structured and interpretable feature sets, PTB-XL+ supports both traditional machine learning and deep learning approaches for cardiac signal analysis.
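
To give a flavor of how the precomputed features can be used, here is a hedged sketch of joining a PTB-XL+ feature table with PTB-XL labels for a classical machine learning baseline; the file names and column choices are assumptions, so the dataset documentation should be consulted for the actual layout.

```python
# Hedged sketch: joining precomputed PTB-XL+ features with PTB-XL labels for a
# classical ML baseline. File names and column choices are assumptions about the
# released layout; consult the dataset documentation for the exact structure.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# PTB-XL metadata (diagnostic labels per record), indexed by ecg_id
labels = pd.read_csv("ptbxl_database.csv", index_col="ecg_id")

# A PTB-XL+ feature table (e.g. interval/amplitude features); path is illustrative
features = pd.read_csv("ptbxl_plus_features.csv", index_col="ecg_id")

# Align records on ecg_id and build a simple binary target (illustrative labeling)
df = features.join(labels[["scp_codes"]], how="inner")
y = df["scp_codes"].str.contains("NORM").astype(int)     # "normal ECG" vs. rest
X = df.drop(columns=["scp_codes"]).select_dtypes("number").fillna(0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```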

XAI4ECG Visualization

Explaining deep learning for ECG analysis: Building blocks for auditing and knowledge discovery

Patrick Wagner, Temesgen Mehari, Wilhelm Haverkamp, Nils Strodthoff

Computers in Biology and Medicine 176, 108525

This work explores methods to interpret and audit deep learning models for electrocardiogram (ECG) analysis. The study introduces a structured framework that integrates concept-based AI, segmentation-based approaches, and discovery-driven methods to enhance model explainability:

  • Concept-based AI enables models to be analyzed in terms of clinically meaningful features, bridging the gap between deep learning representations and human medical reasoning.
  • Segmentation-based approaches help assess how models focus on different ECG waveform components, ensuring that predictions align with medical expertise.
  • Discovery-driven methods allow the identification of previously unknown but relevant patterns, supporting knowledge discovery in cardiology.

By combining these approaches, the paper provides tools for auditing model decisions, detecting biases, and improving the transparency of AI systems in healthcare.
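
As a concrete illustration of the segmentation-based idea, the sketch below aggregates simple input-gradient attributions over labeled waveform segments (P wave, QRS complex, T wave); the model, attribution method, and segment annotations are placeholders and do not reproduce the paper's pipeline.

```python
# Minimal sketch of segmentation-based auditing: aggregate per-sample attributions
# over labeled ECG waveform segments. Model, attribution method, and segment
# annotations are illustrative stand-ins, not the paper's exact pipeline.
import torch

def segment_attribution(model, x, segment_mask, target_class):
    """
    x            : (leads, time) ECG tensor
    segment_mask : (time,) integer mask, e.g. 0=baseline, 1=P, 2=QRS, 3=T
    Returns the mean absolute input-gradient attribution per segment.
    """
    x = x.clone().requires_grad_(True)
    score = model(x.unsqueeze(0))[0, target_class]
    score.backward()
    saliency = x.grad.abs().sum(dim=0)            # collapse leads -> (time,)
    names = {0: "baseline", 1: "P wave", 2: "QRS", 3: "T wave"}
    return {names[s]: saliency[segment_mask == s].mean().item()
            for s in segment_mask.unique().tolist()}

# Toy usage with a stand-in linear "model" and fake segment annotations
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(12 * 1000, 5))
ecg = torch.randn(12, 1000)
mask = torch.zeros(1000, dtype=torch.long)
mask[400:450], mask[450:520], mask[600:700] = 1, 2, 3
print(segment_attribution(model, ecg, mask, target_class=0))
```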

S4 Visualization

Towards quantitative precision for ECG analysis: Leveraging state space models, self-supervision and patient metadata

Temesgen Mehari, Nils Strodthoff

IEEE Journal of Biomedical and Health Informatics

In this work, we introduce a novel framework that integrates structured state space models (SSMs), self-supervised learning, and patient metadata to advance the precision of ECG-based predictions. Our approach challenges conventional convolutional architectures by leveraging SSMs, which are inherently suited for capturing long-range dependencies in physiological signals. We further enhance model performance through self-supervised pretraining, enabling more data-efficient and generalizable feature extraction. Additionally, we incorporate patient-specific metadata to refine predictions, reducing biases and improving clinical relevance.
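
The metadata-fusion component can be illustrated with a small sketch: a pooled representation of the ECG signal, produced here by a placeholder encoder standing in for the structured state space backbone, is concatenated with patient metadata before the classification head. The module and dimensions are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of metadata fusion: a pooled ECG representation (placeholder
# encoder standing in for the SSM/S4-style backbone) is concatenated with
# patient metadata (e.g. age, sex) before the classification head.
import torch
import torch.nn as nn

class ECGWithMetadata(nn.Module):
    def __init__(self, signal_encoder, signal_dim=128, meta_dim=2, n_classes=5):
        super().__init__()
        self.encoder = signal_encoder                  # stand-in for the SSM backbone
        self.head = nn.Sequential(
            nn.Linear(signal_dim + meta_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x, meta):                        # x: (B, leads, time), meta: (B, meta_dim)
        h = self.encoder(x)                            # (B, signal_dim)
        return self.head(torch.cat([h, meta], dim=-1))

# Placeholder encoder: global average pooling over time + linear projection
encoder = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(12, 128))
model = ECGWithMetadata(encoder)
logits = model(torch.randn(8, 12, 1000), torch.tensor([[63.0, 1.0]] * 8))  # age, sex
```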

Feature Importance Visualization

ECG feature importance rankings: Cardiologists vs. algorithms

Temesgen Mehari, Ashish Sundar, Alen Bosnjakovic, Peter Harris, Steven E Williams, Axel Loewe, Olaf Doessel, Claudia Nagel, Nils Strodthoff, Philip J Aston

IEEE Journal of Biomedical and Health Informatics

Interpretable AI is crucial for deploying deep learning models in clinical settings, yet discrepancies often arise between algorithmic and human reasoning. In this study, we systematically compare feature importance rankings derived from state-of-the-art machine learning models with those provided by expert cardiologists for ECG classification tasks. Our analysis reveals that while algorithms excel at detecting complex patterns, they sometimes emphasize features that lack clinical significance. Conversely, cardiologists prioritize features grounded in physiological understanding, which may not always align with data-driven importance rankings. By bridging this gap, we highlight the strengths and limitations of both approaches and propose strategies for improving AI-assisted ECG diagnostics. This work underscores the need for human-AI collaboration in medical decision-making, ensuring that model predictions align with expert knowledge to enhance trust and clinical utility.
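
One simple way to quantify the agreement discussed above is a rank correlation between the two importance orderings; the sketch below applies Spearman's rho to hypothetical rankings, purely for illustration and not reflecting the study's actual features or results.

```python
# Hedged sketch: compare a model-derived feature importance ranking with an
# expert (cardiologist) ranking via Spearman rank correlation. Feature names
# and ranks are illustrative placeholders, not results from the study.
from scipy.stats import spearmanr

features = ["QRS duration", "QT interval", "PR interval", "ST elevation", "T amplitude"]

# Hypothetical importance ranks (1 = most important)
model_rank = {"QRS duration": 1, "ST elevation": 2, "T amplitude": 3,
              "QT interval": 4, "PR interval": 5}
expert_rank = {"ST elevation": 1, "QRS duration": 2, "QT interval": 3,
               "T amplitude": 4, "PR interval": 5}

rho, p = spearmanr([model_rank[f] for f in features],
                   [expert_rank[f] for f in features])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```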