12:15 - 13:15 CET
50 minutes speech + 10 minutes Q&A
Recent progress in machine learning has accelerated the development of automated systems for the interpretation of mammograms and breast imaging in general. Some AI systems have already demonstrated a level of performance at which they can compete with the best radiologists and contribute significantly to improving the early detection and diagnosis of breast cancer. This presentation gives an overview of the state of the art, focusing both on the technology and on its validation in clinical practice. The overview covers a variety of applications, ranging from breast cancer risk assessment to early detection and diagnosis using a multi-modal approach.
13:15 - 14:30 CET
Oral presentations are assigned 15 minutes (12 speech + 3 Q&A)
Deep learning has been revolutionizing multiple aspects of our daily lives thanks to its state-of-the-art results. However, the complexity of its models and the associated difficulty of interpreting their results have prevented it from being widely adopted in healthcare systems. This represents a missed opportunity, especially considering the growing volumes of Electronic Health Record (EHR) data, as hospitals and clinics increasingly collect information in digital databases. While there are studies addressing artificial neural networks applied to this type of data, the interpretability component tends to be approached lightly or even disregarded. Here we demonstrate the superior capability of recurrent neural network based models, outperforming multiple baselines with an average test AUC of 0.94 when predicting the use of non-invasive ventilation by Amyotrophic Lateral Sclerosis (ALS) patients, while also presenting a comprehensive explainability solution. To interpret these complex, recurrent algorithms, the robust SHAP package was adapted and a new instance importance score was defined, highlighting the effect of feature values and of time series samples on the output, respectively. These concepts were then combined in a dashboard, which serves as a proof of concept for an AI-enhanced detailed analysis tool for medical staff.
The recent spread of COVID-19 has put a strain on hospitals all over the world. In this paper we address the problem of hospital overload and present a machine learning tool to predict the length of stay of hospitalised patients affected by COVID-19. The tool was developed using Random Forest and Extra Trees regression algorithms and was trained and tested on data from more than 1000 hospitalised patients from Northern Italy. These data comprise demographics, several laboratory test results and a score that evaluates the severity of the pulmonary condition. The experimental results show good performance for length of stay prediction and, in particular, for identifying which patients will stay in hospital for a long period of time.
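As an illustration of this kind of pipeline (not the authors' actual code), a minimal scikit-learn sketch with Random Forest and Extra Trees regressors could look as follows; the feature matrix, target and hyper-parameters are synthetic stand-ins for the clinical data described above.

```python
# Minimal length-of-stay regression sketch; data and settings are illustrative only.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Stand-in for demographics, laboratory results and the pulmonary-severity score.
X, y = make_regression(n_samples=1000, n_features=20, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (RandomForestRegressor(n_estimators=500, random_state=0),
              ExtraTreesRegressor(n_estimators=500, random_state=0)):
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{type(model).__name__}: MAE = {mae:.2f}")
```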
Immunotherapy is one of the most interesting and promising cancer treatments. Encouraging results have confirmed the effectiveness of immunotherapy drugs for treating tumors in terms of long-term survival and a significant reduction in toxicity compared to more traditional chemotherapy approaches. However, the percentage of patients eligible for immunotherapy is rather small, and this is likely related to the limited knowledge of physiological mechanisms by which certain subjects respond to the treatment while others have no benefit. To address this issue, the authors propose an innovative approach based on the use of a non-linear cellular architecture with a deep downstream classifier for selecting and properly augmenting 2D features from chest-abdomen CT images toward improving outcome prediction. The proposed pipeline has been designed to make it usable over an innovative embedded Point of Care system. The authors report a case study of the proposed solution applied to a specific type of aggressive tumor, namely Metastatic Urothelial Carcinoma (mUC). The performance evaluation (overall accuracy close to 93%) confirms the proposed approach effectiveness.
Polyps represent an early sign of the development of Colorectal Cancer. The standard procedure for their detection consists of colonoscopic examination of the gastrointestinal tract. However, the wide range of polyp shapes and visual appearances, as well as the reduced quality of this image modality, turn their automatic identification and segmentation with computational tools into a challenging computer vision task. In this work, we present a new strategy for the delineation of gastrointestinal polyps from endoscopic images based on a direct extension of common encoder-decoder networks for semantic segmentation. In our approach, two pretrained encoder-decoder networks are sequentially stacked: the second network takes as input the concatenation of the original frame and the initial prediction generated by the first network, which acts as an attention mechanism enabling the second network to focus on interesting areas within the image, thereby improving the quality of its predictions. Quantitative evaluation carried out on several polyp segmentation databases shows that double encoder-decoder networks clearly outperform their single encoder-decoder counterparts in all cases. In addition, our best double encoder-decoder combination attains excellent segmentation accuracy and reaches state-of-the-art performance results in all the considered datasets, with a remarkable boost of accuracy on images extracted from datasets not used for training.
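The stacking idea in this abstract can be sketched in a few lines of PyTorch. The tiny encoder-decoder below is only a stand-in for the pretrained segmentation networks used in the paper; what reflects the described approach is the wiring, i.e. the second network receiving the original frame concatenated with the first network's prediction.

```python
import torch
import torch.nn as nn

class MiniEncoderDecoder(nn.Module):
    """Toy stand-in for a pretrained encoder-decoder segmentation network."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

class DoubleEncoderDecoder(nn.Module):
    """Second network sees the RGB frame concatenated with the first prediction."""
    def __init__(self):
        super().__init__()
        self.first = MiniEncoderDecoder(in_channels=3)
        self.second = MiniEncoderDecoder(in_channels=4)  # 3 RGB channels + 1 coarse mask

    def forward(self, x):
        coarse = self.first(x)                          # acts as an attention map
        refined = self.second(torch.cat([x, coarse], dim=1))
        return coarse, refined

frame = torch.rand(2, 3, 64, 64)
coarse, refined = DoubleEncoderDecoder()(frame)
print(coarse.shape, refined.shape)  # both torch.Size([2, 1, 64, 64])
```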
The precise segmentation of organs from computed tomography is a fundamental and pivotal task for correct diagnosis and proper treatment of diseases. Neural network models are widely explored for their promising performance in the segmentation of medical images. However, the small size of available datasets significantly affects the biomedical imaging domain and has a huge impact on the training of deep learning models. In this paper we address this issue by iteratively augmenting the dataset with auxiliary task-based information. This is achieved by introducing a recursive training approach, in which a new set of segmented images is generated at each iteration and then concatenated with the original input data as organ attention maps. In the experimental evaluation, two different datasets were tested and the results produced by the proposed approach show significant improvements in organ segmentation compared to a standard non-recursive approach.
14:30 - 15:30 CET
Poster presentations are made in parallel
The recent advances in algorithmic photo-editing and the vulnerability of hospitals to cyberattacks raise concerns about the tampering of medical images. This paper introduces LuNoTim-CT, a new large-scale dataset of tampered Computed Tomography (CT) scans generated by different methods, which can serve as the most comprehensive testbed for comparative studies of data security in healthcare. We further propose a deep learning-based framework, ConnectionNet, to automatically detect whether a medical image has been tampered with. The proposed ConnectionNet is able to handle small tampered regions, achieves promising results and can be used as a baseline for studies of medical image tampering detection.
To date, deep learning has assisted in classifying embryos as early as day 5 after insemination. We investigated whether deep neural networks could successfully predict the destiny of each embryo (discard or transfer) at an even earlier stage, namely at day 3. We first assessed whether the destiny of each embryo could be derived from technician scores, using a simple regression model. We then explored whether a deep neural network could make accurate predictions using images alone. We found that a simple 8-layer network was able to achieve 75.24% accuracy of destiny prediction, outperforming deeper, state-of-the-art models that reached 68.48% when applied to our middle slice images. Increasing focal points from a single (middle slice) to three slices per image did not improve accuracy. Instead, accounting for the "batch effect", that is, predicting an embryo's destiny in relation to other embryos from the same batch, greatly improved accuracy, to a level of 84.69% for unseen cases. Importantly, when analyzing cases of transferred embryos, we found that our lean, deep neural network predictions were correlated (0.65) with clinical outcomes.
Accurate assessment of diabetic foot ulcers (DFU) is essential to providing efficient treatment and preventing amputation. Traditional DFU assessment methods used by clinicians are based on visual examination of the ulcer, estimating its surface and analyzing tissue conditions. These manual methods are subjective and involve direct contact with the wound, resulting in high variability and a risk of infection. In this research work, we propose a novel smartphone-based skin telemonitoring system to support medical diagnoses and decisions during DFU tissue examination. The database contains 219 images; for effective tissue identification and ground-truth annotation, a graphical interface based on a superpixel segmentation method was used. Our method performs DFU assessment in an end-to-end fashion comprising automatic ulcer segmentation and tissue classification. The classification task is performed at patch level: superpixels extracted with SLIC are used as input for training the deep neural network. State-of-the-art deep learning models for semantic segmentation have been used to differentiate tissue within the ulcer area into three classes (Necrosis, Granulation and Slough) and have been compared to the proposed method. The proposed superpixel-based method outperforms classic fully convolutional network models, significantly improving performance on all metrics. Accuracy and the DICE index are improved from 84.55% to 92.68% and from 54.31% to 75.74%, respectively, compared to FCN-32. The results reveal robust tissue classification effectiveness and the potential of our system to monitor DFU healing over time.
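A minimal sketch of the superpixel step with scikit-image's SLIC is shown below; the sample image, parameter values and bounding-box patch extraction are illustrative and not the authors' configuration.

```python
import numpy as np
from skimage import data, segmentation

image = data.astronaut()  # stand-in for a DFU photograph
segments = segmentation.slic(image, n_segments=300, compactness=10, start_label=1)

patches = []
for label in np.unique(segments):
    ys, xs = np.nonzero(segments == label)
    patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    patches.append(patch)  # each patch would later be classified as Necrosis / Granulation / Slough
print(len(patches), "superpixel patches extracted")
```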
While in developed countries routine dental consultations are often covered by insurance, prophylactic dental examinations are often expensive in developing countries. Therefore, sufficient oral health prevention, particularly early caries detection, is not yet accessible to many people in these countries. This stands in contrast to the accessibility of smartphone technology, as smartphones have become available and affordable in most countries. Their technology can be utilized for low-cost initial caries inspection to determine the necessity of a subsequent dental examination. In this paper we address the specific problem of caries detection in smartphone images. Fully supervised methods usually require tedious location annotations, whereas weakly supervised approaches manage to address the detection task with less complex labels. To this end, we propose a weakly supervised caries detection strategy with local constraints and investigate its caries localization capabilities compared to a fully supervised Faster R-CNN approach as an upper baseline. Our proposed strategy shows promising initial results on our in-house smartphone caries data set.
Pollen grain micrograph classification has multiple applications in medicine and biology. Automatic pollen grain image classification can alleviate the problems of manual categorisation such as subjectivity and time constraints. While a number of computer-based methods have been introduced in the literature to perform this task, classification performance needs to be improved for these methods to be useful in practice. In this paper, we present an ensemble approach for pollen grain microscopic image classification into four categories: Corylus Avellana well-developed pollen grain, Corylus Avellana anomalous pollen grain, Alnus well-developed pollen grain, and non-pollen (debris) instances. In our approach, we develop a classification strategy based on the fusion of four state-of-the-art fine-tuned convolutional neural networks, namely the EfficientNetB0, EfficientNetB1, EfficientNetB2 and SeResNeXt-50 deep models. These models are trained with images of three fixed sizes (224 × 224, 240 × 240, and 260 × 260 pixels) and their prediction probability vectors are then fused in an ensemble method to form a final classification vector for a given pollen grain image. Our proposed method is shown to yield excellent classification performance, obtaining an accuracy of 94.48% and a weighted F1-score of 94.54% on the ICPR 2020 Pollen Grain Classification Challenge training dataset based on five-fold cross-validation. Evaluated on the test set of the challenge, our approach achieves a very competitive performance in comparison to the top-ranked approaches, with an accuracy of 96.28% and a weighted F1-score of 96.30%.
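A probability-level fusion of this kind can be sketched as follows; the two probability matrices are made-up placeholders for the outputs of the fine-tuned CNNs, and uniform averaging is only one possible fusion rule.

```python
import numpy as np

def ensemble_predict(prob_vectors, weights=None):
    """Fuse per-model class-probability vectors into a single prediction.

    prob_vectors: list of arrays of shape (n_samples, n_classes), one per CNN.
    weights: optional per-model weights (uniform averaging if None).
    """
    stacked = np.stack(prob_vectors)                 # (n_models, n_samples, n_classes)
    fused = np.average(stacked, axis=0, weights=weights)
    return fused.argmax(axis=1), fused

# Hypothetical probabilities from two of the fine-tuned CNNs on 2 images, 4 classes.
p1 = np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1]])
p2 = np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.6, 0.2, 0.1]])
labels, fused = ensemble_predict([p1, p2])
print(labels)  # fused class indices
```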
For tumor delineation in Positron Emission Tomography (PET) images, it is of utmost importance to devise efficient and operator-independent segmentation methods capable of reconstructing the 3D tumor shape. In this paper, we present a fully 3D automatic system for brain tumor delineation in PET images. In previous work, we proposed a 2D segmentation system based on a two-step approach. The first step automatically identified the slice enclosing the maximum tracer uptake in the whole tumor volume and generated a rough contour surrounding the tumor itself. That contour was then used to initialize the second step, where the 3D shape of the tumor was obtained by separately segmenting 2D slices. In this paper, we migrate our system to fully 3D. In particular, the segmentation in the second step is performed by evolving an active surface directly in 3D space. The key advantages of this advancement are that it performs the shape reconstruction on the whole stack of slices simultaneously, leveraging useful cross-slice information, and that it does not require any specific stopping condition, as the active surface naturally reaches a stable topology once convergence is achieved. Performance is evaluated on the same dataset discussed in our previous work to assess whether any benefit is achieved by migrating the system from 2D to 3D. Results confirm an improvement in performance in terms of Dice similarity coefficient (89.89%) and Hausdorff distance (1.11 voxels).
Loss functions are error metrics that quantify the difference between a prediction and its corresponding ground truth. Fundamentally, they define a functional landscape for traversal by gradient descent. Although numerous loss functions have been proposed to date to handle various machine learning problems, little attention has been given to enhancing these functions to better traverse the loss landscape. In this paper, we simultaneously and significantly mitigate two prominent problems in medical image segmentation, namely: i) class imbalance between foreground and background pixels, and ii) poor loss function convergence. To this end, we propose an Adaptive Logarithmic Loss (ALL) function. We compare this loss function with the existing state of the art on the ISIC 2018 dataset, a nuclei segmentation dataset, and the DRIVE retinal vessel segmentation dataset. We measure the performance of our methodology on benchmark metrics and demonstrate state-of-the-art performance. More generally, we show that our system can be used as a framework for better training of deep neural networks.
Cone-beam computed tomography (CBCT) is widely used in the clinical diagnosis of vertical root fractures (VRFs), which present as cracks in the teeth. However, manually checking for VRFs in a large number of CBCT images is time-consuming and error-prone. Although Convolutional Neural Networks (CNNs) have achieved unprecedented progress in natural image recognition, end-to-end CNNs are unsuitable for identifying VRFs because cracks appear at multiple scales and have complex relationships with the surrounding tissues. We propose a novel Feature Pyramids Attention Convolutional Neural Network (FPA-CNN), which incorporates a saliency mask and multi-scale features to boost classification performance. The saliency map is viewed as a spatial probability map of where a person might look first to reach a discriminative conclusion. It therefore acts as a high-level hint that guides the network to focus on the discriminative region. Experimental results demonstrate that our proposed FPA-CNN overcomes the challenges arising from multi-scale cracks and complex contextual relationships.
A key step in the diagnosis of Idiopathic Pulmonary Fibrosis (IPF) is the examination of high-resolution computed tomography (HRCT) images. IPF exhibits a typical radiological pattern, named the Usual Interstitial Pneumonia (UIP) pattern, which can be detected in non-invasive HRCT investigations, thus avoiding surgical lung biopsy. Unfortunately, the visual recognition and quantification of the UIP pattern can be challenging even for experienced radiologists due to poor inter- and intra-reader agreement. This study aimed to develop a tool for the semantic segmentation and quantification of the UIP pattern in patients with IPF using a deep-learning method based on a Convolutional Neural Network (CNN), called UIP-net. The proposed CNN, based on an encoder-decoder architecture, takes as input a thoracic HRCT image and outputs a binary mask for the automatic discrimination between UIP pattern and healthy lung parenchyma. To train and evaluate the CNN, a dataset of 5000 images, derived from 20 CT scans of different patients, was used. The network yielded a 96.7% BF-score and 85.9% sensitivity. Once trained and tested, UIP-net was used to obtain segmentations of 60 additional CT scans of different patients to estimate the volume of lungs affected by the UIP pattern. The measurements were compared, through Bland-Altman plots, with those obtained using the reference software for the automatic detection of the UIP pattern, Computer Aided Lung Informatics for Pathology Evaluation and Rating (CALIPER). The network performance, assessed in terms of both BF-score and sensitivity on the test set and resulting from the comparison with CALIPER, demonstrates that CNNs have the potential to reliably detect and quantify pulmonary disease, to evaluate its progression, and to become a supportive tool for radiologists.
Recent work on the classification of microscopic skin lesions does not consider how the presence of skin hair may affect diagnosis. In this work, we investigate how deep-learning models can handle a varying amount of skin hair during their predictions. We present an automated processing pipeline that tests the performance of the classification model. We conclude that, under realistic conditions, modern day classification models are robust to the presence of skin hair and we investigate three architectural choices (Resnet50, InceptionV3, Densenet121) that make them so.
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is a popular tool for the diagnosis of breast lesions due to its effectiveness, especially in a high-risk population. Accurate lesion segmentation is an important step for subsequent analysis, especially for computer-aided diagnosis systems. However, manual breast lesion segmentation of (4D) MRI is time-consuming, requires experience, and is prone to inter- and intra-observer variability. This work proposes a deep learning (DL) framework for segmenting breast lesions in DCE-MRI using a 3D patch-based U-Net architecture. We perform different experiments to analyse the effects of class imbalance, different patch sizes, optimizers and loss functions in a cross-validation fashion, using 46 images from a subset of the challenging and publicly available TCGA-BRCA dataset, not reported to date. We also compare the proposed U-Net framework with another state-of-the-art approach used for breast lesion segmentation in DCE-MRI and report better segmentation accuracy with the proposed framework. The results presented in this work have the potential to become a publicly available benchmark for this task.
15:30 - 15:45 CET
15:45 - 17:00 CET
Oral presentations are assigned 15 minutes (12 speech + 3 Q&A)
The prevalence of Autism Spectrum Disorder (ASD) in the United States increased by 178% from 2000 to 2016. However, due to the lack of well-trained specialists and the time-consuming diagnostic process, many children cannot be promptly diagnosed. Recently, several studies have taken steps to explore automatic video-based ASD detection systems with the help of machine learning and deep learning models, such as the support vector machine (SVM) and long short-term memory (LSTM) models. However, the models mentioned above cannot extract effective features directly from raw videos. In this study, we aim to take advantage of 3D convolution-based deep learning models to aid video-based ASD detection. We explore three representative 3D convolutional neural networks (CNNs): C3D, I3D and 3D ResNet. In addition, a new 3D convolutional model, called 3D ResNeSt, is proposed based on ResNeSt. We evaluate these models on an ASD detection dataset. The experimental results show that, on average, all four 3D convolutional models obtain competitive results compared to an LSTM baseline. Our proposed 3D ResNeSt model achieves the best performance, improving the average detection accuracy from 0.72 to 0.85.
Cognitive impairments affect areas such as memory, learning, concentration, or decision making and range from mild to severe. Impairments of this kind can be indicators of neurodegenerative diseases such as Alzheimer's, which affect millions of people worldwide and whose incidence is expected to increase in the near future. Handwriting is one of the daily activities affected by this kind of impairment, and its anomalies are already used for the diagnosis of neurodegenerative diseases, for example micrographia in Parkinson's patients. Classifier combination methods have proved to be an effective tool for increasing performance in pattern recognition applications. The rationale of this approach follows from the observation that appropriately diverse classifiers, especially when trained on different types of data, tend to make uncorrelated errors. In this paper, we present a study in which the responses of different classifiers, trained on data from graphic tasks, have been combined to predict cognitive impairments. The proposed system has been trained and tested on a dataset containing handwritten traits extracted from simple graphic tasks, e.g. joining two points or drawing circles. The results confirm that a simple combination rule, such as the majority vote rule, performs better than the single classifiers.
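Classifier combination by majority vote can be illustrated with scikit-learn's VotingClassifier; the base learners and the synthetic features below are placeholders, not the classifiers or handwriting data used in this study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-task handwriting features.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="hard",  # majority vote over the individual decisions
)
print(cross_val_score(vote, X, y, cv=5).mean())
```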
The development of deep learning provides powerful support for the disease classification of neuroimaging data. However, in the classification of neuroimaging data with deep learning methods, the spatial information cannot be fully utilized. In this paper, we propose a lightweight 3D spatial attention module with adaptive receptive fields, which allows neurons to adaptively adjust the receptive field size according to multiple scales of the input information. The attention module can fuse spatial information of different scales on multiple branches, so that the 3D spatial information of neuroimaging data can be fully utilized. A 3D-ResNet18 based on our proposed attention module is trained to diagnose Alzheimer's disease (AD). Experiments are conducted on 521 subjects (254 patients with AD and 267 normal controls) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset of 3D structural MRI brain scans. Experimental results show the effectiveness and efficiency of our proposed approach for AD classification.
Brain-Computer Interfaces (BCIs) based on motor imagery translate mental motor images recognized from the electroencephalogram (EEG) into control commands. EEG patterns of different imagination tasks, e.g. hand and foot movements, are effectively classified with machine learning techniques using band power features. Recently, Convolutional Neural Networks (CNNs) that learn both effective features and classifiers simultaneously from raw EEG data have also been applied. However, CNNs have two major drawbacks: (i) they have a very large number of parameters, which requires a very large number of training examples; and (ii) they are not designed to explicitly learn features in the frequency domain. To overcome these limitations, in this work we introduce Sinc-EEGNet, a lightweight CNN architecture that combines learnable band-pass and depthwise convolutional filters. Experimental results obtained on the publicly available BCI Competition IV Dataset 2a show that our approach outperforms reference methods in terms of classification accuracy.
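The learnable band-pass idea can be illustrated with a simplified SincNet-style layer in PyTorch; the initial cut-off values, normalisation, filter length and sampling rate below are assumptions for the sketch and do not reproduce the Sinc-EEGNet architecture.

```python
import torch
import torch.nn as nn

class SincBandPass(nn.Module):
    """Simplified learnable band-pass filter bank (SincNet-style), illustrative only."""
    def __init__(self, out_channels, kernel_size, fs=250.0):
        super().__init__()
        self.kernel_size = kernel_size
        # Learnable low cut-off frequencies and bandwidths (in Hz); initial values are assumptions.
        self.low_hz = nn.Parameter(torch.linspace(1.0, 40.0, out_channels).unsqueeze(1))
        self.band_hz = nn.Parameter(torch.full((out_channels, 1), 4.0))
        n = torch.arange(kernel_size) - (kernel_size - 1) / 2
        self.register_buffer("t", n / fs)                     # time axis in seconds
        self.register_buffer("window", torch.hamming_window(kernel_size))

    def forward(self, x):                                     # x: (batch, 1, time)
        low = torch.abs(self.low_hz)
        high = low + torch.abs(self.band_hz)
        t = self.t.unsqueeze(0)
        # Ideal band-pass impulse response: 2*f2*sinc(2*f2*t) - 2*f1*sinc(2*f1*t)
        filt = (2 * high * torch.sinc(2 * high * t)
                - 2 * low * torch.sinc(2 * low * t)) * self.window
        filt = filt / (filt.abs().sum(dim=1, keepdim=True) + 1e-8)
        return nn.functional.conv1d(x, filt.unsqueeze(1), padding=self.kernel_size // 2)

layer = SincBandPass(out_channels=8, kernel_size=65)
out = layer(torch.randn(4, 1, 500))   # (batch, filters, time)
print(out.shape)                      # torch.Size([4, 8, 500])
```

A depthwise convolution over the resulting filter bank would typically follow, as the abstract describes.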
For many practical problems and applications, it is not feasible to create a vast and accurately labeled dataset, which restricts the application of deep learning in many areas. Semi-supervised learning algorithms aim to improve performance by also leveraging unlabeled data. This is very valuable for the 2D-pose estimation task, where data labeling requires substantial time and is subject to noise. This work investigates whether semi-supervised learning techniques can achieve a performance level that justifies using these algorithms during training. To this end, a lightweight network architecture is introduced, and the mean teacher, virtual adversarial training and pseudo-labelling algorithms are evaluated on 2D-pose estimation for surgical instruments. For the applicability of the pseudo-labelling algorithm, we propose a novel confidence measure, total variation. Experimental results show that the use of semi-supervised learning improves the performance on unseen geometries drastically while maintaining high accuracy for seen geometries. For the RMIT benchmark, our lightweight architecture outperforms the state of the art with supervised learning. For the EndoVis benchmark, the pseudo-labelling algorithm improves the supervised baseline, achieving the new state-of-the-art performance.
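A sketch of how a pseudo-labelling selection step with a total-variation-style confidence could look is given below; the heatmap-based score, the thresholding direction and the model interface are assumptions for illustration and not the paper's exact definition.

```python
import torch

def total_variation(heatmap: torch.Tensor) -> torch.Tensor:
    """Illustrative total-variation score of a predicted keypoint heatmap of shape (H, W)."""
    dh = (heatmap[1:, :] - heatmap[:-1, :]).abs().sum()
    dw = (heatmap[:, 1:] - heatmap[:, :-1]).abs().sum()
    return dh + dw

def select_pseudo_labels(model, unlabeled_loader, tv_threshold):
    """Keep predictions whose heatmaps look confident enough to reuse as training targets."""
    model.eval()
    pseudo = []
    with torch.no_grad():
        for images in unlabeled_loader:
            heatmaps = model(images)                 # assumed shape: (B, K, H, W), one map per keypoint
            for img, maps in zip(images, heatmaps):
                scores = torch.stack([total_variation(m) for m in maps])
                # Whether low or high TV signals a confident, peaky map depends on normalisation;
                # the direction used here is an assumption.
                if (scores > tv_threshold).all():
                    pseudo.append((img, maps))       # prediction reused as a pseudo-label
    return pseudo
```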
17:00 - 18:00 CET
Poster presentations are made in parallel
The Swiss Monitoring of Adverse Drug Events (SwissMADE) project is part of the SNSF-funded Smarter Health Care initiative, which aims at improving health services for the public. Its goal is to use text mining on electronic patient reports to automatically detect adverse drug events in hospitalised elderly patients who received anti-thrombotic drugs. The project is the first of its kind in Switzerland: the data is provided by four hospitals from both the German- and French-speaking parts of Switzerland, none of which had previously released electronic patient records for research, making the extraction and anonymisation of records one of the major challenges of the project. In this paper, we describe the part of the project concerned with the de-identification and annotation of German data obtained from one of the hospitals in the form of patient reports. All of these reports are automatically de-identified using a dictionary-based approach augmented with manually created rules, and then automatically annotated. For this, we employ our entity recognition pipeline OGER (OntoGene Entity Recognizer), also a dictionary-based approach, augmented by an adapted transformer model to obtain state-of-the-art performance, to detect drug, disease and symptom mentions in these reports. Furthermore, a subset of the reports is manually annotated for drugs and diagnoses by a medical expert, serving as a validation set for the automatic annotations.
Acute Lymphoblastic Leukemia (ALL) is one of the most commonly occurring types of leukemia and poses a serious threat to life. It severely affects the White Blood Cells (WBCs) of the human body that fight against any kind of infection or disease. Since there are no evident morphological changes and the signs are quite similar to those of other disorders, it is difficult to detect leukemia. Manual diagnosis of leukemia is time-consuming and even susceptible to errors. Thus, in this paper, a computer-assisted diagnosis method has been implemented to detect leukemia using deep learning models. Three models, namely VGG11, ResNet18 and ShuffleNetV2, have been trained and fine-tuned on the ISBI 2019 C-NMC dataset. Finally, an ensemble using a weighted averaging technique is formed and evaluated according to binary classification criteria. The proposed method gave an overall accuracy of 87.52% and an F1-score of 87.40%, thus outperforming most of the existing techniques for the same dataset.
Medical images have been indispensable and useful tools for supporting medical experts in making diagnostic decisions. However, medical images, especially throat and endoscopy images, are often hazy, out of focus, or unevenly illuminated, which can make the diagnosis process difficult for doctors. In this paper, we propose MIINet, a novel image-to-image translation network for improving the quality of medical images by translating low-quality images to high-quality clean versions in an unsupervised manner. Our MIINet is not only capable of generating high-resolution clean images, but also preserves the attributes of the original images, making diagnosis more favorable for doctors. Experiments on dehazing 100 practical throat images show that our MIINet largely improves the mean doctor opinion score (MDOS), which assesses the quality and reproducibility of the images, from a baseline of 2.36 to 4.11, while images dehazed by CycleGAN obtained a lower score of 3.83. Three physicians confirmed that MIINet is satisfactory in supporting throat disease diagnosis from originally low-quality images.
Falling is among the most damaging events for elderly people and sometimes ends in significant injuries. Due to the fear of falling, many elderly people choose to stay at home more in order to feel safer. In this work, we propose a new fall detection and recognition approach which analyses egocentric videos collected by wearable cameras through a computer vision/machine learning pipeline. More specifically, we conduct a case study with one volunteer who collected video data from two cameras, one attached to the chest and the other attached to the waist. A total of 776 videos were collected, describing four types of falls and nine kinds of non-falls. Our method works as follows: it extracts several uniformly distributed frames from each video, uses a pre-trained ConvNet model to describe each frame by a feature vector, and applies feature fusion followed by a classification model. The proposed model demonstrates its suitability for the detection and recognition of falls from the data captured by the two cameras together. For this case study, we detect all falls with only one false positive, and reach a balanced accuracy of 93% in the recognition of the 13 types of activities. Similar results are obtained for the videos of the two cameras when considered separately. Moreover, we observe better performance on videos collected in indoor scenes.
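The frame-sampling and feature-fusion pipeline described above could look roughly like the following sketch, with a pretrained ResNet-50 as the frame descriptor, mean pooling as the fusion step and an SVM as the classifier; the backbone choice, frame count and preprocessing are assumptions rather than the paper's configuration.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pretrained ResNet-50 used as a frozen frame descriptor (classification head removed).
backbone = models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def video_descriptor(path, n_frames=8):
    """Sample n_frames uniformly, describe each with the CNN, then average-pool (fusion)."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    feats = []
    for idx in np.linspace(0, total - 1, n_frames, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            feats.append(backbone(preprocess(rgb).unsqueeze(0)).squeeze(0).numpy())
    cap.release()
    return np.mean(feats, axis=0)

# X = np.stack([video_descriptor(p) for p in video_paths]); clf = SVC().fit(X, y)
```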
Spine surgery is nowadays performed for a great number of spine pathologies; it is estimated that 4.83 million spine surgeries are carried out globally each year. This prevalence has led to the evolution of spine surgery into an extremely specialized field, so that traditional open interventions on the spine have been complemented and often replaced by minimally invasive approaches. Despite the several benefits associated with robotic minimally invasive surgery (RMIS), loss of depth perception, reduced field of view and the consequent difficulty in the intraoperative identification of relevant anatomical structures are still unsolved issues. For these reasons, Augmented Reality (AR) was introduced to support the surgeon in surgical applications. However, even though the irruption of AR has promised breakthrough changes in surgery, its adoption has been slower than expected, as there are still usability hurdles. The objective of this work is to introduce a client software with marker-based optical tracking capabilities, included in a client-server architecture that uses protocols to enable real-time streaming over the network, providing desktop rendering power to the head-mounted display (HMD). Results for the tracking are promising (Specificity = 0.98 ± 0.03; Precision = 0.94 ± 0.04; Dice = 0.80 ± 0.07), and real-time communication was successfully established.
Assessing physical condition in rehabilitation scenarios is a challenging problem, since it involves Human Activity Recognition (HAR) and kinematic analysis methods. In addition, the difficulties increase in unconstrained rehabilitation scenarios, which are much closer to real use cases. In particular, our aim is to design an upper-limb assessment pipeline for stroke patients using smartwatches. We focus on the HAR task, as it is the first part of the assessment pipeline. Our main target is to automatically detect and recognize four key movements inspired by the Fugl-Meyer assessment scale, which are performed in both constrained and unconstrained scenarios. In addition to the application protocol and dataset, we propose two detection and classification baseline methods. We believe that the proposed framework, dataset and baseline results will serve to foster this research field.
Patients with epilepsy suffer from recurrently occurring seizures. To improve diagnosis and treatment as well as to increase patients' safety and quality of life, it is of great interest to develop reliable methods for automated seizure detection. In this work, we evaluate a first trial of a multimodal approach combining 3D acceleration and heart rate data acquired with a mobile in-ear sensor as part of the EPItect project. For the detection of tonic-clonic seizures (TCS), we train different classification models (Naïve Bayes, K-Nearest-Neighbor, linear Support Vector Machine and AdaBoost.M1) and evaluate cost-sensitive learning as a measure to address the problem of highly imbalanced data. To assess the performance of our multimodal approach, we compare it to a unimodal approach that only uses the acceleration data. Experiments show that our method leads to a higher sensitivity, a lower detection latency and a lower false alarm rate compared to the unimodal method.
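Cost-sensitive learning of this kind typically amounts to re-weighting the classes in the training objective. The minimal scikit-learn sketch below uses synthetic, highly imbalanced data and the built-in "balanced" weighting; the data and weighting choice are illustrative, not the paper's setting.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import recall_score

# Synthetic, highly imbalanced stand-in for fused acceleration + heart-rate features.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LinearSVC().fit(X_tr, y_tr)
# Cost-sensitive variant: penalise missed seizures (minority class) more than false alarms.
weighted = LinearSVC(class_weight="balanced").fit(X_tr, y_tr)

for name, clf in [("plain", plain), ("cost-sensitive", weighted)]:
    print(name, "sensitivity:", recall_score(y_te, clf.predict(X_te)))
```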
Heart diseases are still among the main causes of death in the world population. Tools able to discriminate this type of problem early, even when used by non-specialized medical personnel on an outpatient basis, would reduce the pressure on hospital centers and improve patient prognosis. This paper focuses on the problem of cardiac akinesia, a condition attributable to a very large number of pathologies and a possible serious complication for COVID-19 patients. In particular, we considered echocardiographic images of both akinetic and healthy patients. The dataset, containing echocardiograms of around 700 patients, was supplied by the Sacco hospital of Milan (Italy). We implemented a modified ResNet34 architecture and tested the model under various combinations of parameters. The final best-performing model was able to achieve an F1-score of 0.91 in the binary classification Akinetic vs. Normokinetic.
The right matching of patients to an intended treatment is routinely performed by doctors and physicians in healthcare. Improving doctors' ability to choose the right treatment can greatly speed up patients' recovery. In a clinical study on Disorders of Consciousness, patients in a Minimally Conscious State (MCS) underwent transcranial Electrical Stimulation (tES) therapy to increase their level of consciousness. We studied MCS patients' response to tES therapy using as input the EEG data collected before the intervention. Different machine learning approaches have been applied to the Relative Band Power features extracted from the EEG. We aimed to predict the tES treatment outcome from the EEG data of 17 patients, 4 of whom sustainably showed further signs of consciousness after treatment. We were able to correctly classify the response of patients to tES therapy with 95% accuracy. In this paper we present the methodology as well as a comparative evaluation of the different classification approaches employed. We thereby demonstrate the feasibility of implementing a novel informed Decision Support System (DSS) based on this methodological approach for the correct prediction of patients' response to tES therapy in MCS.
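Relative band power features of the kind used in this study can be computed from the pre-intervention EEG roughly as follows; the band limits, window length and sampling rate are assumed values, not the study's exact configuration.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def relative_band_power(eeg, fs=256):
    """eeg: array of shape (n_channels, n_samples). Returns one relative power per channel and band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    total = np.trapz(psd, freqs, axis=-1)            # total power per channel
    features = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        band = np.trapz(psd[:, idx], freqs[idx], axis=-1)
        features.append(band / total)                # relative power in this band
    return np.concatenate(features)                  # feature vector fed to the classifiers

rbp = relative_band_power(np.random.randn(19, 256 * 60))  # e.g. 19 channels, 60 s of EEG
print(rbp.shape)
```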
Neurodegenerative disease assessment with handwriting has been shown to be effective. In this exploratory analysis, several features are extracted and tested on different tasks of the novel HAND-UNIBA dataset. Results show which kinematic features are most important and which tasks are most significant for neurodegenerative disease assessment through handwriting.
Nowadays, the treatment of neurodegenerative diseases is increasingly sophisticated, mainly thanks to innovations in the medical field. As the effectiveness of care strategies is enhanced by early diagnosis, in recent years there has been increasing interest in developing reliable, non-invasive, easy-to-administer, and cheap diagnostic tools to support clinicians in the diagnostic process. Among others, Alzheimer's disease (AD) has received special attention in that it is a severe and progressive neurodegenerative disease that heavily influences the patient's quality of life, as well as the social costs of proper care. In this context, a large variety of methods have been proposed that exploit handwriting and drawing tasks to discriminate between healthy subjects and AD patients. Most, if not all, of these methods adopt a single machine learning technique to achieve the final classification. We propose to tackle the problem by adopting a multi-classifier approach envisaging as many classifiers as the number of tasks, each of which produces a binary output. The outputs of the classifiers are eventually combined by majority vote to reach the final decision. Experiments on a dataset involving 175 subjects executing 25 different handwriting and drawing tasks, and 6 different machine learning techniques selected among the most used in the literature, show that the best results are achieved by selecting the subset of tasks on which each classifier performs best and then combining the outputs of the classifiers on each task, achieving an overall accuracy of 91% with a sensitivity of 83% and a specificity of 100%. Moreover, this strategy reduces the mean time to complete the test from 25 minutes to less than 10.
18:00 CET