AUDI Konfuzius-Institut Ingolstadt

New paper accepted at ICANN 2020

Our latest paper, on learning mathematical relations among tumor histopathological and morphological parameters to capture phenotypical growth transitions, entitled

“Tumor Characterization using Unsupervised Learning of Mathematical Relations within Breast Cancer Data” by Cristian Axenie and Daria Kurz

was accepted at the 29th International Conference on Artificial Neural Networks (ICANN 2020), the annual flagship conference of the European Neural Network Society (ENNS). The 29th edition was planned for 15-18 September 2020 in Bratislava, Slovakia.

Well done!

 

COMPONENS: COMPutational ONcology ENgineered Solutions

Introduction

COMPONENS focuses on the research and development of tools, models and infrastructure needed to interpret large amounts of clinical data and enhance cancer treatments and our understanding of the disease. To this end, COMPONENS serves as a bridge between the data, the engineer, and the clinician in oncological practice.

Thus, knowledge-based predictive mathematical modelling is used to fill gaps in sparse data; assist and train machine learning algorithms; provide measurable interpretations of complex and heterogeneous clinical data sets; and make patient-tailored predictions of cancer progression and response.


GLUECK: Growth pattern Learning for Unsupervised Extraction of Cancer Kinetics

 

Neoplastic processes are described by complex and heterogeneous dynamics. The interaction of neoplastic cells with their environment shapes tumor growth and is critical for the initiation of cancer invasion. Despite the large spectrum of tumor growth models, there is no clear guidance on how to choose the most appropriate model for a particular cancer and how this choice will impact its subsequent use in therapy planning. Such models need a parametrization that depends on tumor biology and hardly generalizes to other tumor types and their variability. Moreover, the available datasets are small, owing to limited or expensive measurement methods. To alleviate the limitations that incomplete biological descriptions, the diversity of tumor types, and the small size of the data bring to mechanistic models, we introduce Growth pattern Learning for Unsupervised Extraction of Cancer Kinetics (GLUECK), a novel, data-driven model based on a neural network capable of unsupervised learning of cancer growth curves. Employing mechanisms of competition, cooperation, and correlation in neural networks, GLUECK learns the temporal evolution of the input data along with the underlying distribution of the input space. We demonstrate the superior accuracy of GLUECK, against four typically used tumor growth models, in extracting growth curves from a set of four clinical tumor datasets. Our experiments show that, without any modification, GLUECK can learn the underlying growth curves while remaining versatile between and within tumor types.
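To give a concrete sense of the kind of baseline GLUECK is evaluated against, the sketch below fits two of the typically used growth models (logistic and Gompertz) to a toy series of tumor volume measurements; the data points, initial guesses, and units are purely illustrative and are not taken from the paper's datasets.

```python
# Minimal sketch of classical tumor growth baselines (logistic and Gompertz)
# of the kind GLUECK is compared against. Data and parameters are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, V0, K, r):
    """Logistic growth: volume saturates at carrying capacity K."""
    return K / (1 + (K / V0 - 1) * np.exp(-r * t))

def gompertz(t, V0, K, r):
    """Gompertz growth: decelerating approach to carrying capacity K."""
    return K * np.exp(np.log(V0 / K) * np.exp(-r * t))

# Illustrative sparse tumor volume measurements (time in days, volume in mm^3)
t_data = np.array([0, 7, 14, 21, 28, 35], dtype=float)
v_data = np.array([50, 120, 260, 480, 700, 820], dtype=float)

for name, model in [("logistic", logistic), ("gompertz", gompertz)]:
    params, _ = curve_fit(model, t_data, v_data, p0=[50.0, 1000.0, 0.1], maxfev=10000)
    rmse = np.sqrt(np.mean((model(t_data, *params) - v_data) ** 2))
    print(f"{name}: V0={params[0]:.1f}, K={params[1]:.1f}, r={params[2]:.3f}, RMSE={rmse:.1f}")
```

Such parametric fits need per-dataset tuning of V0, K, and r, which is exactly the biology-dependent parametrization that GLUECK's unsupervised learning aims to avoid.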

Preprint

https://www.biorxiv.org/content/10.1101/2020.06.13.140715v1

Code

https://gitlab.com/akii-microlab/ecml-2020-glueck-codebase


PRINCESS: Prediction of Individual Breast Cancer Evolution to Surgical Size

 

Modelling surgical size is not meant to replicate the tumor's exact form and proportions, but rather to estimate the extent of tissue volume that may need to be surgically removed in order to improve patient survival and minimize the risk that a second or third operation will be needed to eliminate all malignant cells entirely. Given the broad range of models of tumor growth, there is no specific rule of thumb for selecting the most suitable model for a particular breast cancer type, or for judging whether that choice would influence its subsequent application in surgery planning. Typically, these models require tumor biology-dependent parametrization, which hardly generalizes to cope with tumor heterogeneity. In addition, the datasets are limited in size owing to restricted or expensive methods of measurement. We address the shortcomings that incomplete biological specifications, the variety of tumor types, and the limited size of the data bring to existing mechanistic tumor growth models and introduce a machine learning model for the PRediction of INdividual breast Cancer Evolution to Surgical Size (PRINCESS). This is a data-driven model based on neural networks capable of unsupervised learning of cancer growth curves. PRINCESS learns the temporal evolution of the tumor along with the underlying distribution of the measurement space. We demonstrate the superior accuracy of PRINCESS, against four typically used tumor growth models, in extracting tumor growth curves from a set of nine clinical breast cancer datasets. Our experiments show that, without any modification, PRINCESS can learn the underlying growth curves while remaining versatile across breast cancer types.
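As a rough illustration of how a learned growth curve could feed into surgery planning, the sketch below extrapolates an assumed Gompertz curve to a planned surgery date and converts the predicted volume into an equivalent spherical diameter; the curve parameters, dates, and spherical approximation are illustrative assumptions and not the PRINCESS model itself.

```python
# Illustrative use of a fitted growth curve for surgery planning: extrapolate
# tumor volume to the planned surgery date and report the equivalent spherical
# diameter. All values are assumptions for illustration only.
import numpy as np

def gompertz(t, V0=50.0, K=1500.0, r=0.05):
    """Assumed Gompertz growth curve (volume in mm^3, time in days)."""
    return K * np.exp(np.log(V0 / K) * np.exp(-r * t))

surgery_day = 60                                     # days after the reference measurement
v_pred = gompertz(surgery_day)                       # predicted tumor volume at surgery
d_pred = 2 * (3 * v_pred / (4 * np.pi)) ** (1 / 3)   # equivalent sphere diameter

print(f"Predicted volume at surgery: {v_pred:.0f} mm^3 (~{d_pred:.1f} mm diameter)")
```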

Preprint

https://www.biorxiv.org/content/10.1101/2020.06.13.150136v1

Code

https://gitlab.com/akii-microlab/cbms2020 


TUCANN: TUmor Characterization using Artificial Neural Networks

 

Despite the variety of imaging, genetic, and histopathological data used to assess tumors, there is still an unmet need for patient-specific tumor growth profile extraction and tumor volume prediction for use in surgery planning. Models of tumor growth predict tumor size based on measurements made in histological images of individual patients' tumors, compared against diagnostic imaging. Typically, such models require tumor biology-dependent parametrization, which hardly generalizes to cope with tumor variability among patients. In addition, histopathology specimen datasets are limited in size, owing to restricted or single-time measurements. In this work, we address the shortcomings that incomplete biological specifications, the inter-patient variability of tumors, and the limited size of the data bring to mechanistic tumor growth models and introduce a machine learning model capable of characterizing a tumor, namely its growth pattern, phenotypical transitions, and volume. The model learns, without supervision and from different types of breast cancer data, the underlying mathematical relations describing tumor growth curves more accurately than three state-of-the-art models on three publicly available clinical breast cancer datasets, while remaining versatile among breast cancer types. Moreover, the model can, without modification, learn the mathematical relations among, for instance, histopathological and morphological parameters of the tumor and, together with the growth curve, capture the (phenotypical) growth transitions of the tumor from a small amount of data. Finally, given the tumor growth curve and its transitions, our model can learn the relation among the tumor proliferation-to-apoptosis ratio, tumor radius, and tumor nutrient diffusion length to estimate tumor volume, which can be readily incorporated within current clinical practice for surgery planning. We demonstrate the broad capabilities of our model through a series of experiments on publicly available clinical datasets.
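As a generic illustration of unsupervised relation learning between two co-measured quantities (not the paper's exact architecture), the sketch below couples two one-dimensional self-organizing maps through a Hebbian association matrix and reads out the learned mapping; the data are synthetic stand-ins for co-measured tumor parameters.

```python
# Generic sketch: two 1-D self-organizing maps, one per measured variable,
# coupled by a Hebbian association matrix that captures which values of the
# two variables co-occur. Data are synthetic; this is not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_epochs, lr, sigma = 50, 50, 0.1, 5.0

# Synthetic paired observations: y is a noisy nonlinear function of x.
x = rng.uniform(0, 1, 500)
y = x ** 2 + 0.02 * rng.normal(size=x.size)

w_x = rng.uniform(0, 1, n_units)          # SOM codebook for variable x
w_y = rng.uniform(0, 1, n_units)          # SOM codebook for variable y
hebb = np.zeros((n_units, n_units))       # cross-map association matrix
units = np.arange(n_units)

for _ in range(n_epochs):
    for xi, yi in zip(x, y):
        bx = np.argmin(np.abs(w_x - xi))  # best-matching unit for x
        by = np.argmin(np.abs(w_y - yi))  # best-matching unit for y
        hx = np.exp(-((units - bx) ** 2) / (2 * sigma ** 2))
        hy = np.exp(-((units - by) ** 2) / (2 * sigma ** 2))
        w_x += lr * hx * (xi - w_x)       # competitive/cooperative SOM updates
        w_y += lr * hy * (yi - w_y)
        hebb += lr * np.outer(hx, hy)     # Hebbian co-activation strengthens links

# Decode the learned relation: for each x-unit, take the y value of the most
# strongly associated y-unit, approximating y = f(x) without supervision.
y_hat = w_y[hebb.argmax(axis=1)]
print(np.round(np.c_[w_x[::10], y_hat[::10]], 2))
```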

Preprint

https://www.biorxiv.org/content/10.1101/2020.06.08.140723v1  

Code

https://gitlab.com/akii-microlab/icann-2020-bio 


CHIMERA: Combining Mechanistic Models and Machine Learning for Personalized Chemotherapy and Surgery Sequencing in Breast Cancer

 

Mathematical and computational oncology has increased the pace of cancer research towards the advancement of personalized therapy. Serving the pressing need to exploit the large amounts of currently underutilized data, such approaches bring a significant clinical advantage in tailoring the therapy. CHIMERA is a novel system that combines mechanistic modelling and machine learning for personalized chemotherapy and surgery sequencing in breast cancer. It optimizes decision-making in personalized breast cancer therapy by connecting tumor growth behaviour and chemotherapy effects through predictive modelling and learning. We demonstrate the capabilities of CHIMERA in simultaneously learning the tumor growth patterns, across several types of breast cancer, and the pharmacokinetics of a typical breast cancer chemotoxic drug. The learnt functions are subsequently used to predict how to sequence the intervention. We demonstrate the versatility of CHIMERA in learning from tumor growth and pharmacokinetics data to provide robust predictions under two typically used chemotherapy protocol hypotheses.
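To make the coupling between growth behaviour and drug effect concrete, the sketch below combines a Gompertz growth curve with textbook one-compartment pharmacokinetics and a log-kill term under a three-week dosing cycle; all equations and parameter values are generic assumptions for illustration and are not CHIMERA's learned functions.

```python
# Illustrative coupling of tumor growth with simple one-compartment
# pharmacokinetics and a log-kill effect. Generic textbook assumptions only.
import numpy as np

dt, days = 0.1, 120
t = np.arange(0, days, dt)

# One-compartment PK: repeated bolus doses with first-order elimination.
k_elim, dose, cycle = 0.3, 1.0, 21        # elimination rate (1/day), dose (a.u.), cycle (days)
conc = np.zeros_like(t)
conc[0] = dose                            # first dose at day 0
for i in range(1, t.size):
    conc[i] = conc[i - 1] * np.exp(-k_elim * dt)
    if np.isclose(t[i] % cycle, 0.0, atol=dt / 2):
        conc[i] += dose                   # re-dose every 3 weeks

# Gompertz growth with a concentration-proportional kill term (log-kill).
K, r, kill = 1500.0, 0.06, 0.08
v = np.zeros_like(t)
v[0] = 100.0                              # initial tumor volume (mm^3)
for i in range(1, t.size):
    dv = r * v[i - 1] * np.log(K / v[i - 1]) - kill * conc[i] * v[i - 1]
    v[i] = max(v[i - 1] + dt * dv, 1e-3)

print(f"Volume after {days} days of 3-week cycles: {v[-1]:.0f} mm^3")
```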

Preprint

https://www.biorxiv.org/content/10.1101/2020.06.08.140756v1 

Code

https://gitlab.com/akii-microlab/bibe2020 

New paper accepted at ECML PKDD 2020

Our new paper combining machine learning and mechanistic modelling for cancer growth curve extraction entitled

“GLUECK: Growth pattern Learning for Unsupervised Extraction of Cancer Kinetics” by Cristian Axenie and Daria Kurz

was accepted at ECML PKDD 2020, the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, which will take place in Ghent, Belgium, from the 14th to the 18th of September 2020.

Well done!

New research abstract accepted at MASCC/ISOO 2020

One research abstract on

Adaptive Virtual Reality Avatars For Sensorimotor Rehabilitation In Chemotherapy-Induced Peripheral Neuropathy by C. Axenie and D. Kurz

was accepted and presented at MASCC/ISOO 2020 — the Annual Meeting on Supportive Care in Cancer, a joint meeting of the Multinational Association of Supportive Care in Cancer (MASCC) and the International Society of Oral Oncology (ISOO) — in Seville, Spain from June 24 to 26, 2021.

The work will be published in Supportive Care in Cancer (Springer).

Well done!

New research abstract accepted at DKK2020

One research abstract on

Learning Personalized Virtual Reality Avatars for Chemotherapy-Induced Peripheral Neuropathy Rehabilitation in Breast Cancer by D. Kurz and C. Axenie

was accepted and presented at the 34th German Cancer Congress (DKK 2020), held from February 19 to 22, 2020, in Berlin.

The work was published in the journal Oncology Research and Treatment (Karger Publishing), DOI: 10.1159/issn.2296-5270.

Well done!

PERSEUS: Platform for Enhanced virtual Reality in Sport Exercise Understanding and Simulation

 

PERSEUS (Platform for Enhanced virtual Reality in Sport Exercise Understanding and Simulation) is funded by the Central Innovation Programme for small and medium-sized enterprises (ZIM, Zentrales Innovationsprogramm Mittelstand) of the Federal Ministry for Economic Affairs and Energy.

 

Consortium

Goal 

To support performance, elite athletes such as goalkeepers require a combination of general visual skills (e.g. visual acuity, contrast sensitivity, depth perception) and performance-relevant perceptual-cognitive skills (e.g. anticipation, decision-making). While these skills are typically developed as a consequence of regular on-field practice, training techniques are available that can enhance them outside of, or in conjunction with, regular training. Perceptual training has commonly included sports vision training (SVT), which uses generic stimuli (e.g. shapes, patterns) in optometry-based tasks with the aim of developing visual skills, and perceptual-cognitive training (PCT), which traditionally uses sport-specific film or images to develop perceptual-cognitive skills. Improvements in technology have also led to the development of additional tools (e.g. reaction time trainers, computer-based vision training, and VR systems) which have the potential to enhance perceptual skill using a variety of equipment in on- and off-field settings and do not necessarily fit into these existing categories.

In this context, we hypothesize that using high-fidelity VR systems to display realistic 3D sport environments could provide a means to control anxiety, allowing resilience-training systems to prepare athletes for real-world, high-pressure situations and hence offering a tool for sport psychology training. Moreover, a virtual environment (VE) should provide a realistic rendering of the sports scene to achieve good perceptual fidelity. Even more important for a sport-themed VE is high functional fidelity, which requires an accurate physics model of a complex environment, real-time response, and a natural user interface. This is complemented by precise body motion tracking and kinematic model extraction. The goal is to provide multiple scenarios to players at different levels of difficulty, equipping them with improved skills that can be applied directly in the real sports arena and contributing to full biomechanical training in VR.

The project proposes the development of an AI-powered VR system for sport psychological (cognitive) and biomechanical training. By exploiting neuroscientific knowledge of sensorimotor processing, artificial intelligence algorithms, and VR avatar reconstruction, our lab, along with the other consortium partners, targets the development of an adaptive, affordable, and flexible novel solution for goalkeeper training in VR.

Overview

The generic system architecture is depicted in the following diagram.

Using time-sequenced data, one can extract the motion components from the goalkeeper's movements and generate a VR avatar.

An initial view of the avatar's compatibility with the real-world motion of the athlete is shown in the next diagram.

Validation is performed against ground-truth data from a camera, whereas the data from the gloves are used to train the avatar kinematics and reconstruction. The glove system is a lightweight embedded sensing system.

In the current study, we focus on extracting such goalkeeper analytics from kinematics using machine learning. We demonstrate that information from a single motion sensor can be successfully used for accurate and explainable goalkeeper kinematics assessment.

In order to exploit the richness and unique parameters of the goalkeeper’s motion, we employ a robust machine learning algorithm that is able to discriminate dives from other types of specific motions directly from raw sensory data. Each prediction is accompanied by an explanation of how each sensed motion component contributes to describing a specific goalkeeper’s action.
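As a generic illustration of this kind of explainable motion classification (not the PERSEUS pipeline itself), the sketch below trains a classifier on simple summary features of synthetic tri-axial acceleration windows and reports which features contribute most to the dive-versus-other decision; the data, features, and labels are placeholders, not the glove recordings.

```python
# Generic sketch: dive-vs-other classification from a single motion sensor,
# with per-feature contributions as a simple form of explanation.
# Data, features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_windows(n, dive):
    """Synthetic 2 s windows of tri-axial acceleration sampled at 100 Hz."""
    w = rng.normal(0, 1, (n, 200, 3))
    if dive:
        w[:, 80:120, :] += rng.normal(4, 1, (n, 40, 3))  # acceleration burst during a dive
    return w

def features(w):
    """Per-axis mean, standard deviation, and peak magnitude per window."""
    return np.hstack([w.mean(axis=1), w.std(axis=1), np.abs(w).max(axis=1)])

X = np.vstack([features(make_windows(300, True)), features(make_windows(300, False))])
y = np.r_[np.ones(300), np.zeros(300)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

names = [f"{s}_{a}" for s in ("mean", "std", "absmax") for a in "xyz"]
for name, imp in sorted(zip(names, clf.feature_importances_), key=lambda p: -p[1])[:3]:
    print(f"{name}: {imp:.2f}")  # which motion components drive the prediction
```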

 

AKII wins 1st Prize in the European AI Research Challenge organized by Merck Research

30th of August 2019

A team of two students from THI, Du Xiaorui (Master's, Computer Science) and Yavuzhan Erdem (Bachelor's, Mechatronics), led by Dr. Cristian Axenie (head of the Audi Konfuzius-Institut Ingolstadt Lab), has been awarded the 1st Prize in the prestigious European Merck AI Research Challenge.

Team NeuroTHIx, one of the 72 initially participating teams, qualified for the final in June. A one-day research bootcamp at Merck then gave the team the opportunity to refine their idea, and after three months of development, NeuroTHIx was invited to pitch at Merck's Innovation Center in Darmstadt. On the 30th of August, team NeuroTHIx pitched alongside four other finalists and received the outstanding distinction.

IRENA (Invariant Representations Extraction in Neural Architectures) is the approach that team NeuroTHIx developed. IRENA offers a computational layer for extracting sensory relations from rich visual scenes, with learning, inference, de-noising, and sensor fusion capabilities. Through its underlying unsupervised learning capabilities, the system is also able to embed semantics and perform scene understanding.

Team NeuroTHIx won a paid Merck research fellowship of at least 4 months in the AI Research Department. Additionally, the team got the chance to publish a joint scientific paper and to write theses with Merck. The prize also included a cheque of 2500 EUR.

Well done!

PERSEUS Project receives funding

The PERSEUS (Platform for Enhanced Virtual Reality in Sport Exercise Understanding and Simulation) research project, which deals with the development of VR-based goalkeeper training, has successfully received financial support from the Federal Ministry for Economic Affairs and Energy (BMWi). AKII Microlab will design and develop AI algorithms for VR avatar-based reconstruction of the player's movement, able to track and simulate the whole-body skeletal movement of goalkeepers using embedded glove sensors. The funding is part of the Central Innovation Programme for SMEs (ZIM) and thus supports our research activities in the field of VR and AI.

Well done, team!
