AUDI Konfuzius-Institut Ingolstadt

New paper accepted at ECML PKDD 2020

Our new paper combining machine learning and mechanistic modelling for cancer growth curve extraction entitled

“GLUECK: Growth pattern Learning for Unsupervised Extraction of Cancer Kinetics” by Cristian Axenie and Daria Kurz

was accepted at ECML PKDD 2020, the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, which will take place in Ghent, Belgium, from the 14th to the 18th of September 2020.

Well done!

New research abstract accepted at MASCC/ISOO 2020

One research abstract on

Adaptive Virtual Reality Avatars For Sensorimotor Rehabilitation In Chemotherapy-Induced Peripheral Neuropathy by C. Axenie and D. Kurz

was accepted and presented at MASCC/ISOO 2020 — the Annual Meeting on Supportive Care in Cancer, a joint meeting of the Multinational Association of Supportive Care in Cancer (MASCC) and the International Society of Oral Oncology (ISOO) — held in Seville, Spain, from June 24 to 26, 2021.

The work will be published in Springer Supportive Care in Cancer.

Well done!

New research abstract accepted at DKK2020

One research abstract on

Learning Personalized Virtual Reality Avatars for Chemotherapy-Induced Peripheral Neuropathy Rehabilitation in Breast Cancer by D. Kurz and C. Axenie

was accepted and presented at the 34th German Cancer Congress, February 19 to 22, 2020, in Berlin.

The work was published in the journal Oncology Research and Treatment (Karger Publishers), DOI: 10.1159/issn.2296-5270.

Well done!

PERSEUS: Platform for Enhanced virtual Reality in Sport Exercise Understanding and Simulation

 

The PERSEUS project (Platform for Enhanced virtual Reality in Sport Exercise Understanding and Simulation) is funded by the Central Innovation Programme for small and medium-sized enterprises (ZIM – Zentrales Innovationsprogramm Mittelstand) of the Federal Ministry for Economic Affairs and Energy.

 

Goal 

To support performance, elite athletes such as goalkeepers require a combination of general visual skills (e.g. visual acuity, contrast sensitivity, depth perception) and performance-relevant perceptual-cognitive skills (e.g. anticipation, decision-making). While these skills typically develop through regular on-field practice, training techniques are available that can enhance them outside of, or in conjunction with, regular training. Perceptual training has commonly included sports vision training (SVT), which uses generic stimuli (e.g. shapes, patterns) in optometry-based tasks with the aim of developing visual skills, and perceptual-cognitive training (PCT), which traditionally uses sport-specific film or images to develop perceptual-cognitive skills. Improvements in technology have also led to additional tools (e.g. reaction-time trainers, computer-based vision training, and VR systems) that do not necessarily fit these existing categories but have the potential to enhance perceptual skill with a variety of equipment in on- and off-field settings.

In this context, we hypothesize that using high-fidelity VR systems to display realistic 3D sport environments could provide a means to control anxiety, allowing resilience-training systems to prepare athletes for real-world, high-pressure situations and hence offering a tool for sport psychology training. Moreover, a virtual environment (VE) should provide a realistic rendering of the sports scene to achieve good perceptual fidelity. More important for a sport-themed VE is high functional fidelity, which requires an accurate physics model of a complex environment, real-time response, and a natural user interface, complemented by precise body-motion tracking and kinematic model extraction. The goal is to offer players multiple scenarios at different levels of difficulty, equipping them with improved skills that transfer directly to the real sports arena and contributing to full biomechanical training in VR.

The project proposes the development of an AI-powered VR system for sport psychological (cognitive) and biomechanical training. By exploiting neuroscientific knowledge of sensorimotor processing, Artificial Intelligence algorithms, and VR avatar reconstruction, our lab, along with the other consortium partners, targets the development of an adaptive, affordable, and flexible novel solution for goalkeeper training in VR.

Overview

The generic system architecture is depicted in the following diagram.

Using time-sequenced data, one can extract the motion components from the goalkeeper's motion and generate a VR avatar.
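As a toy illustration of this idea (not the project's actual pipeline), dominant motion components can be extracted from time-sequenced sensor data with PCA; the data, channel count, and mixing below are entirely synthetic assumptions:

```python
import numpy as np

# Synthetic stand-in for time-sequenced goalkeeper motion:
# 200 time steps x 9 channels (e.g. 3-axis acc/gyro/mag), purely hypothetical.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
motion = np.column_stack([np.sin(t * (k + 1)) for k in range(3)])
data = motion @ rng.normal(size=(3, 9)) + 0.05 * rng.normal(size=(200, 9))

# PCA via SVD: principal motion components are the top right-singular vectors.
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:3]               # dominant motion directions
scores = centered @ components.T  # per-time-step activations that could drive an avatar

explained = (S[:3] ** 2).sum() / (S ** 2).sum()
print(f"variance captured by 3 components: {explained:.2f}")
```

Because the synthetic data lie (up to noise) in a 3-dimensional subspace, three components capture almost all of the variance; real glove data would need more careful preprocessing and component selection.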

An initial view of the avatar compatibility with the real-world motion of the athlete is described in the next diagram.

The validation is done against ground-truth data from a camera, whereas the data from the gloves are used to train the avatar kinematics and reconstruction. The glove system is a lightweight embedded sensing system.

In the current study, we focus on extracting such goalkeeper analytics from kinematics using machine learning. We demonstrate that information from a single motion sensor can be successfully used for accurate and explainable goalkeeper kinematics assessment.

In order to exploit the richness and unique parameters of the goalkeeper’s motion, we employ a robust machine learning algorithm that is able to discriminate dives from other types of specific motions directly from raw sensory data. Each prediction is accompanied by an explanation of how each sensed motion component contributes to describing a specific goalkeeper’s action.
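A minimal sketch of this kind of explainable dive classification, under loud assumptions: the windows, features, and the "lateral burst" signature below are all synthetic, the classifier is a plain logistic regression rather than the project's actual algorithm, and the "explanation" is simply the weight attached to each motion feature:

```python
import numpy as np

rng = np.random.default_rng(1)

def window_features(acc):
    """Simple features from one raw 3-axis accelerometer window."""
    mag = np.linalg.norm(acc, axis=1)          # acceleration magnitude
    return [mag.mean(), mag.std(), mag.max(), np.abs(np.diff(mag)).mean()]

def make_window(dive):
    """Synthetic 50-sample window; 'dive' windows get a lateral burst."""
    acc = 0.3 * rng.normal(size=(50, 3))
    if dive:
        acc[20:30, 0] += 5.0                   # hypothetical lateral spike
    return window_features(acc)

X = np.array([make_window(d) for d in [True] * 100 + [False] * 100])
y = np.array([1.0] * 100 + [0.0] * 100)

# Standardize features, then fit logistic regression by gradient descent.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(Xs.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))   # predicted dive probability
    w -= 0.1 * Xs.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

pred = 1.0 / (1.0 + np.exp(-(Xs @ w + b))) > 0.5
accuracy = (pred == y.astype(bool)).mean()

# "Explanation": each standardized feature's learned weight shows how that
# motion component contributes to the dive/non-dive decision.
for name, weight in zip(["mean |a|", "std |a|", "max |a|", "mean jerk"], w):
    print(f"{name}: {weight:+.2f}")
print(f"training accuracy: {accuracy:.2f}")
```

On this toy data the burst-driven features dominate the weights; with real raw sensory data the feature set and model would of course be richer.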

 

AKII wins 1st Prize in the European AI Research Challenge organized by Merck Research

30th of August 2019

A team of two students from THI, Du Xiaorui (Master's, Computer Science) and Yavuzhan Erdem (Bachelor's, Mechatronics), led by Dr. Cristian Axenie (head of the Audi Konfuzius-Institut Ingolstadt Lab), was awarded the 1st Prize in the prestigious European Merck AI Research Challenge.

Team NeuroTHIx, among the 72 initially participating teams, qualified for the final in June. A one-day research bootcamp at Merck then offered the team the possibility to refine their idea, and after three months of development, NeuroTHIx was invited to pitch at Merck's Innovation Center in Darmstadt. On the 30th of August, team NeuroTHIx pitched alongside the four other finalists and received the outstanding distinction.

IRENA (Invariant Representations Extraction in Neural Architectures) is the approach that team NeuroTHIx developed. IRENA offers a computational layer for extracting sensory relations from rich visual scenes, with learning, inference, de-noising, and sensor-fusion capabilities. Through its underlying unsupervised learning capabilities, the system can also embed semantics and perform scene understanding.

Team NeuroTHIx won a paid Merck research fellowship of at least four months in the AI Research Department. Additionally, the team got the chance to publish a joint scientific paper and to write theses with Merck. The prize also included a cheque for 2500 EUR.

Well done!

PERSEUS Project receives funding

The PERSEUS (Platform for Enhanced Virtual Reality in Sport Exercise Understanding and Simulation) research project, which deals with the development of VR-based goalkeeping training, successfully received financial support from the Federal Ministry for Economic Affairs and Energy (BMWi). AKII Microlab will design and develop AI algorithms for VR avatar-based reconstruction of the player's movement, able to track and simulate the whole-body skeletal movement of goalkeepers using embedded glove sensors. The funding is part of the Central Innovation Programme for SMEs (ZIM) and thus supports our research activities in the field of VR and AI.

Well done team !

AKII Finalist in Merck “Future of AI” Challenge

Dr. Axenie together with two students of THI (team NeuroTHIx: Du Xiaorui, Yavuzhan Erdem, Cristian Axenie) qualified for the final of Merck “Future of AI” Challenge.

IRENA (Invariant Representations Extraction in Neural Architectures) 

In this project we aim to build an unsupervised learning system, based on and inspired by biological intelligence, for the problem of learning invariant representations. Mammalian visual systems are characterized by their ability to recognize stimuli invariant to various transformations. With our proposed model, we investigate the hypothesis that this ability is achieved by the temporal encoding of visual stimuli, and potentially of other sensory stimuli as well. Using a model of a multisensory, cortically inspired network, we show that this encoding is invariant to several transformations and robust with respect to stimulus variability. Furthermore, we show that the proposed model provides rapid encoding and computation, in accordance with recent physiological results.

https://app.ekipa.de/challenge/future-of-ai/brief
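A minimal sketch of why temporal encoding can yield invariance (a toy illustration, not the IRENA model itself): if each unit fires earlier the stronger its input, the rank order of first spikes is unchanged by any monotonically increasing transformation of the stimulus, such as a contrast or brightness change:

```python
import numpy as np

def spike_ranks(stimulus):
    """Rank-order code: stronger inputs fire earlier (latency ~ -intensity).
    The code is the order in which units fire, earliest first."""
    latencies = -np.asarray(stimulus, dtype=float)
    return np.argsort(latencies)

stim = np.array([0.2, 0.9, 0.5, 0.7])

# Monotonically increasing transformations preserve the firing order.
print(spike_ranks(stim))
print(spike_ranks(3.0 * stim + 0.1))  # contrast/brightness change
print(spike_ranks(stim ** 2))         # gamma-like distortion (positive stimuli)
```

All three calls produce the same firing order, so a downstream readout of rank order is invariant to these transformations "for free"; the actual model of course handles far richer transformations than this toy.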

The team is now invited to Merck's Research Center in Darmstadt for a two-day boot camp to refine the idea, and will then work on it until the end of August. To conclude the Challenge, we will present our final solution in Darmstadt on the 30th of August.

Well done and good luck!

 

Two new papers accepted at the 28th International Conference on Artificial Neural Networks

The International Conference on Artificial Neural Networks (ICANN) is the annual flagship conference of the European Neural Network Society (ENNS). For the 2019 edition of ICANN, AKII Microlab had two papers accepted.

 

Neural Network 3D Body Pose Tracking and Prediction for Motion-to-Photon Latency Compensation in Distributed Virtual Reality – Sebastian Pohl, Armin Becher, Thomas Grauschopf, Cristian Axenie

NARPCA: Neural Accumulate-Retract PCA for Low-latency High-throughput Processing on Datastreams – Cristian Axenie, Radu Tudoran, Stefano Bortoli, Mohamad Al Hajj Hassan, Goetz Brasche

 

Well done team !
