1 PhD Position available: Human-Robot Interaction Based on Vision
The key idea is to use visual cues to understand human intentions during a manual task, e.g. assembling objects or preparing food. By observing a human, a robot should be able to assist, much like a theatre nurse during surgery: the robot observes and anticipates the next step. This requires an understanding of human tasks on a semantic level, consisting of actions applied to objects. An internal (semantic) action plan of such a process is needed, and the robot should be able to localize the currently executed action within the task network and to learn new actions by observation. One approach could be, e.g., to link robot and human actions to symbols and vice versa.
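The idea of localizing the current action within a semantic task network and anticipating the next step could be sketched roughly as follows. This is a minimal illustration only; the class, the linear step structure, and the toy assembly task are all assumptions for the sake of the example, not an existing system.

```python
# Hypothetical sketch: a manual task as an ordered network of
# (action, object) steps on the semantic level. The robot localizes
# an observed action within the network and anticipates the next step.

class TaskNetwork:
    def __init__(self, steps):
        # steps: ordered list of (action, object) pairs
        self.steps = steps

    def localize(self, observed_action, observed_object):
        """Return the index of the step matching the observation, or None."""
        for i, (action, obj) in enumerate(self.steps):
            if action == observed_action and obj == observed_object:
                return i
        return None

    def anticipate(self, current_index):
        """Return the step the robot should prepare for next, if any."""
        if current_index is not None and current_index + 1 < len(self.steps):
            return self.steps[current_index + 1]
        return None

# Toy assembly task (illustrative)
assembly = TaskNetwork([
    ("pick", "screwdriver"),
    ("fasten", "screw"),
    ("place", "cover"),
])

idx = assembly.localize("pick", "screwdriver")
print(assembly.anticipate(idx))  # prints ('fasten', 'screw')
```

In a real system the network would be a graph rather than a list, the observations would come from a vision pipeline, and matching would be probabilistic, but the symbolic action-object level shown here is what links human observation to robot assistance.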
1 PhD Position available: Semantic 3D Scene Understanding
Object detection in computer vision has made significant progress due to the renaissance of neural networks and deep learning. The drawback of such approaches is still that huge datasets have to be processed, and it is difficult to add knowledge to the network without retraining at least some of its layers. Neural networks remain a black box, and it is hard to extract symbolic knowledge about the scene from them. This project will address how semantic knowledge about shape, features, structure, and co-occurrences, represented using ontologies, Bayesian networks, etc., can be used to reason about what a robot sees. E.g., if a neural network perceives a cup hanging from the ceiling, this is very likely a misclassification. What can semantic knowledge tell the robot about the usual appearances of cups, and about things hanging from a ceiling, and where does that knowledge come from? Can a knowledge base also trigger robotic actions if a correctly classified object does not belong where it is found?
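The "cup hanging from the ceiling" question could be sketched as a plausibility check that a small knowledge base applies to neural-network detections. This is an illustrative assumption only; the detection format, the knowledge-base contents, and all names here are hypothetical, not a prescribed design.

```python
# Hypothetical sketch: a knowledge base of typical support surfaces
# per object class, used to flag implausible detections for
# re-examination rather than accepting the network's label outright.

TYPICAL_SUPPORT = {
    "cup": {"table", "shelf", "counter"},
    "lamp": {"ceiling", "table"},
}

def plausible(detection):
    """Return True if the detected label fits its observed support surface."""
    label = detection["label"]
    support = detection["support"]
    return support in TYPICAL_SUPPORT.get(label, set())

detections = [
    {"label": "cup", "support": "table", "score": 0.9},
    {"label": "cup", "support": "ceiling", "score": 0.8},  # likely misclassified
]

suspicious = [d for d in detections if not plausible(d)]
print(suspicious)  # the ceiling "cup" is flagged
```

In the project itself, the hand-written dictionary would be replaced by an ontology or a learned Bayesian network over co-occurrences, and a flagged detection could trigger a robotic action such as taking another look from a different viewpoint.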