<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Two PhD positions are available at the Robotics Group and the ARC
Centre of Excellence for Robotic Vision, Queensland University of
Technology, Brisbane, Australia.<br>
<br>
<br>
<p style="margin: 0px; padding: 0px; color: rgb(51, 51, 51);
font-family: Arial, sans-serif; font-size: 14px; font-style:
normal; font-variant: normal; font-weight: normal; letter-spacing:
normal; line-height: 20px; orphans: auto; text-align: left;
text-indent: 0px; text-transform: none; white-space: normal;
widows: 1; word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);"><strong>1 PhD Position
available: Human Robot Interaction based on Vision</strong></p>
<p style="margin: 10px 0px 0px; padding: 0px; color: rgb(51, 51,
51); font-family: Arial, sans-serif; font-size: 14px; font-style:
normal; font-variant: normal; font-weight: normal; letter-spacing:
normal; line-height: 20px; orphans: auto; text-align: left;
text-indent: 0px; text-transform: none; white-space: normal;
widows: 1; word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);">The key idea is to use
visual cues to understand human intentions during a manual task,
e.g. assembling object, preparing food, etc. By observing a human,
a robot should be able to assist a human similar to a theatre
nurse during a surgery. The robot observers and anticipates the
next step. This needs an understanding of human tasks on a
semantic level, consisting of actions applied to objects. An
internal (semantic) action plan of such a process is needed ant
the robot should be able to localize the current executed action
within the task network and be able to learn new actions be
observation. One approach could be, e.g. to link robot and human
actions to symbols and vice versa.</p>
<p style="margin: 10px 0px 0px; padding: 0px; color: rgb(51, 51,
51); font-family: Arial, sans-serif; font-size: 14px; font-style:
normal; font-variant: normal; font-weight: normal; letter-spacing:
normal; line-height: 20px; orphans: auto; text-align: left;
text-indent: 0px; text-transform: none; white-space: normal;
widows: 1; word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);"><strong>1 PhD Position
available: Semantic 3D Scene understanding</strong></p>
<p style="margin: 10px 0px 0px; padding: 0px; color: rgb(51, 51,
51); font-family: Arial, sans-serif; font-size: 14px; font-style:
normal; font-variant: normal; font-weight: normal; letter-spacing:
normal; line-height: 20px; orphans: auto; text-align: left;
text-indent: 0px; text-transform: none; white-space: normal;
widows: 1; word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);">Object detection in
computer vision has made a significant progress due to the
renaissance of neural networks and deep learning. The drawback of
such approaches are still that huge datasets have to be processed
and it is difficult to add knowledge to the network without
retaining at least some layers of the network. Neural networks are
still a black box and it is hard to extract symbolic knowledge
about the scene from the network. This project will deal with the
question of how semantic knowledge, using Ontologies, Bayesian
networks, etc. about shape, features, structures co-appearances,
to reason about what a robot sees. Eg. A cup is perceived by a
Neural Network hanging from a ceiling which is very likely to be
misclassified. What can semantic knowledge tell the robot about
the usual appearances of cups and what about things hanging from a
ceiling and where does the knowledge come from? Can a knowledge
base trigger also robotic actions if a correctly classified object
does not belong there?</p>
<br>
<br>
More information about how to apply, scholarships, eligibility, etc.
can be found at:<br>
<br>
<a class="moz-txt-link-freetext" href="https://wiki.qut.edu.au/display/cyphy/PhD+Projects+in+Robotics+at+QUT">https://wiki.qut.edu.au/display/cyphy/PhD+Projects+in+Robotics+at+QUT</a><br>
<br>
Applicants can apply only via our online application system.<br>
<br>
<a class="moz-txt-link-freetext" href="http://survey.qut.edu.au/f/183969/8562/">http://survey.qut.edu.au/f/183969/8562/</a><br>
<br>
Questions (no applications!) regarding the two positions mentioned
above can be sent to<br>
<br>
<a class="moz-txt-link-abbreviated" href="mailto:markus.eich@qut.edu.au">markus.eich@qut.edu.au</a> with the subject [PhD]<br>
<div class="moz-signature">-- <br>
<br>
Kind regards,<br>
<br>
Dr. Markus Eich | Research Fellow<br>
ARC Centre of Excellence for Robotic Vision | Science and
Engineering Faculty | Queensland University of Technology<br>
P: +61 7 3138 2348 | E: <a class="moz-txt-link-abbreviated" href="mailto:markus.eich@qut.edu.au">markus.eich@qut.edu.au</a> | W:
<a class="moz-txt-link-abbreviated" href="http://www.roboticvision.org">www.roboticvision.org</a><br>
Gardens Point, G Block 423 | 2 George Street, Brisbane, QLD 4000 |
CRICOS No. 00213J<br>
<img alt="ACRV" src="cid:part1.01030200.08070204@qut.edu.au"
height="80" width="208"><br>
<br>
</div>
</body>
</html>