Human Support Robot

Robots make the world happy

TU Wien, V4R – Vision For Robotics.

Last updated: Oct. 17, 2022

We make robots see

Members

  • Markus Vincze, Assistant Professor, TU Wien
  • Dominik Bauer, Post-doctoral researcher, TU Wien
  • Jean-Baptiste Weibel, Post-doctoral researcher, TU Wien

Abstract

We make robots see. That is, we devise machine vision methods to perceive structures and objects so that robots can act in and learn from everyday situations. This paves the way to automated manufacturing and to robots performing household tasks. Our solutions develop the situated approach, integrating task, robot, and perception knowledge. Core expertise includes safe navigation, 2D and 3D attention, object modelling, object class detection, affordance-based grasping, and manipulation of objects in relation to their functions.

Research Topics

Human in the Loop – HiL

Within this research area of the V4R research group, we investigate different Human-Robot Interaction (HRI) scenarios. We are especially interested in enabling long-term HRI and conduct research from a user-centered as well as a robot/cognition-centered perspective. We study joint-action scenarios, the usability and acceptance of service robots in domestic and industrial contexts, adaptive behaviour coordination, and educational robotics.

Semantic Scene – SCENE

Within this research area of the V4R research group, we focus on reasoning about scenes and environments from different perspectives, with the goal of better perceiving, representing, and understanding a robot's surrounding world to enable more advanced behavior. Research topics include scene reconstruction, semantic scene parsing, and the efficient representation of semantic knowledge and its exploitation in robotics, especially for intelligent robot navigation and robotic interaction with the environment.

Object Perception and Manipulation – OBJECT

Within this research area of the V4R research group, we investigate research questions related to objects. Our work focuses on object modelling (multi-view reconstruction, from RGB-D as well as stereo), object recognition, object detection, object classification, object affordances, and the manipulation of objects, such as grasping known and unknown objects.
The goal of our research is to empower autonomous robots in their perception and manipulation tasks.

References

  1. Bauer, Dominik, Timothy Patten, and Markus Vincze. “SporeAgent: Reinforced Scene-level Plausibility for Object Pose Refinement.” Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2022. [Link]
  2. Thalhammer, Stefan, et al. “PyraPose: Feature pyramids for fast and accurate object pose estimation under domain shift.” 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. [Link]
  3. Bauer, Dominik, Timothy Patten, and Markus Vincze. “VeREFINE: Integrating object pose verification with physics-guided iterative refinement.” IEEE Robotics and Automation Letters 5.3 (2020): 4289-4296. [Link]
  4. Langer, Edith, Timothy Patten, and Markus Vincze. “Robust and efficient object change detection by combining global semantic information and local geometric verification.” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020. [Link]
  5. Park, Kiru, Timothy Patten, and Markus Vincze. “Pix2Pose: Pixel-wise coordinate regression of objects for 6D pose estimation.” Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019. [Link]