Theses

We offer these current topics directly for Bachelor and Master students at TU Darmstadt. Note that we cannot provide funding for any of these thesis projects.

We highly recommend that you take either our robotics and machine learning lectures (Robot Learning, Statistical Machine Learning) or those of our colleagues (Grundlagen der Robotik, Probabilistic Graphical Models and/or Deep Learning). Even more important to us is that you take both Robot Learning: Integrated Project, Part 1 (Literature Review and Simulation Studies) and Part 2 (Evaluation and Submission to a Conference) before doing a thesis with us.

In addition, we are usually happy to devise new topics on request to suit the abilities of excellent students. Please DIRECTLY contact the thesis advisor if you are interested in one of these topics. When you contact the advisor, it would be nice if you could mention (1) WHY you are interested in the topic (dreams, parts of the problem, etc.), and (2) WHAT makes you special for the project (e.g., class work, project experience, special programming or math skills, prior work, etc.). Supplementary materials (CV, grades, etc.) are highly appreciated. Of course, such materials are not mandatory, but they help the advisor to see whether the topic is too easy, just about right, or too hard for you.

FOR FB16+FB18 STUDENTS: If you are a student from another department at TU Darmstadt (e.g., ME, EE, IST), you need an additional formal supervisor who officially issues the topic. Please do not try to arrange an advisor from your home department by yourself, but let your thesis supervisor get in touch with that person instead!

Topic 1: Learning human models for safe human-robot handovers

Scope: Master’s thesis
Advisor: Georgia Chalvatzaki, Puze Liu, Davide Tateo
Start: ASAP
Topic: In this thesis, we want to study ways of approximating the safety manifold of the human when interacting with a robot, particularly during object handovers. While most works define a hard-coded workspace representing the safety manifold of the human, such definitions do not transfer to most real-world interactions. We will record and explore the use of human-human demonstrations of handover actions to encode human motion and learn the human-body constraint manifold, and its evolution during the interaction. These constraints and the human motion model can be used for constructing a safe action space to explore how the robot should approach and pass over objects to the human receiver in a model-based learning setting. The initial tasks of the master thesis will include: i. literature review of human-robot handovers, ii. recording of human-human demonstrations, iii. development of a simulation environment with human-motion replay, iv. learning the constrained human workspace.
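As a very rough illustration of what "learning the constrained human workspace" could mean in its simplest form, one could estimate a safety margin around recorded human body points; the function names, synthetic data, and the 0.15 m margin below are all hypothetical stand-ins, not the method the thesis will develop:

```python
import numpy as np

def min_distance_to_human(point, human_points):
    """Smallest Euclidean distance from a candidate robot waypoint
    to any recorded human body point."""
    return np.linalg.norm(human_points - point, axis=1).min()

def is_safe(point, human_points, margin=0.15):
    """Naive 'safety manifold' test: a waypoint is safe if it keeps at
    least `margin` metres from every recorded human point."""
    return min_distance_to_human(point, human_points) >= margin

# Synthetic stand-in for human-human demonstration data (N x 3 positions).
rng = np.random.default_rng(0)
human_demo = rng.normal(loc=[0.5, 0.0, 1.0], scale=0.05, size=(100, 3))

print(is_safe(np.array([1.5, 0.0, 1.0]), human_demo))  # far from the human -> True
print(is_safe(np.array([0.5, 0.0, 1.0]), human_demo))  # inside the point cloud -> False
```

A learned constraint manifold would replace this hard distance threshold with a model fitted to the demonstrations and its evolution over the interaction.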

Highly motivated students can apply by sending an e-mail expressing your interest to georgia.chalvatzaki@tu-darmstadt.de, attaching your CV and transcripts.

Minimum knowledge

  • Good knowledge of Python and/or C++;
  • Good knowledge of robotics
  • Good knowledge of Reinforcement Learning;

Preferred knowledge

  • Experience with recent deep RL methods;
  • Experience with deep learning libraries;
  • Experience with Pybullet simulator and Gym environment;

References

  1. Vogt, David, et al. “A system for learning continuous human-robot interactions from human-human demonstrations.” 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017.
  2. Liu, Changliu, and Masayoshi Tomizuka. “Safe exploration: Addressing various uncertainty levels in human robot interactions.” 2015 American Control Conference (ACC). IEEE, 2015.
  3. Calinon, Sylvain, Irene Sardellitti, and Darwin G. Caldwell. “Learning-based control strategy for safe human-robot interaction exploiting task and robot redundancies.” 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2010.
  4. Sutanto, Giovanni, et al. “Learning Equality Constraints for Motion Planning on Manifolds.” arXiv preprint arXiv:2009.11852 (2020).

Topic 2: Curriculum Adversarial Reinforcement Learning

Scope: Master’s thesis
Advisor: Carlo D’Eramo, Georgia Chalvatzaki
Start: ASAP
Topic: Adversarial Reinforcement Learning (RL) is a technique to let a protagonist RL agent obtain robust skills by training an adversary RL agent that tries to hinder the learning of the protagonist. The interaction between the protagonist and the adversary is modeled as a zero-sum game, which essentially means that the adversary has the same “strength” as the protagonist during the learning process. In some problems, especially when exploration is an issue, the opposition of the adversary can strongly obstruct the protagonist, thus leading to unsatisfactory performance. In this project, we will investigate a curriculum RL approach to address the exploration issue in adversarial RL by adapting the strength of the adversary according to the learning progress of the protagonist. The synergy of curriculum and adversarial RL will eventually allow the protagonist to obtain robust skills while not ending up in suboptimal behavior.
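The core scheduling idea can be sketched in a few lines; the class, the return-based progress signal, and all thresholds below are hypothetical illustrations, not a prescribed design. The adversary's influence (a coefficient between 0 and 1) grows only while the protagonist's recent returns stay above a target, and shrinks again if performance collapses:

```python
from collections import deque

class AdversaryCurriculum:
    """Scales the adversary's influence in [0, 1] based on the
    protagonist's learning progress (here: mean of recent returns)."""

    def __init__(self, window=10, target_return=100.0, step=0.05):
        self.returns = deque(maxlen=window)
        self.target_return = target_return
        self.step = step
        self.strength = 0.0  # start with a harmless adversary

    def update(self, episode_return):
        self.returns.append(episode_return)
        mean_return = sum(self.returns) / len(self.returns)
        # Strengthen the adversary only once the protagonist does well;
        # weaken it again if the protagonist starts to struggle.
        if mean_return >= self.target_return:
            self.strength = min(1.0, self.strength + self.step)
        else:
            self.strength = max(0.0, self.strength - self.step)
        return self.strength

curriculum = AdversaryCurriculum(target_return=100.0)
for ret in [50, 80, 120, 150, 200]:  # protagonist improving over episodes
    s = curriculum.update(ret)
# adversary strength has grown along with the protagonist's returns
print(s)
```

In a training loop, `strength` could, for instance, scale the magnitude of the adversary's perturbations before they are applied to the environment.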

Minimum knowledge

  • Good knowledge of Python;
  • Good knowledge of Reinforcement Learning;

Preferred knowledge

  • Experience with recent deep RL methods;
  • Experience with Pybullet simulator and Gym environment;

References

  1. Pinto, Lerrel, et al. “Robust adversarial reinforcement learning.” International Conference on Machine Learning. PMLR, 2017.
  2. Lin, Yen-Chen, et al. “Tactics of Adversarial Attack on Deep Reinforcement Learning Agents.” IJCAI. 2017.

Topic 3: Discovering neural parts in objects with invertible NNs for robot grasping

Scope: Master’s thesis
Advisor: Georgia Chalvatzaki, Despoina Paschalidou

In this thesis, we will investigate the use of 3D primitive representations of objects using Invertible Neural Networks (INNs). Through INNs we can learn the implicit surface function of an object and its mesh. Apart from extracting the object’s shape, we can parse the object into semantically interpretable parts. Our main focus will be to segment object parts that are semantically related to object affordances. Moreover, the implicit representation of each primitive can allow us to compute the grasp configuration of the object directly, enabling grasp planning. Interested students are expected to have experience with Computer Vision and Deep Learning, and to know how to program in Python using DL libraries such as PyTorch.
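The property of INNs that this line of work relies on, exact invertibility, can be illustrated with a single additive coupling layer. This is a toy NumPy sketch of the generic building block, not the Neural Parts architecture itself; all names are illustrative:

```python
import numpy as np

class AdditiveCoupling:
    """Toy invertible layer: split the input in two halves and shift one
    half by a function of the other, so the inverse is exact by design."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(dim // 2, dim // 2))

    def _shift(self, x1):
        # Any function works here; it never needs to be inverted itself.
        return np.tanh(x1 @ self.W)

    def forward(self, x):
        x1, x2 = np.split(x, 2, axis=-1)
        return np.concatenate([x1, x2 + self._shift(x1)], axis=-1)

    def inverse(self, y):
        y1, y2 = np.split(y, 2, axis=-1)
        return np.concatenate([y1, y2 - self._shift(y1)], axis=-1)

layer = AdditiveCoupling(dim=6)
x = np.random.default_rng(1).normal(size=(4, 6))
recovered = layer.inverse(layer.forward(x))
print(np.allclose(x, recovered))  # True: the mapping is exactly invertible
```

Stacking such layers yields a bijection between a simple template shape and a learned part surface, which is what makes implicit per-part representations tractable.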

The thesis will be co-supervised by Despoina Paschalidou (Ph.D. candidate at the Max Planck Institute for Intelligent Systems and the Max Planck ETH Center for Learning Systems). Highly motivated students can apply by sending an e-mail expressing your interest to georgia.chalvatzaki@tu-darmstadt.de, attaching your CV and transcripts.

References:

  1. Paschalidou, Despoina, Angelos Katharopoulos, Andreas Geiger, and Sanja Fidler. “Neural Parts: Learning expressive 3D shape abstractions with invertible neural networks.” arXiv preprint arXiv:2103.10429 (2021).
  2. Karunratanakul, Korrawe, Jinlong Yang, Yan Zhang, Michael Black, Krikamol Muandet, and Siyu Tang. “Grasping Field: Learning Implicit Representations for Human Grasps.” arXiv preprint arXiv:2008.04451 (2020).
  3. Chao, Yu-Wei, Wei Yang, Yu Xiang, Pavlo Molchanov, Ankur Handa, Jonathan Tremblay, Yashraj S. Narang et al. “DexYCB: A Benchmark for Capturing Hand Grasping of Objects.” arXiv preprint arXiv:2104.04631 (2021).
  4. Do, Thanh-Toan, Anh Nguyen, and Ian Reid. “Affordancenet: An end-to-end deep learning approach for object affordance detection.” In 2018 IEEE international conference on robotics and automation (ICRA), pp. 5882-5889. IEEE, 2018.

Topic 4: Cross-platform Benchmark of Robot Grasp Planning

Scope: Master’s thesis
Advisor: Georgia Chalvatzaki, Daniel Leidner

Grasp planning is one of the most challenging tasks in robot manipulation. Apart from perception ambiguity, the grasp robustness and the successful execution rely heavily on the dynamics of the robotic hands. The student is expected to research and develop benchmarking environments and evaluation metrics for grasp planning. The development in simulation environments such as Isaac Sim and Gazebo will allow us to integrate and evaluate different robotic hands for grasping a variety of everyday objects. We will evaluate grasp performance using different metrics (e.g., object-category-wise, affordance-wise, etc.), and finally, test the sim2real gap when transferring such approaches from popular simulators to real robots. The student will have the chance to work with different robotic hands (Justin hand, PAL TIAGo hands, Robotiq gripper, Panda gripper, etc.) and is expected to transfer the results to at least two robots (Rollin’ Justin at DLR and TIAGo++ at TU Darmstadt). The results of this thesis are intended to be made public (both the data and the benchmarking framework) for the benefit of the robotics community. As this thesis is offered in collaboration with the DLR Institute of Robotics and Mechatronics in Oberpfaffenhofen near Munich, the student is expected to work at DLR for a period of 8 months for the thesis. On-site work at the premises of DLR is expected but cannot be guaranteed due to COVID-19 restrictions; a large part of the project can be carried out remotely.
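The kind of category-wise metric the benchmark would aggregate can be sketched as a per-category success rate over logged grasp trials; the trial-record layout and field names below are hypothetical, chosen only to illustrate the aggregation:

```python
from collections import defaultdict

def success_rate_per_category(trials):
    """Aggregate grasp trials into a success rate per object category.
    Each trial is a dict with 'category' and 'success' fields."""
    counts = defaultdict(lambda: [0, 0])  # category -> [successes, total]
    for t in trials:
        counts[t["category"]][0] += int(t["success"])
        counts[t["category"]][1] += 1
    return {cat: s / n for cat, (s, n) in counts.items()}

# Hypothetical trial log, e.g. collected from two simulated hands.
trials = [
    {"category": "mug", "success": True},
    {"category": "mug", "success": False},
    {"category": "bottle", "success": True},
    {"category": "bottle", "success": True},
]
print(success_rate_per_category(trials))  # {'mug': 0.5, 'bottle': 1.0}
```

A real benchmark would extend the trial record with the hand, simulator, and grasp planner used, so the same aggregation can compare platforms and quantify the sim2real gap.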

Highly motivated students can apply by sending an e-mail expressing your interest to daniel.leidner@dlr.de and georgia.chalvatzaki@tu-darmstadt.de, attaching your CV and transcripts.

References:

  1. Collins, Jack, Shelvin Chand, Anthony Vanderkop, and David Howard. “A Review of Physics Simulators for Robotic Applications.” IEEE Access (2021).
  2. Bekiroglu, Yasemin, et al. “Benchmarking protocol for grasp planning algorithms.” IEEE Robotics and Automation Letters 5.2 (2019): 315-322.