In this video, we demonstrate our implementation of the Tidy Up task for the Simulation-DSPL competition at RoboCup@Home Japan Open 2020. We only attempted tasks 1 and 2a as described in the rulebook of the World Robot Summit challenge (https://worldrobotsummit.org/wrs2020/…).

Most of the functionality demonstrated in the video is the same as what we use on our physical robot; however, some components used in the simulation differ, and some important ones are missing. In particular, we currently cannot do the following in simulation:

* Execute demonstrated trajectories: in simulation, we only perform randomised motion planning. As can be seen in the video, this sometimes leads to unreasonable trajectories and/or failures.

* Perform image-based object recognition: on the real robot, we use a detection + recognition model trained on real data, but this does not generalise well to the simulated domain. The object detection shown in the video is based purely on point cloud data.

* Perform reliable grasp verification: our grasp verification strategy on the real robot is based on detecting changes in force measurements, but this does not seem to translate well to the simulation. During the official runs, grasp verification is what ultimately led to a complete failure of our robot, as it consistently failed to detect successful grasps.

The video includes side-by-side views of the Gazebo simulation and RViz, where the objects detected by the robot, as well as its chosen grasping pose, can be seen.
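To illustrate what point-cloud-only object detection can look like, here is a minimal sketch that groups 3-D points into object candidates by Euclidean distance. This is a naive O(n^2) stand-in for a proper cluster-extraction step (e.g. PCL's EuclideanClusterExtraction with a kd-tree); the radius value and pure-Python implementation are illustrative, not our actual pipeline:

```python
from collections import deque

def euclidean_cluster(points, radius=0.05):
    """Group 3-D points into clusters: two points end up in the same
    cluster if they are within `radius` of each other, transitively.
    Naive breadth-first version of Euclidean cluster extraction."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue = deque([seed])
        members = [seed]
        while queue:
            i = queue.popleft()
            # Materialise neighbours first, then remove them from the
            # unvisited set so each point is claimed exactly once.
            near = [j for j in unvisited
                    if sum((points[i][k] - points[j][k]) ** 2
                           for k in range(3)) <= radius ** 2]
            for j in near:
                unvisited.discard(j)
                queue.append(j)
                members.append(j)
        clusters.append([points[idx] for idx in members])
    return clusters

# Two nearby points form one candidate object; the far point is its own.
table_points = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (1.0, 0.0, 0.0)]
candidates = euclidean_cluster(table_points, radius=0.05)
```

Each resulting cluster would then feed a grasp-pose estimator; real pipelines also filter clusters by size to reject the table plane and sensor noise.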
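The force-change-based grasp verification mentioned above can be sketched as follows. The threshold and the shape of the force signal are hypothetical and depend on the robot's force/torque interface; this is only meant to show the change-detection idea, not our actual implementation:

```python
import statistics

def grasp_succeeded(pre_close_forces, post_close_forces, threshold=0.5):
    """Return True if the mean force reading rises by more than
    `threshold` after closing the gripper, which suggests an object
    is being held.  Units and threshold are illustrative."""
    baseline = statistics.mean(pre_close_forces)
    after = statistics.mean(post_close_forces)
    return (after - baseline) > threshold

# A held object pushes the post-close readings well above the baseline;
# an empty gripper leaves them roughly unchanged.
held = grasp_succeeded([0.10, 0.12, 0.09], [1.40, 1.50, 1.45])
empty = grasp_succeeded([0.10, 0.10, 0.10], [0.20, 0.20, 0.20])
```

In simulation, the force readings evidently do not show the same step change as on the real hardware, which is why this check kept reporting failed grasps during the official runs.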