
KB Robotics Application

This repository provides the application that combines the different modules within the KB Robotics project.

Installation

After you have cloned this repository, you need to clone the other relevant modules into this project folder:

cd application/

git clone https://git.wur.nl/kb-robotics/object_manipulation_module.git (or use SSH if you prefer)

Also, inside the object_detection/ directory you have to create a directory named models/ that contains the relevant PyTorch models. The easiest way to do so is to open the object-detection-module-lite repository of the kb-robotics group in your browser, click on the models/ directory, download that directory as a zip archive, and unzip it into the object_detection/ directory.
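As a quick sanity check that the models ended up in the right place, you can run a snippet along these lines (the .pt/.pth extensions are an assumption; check the actual file names in the repository):

from pathlib import Path

# Assumed location of the models directory; adjust if your layout differs.
models_dir = Path("object_detection/models")

if not models_dir.is_dir():
    raise FileNotFoundError(f"Expected model directory at {models_dir.resolve()}")

# PyTorch models are typically stored as .pt or .pth files (an assumption here).
model_files = sorted(models_dir.glob("*.pt")) + sorted(models_dir.glob("*.pth"))
print(f"Found {len(model_files)} model file(s):")
for f in model_files:
    print(f" - {f.name}")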

Now create a Python virtual environment (using e.g. Anaconda or venv):

conda create --name robot python=3.9

Activate the virtual environment:

conda activate robot

Check the PyTorch website (pytorch.org) for instructions on installing PyTorch 1.7.1 (this project requires this specific older version). At the time of writing the installation command is as follows, but make sure to verify:

pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
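After installation you can verify from Python that the expected CPU build is active:

import torch
import torchvision

print(torch.__version__)          # expected: 1.7.1+cpu
print(torchvision.__version__)    # expected: 0.8.2+cpu
print(torch.cuda.is_available())  # expected: False for the +cpu build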

Now install the other dependencies:

pip install -r requirements.txt

Install the Intel Realsense driver:

Download and install librealsense: https://github.com/IntelRealSense/librealsense

For retrieving camera data this project builds on code from the visual-pushing-grasping project by Andy Zeng: https://github.com/andyzeng/visual-pushing-grasping. The object_manipulation_module repository includes a C++ executable that streams the camera data via TCP. Build this executable with the following commands:

cd object_manipulation_module/camera/realsense

cmake .

make

If you run into problems while building this C++ executable, it could be that you need to install some additional dependencies first. Note: the packages below are suggestions from experience; they are not taken from any (official) documentation.

sudo apt install libglfw3-dev

sudo apt install libglu1-mesa-dev

Running the code

Camera stream

In order to run any code that uses the Intel Realsense camera, you need to run the realsense C++ executable in the background:

cd object_manipulation_module/camera/realsense/

./realsense

If everything works you should see a screen popping up with the color and depth images of the Realsense camera. If the depth image looks strange (lots of variation in the depth in places where you don't expect it), just close the program, unplug and replug the USB cable of the camera, and restart.
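For reference, the Python side consumes this stream over a plain TCP socket, following the conventions of the visual-pushing-grasping project. The sketch below only illustrates connecting and reading raw bytes; the host and port are assumptions (visual-pushing-grasping uses localhost and port 50000), and the exact wire format (intrinsics, depth scale, image encoding) is defined in the camera module of object_manipulation_module:

import socket

# Assumed defaults from the visual-pushing-grasping project; verify against
# the camera module in object_manipulation_module.
TCP_HOST = "127.0.0.1"
TCP_PORT = 50000

def read_exact(sock, n):
    """Read exactly n bytes from the socket (TCP may deliver data in chunks)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("Camera stream closed unexpectedly")
        buf += chunk
    return buf

with socket.create_connection((TCP_HOST, TCP_PORT)) as sock:
    # The executable streams intrinsics followed by depth/color frames; the
    # exact sizes and layout are project-specific, so we only peek here.
    header = read_exact(sock, 64)
    print(f"Received {len(header)} header bytes from the camera stream")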

Scripts

A number of scripts are included in this repository, mostly for calibrating the camera/robot or testing the calibration.

Calibrate

To run:

python calibrate.py

This script calibrates the camera with respect to the robot, so that positions on the camera image can be translated into positions in the robot coordinate frame. Please check settings.py for calibration settings. For example, the boolean CALIBRATION_COLLECT_DATA determines whether new data is collected or whether stored data from a previous run is re-used. The value of CALIBRATION_DATA_DIR determines where the resulting calibration values are stored. The calibration script is an adapted version of the calibration script developed in the visual-pushing-grasping project by Andy Zeng (git repo: https://github.com/andyzeng/visual-pushing-grasping).
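The result of the calibration is essentially a rigid transform (a rotation and a translation) from the camera frame to the robot base frame. Applying it to a 3D point measured by the camera looks roughly like the sketch below; the file names and array shapes are assumptions for illustration, as the actual storage format is determined by settings.py and the calibration script:

import numpy as np

# Hypothetical file names; the real names/locations are determined by
# CALIBRATION_DATA_DIR in settings.py.
R = np.loadtxt("calibration/camera2robot_rotation.txt")     # 3x3 rotation
t = np.loadtxt("calibration/camera2robot_translation.txt")  # 3-vector

def camera_to_robot(p_cam):
    """Map a 3D point from the camera frame to the robot base frame."""
    p_cam = np.asarray(p_cam, dtype=float)
    return R @ p_cam + t

# Example: a point 0.5 m in front of the camera.
print(camera_to_robot([0.0, 0.0, 0.5]))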

Click to Pick

To run:

python click_to_pick.py

This script starts an interactive session in which the user can click on a point in an image taken from the camera to make the robot pick an object at that location and drop it off at another location. Instructions:

  • Run the script, and a figure showing the camera stream will pop up
  • Make sure the figure window is selected, and press the key b on your keyboard to enable the mode that allows you to click on the location of a box (dropoff position) in the image. Once you have clicked on a point, this location is stored.
  • Press o to enable the mode that allows you to click on the location of an object. When you click, the robot will pick up the object at this location and drop it off at the previously chosen box location.
  • Press c to close the application.
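The interaction pattern used here (switching modes on key presses, picking coordinates on mouse clicks) can be reproduced with matplotlib's event API. The sketch below only illustrates that pattern with a placeholder image; it is not the actual script:

import matplotlib.pyplot as plt
import numpy as np

mode = {"current": None}  # 'b' = select box, 'o' = select object
fig, ax = plt.subplots()
ax.imshow(np.zeros((480, 640, 3)))  # placeholder for the camera image

def on_key(event):
    if event.key in ("b", "o"):
        mode["current"] = event.key
        print(f"Mode set to '{event.key}'")
    elif event.key == "c":
        plt.close(fig)

def on_click(event):
    if event.inaxes is None or mode["current"] is None:
        return
    print(f"Clicked ({event.xdata:.0f}, {event.ydata:.0f}) in mode '{mode['current']}'")
    # The real script would translate this pixel to a robot pose here.

fig.canvas.mpl_connect("key_press_event", on_key)
fig.canvas.mpl_connect("button_press_event", on_click)
plt.show()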

Keyboard Controller

To run:

python keyboard_controller.py

This script starts an interactive session in which the user can type commands to control the UR5 robot using the keyboard. For most purposes, controlling the robot this way is very inconvenient; it is mostly useful for testing the connection to the robot and trying out specific movements.
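To illustrate the pattern, a minimal command loop is sketched below. The key bindings, step size, and robot call are purely hypothetical (the actual robot interface is provided by object_manipulation_module); this only shows the shape of such a controller:

# Minimal command-loop sketch. The UR5 interface hinted at below is
# hypothetical; the real robot class lives in object_manipulation_module.
STEP = 0.01  # meters per keystroke (an assumed step size)

# Map typed commands to Cartesian offsets (dx, dy, dz) in the robot frame.
MOVES = {
    "w": (STEP, 0, 0), "s": (-STEP, 0, 0),
    "a": (0, STEP, 0), "d": (0, -STEP, 0),
    "u": (0, 0, STEP), "j": (0, 0, -STEP),
}

while True:
    cmd = input("command (w/s/a/d/u/j, q to quit): ").strip().lower()
    if cmd == "q":
        break
    if cmd in MOVES:
        dx, dy, dz = MOVES[cmd]
        print(f"Would move robot by ({dx}, {dy}, {dz}) m")
        # e.g. robot.move_relative(dx, dy, dz)  # hypothetical call
    else:
        print("Unknown command")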