# dlc_flytracker
Python package for tracking the 3d kinematics of animals (currently only flies), built around [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut)
![???image](/paths/to/example_image_mosquito tracking_result.png)
```
conda activate dlc-flytracker
```
### Experimental data and folder structure
Before using the package, you will need video recordings of moving animals (e.g. flying insects). Each event should have been recorded by multiple synchronized cameras, ideally positioned at 90° angles from each other to maximize the amount of original information each camera records. Your cameras should not have moved (or only by a minimal amount) during the entire recording period.

You should have recorded the data necessary to generate a DLT calibration (i.e. 2d pixel coordinates of points with known 3d positions, for all cameras). If the cameras moved slightly between experiments, you will need an additional calibration; we recommend recording calibration data once a day. DLT coefficients can be generated using `camera.calib.estim_dlt_coef` or `camera.calib.estim_mdlt_coef` (see [here](tests.test_calib.test_dlt3d.py)).

Although it is possible to use various folder structures, we recommend saving all the recordings made with one camera in a dedicated folder. Each recording usually consists of a folder containing all the recorded images of a particular event, whose name indicates the date and time of the recording (e.g. `cam1_20221029_115612`).
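As an illustration, a minimal calibration sketch (the point coordinates below are made up, and the exact signature of `estim_dlt_coef` is an assumption; see the linked test for real usage):

```python
import numpy as np
from camera.calib import estim_dlt_coef

# 3d coordinates of calibration points with known positions (n_points x 3);
# DLT needs at least 6 points that do not all lie in one plane
xyz = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                [0.0, 0.0, 0.1], [0.1, 0.1, 0.0], [0.1, 0.0, 0.1],
                [0.0, 0.1, 0.1], [0.1, 0.1, 0.1]])

# 2d pixel coordinates of the same points as seen by one camera (n_points x 2)
uv_cam1 = np.array([[512.0, 384.0], [620.3, 380.1], [515.2, 270.4],
                    [508.9, 390.2], [623.8, 266.7], [617.1, 386.5],
                    [512.4, 276.9], [620.9, 273.2]])

# repeat for every camera; store one set of 11 DLT coefficients per camera
dlt_coefs_cam1 = estim_dlt_coef(xyz, uv_cam1)
```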
## Usage
Here the main steps introduced earlier are explained in more detail and with examples.

There are two ways of processing video recordings:
- By following the steps below (the preferred way if it is your first time). Note that doing so will generate a yaml file (saved at each step) that can then be used to run the full analysis automatically. Example scripts can be found [here](/tests/test_process_batch.py).
- By using the function `process.batch_processing(yaml_path)`, you can automatically run the processing of all your recordings.
Settings for the batch processing will be read from a yaml file such as [this one](/data/mosquito_escapes/preprocessing_settings-sample_binned.yaml).
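A minimal sketch of this second route (assuming the function is importable as written above; the settings file is the sample linked here):

```python
import process

# runs pre-processing, DLC analysis and skeleton fitting for every recording
# listed in the settings file
process.batch_processing('data/mosquito_escapes/preprocessing_settings-sample_binned.yaml')
```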
### Initialization
The processing of your recordings needs to be done using the [`BatchProcessing`](process/batch_processing) class. You can initialize it using:
```python
> from process.batch_processing import BatchProcessing
> img_process = BatchProcessing.from_directory_by_names(recording_paths, framerate, save_path)
```
OR alternatively:
```python
> ...
```
Here, `recording_paths` is a list of paths (one per camera) to the directories that contain the recording(s), i.e. the folder(s) with all recorded images of an event from one camera.
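For example, following the folder naming convention from the previous section (the paths, framerate and save location below are hypothetical):

```python
from process.batch_processing import BatchProcessing

# one directory per camera, each holding all images of the same event
recording_paths = ['recordings/cam1/cam1_20221029_115612',
                   'recordings/cam2/cam2_20221029_115612',
                   'recordings/cam3/cam3_20221029_115612']

# 12500 = recording framerate (fps), 'processed/' = save_path
img_process = BatchProcessing.from_directory_by_names(recording_paths, 12500, 'processed/')
```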
### Pre-processing of video recordings
These steps generate the .avi videos that will later be analysed by DeepLabCut (see the example in the [Jupyter notebook for pre-processing](Notebook1-preprocessing_recordings.ipynb)).
- **Image enhancement:** *(Optional)* `img_process.enhance_images(fn_name, factor)`, with `fn_name` the name of the image characteristic to enhance ('contrast', 'brightness', 'sharpness' or 'color'). Enhancing the recorded images makes it easier to label body features later with DLC.
- **Cropping of the recordings:** Reduces the size of the images that will be analysed by DLC.
  - **Cropping around a fixed position:**
    - TODO: Add a way to manually define the position around which cropping should be done, and save it in a .csv file.
    - **Crop all frames:** `img_process.crop_images(height, width)`
  - **Dynamic cropping around a moving animal:**
    - **Manual tracking:**
      - **Manually track a single animal per recording:** `img_process.manual_tracking(dlt_path)`
      - **Crop all frames:** `img_process.crop_images(height, width)`
    - **Automatic tracking:**
      - **Select a subsample:** `img_process.sample_images(step_frame)`
      - **Do 2d tracking (blob detection) on images:** `img_process.track2d(from_fn_name='sample')`
The following step can make labelling the body features with DLC easier.
- **Stitching of multiple camera views:** `img_process.stitch_images()`
  Will stitch all views together for each frame. By stitching all views of the same animal together, you allow DeepLabCut to learn correlations between the different views, thereby improving the overall tracking accuracy.
  See section [???](???) for more details.
- **Save each recording as an .avi:** `img_process.save_avi()`
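Putting these steps together, a pre-processing run could look like the sketch below (the enhancement factor, subsampling step and crop size are hypothetical values):

```python
# optional: boost contrast so body features are easier to label in DLC
img_process.enhance_images('contrast', 1.5)

# dynamic cropping via automatic tracking: 2d-track a subsample of frames,
# then crop every frame around the detected animal
img_process.sample_images(10)                # keep every 10th frame
img_process.track2d(from_fn_name='sample')   # blob detection on the subsample
img_process.crop_images(200, 200)            # crop to height x width pixels

# stitch all camera views into one frame and write one .avi per recording
img_process.stitch_images()
img_process.save_avi()
```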
### Select or define an animal skeleton
An animal skeleton is defined as one body module plus one or several (pairs of) limb modules.
For example, a [fly skeleton](/skeleton_fitter/animals/fly.py) can consist of an [insect body](/skeleton_fitter/modules/bodies/insect_body_slim.py) (three jointed segments for the head, thorax and abdomen, plus one segment between the two wing hinges) and two [wings](/skeleton_fitter/modules/limbs/insect_wings_flat.py) (here defined as flat). Each limb module consists of symmetrical limbs (one left and one right limb).
You can select an already defined animal skeleton (see [/skeleton_fitter/animals](/skeleton_fitter/animals)).
**OR**
New animal skeletons can be defined by combining body and limb modules (see [/skeleton_fitter/modules](/skeleton_fitter/modules)). See the [example animal skeleton](/skeleton_fitter/animals/???), [example body module](/skeleton_fitter/modules/bodies) and [example limbs module](/skeleton_fitter/modules/limbs/???) for more information on how to build your own skeletons and modules.
![???figure that shows how a skeleton is build](/paths/to/image.png)
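For illustration, selecting the predefined fly skeleton could be as simple as importing its module (the import path mirrors the repository layout linked above and is an assumption):

```python
# predefined fly skeleton: slim insect body module + flat insect wing modules
from skeleton_fitter.animals import fly
```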
You will need to use the exact same naming convention for the body joints in DLC.
??? TODO: Make a script that generates a DLC config.yaml file from the previously defined dlc_flytracker skeleton (and the number of cameras).
### Training a network in DeepLabCut
*Using the DLC user interface.*
You will need to train a DLC network that is capable of accurately tracking the joints of your skeleton in all recorded views. This requires manually labelling around 100-200 images. Tutorials on how to use DeepLabCut are available [here](https://deeplabcut.github.io/DeepLabCut/docs/intro.html).

This step is very important: the final tracking accuracy depends heavily on the quality of the training dataset. See the example [config.yaml](/data/mosquito_escapes/config.yaml) for how DLC settings can be tweaked to get good tracking results with pre-cropped and stitched images.
### Run DLC analysis on the full dataset
Once you are happy with the DLC network you trained on your data, you can run the analysis of your entire dataset with:
`img_process.analyse_dlc(dlc_cfg_path, shuffle, trainingsetindex, batch_size, model_name)`
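For example (all values below are hypothetical; `shuffle` and `trainingsetindex` follow the usual DeepLabCut training-set bookkeeping):

```python
# run the trained DLC network on every pre-processed .avi
img_process.analyse_dlc(dlc_cfg_path='dlc_project/config.yaml',
                        shuffle=1,
                        trainingsetindex=0,
                        batch_size=8,
                        model_name='mosquito_model')
```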
### Load the DLC results in dlc_flytracker
You can convert tracking results from DLC to the data format used in this package using:
`img_process.load_dlc(model_name, tracker_method, dlt_path)`
This conversion consists of:
- **Reading** the results files from DeepLabCut.
- **Unscrambling**, by checking that 2d points have been detected in the correct view. If not, the points are re-assigned to what is thought to be their correct view.
- **Reversing the pre-processing**, by converting the 2d coordinates tracked with DLC back into the original 2d coordinates of the recorded images (i.e. before any cropping or stitching).
- **Reconstructing the 3d coordinates** of the skeleton joints from their 2d coordinates.
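A sketch of this call (the argument values are hypothetical; `dlt_path` should point to the DLT calibration coefficients generated earlier):

```python
# convert the DLC output of the chosen model/tracker into dlc_flytracker's
# format, undo cropping/stitching and triangulate the 3d joint positions
img_process.load_dlc(model_name='mosquito_model',
                     tracker_method='ellipse',
                     dlt_path='calib/dlt_coefs.csv')
```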
### Kinematic tracking
Estimate kinematic parameters by fitting the 3d skeleton to the 2d/3d coordinates of the joints using:
`img_process.fit_skeleton(animal_name, model_name, fit_method, opt_method, dlt_path, multiprocessing=True)`

You will need to choose an optimization method (e.g. `opt_method` = 'nelder-mead', 'powell' or 'leastsq') and a fitting method (`fit_method` = '2d' or '3d') to decide whether the skeleton is fitted to the 2d or the 3d coordinates of the joints. It is also possible to fit the body to a hybrid skeleton that uses the average position of the limbs to improve the accuracy of the kinematic tracking of the body (`fit_method = '3d_hybrid'`).
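For example (hypothetical values; `animal_name` refers to the fly skeleton described above):

```python
# fit the fly skeleton to the reconstructed joints, using the hybrid method
# for the body and least-squares optimization, parallelized over recordings
img_process.fit_skeleton(animal_name='fly',
                         model_name='mosquito_model',
                         fit_method='3d_hybrid',
                         opt_method='leastsq',
                         dlt_path='calib/dlt_coefs.csv',
                         multiprocessing=True)
```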
??? TODO: Document how filtering/smoothing is done by default and how to modify it + implement filtering of outliers (example of Cas data)
To run tests locally, type:
```
> pytest dlc_flytracker
```
## License
This project is licensed under the [LGPL](LICENCE.txt).
The LGPL allows developers and companies to use and integrate a software component released under the LGPL into their own (even proprietary) software, without being required by the terms of a strong copyleft license to release the source code of their own components. However, any developer who modifies an LGPL-covered component is required to make their modified version available under the same LGPL license.