Commit da97d589 authored by Cribellier, Antoine

Merge branch 'development' into 'master'

Debugging, improvement of readme and files restructuring

See merge request cribe001/dlc_flytracker!15
parents a5142699 bdc48770
# dlc_flytracker
Python package for tracking the 3d kinematics of animals (currently only flies), built around [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut).
<!-- ![TODO! image](/paths/to/example_image_mosquito tracking_result.png) -->
## Introduction
This package allows the estimation of animal kinematic parameters (positions and angles) by fitting a 3d skeleton to 2d tracking results from DeepLabCut (DLC).
The usual steps (see more details below in 'Usage'):
- **Pre-processing of video recordings (at least 2 cameras):** image enhancement, dynamic cropping around the moving animal and stitching of multiple camera views. These cropped and stitched images are then saved in a single video per recorded event. The videos will later be analysed using DLC. This preprocessing step allows a significant increase in the tracking quality of DLC (see our [method paper](https://TODO!path_toward_method_paper_to_be_published)).
- **Selecting or defining an animal 3d skeleton** that will be used for the tracking. You can use a previously made skeleton (e.g. fly), or make your own from body and limbs modules (see the [section below](TODO!)).
- **Training of a DeepLabCut (DLC) network** *(Outside this package)***:** usually done using the DLC user interface. You will most likely need to manually label 100-200 images. See the [section below](TODO!) regarding how to correctly choose and tweak DLC settings in order to get good tracking results with stitched images.
- **Running DLC analysis on the full dataset** to track the position of the skeleton joints on all the images of your recordings.
- **Loading DLC results in dlc_flytracker** to convert the DLC results into the .csv files that will be used in this package.
- **Reconstructing 3d coordinates of the skeleton joints** from their 2d coordinates tracked with DLC.
...@@ -17,17 +17,17 @@ The usual steps (see more details below in 'Usage'):
- **Post-processing of the kinematic parameters**: removing outliers, applying a low-pass filter, interpolating missing data points and smoothing time series.
## Citation
If you use this code, we kindly ask that you please cite our method paper: [Cribellier et al., to be published](https://TODO!path_toward_method_paper_to_be_published).
And of course, please cite the DeepLabCut articles: [Mathis et al, 2018](https://www.nature.com/articles/s41593-018-0209-y) and [Nath, Mathis et al, 2019](https://doi.org/10.1038/s41596-019-0176-0).
## Getting Started
### Installation and prerequisites
`dlc_flytracker` requires Python version `3.8`. It depends on the Python packages `deeplabcut`, `tensorflow` and other common packages like `numpy`, `scipy`, `matplotlib`, which are specified in [`requirements.txt`](requirements.txt).
To guarantee that you have a Python 3.8 installation, regardless of your system's Python version, it is most convenient to create an Anaconda environment in which to install the dependencies of `dlc_flytracker`. [Miniconda](https://docs.conda.io/en/latest/miniconda.html) is a minimal distribution of Anaconda. Be aware that it is only free for academic use; commercial users need to pay a licence fee. Install it (or another Anaconda distribution) with the installer; Windows, Linux and macOS versions are available.
Create and activate a new environment:
```
conda create -n "dlc-flytracker" python=3.8
conda activate dlc-flytracker
```
`dlc_flytracker` transitively depends on the HDF5 library. Install it system-wide with e.g. `sudo apt install libhdf5-dev` on Ubuntu, or install it in your currently active Anaconda environment with `conda install -c anaconda hdf5`.
Then, install all dependencies from [PyPI](https://pypi.org/) with `pip`:
```
pip install -r requirements.txt
```

...@@ -79,14 +79,14 @@ With `recording_paths` being a list of paths (a path per camera) towards directo
These steps allow generating the .avi videos that will later be analysed by DeepLabCut (see the example in the [Jupyter notebook for pre-processing](Notebook1-preprocessing_recordings.ipynb)).
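As a minimal sketch of this initialization (the paths, framerate and `threshold_nb_diff_char` value below are taken from the example test scripts and are illustrative only; the import path is an assumption to adapt to your installation):

```
from batch_processing import BatchProcessing  # import path is an assumption, adapt it to your installation

# One directory of images per camera (illustrative paths from the example dataset)
recording_paths = ['../data/mosquito_escapes/cam1', '../data/mosquito_escapes/cam2', '../data/mosquito_escapes/cam3']
save_path = '../data/mosquito_escapes/_Process'
framerate = 12500  # frame rate of the recordings (frames per second)

img_process = BatchProcessing.from_directory_by_names(recording_paths, framerate, save_path,
                                                      threshold_nb_diff_char=2)
```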
<!-- ![TODO! figure showing pre-processing steps](/paths/to/image.png) -->
- **Image enhancement:** *(Optional)* `img_process.enhance_images(fn_name, factor)` with `fn_name` the name of the image characteristic to enhance (can be: 'contrast', 'brightness', 'sharpness' or 'color').
By enhancing the recorded images, you will make it easier to label body features later with DLC.
- **Cropping of the recordings:** To reduce the size of the images that will be analysed by DLC.
- **Cropping around a fixed position:**
<!-- - TODO: Add a way to manually define the position around which cropping should be done, and save in .csv file. -->
- **Crop all frames:** `img_process.crop_images(height, width)`
- **Dynamic cropping around moving animal:**
- **Manual tracking:**
...@@ -101,7 +101,7 @@ By enhancing the recorded images, you will make it easier to label body features
- **Rotate one or multiple views:** *(Optional)* `img_process.rotate_images(degrees, camn_to_process=[camn])` This step can help with labeling the body features using DLC.
- **Stitching of multiple camera views:** `img_process.stitch_images()`
Will stitch all views together for each frame. By stitching all views of the same animal together, you allow DeepLabCut to learn correlations between the different views, and therefore improve the overall tracking accuracy. See section [TODO!](TODO!) for more details.
- **Save each recording in an .avi**: `img_process.save_avi()`
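Put together, a typical pre-processing pipeline looks like the sketch below (a hedged illustration: the argument values are examples, the `rec_names_to_process` argument used in the test scripts is omitted for brevity, and the calls mirror the example scripts in this repository):

```
# Enhance contrast to make body features easier to label in DLC (factor value is illustrative)
img_process.enhance_images('contrast', 1.5)

# Dynamically crop a 200 x 200 px window around the tracked animal
img_process.dynamic_crop_images(200, 200, from_tracks=True, show_plot=False, delete_previous=True)

# Optionally rotate one view (here camera 2) to ease labelling
img_process.rotate_images(270, from_fn_name='crop', camn_to_process=[2])

# Stitch all camera views of each frame into a single image, then save one .avi per recording
img_process.stitch_images(from_fn_name='crop', delete_previous=True)
img_process.save_avi(from_fn_name='stitch', delete_previous=True)
```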
...@@ -113,24 +113,26 @@ You can select an already defined animal skeleton (see [/skeleton_fitter/animals
**OR**
New animal skeletons can be defined by combining body and limb modules (see [/skeleton_fitter/modules](/skeleton_fitter/modules)). See [example animal skeleton](/skeleton_fitter/animals/TODO!), [example body module](/skeleton_fitter/modules/bodies) and [example limbs module](/skeleton_fitter/modules/limbs/TODO!) for more information on how to build your own skeleton and modules.
<!-- ![TODO! figure that shows how a skeleton is built](/paths/to/image.png) -->
<!--
TODO! Add details of exactly how a new skeleton is defined (what needs to be done, functions to write)
TODO! Explain how the contour of a stereotypical wing can be loaded in a .yaml file using update_wings_init_skeleton3d.py
TODO! For a fly: Initial body orientation is defined along the x axis. Yaw, pitch and roll are then computed one after the other, in the reference frame of the body.
-->
**in DeepLabCut:**
You will need to use the exact same naming convention for the body joints in DLC.
<!-- TODO! Make a script that generates a DLC config.yaml file from the previously defined dlc_flytracker skeleton (and number of cameras). -->
### Training a network in DeepLabCut
*Using the DLC user interface.*
You will need to train a DLC network that is capable of accurately tracking the joints of your skeleton in all recorded views. This will require manually labeling around 100-200 images. Tutorials on how to use DeepLabCut are available [here](https://deeplabcut.github.io/DeepLabCut/docs/intro.html).
This step is very important: the final tracking accuracy will heavily depend on how good the training dataset is. See the example [config.yaml](/data/mosquito_escapes/config.yaml) for how DLC settings can be tweaked in order to get good tracking results with pre-cropped and stitched images.
...@@ -155,7 +157,7 @@ Estimate kinematic parameters by fitting of the 3d skeleton to the 2d/3d coordin
You will need to choose an optimization method (e.g. `opt_method` = 'nelder-mead', 'powell' or 'leastsq').
And you will need to select a fitting method (e.g. `fit_method` = '2d' or '3d') to decide whether the skeleton will be fitted to the 2d or 3d coordinates of the joints. It is also possible to fit the body to a hybrid skeleton that uses the average position of the limbs to improve the accuracy of the kinematic tracking of the body (`fit_method = '3d_hybrid'`).
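The exact fitting call is not spelled out in this excerpt; purely as a hypothetical illustration of combining these two options (the `fit_skeleton` method name is an assumption, not a confirmed API of this package):

```
# Hypothetical sketch: only the option values come from this README, the method name is assumed
img_process.fit_skeleton(opt_method='leastsq',    # 'nelder-mead', 'powell' or 'leastsq'
                         fit_method='3d_hybrid')  # '2d', '3d' or '3d_hybrid'
```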
<!-- TODO! Document how filtering/smoothing is done by default and how to modify it + implement filtering of outliers (example of Cas data) -->
## Notes
### Running the tests
......
import copy
from dataclasses import dataclass
from typing import List
import numpy as np
from scipy.interpolate import interp1d
from camera import calib
# Code for calibration mirrors mounted on gimbals + virtual cameras
# TODO: Read calibration data (April tag position in 3D vs size and position in pixels vs orientation of mirror (in steps of stepper motors)
# TODO: Generate calibration
# TODO: Write/read calibration
# TODO: link calibration of multiple cameras system with this calibration (same reference frame/origin)
# TODO: Add function that say something about focus shift?
@dataclass
class CameraCalibMultiFocuses:
""" Class for keeping track of the intrinsic parameters of a camera for multiple distances of focus plan.
"""
def __init__(self, xps: List[float], yps: List[float], fxs: List[float], fys: List[float], distances: List[float]):
"""
Args:
xps: List of x coordinates of the camera principal point (pixels), one per focus-plane distance
yps: List of y coordinates of the camera principal point (pixels), one per focus-plane distance
fxs: List of camera focal lengths along x (pixels), one per focus-plane distance
fys: List of camera focal lengths along y (pixels), one per focus-plane distance
distances: List of distances between the camera and the focus plane (m)
"""
# Coordinates of principal points (if a camera sensor is N*M pixels, xp ~ N/2 and yp ~ M/2)
self.xps = xps # type: List[float]
self.yps = yps # type: List[float]
# Focal lengths of the camera
self.fxs = fxs # type: List[float]
self.fys = fys # type: List[float]
self.distances = distances # type: List[float]
assert len(distances) == len(xps) and len(distances) == len(fxs)
assert len(xps) == len(yps) and len(fxs) == len(fys)
def get_intrinsic(self, distance):
""" Get intrinsic parameters of the camera for a given focus distance.
Args:
distance: Distance between the camera and the focus plane (m)
Returns:
xp: x coordinate of camera principal point (pixels)
yp: y coordinate of camera principal point (pixels)
fx: camera focal length along x (pixels)
fy: camera focal length along y (pixels)
"""
if distance in self.distances:
ind = self.distances.index(distance)
return self.xps[ind], self.yps[ind], self.fxs[ind], self.fys[ind]
else:
# TODO check if linear interpolation is correct (are the calibration curves linear?)
fxp = interp1d(self.distances, self.xps, kind='linear')
fyp = interp1d(self.distances, self.yps, kind='linear')
ffx = interp1d(self.distances, self.fxs, kind='linear')
ffy = interp1d(self.distances, self.fys, kind='linear')
return fxp(distance), fyp(distance), ffx(distance), ffy(distance)
@staticmethod
def from_dlt_coefs(dlt_coefs: List[List[float]], x_focus: List[float]):
""" Creates a CameraCalibMultiFocuses dataclass from
Args:
dlt_coefs: List of DLT coefficients of one camera for multiple focus distances (nb_distance*11).
x_focus: List of x coordinates of the focus plane
Returns:
CameraCalibMultiFocuses
"""
assert len(dlt_coefs) == len(x_focus)
# Generate intrinsic and extrinsic parameters from dlt_coefs
coords, quats = np.zeros((len(x_focus), 3)), np.zeros((len(x_focus), 4))
xps, yps, fxs, fys = np.zeros(len(x_focus)), np.zeros(len(x_focus)), np.zeros(len(x_focus)), np.zeros(len(x_focus))
for i in range(0, len(x_focus)):
xps[i], yps[i], fxs[i], fys[i] = calib.intrinsic_from_dlt_coef(dlt_coefs[i])
coords[i], quats[i] = calib.extrinsic_from_dlt_coef(dlt_coefs[i], (xps[i], yps[i], fxs[i], fys[i]))
x_cams = [coord[0] for coord in coords]
# y_cams = [coord[1] for coord in coords]
# z_cams = [coord[2] for coord in coords]
#
# # To check that camera didn't move during the calibration
# np.testing.assert_almost_equal(0.0, np.nanstd(x_cams), decimal=4)
# np.testing.assert_almost_equal(0.0, np.nanstd(y_cams), decimal=4)
# np.testing.assert_almost_equal(0.0, np.nanstd(z_cams), decimal=4)
# Generate CameraCalibMultiFocuses class
distances = list(np.asarray(x_focus) - np.asarray(x_cams))
return CameraCalibMultiFocuses(xps, yps, fxs, fys, distances)
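A minimal usage sketch of this class (the numbers below are illustrative and are not calibration data from this repository): intrinsic parameters are stored for a few focus distances and, when queried at an intermediate distance, are linearly interpolated.

```
# Illustrative values only: intrinsics measured at three focus-plane distances (m)
cam_calib = CameraCalibMultiFocuses(xps=[1020.0, 1024.0, 1028.0], yps=[1022.0, 1024.0, 1026.0],
                                    fxs=[2000.0, 2100.0, 2200.0], fys=[2000.0, 2100.0, 2200.0],
                                    distances=[0.5, 1.0, 1.5])

xp, yp, fx, fy = cam_calib.get_intrinsic(1.0)   # exact match: the stored values are returned
xp, yp, fx, fy = cam_calib.get_intrinsic(0.75)  # in between: values are linearly interpolated
```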
This diff is collapsed.
...@@ -5,40 +5,6 @@ from pyquaternion import Quaternion
from scipy.spatial.transform import Rotation
def intersect_line_plane(pt_line: List[float], vect_line: List[float], pt_plane: List[float], vect_plane: List[float],
epsilon: float = 1e-6) -> List[float]:
"""
Args:
pt_line: A point on the line ([x, y, z]).
vect_line: Any vector along the line ([u, v, w]) .
pt_plane: A point on the plane ([x, y, z]).
vect_plane: A normal vector ([u, v, w]) defining the plane direction (does not need to be normalized).
epsilon: Tolerance below which the line is considered parallel to the plane.
Returns:
coords: [x, y, z] coordinates of the intersection.
"""
pt_line = np.asarray(pt_line)
vect_line = np.asarray(vect_line)
pt_plane = np.asarray(pt_plane)
vect_plane = np.asarray(vect_plane)
dot = np.dot(vect_plane, vect_line)
assert abs(dot) > epsilon, ('The segment is parallel to the plane.')
fac = -np.dot(vect_plane, pt_line + pt_plane) / dot
vect_line = fac * vect_line
coords = pt_line + vect_line
return coords
def normalize(vect: List[float]) -> List[float]:
norm = np.linalg.norm(vect)
...@@ -48,62 +14,6 @@ def normalize(vect: List[float]) -> List[float]:
return vect / norm
def to_inhomogeneus(X: np.ndarray) -> np.ndarray:
X = np.asarray(X)
if X.ndim > 1:
raise ValueError("x must be one-dimensional.")
return (X / X[-1])[:-1]
def to_homogeneus(X: np.ndarray) -> np.ndarray:
X = np.asarray(X)
if X.ndim > 1:
raise ValueError("X must be one-dimensional.")
return np.hstack([X, 1])
def is_orthogonal(M, return_error: bool = False):
""" Check if matrix is orthogonal
Args:
M: a square matrix (NxN)
return_error: Whether to return the rms error for orthogonality
Return:
Whether the matrix is found to be orthogonal
OR
The rms error for orthogonality
"""
assert all(len(row) == len(M) for row in M), "Matrix needs to be square (NxN)"
if not return_error:
rtol = 1e-05
atol = 1e-08
if not np.allclose(np.dot(M, M.conj().transpose()), np.eye(len(M)), rtol=rtol, atol=atol):
# Matrix is not orthogonal (i.e. its transpose should be its inverse)
return False
elif not np.isclose(np.linalg.det(M), 1.0, rtol=rtol, atol=atol):
# Matrix is not special orthogonal (i.e. its determinant must be +1.0)
return False
else:
return True
else:
eye_diff = np.dot(M, M.conj().transpose()) - np.eye(len(M))
ortho_error = np.sqrt(np.sum(eye_diff ** 2))
return ortho_error
def get_rotation_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.])) -> Rotation:
""" Get rotation matrix between two vectors using scipy.spatial.transform.Rotation.
...@@ -132,50 +42,6 @@ def get_rotation_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0.,
return R
# def get_rotation_quaternion_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.])) -> Quaternion:
# """ Get rotation matrix between two vectors using scipy.spatial.transform.Rotation.
#
# Args:
# vec1: vector #1 (e.g. observed in initial frame) (N, 3)
# vec2: vector #2 (e.g. observed in another frame) (N, 3)
#
# Returns:
# Quaternion
# """
#
# return Quaternion(quat0123_to_1230(get_rotation_btw_vectors(vec1, vec2).as_quat()))
#
#
# def get_rotation_matrix_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.])) -> np.ndarray:
# """ Get rotation matrix between two vectors using scipy.spatial.transform.Rotation.
#
# Args:
# vec1: vector #1 (e.g. observed in initial frame) (N, 3)
# vec2: vector #2 (e.g. observed in another frame) (N, 3)
#
# Returns:
# Rotation matrix
# """
#
# return get_rotation_btw_vectors(vec1, vec2).as_matrix()
def get_rotation_angles_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.]),
in_degrees: bool = True) -> np.ndarray:
""" Get rotation angles between two vectors using scipy.spatial.transform.Rotation.
Args:
vec1: vector #1 (e.g. observed in initial frame) (N, 3)
vec2: vector #2 (e.g. observed in another frame) (N, 3)
in_degrees: Whether to give angles in degree. Will be in radians if false. True by default.
Returns:
Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes.
"""
return get_rotation_btw_vectors(vec1, vec2).as_euler('xyz', degrees=in_degrees)
def get_rotation_angle_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.]),
in_degrees: bool = True) -> np.ndarray:
""" Get rotation angle between two vectors.
...@@ -268,47 +134,3 @@ def quat0123_to_1230(quat: np.ndarray) -> np.ndarray:
assert len(quat) == 4
return np.append(quat[1:], quat[0])
# TODO remove following code?
# def find_closest_orthogonal_matrix(A):
# '''
# Find closest orthogonal matrix to *A* using iterative method.
#
# From: https://gist.github.com/dokato/7a997b2a94a0ec6384a5fd0e91e45f8b
# Bases on the code from REMOVE_SOURCE_LEAKAGE function from OSL Matlab package.
#
# Args:
# A (numpy.array): array shaped k, n, where k is number of channels, n - data points
#
# Returns:
# L (numpy.array): orthogonalized matrix with amplitudes preserved
#
# '''
#
# MAX_ITER = 2000
#
# TOLERANCE = np.max((1, np.max(A.shape) * np.linalg.svd(A.T, False, False)[0])) * np.finfo(A.dtype).eps # TODO
# reldiff = lambda a, b: 2 * abs(a - b) / (abs(a) + abs(b))
# convergence = lambda rho, prev_rho: reldiff(rho, prev_rho) <= TOLERANCE
#
# A_b = A.conj()
# d = np.sqrt(np.sum(A * A_b, axis=1))
#
# rhos = np.zeros(MAX_ITER)
#
# for i in range(MAX_ITER):
# scA = A.T * d
#
# u, s, vh = np.linalg.svd(scA, False)
#
# V = np.dot(u, vh)
#
# d = np.sum(A_b * V.T, axis=1)
#
# L = (V * d).T
# E = A - L
# rhos[i] = np.sqrt(np.sum(E * E.conj()))
# if i > 0 and convergence(rhos[i], rhos[i - 1]):
# break
#
# return L
...@@ -87,7 +87,7 @@ class ImageSequence:
return self._names
def get_names_from_numbers(self, numbers: List[str]) -> List[int]:
""" Retrieves the names of images from their numbers. Assumes that the list of numbers is sorted.
Args:
numbers: The numbers of images.
......
...@@ -60,8 +60,8 @@ rec_names_to_process = 'all'
# ------------------------------------------------------------------------------------------------------------------
# Crop all frames and save in save_path + '/_Cropped'
img_process.dynamic_crop_images(200, 200, from_tracks=True, show_plot=False,
rec_names_to_process=rec_names_to_process, delete_previous=True)
# ------------------------------------------------------------------------------------------------------------------
# Enhance image contrast from path '/_Cropped'
......
This diff is collapsed.
import copy
import itertools
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from pyquaternion import Quaternion
from camera import objects, calib, calib_gimbals
def test_calib_multi_focus():
show_plot = False
ind_to_plot = 1 # Index of focus distance to plot
in_degrees = True
with_cst_focal_length = False # Will adjust focal length (zoom in/out) to keep all the 3d points in the full image
# # Without noise
# sigma = 0
# max_mean_err_px = 10e-5
# With noise (std = 3 pixels)
sigma = 3
max_mean_err_px = 6
pixel_size = 10e-5
width_sensor = 0.2
# ------------------------------------------------------------------------------------------------------------------
# Camera
coord_principal_pt = [1024, 1024]  # So image size is 2048 x 2048 px
# distance_camera_to_focus_point / width_field_view_around_focus_point * width_sensor (m)
focal_length = 1 / 0.1 * width_sensor / pixel_size # (pixels) for an angle of view = 5.7°
coords = [-0.1, 0, 0]
quat = Quaternion(axis=[0, 1, 0], degrees=90)
cam = objects.Camera(coord_principal_pt, [focal_length, focal_length], coords, quat, in_degrees=in_degrees)
cam.sensor_size = np.array(coord_principal_pt) * 2
# ------------------------------------------------------------------------------------------------------------------
# Generate fake calibration (dlt_coefs vs various focus distances)
x_range = np.linspace(0.5, 2.5, 10)
x_focus_offset = 0.02
yz = list(itertools.combinations_with_replacement([-0.2, -0.07, 0, 0.12, 0.2], 2))
yz = [list(coords) for coords in yz]
# yz = [yz[i] for i in np.random.choice(len(yz), 4)] # To reduce number of 2d points to min needed for dlt_coefs
quat_back_to_init = cam.quat.inverse
dlt_coefs_vs_focus, repro_errs = [], []
for x_focus in x_range:
if not with_cst_focal_length:
# Correct focal length of the camera to fit all the 3d points in angle of view
cam.f = x_focus / np.max(yz) * width_sensor / pixel_size
xyz = np.array([[np.nan, np.nan, np.nan]])
for factor_offset in [-1, 1]:  # At each focus distance, offset the yz plane along the x axis (towards/away from cam)
x_offsetted = x_focus + factor_offset*x_focus_offset
xyz = np.append(xyz, [[x_offsetted, coords[0], coords[1]] for coords in yz], axis=0)
xyz = xyz[1:]
if show_plot and x_focus == x_range[ind_to_plot]:
fig = plt.figure()
ax = Axes3D(fig)
for coords in xyz:
ax.scatter(coords[0], coords[1], coords[2], marker='*', color='k', s=10)
ax.scatter(cam.x, cam.y, cam.z, marker='+', color='r', s=40)
ax.quiver(cam.x, cam.y, cam.z, cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2,
color='r')
ax.set_xlabel('x [m]')
ax.set_ylabel('y [m]')
ax.set_zlabel('z [m]')
fig2 = plt.figure()
ax2 = Axes3D(fig2)
uv = []
for coords in xyz:
obj = objects.Object3D(coords, [0, 0, 0], in_degrees=in_degrees)
obj.transpose(- cam.coords)
obj.rotate(quat_back_to_init)
# Coordinates in image plane: with the Z axis normal to the image plane, the initial normal vector needs to be [0, 0, 1]
cur_uv = - cam.f / obj.z * np.array([obj.x, obj.y]) # (m)
cur_uv = cur_uv * pixel_size # (pixels)
# cur_uv = - cur_uv # rotate image to avoid it being upside down
uv.append(cur_uv)
if show_plot and x_focus == x_range[ind_to_plot]:
ax2.scatter(obj.x, obj.y, obj.z, marker='*', color='k', s=10)
ax2.scatter(cur_uv[0], cur_uv[1], - cam.f * pixel_size, marker='*', color='g', s=10)
ax2.plot([obj.x, cur_uv[0]], [obj.y, cur_uv[1]], [obj.z, - cam.f * pixel_size], marker='', color='g')
if show_plot and x_focus == x_range[ind_to_plot]:
cam_p = copy.deepcopy(cam)
cam_p.transpose(- cam_p.coords)
cam_p.rotate(quat_back_to_init)
ax2.scatter(cam_p.x, cam_p.y, cam_p.z, marker='+', color='r', s=40)
ax2.quiver(cam_p.x, cam_p.y, cam_p.z,
cam_p.norm_vect[0], cam_p.norm_vect[1], cam_p.norm_vect[2], length=0.2, color='r')
ax2.set_xlabel('x [m]')
ax2.set_ylabel('y [m]')
ax2.set_zlabel('z [m]')
plt.show()
# --------------------------------------------------------------------------------------------------------------
# Convert to pixel coordinates
# pixel_size = np.amax(uv) / np.min(cam.sensor_size)) / 2
uv = np.array(uv) / pixel_size + cam.sensor_size / 2 # scale meters to pixels
# Add noise to 2d coordinates
uv = uv + np.reshape(np.random.normal(0, sigma, len(uv) * 2), [len(uv), 2])
# --------------------------------------------------------------------------------------------------------------
# Generate DLT coefficients for each camera
# dlt_coef, repro_err = calib.estim_dlt_coef(xyz, uv, 3)  # Will not produce dlt_coef from an orthogonal matrix
dlt_coef, repro_err = calib.estim_mdlt_coef(xyz, uv, show_plot=(show_plot and x_focus == x_range[ind_to_plot]))
assert repro_err <= max_mean_err_px
dlt_coefs_vs_focus.append(dlt_coef)
repro_errs.append(repro_err)
cam_calib = calib_gimbals.CameraCalibMultiFocuses.from_dlt_coefs(dlt_coefs_vs_focus, x_range)
if show_plot:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(repro_errs, marker='+', color='g')
ax.set_xlabel('# iteration')
ax.set_ylabel('reprojection error')
ax.set_yscale('log')
fig2, axs = plt.subplots(4)
fig2.suptitle('Intrinsic parameters vs focus distances')
axs[0].plot(x_range, cam_calib.xps)
axs[0].set_ylabel('x principal point [px]')
axs[1].plot(x_range, cam_calib.yps)
axs[1].set_ylabel('y principal point [px]')
axs[2].plot(x_range, cam_calib.fxs)
axs[2].set_ylabel('x focal length [px]')
axs[3].plot(x_range, cam_calib.fys)
axs[3].set_xlabel('x focus distance [m]')
axs[3].set_ylabel('y focal length [px]')
plt.show()
# ------------------------------------------------------------------------------------------------------------------
# Compare dlt_coef gotten from intrinsic parameters interpolated from calib to coef gotten from uv/xyz
distance = 0.86
cam.f = distance / np.max(yz) * width_sensor / pixel_size
xyz = np.array([[np.nan, np.nan, np.nan]])
for factor_offset in [-1, 1]: # At each focus distance, offset plan with yz along x axis (towards/away from cam)
x_offsetted = distance + factor_offset * x_focus_offset
xyz = np.append(xyz, [[x_offsetted, coords[0], coords[1]] for coords in yz], axis=0)
xyz = xyz[1:]
uv = []
for coords in xyz:
obj = objects.Object3D(coords, [0, 0, 0], in_degrees=in_degrees)
obj.transpose(- cam.coords)
obj.rotate(quat_back_to_init)
cur_uv = - cam.f / obj.z * np.array([obj.x, obj.y]) # (m)
cur_uv = cur_uv * pixel_size # (pixels)
uv.append(cur_uv)
uv = np.array(uv) / pixel_size + cam.sensor_size / 2 # scale meters to pixels
uv = uv + np.reshape(np.random.normal(0, sigma, len(uv) * 2), [len(uv), 2])  # Noise
dlt_coef, repro_err = calib.estim_mdlt_coef(xyz, uv, show_plot=show_plot)
assert repro_err <= max_mean_err_px
xp, yp, fx, fy = cam_calib.get_intrinsic(distance)
dlt_coef2 = calib.dlt_coef_from_intrinsic_extrinsic(xp, yp, fx, fy, cam.coords, cam.quat)
uv2 = calib.find2d(1, dlt_coef2, xyz)
if show_plot:
fig = plt.figure() # Diff are due to differences in focal lengths (1st is set line 157 and 2nd is interpolated)
ax = fig.add_subplot(111)
for i in range(0, len(uv)):
ax.scatter(uv[i][0], uv[i][1], marker='+', color='g')
ax.scatter(uv2[i][0], uv2[i][1], marker='+', color='r')
ax.set_xlabel('x [pixels]')
ax.set_ylabel('y [pixels]')
plt.show()
def test_calib_gimbal_system():
# TODO write test functions to test calibration of gimbal system
show_plot = False
ind_to_plot = 0
in_degrees = True
norm_vect_init = [0., 0., 1.]
# # Without noise
# sigma = 0
# max_mean_err_px = 10e-5
# decimal = 4
# With noise (std = 2 pixels)
sigma = 2
max_mean_err_px = 6
decimal = -1
pixel_size = 10e-5
width_sensor = 0.2
# ------------------------------------------------------------------------------------------------------------------
# Camera
coord_principal_pt = [1024, 1024]  # So image size is 2048 x 2048 px
# distance_camera_to_focus_point / width_field_view_around_focus_point * width_sensor (m)
focal_length = 1 / 0.1 * width_sensor / pixel_size # (pixels) for an angle of view = 5.7°
coords = [-.8, 0, 1.7]
quat = Quaternion()
cam = objects.Camera(coord_principal_pt, [focal_length, focal_length], coords, quat, in_degrees=in_degrees)
cam.sensor_size = np.array(coord_principal_pt) * 2
# ------------------------------------------------------------------------------------------------------------------
# Mirror and virtual camera
coords = [-.8, 0, 2.]
quat = Quaternion(axis=[0, 1, 0], degrees=145)
mirror = objects.RoundMirror(coords, quat, radius=0.1, in_degrees=in_degrees)
virt_cam = objects.VirtualCamera(cam, mirror, in_degrees=in_degrees)
# ------------------------------------------------------------------------------------------------------------------
# April tags
uv_atags = list(itertools.combinations_with_replacement([-0.1, -0.05, 0, 0.05, 0.1], 2))
uv_atags = [list(coords) for coords in uv_atags]
# uv_atags = [uv_april[i] for i in np.random.choice(len(uv_atags), 4)] # To reduce number of 2d points to min needed for dlt_coefs
atags = objects.MultiObjects(np.append(
[objects.Object3D([uv[0], uv[1], 0.], [0., 0., 0.], in_degrees=in_degrees) for uv in uv_atags],
[objects.Object3D([uv[0], .1, uv[1]-.1], [0., 0., 0.], in_degrees=in_degrees) for uv in uv_atags]))
# Coordinates and orientation of april tags groups
coord_atags = [[-1., 0., 0.], [-.5, 0., 0.], [0., 0., 0.], [.5, 0., 0.], [1., 0., 0.], # Tags on the floor
[.5, .5, 0.], [1., .5, 0.], [1., 1., 0.], [1.2, -.5, .2], [1.2, -.5, .7], [1.2, -1., .2],
[1.2, 0., .2], [1.2, 0., .7], [1.2, 0., 1.2], [1.2, 0., 1.7], [1.2, 0., 2.2]] # Tags on the wall
quat_floor, quat_wall = Quaternion(), Quaternion(axis=[0, 1, 0], degrees=-90)
quat_atags = [quat_floor, quat_floor, quat_floor, quat_floor, quat_floor, quat_floor, quat_floor, quat_floor,
quat_wall, quat_wall, quat_wall, quat_wall, quat_wall, quat_wall, quat_wall, quat_wall]
# ------------------------------------------------------------------------------------------------------------------
if show_plot:
fig = plt.figure()
ax = Axes3D(fig)
for ind, _ in enumerate(quat_atags):
new_atags = copy.deepcopy(atags)
new_atags.rotate(quat_atags[ind])
new_atags.transpose(coord_atags[ind])
if show_plot:
for obj in new_atags.objects:
ax.scatter(obj.x, obj.y, obj.z, marker='+', color='k')
norm_vect = quat_atags[ind].rotate(norm_vect_init)
ax.scatter(coord_atags[ind][0], coord_atags[ind][1], coord_atags[ind][2], marker='+', color='k', s=40)
ax.quiver(coord_atags[ind][0], coord_atags[ind][1], coord_atags[ind][2],
norm_vect[0], norm_vect[1], norm_vect[2], length=0.1, color='k')
if show_plot:
ax.scatter(cam.x, cam.y, cam.z, marker='+', color='g', s=40)
ax.quiver(cam.x, cam.y, cam.z,
cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2, color='g')
ax.text(cam.x, cam.y, cam.z, ' cam')
ax.scatter(mirror.x, mirror.y, mirror.z, marker='+', color='b', s=40)
ax.quiver(mirror.x, mirror.y, mirror.z,
mirror.norm_vect[0], mirror.norm_vect[1], mirror.norm_vect[2], length=0.2, color='b')
ax.text(mirror.x, mirror.y, mirror.z, ' mirror')
ax.scatter(virt_cam.x, virt_cam.y, virt_cam.z, marker='+', color='r', s=40)
ax.quiver(virt_cam.x, virt_cam.y, virt_cam.z,
virt_cam.norm_vect[0], virt_cam.norm_vect[1], virt_cam.norm_vect[2], length=0.2, color='r')
ax.text(virt_cam.x, virt_cam.y, virt_cam.z, ' virtual cam')
ax.set_xlabel('x [m]')
ax.set_ylabel('y [m]')
ax.set_zlabel('z [m]')
plt.ylim(ax.get_xlim())
plt.show()
# ------------------------------------------------------------------------------------------------------------------
#
dlt_coefs = np.zeros((len(quat_atags), 11))
for ind, _ in enumerate(quat_atags):
new_atags = copy.deepcopy(atags)
new_atags.rotate(quat_atags[ind])
new_atags.transpose(coord_atags[ind])
virt_cam.point_towards(coord_atags[ind]) # Point virtual camera towards the tags (move the mirror)
quat_back_to_init = virt_cam.quat.inverse
# Simulate 2d points (virtual camera) from 3d coordinates
uv, xyz = [], []
for atag in new_atags.objects:
if len(xyz) == 0: xyz = np.array([atag.coords])
else: xyz = np.append(xyz, np.array([atag.coords]), axis=0)
cur_atag = copy.deepcopy(atag)
cur_atag.transpose(- virt_cam.coords)
cur_atag.rotate(quat_back_to_init)
# Coordinates in image plane: with the Z axis normal to the image plane, the initial normal vector needs to be [0, 0, 1]
cur_uv = - cam.f / cur_atag.z * np.array([cur_atag.x, cur_atag.y]) # (m)
cur_uv = cur_uv * pixel_size # (pixels)
# cur_uv = - cur_uv # rotate image to avoid it being upside down
uv.append(cur_uv)
# Convert to pixel coordinates
# pixel_size = np.amax(uv) / np.min(cam.sensor_size)) / 2
uv = np.array(uv) / pixel_size + cam.sensor_size / 2 # scale meters to pixels
# Add noise to 2d coordinates
uv = uv + np.reshape(np.random.normal(0, sigma, len(uv) * 2), [len(uv), 2])
# Generate DLT coefficients for each virtual camera
# dlt_coef, repro_err = calib.estim_dlt_coef(xyz, uv, 3)  # Will not produce dlt_coef from an orthogonal matrix
dlt_coef, repro_err = calib.estim_mdlt_coef(xyz, uv, show_plot=(show_plot and ind == ind_to_plot))
assert repro_err <= max_mean_err_px
# Estimate 2d points (virtual camera) from 3d coordinates using new dlt_coef
new_uv = calib.find2d(1, dlt_coef, xyz)
new_uv2 = calib.find2d(1, virt_cam.dlt_coef, xyz)
np.testing.assert_almost_equal(new_uv, new_uv2, decimal=decimal)
# Use simulated 2d points (+ noise) and 3d to estimate new orientation of virtual camera (+ mirror)
# By repeating previous step for all apriltag groups:
# Generate a calibration that correlate mirror orientation (quat?) to virtual camera orientation
# Use the calibration (How many points in april tag group is enough?)
# Write hovering algo that look through the space and detect the april tag group (limited by mirror size and focal length of camera)
dlt_coefs[ind] = dlt_coef
# TODO: finish writing test
assert 1 == 0
def test_virtual_cameras():
# TODO write test functions to test virtual camera when the mirror is moving, etc ...
assert 1 == 0
def main():
test_calib_multi_focus()
test_calib_gimbal_system()
test_virtual_cameras()
if __name__ == "__main__":
main()
import copy
from pyquaternion import Quaternion
import itertools
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from camera import calib, objects
def test_pinhole_cameras():
show_plot = False
norm_vect_init = [0., 0., 1.]
# Without noise
sigma = 0
max_mean_err_px = 10e-5
decimal = 4
# # With noise (std = 1 pixel)
# sigma = 1
# max_mean_err_px = 3
# decimal = -1
pixel_size = 10e-5
# Camera #1
coord_principal_pt = [1024, 1024]  # So image size is 2048 x 2048 px
# distance_camera_to_focus_point / width_field_view_around_focus_point * width_sensor (m)
focal_length = 1/0.5 * 0.02 / pixel_size # (pixels) for an angle of view = 28°
coord = [0, -1, 0]
quat = Quaternion(axis=[1, 0, 0], degrees=-90) * Quaternion(axis=[0, 0, 1], degrees=180)
cam1 = objects.Camera(coord_principal_pt, [focal_length, focal_length], coord, quat)
cam1.sensor_size = np.array(coord_principal_pt) * 2
# Camera #2
coord_principal_pt = [512, 512]
focal_length = 1.3/0.5 * 0.02 / pixel_size # (pixels) for an angle of view = 21°
coord = [0.75, -0.75, 0.75]
quat = Quaternion(axis=[0, 0, 1], degrees=45) * Quaternion(axis=[1, 0, 0], degrees=-135)
cam2 = objects.Camera(coord_principal_pt, [focal_length, focal_length], coord, quat)
cam2.sensor_size = np.array(coord_principal_pt) * 2
# Camera #3
coord_principal_pt = [1024, 512]
focal_length = 2/0.5 * 0.02 / pixel_size # (pixels) for an angle of view = 14°
coord = [2, 0, 0]
quat = Quaternion(axis=[0, 1, 0], degrees=-90)
cam3 = objects.Camera(coord_principal_pt, [focal_length, focal_length], coord, quat)
cam3.sensor_size = np.array(coord_principal_pt) * 2
# ------------------------------------------------------------------------------------------------------------------
cams = objects.MultiCameras([cam1, cam2, cam3])
# ------------------------------------------------------------------------------------------------------------------
# Generate pixel coordinates of a set of 3d points for each camera
xyz = list(itertools.combinations_with_replacement([-0.45, -0.23, 0, 0.23, 0.45], 3))
xyz = [list(coord) for coord in xyz]
# xyz = [xyz[i] for i in np.random.choice(len(xyz), 6)] # To reduce number of 3d points to min needed for dlt_coefs
if show_plot:
fig = plt.figure()
ax = Axes3D(fig)
for coord in xyz:
ax.scatter(coord[0], coord[1], coord[2], marker='*', color='k', s=10)
for cam in cams.cameras:
ax.scatter(cam.x, cam.y, cam.z, marker='+', color='r', s=40)
ax.quiver(cam.x, cam.y, cam.z, cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2, color='r')
ax.set_xlabel('x [m]')
ax.set_ylabel('y [m]')
ax.set_zlabel('z [m]')
plt.show()
dlt_coefs, dlt_coefs2 = [], []
for cam in cams.cameras:
quat_back_to_init = cam.quat.inverse
if show_plot:
fig2 = plt.figure()
ax2 = Axes3D(fig2)
uv = []
for coord in xyz:
obj = objects.Object3D(coord, [0, 0, 0], in_degrees=True)
obj.transpose(- cam.coords)
obj.rotate(quat_back_to_init)
# Coordinates in image plane: with the Z axis normal to the image plane, the initial normal vector needs to be [0, 0, 1]
cur_uv = - cam.f / obj.z * np.array([obj.x, obj.y]) # (m)
cur_uv = cur_uv * pixel_size # (pixels)
# cur_uv = - cur_uv # rotate image to avoid it being upside down
uv.append(cur_uv)
if show_plot:
ax2.scatter(obj.x, obj.y, obj.z, marker='*', color='k', s=10)
ax2.scatter(cur_uv[0], cur_uv[1], - cam.f * pixel_size, marker='*', color='g', s=10)
ax2.plot([obj.x, cur_uv[0]], [obj.y, cur_uv[1]], [obj.z, - cam.f * pixel_size], marker='', color='g')
if show_plot:
cam_p = copy.deepcopy(cam)
cam_p.transpose(- cam_p.coords)
cam_p.rotate(quat_back_to_init)
ax2.scatter(cam_p.x, cam_p.y, cam_p.z, marker='+', color='r', s=40)
ax2.quiver(cam_p.x, cam_p.y, cam_p.z,
cam_p.norm_vect[0], cam_p.norm_vect[1], cam_p.norm_vect[2], length=0.2, color='r')
ax2.set_xlabel('x [m]')
ax2.set_ylabel('y [m]')
ax2.set_zlabel('z [m]')
plt.show()
# --------------------------------------------------------------------------------------------------------------
# Convert to pixel coordinates
# pixel_size = np.amax(uv) / np.min(cam.sensor_size)) / 2
uv = np.array(uv) / pixel_size + cam.sensor_size / 2 # scale meters to pixels
# Add noise to 2d coordinates
uv = uv + np.reshape(np.random.normal(0, sigma, len(uv)*2), [len(uv), 2])
# --------------------------------------------------------------------------------------------------------------
# Generate DLT coefficients for each camera
# Using only known object points and their images points
# dlt_coef, repro_err = calib.estim_dlt_coef(xyz, uv, 3)  # Will not produce dlt_coef from an orthogonal matrix
dlt_coef, repro_err = calib.estim_mdlt_coef(xyz, uv, show_plot=show_plot)
assert repro_err <= max_mean_err_px
dlt_coefs.append(dlt_coef)
# Using known camera intrinsic parameters + known object points and their images points
coord, quat, repro_err = calib.extrinsic_from_intrinsic_and_coordinates(cam.xp, cam.yp, cam.fx, cam.fy, xyz, uv, show_plot=show_plot)
dlt_coef2i = calib.dlt_coef_from_intrinsic_extrinsic(cam.xp, cam.yp, cam.fx, cam.fy, coord, quat)
assert repro_err <= max_mean_err_px
# Using known camera extrinsic parameters + known object points and their images points
xp, yp, fx, fy, repro_err = calib.intrinsic_from_extrinsic_and_coordinates(cam.coords, cam.quat, xyz, uv, show_plot=show_plot)
dlt_coef2e = calib.dlt_coef_from_intrinsic_extrinsic(xp, yp, fx, fy, cam.coords, cam.quat)
assert repro_err <= max_mean_err_px
uv2i, uv2e = calib.find2d(1, dlt_coef2i, xyz), calib.find2d(1, dlt_coef2e, xyz)
np.testing.assert_almost_equal(uv2i, uv2e, decimal=decimal)
# Using known camera intrinsic and extrinsic parameters
dlt_coef2 = calib.dlt_coef_from_intrinsic_extrinsic(cam.xp, cam.yp, cam.fx, cam.fy, cam.coords, cam.quat)
repro_err = calib.reprojection_error(dlt_coef2, xyz, uv)
assert repro_err <= max_mean_err_px
uv2 = calib.find2d(1, dlt_coef2, xyz)
dlt_coefs2.append(dlt_coef2)
# --------------------------------------------------------------------------------------------------------------
if show_plot:
fig = plt.figure()
ax = fig.add_subplot(111)
for i in range(0, len(uv)):
ax.scatter(uv[i][0], uv[i][1], marker='+', color='g', s=40)
ax.scatter(uv2[i][0], uv2[i][1], marker='+', color='r', s=40)
ax.set_xlabel('x [pixels]')
ax.set_ylabel('y [pixels]')
plt.show()
np.testing.assert_almost_equal(uv, uv2, decimal=decimal)
# ------------------------------------------------------------------------------------------------------------------
# re-Generate intrinsic and extrinsic parameters from DLT coefficients
new_cams = []
for i, dlt_coef in enumerate(dlt_coefs):
cam = cams.get(i)
# --------------------------------------------------------------------------------------------------------------
# Estimate intrinsic parameters
xp, yp, fx, fy = calib.intrinsic_from_dlt_coef(dlt_coef)
np.testing.assert_almost_equal(cam.xp, xp, decimal=decimal)
np.testing.assert_almost_equal(cam.yp, yp, decimal=decimal)
np.testing.assert_almost_equal(cam.fx, fx, decimal=decimal)
np.testing.assert_almost_equal(cam.fy, fy, decimal=decimal)
new_cams.append(objects.Camera([xp, yp], [fx, fy], coord, quat))
# --------------------------------------------------------------------------------------------------------------
# Estimate extrinsic parameters
coord, quat = calib.extrinsic_from_dlt_coef(dlt_coef, (xp, yp, fx, fy))
np.testing.assert_almost_equal(cam.coords, coord, decimal=decimal)
np.testing.assert_almost_equal(cam.quat.rotate(norm_vect_init), quat.rotate(norm_vect_init), decimal=decimal)
if show_plot:
norm_vect = quat.rotate(norm_vect_init)
fig3 = plt.figure()
ax3 = Axes3D(fig3)
ax3.scatter(cam.x, cam.y, cam.z, marker='+', color='r', s=40)
ax3.quiver(cam.x, cam.y, cam.z, cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2, color='g')
ax3.quiver(cam.x, cam.y, cam.z, norm_vect[0], norm_vect[1], norm_vect[2], length=0.2, color='r')
ax3.set_xlabel('x [m]')
ax3.set_ylabel('y [m]')
ax3.set_zlabel('z [m]')
plt.show()
if show_plot:
fig = plt.figure()
ax = Axes3D(fig)
for coord in xyz:
ax.scatter(coord[0], coord[1], coord[2], marker='*', color='k', s=10)
for cam in new_cams:
ax.scatter(cam.x, cam.y, cam.z, marker='+', color='r', s=40)
ax.quiver(cam.x, cam.y, cam.z, cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2, color='r')
ax.set_xlabel('x [m]')
ax.set_ylabel('y [m]')
ax.set_zlabel('z [m]')
plt.show()
def main():
test_pinhole_cameras()
if __name__ == "__main__":
main()
\ No newline at end of file
...@@ -10,7 +10,7 @@ from utils.settings import Settings ...@@ -10,7 +10,7 @@ from utils.settings import Settings
def test_batch_processing_images_preprocessing() -> None:
"""
An example of how recordings (here only one) can be pre-processed (with dynamic cropping)
in order to generate .avi videos that can then be analysed by DeepLabCut.
"""
recording_paths = ['../data/mosquito_escapes/cam1', '../data/mosquito_escapes/cam2', '../data/mosquito_escapes/cam3']
...@@ -55,9 +55,9 @@ def test_batch_processing_images_preprocessing() -> None:
# rec_names_to_process=rec_names_to_process, show_plot=show_plot)
# ------------------------------------------------------------------------------------------------------------------
# Dynamic crop of all frames and save in save_path + '/_Cropped'
img_process.dynamic_crop_images(50, 50, show_plot=False, from_tracks=True,
rec_names_to_process=rec_names_to_process, from_fn_name='sample', delete_previous=True)
# ------------------------------------------------------------------------------------------------------------------
# Rotate view 2 (to always have swatter coming from right side of the image) and save in save_path + '/_Cropped'
...@@ -99,6 +99,45 @@ def test_batch_multiprocessing_images_preprocessing() -> None:
rec_names_to_process=rec_names_to_process, delete_previous=delete_previous)
def test_batch_processing_images_preprocessing_static_crop() -> None:
"""
An example of how recordings (here only one) can be pre-processed (with static cropping)
in order to generate .avi videos that can then be analysed by DeepLabCut.
"""
recording_paths = ['../data/mosquito_escapes/cam1', '../data/mosquito_escapes/cam2', '../data/mosquito_escapes/cam3']
save_path = '../data/mosquito_escapes/_Process'
dlt_path = os.path.join(os.getcwd(), '../data/mosquito_escapes/calib/20200215_DLTcoefs-py.csv')
rec_names_to_process = ['20200216_075901-sample_binned'] # names of the recordings to process
# (Should be the name of the folder with the images of the main camera (by default: #1))
framerate = 12500
x_crop = [72, 250, 102]
y_crop = [44, 184, 10]
# ------------------------------------------------------------------------------------------------------------------
# Initialization of the BatchProcessing class
img_process = BatchProcessing.from_directory_by_names(recording_paths, framerate, save_path,
threshold_nb_diff_char=2)
# ------------------------------------------------------------------------------------------------------------------
# Static crop of all frames and save in save_path + '/_Cropped'
img_process.static_crop_images(x_crop, y_crop, 50, 50, show_plot=False,
rec_names_to_process=rec_names_to_process, delete_previous=True)
# ------------------------------------------------------------------------------------------------------------------
# Rotate view 2 (to always have swatter coming from right side of the image) and save in save_path + '/_Cropped'
img_process.rotate_images(270, rec_names_to_process=rec_names_to_process, from_fn_name='crop', camn_to_process=[2])
# ------------------------------------------------------------------------------------------------------------------
# Stitch all views together (for each frame) and save in save_path + '/_Stitched'
img_process.stitch_images(rec_names_to_process=rec_names_to_process, from_fn_name='crop', delete_previous=True)
# ------------------------------------------------------------------------------------------------------------------
# Save each recording in an .avi in save_path + '/_Avi'
img_process.save_avi(rec_names_to_process=rec_names_to_process, from_fn_name='stitch', delete_previous=True)
def test_batch_multiprocessing_images_preprocessing2() -> None:
"""
This third example shows the pre-processing of a second dataset using multiprocessing.
...@@ -208,6 +247,7 @@ def test_batch_processing_fit_skeleton() -> None:
def main():
test_batch_processing_images_preprocessing()
test_batch_multiprocessing_images_preprocessing()
test_batch_processing_images_preprocessing_static_crop()
test_batch_processing_dlc_analysis()
test_batch_processing_fit_skeleton()
......
import numpy as np
from scipy.spatial.transform import Rotation
from camera import utils
def test_rotations():
# Test how rotation matrices work (intrinsic vs extrinsic)
alpha, beta, gamma = -45, 90, 35
in_degrees = True
norm_vect_init = np.array([0., 0., 1.])
# ------------------------------------------------------------------------------------------------------------------
# Single rotation matrices
alpha_r, beta_r, gamma_r = np.deg2rad(alpha), np.deg2rad(beta), np.deg2rad(gamma)
R_x = np.array([[1, 0, 0],
[0, np.cos(alpha_r), -np.sin(alpha_r)],
[0, np.sin(alpha_r), np.cos(alpha_r)]])
R_y = np.array([[np.cos(beta_r), 0, np.sin(beta_r)],
[0, 1, 0],
[- np.sin(beta_r), 0, np.cos(beta_r)]])
R_z = np.array([[np.cos(gamma_r), -np.sin(gamma_r), 0],
[np.sin(gamma_r), np.cos(gamma_r), 0],
[0, 0, 1]])
R_xs = Rotation.from_euler('x', [alpha], degrees=in_degrees).as_matrix()[0]
R_ys = Rotation.from_euler('y', [beta], degrees=in_degrees).as_matrix()[0]
R_zs = Rotation.from_euler('z', [gamma], degrees=in_degrees).as_matrix()[0]
np.testing.assert_almost_equal(R_x, R_xs)
np.testing.assert_almost_equal(R_y, R_ys)
np.testing.assert_almost_equal(R_z, R_zs)
# ------------------------------------------------------------------------------------------------------------------
# Multiple rotation matrices (extrinsic 'xyz' and intrinsic 'XYZ')
R_extrinsic = R_z @ R_y @ R_x
R_intrinsic = R_x @ R_y @ R_z
R_extrinsics = Rotation.from_euler('xyz', [alpha, beta, gamma], degrees=in_degrees).as_matrix()
R_intrinsics = Rotation.from_euler('XYZ', [alpha, beta, gamma], degrees=in_degrees).as_matrix()
np.testing.assert_almost_equal(R_extrinsic, R_extrinsics)
np.testing.assert_almost_equal(R_intrinsic, R_intrinsics)
# The inverse of intrinsic rotation is equal to extrinsic rotation (inverse is true too)
# 'xyz' == 'ZYX' and 'zyx' == 'XYZ'
R_extrinsics_inv = Rotation.from_euler('zyx', [gamma, beta, alpha], degrees=in_degrees).as_matrix()
R_intrinsics_inv = Rotation.from_euler('ZYX', [gamma, beta, alpha], degrees=in_degrees).as_matrix()
np.testing.assert_almost_equal(R_extrinsics, R_intrinsics_inv)
np.testing.assert_almost_equal(R_intrinsics, R_extrinsics_inv)
# Compute new vectors
vect_extrinsic = [norm_vect_init] @ R_extrinsic
vect_intrinsic = R_intrinsic @ norm_vect_init
#vect_extrinsics = [norm_vect_init] @ R_extrinsics
#vect_intrinsics = R_intrinsics @ norm_vect_init
vect_extrinsics = norm_vect_init @ Rotation.from_euler('xyz', [alpha, beta, gamma], degrees=in_degrees).as_matrix()
vect_intrinsics = Rotation.from_euler('XYZ', [alpha, beta, gamma], degrees=in_degrees).apply(norm_vect_init)
np.testing.assert_almost_equal(vect_extrinsic, [vect_extrinsics])
np.testing.assert_almost_equal(vect_intrinsic, vect_intrinsics)
# Normalizing the vectors is not needed here, because norm_vect_init is already normalized
angles_diff_extrinsic = utils.get_rotation_angles_btw_vectors(vect_extrinsic, vect_extrinsics)
angles_diff_intrinsic = utils.get_rotation_angles_btw_vectors(vect_intrinsic, vect_intrinsics)
np.testing.assert_almost_equal(angles_diff_extrinsic, [0, 0, 0])
np.testing.assert_almost_equal(angles_diff_intrinsic, [0, 0, 0])
# Diff between intrinsic and extrinsic rotations
angles_diff_extrinsic_intrinsic = utils.get_rotation_angles_btw_vectors(vect_extrinsic, vect_intrinsic)
print(angles_diff_extrinsic_intrinsic)
def main():
test_rotations()
if __name__ == "__main__":
main()