diff --git a/README.md b/README.md
index ae6d68e087819030ef7620249bc4cb90ee8d8e38..040cb4f5b82a710693cd54f9d4a8acc193431784 100755
--- a/README.md
+++ b/README.md
@@ -1,15 +1,15 @@
 # dlc_flytracker
-Python package for the tracking the 3d kinematics of animals (currently only flies) build around [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut)
+Python package for tracking the 3d kinematics of animals (currently only flies), built around [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut)
 
-
+<!--  -->
 ## Introduction
-This package allow the estimation of animal kinematic parameters (positions and angles) by fitting a 3d skeleton to 2d tracking results from DeepLabCut (DLC).
+This package allows the estimation of animal kinematic parameters (positions and angles) by fitting a 3d skeleton to 2d tracking results from DeepLabCut (DLC).
 
 The usual steps (see more details below in 'Usage'):
-- **Pre-processing of video recordings (at least 2 cameras):** image enhancement, dynamic cropping around moving animal and the stitching of multiple camera views. These cropped and stitched images are then saved in a single video per recorded event. Videos will be analysed later using DLC. This preprocessing step allow a significant increase of the tracking quality of DLC (see our [method paper](https://???path_toward_method_paper_to_be_published)).
-- **Selecting or defining an animal 3d skeleton** that will be used for the tracking. You can use a previously made skeleton (e.g. fly), or make your own from body and limbs modules (see the [section below](???)).
-- **Training of a DeepLabCut (DLC) network** *(Outside this package)***:** usually done using the DLC user interface. You will most likely need to manually label 100-200 images. See the [section below](???) regarding how to correctly choose and tweak DLC settings in order to get good tracking results with stitched images.
+- **Pre-processing of video recordings (at least 2 cameras):** image enhancement, dynamic cropping around the moving animal and stitching of the multiple camera views. These cropped and stitched images are then saved in a single video per recorded event. These videos will later be analysed using DLC. This preprocessing step allows a significant increase in DLC tracking quality (see our [method paper](https://TODO!path_toward_method_paper_to_be_published)).
+- **Selecting or defining an animal 3d skeleton** that will be used for the tracking. You can use a previously made skeleton (e.g. fly), or make your own from body and limb modules (see the [section below](TODO!)).
+- **Training of a DeepLabCut (DLC) network** *(Outside this package)***:** usually done using the DLC user interface. You will most likely need to manually label 100-200 images. See the [section below](TODO!) on how to choose and tweak DLC settings in order to get good tracking results with stitched images.
 - **Running DLC analysis on the full dataset** to track the position of the skeleton joints on all the images of your recordings.
 - **Loading DLC results in dlc_flytracker** to convert the DLC results into the .csv files that will be used in this package.
 - **Reconstructing 3d coordinates of the skeleton joints** from their 2d coordinates tracked with DLC.
@@ -17,17 +17,17 @@ The usual steps (see more details below in 'Usage'):
-- **Post-processing of the kinematics parameters**: removing outlier, using low pass filter, interpolating missing datapoints and smoothing timeseries.
+- **Post-processing of the kinematics parameters**: removing outliers, applying a low-pass filter, interpolating missing datapoints and smoothing timeseries.
 
 ## Citation
-If you use this code, we kindly ask that you please cite our method paper: [Cribellier et al, 2023](https://https://???path_toward_method_paper_to_be_published). 
-And of course, please cite the Deeplabcut articles: [Mathis et al, 2018](https://www.nature.com/articles/s41593-018-0209-y) and [Nath, Mathis et al, 2019.](https://doi.org/10.1038/s41596-019-0176-0).
+If you use this code, we kindly ask that you cite our method paper: [Cribellier et al, ??? - To be published](https://TODO!path_toward_method_paper_to_be_published).
+And of course, please cite the DeepLabCut articles: [Mathis et al, 2018](https://www.nature.com/articles/s41593-018-0209-y) and [Nath, Mathis et al, 2019](https://doi.org/10.1038/s41596-019-0176-0).
 
 ## Getting Started
 ### Installation and prerequisites
-`dlc_flytracker` requires a Python version `3.8`. It depends on the Python packages `deeplabcut`, `tensorflow` and other common packages like `numpy`, `scipy`, `matplotlib`, which are specified in [`requirements.txt`](requirements.txt).
+`dlc_flytracker` requires Python version `3.8`. It depends on the Python packages `deeplabcut`, `tensorflow` and other common packages like `numpy`, `scipy` and `matplotlib`, which are specified in [`requirements.txt`](requirements.txt).
 
-To guarantee that you have a Python 3.8, regardless of your system's Python version, it is most convenient to create an Anaconda environment to install the dependencies of `dlc_flytracker` in. [Miniconda](https://docs.conda.io/en/latest/miniconda.html) is a minimal distribution of Anaconda. Be aware that this is only free to use for academic use; commercial users need to pay a licence fee. Install it (or another Anaconda distribution) with the installer, Windows, Linux and macOS versions are available.
+To guarantee that you have a Python 3.8 installation, regardless of your system's Python version, it is most convenient to create an Anaconda environment in which to install the dependencies of `dlc_flytracker`. [Miniconda](https://docs.conda.io/en/latest/miniconda.html) is a minimal distribution of Anaconda. Be aware that it is free for academic use only; commercial users need to pay a licence fee. Install it (or another Anaconda distribution) with the installer; Windows, Linux and macOS versions are available.
 
-Create and activate a new environment:
+Create and activate a new environment:
 
 ```
 conda create -n "dlc-flytracker" python=3.8
@@ -36,7 +36,7 @@ conda activate dlc-flytracker
 ```
 
 `dlc_flytracker` transitively depends on the HDF5 library. Install it system-wide with e.g. `sudo apt install libhdf5-dev` on Ubuntu, or install it in your currently active Anaconda environment with `conda install -c anaconda hdf5`.
 
-Then, install all dependencies from [PyPI](https://pypi.org/) with `pip`:
+Then, install all dependencies from [PyPI](https://pypi.org/) with `pip`:
 
 ```
 pip install -r requirements.txt
@@ -79,14 +79,14 @@ With `recording_paths` being a list of paths (a path per camera) towards directo
 
-These steps allow the generating of .avi videos that will be analysed later by Deeplabcut (See example in the [Jupyter notebook for pre-processing](Notebook1-preprocessing_recordings.ipynb)).
+These steps allow the generation of the .avi videos that will later be analysed by DeepLabCut (see the example in the [Jupyter notebook for pre-processing](Notebook1-preprocessing_recordings.ipynb)).
 
-
+<!--  -->
 - **Image enhancement:** *(Optional)* `img_process.enhance_images(fn_name, factor)` with `fn_name` the name of image characteristic to enhance (can be: 'contrast', 'brightness', 'sharpness' or 'color').
 By enhancing the recorded images, you will make it easier to label body features later with DLC.
 
 - **Cropping of the recordings:** To reduce the size of the images that will be analysed by DLC.
     - **Cropping around a fixed position:**
-        - TODO: Add a way to manually define the position around which cropping should be done, and save in .csv file.
+        <!-- - TODO: Add a way to manually define the position around which cropping should be done, and save in .csv file. -->
         - **Crop all frames:** `img_process.crop_images(height, width)`
     - **Dynamic cropping around moving animal:**
        - **Manual tracking:**
@@ -101,7 +101,7 @@ By enhancing the recorded images, you will make it easier to label body features
 - **Rotate one or multiple views:** *(Optional)* `img_process.rotate_images(degrees, camn_to_process=[camn])`
 This step can help the labeling of the body features using DLC.
 
 - **Stitching of multiple camera views:** `img_process.stitch_images()`
-Will stitch all views together for each frame. By stitching all views of the same animal together, you will allow DeepLabCut to learn correlation between the different views, and therefore you will improve the overall tracking accuracy. See section [???](???) for more details.
+Will stitch all views together for each frame. By stitching all views of the same animal together, you allow DeepLabCut to learn correlations between the different views, and therefore improve the overall tracking accuracy. See section [TODO!](TODO!) for more details.
 
 - **Save each recording in an .avi** : `img_process.save_avi()`
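Taken together, the pre-processing calls above chain roughly as in the following sketch. This is illustrative only: the method names come from this README and from `process/batch_processing.py`, but the construction of `img_process` and the argument values are assumptions, not the package's verified API.

```
# Illustrative pre-processing chain (method names from this README; values are examples)
img_process.enhance_images('contrast', 1.5)                  # make body features easier to label
img_process.dynamic_crop_images(200, 200, from_tracks=True)  # crop around the moving animal
img_process.stitch_images()                                  # stitch all camera views per frame
img_process.save_avi()                                       # one video per recorded event, ready for DLC
```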
@@ -113,24 +113,26 @@ You can select an already defined animal skeleton (see [/skeleton_fitter/animals
 
 **OR**
 
-New animal skeletons can be defined by confining bodies and limbs modules (see [/skeleton_fitter/modules](/skeleton_fitter/modules)). See [example animal skeleton](/skeleton_fitter/animals/???), [example body module](/skeleton_fitter/modules/bodies) and [example limbs module](/skeleton_fitter/modules/limbs/???) for more information on how to build your own skeleton and modules.
+New animal skeletons can be defined by combining body and limb modules (see [/skeleton_fitter/modules](/skeleton_fitter/modules)). See [example animal skeleton](/skeleton_fitter/animals/TODO!), [example body module](/skeleton_fitter/modules/bodies) and [example limbs module](/skeleton_fitter/modules/limbs/TODO!) for more information on how to build your own skeleton and modules.
 
-
+<!--  -->
 
-??? TODO: Add details of exactly how a new skeleton is defined (what needs to be done, functions to write)
+<!--
+TODO! Add details of exactly how a new skeleton is defined (what needs to be done, functions to write)
 
-??? TODO: Explain how the contour of a stereotypical wing can be loaded in .yaml file using update_wings_init_skeleton3d.py
+TODO! Explain how the contour of a stereotypical wing can be loaded in a .yaml file using update_wings_init_skeleton3d.py
 
-??? For a fly: Initial body orientation is defined along x axis. Yaw, pitch and roll are then computed one after the other and in the reference frame of the body.
+TODO! For a fly: Initial body orientation is defined along the x axis. Yaw, pitch and roll are then computed one after the other, in the reference frame of the body.
+-->
 
 **in DeepLabCut:**
 You will need to use the exact same naming convention for the body joints in DLC.
 
-??? TODO: Make a script that generate a DLC config.yaml file from the previously defined dlc_flytracker skeleton (and number of cameras).
+<!-- TODO! Make a script that generates a DLC config.yaml file from the previously defined dlc_flytracker skeleton (and number of cameras). 
+-->
 
 ### Training a network in DeepLabCut
 *Using the DLC user interface.*
 
-You will need to train a DLC network that is capable of tracking accurately the joints of your skeleton in all recorded views. This will require to manually label around 100-200 images. Tutorials on how to use deeplabcut are available [here](https://deeplabcut.github.io/DeepLabCut/docs/intro.html).
+You will need to train a DLC network that is capable of accurately tracking the joints of your skeleton in all recorded views. This will require manually labelling around 100-200 images. Tutorials on how to use DeepLabCut are available [here](https://deeplabcut.github.io/DeepLabCut/docs/intro.html).
 
-This step is very important, final tracking accuracy will be heavily dependent on how good the training dataset is. See the example [config.yaml](/data/mosquito_escapes/config.yaml) to see an example and how DLC settings can be tweaked in order to get good tracking results with pre-cropped and stitched images.
+This step is very important: final tracking accuracy depends heavily on the quality of the training dataset. See the example [config.yaml](/data/mosquito_escapes/config.yaml) for how DLC settings can be tweaked in order to get good tracking results with pre-cropped and stitched images.
@@ -155,7 +157,7 @@ Estimate kinematic parameters by fitting of the 3d skeleton to the 2d/3d coordin
 
-You will need to choose an optimization method (e.g. `opt_method` = 'nelder-mead', 'powell' or 'leastsq'). And you will need to select a fitting method (e.g. `fit_method` = '2d' or '3d'), to decide if the skeleton will be fitted to the 2d or 3d coordinates of the joints. It is also possible to fit the body to a hybrid skeleton that use the average position of the limbs to improve accuracy ot the kinematic tracking of the body (`fit_method = '3d_hybrid'`).
+You will need to choose an optimization method (e.g. `opt_method` = 'nelder-mead', 'powell' or 'leastsq'), and a fitting method (e.g. `fit_method` = '2d' or '3d') to decide whether the skeleton will be fitted to the 2d or 3d coordinates of the joints. It is also possible to fit the body to a hybrid skeleton that uses the average position of the limbs to improve the accuracy of the kinematic tracking of the body (`fit_method = '3d_hybrid'`).
 
-??? TODO: Document how filtering/smoothing is done by default and how to modify it + implement filtering of outlier (example of Cas data)
+<!-- TODO! Document how filtering/smoothing is done by default and how to modify it + implement filtering of outliers (example of Cas data) -->
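The fitting options just described can be sketched as a call like the one below. This is a hypothetical sketch: the `opt_method` and `fit_method` names and values come from the paragraph above, while the method name and signature shown here are assumptions, not the package's documented API.

```
# Hypothetical sketch of the skeleton-fitting step; only the option values are from this README
img_process.fit_skeletons(
    opt_method='leastsq',    # or 'nelder-mead', 'powell'
    fit_method='3d_hybrid',  # '2d', '3d', or a hybrid fit using averaged limb positions
)
```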
 
 ## Notes
 ### Running the tests
diff --git a/camera/calib_gimbals.py b/camera/calib_gimbals.py
deleted file mode 100755
index 9fedd763716d5d3a47215bc231075ba84b7b305f..0000000000000000000000000000000000000000
--- a/camera/calib_gimbals.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import copy
-from dataclasses import dataclass
-
-from typing import List
-
-import numpy as np
-from scipy.interpolate import interp1d
-
-from camera import calib
-
-# Code for calibration mirrors mounted on gimbals + virtual cameras
-
-# TODO: Read calibration data (April tag position in 3D vs size and position in pixels vs orientation of mirror (in steps of stepper motors)
-# TODO: Generate calibration
-# TODO: Write/read calibration
-# TODO: link calibration of multiple cameras system with this calibration (same reference frame/origin)
-# TODO: Add function that say something about focus shift?
-
-
-@dataclass
-class CameraCalibMultiFocuses:
-    """ Class for keeping track of the intrinsic parameters of a camera for multiple distances of focus plan.
-    """
-
-    def __init__(self, xps: List[float], yps: List[float], fxs: List[float], fys: List[float], distances: List[float]):
-        """
-
-        Args:
-            xps: List of multiple x coordinate of camera principal point (pixels) for each distances of focus plan
-            yps: List of multiple y coordinate of camera principal point (pixels) for each distances of focus plan
-            fxs: List of multiple camera focal length along x (pixels) for each distances of focus plan
-            fys: List of multiple camera focal length along y (pixels) for each distances of focus plan
-            distances: List of distances between camera and focus plan
-        """
-
-        # Coordinates of principal points (if a camera sensor is N*M pixels, xp ~ N/2 and yp ~ M/2
-        self.xps = xps  # type: List[float]
-        self.yps = yps  # type: List[float]
-
-        # Focal lengths of the camera
-        self.fxs = fxs  # type: List[float]
-        self.fys = fys  # type: List[float]
-
-        self.distances = distances  # type: List[float]
-
-        assert len(distances) == len(xps) and len(distances) == len(fxs)
-        assert len(xps) == len(yps) and len(fxs) == len(fys)
-
-    def get_intrinsic(self, distance):
-        """ Get intrinsic parameters of the camera for a given focus distance.
-
-        Args:
-            distance: Distance of focus points (m)
-
-        Returns:
-            xp: x coordinate of camera principal point (pixels)
-            yp: y coordinate of camera principal point (pixels)
-            fx: camera focal length along x (pixels)
-            fy: camera focal length along y (pixels)
-        """
-
-        if distance in self.distances:
-            ind = self.distances.index(distance)
-
-            return self.xps[ind], self.yps[ind], self.fxs[ind], self.fys[ind]
-
-        else:
-            # TODO check if linear interpolation is correct (are the calibration curves linear?)
-
-            fxp = interp1d(self.distances, self.xps, kind='linear')
-            fyp = interp1d(self.distances, self.yps, kind='linear')
-            ffx = interp1d(self.distances, self.fxs, kind='linear')
-            ffy = interp1d(self.distances, self.fys, kind='linear')
-
-            return fxp(distance), fyp(distance), ffx(distance), ffy(distance)
-
-    @staticmethod
-    def from_dlt_coefs(dlt_coefs: List[List[float]], x_focus: List[float]):
-        """ Creates a CameraCalibMultiFocuses dataclass from
-
-        Args:
-            dlt_coefs: List of DLT coefficients of one camera for multiple focus distances (nb_distance*11). 
- x_focus: List of x coordinates of the focus plan - - Returns: - CameraCalibMultiFocuses - """ - - assert len(dlt_coefs) == len(x_focus) - - # Generate intrinsic and extrinsic parameters from dlt_coefs - coords, quats = np.zeros((len(x_focus), 3)), np.zeros((len(x_focus), 4)) - xps, yps, fxs, fys = np.zeros(len(x_focus)), np.zeros(len(x_focus)), np.zeros(len(x_focus)), np.zeros(len(x_focus)) - for i in range(0, len(x_focus)): - xps[i], yps[i], fxs[i], fys[i] = calib.intrinsic_from_dlt_coef(dlt_coefs[i]) - coords[i], quats[i] = calib.extrinsic_from_dlt_coef(dlt_coefs[i], (xps[i], yps[i], fxs[i], fys[i])) - - x_cams = [coord[0] for coord in coords] - # y_cams = [coord[1] for coord in coords] - # z_cams = [coord[2] for coord in coords] - # - # # To check that camera didn't move during the calibration - # np.testing.assert_almost_equal(0.0, np.nanstd(x_cams), decimal=4) - # np.testing.assert_almost_equal(0.0, np.nanstd(y_cams), decimal=4) - # np.testing.assert_almost_equal(0.0, np.nanstd(z_cams), decimal=4) - - # Generate CameraCalibMultiFocuses class - distances = x_focus - x_cams - - return CameraCalibMultiFocuses(xps, yps, fxs, fys, distances) - diff --git a/camera/objects.py b/camera/objects.py deleted file mode 100644 index 05f2ecca0a4f408987ddaa8db04e3947ee7b682e..0000000000000000000000000000000000000000 --- a/camera/objects.py +++ /dev/null @@ -1,807 +0,0 @@ -from typing import List, Any - -import numpy as np -from pyquaternion import Quaternion -from scipy.spatial.transform import Rotation - -from camera import calib, utils -from camera.calib_gimbals import CameraCalibMultiFocuses - - -class Object3D: - """Class for keeping track of an object with position and orientation in an orthogonal reference frame (X,Y,Z). - """ - - # Initial normal vector of the object (along X) - _norm_vect_init = [0., 0., 1.] # type: List[float] - - def __init__(self, coords: List[float], rotation: Any, in_degrees: bool = True) -> None: - """Initialises a 3d object. - - Args: - coords: Coordinates [X, Y, Z] of the object (m). - rotation: Rotation needed to change the initial orientation of the object [0, 0, 1] to its current orientation. - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. 
- """ - - # Object position in the world reference frame (X,Y,Z) - self._x, self._y, self._z = tuple(coords) # type: float, float, float # (m) - - # Quaternion [q0, q1, q2, q3] - self._quat = utils.get_quat_from_rotation(rotation) # type: Quaternion - - # Euler angles (extrinsic) around X,Y,Z axes - R = Rotation.from_quat(utils.quat0123_to_1230(self._quat.elements)) - (self._alpha, self._beta, self._gamma) = R.as_euler('xyz', degrees=in_degrees) # type: float, float, float - - self.in_degrees = in_degrees # type: bool - - @property - def coords(self) -> List[float]: - return np.array([self._x, self._y, self._z]) - - @coords.setter - def coords(self, coords: List[float]) -> None: - self._x, self._y, self._z = tuple(coords) - - @property - def x(self) -> float: - return self._x - - @property - def y(self) -> float: - return self._y - - @property - def z(self) -> float: - return self._z - - @property - def quat(self) -> Quaternion: - return self._quat - - @quat.setter - def quat(self, quat) -> None: - self._quat = quat - R = Rotation.from_quat(utils.quat0123_to_1230(quat.elements)) - (self._alpha, self._beta, self._gamma) = R.as_euler('xyz', degrees=self.in_degrees) - - @property - def angles(self) -> List[float]: - return [self._alpha, self._beta, self._gamma] - - @property - def alpha(self) -> float: - return self._alpha - - @property - def beta(self) -> float: - return self._beta - - @property - def gamma(self) -> float: - return self._gamma - - @property - def norm_vect(self) -> List[float]: # normal (normalized) vector to the object - return utils.normalize(self._quat.rotate(self._norm_vect_init)) - - @norm_vect.setter - def norm_vect(self, norm_vect) -> None: - # TODO use quaternion also to compute rotation btw vectors? - R = utils.get_rotation_btw_vectors(self._norm_vect_init, norm_vect) - self._quat = Quaternion(utils.quat0123_to_1230(R.as_quat())) - - def transpose(self, coords: List[float]) -> None: - """ - To move the object of the given coordinates. - - Args: - coords: [dx, dy, dz]: Coordinates (m) of which the object will be transposed - """ - - dx, dy, dz = tuple(coords) - - self._x, self._y, self._z = self._x + dx, self._y + dy, self._z + dz - - def rotate(self, rotation: Any, in_degrees: bool = True) -> None: - """ - To rotate the object of the given angles. - - Args: - rotation: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - new_quat = utils.get_quat_from_rotation(rotation, in_degrees=in_degrees) - - self._quat = new_quat * self._quat - self._x, self._y, self._z = new_quat.rotate(np.asarray([self._x, self._y, self._z])) - - def rotate_around(self, coords: List[float], rotation: Any, in_degrees: bool = True) -> None: - """ - To rotate the object of the given angles around a given position. - - Args: - coords: [xr, yr, zr]: Coordinates (m) around which the object will be rotated. - rotation: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. 
- OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - xr, yr, zr = tuple(coords) - - # Position to transpose the object before rotating it around the temporary origin ([xr, yr, zr]) - xt, yt, zt = xr - self._x, yr - self._y, zr - self._z - - self.transpose([xt, yt, zt]) - self.rotate(rotation, in_degrees=in_degrees) - self.transpose([-xt, -yt, -zt]) - - def rotate_around_self(self, rotation: Any, in_degrees: bool = True) -> None: - """ - To rotate the object of the given angles around a given position. - - Args: - rotation: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - self.rotate_around([self._x, self._y, self._z], rotation, in_degrees=in_degrees) - - def point_towards(self, coords: List[float]) -> None: - """ Will rotate the object to point it towards 3d coordinates. - - Args: - coords: [X, Y, Z]: Coordinates (m) of the object towards which to point. - """ - - self.norm_vect = utils.normalize(coords - self.coords) - - -class MultiObjects: - """ Class for keeping track of multiple object (in the same reference frame). - """ - - def __init__(self, objects: List[Object3D]) -> None: - """ Initialize a Cameras class (list of multiple cameras) - """ - - self.objects = objects # type: List[Object3D] # List of objects - - @property - def nb(self) -> int: - return len(self.objects) - - def get(self, index: int) -> Object3D: - """Retrieves an object from its index. - - Args: - index: The index of the object. - - Returns: - The camera at the specified index. - """ - return self.objects[index] - - def transpose(self, coords: List[float]) -> None: - """ To move all the objects of the given coordinates. - - Args: - coords: [dx, dy, dz]: Coordinates (m) of which the objects will be transposed - """ - - for obj in self.objects: - obj.transpose(coords) - - def rotate(self, rotation: Any, in_degrees: bool = True) -> None: - """ To rotate all the objects of the given orientation. - - Args: - rotation: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - # TODO check that rotation of the full reference frame is correct here - for obj in self.objects: - obj.rotate(rotation, in_degrees=in_degrees) - - -class Camera(Object3D): - """Class for keeping track of a (pinhole) camera. - """ - - def __init__(self, coords_principal_pt: List[float], focal_lengths: List[float], - coords: List[float], rotation: Any, in_degrees: bool = True) -> None: - """Initialises a camera. - - Args: - coords_principal_pt: [xp, yp]: Coordinates of principal points (pixels). 
- focal_lengths: [fx, fy]: Focal lengths (pixels). - coords: [X, Y, Z]: Coordinates (m) of the object. - rotation: Rotation needed to change the initial orientation of the camera [0, 0, 1] to its current orientation. - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - # -------------------------------------------------------------------------------------------------------------- - # Intrinsic parameters: - # Coordinates of principal points (if a camera sensor is N*M pixels, xp ~ N/2 and yp ~ M/2) - self.xp, self.yp = tuple(coords_principal_pt) # type: float, float # (pixels) - - # Focal lengths (or principal distances) of the camera - self.fx, self.fy = tuple(focal_lengths) # type: float, float # (pixels) - - # -------------------------------------------------------------------------------------------------------------- - # Extrinsic parameters: - # Camera position in the world reference frame (X,Y,Z) - self._x, self._y, self._z = tuple(coords) # type: float, float, float # (m) - - # Quaternion [q0, q1, q2, q3] - self._quat = utils.get_quat_from_rotation(rotation) # type: Quaternion - - # Euler angles (extrinsic) around X,Y,Z axes - R = Rotation.from_quat(utils.quat0123_to_1230(self._quat.elements)) - (self._alpha, self._beta, self._gamma) = R.as_euler('xyz', degrees=in_degrees) # type: float, float, float - - self.in_degrees = in_degrees # type: bool - - @property - def f(self) -> float: - return (self.fx + self.fy) / 2 - - @f.setter - def f(self, f) -> None: - self.fx, self.fy = f, f - - @property - def dlt_coef(self) -> List[float]: - return calib.dlt_coef_from_intrinsic_extrinsic(self.xp, self.yp, self.fx, self.fy, self.coords, self.quat, - in_degrees=self.in_degrees) - - @dlt_coef.setter - def dlt_coef(self, dlt_coef) -> None: - self.xp, self.yp, self.fx, self.fy = calib.intrinsic_from_dlt_coef(dlt_coef) - self.coords, self._quat = \ - calib.extrinsic_from_dlt_coef(dlt_coef, (self.xp, self.yp, self.fx, self.fy), in_degrees=self.in_degrees) - - @staticmethod - def from_dlt_coef(dlt_coef: List[float], in_degrees: bool = True) -> "Camera": - """Creates a camera class from its dlt coefficients. - - Args: - dlt_coef: List of DLT coefficients of one camera (11*1) - - in_degrees: Whether the angles will be in degree. Will be in radians if false. - - Returns: - Camera class - """ - - xp, yp, fx, fy = calib.intrinsic_from_dlt_coef(dlt_coef) - coords, quat = calib.extrinsic_from_dlt_coef(dlt_coef, (xp, yp, fx, fy), in_degrees=in_degrees) - - return Camera(xp, yp, fx, fy, coords, quat, in_degrees=in_degrees) - - -class MultiCameras: - """ Class for keeping track of multiple cameras (in the same reference frame). 
- """ - - def __init__(self, cameras: List[Camera]) -> None: - """ Initialize a Cameras class (list of multiple cameras) - """ - - self.cameras = cameras # type: List[Camera] # List of cameras - - @property - def nb(self) -> int: - return len(self.cameras) - - @property - def dlt_coefs(self) -> List[List[float]]: - - dlt_coefs = [] - for camera in self.cameras: - dlt_coefs.append(camera.dlt_coef) - - assert self.nb == len(dlt_coefs) - - return dlt_coefs - - def get(self, index: int) -> Camera: - """Retrieves a camera from its index. - - Args: - index: The index of the camera. - - Returns: - The camera at the specified index. - """ - return self.cameras[index] - - @staticmethod - def from_dlt_coefs(dlt_coefs: List[List[float]]) -> "MultiCameras": - """ Creates a cameras class from the dlt coefficients of the multiple cameras it contains. - - Args: - dlt_coefs: List with DLT coefficients (11 * nb_cam) of multiple cameras. - """ - - cameras = [] - for dlt_coef in dlt_coefs: - cameras.append(Camera.from_dlt_coef(dlt_coef)) - - return MultiCameras(cameras) - - def transpose(self, coords: List[float]) -> None: - """ To move all the cameras of the given coordinates. - - Args: - coords: [dx, dy, dz]: Coordinates (m) of which the cameras will be transposed - """ - - for camera in self.cameras: - camera.transpose(coords) - - def rotate(self, rotation: Any, in_degrees: bool = True) -> None: - """ To rotate all the cameras of the given orientation. - - Args: - rotation: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - # TODO check that rotation of the full reference frame is correct here - for camera in self.cameras: - camera.rotate(rotation, in_degrees=in_degrees) - - -class RoundMirror(Object3D): - """ Class for keeping track of a round mirror (plane and first surface). - """ - - def __init__(self, coords: List[float], rotation: Any, radius: float = np.inf, in_degrees: bool = True) -> None: - """ Initialize a round mirror. - - Args: - coords: [X, Y, Z]: Coordinates (m) of the object. - rotation: Rotation needed to change the initial orientation of the camera [0, 0, 1] to its current orientation. - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - radius: Radius of the round mirror (m) - - in_degrees: Whether the given angles are in degree. Will use radians if false. 
- """ - - # Object position in the world reference frame (X,Y,Z) - self._x, self._y, self._z = tuple(coords) # type: float, float, float # (m) - - # Quaternion [q0, q1, q2, q3] - self._quat = utils.get_quat_from_rotation(rotation) # type: Quaternion - - # Euler angles (extrinsic) around X,Y,Z axes - R = Rotation.from_quat(utils.quat0123_to_1230(self._quat.elements)) - (self._alpha, self._beta, self._gamma) = R.as_euler('xyz', degrees=in_degrees) # type: float, float, float - - self.radius = radius # type: float # (m) - - self.in_degrees = in_degrees # type: bool - - # TODO use radius to limit what is reflected by the mirror? - - -class VirtualCamera: - """ Virtual camera resulting from the presence of a mirror in front of a real camera. - """ - - # Initial normal vector of the object (along X) - _norm_vect_init = [0., 0., 1.] # type: List[float] - - def __init__(self, camera: Camera, mirror: RoundMirror, in_degrees: bool = True) -> None: - """ Initialize a virtual camera (generated by mirror and a camera) - - Args: - camera: An instance of the Camera class. - mirror: An instance of the RoundMirror class. - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - self.camera = camera # type: Camera # Real camera - self.mirror = mirror # type: RoundMirror # Mirror through which the virtual camera is created - - self.in_degrees = in_degrees # type: bool - - @property - def xp(self) -> float: - return self.camera.xp - - @property - def yp(self) -> float: - return self.camera.yp - - @property - def fx(self) -> float: - return self.camera.fx - - @property - def fy(self) -> float: - return self.camera.fy - - @property - def f(self) -> float: - return self.camera.f - - @property - def coords(self) -> List[float]: - return self._get_coordinates() - - @property - def x(self) -> float: - return self._get_coordinates()[0] - - @property - def y(self) -> float: - return self._get_coordinates()[1] - - @property - def z(self) -> float: - return self._get_coordinates()[2] - - @property - def quat(self) -> Quaternion: - return self._get_orientation() - - @property - def angles(self) -> List[float]: - return self._get_angles - - # @property - # def alpha(self) -> float: - # return self._get_angles[0] - # - # @property - # def beta(self) -> float: - # return self._get_angles[1] - # - # @property - # def gamma(self) -> float: - # return self._get_angles[2] - - def _get_coordinates(self) -> List[float]: - coords = tuple(2 * self.coords_projection - self.camera.coords) - return np.asarray(coords) - - def _get_orientation(self) -> Quaternion: - # TODO use quaternion also to compute rotation btw vectors? 
- R = utils.get_rotation_btw_vectors(self._norm_vect_init, self.norm_vect) - quat = Quaternion(utils.quat0123_to_1230(R.as_quat())) - - return quat - - def _get_angles(self) -> List[float]: - quat = self._get_orientation() - return Rotation.from_quat(utils.quat0123_to_1230(quat.elements)).as_euler('xyz', degrees=self.in_degrees) - - @property - def coords_projection(self): - """ Orthogonal projection of camera onto the mirror """ - return self.camera.coords - \ - np.dot(self.camera.coords - self.mirror.coords, self.mirror.norm_vect) * self.mirror.norm_vect - - @property - def coords_intersection(self): - """ Orthogonal projection of camera onto the mirror """ - return utils.intersect_line_plane([self.camera.alpha, self.camera.beta, self.camera.gamma], - self.camera.norm_vect, - [self.mirror.alpha, self.mirror.beta, self.mirror.gamma], - self.mirror.norm_vect) - - @property - def norm_vect(self) -> List[float]: - return self.camera.norm_vect - 2 * np.dot(self.camera.norm_vect, self.mirror.norm_vect) * self.mirror.norm_vect - - @property - def dlt_coef(self) -> List[float]: - return calib.dlt_coef_from_intrinsic_extrinsic(self.xp, self.yp, self.fx, self.fy, self.coords, self.quat, - in_degrees=self.in_degrees) - - def transpose(self, coords: List[float]) -> None: - """ - To move the camera and the mirror of the given coordinates. - - Args: - coords: [dx, dy, dz]: Coordinates (m) of which the object will be transposed - """ - - self.camera.transpose(coords) - self.mirror.transpose(coords) - - def rotate(self, rotation: Any, in_degrees: bool = True) -> None: - """ - To rotate the object of the given angles. - - Args: - rotation: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - self.camera.rotate(rotation, in_degrees=in_degrees) - self.mirror.rotate(rotation, in_degrees=in_degrees) - - def rotate_mirror(self, rotation: Any, in_degrees: bool = True) -> None: - """ - To rotate the mirror around itself and of the given angles. - - Args: - rotation: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - self.mirror.rotate_around_self(rotation, in_degrees=in_degrees) - - def get_rotation_to_point_towards(self, coords: List[float]) -> List[float]: - """ Get rotation between current orientation and the orientation needed to point toward given coordinates. - - Args: - coords: [X, Y, Z]: Coordinates (m) of the object towards which to point. - - Returns: - quat: Instance of pyquaternion.Quaternion. - """ - - norm_vect_mirror_points = utils.normalize(coords - self.camera.mirror.coords) - norm_vect_camera = self.camera.norm_vect - - new_norm_mirror = utils.normalize(norm_vect_mirror_points - norm_vect_camera) - - # TODO use quaternion also to compute rotation btw vectors? 
- R = utils.get_rotation_btw_vectors(new_norm_mirror, self.mirror.coords) - quat = Quaternion(utils.quat0123_to_1230(R.as_quat())) - - return quat - - def point_towards(self, coords: List[float]) -> None: - """ Will rotate the mirror to point the virtual camera towards the given 3d coordinates. - - Args: - coords: [X, Y, Z]: Coordinates (m) of the object towards which to point. - """ - - norm_vect = utils.normalize(coords - self.coords) - self.mirror.norm_vect = utils.normalize(self.camera.norm_vect - norm_vect) - - def distance_to(self, coords: List[float]) -> List[float]: - """ Get the distance between position of camera and given coordinates. - - Args: - coords: [X, Y, Z]: Coordinates (m) of the object. - - Returns: - distance: Distance (m) between camera and focus plan. - """ - - return np.linalg.norm(coords - self.coords) - - def update_intrinsic_from_distance(self, calib: CameraCalibMultiFocuses, distance: float) -> None: - """ Will update the intrinsic parameters (xp, yp, fx, fy) of the camera - using the calibration and the focus distance. - - Args: - calib: Instance of dataclass CameraCalibMultiFocuses - distance: Distance (m) between camera and focus plan. - """ - - self.camera.xp, self.camera.yp, self.camera.fx, self.camera.fy = calib.get_intrinsic(distance) - - -class MultiVirtualCameras: - """Class for keeping track of multiple cameras (in the same reference frame). - """ - - def __init__(self, cameras: List[VirtualCamera]) -> None: - """ Initialize a Cameras class (list of multiple virtual cameras) - """ - - self.cameras = cameras # type: List[VirtualCamera] # List of virtual cameras - - @property - def nb(self) -> int: - return len(self.cameras) - - @property - def dlt_coefs(self) -> List[List[float]]: - dlt_coefs = [] - for camera in self.cameras: - dlt_coefs.append(camera.dlt_coef) - - assert self.nb == len(dlt_coefs) - - return dlt_coefs - - def transpose(self, coords: List[float]) -> None: - """ To move all the virtual cameras of the given coordinates. - - Args: - coords: [dx, dy, dz]: Coordinates (m) of which the cameras will be transposed - """ - - for camera in self.cameras: - camera.transpose(coords) - - def rotate(self, rotation: Any, in_degrees: bool = True) -> None: - """ To rotate all the cameras of the given angles. - - Args: - rotation: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. - """ - - # TODO check that rotation of the full reference frame is correct here - for camera in self.cameras: - camera.rotate(rotation, in_degrees=in_degrees) - - def rotate_mirrors(self, rotations, in_degrees: bool = True) -> None: - """ To rotate all the cameras of the given angles. - - Args: - rotations: List of rotations, with each rotation being: - quaternion: Preferred way of defining the rotation. - Instance of pyquaternion.Quaternion. - OR - List of quaternion components [q0, q1, q2, q3] - OR - Instance of scipy.spatial.transform.Rotation - OR - Rotation matrix [3*3] OR Transformation matrix [4x4] - OR - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes (degrees). - - in_degrees: Whether the given angles are in degree. Will use radians if false. 
- """ - - # TODO check that rotation of the full reference frame is correct here - for i, camera in enumerate(self.cameras): - camera.rotate_mirror(rotations[i], in_degrees=in_degrees) - - def point_towards(self, coords: List[float]) -> None: - """ Point the virtual cameras towards a 3d object - - Args: - coords: [X, Y, Z]: Coordinates (m) of the object towards which the camera will be pointed. - """ - - for camera in self.cameras: - quat = camera.get_rotation_to_point_towards(coords) - camera.mirror.rotate_around_self(quat) - - def update_intrinsics_from_coordinates(self, calibs: List[CameraCalibMultiFocuses], coords: List[float]) -> None: - """ Will update the intrinsic parameters (xp, yp, fx, fy) of the cameras using the calibration - and the coordinates where the focus is made. - - Args: - calibs: List of dataclasses CameraCalibMultiFocuses (one per camera) - coords: [X, Y, Z]: Coordinates (m) of the object towards which the camera will be pointed. - """ - - assert len(calibs) == len(self.cameras) - - for i, camera in enumerate(self.cameras): - camera.update_intrinsics_from_coordinates(calibs[i], camera.distance_to(coords)) diff --git a/camera/utils.py b/camera/utils.py index b50a6d1ea71c2397df7ea56e98f9cdcdf8d17867..ee1e1eb8e842bb752ece37cef728331e96bb59ee 100644 --- a/camera/utils.py +++ b/camera/utils.py @@ -5,40 +5,6 @@ from pyquaternion import Quaternion from scipy.spatial.transform import Rotation -def intersect_line_plane(pt_line: List[float], vect_line: List[float], pt_plane: List[float], vect_plane: List[float], - epsilon: float = 1e-6) -> List[float]: - """ - - Args: - pt_line: A point on the line ([x, y, z]). - vect_line: Any vector along the line ([u, v, w]) . - - pt_plane: A point on the plane ([x, y, z]). - vect_plane: A normal vector ([u, v, w]) defining the plane direction (does not need to be normalized). - - epsilon: - - Returns: - coords: [x, y, z] coordinates of the intersection. - """ - - pt_line = np.asarray(pt_line) - vect_line = np.asarray(vect_line) - pt_plane = np.asarray(pt_plane) - vect_plane = np.asarray(vect_plane) - - dot = np.dot(vect_plane, vect_line) - - assert abs(dot) > epsilon, ('The segment is parallel to the plane.') - - fac = -np.dot(vect_plane, pt_line + pt_plane) / dot - vect_line = fac * vect_line - - coords = pt_line + vect_line - - return coords - - def normalize(vect: List[float]) -> List[float]: norm = np.linalg.norm(vect) @@ -48,62 +14,6 @@ def normalize(vect: List[float]) -> List[float]: return vect / norm -def to_inhomogeneus(X: np.ndarray) -> np.ndarray: - - X = np.asarray(X) - - if X.ndim > 1: - raise ValueError("x must be one-dimensional.") - - return (X / X[-1])[:-1] - - -def to_homogeneus(X: np.ndarray) -> np.ndarray: - - X = np.asarray(X) - - if X.ndim > 1: - raise ValueError("X must be one-dimensional.") - - return np.hstack([X, 1]) - - -def is_orthogonal(M, return_error: bool = False): - """ Check if matrix is orthogonal - - Args: - M: a square matrix (NxN) - return_error: Whether to return the rms error for orthogonality - - Return: - Whether the matrix is found to be orthogonal - OR - The rms error for orthogonality - """ - - assert all(len(row) == len(M) for row in M), "Matrix need to be square (NxN)" - - if not return_error: - rtol = 1e-05 - atol = 1e-08 - if not np.allclose(np.dot(M, M.conj().transpose()), np.eye(3), rtol=rtol, atol=atol): - # Matrix is not orthogonal (i.e. 
its transpose should be its inverse) - return False - - elif not np.isclose(np.linalg.det(M), 1.0, rtol=rtol, atol=atol): - # Matrix is not special orthogonal (i.e. its determinant must be +1.0) - return False - - else: - return True - - else: - eye_diff = np.dot(M, M.conj().transpose()) - np.eye(3) - ortho_error = np.sqrt(np.sum(eye_diff ** 2)) - - return ortho_error - - def get_rotation_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.])) -> Rotation: """ Get rotation matrix between two vectors using scipy.spatial.transform.Rotation. @@ -132,50 +42,6 @@ def get_rotation_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., return R -# def get_rotation_quaternion_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.])) -> Quaternion: -# """ Get rotation matrix between two vectors using scipy.spatial.transform.Rotation. -# -# Args: -# vec1: vector #1 (e.g. observed in initial frame) (N, 3) -# vec2: vector #2 (e.g. observed in another frame) (N, 3) -# -# Returns: -# Quaternion -# """ -# -# return Quaternion(quat0123_to_1230(get_rotation_btw_vectors(vec1, vec2).as_quat())) -# -# -# def get_rotation_matrix_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.])) -> np.ndarray: -# """ Get rotation matrix between two vectors using scipy.spatial.transform.Rotation. -# -# Args: -# vec1: vector #1 (e.g. observed in initial frame) (N, 3) -# vec2: vector #2 (e.g. observed in another frame) (N, 3) -# -# Returns: -# Rotation matrix -# """ -# -# return get_rotation_btw_vectors(vec1, vec2).as_matrix() - - -def get_rotation_angles_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.]), - in_degrees: bool = True) -> np.ndarray: - """ Get rotation angles between two vectors using scipy.spatial.transform.Rotation. - - Args: - vec1: vector #1 (e.g. observed in initial frame) (N, 3) - vec2: vector #2 (e.g. observed in another frame) (N, 3) - in_degrees: Whether to give angles in degree. Will be in radians if false. True by default. - - Returns: - Euler angles (extrinsic) [alpha, beta, gamma] around X,Y,Z axes. - """ - - return get_rotation_btw_vectors(vec1, vec2).as_euler('xyz', degrees=in_degrees) - - def get_rotation_angle_btw_vectors(vec1: np.ndarray, vec2: np.ndarray = np.array([0., 0., 1.]), in_degrees: bool = True) -> np.ndarray: """ Get rotation angle between two vectors. @@ -268,47 +134,3 @@ def quat0123_to_1230(quat: np.ndarray) -> np.ndarray: assert len(quat) == 4 return np.append(quat[1:], quat[0]) - -# TODO remove following code? -# def find_closest_orthogonal_matrix(A): -# ''' -# Find closest orthogonal matrix to *A* using iterative method. -# -# From: https://gist.github.com/dokato/7a997b2a94a0ec6384a5fd0e91e45f8b -# Bases on the code from REMOVE_SOURCE_LEAKAGE function from OSL Matlab package. 
-# -# Args: -# A (numpy.array): array shaped k, n, where k is number of channels, n - data points -# -# Returns: -# L (numpy.array): orthogonalized matrix with amplitudes preserved -# -# ''' -# -# MAX_ITER = 2000 -# -# TOLERANCE = np.max((1, np.max(A.shape) * np.linalg.svd(A.T, False, False)[0])) * np.finfo(A.dtype).eps # TODO -# reldiff = lambda a, b: 2 * abs(a - b) / (abs(a) + abs(b)) -# convergence = lambda rho, prev_rho: reldiff(rho, prev_rho) <= TOLERANCE -# -# A_b = A.conj() -# d = np.sqrt(np.sum(A * A_b, axis=1)) -# -# rhos = np.zeros(MAX_ITER) -# -# for i in range(MAX_ITER): -# scA = A.T * d -# -# u, s, vh = np.linalg.svd(scA, False) -# -# V = np.dot(u, vh) -# -# d = np.sum(A_b * V.T, axis=1) -# -# L = (V * d).T -# E = A - L -# rhos[i] = np.sqrt(np.sum(E * E.conj())) -# if i > 0 and convergence(rhos[i], rhos[i - 1]): -# break -# -# return L diff --git a/images/imagesequence.py b/images/imagesequence.py index 6ce3178c8080b6f7879b386deaab6ef75197774c..dabc10397e077a7137defd42cd5fff919e16a01a 100755 --- a/images/imagesequence.py +++ b/images/imagesequence.py @@ -87,7 +87,7 @@ class ImageSequence: return self._names def get_names_from_numbers(self, numbers: List[str]) -> List[int]: - """Retrieves the names of images from their numbers. Assumes that the list of numbers is sorted. + """ Retrieves the names of images from their numbers. Assumes that the list of numbers is sorted. Args: numbers: The numbers of images. diff --git a/main_process_mating.py b/main_process_mating.py index 9895295bb9eafd10d2d8e055175a2d7be3b88b30..4d614103fd49b7a821d67d20d9f8e248e17049ae 100755 --- a/main_process_mating.py +++ b/main_process_mating.py @@ -60,8 +60,8 @@ rec_names_to_process = 'all' # ------------------------------------------------------------------------------------------------------------------ # Crop all frames and save in save_path + '/_Cropped' -img_process.crop_images(200, 200, from_tracks=True, show_plot=False, - rec_names_to_process=rec_names_to_process, delete_previous=True) +img_process.dynamic_crop_images(200, 200, from_tracks=True, show_plot=False, + rec_names_to_process=rec_names_to_process, delete_previous=True) # ------------------------------------------------------------------------------------------------------------------ # Enhance image contrast from path '/_Cropped' diff --git a/process/batch_processing.py b/process/batch_processing.py index 5611daec7fda83e4fe49bfa5c3fcf06712b6bcdd..2c5166d8f68f7c15ea9fd412bfb5ae652bca77e6 100755 --- a/process/batch_processing.py +++ b/process/batch_processing.py @@ -23,7 +23,7 @@ from process.skeleton_fitting import fit_all_skeleton, plot_all_skeleton from images.read import * -import_dlc = False # TODO remove import_dlc / find a way to make the import of deeplabcut and tensorflow optional (only if dlc will be used) +import_dlc = False if import_dlc: import deeplabcut @@ -115,7 +115,7 @@ class BatchProcessing: Args: batch_settings: BatchSettings - + Returns: BatchProcessing """ @@ -170,7 +170,7 @@ class BatchProcessing: threshold_s: Will check if the date in the folder names differ from less than threshold in second move_to_delete: Whether to move the unmatched recording folders to a to_delete folder (will be in the same parent directory as the recording folders) - + Returns: BatchProcessing """ @@ -228,7 +228,7 @@ class BatchProcessing: threshold_nb_diff_char: Will check how many characters are different between the file names. 
             move_to_delete: Whether to move the unmatched recording folders to a to_delete folder
                 (will be in the same parent directory as the recording folders)
-            
+
         Returns:
             BatchProcessing
         """
@@ -275,7 +275,7 @@ class BatchProcessing:
             protocol_dict: Dictionary with the protocol steps of the multiples processes to run in parallel
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow up
             delete_previous: Whether to erase previous results of the chosen process
@@ -293,7 +293,7 @@ class BatchProcessing:
             step_frame: The number of images between samples to take
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow up
             delete_previous: Whether to erase previous results of the chosen process
@@ -311,7 +311,7 @@ class BatchProcessing:
             resize_ratio: Ratio for up- or down-sizing of the images
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow up
             delete_previous: Whether to erase previous results of the chosen process
@@ -320,18 +320,18 @@ class BatchProcessing:
         kwargs['resize_ratio'] = resize_ratio
 
         self._do_batch('resize', **kwargs)
-    
+
     def enhance_images(self, fn_name: str, factor: float, **kwargs: any) -> None:
         """
-        Enhance the 'contrast', brightness', 'sharpness' or 'color' of the images of all recordings.
-        
+        Enhance the 'contrast', 'brightness', 'sharpness' or 'color' of the images of all recordings.
+
+        Args:
+            fn_name: image characteristic to enhance, can be: 'contrast', 'brightness', 'sharpness' or 'color'
+            factor: enhancing factor: 1.0 always returns a copy of the original image, lower factors mean less
+                color (brightness, contrast, etc.), and higher values more
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow up
             delete_previous: Whether to erase previous results of the chosen process
@@ -342,11 +342,11 @@ class BatchProcessing:
 
         if fn_name == 'contrast':  # Enhancing contrast of images
             self._do_batch('enhance_contrast', **kwargs)
 
-        elif fn_name == 'brightness':  # Enhancing brightness of images
-            self._do_batch('enhance_brightness', **kwargs)
+        elif fn_name == 'brightness':  # Enhancing brightness of images
+            self._do_batch('enhance_brightness', **kwargs)
 
         elif fn_name == 'sharpness':  # Enhancing sharpness of images
-            self._do_batch('enhance_sharpness', **kwargs)
+            self._do_batch('enhance_sharpness', **kwargs)
 
         elif fn_name == 'color':  # Enhancing color of images
             self._do_batch('enhance_color', **kwargs)
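A short usage sketch of the dispatch above. The `enhance_images(fn_name, factor)` signature, the batch names and the ValueError are taken from this code; the `img_process` object and the factor values are hypothetical examples.

```
# Hypothetical usage of BatchProcessing.enhance_images; factor values are examples
img_process.enhance_images('contrast', 1.5)    # dispatches to _do_batch('enhance_contrast', ...)
img_process.enhance_images('brightness', 1.2)  # dispatches to _do_batch('enhance_brightness', ...)
img_process.enhance_images('gamma', 1.0)       # raises ValueError: fn_name unknown
```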
@@ -354,7 +354,7 @@ class BatchProcessing:
 
         else:
             raise ValueError("fn_name unknown. It has to be either 'contrast', 'brightness', 'sharpness' or 'color'")
 
-    def auto_contrast_images(self, resize_ratio: float = 1.0, **kwargs: any) -> None:
+    def auto_contrast_images(self, resize_ratio: float = 1.0, **kwargs: any) -> None:
         """
         Will run an auto-contrasting algorythm on each image
 
@@ -362,7 +362,7 @@ class BatchProcessing:
             resize_ratio: Ratio for up- or down-sizing of the images
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow up
             delete_previous: Whether to erase previous results of the chosen process
@@ -377,7 +377,7 @@ class BatchProcessing:
         Will equalize all images of each sequence together
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow up
             delete_previous: Whether to erase previous results of the chosen process
@@ -386,25 +386,48 @@ class BatchProcessing:
 
         self._do_batch('equalize', **kwargs)
 
-    def crop_images(self, height: int, width: int, from_tracks: bool = False,
-                    **kwargs: any) -> None:
+    def static_crop_images(self, x_crop: int, y_crop: int, height: int, width: int, **kwargs: any) -> None:
         """
-        Crop images at the given height and width around previously tracked 2d coordinates in pixels
+        Crop images at the given height and width around static 2d coordinates in pixels
 
         Args:
+            x_crop: list of x coordinates (one per cam) of the center around which the images will each be cropped (nb_cam*1)
+            y_crop: list of y coordinates (one per cam) of the center around which the images will each be cropped (nb_cam*1)
             height: height to which the images will each be cropped
             width: width to which the images will each be cropped
-            from_tracks: Whether to use the coordinates from the tracked objects for the cropping
+
+        Kwargs:
+            rec_names_to_process: a list of recording names (folder containing images)
+            camn_to_process: a list of camera numbers for which the recordings will be processed
+            delete_previous: Whether to erase previous results of the chosen process
+            show_plot:
+        """
+        kwargs['x_crop'], kwargs['y_crop'] = x_crop, y_crop
+        kwargs['crop_height'], kwargs['crop_width'] = height, width
+        kwargs['dynamic_crop'] = False
+        self._do_batch('crop', **kwargs)
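The static/dynamic split introduced here can be used roughly as follows. This is a hypothetical sketch: both method names and the `from_tracks` flag come from this diff (the dynamic call mirrors the updated `main_process_mating.py`), while `img_process` and the coordinate values are examples, with one crop centre per camera as described in the docstring above.

```
# Hypothetical usage of the reworked cropping API
# Static: one fixed crop centre per camera (here two cameras)
img_process.static_crop_images(x_crop=[512, 480], y_crop=[384, 400], height=200, width=200)

# Dynamic: crop around previously tracked 2d coordinates, as in main_process_mating.py
img_process.dynamic_crop_images(200, 200, from_tracks=True)
```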
+
+    def dynamic_crop_images(self, crop_height: int, crop_width: int, from_tracks: bool = False,
+                            **kwargs: any) -> None:
+        """
+        Crop images at the given height and width around dynamic 2d coordinates in pixels
+
+        Args:
+            crop_height: height to which the images will each be cropped
+            crop_width: width to which the images will each be cropped
+            from_tracks: Whether to use the coordinates from the tracked objects for the cropping
+
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             delete_previous: Whether to erase previous results of the chosen process
             show_plot:
         """
-        kwargs['height_crop'], kwargs['width_crop'] = height, width
+        kwargs['crop_height'], kwargs['crop_width'] = crop_height, crop_width
         kwargs['from_tracks'] = from_tracks
+        kwargs['dynamic_crop'] = True
         self._do_batch('crop', **kwargs)
 
     def rotate_images(self, degrees: int, from_fn_name: str = 'sample', **kwargs: any) -> None:
@@ -417,7 +440,7 @@ class BatchProcessing:
             (e.g. 'sample' or 'enhance_contrast')
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             delete_previous: Whether to erase previous results of the chosen process
         """
@@ -431,7 +454,7 @@ class BatchProcessing:
         Stitch all cameras views together to make one image per frame
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow up
             delete_previous: Whether to erase previous results of the chosen process
@@ -450,7 +473,7 @@ class BatchProcessing:
             blob_detector_settings: BlobDetectorSettings class
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow up
             delete_previous: Whether to erase previous results of the chosen process
@@ -472,7 +495,7 @@ class BatchProcessing:
             reconstructor_settings: Reconstructor3DSettings class
 
         Kwargs:
-            rec_names_to_process: a list of recording names (folder containing images) 
+            rec_names_to_process: a list of recording names (folder containing images)
             camn_to_process: a list of camera numbers for which the recordings will be processed
             from_fn_name: the name of the previous processing function, from which the current process as to follow
up delete_previous: Whether to erase previous results of the chosen process @@ -488,7 +511,7 @@ class BatchProcessing: reconstructor_settings: Reconstructor3DSettings = None, **kwargs: any) -> None: """ Track 2d coordinates of animal(s) manually, and then reconstruct 3d tracks. - + Args: dlt_path: path of the .csv file containing th 11*nb_cam DLT coefficients (3d calibration) index_to_track: List of indexes to select image where the manual tracking will be done @@ -538,7 +561,7 @@ class BatchProcessing: Save images as 8 bits .tiff Kwargs: - rec_names_to_process: a list of recording names (folder containing images) + rec_names_to_process: a list of recording names (folder containing images) camn_to_process: a list of camera numbers for which the recordings will be processed from_fn_name: the name of the previous processing function, from which the current process as to follow up delete_previous: Whether to erase previous results of the chosen process @@ -559,7 +582,7 @@ class BatchProcessing: from_tracks: Whether to use the coordinates from the tracked objects Kwargs: - rec_names_to_process: a list of recording names (folder containing images) + rec_names_to_process: a list of recording names (folder containing images) camn_to_process: a list of camera numbers for which the recordings will be processed from_fn_name: the name of the previous processing function, from which the current process as to follow up delete_previous: Whether to erase previous results of the chosen process @@ -582,7 +605,7 @@ class BatchProcessing: lossless: Whether to save lossless video (e.g. not compressed) Kwargs: - rec_names_to_process: a list of recording names (folder containing images) + rec_names_to_process: a list of recording names (folder containing images) camn_to_process: a list of camera numbers for which the recordings will be processed from_fn_name: the name of the previous processing function, from which the current process as to follow up delete_previous: Whether to erase previous results of the chosen process @@ -606,10 +629,8 @@ class BatchProcessing: save_avi: model_name: - TODO check that shuffle and trainingsetindex are needed in addition to model_name - Kwargs: - rec_names_to_process: a list of recording names (folder containing images) + rec_names_to_process: a list of recording names (folder containing images) camn_to_process: a list of camera numbers for which the recordings will be processed from_fn_name: the name of the previous processing function, from which the current process as to follow up delete_previous: Whether to erase previous results of the chosen process @@ -636,7 +657,7 @@ class BatchProcessing: threshold_likelihood: Kwargs: - rec_names_to_process: a list of recording names (folder containing images) + rec_names_to_process: a list of recording names (folder containing images) camn_to_process: a list of camera numbers for which the recordings will be processed from_fn_name: the name of the previous processing function, from which the current process as to follow up delete_previous: Whether to erase previous results of the chosen process @@ -667,7 +688,7 @@ class BatchProcessing: multiprocessing: Whether to use multiprocessing to speed up processing (then won't be able to plot results during optimization) Kwargs: - rec_names_to_process: a list of recording names (folder containing images) + rec_names_to_process: a list of recording names (folder containing images) camn_to_process: a list of camera numbers for which the recordings will be processed from_fn_name: the name of the 
previous processing function, from which the current process as to follow up delete_previous: Whether to erase previous results of the chosen process @@ -695,7 +716,7 @@ class BatchProcessing: save_images: Whether to save the plotted image Kwargs: - rec_names_to_process: a list of recording names (folder containing images) + rec_names_to_process: a list of recording names (folder containing images) camn_to_process: a list of camera numbers for which the recordings will be processed from_fn_name: the name of the previous processing function, from which the current process as to follow up delete_previous: Whether to erase previous results of the chosen process @@ -715,7 +736,7 @@ class BatchProcessing: to_del_names: List of recording names to delete (will be moved to to_delete folder) Kwargs: - rec_names_to_process: a list of recording names (folder containing images) + rec_names_to_process: a list of recording names (folder containing images) camn_to_process: a list of camera numbers for which the recordings will be processed from_fn_name: the name of the previous processing function, from which the current process as to follow up delete_previous: Whether to erase previous results of the chosen process @@ -768,7 +789,7 @@ class BatchProcessing: kwargs['rec_names_to_process'] = [kwargs['rec_names_to_process']] else: assert type(kwargs['rec_names_to_process']) is list - + main_rec_names_to_process = [name for name in all_main_rec_names if name in kwargs['rec_names_to_process']] else: main_rec_names_to_process = all_main_rec_names @@ -947,16 +968,22 @@ class BatchProcessing: frames=frames[camn], show_plot=show_plot) elif fn_name == 'crop': - track_path = os.path.join(self._get_path_process('track2d'), rec_names[self.main_camn]) - track_paths = {camn: os.path.join(track_path, 'cam{0}'.format(camn)) for camn in range(1, self.nb_cam + 1)} - coords_objs = load_2d_coords(frames[camn], track_paths[camn], save_process_paths[camn], - image_sizes[camn], kwargs['from_tracks'], show_plot=show_plot) + if kwargs['dynamic_crop']: + track_path = os.path.join(self._get_path_process('track2d'), rec_names[self.main_camn]) + track_paths = {camn: os.path.join(track_path, 'cam{0}'.format(camn)) for camn in range(1, self.nb_cam + 1)} + coords_objs = load_2d_coords(frames[camn], track_paths[camn], save_process_paths[camn], + image_sizes[camn], kwargs['from_tracks'], show_plot=show_plot) + + else: + coords_objs = {'obj0': {'frame': frames[camn], + 'x_px': np.ones(np.shape(frames[camn]))*kwargs['x_crop'][camn-1], + 'y_px': np.ones(np.shape(frames[camn]))*kwargs['y_crop'][camn-1]}} if coords_objs: crop_all_images(coords_objs, image_names[camn], image_paths[camn], save_process_paths[camn], - kwargs['height_crop'], kwargs['width_crop']) + kwargs['crop_height'], kwargs['crop_width']) else: - print('>> Could not find any track') + print('>> Could not find any object') elif fn_name == 'save_strobe': radius = 50 if 'radius' not in kwargs.keys() else kwargs['radius'] @@ -1018,7 +1045,7 @@ class BatchProcessing: # Save process history in yaml file settings.to_yaml(os.path.join(save_process_path, 'process_history.yaml')) - + print('>> {0} as been processed (time elapsed: {1:.4f} s)'.format(main_rec_name, time.time() - start_rec)) print('> Full batch as been processed (total time elapsed: {0:.4f} s)'.format(time.time() - start)) @@ -1079,7 +1106,7 @@ class BatchProcessing: blob_det_sets = BlobDetectorSettings() if 'blob_detector_settings' not in kwargs.keys() \ else 
BlobDetectorSettings.from_dict(kwargs['blob_detector_settings']) - track_2d_objects(image_names[camn], image_paths[camn], rec_names[camn], save_process_paths[camn], + track_2d_objects(recording.get(camn), rec_names[camn], save_process_paths[camn], back_subtractor_settings=back_sub_sets, blob_detector_settings=blob_det_sets, show_plot=show_plot) continue @@ -1128,10 +1155,18 @@ class BatchProcessing: continue elif fn_name == 'crop': - from_tracks = kwargs['from_tracks'] if 'from_tracks' in kwargs.keys() else False - csv_path = os.path.join(from_fn_path, rec_names[self.main_camn], 'cam{0}'.format(camn)) - coords_objs = load_2d_coords(frames[camn], csv_path, save_process_paths[camn], image_sizes[camn], - from_tracks, show_plot=show_plot) + if kwargs['dynamic_crop']: + from_tracks = kwargs['from_tracks'] if 'from_tracks' in kwargs.keys() else False + csv_path = os.path.join(from_fn_path, rec_names[self.main_camn], 'cam{0}'.format(camn)) + coords_objs = load_2d_coords(frames[camn], csv_path, save_process_paths[camn], + image_sizes[camn], + from_tracks, show_plot=show_plot) + + else: + coords_objs = {'obj0': {'frame': frames[camn], + 'x_px': np.ones(np.shape(frames[camn])) * kwargs['x_crop'][camn-1], + 'y_px': np.ones(np.shape(frames[camn])) * kwargs['y_crop'][camn-1]}} + obj_names = coords_objs.keys() elif fn_name == 'save_avi': @@ -1168,9 +1203,9 @@ class BatchProcessing: if fn_name == 'crop': coords_obj = {key: coords_objs[obj_name][key][ind_img] for key in coords_objs[obj_name].keys()} - + img[camn][obj_name][ind_img] = crop_image(img[camn][obj_name][ind_img], coords_obj, - kwargs['height_crop'], kwargs['width_crop']) + kwargs['crop_height'], kwargs['crop_width']) elif fn_name == 'rotate': img[camn][obj_name][ind_img] = img[camn][obj_name][ind_img].rotate(kwargs['degrees']) @@ -1260,8 +1295,8 @@ class BatchProcessing: elif fn_name == 'equalize': print('> Starting batch images: Equalizing of images') elif fn_name == 'crop': - print('> Starting batch images: Cropping images (height: {0}, width: {1})'.format(kwargs['height_crop'], - kwargs['width_crop'])) + print('> Starting batch images: Cropping images (height: {0}, width: {1})'.format(kwargs['crop_height'], + kwargs['crop_width'])) elif fn_name == 'rotate': print('> Starting batch images: Rotating cameras views {0} at {1} degrees '.format(kwargs['camn_to_process'], kwargs['degrees'])) @@ -1343,7 +1378,7 @@ class BatchProcessing: batch_dict = {'nb_cam': self.nb_cam, 'image_sizes': image_sizes, 'frame_rates': frame_rates, 'first_frame': all_frames[0], 'last_frame': all_frames[-1], 'missing_frames': missing_frames, 'recording_paths': rec_paths, 'save_path': self.save_path} - + return batch_dict @staticmethod diff --git a/tests/test_calib_gimbal.py b/tests/test_calib_gimbal.py deleted file mode 100644 index 19daf90b1acd89016fbb241bad899e07efb802cf..0000000000000000000000000000000000000000 --- a/tests/test_calib_gimbal.py +++ /dev/null @@ -1,367 +0,0 @@ -import copy -import itertools - -import numpy as np -from matplotlib import pyplot as plt -from mpl_toolkits.mplot3d import Axes3D -from pyquaternion import Quaternion - -from camera import objects, calib, calib_gimbals - - -def test_calib_multi_focus(): - - show_plot = False - ind_to_plot = 1 # Index of focus distance to plot - - in_degrees = True - - with_cst_focal_length = False # Will adjust focal length (zoom in/out) to keep all the 3d points in the full image - - # # Without noize - # sigma = 0 - # max_mean_err_px = 10e-5 - - # With noize (std = 2 pixels) - sigma = 3 - max_mean_err_px = 6 - 
- pixel_size = 10e-5 - width_sensor = 0.2 - - # ------------------------------------------------------------------------------------------------------------------ - # Camera - coord_principal_pt = [1024, 1024] # So image size is 2024 x 2024 px - # distance_camera_to_focus_point / width_field_view_around_focus_point * width_sensor (m) - focal_length = 1 / 0.1 * width_sensor / pixel_size # (pixels) for an angle of view = 5.7° - coords = [-0.1, 0, 0] - quat = Quaternion(axis=[0, 1, 0], degrees=90) - - cam = objects.Camera(coord_principal_pt, [focal_length, focal_length], coords, quat, in_degrees=in_degrees) - cam.sensor_size = np.array(coord_principal_pt) * 2 - - # ------------------------------------------------------------------------------------------------------------------ - # Generate fake calibration (dlt_coefs vs various focus distances) - x_range = np.linspace(0.5, 2.5, 10) - x_focus_offset = 0.02 - yz = list(itertools.combinations_with_replacement([-0.2, -0.07, 0, 0.12, 0.2], 2)) - yz = [list(coords) for coords in yz] - # yz = [yz[i] for i in np.random.choice(len(yz), 4)] # To reduce number of 2d points to min needed for dlt_coefs - - quat_back_to_init = cam.quat.inverse - - dlt_coefs_vs_focus, repro_errs = [], [] - for x_focus in x_range: - - if not with_cst_focal_length: - # Correct focal length of the camera to fit all the 3d points in angle of view - cam.f = x_focus / np.max(yz) * width_sensor / pixel_size - - xyz = np.array([[np.nan, np.nan, np.nan]]) - for factor_offset in [-1, 1]: # At each focus distance, offset plan with yz along x axis (towards/away from cam) - x_offsetted = x_focus + factor_offset*x_focus_offset - - xyz = np.append(xyz, [[x_offsetted, coords[0], coords[1]] for coords in yz], axis=0) - - xyz = xyz[1:] - if show_plot and x_focus == x_range[ind_to_plot]: - fig = plt.figure() - ax = Axes3D(fig) - for coords in xyz: - ax.scatter(coords[0], coords[1], coords[2], marker='*', color='k', s=10) - ax.scatter(cam.x, cam.y, cam.z, marker='+', color='r', s=40) - ax.quiver(cam.x, cam.y, cam.z, cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2, - color='r') - ax.set_xlabel('x [m]') - ax.set_ylabel('y [m]') - ax.set_zlabel('z [m]') - - fig2 = plt.figure() - ax2 = Axes3D(fig2) - - uv = [] - for coords in xyz: - obj = objects.Object3D(coords, [0, 0, 0], in_degrees=in_degrees) - obj.transpose(- cam.coords) - obj.rotate(quat_back_to_init) - - # Coordinates in image plan: with Z axis normal to image plan, need initial normal vector to be [0, 0, 1] - cur_uv = - cam.f / obj.z * np.array([obj.x, obj.y]) # (m) - cur_uv = cur_uv * pixel_size # (pixels) - # cur_uv = - cur_uv # rotate image to avoid it being upside down - uv.append(cur_uv) - - if show_plot and x_focus == x_range[ind_to_plot]: - ax2.scatter(obj.x, obj.y, obj.z, marker='*', color='k', s=10) - ax2.scatter(cur_uv[0], cur_uv[1], - cam.f * pixel_size, marker='*', color='g', s=10) - ax2.plot([obj.x, cur_uv[0]], [obj.y, cur_uv[1]], [obj.z, - cam.f * pixel_size], marker='', color='g') - - if show_plot and x_focus == x_range[ind_to_plot]: - cam_p = copy.deepcopy(cam) - cam_p.transpose(- cam_p.coords) - cam_p.rotate(quat_back_to_init) - ax2.scatter(cam_p.x, cam_p.y, cam_p.z, marker='+', color='r', s=40) - ax2.quiver(cam_p.x, cam_p.y, cam_p.z, - cam_p.norm_vect[0], cam_p.norm_vect[1], cam_p.norm_vect[2], length=0.2, color='r') - ax2.set_xlabel('x [m]') - ax2.set_ylabel('y [m]') - ax2.set_zlabel('z [m]') - plt.show() - - # 
-------------------------------------------------------------------------------------------------------------- - # Convert to pixel coordinates - # pixel_size = np.amax(uv) / np.min(cam.sensor_size)) / 2 - uv = np.array(uv) / pixel_size + cam.sensor_size / 2 # scale meters to pixels - - # Add noize to 2d coordinates - uv = uv + np.reshape(np.random.normal(0, sigma, len(uv) * 2), [len(uv), 2]) - - # -------------------------------------------------------------------------------------------------------------- - # Generate DLT coefficients for each camera - # dlt_coef, repro_err = calib.estim_dlt_coef(xyz, uv, 3) # Will no produce dlt_coef from an orthogonal matrix - dlt_coef, repro_err = calib.estim_mdlt_coef(xyz, uv, show_plot=(show_plot and x_focus == x_range[ind_to_plot])) - assert repro_err <= max_mean_err_px - dlt_coefs_vs_focus.append(dlt_coef) - repro_errs.append(repro_err) - - cam_calib = calib_gimbals.CameraCalibMultiFocuses.from_dlt_coefs(dlt_coefs_vs_focus, x_range) - - if show_plot: - fig = plt.figure() - ax = fig.add_subplot(111) - ax.plot(repro_errs, marker='+', color='g') - ax.set_xlabel('# iteration') - ax.set_ylabel('reprojection error') - ax.set_yscale('log') - - fig2, axs = plt.subplots(4) - fig2.suptitle('Intrinsic parameters vs focus distances') - axs[0].plot(x_range, cam_calib.xps) - axs[0].set_ylabel('x principal point [px]') - axs[1].plot(x_range, cam_calib.yps) - axs[1].set_ylabel('y principal point [px]') - axs[2].plot(x_range, cam_calib.fxs) - axs[2].set_ylabel('x focal length [px]') - axs[3].plot(x_range, cam_calib.fys) - axs[3].set_xlabel('x focus length [m]') - axs[3].set_ylabel('y focal length [px]') - - plt.show() - - # ------------------------------------------------------------------------------------------------------------------ - # Compare dlt_coef gotten from intrinsic parameters interpolated from calib to coef gotten from uv/xyz - distance = 0.86 - - cam.f = distance / np.max(yz) * width_sensor / pixel_size - - xyz = np.array([[np.nan, np.nan, np.nan]]) - for factor_offset in [-1, 1]: # At each focus distance, offset plan with yz along x axis (towards/away from cam) - x_offsetted = distance + factor_offset * x_focus_offset - xyz = np.append(xyz, [[x_offsetted, coords[0], coords[1]] for coords in yz], axis=0) - - xyz = xyz[1:] - uv = [] - for coords in xyz: - obj = objects.Object3D(coords, [0, 0, 0], in_degrees=in_degrees) - obj.transpose(- cam.coords) - obj.rotate(quat_back_to_init) - - cur_uv = - cam.f / obj.z * np.array([obj.x, obj.y]) # (m) - cur_uv = cur_uv * pixel_size # (pixels) - uv.append(cur_uv) - - uv = np.array(uv) / pixel_size + cam.sensor_size / 2 # scale meters to pixels - uv = uv + np.reshape(np.random.normal(0, sigma, len(uv) * 2), [len(uv), 2]) # Noize - dlt_coef, repro_err = calib.estim_mdlt_coef(xyz, uv, show_plot=show_plot) - assert repro_err <= max_mean_err_px - - xp, yp, fx, fy = cam_calib.get_intrinsic(distance) - dlt_coef2 = calib.dlt_coef_from_intrinsic_extrinsic(xp, yp, fx, fy, cam.coords, cam.quat) - uv2 = calib.find2d(1, dlt_coef2, xyz) - - if show_plot: - fig = plt.figure() # Diff are due to differences in focal lengths (1st is set line 157 and 2nd is interpolated) - ax = fig.add_subplot(111) - for i in range(0, len(uv)): - ax.scatter(uv[i][0], uv[i][1], marker='+', color='g') - ax.scatter(uv2[i][0], uv2[i][1], marker='+', color='r') - ax.set_xlabel('x [pixels]') - ax.set_ylabel('y [pixels]') - plt.show() - - -def test_calib_gimbal_system(): - # TODO write test functions to test calibration of gimbal system - - show_plot = 
False - ind_to_plot = 0 - - in_degrees = True - - norm_vect_init = [0., 0., 1.] - - # # Without noize - # sigma = 0 - # max_mean_err_px = 10e-5 - # decimal = 4 - - # With noize (std = 2 pixels) - sigma = 2 - max_mean_err_px = 6 - decimal = -1 - - pixel_size = 10e-5 - width_sensor = 0.2 - - # ------------------------------------------------------------------------------------------------------------------ - # Camera - coord_principal_pt = [1024, 1024] # So image size is 2024 x 2024 px - # distance_camera_to_focus_point / width_field_view_around_focus_point * width_sensor (m) - focal_length = 1 / 0.1 * width_sensor / pixel_size # (pixels) for an angle of view = 5.7° - coords = [-.8, 0, 1.7] - quat = Quaternion() - - cam = objects.Camera(coord_principal_pt, [focal_length, focal_length], coords, quat, in_degrees=in_degrees) - cam.sensor_size = np.array(coord_principal_pt) * 2 - - # ------------------------------------------------------------------------------------------------------------------ - # Mirror and virtual camera - coords = [-.8, 0, 2.] - quat = Quaternion(axis=[0, 1, 0], degrees=145) - mirror = objects.RoundMirror(coords, quat, radius=0.1, in_degrees=in_degrees) - - virt_cam = objects.VirtualCamera(cam, mirror, in_degrees=in_degrees) - - # ------------------------------------------------------------------------------------------------------------------ - # April tags - uv_atags = list(itertools.combinations_with_replacement([-0.1, -0.05, 0, 0.05, 0.1], 2)) - uv_atags = [list(coords) for coords in uv_atags] - # uv_atags = [uv_april[i] for i in np.random.choice(len(uv_atags), 4)] # To reduce number of 2d points to min needed for dlt_coefs - atags = objects.MultiObjects(np.append( - [objects.Object3D([uv[0], uv[1], 0.], [0., 0., 0.], in_degrees=in_degrees) for uv in uv_atags], - [objects.Object3D([uv[0], .1, uv[1]-.1], [0., 0., 0.], in_degrees=in_degrees) for uv in uv_atags])) - - # Coordinates and orientation of april tags groups - coord_atags = [[-1., 0., 0.], [-.5, 0., 0.], [0., 0., 0.], [.5, 0., 0.], [1., 0., 0.], # Tags on the floor - [.5, .5, 0.], [1., .5, 0.], [1., 1., 0.], [1.2, -.5, .2], [1.2, -.5, .7], [1.2, -1., .2], - [1.2, 0., .2], [1.2, 0., .7], [1.2, 0., 1.2], [1.2, 0., 1.7], [1.2, 0., 2.2]] # Tags on the wall - - quat_floor, quat_wall = Quaternion(), Quaternion(axis=[0, 1, 0], degrees=-90) - quat_atags = [quat_floor, quat_floor, quat_floor, quat_floor, quat_floor, quat_floor, quat_floor, quat_floor, - quat_wall, quat_wall, quat_wall, quat_wall, quat_wall, quat_wall, quat_wall, quat_wall] - - # ------------------------------------------------------------------------------------------------------------------ - - if show_plot: - fig = plt.figure() # Diff are due to differences in focal lengths (1st is set line 157 and 2nd is interpolated) - ax = Axes3D(fig) - - for ind, _ in enumerate(quat_atags): - new_atags = copy.deepcopy(atags) - new_atags.rotate(quat_atags[ind]) - new_atags.transpose(coord_atags[ind]) - - if show_plot: - for obj in new_atags.objects: - ax.scatter(obj.x, obj.y, obj.z, marker='+', color='k') - - norm_vect = quat_atags[ind].rotate(norm_vect_init) - ax.scatter(coord_atags[ind][0], coord_atags[ind][1], coord_atags[ind][2], marker='+', color='k', s=40) - ax.quiver(coord_atags[ind][0], coord_atags[ind][1], coord_atags[ind][2], - norm_vect[0], norm_vect[1], norm_vect[2], length=0.1, color='k') - - if show_plot: - ax.scatter(cam.x, cam.y, cam.z, marker='+', color='g', s=40) - ax.quiver(cam.x, cam.y, cam.z, - cam.norm_vect[0], cam.norm_vect[1], 
cam.norm_vect[2], length=0.2, color='g') - ax.text(cam.x, cam.y, cam.z, ' cam') - - ax.scatter(mirror.x, mirror.y, mirror.z, marker='+', color='b', s=40) - ax.quiver(mirror.x, mirror.y, mirror.z, - mirror.norm_vect[0], mirror.norm_vect[1], mirror.norm_vect[2], length=0.2, color='b') - ax.text(mirror.x, mirror.y, mirror.z, ' mirror') - - ax.scatter(virt_cam.x, virt_cam.y, virt_cam.z, marker='+', color='r', s=40) - ax.quiver(virt_cam.x, virt_cam.y, virt_cam.z, - virt_cam.norm_vect[0], virt_cam.norm_vect[1], virt_cam.norm_vect[2], length=0.2, color='r') - ax.text(virt_cam.x, virt_cam.y, virt_cam.z, ' virtual cam') - - ax.set_xlabel('x [m]') - ax.set_ylabel('y [m]') - ax.set_zlabel('z [m]') - plt.ylim(ax.get_xlim()) - plt.show() - - # ------------------------------------------------------------------------------------------------------------------ - # - - dlt_coefs = np.zeros((len(quat_atags), 11)) - for ind, _ in enumerate(quat_atags): - new_atags = copy.deepcopy(atags) - new_atags.rotate(quat_atags[ind]) - new_atags.transpose(coord_atags[ind]) - - virt_cam.point_towards(coord_atags[ind]) # Point virtual camera towards the tags (move the mirror) - quat_back_to_init = virt_cam.quat.inverse - - # Simulate 2d points (virtual camera) from 3d coordinates - uv, xyz = [], [] - for atag in new_atags.objects: - if len(xyz) == 0: xyz = np.array([atag.coords]) - else: xyz = np.append(xyz, np.array([atag.coords]), axis=0) - cur_atag = copy.deepcopy(atag) - cur_atag.transpose(- virt_cam.coords) - cur_atag.rotate(quat_back_to_init) - - # Coordinates in image plan: with Z axis normal to image plan, need initial normal vector to be [0, 0, 1] - cur_uv = - cam.f / cur_atag.z * np.array([cur_atag.x, cur_atag.y]) # (m) - cur_uv = cur_uv * pixel_size # (pixels) - # cur_uv = - cur_uv # rotate image to avoid it being upside down - uv.append(cur_uv) - - # Convert to pixel coordinates - # pixel_size = np.amax(uv) / np.min(cam.sensor_size)) / 2 - uv = np.array(uv) / pixel_size + cam.sensor_size / 2 # scale meters to pixels - - # Add noize to 2d coordinates - uv = uv + np.reshape(np.random.normal(0, sigma, len(uv) * 2), [len(uv), 2]) - - # Generate DLT coefficients for each virtual camera - # dlt_coef, repro_err = calib.estim_dlt_coef(xyz, uv, 3) # Will no produce dlt_coef from an orthogonal matrix - dlt_coef, repro_err = calib.estim_mdlt_coef(xyz, uv, show_plot=(show_plot and ind == ind_to_plot)) - assert repro_err <= max_mean_err_px - - # Estimate 2d points (virtual camera) from 3d coordinates using new dlt_coef - new_uv = calib.find2d(1, dlt_coef, xyz) - new_uv2 = calib.find2d(1, virt_cam.dlt_coef, xyz) - - np.testing.assert_almost_equal(new_uv, new_uv2, decimal=decimal) - - # Use simulated 2d points (+ noize) and 3d to estimate new orientation of virtual camera (+ mirror) - - # By repeating previous step for all apriltag groups: - # Generate a calibration that correlate mirror orientation (quat?) to virtual camera orientation - # Use the calibration (How many points in april tag group is enough?) - - # Write hovering algo that look through the space and detect the april tag group (limited by mirror size and focal length of camera) - - dlt_coefs[ind] = dlt_coef - - # TODO: finish writing test - assert 1 == 0 - - -def test_virtual_cameras(): - # TODO write test functions to test virtual camera when the mirror is moving, etc ... 
- - assert 1 == 0 - - -def main(): - test_calib_multi_focus() - test_calib_gimbal_system() - test_virtual_cameras() - - -if __name__ == "__main__": - main() diff --git a/tests/test_camera.py b/tests/test_camera.py deleted file mode 100644 index c0b9929fd51cf319565b234c9409149a929270a7..0000000000000000000000000000000000000000 --- a/tests/test_camera.py +++ /dev/null @@ -1,225 +0,0 @@ -import copy - -from pyquaternion import Quaternion -import itertools - -import numpy as np -import matplotlib.pyplot as plt -from mpl_toolkits.mplot3d import Axes3D - -from camera import calib, objects - - -def test_pinhole_cameras(): - - show_plot = False - - norm_vect_init = [0., 0., 1.] - - # Without noize - sigma = 0 - max_mean_err_px = 10e-5 - decimal = 4 - # # With noize (std = 2 pixels) - # sigma = 1 - # max_mean_err_px = 3 - # decimal = -1 - - pixel_size = 10e-5 - - # Camera #1 - coord_principal_pt = [1024, 1024] # So image size is 2024 x 2024 px - # distance_camera_to_focus_point / width_field_view_around_focus_point * width_sensor (m) - focal_length = 1/0.5 * 0.02 / pixel_size # (pixels) for an angle of view = 28° - coord = [0, -1, 0] - quat = Quaternion(axis=[1, 0, 0], degrees=-90) * Quaternion(axis=[0, 0, 1], degrees=180) - - cam1 = objects.Camera(coord_principal_pt, [focal_length, focal_length], coord, quat) - cam1.sensor_size = np.array(coord_principal_pt) * 2 - - # Camera #2 - coord_principal_pt = [512, 512] - focal_length = 1.3/0.5 * 0.02 / pixel_size # (pixels) for an angle of view = 21° - coord = [0.75, -0.75, 0.75] - quat = Quaternion(axis=[0, 0, 1], degrees=45) * Quaternion(axis=[1, 0, 0], degrees=-135) - - cam2 = objects.Camera(coord_principal_pt, [focal_length, focal_length], coord, quat) - cam2.sensor_size = np.array(coord_principal_pt) * 2 - - # Camera #3 - coord_principal_pt = [1024, 512] - focal_length = 2/0.5 * 0.02 / pixel_size # (pixels) for an angle of view = 14° - coord = [2, 0, 0] - quat = Quaternion(axis=[0, 1, 0], degrees=-90) - - cam3 = objects.Camera(coord_principal_pt, [focal_length, focal_length], coord, quat) - cam3.sensor_size = np.array(coord_principal_pt) * 2 - - # ------------------------------------------------------------------------------------------------------------------ - cams = objects.MultiCameras([cam1, cam2, cam3]) - - # ------------------------------------------------------------------------------------------------------------------ - # Generate pixel coordinates of a set of 3d points for each camera - xyz = list(itertools.combinations_with_replacement([-0.45, -0.23, 0, 0.23, 0.45], 3)) - xyz = [list(coord) for coord in xyz] - # xyz = [xyz[i] for i in np.random.choice(len(xyz), 6)] # To reduce number of 3d points to min needed for dlt_coefs - - if show_plot: - fig = plt.figure() - ax = Axes3D(fig) - for coord in xyz: - ax.scatter(coord[0], coord[1], coord[2], marker='*', color='k', s=10) - for cam in cams.cameras: - ax.scatter(cam.x, cam.y, cam.z, marker='+', color='r', s=40) - ax.quiver(cam.x, cam.y, cam.z, cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2, color='r') - ax.set_xlabel('x [m]') - ax.set_ylabel('y [m]') - ax.set_zlabel('z [m]') - plt.show() - - dlt_coefs, dlt_coefs2 = [], [] - for cam in cams.cameras: - quat_back_to_init = cam.quat.inverse - - if show_plot: - fig2 = plt.figure() - ax2 = Axes3D(fig2) - - uv = [] - for coord in xyz: - obj = objects.Object3D(coord, [0, 0, 0], in_degrees=True) - obj.transpose(- cam.coords) - obj.rotate(quat_back_to_init) - - # Coordinates in image plan: with Z axis normal to image plan, need 
initial normal vector to be [0, 0, 1] - cur_uv = - cam.f / obj.z * np.array([obj.x, obj.y]) # (m) - cur_uv = cur_uv * pixel_size # (pixels) - # cur_uv = - cur_uv # rotate image to avoid it being upside down - uv.append(cur_uv) - - if show_plot: - ax2.scatter(obj.x, obj.y, obj.z, marker='*', color='k', s=10) - ax2.scatter(cur_uv[0], cur_uv[1], - cam.f * pixel_size, marker='*', color='g', s=10) - ax2.plot([obj.x, cur_uv[0]], [obj.y, cur_uv[1]], [obj.z, - cam.f * pixel_size], marker='', color='g') - - if show_plot: - cam_p = copy.deepcopy(cam) - cam_p.transpose(- cam_p.coords) - cam_p.rotate(quat_back_to_init) - ax2.scatter(cam_p.x, cam_p.y, cam_p.z, marker='+', color='r', s=40) - ax2.quiver(cam_p.x, cam_p.y, cam_p.z, - cam_p.norm_vect[0], cam_p.norm_vect[1], cam_p.norm_vect[2], length=0.2, color='r') - ax2.set_xlabel('x [m]') - ax2.set_ylabel('y [m]') - ax2.set_zlabel('z [m]') - plt.show() - - # -------------------------------------------------------------------------------------------------------------- - # Convert to pixel coordinates - # pixel_size = np.amax(uv) / np.min(cam.sensor_size)) / 2 - uv = np.array(uv) / pixel_size + cam.sensor_size / 2 # scale meters to pixels - - # Add noize to 2d coordinates - uv = uv + np.reshape(np.random.normal(0, sigma, len(uv)*2), [len(uv), 2]) - - # -------------------------------------------------------------------------------------------------------------- - # Generate DLT coefficients for each camera - - # Using only known object points and their images points - #dlt_coef, repro_err = calib.estim_dlt_coef(xyz, uv, 3) # Will no produce dlt_coef from an orthogonal matrix - dlt_coef, repro_err = calib.estim_mdlt_coef(xyz, uv, show_plot=show_plot) - assert repro_err <= max_mean_err_px - dlt_coefs.append(dlt_coef) - - # Using known camera intrinsic parameters + known object points and their images points - coord, quat, repro_err = calib.extrinsic_from_intrinsic_and_coordinates(cam.xp, cam.yp, cam.fx, cam.fy, xyz, uv, show_plot=show_plot) - dlt_coef2i = calib.dlt_coef_from_intrinsic_extrinsic(cam.xp, cam.yp, cam.fx, cam.fy, coord, quat) - assert repro_err <= max_mean_err_px - - # Using known camera extrinsic parameters + known object points and their images points - xp, yp, fx, fy, repro_err = calib.intrinsic_from_extrinsic_and_coordinates(cam.coords, cam.quat, xyz, uv, show_plot=show_plot) - dlt_coef2e = calib.dlt_coef_from_intrinsic_extrinsic(xp, yp, fx, fy, cam.coords, cam.quat) - assert repro_err <= max_mean_err_px - - uv2i, uv2e = calib.find2d(1, dlt_coef2i, xyz), calib.find2d(1, dlt_coef2e, xyz) - np.testing.assert_almost_equal(uv2i, uv2e, decimal=decimal) - - # Using known camera intrinsic and extrinsic parameters - dlt_coef2 = calib.dlt_coef_from_intrinsic_extrinsic(cam.xp, cam.yp, cam.fx, cam.fy, cam.coords, cam.quat) - repro_err = calib.reprojection_error(dlt_coef2, xyz, uv) - assert repro_err <= max_mean_err_px - - uv2 = calib.find2d(1, dlt_coef2, xyz) - dlt_coefs2.append(dlt_coef2) - # -------------------------------------------------------------------------------------------------------------- - - if show_plot: - fig = plt.figure() - ax = fig.add_subplot(111) - for i in range(0, len(uv)): - ax.scatter(uv[i][0], uv[i][1], marker='+', color='g', s=40) - ax.scatter(uv2[i][0], uv2[i][1], marker='+', color='r', s=40) - ax.set_xlabel('x [m]') - ax.set_ylabel('y [m]') - plt.show() - - np.testing.assert_almost_equal(uv, uv2, decimal=decimal) - - # 
------------------------------------------------------------------------------------------------------------------ - # re-Generate intrinsic and extrinsic parameters from DLT coefficients - - new_cams = [] - for i, dlt_coef in enumerate(dlt_coefs): - cam = cams.get(i) - - # -------------------------------------------------------------------------------------------------------------- - # Estimate intrinsic parameters - xp, yp, fx, fy = calib.intrinsic_from_dlt_coef(dlt_coef) - - np.testing.assert_almost_equal(cam.xp, xp, decimal=decimal) - np.testing.assert_almost_equal(cam.yp, yp, decimal=decimal) - np.testing.assert_almost_equal(cam.fx, fx, decimal=decimal) - np.testing.assert_almost_equal(cam.fy, fy, decimal=decimal) - - new_cams.append(objects.Camera([xp, yp], [fx, fy], coord, quat)) - - # -------------------------------------------------------------------------------------------------------------- - # Estimate extrinsic parameters - coord, quat = calib.extrinsic_from_dlt_coef(dlt_coef, (xp, yp, fx, fy)) - - np.testing.assert_almost_equal(cam.coords, coord, decimal=decimal) - np.testing.assert_almost_equal(cam.quat.rotate(norm_vect_init), quat.rotate(norm_vect_init), decimal=decimal) - - if show_plot: - norm_vect = quat.rotate(norm_vect_init) - - fig3 = plt.figure() - ax3 = Axes3D(fig3) - ax3.scatter(cam.x, cam.y, cam.z, marker='+', color='r', s=40) - ax3.quiver(cam.x, cam.y, cam.z, cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2, color='g') - ax3.quiver(cam.x, cam.y, cam.z, norm_vect[0], norm_vect[1], norm_vect[2], length=0.2, color='r') - ax3.set_xlabel('x [m]') - ax3.set_ylabel('y [m]') - ax3.set_zlabel('z [m]') - plt.show() - - if show_plot: - fig = plt.figure() - ax = Axes3D(fig) - for coord in xyz: - ax.scatter(coord[0], coord[1], coord[2], marker='*', color='k', s=10) - for cam in new_cams: - ax.scatter(cam.x, cam.y, cam.z, marker='+', color='r', s=40) - ax.quiver(cam.x, cam.y, cam.z, cam.norm_vect[0], cam.norm_vect[1], cam.norm_vect[2], length=0.2, color='r') - ax.set_xlabel('x [m]') - ax.set_ylabel('y [m]') - ax.set_zlabel('z [m]') - plt.show() - - -def main(): - test_pinhole_cameras() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/tests/test_process_batch.py b/tests/test_process_batch.py index 4a7a2e7768cb694bfaf9931f7076d7691c717a9d..d715ca628ab74a6f4826b4c550fba5b0657187c0 100755 --- a/tests/test_process_batch.py +++ b/tests/test_process_batch.py @@ -10,7 +10,7 @@ from utils.settings import Settings def test_batch_processing_images_preprocessing() -> None: """ - An example of how recordings (here only one) can be pre-processed + An example of how recordings (here only one) can be pre-processed (with dynamic cropping) in order to generate .avi videos that can then be analysed by DeepLabCut. 
""" recording_paths = ['../data/mosquito_escapes/cam1', '../data/mosquito_escapes/cam2', '../data/mosquito_escapes/cam3'] @@ -55,9 +55,9 @@ def test_batch_processing_images_preprocessing() -> None: # rec_names_to_process=rec_names_to_process, show_plot=show_plot) # ------------------------------------------------------------------------------------------------------------------ - # Crop all frames and save in save_path + '/_Cropped' - img_process.crop_images(50, 50, show_plot=False, from_tracks=True, - rec_names_to_process=rec_names_to_process, from_fn_name='sample', delete_previous=True) + # Dynamic crop of all frames and save in save_path + '/_Cropped' + img_process.dynamic_crop_images(50, 50, show_plot=False, from_tracks=True, + rec_names_to_process=rec_names_to_process, from_fn_name='sample', delete_previous=True) # ------------------------------------------------------------------------------------------------------------------ # Rotate view 2 (to always have swatter coming from right side of the image) and save in save_path + '/_Cropped' @@ -99,6 +99,45 @@ def test_batch_multiprocessing_images_preprocessing() -> None: rec_names_to_process=rec_names_to_process, delete_previous=delete_previous) +def test_batch_processing_images_preprocessing_static_crop() -> None: + """ + An example of how recordings (here only one) can be pre-processed (with static cropping) + in order to generate .avi videos that can then be analysed by DeepLabCut. + """ + recording_paths = ['../data/mosquito_escapes/cam1', '../data/mosquito_escapes/cam2', '../data/mosquito_escapes/cam3'] + save_path = '../data/mosquito_escapes/_Process' + dlt_path = os.path.join(os.getcwd(), '../data/mosquito_escapes/calib/20200215_DLTcoefs-py.csv') + + rec_names_to_process = ['20200216_075901-sample_binned'] # names of the recordings to process + # (Should be the name of the folder with the images of the main camera (by default: #1)) + + framerate = 12500 + + x_crop = [72, 250, 102] + y_crop = [44, 184, 10] + + # ------------------------------------------------------------------------------------------------------------------ + # Initialization of the BatchProcessing class + img_process = BatchProcessing.from_directory_by_names(recording_paths, framerate, save_path, + threshold_nb_diff_char=2) + # ------------------------------------------------------------------------------------------------------------------ + # Static crop of all frames and save in save_path + '/_Cropped' + img_process.static_crop_images(x_crop, y_crop, 50, 50, show_plot=False, + rec_names_to_process=rec_names_to_process, delete_previous=True) + + # ------------------------------------------------------------------------------------------------------------------ + # Rotate view 2 (to always have swatter coming from right side of the image) and save in save_path + '/_Cropped' + img_process.rotate_images(270, rec_names_to_process=rec_names_to_process, from_fn_name='crop', camn_to_process=[2]) + + # ------------------------------------------------------------------------------------------------------------------ + # Stitch all views together (for each frame) and save in save_path + '/_Stitched' + img_process.stitch_images(rec_names_to_process=rec_names_to_process, from_fn_name='crop', delete_previous=True) + + # ------------------------------------------------------------------------------------------------------------------ + # Save each recording in an .avi in save_path + '/_Avi' + img_process.save_avi(rec_names_to_process=rec_names_to_process, 
from_fn_name='stitch', delete_previous=True) + + def test_batch_multiprocessing_images_preprocessing2() -> None: """ This third example show the pre-processing of a second dataset using mutliprocessing. @@ -208,6 +247,7 @@ def test_batch_processing_fit_skeleton() -> None: def main(): test_batch_processing_images_preprocessing() test_batch_multiprocessing_images_preprocessing() + test_batch_processing_images_preprocessing_static_crop() test_batch_processing_dlc_analysis() test_batch_processing_fit_skeleton() diff --git a/tests/test_rotations.py b/tests/test_rotations.py deleted file mode 100644 index c52247ef9ff34566003d4b6eb15db124557a024c..0000000000000000000000000000000000000000 --- a/tests/test_rotations.py +++ /dev/null @@ -1,86 +0,0 @@ -import numpy as np -from scipy.spatial.transform import Rotation - -from camera import utils - - -def test_rotations(): - # Test how rotation matrices works (intrinsic Vs extrinsic) - - alpha, beta, gamma = -45, 90, 35 - in_degrees = True - - norm_vect_init = np.array([0., 0., 1.]) - - # ------------------------------------------------------------------------------------------------------------------ - # Single rotation matrices - alpha_r, beta_r, gamma_r = np.deg2rad(alpha), np.deg2rad(beta), np.deg2rad(gamma) - - R_x = np.array([[1, 0, 0], - [0, np.cos(alpha_r), -np.sin(alpha_r)], - [0, np.sin(alpha_r), np.cos(alpha_r)]]) - R_y = np.array([[np.cos(beta_r), 0, np.sin(beta_r)], - [0, 1, 0], - [- np.sin(beta_r), 0, np.cos(beta_r)]]) - R_z = np.array([[np.cos(gamma_r), -np.sin(gamma_r), 0], - [np.sin(gamma_r), np.cos(gamma_r), 0], - [0, 0, 1]]) - - R_xs = Rotation.from_euler('x', [alpha], degrees=in_degrees).as_matrix()[0] - R_ys = Rotation.from_euler('y', [beta], degrees=in_degrees).as_matrix()[0] - R_zs = Rotation.from_euler('z', [gamma], degrees=in_degrees).as_matrix()[0] - - np.testing.assert_almost_equal(R_x, R_xs) - np.testing.assert_almost_equal(R_y, R_ys) - np.testing.assert_almost_equal(R_z, R_zs) - - # ------------------------------------------------------------------------------------------------------------------ - # Multiple rotation matrices (extrinsic 'xyz' and intrinsic 'XYZ') - R_extrinsic = R_z @ R_y @ R_x - R_intrinsic = R_x @ R_y @ R_z - - R_extrinsics = Rotation.from_euler('xyz', [alpha, beta, gamma], degrees=in_degrees).as_matrix() - R_intrinsics = Rotation.from_euler('XYZ', [alpha, beta, gamma], degrees=in_degrees).as_matrix() - - np.testing.assert_almost_equal(R_extrinsic, R_extrinsics) - np.testing.assert_almost_equal(R_intrinsic, R_intrinsics) - - # The inverse of intrinsic rotation is equal to extrinsic rotation (inverse is true too) - # 'xyz' == 'ZYX' and 'zyx' == 'XYZ' - R_extrinsics_inv = Rotation.from_euler('zyx', [gamma, beta, alpha], degrees=in_degrees).as_matrix() - R_intrinsics_inv = Rotation.from_euler('ZYX', [gamma, beta, alpha], degrees=in_degrees).as_matrix() - - np.testing.assert_almost_equal(R_extrinsics, R_intrinsics_inv) - np.testing.assert_almost_equal(R_intrinsics, R_extrinsics_inv) - - # Compute new vectors - vect_extrinsic = [norm_vect_init] @ R_extrinsic - vect_intrinsic = R_intrinsic @ norm_vect_init - - #vect_extrinsics = [norm_vect_init] @ R_extrinsics - #vect_intrinsics = R_intrinsics @ norm_vect_init - vect_extrinsics = norm_vect_init @ Rotation.from_euler('xyz', [alpha, beta, gamma], degrees=in_degrees).as_matrix() - vect_intrinsics = Rotation.from_euler('XYZ', [alpha, beta, gamma], degrees=in_degrees).apply(norm_vect_init) - - np.testing.assert_almost_equal(vect_extrinsic, [vect_extrinsics]) - 
np.testing.assert_almost_equal(vect_intrinsic, vect_intrinsics) - - # Compute normalized vectors (is not needed here, because norm_vect_init is normalized) - angles_diff_extrinsic = utils.get_rotation_angles_btw_vectors(vect_extrinsic, vect_extrinsics) - angles_diff_intrinsic = utils.get_rotation_angles_btw_vectors(vect_intrinsic, vect_intrinsics) - - np.testing.assert_almost_equal(angles_diff_extrinsic, [0, 0, 0]) - np.testing.assert_almost_equal(angles_diff_intrinsic, [0, 0, 0]) - - # Diff between intrinsic and extrinsic rotations - angles_diff_extrinsic_intrinsic = utils.get_rotation_angles_btw_vectors(vect_extrinsic, vect_intrinsic) - - print(angles_diff_extrinsic_intrinsic) - - -def main(): - test_rotations() - - -if __name__ == "__main__": - main()
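The patch above replaces the single `crop_images` entry point with an explicit static/dynamic pair. As a quick reference, here is a minimal usage sketch of the two calls, reusing the paths and crop values from `tests/test_process_batch.py`; the import path of `BatchProcessing` is an assumption, as it is not shown in this patch:

```
# Minimal sketch of the two cropping entry points (import path assumed)
from batch_processing import BatchProcessing

recording_paths = ['../data/mosquito_escapes/cam1', '../data/mosquito_escapes/cam2', '../data/mosquito_escapes/cam3']
img_process = BatchProcessing.from_directory_by_names(recording_paths, 12500, '../data/mosquito_escapes/_Process',
                                                      threshold_nb_diff_char=2)

# Static crop: one fixed (x, y) centre per camera, every frame cropped to the same 50 x 50 px window
img_process.static_crop_images([72, 250, 102], [44, 184, 10], 50, 50, show_plot=False)

# Dynamic crop: the 50 x 50 px window follows the tracked animal frame by frame
# (assumes the 2d tracking step has been run first, so there are tracks to crop around)
img_process.dynamic_crop_images(50, 50, from_tracks=True, from_fn_name='sample', show_plot=False)
```

Both wrappers funnel into the same `'crop'` batch step: the new `dynamic_crop` kwarg they set selects between per-frame coordinates loaded from the 2d tracks and constant per-camera coordinates broadcast over all frames with `np.ones`.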