diff --git a/README.md b/README.md
index b593b78a2b0d564b0977b5e65d0c288cfc9f2093..56188ea2eca38f6275414c65408b84bf64cacd3b 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,101 @@
-# Analytics
+# F500 Data Analytics
 
-Application and tools for data analytics and visualizations
\ No newline at end of file
+## Project Description
+
+The F500 Data Analytics project provides a suite of tools and scripts for processing, analyzing, and visualizing point cloud data, particularly PLY point clouds produced by the Phenospex PlantEye. The project leverages libraries such as Open3D and NumPy to handle 3D data and perform operations such as NDVI computation and visualization. It also includes functionality for interacting with the Fairdom SEEK API to manage data resources.
+
+## Table of Contents
+
+- [Installation Instructions](#installation-instructions)
+- [Usage Guide](#usage-guide)
+- [Features](#features)
+- [Modules Overview](#modules-overview)
+- [Configuration & Customization](#configuration--customization)
+- [Testing & Debugging](#testing--debugging)
+- [Contributing Guide](#contributing-guide)
+- [License & Author Information](#license--author-information)
+
+## Installation Instructions
+
+To set up the project, ensure you have Python installed on your system. Then, install the required dependencies using pip:
+
+```bash
+pip install open3d numpy requests pandas isatools azure-storage-blob
+```
+
+## Usage Guide
+
+### Visualizing Point Cloud Data
+
+To visualize point cloud data and compute NDVI, use the `visualization_ply.py` script:
+
+```bash
+python visualizations/visualization_ply.py <path_to_ply_file>
+```
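NDVI is derived per point from the near-infrared and red reflectance values. A minimal NumPy sketch of the computation (illustrative only; the script's actual column names and edge-case handling may differ):

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red), per point; 0 where both bands are 0.
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

values = ndvi([0.8, 0.5, 0.0], [0.2, 0.5, 0.0])
```

Here `values` works out to `[0.6, 0.0, 0.0]`; the `where=` guard avoids division by zero on empty bands.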
+
+### Deleting Resources via Fairdom SEEK API
+
+To delete resources from a Fairdom SEEK server, use the `deleteFAIRObject.py` script:
+
+```bash
+python f500/collecting/deleteFAIRObject.py <token>
+```
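Under the hood this amounts to an authenticated JSON:API DELETE against the SEEK server, using the same `Token` authorization header the upload code builds. A hedged sketch of preparing (not sending) such a request with `requests` — the base URL, resource type, and ID here are placeholders, not the script's actual values:

```python
import requests

def build_delete_request(base_url, resource_type, resource_id, token):
    """Prepare an authenticated JSON:API DELETE request without sending it."""
    req = requests.Request(
        "DELETE",
        f"{base_url}/{resource_type}/{resource_id}",
        headers={
            "Accept": "application/vnd.api+json",
            "Authorization": f"Token {token}",  # header format used by the Fairdom class
        },
    )
    return req.prepare()

prepared = build_delete_request("https://seek.example.org", "data_files", 42, "secret")
```

Sending would then be `requests.Session().send(prepared)`; preparing first makes the request easy to inspect or log before it hits the server.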
+
+### F500 Toolkit
+
+The `toolkit.py` script provides a command-line interface for various data processing tasks:
+
+```bash
+python f500/collecting/toolkit.py <command>
+```
+
+Available commands include:
+- `restructure`
+- `pointclouds`
+- `verify`
+- `histogram`
+- `upload`
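The toolkit dispatches on argparse subcommands, the same pattern `F500.commandLineInterface` uses. A stripped-down sketch of that dispatch (subcommand names from the list above; per-command options reduced to `--loglevel` for brevity):

```python
import argparse

parser = argparse.ArgumentParser(description="F500 PlantEye data processing tool.")
sub = parser.add_subparsers(dest="command")

# One sub-parser per toolkit command; each carries its own options.
for name in ["restructure", "pointclouds", "verify", "histogram", "upload"]:
    sp = sub.add_parser(name)
    sp.add_argument("--loglevel", default="INFO")

args = parser.parse_args(["restructure", "--loglevel", "DEBUG"])
```

`args.command` then selects which processing method to run, and each sub-parser keeps its options isolated from the others.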
+
+## Features
+
+- **Point Cloud Visualization**: Visualize and process 3D point cloud data.
+- **NDVI Computation**: Compute and visualize NDVI from point cloud data.
+- **Resource Management**: Interact with Fairdom SEEK API to manage data resources.
+- **Data Processing**: Restructure, verify, and upload data using the F500 toolkit.
+
+## Modules Overview
+
+- **visualization_ply.py**: Handles point cloud visualization and NDVI computation.
+- **clearWhites.py**: Loads point cloud data for further processing.
+- **deleteFAIRObject.py**: Deletes resources from Fairdom SEEK server.
+- **toolkit.py**: Command-line interface for F500 data processing tasks.
+
+## Configuration & Customization
+
+- **API Token**: Ensure you have a valid authorization token for accessing the Fairdom SEEK API.
+- **File Paths**: Provide correct paths to PLY files when using visualization scripts.
+
+## Testing & Debugging
+
+- **Error Handling**: Scripts include basic error handling for file paths and API requests.
+- **Future Work**: Consider adding more robust error handling and automated tests for critical functionalities.
+
+## Contributing Guide
+
+Contributions are welcome! Please follow these steps:
+
+1. Fork the repository.
+2. Create a new branch for your feature or bug fix.
+3. Commit your changes with clear messages.
+4. Push your changes to your fork.
+5. Submit a pull request with a detailed description of your changes.
+
+## License & Author Information
+
+This project is licensed under the MIT License. For more information, see the LICENSE file.
+
+Author: Sven Warris
\ No newline at end of file
diff --git a/__init__.py b/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/f500/__init__.py b/f500/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/f500/collecting/F500.py b/f500/collecting/F500.py
index 3463b9f1bf88fa720bde6bb9ec249094947bc02d..4c4f36cfd04abe559f5e7276b1031fa8008266c7 100644
--- a/f500/collecting/F500.py
+++ b/f500/collecting/F500.py
@@ -1,8 +1,33 @@
 """
-ISA & isamodel
-https://isa-specs.readthedocs.io/en/latest/isamodel.html
-
+This script is a data processing tool for F500 PlantEye data. It provides functionality to restructure raw data, process point clouds, combine histograms, and upload data to a specified platform. Data is represented using the ISA model (https://isa-specs.readthedocs.io/en/latest/isamodel.html), and a command-line interface is provided for each operation.
+
+Classes:
+    F500: A class to handle the processing of F500 PlantEye data, including restructuring, point cloud processing, histogram combination, and data upload.
+
+Functions:
+    commandLineInterface: Sets up the command-line interface for the script.
+    setLogger: Configures the logging for the script.
+    removeAfterSpaceFromDataMatrix: Static method to clean up the 'DataMatrix' column in a DataFrame.
+    createISA: Initializes an ISA investigation object.
+    writeISAJSON: Writes the ISA investigation object to a JSON file.
+    copyPots: Static method to copy pot information from a reference DataFrame to a row.
+    measurementsToFile: Writes the measurements DataFrame to a file.
+    rawMeasurementsToFile: Static method to write raw measurements to a file.
+    addPointClouds: Static method to add point cloud file names to a row.
+    copyPointcloudFile: Static method to copy point cloud files to a specified location.
+    copyPlotPointcloudFile: Static method to copy plot point cloud files to a specified location.
+    createSample: Static method to create a sample object.
+    createAssay: Static method to create an assay object.
+    createAssayPlot: Static method to create an assay plot object.
+    correctDataMatrix: Corrects the 'DataMatrix' column in a row based on a reference DataFrame.
+    finalize: Finalizes the processing of measurements and creates assays.
+    getDirectoryListing: Returns a directory listing for a given root folder.
+    restructure: Restructures the raw data into an ISA-compliant format.
+    processPointclouds: Processes point cloud files and generates derived data.
+    combineHistograms: Combines histogram data from multiple assays into a single file.
+    upload: Uploads the processed data to a specified platform.
 """
+
 import argparse
 import sys
 import os
@@ -28,6 +53,24 @@ import datetime
 import string
 
 class F500:
+    """
+    A class to handle the processing of F500 PlantEye data, including restructuring, point cloud processing, histogram combination, and data upload.
+
+    Attributes:
+        description (defaultdict): A dictionary to store descriptions.
+        columnsToDrop (list): A list of columns to drop from the data.
+        ISA (dict): A dictionary to store ISA-related data.
+        datamatrix (list): A list to store data matrix information.
+        investigation (Investigation): An ISA investigation object.
+        checkAssayName (re.Pattern): A regex pattern to check assay names.
+        measurements (DataFrame): A DataFrame to store measurements.
+        currentFile (str): The current file being processed.
+        currentRoot (str): The current root directory being processed.
+        command (str): The command to execute.
+        assaysDone (set): A set to store completed assays.
+        samples (dict): A dictionary to store sample objects.
+    """
+
     description = defaultdict(str)
     columnsToDrop = []
     ISA = {}
@@ -42,6 +85,9 @@ class F500:
     samples = None
     
     def __init__(self):        
+        """
+        Initializes the F500 object with default values and configurations.
+        """
         # Some columns contain the wrong data, remove those:
         self.columnsToDrop = ["ndvi_aver","ndvi_bin0","ndvi_bin1","ndvi_bin2","ndvi_bin3","ndvi_bin4","ndvi_bin5",
                          "greenness_aver","greenness_bin0","greenness_bin1","greenness_bin2","greenness_bin3","greenness_bin4","greenness_bin5",
@@ -55,11 +101,12 @@ class F500:
         self.samples = {}
 
     def commandLineInterface(self):
+        """
+        Sets up the command-line interface for the script, defining arguments and subcommands.
+        """
         my_parser = argparse.ArgumentParser(description='F500 PlantEye data processing tool.')
         sub_parsers = my_parser.add_subparsers(dest="command")
 
-        
-    
         my_parser_restructure = sub_parsers.add_parser("restructure")
         my_parser_restructure.add_argument('--loglevel', help="Application log level (INFO/WARN/ERROR)", default="INFO")
         my_parser_restructure.add_argument('--logfile', help="Application log file")
@@ -160,6 +207,9 @@ class F500:
         
         
     def setLogger(self):
+        """
+        Configures the logging for the script based on command-line arguments.
+        """
         self.logger = logging.getLogger("F500")
         self.logger.setLevel(self.args.loglevel)
         if len(str(self.args.logfile)) > 0: 
@@ -167,6 +217,15 @@ class F500:
         
     @staticmethod
     def removeAfterSpaceFromDataMatrix(row):
+        """
+        Cleans up the 'DataMatrix' column in a DataFrame row by removing text after a space.
+
+        Args:
+            row (Series): A row from a DataFrame.
+
+        Returns:
+            Series: The modified row with cleaned 'DataMatrix' column.
+        """
         try:
             row["DataMatrix"] = row["DataMatrix"].strip().split(" ")[0]
         except:
@@ -174,6 +233,9 @@ class F500:
         return row
 
     def createISA(self):
+        """
+        Initializes an ISA investigation object and sets up the study and metadata.
+        """
         # Create investigation
         self.investigation = Investigation()
         self.investigation.title = "_".join([self.datamatrix[4], self.datamatrix[3], self.datamatrix[2]])
@@ -181,7 +243,6 @@ class F500:
         self.investigation.measurements = pandas.DataFrame()
         self.investigation.plots = set()
 
-
         # Create study, title comes datamatrix file (ID...)   
         self.investigation.studies.append(Study())
         if self.studyName != None:
@@ -200,6 +261,9 @@ class F500:
 
         
     def writeISAJSON(self):
+        """
+        Writes the ISA investigation object to a JSON file.
+        """
         jsonOutput = open(self.args.json, "w")
         jsonOutput.write(json.dumps(self.investigation, cls=ISAJSONEncoder, sort_keys=True, indent=4, separators=(',', ': ')))
         jsonOutput.close()
@@ -207,6 +271,17 @@ class F500:
 
     @staticmethod
     def copyPots(row, pots, f500):
+        """
+        Copies pot information from a reference DataFrame to a row.
+
+        Args:
+            row (Series): A row from a DataFrame.
+            pots (DataFrame): A DataFrame containing pot information.
+            f500 (F500): An instance of the F500 class.
+
+        Returns:
+            Series: The modified row with pot information.
+        """
         try:
             row["Pot"] = pots[ (pots["x"] == row["x"]) & (pots["y"] == row["y"]) ]["Pot"].iloc[0]
             if "Treatment" in pots.columns:
@@ -219,6 +294,9 @@ class F500:
         return row
 
     def measurementsToFile(self):
+        """
+        Writes the measurements DataFrame to a file.
+        """
         path =  "/".join([self.investigationPath, self.investigation.title, self.investigation.studies[0].title])
         filename = "derived/" + self.investigation.studies[0].title + ".csv"
         os.makedirs(path + "/derived", exist_ok=True)
@@ -226,13 +304,31 @@ class F500:
 
     @staticmethod
     def rawMeasurementsToFile(path, filename, measurements):
+        """
+        Writes raw measurements to a file.
+
+        Args:
+            path (str): The directory path to save the file.
+            filename (str): The name of the file.
+            measurements (list): A list of measurements to write.
+        """
         os.makedirs(path + "/derived", exist_ok=True)
         df = pandas.DataFrame(measurements)
         df = df.transpose()
         df.to_csv(path + "/" + filename, sep=";", index=False)
 
     @staticmethod
-    def addPointClouds(row, title) :
+    def addPointClouds(row, title):
+        """
+        Adds point cloud file names to a row.
+
+        Args:
+            row (Series): A row from a DataFrame.
+            title (str): The title to use in the file names.
+
+        Returns:
+            Series: The modified row with point cloud file names.
+        """
         filename = "{}_{}_full_sx{:03d}_sy{:03d}.ply.gz".format(
             title, row["timestamp_file"], 
             int(row["x"]),
@@ -259,6 +355,14 @@ class F500:
     
     @staticmethod
     def copyPointcloudFile(row, f500, fullPath):
+        """
+        Copies point cloud files to a specified location.
+
+        Args:
+            row (Series): A row from a DataFrame.
+            f500 (F500): An instance of the F500 class.
+            fullPath (str): The destination path for the point cloud files.
+        """
         if f500.args.copyPointcloud == "True": 
             AB = f500.root.split("/")[-1]
             pointcloudPath = "/".join(f500.root.split("/")[:-3]) + "/current/" + AB +'/I/'
@@ -299,6 +403,15 @@ class F500:
 
     @staticmethod
     def copyPlotPointcloudFile(row, f500, fullPath, title):
+        """
+        Copies plot point cloud files to a specified location.
+
+        Args:
+            row (Series): A row from a DataFrame.
+            f500 (F500): An instance of the F500 class.
+            fullPath (str): The destination path for the plot point cloud files.
+            title (str): The title to use in the file names.
+        """
         if f500.args.copyPointcloud == "True": 
             f500.logger.warn("The copy plot point cloud will copy a lot of data. However, users are generally not interested in these plot files.")
 
@@ -359,6 +472,20 @@ class F500:
 
     @staticmethod
     def createSample(samples, name, source, organism, taxon, term_source):
+        """
+        Creates a sample object if it doesn't already exist.
+
+        Args:
+            samples (dict): A dictionary to store sample objects.
+            name (str): The name of the sample.
+            source (Source): The source object for the sample.
+            organism (str): The organism name.
+            taxon (str): The taxon ID.
+            term_source (OntologySourceReference): The ontology source reference.
+
+        Returns:
+            Sample: The created or existing sample object.
+        """
         if str(name) not in samples:
             sample = Sample(name=str(name), derives_from=[source])
             characteristic_organism = Characteristic(category=OntologyAnnotation(term="Organism"),
@@ -372,6 +499,15 @@ class F500:
 
     @staticmethod
     def createAssay(row, f500, path, source):
+        """
+        Creates an assay object and adds it to the investigation.
+
+        Args:
+            row (Series): A row from a DataFrame.
+            f500 (F500): An instance of the F500 class.
+            path (str): The directory path for the assay.
+            source (Source): The source object for the assay.
+        """
         assay = Assay()
         assay.title = row["timestamp_file"]
         assay.filename = row["timestamp_file"]
@@ -451,6 +587,16 @@ class F500:
 
     @staticmethod
     def createAssayPlot(row, f500, path, source, title):
+        """
+        Creates an assay plot object and adds it to the investigation.
+
+        Args:
+            row (Series): A row from a DataFrame.
+            f500 (F500): An instance of the F500 class.
+            path (str): The directory path for the assay plot.
+            source (Source): The source object for the assay plot.
+            title (str): The title to use in the file names.
+        """
         assay = Assay()
         assay.title = row["timestamp_file"]
         assay.filename = row["timestamp_file"]
@@ -484,6 +630,16 @@ class F500:
             f500.investigation.studies[0].assays.append(assay)
 
     def correctDataMatrix(row, pots):
+        """
+        Corrects the 'DataMatrix' column in a row based on a reference DataFrame.
+
+        Args:
+            row (Series): A row from a DataFrame.
+            pots (DataFrame): A DataFrame containing pot information.
+
+        Returns:
+            Series: The modified row with corrected 'DataMatrix' column.
+        """
         result = pots.loc[(pots['x'] == row["x"]) & (pots['y'] == row['y']), 'Pot']
         
         # Access the result
@@ -493,6 +649,12 @@ class F500:
         return row
             
     def finalize(self, title):
+        """
+        Finalizes the processing of measurements and creates assays.
+
+        Args:
+            title (str): The title to use in the file names.
+        """
         # CSV will be combined data file (with corrected pot names) and ply file names
         # Do this, if the data matrix contains pot names (otherwise it either went wrong or data is from a different project
         # Then list the ply files as Image File
@@ -533,9 +695,21 @@ class F500:
             self.logger.info("No pots in main measurement file")
 
     def getDirectoryListing(self, rootFolder):
+        """
+        Returns a directory listing for a given root folder.
+
+        Args:
+            rootFolder (str): The root folder to list.
+
+        Returns:
+            generator: A generator yielding directory listings.
+        """
         return os.walk(rootFolder)
 
     def restructure(self):
+        """
+        Restructures the raw data into an ISA-compliant format.
+        """
         self.source = Source(name=self.args.source)
         self.sourceContainer = Source(name=self.args.sourceContainer)
         self.datamatrix = os.path.basename(self.args.datamatrix_file).split(".")[0].split("_")
@@ -618,6 +792,9 @@ class F500:
         
     
     def processPointclouds(self):
+        """
+        Processes point cloud files and generates derived data.
+        """
         from PointCloud import PointCloud
 
         self.logger.info("Reading project ISA {}".format(self.args.json))
@@ -691,6 +868,9 @@ class F500:
 
 
     def combineHistograms(self):
+        """
+        Combines histogram data from multiple assays into a single file.
+        """
         self.logger.info("Reading project ISA {}".format(self.args.json))
         self.logger.info("Creating combined histogram of {}".format(self.args.histogram))
         self.investigation = isajson.load(open(self.args.json, "r"))
@@ -728,10 +908,11 @@ class F500:
                 self.logger.warning("Could not combine data for {}, exception: {}".format(hLabel, e))
                 
     def upload(self):
+        """
+        Uploads the processed data to a specified platform.
+        """
         self.logger.info("Reading project ISA {}".format(self.args.json))
         self.logger.info("Uploading data to {}".format(self.args.URL))
         self.investigation = isajson.load(open(self.args.json, "r"))
         fairdom = Fairdom(self.investigation, self.args, self.logger)
-        fairdom.upload()
-        
-                                      
+        fairdom.upload()
\ No newline at end of file
diff --git a/f500/collecting/F500Azure.py b/f500/collecting/F500Azure.py
index e1eeea4dbb3c0739afe7da582b4da32be69cf94f..5773815cc6b43ca6637c7b053210060d472499fa 100644
--- a/f500/collecting/F500Azure.py
+++ b/f500/collecting/F500Azure.py
@@ -1,41 +1,95 @@
 from azure.storage.blob import BlobServiceClient
-
 from F500 import F500
+import os
+import json
+import pandas
+import shutil
+
+"""
+This script provides functionality to interact with Azure Blob Storage for managing 
+and processing data related to plant imaging experiments. It extends the F500 class 
+to include methods for initializing Azure connections, transferring data, and handling 
+experiment metadata.
+"""
+
+class F500Azure(F500):
+    """
+    A class to manage Azure Blob Storage interactions for plant imaging experiments.
+
+    This class extends the F500 class and provides additional methods to initialize 
+    Azure connections, transfer data between source and target containers, and handle 
+    experiment metadata.
+    """
 
-class F500Azure (F500):
-    
     def __init__(self, experimentID):
+        """
+        Initialize the F500Azure class with a specific experiment ID.
+
+        Args:
+            experimentID (str): The unique identifier for the experiment.
+        """
         super().__init__()
         self.experimentID = experimentID
-        
-    
+
     def initAzure(self, environment, metadata, logger):
+        """
+        Initialize Azure-related settings and metadata for the experiment.
+
+        Args:
+            environment (dict): A dictionary containing environment-specific settings.
+            metadata (dict): Metadata related to the experiment.
+            logger (Logger): Logger instance for logging information.
+
+        Side Effects:
+            Sets various attributes related to the experiment and Azure configuration.
+        """
         self.logger = logger
-        self.args.technologyType ="Imaging"
+        self.args.technologyType = "Imaging"
         self.args.technologyPlatform = "PlantEye"
-        self.args.sampleType ="Pot"
-        self.args.sampleTypeContainer ="Plot"
+        self.args.sampleType = "Pot"
+        self.args.sampleTypeContainer = "Plot"
         self.args.source = "Plant"
-        self.args.sourceContainer ="Plot"
-        self.args.copyPointcloud ="True"
+        self.args.sourceContainer = "Plot"
+        self.args.copyPointcloud = "True"
         self.args.investigationPath = str(self.experimentID)
 
         self.args.datamatrix_file = environment["datamatrix_file"]
         self.args.json = environment["json"]
         self.args.organism = environment["organism"]
         self.args.taxon = environment["taxon"]
-        self.args.start = enviroment["start"]
+        self.args.start = environment["start"]
         self.metadata = metadata
-        
+
     def connectToSource(self, sourceConnectionString, sourceContainerName, sourceBlobName):
+        """
+        Connect to the source Azure Blob Storage container.
+
+        Args:
+            sourceConnectionString (str): Connection string for the source Azure Blob Storage.
+            sourceContainerName (str): Name of the source container.
+            sourceBlobName (str): Name of the source blob.
+
+        Side Effects:
+            Initializes the source blob service and container clients.
+        """
         self.sourceConnectionString = sourceConnectionString
         self.sourceContainerName = sourceContainerName
         self.sourceBlobName = sourceBlobName
         self.sourceBlobServiceClient = BlobServiceClient.from_connection_string(sourceConnectionString)
         self.sourceContainerClient = self.sourceBlobServiceClient.get_container_client(sourceContainerName)
 
-    
     def connectToTarget(self, targetConnectionString, targetContainerName, targetBlobName):
+        """
+        Connect to the target Azure Blob Storage container.
+
+        Args:
+            targetConnectionString (str): Connection string for the target Azure Blob Storage.
+            targetContainerName (str): Name of the target container.
+            targetBlobName (str): Name of the target blob.
+
+        Side Effects:
+            Initializes the target blob service and container clients.
+        """
         self.targetConnectionString = targetConnectionString
         self.targetContainerName = targetContainerName
         self.targetBlobName = targetBlobName
@@ -43,18 +97,41 @@ class F500Azure (F500):
         self.targetContainerClient = self.targetBlobServiceClient.get_container_client(targetContainerName)
 
     def writeISAJSON(self):
+        """
+        Write the investigation data to a JSON file.
+
+        Side Effects:
+            Creates a JSON file with the investigation data.
+        """
         jsonOutput = open(self.args.json, "w")
         jsonOutput.write(json.dumps(self.investigation, cls=ISAJSONEncoder, sort_keys=True, indent=4, separators=(',', ': ')))
         jsonOutput.close()
 
     def measurementsToFile(self):
-        path =  "/".join([self.investigationPath, self.investigation.title, self.investigation.studies[0].title])
+        """
+        Write the measurements data to a CSV file.
+
+        Side Effects:
+            Creates directories and a CSV file with the measurements data.
+        """
+        path = "/".join([self.investigationPath, self.investigation.title, self.investigation.studies[0].title])
         filename = "derived/" + self.investigation.studies[0].title + ".csv"
         os.makedirs(path + "/derived", exist_ok=True)
         self.investigation.measurements.to_csv(path + "/" + filename, sep=";")
 
     @staticmethod
     def rawMeasurementsToFile(path, filename, measurements):
+        """
+        Write raw measurements data to a CSV file.
+
+        Args:
+            path (str): The directory path where the file will be saved.
+            filename (str): The name of the file.
+            measurements (dict): The measurements data to be written.
+
+        Side Effects:
+            Creates directories and a CSV file with the raw measurements data.
+        """
         os.makedirs(path + "/derived", exist_ok=True)
         df = pandas.DataFrame(measurements)
         df = df.transpose()
@@ -62,24 +139,46 @@ class F500Azure (F500):
 
     @staticmethod
     def copyPointcloudFile(row, f500, fullPath):
-        if f500.args.copyPointcloud == "True": 
+        """
+        Copy pointcloud files to a specified directory.
+
+        Args:
+            row (dict): A dictionary containing pointcloud file names.
+            f500 (F500): An instance of the F500 class.
+            fullPath (str): The destination directory path.
+
+        Side Effects:
+            Copies pointcloud files to the specified directory.
+
+        Exceptions:
+            Raises an exception if file copying fails.
+        """
+        if f500.args.copyPointcloud == "True":
             AB = f500.root.split("/")[-1]
-            pointcloudPath = "/".join(f500.root.split("/")[:-3]) + "/current/" + AB +'/I/'
-            f500.logger.info("Copying pointclouds from {}{} to {}".format(pointcloudPath, [row["pointcloud_full"],row["pointcloud_mr"],row["pointcloud_sl"],row["pointcloud_mg"]], fullPath))
-            try: 
+            pointcloudPath = "/".join(f500.root.split("/")[:-3]) + "/current/" + AB + '/I/'
+            f500.logger.info("Copying pointclouds from {}{} to {}".format(pointcloudPath, [row["pointcloud_full"], row["pointcloud_mr"], row["pointcloud_sl"], row["pointcloud_mg"]], fullPath))
+            try:
                 os.makedirs(fullPath, exist_ok=True)
-                shutil.copy(pointcloudPath+row["pointcloud_full"], fullPath) 
-                shutil.copy(pointcloudPath+row["pointcloud_mr"], fullPath) 
-                shutil.copy(pointcloudPath+row["pointcloud_sl"], fullPath)
-                shutil.copy(pointcloudPath+row["pointcloud_mg"], fullPath)
+                shutil.copy(pointcloudPath + row["pointcloud_full"], fullPath)
+                shutil.copy(pointcloudPath + row["pointcloud_mr"], fullPath)
+                shutil.copy(pointcloudPath + row["pointcloud_sl"], fullPath)
+                shutil.copy(pointcloudPath + row["pointcloud_mg"], fullPath)
             except Exception as e:
                 f500.logger.warn("Exception in copying files:\n{}".format(e))
-                #if f500.args.loglevel == "DEBUG":
-                #    raise e  
+                # if f500.args.loglevel == "DEBUG":
+                #     raise e
         else:
             f500.logger.info("Skipping copy (defined in command line)")
 
+    @staticmethod
     def getDirectoryListing(rootFolder):
-        return os.walk(rootFolder)
- 
+        """
+        Get a directory listing for the specified root folder.
+
+        Args:
+            rootFolder (str): The root folder path.
 
+        Returns:
+            generator: A generator yielding directory paths, directory names, and file names.
+        """
+        return os.walk(rootFolder)
\ No newline at end of file
diff --git a/f500/collecting/Fairdom.py b/f500/collecting/Fairdom.py
index 0625e859428d48e2ae4326a7f4cb2a0836fd5a93..0daa474c21d014d2d6a5cba5d887d9ae453ae366 100644
--- a/f500/collecting/Fairdom.py
+++ b/f500/collecting/Fairdom.py
@@ -1,3 +1,26 @@
+"""
+This script is designed to interact with the FAIRDOM platform to create and manage investigations, studies, assays, samples, and data files. 
+It uses the ISA-Tools library to handle ISA-JSON data structures and the requests library to communicate with the FAIRDOM API. 
+The script is intended to facilitate the upload of structured experimental data to the FAIRDOM repository.
+
+Classes:
+    Fairdom: Handles the creation and management of investigations, studies, assays, samples, and data files in FAIRDOM.
+
+Functions:
+    __init__: Initializes the Fairdom class with investigation data, arguments, and a logger.
+    createInvestigationJSON: Creates a JSON structure for an investigation.
+    createStudyJSON: Creates a JSON structure for a study.
+    createAssayJSON: Creates a JSON structure for an assay.
+    createDataFileJSON: Creates a JSON structure for a data file.
+    addSampleToAssayJSON: Adds a sample to an assay JSON structure.
+    addDataFileToAssayJSON: Adds a data file to an assay JSON structure.
+    addDataFilesToSampleJSON: Adds data files from an assay to a sample JSON structure.
+    createSampleJSON: Creates a JSON structure for a sample.
+    upload: Uploads the investigation, studies, assays, samples, and data files to FAIRDOM.
+
+Note: The script assumes that the user has a valid token for authentication with the FAIRDOM API.
+"""
+
 from isatools.isajson import ISAJSONEncoder
 import isatools
 from isatools.model import *
@@ -10,22 +33,51 @@ import time
 
 
 class Fairdom:
+    """
+    A class to manage the creation and upload of investigations, studies, assays, samples, and data files to the FAIRDOM platform.
+
+    Attributes:
+        investigation: An ISA-Tools investigation object containing the data to be uploaded.
+        args: Command-line arguments or configuration settings for the upload process.
+        logger: A logging object to record the process of uploading data.
+        session: A requests session object configured with headers for authentication with the FAIRDOM API.
+    """
+
     def __init__(self, investigation, args, logger):
+        """
+        Initializes the Fairdom class with the given investigation, arguments, and logger.
+
+        Args:
+            investigation: An ISA-Tools investigation object.
+            args: An object containing command-line arguments or configuration settings.
+            logger: A logging object for recording the upload process.
+
+        Side Effects:
+            Updates the session headers with authentication information.
+        """
         self.investigation = investigation
         self.args = args
         self.args.project = int(self.args.project)
         self.args.organism = int(self.args.organism)
         self.logger = logger
         
-        headers = {"Content-type": "application/vnd.api+json",
-           "Accept": "application/vnd.api+json",
-           "Accept-Charset": "ISO-8859-1",
-           "Authorization": "Token {}".format(self.args.token)}
+        headers = {
+            "Content-type": "application/vnd.api+json",
+            "Accept": "application/vnd.api+json",
+            "Accept-Charset": "ISO-8859-1",
+            "Authorization": "Token {}".format(self.args.token)
+        }
 
         self.session = requests.Session()
         self.session.headers.update(headers)
     
     def createInvestigationJSON(self):
+        """
+        Creates a JSON structure for an investigation.
+
+        Returns:
+            A dictionary representing the JSON structure of the investigation.
+        """
         investigationJSON = {}
         investigationJSON['data'] = {}
         investigationJSON['data']['type'] = 'investigations'
@@ -34,88 +86,156 @@ class Fairdom:
         investigationJSON['data']['attributes']['description'] = "PlantEye data from NPEC"
         investigationJSON['data']['relationships'] = {}
         investigationJSON['data']['relationships']['projects'] = {}
-        investigationJSON['data']['relationships']['projects']['data'] = [{'id' : str(self.args.project) , 'type' : 'projects'}]
+        investigationJSON['data']['relationships']['projects']['data'] = [{'id': str(self.args.project), 'type': 'projects'}]
         return investigationJSON
 
     def createStudyJSON(self, study, investigationID):
+        """
+        Creates a JSON structure for a study.
+
+        Args:
+            study: An ISA-Tools study object.
+            investigationID: The ID of the investigation to which the study belongs.
+
+        Returns:
+            A dictionary representing the JSON structure of the study.
+        """
         studyJSON = {}
         studyJSON['data'] = {}
         studyJSON['data']['type'] = 'studies'
         studyJSON['data']['attributes'] = {}
         studyJSON['data']['attributes']['title'] = study.name
         studyJSON['data']['attributes']['description'] = "F500 pot data"
-        #studyJSON['data']['attributes']['policy'] = {'access':'view', 'permissions': [{'resource': {'id': '1','type': 'people'},'access': 'manage'}]}
         studyJSON['data']['relationships'] = {}
         studyJSON['data']['relationships']['investigation'] = {}
-        studyJSON['data']['relationships']['investigation']['data'] = {'id' : str(investigationID), 'type' : 'investigations'}
+        studyJSON['data']['relationships']['investigation']['data'] = {'id': str(investigationID), 'type': 'investigations'}
         return studyJSON
     
     def createAssayJSON(self, assay, studyID):
+        """
+        Creates a JSON structure for an assay.
+
+        Args:
+            assay: An ISA-Tools assay object.
+            studyID: The ID of the study to which the assay belongs.
+
+        Returns:
+            A dictionary representing the JSON structure of the assay.
+        """
         assayJSON = {}
         assayJSON['data'] = {}
         assayJSON['data']['type'] = 'assays'
         assayJSON['data']['attributes'] = {}
         assayJSON['data']['attributes']['title'] = assay.filename
         assayJSON['data']['attributes']['description'] = 'NPEC F500 measurement assay'
-        assayJSON['data']['attributes']['assay_class'] = {'key' : 'EXP'}
-        assayJSON['data']['attributes']['assay_type'] = {'uri' : "http://jermontology.org/ontology/JERMOntology#Metabolomics"}
-        #assayJSON['data']['attributes']['technology_type'] = {'uri' : "http://jermontology.org/ontology/JERMOntology#PlantEye"}
+        assayJSON['data']['attributes']['assay_class'] = {'key': 'EXP'}
+        assayJSON['data']['attributes']['assay_type'] = {'uri': "http://jermontology.org/ontology/JERMOntology#Metabolomics"}
         assayJSON['data']['relationships'] = {}
         assayJSON['data']['relationships']['study'] = {}
-        assayJSON['data']['relationships']['study']['data'] = {'id' : str(studyID), 'type' : 'studies'}
+        assayJSON['data']['relationships']['study']['data'] = {'id': str(studyID), 'type': 'studies'}
         assayJSON['data']['relationships']['organisms'] = {}
-        assayJSON['data']['relationships']['organisms']['data'] = [{'id' : str(self.args.organism), 'type' : 'organisms'}]
+        assayJSON['data']['relationships']['organisms']['data'] = [{'id': str(self.args.organism), 'type': 'organisms'}]
         return assayJSON
     
     def createDataFileJSON(self, data_file):
+        """
+        Creates a JSON structure for a data file.
+
+        Args:
+            data_file: An object representing a data file.
+
+        Returns:
+            A dictionary representing the JSON structure of the data file.
+        """
         data_fileJSON = {}
         data_fileJSON['data'] = {}
         data_fileJSON['data']['type'] = 'data_files'
         data_fileJSON['data']['attributes'] = {}
         data_fileJSON['data']['attributes']['title'] = data_file.filename
-        data_fileJSON['data']['attributes']['content_blobs'] = [{'url': 'https://www.wur.nl/upload/854757ab-168f-46d7-b415-f8b501eebaa5_WUR_RGB_standard_2021-site.svg', 
-                                                                'original_filename': data_file.filename,
-                                                                'content-type': 'image/svg+xml'}]
+        data_fileJSON['data']['attributes']['content_blobs'] = [{
+            'url': 'https://www.wur.nl/upload/854757ab-168f-46d7-b415-f8b501eebaa5_WUR_RGB_standard_2021-site.svg',
+            'original_filename': data_file.filename,
+            'content-type': 'image/svg+xml'
+        }]
         data_fileJSON['data']['relationships'] = {}
         data_fileJSON['data']['relationships']['projects'] = {}
-        data_fileJSON['data']['relationships']['projects']['data'] = [{'id' : str(self.args.project) , 'type' : 'projects'}]
+        data_fileJSON['data']['relationships']['projects']['data'] = [{'id': str(self.args.project), 'type': 'projects'}]
         return data_fileJSON
     
     def addSampleToAssayJSON(self, sampleID, assayJSON):
+        """
+        Adds a sample to an assay JSON structure.
+
+        Args:
+            sampleID: The ID of the sample to be added.
+            assayJSON: The JSON structure of the assay to which the sample will be added.
+        """
         if 'samples' not in assayJSON['data']['relationships']:
-            assayJSON['data']['relationships']['samples'] = {} 
+            assayJSON['data']['relationships']['samples'] = {}
             assayJSON['data']['relationships']['samples']['data'] = []
         assayJSON['data']['relationships']['samples']['data'].append({'id': str(sampleID), 'type': 'samples'})
 
     def addDataFileToAssayJSON(self, data_fileID, assayJSON):
+        """
+        Adds a data file to an assay JSON structure.
+
+        Args:
+            data_fileID: The ID of the data file to be added.
+            assayJSON: The JSON structure of the assay to which the data file will be added.
+        """
         if 'data_files' not in assayJSON['data']['relationships']:
-            assayJSON['data']['relationships']['data_files'] = {} 
+            assayJSON['data']['relationships']['data_files'] = {}
             assayJSON['data']['relationships']['data_files']['data'] = []
         assayJSON['data']['relationships']['data_files']['data'].append({'id': str(data_fileID), 'type': 'data_files'})
 
     def addDataFilesToSampleJSON(self, assayJSON, sampleJSON):
+        """
+        Adds data files from an assay to a sample JSON structure.
+
+        Args:
+            assayJSON: The JSON structure of the assay containing the data files.
+            sampleJSON: The JSON structure of the sample to which the data files will be added.
+        """
         if 'data_files' not in sampleJSON['data']['relationships']:
             sampleJSON['data']['relationships']['data_files'] = {}
-            sampleJSON['data']['relationships']['data_files']['data'] = [] 
+            sampleJSON['data']['relationships']['data_files']['data'] = []
         if 'data_files' in assayJSON['data']['relationships']:
             sampleJSON['data']['relationships']['data_files']['data'].extend(assayJSON['data']['relationships']['data_files']['data'])
 
-    
     def createSampleJSON(self, sample):
+        """
+        Creates a JSON structure for a sample.
+
+        Args:
+            sample: An ISA-Tools sample object.
+
+        Returns:
+            A dictionary representing the JSON structure of the sample.
+        """
         sampleJSON = {}
         sampleJSON['data'] = {}
         sampleJSON['data']['type'] = 'samples'
         sampleJSON['data']['attributes'] = {}
         sampleJSON['data']['attributes']['title'] = sample.name
-        sampleJSON['data']['attributes']['attribute_map'] = {'PotID' : sample.name}
+        sampleJSON['data']['attributes']['attribute_map'] = {'PotID': sample.name}
         sampleJSON['data']['relationships'] = {}
         sampleJSON['data']['relationships']['projects'] = {}
-        sampleJSON['data']['relationships']['projects']['data'] = [{'id' : str(self.args.project), 'type' : 'projects'}]
+        sampleJSON['data']['relationships']['projects']['data'] = [{'id': str(self.args.project), 'type': 'projects'}]
         sampleJSON['data']['relationships']['sample_type'] = {}
-        sampleJSON['data']['relationships']['sample_type']['data'] = {'id' : str(self.args.sample_type), 'type' : 'sample_types'}        
+        sampleJSON['data']['relationships']['sample_type']['data'] = {'id': str(self.args.sample_type), 'type': 'sample_types'}
         return sampleJSON
     
     def upload(self):
+        """
+        Uploads the investigation, studies, assays, samples, and data files to the FAIRDOM platform.
+
+        Side Effects:
+            Communicates with the FAIRDOM API to create and upload data structures.
+            Logs the process and any errors encountered.
+
+        Raises:
+            SystemExit: If an error occurs during the upload process that prevents continuation.
+        """
         # create investigation
         investigationJSON = self.createInvestigationJSON()
         self.logger.info("Creating investigation in FAIRDOM at {}".format(self.args.URL))
@@ -123,7 +243,7 @@ class Fairdom:
         if r.status_code == 201 or r.status_code == 200:
             investigationID = r.json()['data']['id']
             self.logger.info("Investigation id {} created. Status: {}".format(investigationID, r.status_code))
-        else: 
+        else:
             self.logger.error("Could not create new investigation, error code {}".format(r.status_code))
             exit(1)
     
@@ -147,7 +267,7 @@ class Fairdom:
                             studyID = r.json()['data']['id']
                             self.currentStudies[sample.name]["id"] = studyID 
                             self.logger.info("Study id {} with ({}) created. Status: {}".format(studyID, sample.name, r.status_code))
-                        else: 
+                        else:
                             self.logger.error("Could not create new study, error code {}".format(r.status_code))
                             exit(1)
 
@@ -155,13 +275,13 @@ class Fairdom:
                 assayJSON = self.createAssayJSON(assay, studyJSON['id'])
                 # create add data files
                 for data_file in assay.data_files:
-                    if "derived" in data_file.filename or ".ply.gz" in data_file.filename or "ndvi" in data_file.filename: # for now, only upload phenotypic data
+                    if "derived" in data_file.filename or ".ply.gz" in data_file.filename or "ndvi" in data_file.filename:  # for now, only upload phenotypic data
                         data_fileJSON = self.createDataFileJSON(data_file)
                         r = self.session.post(self.args.URL + '/data_files', json=data_fileJSON)
                         if r.status_code == 201 or r.status_code == 200:
                             data_fileID = r.json()['data']['id']
                             self.logger.info("Data file id {} created ({}). Status: {}".format(data_fileID, data_file.filename, r.status_code))
-                        else: 
+                        else:
                             self.logger.error("Could not create new data file, error code {}".format(r.status_code))
                             exit(1)
                         data_fileJSON['id'] = data_fileID
@@ -174,20 +294,20 @@ class Fairdom:
                             sampleID = r.json()['data']['id']
                             self.samples[sample.name]['id'] = sampleID
                             self.logger.info("Sample id {} created ({}). Status: {}".format(sampleID, sample.name, r.status_code))
-                        else: 
+                        else:
                             self.logger.error("Could not create new sample, error code {}".format(r.status_code))
                             if r.status_code == 422:
                                 self.logger.info(self.logger.info(self.samples[sample.name]))
                                 self.logger.info(r.json())
                             exit(1)
                     sampleID = self.samples[sample.name]['id']
-                    self.addSampleToAssayJSON(sampleID, assayJSON )
+                    self.addSampleToAssayJSON(sampleID, assayJSON)
                     sampleJSON = self.samples[sample.name]
                 r = self.session.post(self.args.URL + '/assays', json=assayJSON)
                 if r.status_code == 201 or r.status_code == 200:
                     assayID = r.json()['data']['id']
                     self.logger.info("Assay id {} created. Status: {}".format(assayID, r.status_code))
-                else: 
+                else:
                     self.logger.error("Could not create new assay, error code {}".format(r.status_code))
                     if r.status_code == 422:
                         self.logger.info(self.logger.info(assayJSON))
@@ -198,8 +318,4 @@ class Fairdom:
                         r = self.session.post(self.args.URL + '/assays', json=assayJSON)
                         r.raise_for_status()
                     else:
-                        exit(1)
-
-
-            
-    
\ No newline at end of file
+                        exit(1)
\ No newline at end of file
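The `create*JSON` helpers above all emit the same JSON:API envelope that SEEK expects. A minimal standalone sketch of that shape, mirroring `createStudyJSON` (the title and investigation ID here are hypothetical examples, not values from the source):

```python
# Minimal sketch of the JSON:API payload shape produced by the
# create*JSON helpers. IDs and titles are hypothetical examples.
def create_study_payload(title, investigation_id):
    """Build a SEEK 'studies' payload (assumed shape, mirroring createStudyJSON)."""
    return {
        "data": {
            "type": "studies",
            "attributes": {"title": title, "description": "F500 pot data"},
            "relationships": {
                "investigation": {
                    "data": {"id": str(investigation_id), "type": "investigations"}
                }
            },
        }
    }

payload = create_study_payload("Pot 1", 7)
```

A payload like this is then POSTed with `session.post(self.args.URL + '/studies', json=payload)`, as `upload` does.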
diff --git a/f500/collecting/PointCloud.py b/f500/collecting/PointCloud.py
index 02c6b422a2e06690ef37173f586f2b3d5668338b..44ce96088db35ddbcc029e16ae3436158029706b 100644
--- a/f500/collecting/PointCloud.py
+++ b/f500/collecting/PointCloud.py
@@ -2,52 +2,114 @@ import open3d as o3d
 import numpy
 import os
 
+"""
+This module provides a class for handling point cloud data using the Open3D library.
+It supports reading point cloud data from a file, calculating several spectral
+indices (NDVI, PSRI, NPCI, greenness, hue), trimming the point cloud by z-value,
+and rendering images of the point cloud with or without color rescaling.
+"""
+
 class PointCloud:
-    
+    """
+    A class to represent and manipulate a point cloud using Open3D.
+
+    Attributes:
+    ----------
+    pcd : open3d.geometry.PointCloud
+        The point cloud data.
+    trimmed : bool
+        A flag indicating whether the point cloud has been trimmed.
+    """
+
     pcd = None
     trimmed = False
-    
-       
+
     def __init__(self, filename):
+        """
+        Initializes the PointCloud object by reading point cloud data from a file.
+
+        Parameters:
+        ----------
+        filename : str
+            The path to the point cloud file in PLY format.
+        """
         self.pcd = o3d.io.read_point_cloud(filename, format="ply")
         self.trimmed = False
-    
+
     def writeHistogram(self, data, filename, timepoint, sampleName, bins, dataRange=None):
+        """
+        Writes a histogram of the given data to a file.
+
+        Parameters:
+        ----------
+        data : numpy.ndarray
+            The data for which the histogram is to be calculated.
+        filename : str
+            The path to the file where the histogram will be written.
+        timepoint : str
+            The timepoint associated with the data.
+        sampleName : str
+            The name of the sample.
+        bins : int
+            The number of bins for the histogram.
+        dataRange : tuple, optional
+            The lower and upper range of the bins. Defaults to (data.min(), data.max()).
+
+        Side Effects:
+        ------------
+        Writes the histogram data to the specified file.
+        """
         data = data[numpy.isfinite(data)]
         hist, bin_edges = numpy.histogram(data, bins=bins, range=dataRange)
-        f = open(filename, "w")
-        f.write("timepoint;sample;{}\n".format(";".join(["bin" + str(x) for x in range(0, len(bin_edges))])))        
-        f.write("{};{};{}\n".format(timepoint, "edges", ";".join([str(x) for x in bin_edges])))
-        f.write("{};{};{}\n".format(timepoint, sampleName, ";".join([str(x) for x in hist])))
-        f.close()
-
+        with open(filename, "w") as f:
+            f.write("timepoint;sample;{}\n".format(";".join(["bin" + str(x) for x in range(0, len(bin_edges))])))
+            f.write("{};{};{}\n".format(timepoint, "edges", ";".join([str(x) for x in bin_edges])))
+            f.write("{};{};{}\n".format(timepoint, sampleName, ";".join([str(x) for x in hist])))
 
     def getWavelengths(self):
+        """
+        Retrieves the wavelengths from the point cloud.
+
+        Returns:
+        -------
+        numpy.ndarray
+            The wavelengths as a numpy array. If the point cloud is trimmed, returns a vertically stacked array.
+        """
         if self.trimmed:
-            #print("trimmed data")
             return numpy.vstack(self.pcd.wavelengths)
         else:
-            #print("original data")
             return numpy.asarray(self.pcd.wavelengths)
-    
+
     def get_psri(self):
-        #(RED − GREEN)/(NIR) 
-        numpy.seterr(divide='ignore',invalid='ignore')
-        wavelengths=self.getWavelengths()
-        red =wavelengths[:,0]
-        #red = (red - min(red))/(max(red)-min(red)) * 70 + 630
-        green =wavelengths[:,1]
-        #green = (green - min(green))/(max(green)-min(green)) * 80 + 500
-        nir =wavelengths[:,3] 
-        #nir = (nir - min(nir))/(max(nir)-min(nir)) * 300 + 700
-        return ((red-green)/nir)
-    
+        """
+        Calculates the Plant Senescence Reflectance Index (PSRI).
+
+        Returns:
+        -------
+        numpy.ndarray
+            The PSRI values calculated as (RED - GREEN) / NIR.
+        """
+        numpy.seterr(divide='ignore', invalid='ignore')
+        wavelengths = self.getWavelengths()
+        red = wavelengths[:, 0]
+        green = wavelengths[:, 1]
+        nir = wavelengths[:, 3]
+        return ((red - green) / nir)
+
     def get_hue(self):
-        numpy.seterr(divide='ignore',invalid='ignore')
-        wavelengths=self.getWavelengths()
-        red =wavelengths[:,0]
-        green =wavelengths[:,1]
-        blue =wavelengths[:,2]
+        """
+        Calculates the hue from the RGB wavelengths.
+
+        Returns:
+        -------
+        numpy.ndarray
+            The hue values calculated from the RGB wavelengths.
+        """
+        numpy.seterr(divide='ignore', invalid='ignore')
+        wavelengths = self.getWavelengths()
+        red = wavelengths[:, 0]
+        green = wavelengths[:, 1]
+        blue = wavelengths[:, 2]
         hue = numpy.zeros(len(red))
         for c in range(len(hue)):
             minColor = min([red[c], green[c], blue[c]])
@@ -59,137 +121,167 @@ class PointCloud:
                     hue[c] = 2.0 + (blue[c] - red[c]) / (maxColor - minColor)
                 else:
                     hue[c] = 4.0 + (red[c] - green[c]) / (maxColor - minColor)
-                    
+
                 hue[c] = hue[c] * 60.0
-                if (hue[c] < 0):
-                     hue[c] = hue[c] + 360.0
-        return hue;
-        
-    
+                if hue[c] < 0:
+                    hue[c] = hue[c] + 360.0
+        return hue
+
     def get_greenness(self):
-        numpy.seterr(divide='ignore',invalid='ignore')
-        wavelengths=self.getWavelengths()
-        # (2*G-R-B)/(2*R+G+B)
-        #print(wavelengths)
-        return ((2.0*wavelengths[:,1]-wavelengths[:,0] -wavelengths[:,2]) / 
-                (2.0*wavelengths[:,1]+wavelengths[:,0] +wavelengths[:,2]))
+        """
+        Calculates the greenness index.
+
+        Returns:
+        -------
+        numpy.ndarray
+            The greenness values calculated as (2*G - R - B) / (2*R + G + B).
+        """
+        numpy.seterr(divide='ignore', invalid='ignore')
+        wavelengths = self.getWavelengths()
+        return ((2.0 * wavelengths[:, 1] - wavelengths[:, 0] - wavelengths[:, 2]) /
+                (2.0 * wavelengths[:, 1] + wavelengths[:, 0] + wavelengths[:, 2]))
 
     def get_ndvi(self):
-        numpy.seterr(divide='ignore',invalid='ignore')
-        wavelengths=self.getWavelengths()
-        return (wavelengths[:,3]-wavelengths[:,0])/(wavelengths[:,3]+wavelengths[:,0])
-    
-    #(RED − BLUE)/(RED + BLUE) 
+        """
+        Calculates the Normalized Difference Vegetation Index (NDVI).
+
+        Returns:
+        -------
+        numpy.ndarray
+            The NDVI values calculated as (NIR - RED) / (NIR + RED).
+        """
+        numpy.seterr(divide='ignore', invalid='ignore')
+        wavelengths = self.getWavelengths()
+        return (wavelengths[:, 3] - wavelengths[:, 0]) / (wavelengths[:, 3] + wavelengths[:, 0])
+
     def get_npci(self):
-        numpy.seterr(divide='ignore',invalid='ignore')
-        wavelengths=self.getWavelengths()
-        return ((wavelengths[:,0]-wavelengths[:,2])/(wavelengths[:,0]+wavelengths[:,2]))
+        """
+        Calculates the Normalized Pigment Chlorophyll Index (NPCI).
+
+        Returns:
+        -------
+        numpy.ndarray
+            The NPCI values calculated as (RED - BLUE) / (RED + BLUE).
+        """
+        numpy.seterr(divide='ignore', invalid='ignore')
+        wavelengths = self.getWavelengths()
+        return ((wavelengths[:, 0] - wavelengths[:, 2]) / (wavelengths[:, 0] + wavelengths[:, 2]))
 
     def setColors(self, colors):
+        """
+        Sets the colors of the point cloud.
+
+        Parameters:
+        ----------
+        colors : numpy.ndarray
+            The colors to be set for the point cloud.
+        """
         self.pcd.colors = o3d.utility.Vector3dVector(colors)
-    
-    
+
     def render_image(self, filename, image_width, image_height, rescale=True):
+        """
+        Renders an image of the point cloud.
+
+        Parameters:
+        ----------
+        filename : str
+            The path to the file where the image will be saved.
+        image_width : int
+            The width of the image.
+        image_height : int
+            The height of the image.
+        rescale : bool, optional
+            Whether to rescale the colors before rendering. Default is True.
+        """
         if rescale:
             self.render_image_rescale(filename, image_width, image_height)
         else:
             self.render_image_no_rescale(filename, image_width, image_height)
 
     def trim(self, zIndex):
+        """
+        Trims the point cloud based on the z-values.
+
+        Parameters:
+        ----------
+        zIndex : float
+            The z-value threshold for trimming the point cloud.
+
+        Side Effects:
+        ------------
+        Modifies the point cloud to only include points with z-values greater than or equal to zIndex.
+        """
         if zIndex == 0:
             return
-        # Convert point cloud points to numpy array
         self.untrimmedPCD = self.pcd
         points = numpy.asarray(self.pcd.points)
-
-        # Create a mask based on the z-values
         mask = points[:, 2] >= zIndex
-        #print(mask)
-        # Filter the point cloud
         filtered_points = points[mask]
-        
-        # Create a new point cloud with the filtered points
         filtered_pcd = o3d.geometry.PointCloud()
         filtered_pcd.points = o3d.utility.Vector3dVector(filtered_points)
 
-        # Optionally, if you want to also filter the colors or normals, you can do the following:
         if self.pcd.has_colors():
             colors = numpy.asarray(self.pcd.colors)
             filtered_colors = colors[mask]
             filtered_pcd.colors = o3d.utility.Vector3dVector(filtered_colors)
 
         wavelengths = numpy.asarray(self.pcd.wavelengths)
-        #print(wavelengths[mask])
         filtered_wavelengths = wavelengths[mask]
         filtered_pcd.wavelengths = filtered_wavelengths
-        #print(filtered_pcd.wavelengths)
         self.pcd = filtered_pcd
-        #print(self.pcd.wavelengths)
         self.trimmed = True
-            
+
     def render_image_no_rescale(self, filename, image_width, image_height):
-        ''' Create open3D vizualization'''    
-        # Create the open3d Visualizer class where the mesh can be rendered from
+        """
+        Renders an image of the point cloud without rescaling the colors.
+
+        Parameters:
+        ----------
+        filename : str
+            The path to the file where the image will be saved.
+        image_width : int
+            The width of the image.
+        image_height : int
+            The height of the image.
+
+        Side Effects:
+        ------------
+        Saves the rendered image to the specified file.
+        """
         vis = o3d.visualization.Visualizer()
-        # create an invisible window with the desired dimensions of the image
         vis.create_window(width=image_width, height=image_height, visible=False)
         vis.add_geometry(self.pcd)
         vis.update_geometry(self.pcd)
-
-        # Get the view control
-        #ctr = vis.get_view_control()
-
-
-        # In Open3D, the extrinsic matrix is a 4x4 matrix that defines the transformation from world to camera coordinates.
-
-        # Set the camera parameters
-        #camera_params = ctr.convert_to_pinhole_camera_parameters()
-        #print(camera_params.extrinsic)
-        #camera_extrinsic = numpy.copy(camera_params.extrinsic)
-        #camera_extrinsic[:3,3] = [6, 1600, 1600]#camera_extrinsic
-        #camera_extrinsic[:3,3] = [0, 0, 1600]#camera_extrinsic
-        #camera_extrinsic[:3,3] = [0, 1600, 1600]#camera_extrinsic
-        #camera_extrinsic[:3,3] = [0, 1600, 0]#camera_extrinsic
-        #camera_params.extrinsic = camera_extrinsic
-        #ctr.convert_from_pinhole_camera_parameters(camera_params, True)
-        #camera_params = ctr.convert_to_pinhole_camera_parameters()
-        #print(camera_params.extrinsic)
-        #ctr.set_camera(camera_pos, lookat, up_dir)
-        #vis.get_render_option().load_from_json(filename = os.path.dirname(__file__) + '/render_parameters.json')
-        #vis.update_renderer()
-        # render the view on the mesh as PNG file and close the (invisible) window
         vis.capture_screen_image(filename, do_render=True)
         vis.destroy_window()
-        
-        
+
     def render_image_rescale(self, filename, image_width, image_height):
+        """
+        Renders an image of the point cloud with rescaled colors.
 
-        # Convert the colors to a numpy array
-        colors =self.getWavelengths()
+        Parameters:
+        ----------
+        filename : str
+            The path to the file where the image will be saved.
+        image_width : int
+            The width of the image.
+        image_height : int
+            The height of the image.
 
-        # Select only the first three channels (RGB)
-        colors = colors[:,:3]
-        # Calculate the 1st and 99th percentiles across all color channels
+        Side Effects:
+        ------------
+        Saves the rendered image to the specified file.
+        """
+        colors = self.getWavelengths()
+        colors = colors[:, :3]
         p01 = numpy.percentile(colors, 1)
         p99 = numpy.percentile(colors, 99)
-
-        # Perform linear stretching to map the 1st percentile value to 0 and the 99th percentile value to 255
         scaled_colors = ((colors - p01) / (p99 - p01) * 255)
-
-        # Clip values below 0 and above 255
         scaled_colors = numpy.clip(scaled_colors, 0, 255).astype(numpy.uint8)
-
-        # Convert to double precision
         scaled_colors = scaled_colors.astype(numpy.float64) / 255
 
-        # Reshape to 2D array with 3 columns, if needed
         if len(scaled_colors.shape) < 2:
             scaled_colors = numpy.reshape(scaled_colors, (-1, 3))
 
-        # Assign the scaled colors back to the point cloud
         self.pcd.colors = o3d.utility.Vector3dVector(scaled_colors)
-
-        self.render_image_no_rescale(filename, image_width, image_height)
-
-    
-            
+        self.render_image_no_rescale(filename, image_width, image_height)
\ No newline at end of file
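The spectral-index methods above all follow the same pattern: slice a channel column out of the wavelengths array and combine the columns element-wise. A standalone sketch of the NDVI case, assuming the (N, 4) column layout (red, green, blue, nir) used by `get_ndvi`; the sample values are made up:

```python
import numpy

# Standalone sketch of the NDVI computation from get_ndvi:
# NDVI = (NIR - RED) / (NIR + RED), with wavelengths stored as an
# (N, 4) array of (red, green, blue, nir) columns. Data are made up.
wavelengths = numpy.array([
    [0.10, 0.20, 0.05, 0.60],   # vegetation-like point: high NIR
    [0.40, 0.30, 0.25, 0.45],   # soil-like point: lower NIR/RED contrast
])
red = wavelengths[:, 0]
nir = wavelengths[:, 3]
ndvi = (nir - red) / (nir + red)
```

Healthy vegetation pushes NDVI toward 1, which is why the first point scores much higher than the second.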
diff --git a/f500/collecting/__init__.py b/f500/collecting/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/f500/collecting/deleteFAIRObject.py b/f500/collecting/deleteFAIRObject.py
index 15de12b898ae56d7b3c7a765aec0a8b253b7fd35..8033d1d052d3e3c9d95e79e72a9170c2ce495d5f 100644
--- a/f500/collecting/deleteFAIRObject.py
+++ b/f500/collecting/deleteFAIRObject.py
@@ -1,29 +1,61 @@
+"""
+This script deletes resources from a specified host using the Fairdom SEEK API.
+It sends HTTP DELETE requests via the requests library to remove data files, samples,
+assays, studies, and investigations from the server. An authorization token is required.
+
+Usage:
+    python deleteFAIRObject.py <token>
+
+Where <token> is the authorization token required for accessing the API.
+
+Note: This script performs destructive actions by deleting resources. Use with caution.
+"""
+
 import requests
 import sys
 
-token = sys.argv[1]
+def main():
+    """Main function to execute the deletion of resources.
+
+    This function sets up the session with the necessary headers and iterates over predefined ranges 
+    to delete resources from the server. It deletes data files, samples, assays, studies, and investigations.
+
+    Raises:
+        requests.exceptions.RequestException: If a network-related error occurs during the requests.
+    """
+    token = sys.argv[1]
+
+    headers = {
+        "Content-type": "application/vnd.api+json",
+        "Accept": "application/vnd.api+json",
+        "Accept-Charset": "ISO-8859-1",
+        "Authorization": "Token {}".format(token)
+    }
+
+    session = requests.Session()
+    session.headers.update(headers)
+    r = 32000
+    host = "https://test.fairdom-seek.bif.containers.wurnet.nl/"
 
-headers = {"Content-type": "application/vnd.api+json",
-   "Accept": "application/vnd.api+json",
-   "Accept-Charset": "ISO-8859-1",
-   "Authorization": "Token {}".format(token)}
+    # Delete data files
+    for i in range(1000, r):
+        session.delete(host + "data_files/{}".format(i))
 
-session = requests.Session()
-session.headers.update(headers)
-r = 32000
-host = "https://test.fairdom-seek.bif.containers.wurnet.nl/"
-for i in range(1000,r):
-    session.delete(host + "data_files/{}".format(i))
-for i in range(0,500):
-    session.delete(host + "samples/{}".format(i))
+    # Delete samples
+    for i in range(0, 500):
+        session.delete(host + "samples/{}".format(i))
 
-for i in range(0,1300):
-    session.delete(host + "assays/{}".format(i))
-for i in range(0,50):
-    session.delete(host + "studies/{}".format(i))
+    # Delete assays
+    for i in range(0, 1300):
+        session.delete(host + "assays/{}".format(i))
 
-for i in range(0,20):
-    session.delete(host + "investigations/{}".format(i))
-    
+    # Delete studies
+    for i in range(0, 50):
+        session.delete(host + "studies/{}".format(i))
 
+    # Delete investigations
+    for i in range(0, 20):
+        session.delete(host + "investigations/{}".format(i))
 
+if __name__ == "__main__":
+    main()
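The deletion loops above discard every HTTP response, so an invalid token fails silently. A minimal sketch (hypothetical helpers, not part of the script) of tallying DELETE outcomes per id range:

```python
def classify_delete(status_code):
    """Map an HTTP status code from a DELETE request to an outcome label."""
    if status_code in (200, 202, 204):
        return "deleted"
    if status_code == 404:
        return "missing"  # id was never created, or was already removed
    return "error"        # e.g. 401/403 when the token is wrong

def delete_range(session, host, resource, start, stop):
    """Delete resources by numeric id, tallying outcomes instead of discarding them."""
    tally = {"deleted": 0, "missing": 0, "error": 0}
    for i in range(start, stop):
        response = session.delete("{}{}/{}".format(host, resource, i))
        tally[classify_delete(response.status_code)] += 1
    return tally
```

A non-zero `error` count after the first few requests is a strong hint the token is bad and the run should be aborted.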
diff --git a/f500/collecting/fairdom.py b/f500/collecting/fairdom.py
index ab1aa16dd7e109d40aa435685406fdd4850cf531..82b7c0c4d377ea6f7030b3d68785492fc4f6b5db 100644
--- a/f500/collecting/fairdom.py
+++ b/f500/collecting/fairdom.py
@@ -1,6 +1,12 @@
 """
-SEEK FAIRDOM automatic upload
+This script automates uploading data to the SEEK FAIRDOM platform. It reads metadata and measurement data from the given files, processes them, and uploads the results to SEEK, creating the required structure of investigations, studies, and assays. It is designed for PlantEye data from NPEC and assumes a specific data layout and naming convention.
 
+The script requires the following command-line arguments:
+1. Path to the datamatrix file.
+2. Path to the investigation directory.
+3. Paths to the CSV files containing measurement data.
+
+The script uses the requests library to interact with the SEEK API and pandas for data manipulation.
 """
 
 import sys
@@ -13,12 +19,10 @@ import requests
 import json
 import string
 
-
 datamatrix_file = sys.argv[1]
 investigationPath = sys.argv[2]
 csvs = sys.argv[3:]
 
-
 base_url = 'http://localhost:3000'
 
 headers = {"Content-type": "application/vnd.api+json",
@@ -29,19 +33,8 @@ session = requests.Session()
 session.headers.update(headers)
 session.auth = ("capsicum.upload@wur.nl", "3#7B&GNC</yp2{k(")
 
-"""The **Investigation**, **Study** and **Assay** will be created within **Project** 2"""
-
 containing_project_id = 2
 
-
-# some definitions:
-# Project > Investigation > Study > Observation Unit > Sample > assay type > assay > unprocessed (raw data folder)
-# 'Pilot project' -> Pepper -> Experiment 13 -> Pot 1 -> 2022-02-03 -> Imaging -> planteye -> pointcloud
-# In ISA, sample is linked to an assay. Observation unit === Sample, where the named sample is a data of Sample
-# Sample.name = data 
-
-
-# Some columns contain the wrong data, remove those:
 columnsToDrop = ["ndvi_aver","ndvi_bin0","ndvi_bin1","ndvi_bin2","ndvi_bin3","ndvi_bin4","ndvi_bin5",
                  "greenness_aver","greenness_bin0","greenness_bin1","greenness_bin2","greenness_bin3","greenness_bin4","greenness_bin5",
                  "hue_aver","hue_bin0","hue_bin1","hue_bin2","hue_bin3","hue_bin4","hue_bin5",
@@ -51,15 +44,21 @@ columnsToDrop = ["ndvi_aver","ndvi_bin0","ndvi_bin1","ndvi_bin2","ndvi_bin3","nd
 metadata = pandas.read_csv(datamatrix_file, sep=";")
 
 def removeAfterSpaceFromDataMatrix(row):
+    """
+    Removes any text after a space in the 'DataMatrix' column of a row.
+
+    Args:
+        row (pandas.Series): A row from a DataFrame.
+
+    Returns:
+        pandas.Series: The modified row with updated 'DataMatrix' value.
+    """
     row["DataMatrix"] = row["DataMatrix"].strip().split(" ")[0]
     return row
 
 metadata = metadata.apply(removeAfterSpaceFromDataMatrix , axis=1)
-# Create ISA structure
 datamatrix = os.path.basename(datamatrix_file).split(".")[0].split("_")
 
-# Create investigation
-
 investigation = {}
 investigation['data'] = {}
 investigation['data']['type'] = 'investigations'
@@ -78,7 +77,6 @@ r = session.post(base_url + '/investigations', json=investigation)
 investigation_id = r.json()['data']['id']
 r.raise_for_status()
 
-# Create study, title comes datamatrix file (ID...)   
 study = {}
 study['data'] = {}
 study['data']['type'] = 'studies'
@@ -94,10 +92,6 @@ study['data']['relationships']['investigation']['data'] = {'id' : investigation_
 r = session.post(base_url + '/studies', json=study)
 study_id = r.json()['data']['id']
 
-
-# add meta-data file to investigation 
-#investigation.filename = datamatrix_file
-#store metadata file
 os.makedirs("/".join([investigationPath, investigation['data']['attributes']['title']]), exist_ok=True)
 metadata_csv = "/".join([investigationPath, investigation['data']['attributes']['title'], os.path.basename(datamatrix_file)])
 metadata.to_csv(metadata_csv, sep="\t")
@@ -123,27 +117,28 @@ r.raise_for_status()
 
 populated_data_file = r.json()
 
-"""Extract the id and URL to the newly created **data_file**"""
-
 data_file_id = populated_data_file['data']['id']
 data_file_url = populated_data_file['data']['links']['self']
 
-"""Extract the URL for the local data"""
-
 blob_url = populated_data_file['data']['attributes']['content_blobs'][0]['link']
 
-"""Reset the local file and upload it to the URL"""
-
 upload = session.put(blob_url, data=open(metadata_csv,"r").read(), headers={'Content-Type': 'application/octet-stream'})
 upload.raise_for_status()
 
-
-# Assay is defined as a collection of files from a measurement
-# These are identified by 'f0000'
 checkAssayName = re.compile(r"f[0-9]+")
 measurements = pandas.DataFrame()
 
 def copyPots(row, pots):
+    """
+    Copies pot information from a pots DataFrame to a row based on matching coordinates.
+
+    Args:
+        row (pandas.Series): A row from a DataFrame.
+        pots (pandas.DataFrame): A DataFrame containing pot information.
+
+    Returns:
+        pandas.Series: The modified row with updated pot information.
+    """
     row["Pot"] = pots[ (pots["x"] == row["x"]) & (pots["y"] == row["y"]) ]["Pot"].iloc[0]
     if "Treatment" in pots.columns:
         row["Treatment"] = pots[ (pots["x"] == row["x"]) & (pots["y"] == row["y"]) ]["Treatment"].iloc[0]
@@ -151,20 +146,55 @@ def copyPots(row, pots):
         row["Experiment"] = pots[ (pots["x"] == row["x"]) & (pots["y"] == row["y"]) ]["Experiment"].iloc[0]
     return row
 
-
 def measurementsToFile(investigation, path, filename, measurements):
+    """
+    Saves the measurements DataFrame to a CSV file.
+
+    Args:
+        investigation (dict): The investigation dictionary.
+        path (str): The directory path where the file will be saved.
+        filename (str): The name of the file.
+        measurements (pandas.DataFrame): The DataFrame containing measurements.
+
+    Side Effects:
+        Creates directories and writes a CSV file to the specified path.
+    """
     os.makedirs(path + "/derived", exist_ok=True)
     measurements.to_csv(path + "/" + filename, sep=";")
 
-
 def rawMeasurementsToFile(investigation, path, filename, measurements):
+    """
+    Saves the raw measurements to a CSV file.
+
+    Args:
+        investigation (dict): The investigation dictionary.
+        path (str): The directory path where the file will be saved.
+        filename (str): The name of the file.
+        measurements (pandas.DataFrame): The DataFrame containing raw measurements.
+
+    Returns:
+        str: The full path to the saved file.
+
+    Side Effects:
+        Creates directories and writes a CSV file to the specified path.
+    """
     os.makedirs(path + "/derived", exist_ok=True)
     df = pandas.DataFrame(measurements)
     df = df.transpose()
     df.to_csv(path + "/" + filename, sep="\t")
     return(path + "/" + filename)
 
-def addPointClouds(row, title) :
+def addPointClouds(row, title):
+    """
+    Adds a point cloud filename to a row based on its coordinates and timestamp.
+
+    Args:
+        row (pandas.Series): A row from a DataFrame.
+        title (str): The title used in the filename.
+
+    Returns:
+        pandas.Series: The modified row with the point cloud filename added.
+    """
     filename = "pointcloud/{}_{}_full_sx{:03d}_sy{:03d}.ply.gz".format(
         title, row["timestamp_file"], 
         row["x"],
@@ -173,6 +203,18 @@ def addPointClouds(row, title) :
     return row
 
 def createAssay(row, investigation, path, study_id):
+    """
+    Creates an assay and uploads the associated data file to the SEEK platform.
+
+    Args:
+        row (pandas.Series): A row from a DataFrame containing assay data.
+        investigation (dict): The investigation dictionary.
+        path (str): The directory path for saving files.
+        study_id (str): The ID of the study to which the assay belongs.
+
+    Side Effects:
+        Creates directories, writes files, and uploads data to the SEEK platform.
+    """
     data_file = {}
 
     filename = "derived/" + assay.title + ".csv"
@@ -199,17 +241,11 @@ def createAssay(row, investigation, path, study_id):
     
     populated_data_file = r.json()
 
-    """Extract the id and URL to the newly created **data_file**"""
-
     data_file_id = populated_data_file['data']['id']
     data_file_url = populated_data_file['data']['links']['self']
     
-    """Extract the URL for the local data"""
-    
     blob_url = populated_data_file['data']['attributes']['content_blobs'][0]['link']
     
-    """Reset the local file and upload it to the URL"""
-    
     upload = session.put(blob_url, data=open(fullFilename,"r").read(), headers={'Content-Type': 'application/octet-stream'})
     upload.raise_for_status()
     
@@ -226,16 +262,22 @@ def createAssay(row, investigation, path, study_id):
     assay['data']['relationships']['study']['data'] = {'id' : study_id, 'type' : 'studies'}
     assay['data']['relationships']['organism'] = {}
     assay['data']['relationships']['organism']['data'] = {'id' : 1, 'type' : 'organisms'}
-    
-
-
-
-
 
 def finalize(investigation, measurements, investigationPath, title, metadata, study_id):
-    # CSV will be combined data file (with corrected pot names) and ply file names
-    # Do this, if the data matrix contains pot names (otherwise it either went wrong or data is from a different project
-    # Then list the ply files as Image File
+    """
+    Finalizes the processing of measurements by creating assays and saving data files.
+
+    Args:
+        investigation (dict): The investigation dictionary.
+        measurements (pandas.DataFrame): The DataFrame containing measurements.
+        investigationPath (str): The directory path for saving files.
+        title (str): The title used in filenames.
+        metadata (pandas.DataFrame): The DataFrame containing metadata.
+        study_id (str): The ID of the study to which the assays belong.
+
+    Side Effects:
+        Creates directories, writes files, and uploads data to the SEEK platform.
+    """
     if "Pot" in measurements.columns:
         pots = measurements.dropna(axis=0, subset=["Pot"])
         if len(pots) > 0 and "Pot" in pots.columns:
@@ -244,52 +286,35 @@ def finalize(investigation, measurements, investigationPath, title, metadata, st
                 measurements = measurements.drop(columnsToDrop, axis=1)
                 measurements = measurements.apply(copyPots , axis=1, pots=pots)
                 measurements = measurements.apply(addPointClouds, axis=1, title=title)
-                #now create the assays, using the timestamp_file as name
                 measurements.apply(createAssay, axis=1, investigation = investigation, path = investigationPath, study_id = study_id)
                 investigation.measurements = pandas.concat([investigation.measurements, measurements], axis=0, ignore_index=True)
-                
-                
 
 previousAssay = ""
 for csv in csvs:
     assayName = os.path.basename(csv).split("_")[0]
     timestamp = os.path.basename(csv).split("_")[1]
-    #print("Reading: {}".format(assayName))
     if checkAssayName.match(assayName) != None:
         try: 
             currentMeasurements = pandas.read_csv(csv, sep="\t", skiprows=[1])
             currentMeasurements["timestamp_file"] = timestamp
             
             if previousAssay == assayName:
-                # same assay
                 if len(measurements) == 0:
                     measurements = currentMeasurements
                 else:
                     measurements = pandas.concat([measurements, currentMeasurements], axis=0, ignore_index=True)
             else:
                 if len(measurements) > 0:
-                    # new assay, process all
                     finalize(investigation, measurements, investigationPath, previousAssay, metadata, study_id)
                 
                 measurements = currentMeasurements
             previousAssay = assayName
         except:
-            # No data?
             pass
     else:
-        #CSV file is not an assay file
         pass
 
 if len(measurements) > 0:
     finalize(investigation, measurements, investigationPath, previousAssay, metadata, study_id)
 
-measurementsToFile(investigation, "/".join([investigationPath, investigation.title, investigation.studies[0].title]), "derived/" + investigation.studies[0].title + ".csv", investigation.measurements)
-#print(json.dumps(investigation, cls=ISAJSONEncoder, sort_keys=True, indent=4, separators=(',', ': ')))
-
-
-
-
-
-
-
-
+measurementsToFile(investigation, "/".join([investigationPath, investigation.title, investigation.studies[0].title]), "derived/" + investigation.studies[0].title + ".csv", investigation.measurements)
\ No newline at end of file
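The investigation and study dicts built above follow the JSON:API document shape the SEEK API expects. A small helper (hypothetical, not part of the script) that produces the same structure:

```python
def isa_payload(resource_type, title, description="", relationships=None):
    """Build a minimal JSON:API document like the script's investigation/study dicts."""
    return {
        "data": {
            "type": resource_type,
            "attributes": {"title": title, "description": description},
            "relationships": relationships or {},
        }
    }

# Mirrors the investigation posted to /investigations, linked to project 2.
inv = isa_payload(
    "investigations", "Experiment 13",
    relationships={"projects": {"data": [{"id": "2", "type": "projects"}]}},
)
```

Centralising the payload shape in one helper would remove the repeated dict-building blocks for investigations, studies, and assays.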
diff --git a/f500/collecting/processPointClouds.py b/f500/collecting/processPointClouds.py
index 121142c551e29e8ee1c7dedef8960bbf9aa5a4eb..8f60f7f1a9de24d765343740a67c2a373f45cf2e 100644
--- a/f500/collecting/processPointClouds.py
+++ b/f500/collecting/processPointClouds.py
@@ -13,30 +13,69 @@ import open3d as o3d
 import tempfile
 import numpy
 
+"""
+This script processes ISA-JSON files and associated point cloud data files.
+It extracts greenness information from point clouds and writes histograms
+of the greenness values to specified output files. The script requires an
+ISA-JSON file and a list of point cloud files as input.
+"""
+
 investigation = isajson.load(open(sys.argv[1], "r"))
 study = investigation.studies[0]    
 BINS = 256
 
 def writeHistogram(data, filename):
+    """
+    Writes a histogram of the given data to a file.
+
+    Parameters:
+        data (numpy.ndarray): The data for which the histogram is to be computed.
+        filename (str): The name of the file where the histogram will be written.
+
+    Outputs:
+        A file containing the histogram data. The bin edges and histogram counts
+        are written in separate lines, separated by semicolons.
+
+    Side Effects:
+        Creates or overwrites the specified file with histogram data.
+    """
     hist, bin_edges = numpy.histogram(data, bins=BINS)
-    f = open(filename, "w")
-    f.write(";".join(bin_edges))
-    f.write("\n")
-    f.write(";".join(hist))
-    f.close()
+    with open(filename, "w") as f:
+        f.write(";".join(map(str, bin_edges)))
+        f.write("\n")
+        f.write(";".join(map(str, hist)))
 
 def get_greenness(pcd):
-    np.seterr(divide='ignore',invalid='ignore')
-    return ((np.asarray(pcd.wavelengths)[:,0]-np.asarray(pcd.wavelengths)[:,2] + 2.0 * np.asarray(pcd.wavelengths)[:,1])/(np.asarray(pcd.wavelengths)[:,0]+ np.asarray(pcd.wavelengths)[:,1] + np.asarray(pcd.wavelengths)[:,2]))
+    """
+    Calculates the greenness index for a point cloud.
+
+    Parameters:
+        pcd (open3d.geometry.PointCloud): The point cloud object containing wavelength data.
+
+    Returns:
+        numpy.ndarray: An array of greenness values for each point in the point cloud.
 
+    Raises:
+        AttributeError: If the point cloud does not expose wavelength data.
 
-# find each pointcloud in the file list
+    Notes:
+        The greenness index is calculated using the formula:
+        (R - B + 2G) / (R + G + B), where R, G, and B are the red, green, and blue
+        wavelength values, respectively.
+    """
+    np.seterr(divide='ignore', invalid='ignore')
+    return ((np.asarray(pcd.wavelengths)[:,0] - np.asarray(pcd.wavelengths)[:,2] + 
+             2.0 * np.asarray(pcd.wavelengths)[:,1]) / 
+            (np.asarray(pcd.wavelengths)[:,0] + np.asarray(pcd.wavelengths)[:,1] + 
+             np.asarray(pcd.wavelengths)[:,2]))
+
+# Find each point cloud in the file list
 pointclouds = defaultdict(str)
 for pcd in sys.argv[2:]:
     filename = os.path.basename(pcd)
     pointclouds[filename] = pcd
 
-# process isa file
+# Process ISA file
 for a in study.assays:
     print(a.data_files)
     for df in a.data_files:
@@ -45,11 +84,10 @@ for a in study.assays:
                 print(com.value)
                 print(os.path.basename(df.filename))
                 if ".ply" in df.filename and os.path.basename(df.filename) in pointclouds:
-                    # copy ply
+                    # Copy ply
                     shutil.copy2(pointclouds[os.path.basename(df.filename)], com.value)
                     a.pointcloud = com.value
 
-
 for a in study.assays:
     print(a.data_files)
     if a.pointcloud:
@@ -64,7 +102,3 @@ for a in study.assays:
                     if "greenness.csv" in com.value:
                         greenness = get_greenness(pcd)
                         writeHistogram(greenness, com.value)
-                        
-
-
-
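The greenness formula and the semicolon-separated histogram layout can be checked on synthetic data. A sketch assuming the script's 4-column wavelength layout (0=R, 1=G, 2=B, 3=NIR):

```python
import numpy as np

def greenness(wavelengths):
    """(R - B + 2G) / (R + G + B), as in get_greenness above."""
    r, g, b = wavelengths[:, 0], wavelengths[:, 1], wavelengths[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        return (r - b + 2.0 * g) / (r + g + b)

def histogram_lines(data, bins=4):
    """Render a histogram the way writeHistogram does: edges, newline, counts."""
    hist, edges = np.histogram(data, bins=bins)
    return ";".join(map(str, edges)) + "\n" + ";".join(map(str, hist))

wl = np.array([[0.2, 0.6, 0.1, 0.0],   # green-dominated point
               [0.5, 0.2, 0.3, 0.0]])  # red-dominated point
g = greenness(wl)  # roughly [1.44, 0.60]
```

Note the index can exceed 1 when green dominates, since the numerator weights G twice.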
diff --git a/f500/collecting/toolkit.py b/f500/collecting/toolkit.py
index 997e4240b2201680ebbb1566d564d051e2f0cecb..428d11f206046b29c8d53efe99dd3017daebfb08 100644
--- a/f500/collecting/toolkit.py
+++ b/f500/collecting/toolkit.py
@@ -1,18 +1,23 @@
 """
-ISA & isamodel
-https://isa-specs.readthedocs.io/en/latest/isamodel.html
+This script is the command-line interface for the F500 class, which provides data-processing commands: restructuring data, processing point clouds, verifying data, combining histograms, and uploading data. The command to run is taken from the parsed `command` argument.
 
+Modules:
+    - sys, os: Standard-library access to the interpreter and the operating system.
+    - pandas: Data manipulation and analysis.
+    - F500: Custom module providing the F500 class with the data-processing methods.
+
+Usage:
+    Run the script with the desired command to execute the corresponding functionality.
 """
+
 import sys
 import os
 import pandas
 
 from F500 import F500
 
-                
 if __name__ == '__main__':
-
-
     f500 = F500()
     f500.commandLineInterface()
     if f500.args.command == "restructure":
@@ -25,8 +30,6 @@ if __name__ == '__main__':
         f500.combineHistograms()
     elif f500.args.command == "upload":
         f500.upload()
 
-
-
-
-
diff --git a/lib/computePhenotypes.py b/lib/computePhenotypes.py
index 6154e639bdc87a9f26ba09e5776890a39e00ac52..7bf04f5277137c886c66be1b45c04380ebb85579 100644
--- a/lib/computePhenotypes.py
+++ b/lib/computePhenotypes.py
@@ -1,27 +1,94 @@
-import numpy as np
+"""
+This script provides functions to calculate various vegetation indices from point cloud data (PCD) using specific wavelength channels. 
+These indices include the Normalized Difference Vegetation Index (NDVI) for visualization, NDVI, the Normalized Pigment Chlorophyll Index (NPCI), 
+and a greenness index. The calculations are based on the wavelengths corresponding to different spectral bands.
+
+Functions:
+- get_ndvi_for_visualization: Computes NDVI for visualization purposes, scaling the result between 0 and 1.
+- get_ndvi: Computes the standard NDVI, with values ranging from -1 to 1.
+- get_npci: Computes the NPCI using the red and blue channels.
+- get_greenness: Computes a greenness index using the red, green, and blue channels.
+"""
 
+import numpy as np
 
-#ndvi value calculated for visualization needs to have a range between 0 and 1
-#   to  (((nirv-red)/(nirv+red))+1)/2    
 def get_ndvi_for_visualization(pcd):
-    np.seterr(divide='ignore',invalid='ignore')
-    np.asarray(pcd.ndvi)[:,0] = (((np.asarray(pcd.wavelengths)[:,3]-np.asarray(pcd.wavelengths)[:,0])/(np.asarray(pcd.wavelengths)[:,3]+np.asarray(pcd.wavelengths)[:,0]))+1)/2
-    return pcd.ndvi[:,0]
+    """
+    Calculate the NDVI for visualization purposes, scaling the result between 0 and 1.
+
+    Parameters:
+    pcd (object): A point cloud data object containing 'wavelengths' and 'ndvi' attributes. 
+                  'wavelengths' is expected to be a 2D array where the columns correspond to different spectral bands.
+
+    Returns:
+    ndarray: A 1D array of NDVI values scaled between 0 and 1.
+
+    Side Effects:
+    - Modifies the 'ndvi' attribute of the input 'pcd' object.
+
+    Notes:
+    - This function ignores division and invalid operation warnings using numpy's seterr function.
+    """
+    np.seterr(divide='ignore', invalid='ignore')
+    np.asarray(pcd.ndvi)[:, 0] = (((np.asarray(pcd.wavelengths)[:, 3] - np.asarray(pcd.wavelengths)[:, 0]) /
+                                   (np.asarray(pcd.wavelengths)[:, 3] + np.asarray(pcd.wavelengths)[:, 0])) + 1) / 2
+    return pcd.ndvi[:, 0]
 
-#ndvi value calculated from nir and red channel should be between -1 and 1
-#    (nirv-red)/(nirv+red)
 def get_ndvi(pcd):
-    np.seterr(divide='ignore',invalid='ignore')
-    np.asarray(pcd.ndvi)[:,0] = (np.asarray(pcd.wavelengths)[:,3]-np.asarray(pcd.wavelengths)[:,0])/(np.asarray(pcd.wavelengths)[:,3]+np.asarray(pcd.wavelengths)[:,0])
-    return pcd.ndvi[:,0]
+    """
+    Calculate the standard NDVI, with values ranging from -1 to 1.
+
+    Parameters:
+    pcd (object): A point cloud data object containing 'wavelengths' and 'ndvi' attributes. 
+                  'wavelengths' is expected to be a 2D array where the columns correspond to different spectral bands.
+
+    Returns:
+    ndarray: A 1D array of NDVI values ranging from -1 to 1.
+
+    Side Effects:
+    - Modifies the 'ndvi' attribute of the input 'pcd' object.
+
+    Notes:
+    - This function ignores division and invalid operation warnings using numpy's seterr function.
+    """
+    np.seterr(divide='ignore', invalid='ignore')
+    np.asarray(pcd.ndvi)[:, 0] = (np.asarray(pcd.wavelengths)[:, 3] - np.asarray(pcd.wavelengths)[:, 0]) / \
+                                 (np.asarray(pcd.wavelengths)[:, 3] + np.asarray(pcd.wavelengths)[:, 0])
+    return pcd.ndvi[:, 0]
 
-#(RED − BLUE)/(RED + BLUE) 
 def get_npci(pcd):
-    np.seterr(divide='ignore',invalid='ignore')
-    return ((np.asarray(pcd.wavelengths)[:,0]-np.asarray(pcd.wavelengths)[:,2])/(np.asarray(pcd.wavelengths)[:,0]+np.asarray(pcd.wavelengths)[:,2]))
-    
-#(2*G-R-B)/(R+G+B)    
+    """
+    Calculate the Normalized Pigment Chlorophyll Index (NPCI) using the red and blue channels.
+
+    Parameters:
+    pcd (object): A point cloud data object containing 'wavelengths' attribute. 
+                  'wavelengths' is expected to be a 2D array where the columns correspond to different spectral bands.
+
+    Returns:
+    ndarray: A 1D array of NPCI values.
+
+    Notes:
+    - This function ignores division and invalid operation warnings using numpy's seterr function.
+    - NPCI = (R - B) / (R + B), using columns 0 (red) and 2 (blue).
+    """
+    np.seterr(divide='ignore', invalid='ignore')
+    return ((np.asarray(pcd.wavelengths)[:, 0] - np.asarray(pcd.wavelengths)[:, 2]) /
+            (np.asarray(pcd.wavelengths)[:, 0] + np.asarray(pcd.wavelengths)[:, 2]))
+
 def get_greenness(pcd):
-    np.seterr(divide='ignore',invalid='ignore')
-    return ((np.asarray(pcd.wavelengths)[:,0]-np.asarray(pcd.wavelengths)[:,2] + 2.0 * np.asarray(pcd.wavelengths)[:,1])/(np.asarray(pcd.wavelengths)[:,0]+ np.asarray(pcd.wavelengths)[:,1] + np.asarray(pcd.wavelengths)[:,2]))
+    """
+    Calculate a greenness index using the red, green, and blue channels.
+
+    Parameters:
+    pcd (object): A point cloud data object containing 'wavelengths' attribute. 
+                  'wavelengths' is expected to be a 2D array where the columns correspond to different spectral bands.
+
+    Returns:
+    ndarray: A 1D array of greenness index values.
 
+    Notes:
+    - This function ignores division and invalid operation warnings using numpy's seterr function.
+    - The index is (R - B + 2G) / (R + G + B), using columns 0 (red), 1 (green), and 2 (blue).
+    """
+    np.seterr(divide='ignore', invalid='ignore')
+    return ((np.asarray(pcd.wavelengths)[:, 0] - np.asarray(pcd.wavelengths)[:, 2] + 
+             2.0 * np.asarray(pcd.wavelengths)[:, 1]) /
+            (np.asarray(pcd.wavelengths)[:, 0] + np.asarray(pcd.wavelengths)[:, 1] + np.asarray(pcd.wavelengths)[:, 2]))
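The two NDVI variants above differ only by the (x + 1)/2 shift from [-1, 1] into [0, 1]. A sketch on synthetic wavelengths (column layout 0=R, 3=NIR, as in the functions above):

```python
import numpy as np

def ndvi(wavelengths):
    """(NIR - R) / (NIR + R), the standard index in [-1, 1]."""
    r, nir = wavelengths[:, 0], wavelengths[:, 3]
    with np.errstate(divide="ignore", invalid="ignore"):
        return (nir - r) / (nir + r)

def ndvi_for_visualization(wavelengths):
    """Shift NDVI into [0, 1] so it can be mapped onto a colour scale."""
    return (ndvi(wavelengths) + 1.0) / 2.0

wl = np.array([[0.1, 0.0, 0.0, 0.9],   # vegetation-like: NIR >> red
               [0.5, 0.0, 0.0, 0.5]])  # soil-like: NIR == red
```

Healthy vegetation reflects strongly in NIR, so the first point scores near the top of both ranges while the soil-like point sits at the midpoint.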
diff --git a/lib/rescaleWavelength.py b/lib/rescaleWavelength.py
index 331b223721a6b1b27256beba81f941906253b955..9379517b302517e5d7f61364957bae6769726e58 100644
--- a/lib/rescaleWavelength.py
+++ b/lib/rescaleWavelength.py
@@ -1,15 +1,44 @@
 import numpy as np
 
-# wavelength: [0]R/[1]G/[2]B/[3]NIR all divide by scale
-# replace color R with wavelength [0]/scale
-# replace color G with wavelength [1]/scale
-# replace color B with wavelength [2]/scale
-# replace nir [0] with wavelength [3]/scale
-def rescale_wavelengths(pcd,scale):
-    tmp_wvl = np.asarray(pcd.wavelengths)/scale
-    np.asarray(pcd.colors)[:,0] = tmp_wvl[:,0]
-    np.asarray(pcd.colors)[:,1] = tmp_wvl[:,1]
-    np.asarray(pcd.colors)[:,2] = tmp_wvl[:,2]
-    np.asarray(pcd.nir)[:,0] = tmp_wvl[:,3]
-    return pcd 
+"""
+This script provides functionality to rescale the wavelengths of a point cloud data (PCD) object. 
+The script modifies the color and NIR (Near-Infrared) attributes of the PCD based on the provided scale.
+"""
 
+def rescale_wavelengths(pcd, scale):
+    """
+    Rescales the wavelengths of a point cloud data (PCD) object by a given scale factor.
+
+    This function takes a PCD object with attributes for wavelengths, colors, and NIR values. 
+    It rescales the wavelengths by dividing them by the provided scale factor and updates the 
+    color and NIR attributes of the PCD accordingly.
+
+    Parameters:
+    pcd : object
+        A point cloud data object that contains 'wavelengths', 'colors', and 'nir' attributes.
+        The 'wavelengths' attribute is expected to be a 2D array with columns representing 
+        R, G, B, and NIR wavelengths.
+    scale : float
+        The scale factor by which to divide the wavelengths.
+
+    Returns:
+    object
+        The modified PCD object with rescaled color and NIR attributes.
+
+    Side Effects:
+    Modifies the 'colors' and 'nir' attributes of the input PCD object in place.
+
+    Exceptions:
+    This function assumes that the input PCD object has the required attributes and that they 
+    are in the expected format. If not, it may raise AttributeError or IndexError.
+
+    Future Work:
+    Consider adding input validation to ensure the PCD object has the required attributes and 
+    that they are in the expected format. Additionally, handle potential exceptions more gracefully.
+    """
+    tmp_wvl = np.asarray(pcd.wavelengths) / scale
+    np.asarray(pcd.colors)[:, 0] = tmp_wvl[:, 0]
+    np.asarray(pcd.colors)[:, 1] = tmp_wvl[:, 1]
+    np.asarray(pcd.colors)[:, 2] = tmp_wvl[:, 2]
+    np.asarray(pcd.nir)[:, 0] = tmp_wvl[:, 3]
+    return pcd
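`rescale_wavelengths` mutates the point cloud through numpy views of its attributes. The behaviour can be reproduced with a plain-array stand-in (`FakePCD` is hypothetical, mimicking the Open3D object):

```python
import numpy as np

class FakePCD:
    """Stand-in for the Open3D point cloud consumed by rescale_wavelengths."""
    def __init__(self, wavelengths):
        self.wavelengths = np.asarray(wavelengths, dtype=float)
        self.colors = np.zeros((len(self.wavelengths), 3))
        self.nir = np.zeros((len(self.wavelengths), 1))

def rescale_wavelengths(pcd, scale):
    """Divide raw counts by scale; columns 0-2 become colors, column 3 becomes NIR."""
    tmp_wvl = np.asarray(pcd.wavelengths) / scale
    np.asarray(pcd.colors)[:, 0:3] = tmp_wvl[:, 0:3]
    np.asarray(pcd.nir)[:, 0] = tmp_wvl[:, 3]
    return pcd

pcd = rescale_wavelengths(FakePCD([[100, 200, 50, 400]]), 400.0)
```

With a real Open3D point cloud the same in-place update works because `np.asarray` on its vector attributes returns a view, not a copy.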
diff --git a/mdocs_settings.json b/mdocs_settings.json
new file mode 100644
index 0000000000000000000000000000000000000000..a68a31049b9b3e4721912251e4358fef6dc4249f
--- /dev/null
+++ b/mdocs_settings.json
@@ -0,0 +1,7 @@
+{
+    "title": "F500 analytics for NPEC",
+    "description": "Several Python and R scripts for processing the raw F500 data. Uses the ISA-JSON for metadata",
+    "developer": "Sven Warris",
+    "mail": "sven.warris@wur.nl",
+    "link": "https://git.wur.nl/NPEC/analytics"
+}
diff --git a/reports/full_analysis_summary_ai.md b/reports/full_analysis_summary_ai.md
new file mode 100644
index 0000000000000000000000000000000000000000..01f34bdc0e1760856cc453090429e7ae2e1b0002
--- /dev/null
+++ b/reports/full_analysis_summary_ai.md
@@ -0,0 +1,51 @@
+### Summary of Key Findings from Each Report
+
+#### Radon MI Report
+- **Maintainability Index Scores**: The codebase has a mix of maintainability scores, with some files scoring low due to high complexity, lack of comments, large file sizes, poor modularization, and code duplication.
+- **Improvement Suggestions**: Refactor complex functions, add documentation, modularize code, reduce duplication, simplify logic, and use descriptive naming.
+
+#### Radon CC Report
+- **Complexity Scores**: Some functions have high cyclomatic complexity, making them difficult to understand and maintain.
+- **Refactoring Suggestions**: Break down complex functions, reduce nested logic, use design patterns, modularize code, and improve error handling.
+
+#### Pylint Report
+- **Linting Issues**: The codebase has convention violations, warnings, errors, and refactor suggestions, impacting readability, maintainability, and potential for bugs.
+- **Improvement Strategies**: Adopt PEP 8 standards, improve documentation, optimize imports, refactor complex functions, use context managers, update string formatting, and resolve errors.
+
+#### Vulture Report
+- **Unused Code**: The codebase contains unused imports, variables, attributes, classes, and methods, which can be removed to streamline the project.
+- **Critical Unused Code**: Some unused code might be critical if certain functionalities are expected, requiring careful review before removal.
+
+### Common Issues Across Reports and High-Level Strategies for Improvement
+
+1. **Complexity and Maintainability**: High complexity and low maintainability scores are common issues. Strategies include refactoring complex functions, simplifying logic, and improving modularization.
+
+2. **Documentation and Naming**: Lack of comments and poor naming conventions are prevalent. Strategies include adding comprehensive docstrings and adhering to PEP 8 naming conventions.
+
+3. **Code Duplication and Unused Code**: Code duplication and unused code are identified across reports. Strategies include removing unused code and refactoring duplicated code into reusable components.
+
+4. **Error Handling and Testing**: Potential runtime errors and lack of automated testing are concerns. Strategies include improving error handling and increasing test coverage.
+
+### Overall Assessment of the Codebase’s Quality, Complexity, and Maintainability
+
+- **Quality**: The codebase currently has a low quality score, primarily due to poor adherence to coding standards, lack of documentation, and potential runtime errors. Addressing these issues will significantly enhance the code's quality.
+
+- **Complexity**: While the average complexity score is relatively low, there are outliers with high complexity that need attention. Regular refactoring and code reviews can help manage complexity.
+
+- **Maintainability**: The codebase has a mix of maintainability scores, with some files requiring significant improvement. By focusing on refactoring, documentation, and modularization, maintainability can be improved.
+
+### Recommendations for Improvement
+
+1. **Refactor and Simplify**: Focus on refactoring complex functions and simplifying logic to improve readability and maintainability.
+
+2. **Enhance Documentation**: Add comprehensive docstrings and inline comments to clarify code functionality and usage.
+
+3. **Adopt Coding Standards**: Ensure adherence to PEP 8 standards for naming conventions, line length, and overall code style.
+
+4. **Remove Unused Code**: Safely remove unused imports, variables, and functions to streamline the codebase.
+
+5. **Improve Testing and Error Handling**: Increase automated test coverage and implement consistent error-handling strategies.
+
+6. **Regular Code Reviews**: Conduct regular code reviews to catch potential issues early and promote best practices.
+
+By implementing these strategies, the codebase's quality, complexity, and maintainability can be significantly improved, making it easier to understand, modify, and extend in the future.
\ No newline at end of file
diff --git a/reports/pylint_report.txt b/reports/pylint_report.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5f73affac48f5f003b3117134d9dbc9e162d969b
--- /dev/null
+++ b/reports/pylint_report.txt
@@ -0,0 +1,1011 @@
+************* Module lib.rescaleWavelength
+analytics/lib/rescaleWavelength.py:14:14: C0303: Trailing whitespace (trailing-whitespace)
+analytics/lib/rescaleWavelength.py:15:0: C0305: Trailing newlines (trailing-newlines)
+analytics/lib/rescaleWavelength.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/lib/rescaleWavelength.py:1:0: C0103: Module name "rescaleWavelength" doesn't conform to snake_case naming style (invalid-name)
+analytics/lib/rescaleWavelength.py:8:0: C0116: Missing function or method docstring (missing-function-docstring)
+************* Module lib.computePhenotypes
+analytics/lib/computePhenotypes.py:5:37: C0303: Trailing whitespace (trailing-whitespace)
+analytics/lib/computePhenotypes.py:8:0: C0301: Line too long (175/100) (line-too-long)
+analytics/lib/computePhenotypes.py:15:0: C0301: Line too long (167/100) (line-too-long)
+analytics/lib/computePhenotypes.py:18:26: C0303: Trailing whitespace (trailing-whitespace)
+analytics/lib/computePhenotypes.py:21:0: C0301: Line too long (148/100) (line-too-long)
+analytics/lib/computePhenotypes.py:22:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/lib/computePhenotypes.py:23:18: C0303: Trailing whitespace (trailing-whitespace)
+analytics/lib/computePhenotypes.py:26:0: C0301: Line too long (225/100) (line-too-long)
+analytics/lib/computePhenotypes.py:27:0: C0305: Trailing newlines (trailing-newlines)
+analytics/lib/computePhenotypes.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/lib/computePhenotypes.py:1:0: C0103: Module name "computePhenotypes" doesn't conform to snake_case naming style (invalid-name)
+analytics/lib/computePhenotypes.py:6:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/lib/computePhenotypes.py:13:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/lib/computePhenotypes.py:19:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/lib/computePhenotypes.py:24:0: C0116: Missing function or method docstring (missing-function-docstring)
+************* Module visualizations.histograms_ply
+analytics/visualizations/histograms_ply.py:11:82: C0303: Trailing whitespace (trailing-whitespace)
+analytics/visualizations/histograms_ply.py:31:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/visualizations/histograms_ply.py:90:0: C0305: Trailing newlines (trailing-newlines)
+analytics/visualizations/histograms_ply.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/visualizations/histograms_ply.py:20:0: C0103: Constant name "bins" doesn't conform to UPPER_CASE naming style (invalid-name)
+analytics/visualizations/histograms_ply.py:21:0: C0103: Constant name "scale" doesn't conform to UPPER_CASE naming style (invalid-name)
+analytics/visualizations/histograms_ply.py:23:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/visualizations/histograms_ply.py:23:0: C0103: Function name "createPNG" doesn't conform to snake_case naming style (invalid-name)
+analytics/visualizations/histograms_ply.py:23:14: W0621: Redefining name 'pcd' from outer scope (line 33) (redefined-outer-name)
+analytics/visualizations/histograms_ply.py:24:10: E1101: Module 'open3d.visualization' has no 'Visualizer' member (no-member)
+analytics/visualizations/histograms_ply.py:2:0: C0411: standard import "sys" should be placed before third party import "open3d" (wrong-import-order)
+analytics/visualizations/histograms_ply.py:4:0: C0411: standard import "math" should be placed before third party imports "open3d", "numpy" (wrong-import-order)
+analytics/visualizations/histograms_ply.py:3:0: W0611: Unused import numpy (unused-import)
+analytics/visualizations/histograms_ply.py:4:0: W0611: Unused import math (unused-import)
+************* Module visualizations.animate_ply
+analytics/visualizations/animate_ply.py:7:55: C0303: Trailing whitespace (trailing-whitespace)
+analytics/visualizations/animate_ply.py:8:15: C0303: Trailing whitespace (trailing-whitespace)
+analytics/visualizations/animate_ply.py:11:82: C0303: Trailing whitespace (trailing-whitespace)
+analytics/visualizations/animate_ply.py:27:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/visualizations/animate_ply.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/visualizations/animate_ply.py:28:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/visualizations/animate_ply.py:75:4: W0621: Redefining name 'pcd' from outer scope (line 21) (redefined-outer-name)
+analytics/visualizations/animate_ply.py:29:22: E1101: Module 'open3d.visualization' has no 'Visualizer' member (no-member)
+analytics/visualizations/animate_ply.py:67:37: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/visualizations/animate_ply.py:32:4: W0612: Unused variable 'reset_motion' (unused-variable)
+analytics/visualizations/animate_ply.py:3:0: C0411: standard import "copy" should be placed before third party imports "open3d", "numpy" (wrong-import-order)
+analytics/visualizations/animate_ply.py:4:0: C0411: standard import "sys" should be placed before third party imports "open3d", "numpy" (wrong-import-order)
+analytics/visualizations/animate_ply.py:5:0: C0411: standard import "time" should be placed before third party imports "open3d", "numpy" (wrong-import-order)
+analytics/visualizations/animate_ply.py:2:0: W0611: Unused numpy imported as np (unused-import)
+analytics/visualizations/animate_ply.py:3:0: W0611: Unused import copy (unused-import)
+************* Module visualizations.visualization_ply
+analytics/visualizations/visualization_ply.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/visualizations/visualization_ply.py:10:0: E1101: Module 'open3d.visualization' has no 'draw_geometries' member (no-member)
+analytics/visualizations/visualization_ply.py:15:0: E1101: Module 'open3d.visualization' has no 'draw_geometries' member (no-member)
+analytics/visualizations/visualization_ply.py:20:0: E1101: Module 'open3d.visualization' has no 'draw_geometries' member (no-member)
+analytics/visualizations/visualization_ply.py:25:0: E1101: Module 'open3d.visualization' has no 'draw_geometries' member (no-member)
+analytics/visualizations/visualization_ply.py:2:0: C0411: standard import "sys" should be placed before third party import "open3d" (wrong-import-order)
+analytics/visualizations/visualization_ply.py:3:0: W0611: Unused import numpy (unused-import)
+************* Module rgbsideview.clearWhites
+analytics/rgbsideview/clearWhites.py:7:0: C0305: Trailing newlines (trailing-newlines)
+analytics/rgbsideview/clearWhites.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/rgbsideview/clearWhites.py:1:0: C0103: Module name "clearWhites" doesn't conform to snake_case naming style (invalid-name)
+analytics/rgbsideview/clearWhites.py:3:0: C0411: standard import "sys" should be placed before third party imports "open3d", "numpy" (wrong-import-order)
+analytics/rgbsideview/clearWhites.py:2:0: W0611: Unused import numpy (unused-import)
+************* Module f500.collecting.PointCloud
+analytics/f500/collecting/PointCloud.py:6:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:9:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:10:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:14:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:19:109: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:19:0: C0301: Line too long (109/100) (line-too-long)
+analytics/f500/collecting/PointCloud.py:32:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:34:28: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:41:29: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:43:0: C0325: Unnecessary parens after 'return' keyword (superfluous-parens)
+analytics/f500/collecting/PointCloud.py:44:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:62:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:64:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
+analytics/f500/collecting/PointCloud.py:65:0: W0311: Bad indentation. Found 21 spaces, expected 20 (bad-indentation)
+analytics/f500/collecting/PointCloud.py:66:0: W0301: Unnecessary semicolon (unnecessary-semicolon)
+analytics/f500/collecting/PointCloud.py:67:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:68:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:74:75: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:81:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:82:30: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:86:0: C0325: Unnecessary parens after 'return' keyword (superfluous-parens)
+analytics/f500/collecting/PointCloud.py:90:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:91:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:110:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:129:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:143:0: C0301: Line too long (123/100) (line-too-long)
+analytics/f500/collecting/PointCloud.py:158:0: C0301: Line too long (113/100) (line-too-long)
+analytics/f500/collecting/PointCloud.py:163:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:164:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:176:0: C0301: Line too long (109/100) (line-too-long)
+analytics/f500/collecting/PointCloud.py:177:0: C0325: Unnecessary parens after '=' keyword (superfluous-parens)
+analytics/f500/collecting/PointCloud.py:194:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:195:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/PointCloud.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/f500/collecting/PointCloud.py:1:0: C0103: Module name "PointCloud" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:5:0: C0115: Missing class docstring (missing-class-docstring)
+analytics/f500/collecting/PointCloud.py:102:8: C0103: Attribute name "untrimmedPCD" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:15:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:15:4: C0103: Method name "writeHistogram" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:15:56: C0103: Argument name "sampleName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:15:74: C0103: Argument name "dataRange" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:15:4: R0913: Too many arguments (7/5) (too-many-arguments)
+analytics/f500/collecting/PointCloud.py:18:12: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/PointCloud.py:19:16: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/PointCloud.py:20:16: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/PointCloud.py:21:16: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/PointCloud.py:18:12: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/PointCloud.py:25:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:25:4: C0103: Method name "getWavelengths" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:26:8: R1705: Unnecessary "else" after "return", remove the "else" and de-indent the code inside it (no-else-return)
+analytics/f500/collecting/PointCloud.py:33:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:45:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:52:8: C0200: Consider using enumerate instead of iterating with range and len (consider-using-enumerate)
+analytics/f500/collecting/PointCloud.py:53:12: C0103: Variable name "minColor" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:54:12: C0103: Variable name "maxColor" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:69:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:77:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:83:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:88:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:88:4: C0103: Method name "setColors" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:92:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:98:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:98:19: C0103: Argument name "zIndex" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/PointCloud.py:133:14: E1101: Module 'open3d.visualization' has no 'Visualizer' member (no-member)
+analytics/f500/collecting/PointCloud.py:165:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/PointCloud.py:102:8: W0201: Attribute 'untrimmedPCD' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/PointCloud.py:3:0: C0411: standard import "os" should be placed before third party imports "open3d", "numpy" (wrong-import-order)
+analytics/f500/collecting/PointCloud.py:3:0: W0611: Unused import os (unused-import)
+************* Module f500.collecting.deleteFAIRObject
+analytics/f500/collecting/deleteFAIRObject.py:27:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/deleteFAIRObject.py:29:0: C0305: Trailing newlines (trailing-newlines)
+analytics/f500/collecting/deleteFAIRObject.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/f500/collecting/deleteFAIRObject.py:1:0: C0103: Module name "deleteFAIRObject" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/deleteFAIRObject.py:9:20: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/deleteFAIRObject.py:13:0: C0103: Constant name "r" doesn't conform to UPPER_CASE naming style (invalid-name)
+analytics/f500/collecting/deleteFAIRObject.py:14:0: C0103: Constant name "host" doesn't conform to UPPER_CASE naming style (invalid-name)
+analytics/f500/collecting/deleteFAIRObject.py:16:26: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/deleteFAIRObject.py:18:26: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/deleteFAIRObject.py:21:26: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/deleteFAIRObject.py:23:26: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/deleteFAIRObject.py:26:26: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/deleteFAIRObject.py:2:0: C0411: standard import "sys" should be placed before third party import "requests" (wrong-import-order)
+************* Module f500.collecting.F500
+analytics/f500/collecting/F500.py:43:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:44:23: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:46:0: C0301: Line too long (114/100) (line-too-long)
+analytics/f500/collecting/F500.py:47:0: C0301: Line too long (144/100) (line-too-long)
+analytics/f500/collecting/F500.py:48:0: C0301: Line too long (102/100) (line-too-long)
+analytics/f500/collecting/F500.py:49:0: C0301: Line too long (109/100) (line-too-long)
+analytics/f500/collecting/F500.py:50:0: C0301: Line too long (109/100) (line-too-long)
+analytics/f500/collecting/F500.py:61:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:62:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:64:0: C0301: Line too long (120/100) (line-too-long)
+analytics/f500/collecting/F500.py:68:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:78:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:85:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:94:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:97:0: C0301: Line too long (107/100) (line-too-long)
+analytics/f500/collecting/F500.py:100:0: C0301: Line too long (183/100) (line-too-long)
+analytics/f500/collecting/F500.py:107:0: C0301: Line too long (120/100) (line-too-long)
+analytics/f500/collecting/F500.py:113:0: C0301: Line too long (138/100) (line-too-long)
+analytics/f500/collecting/F500.py:116:0: C0301: Line too long (133/100) (line-too-long)
+analytics/f500/collecting/F500.py:117:0: C0301: Line too long (109/100) (line-too-long)
+analytics/f500/collecting/F500.py:118:0: C0301: Line too long (113/100) (line-too-long)
+analytics/f500/collecting/F500.py:119:0: C0301: Line too long (115/100) (line-too-long)
+analytics/f500/collecting/F500.py:123:0: C0301: Line too long (118/100) (line-too-long)
+analytics/f500/collecting/F500.py:134:0: C0301: Line too long (115/100) (line-too-long)
+analytics/f500/collecting/F500.py:151:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:154:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:160:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:161:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:165:43: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:167:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:179:0: C0301: Line too long (105/100) (line-too-long)
+analytics/f500/collecting/F500.py:185:59: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:193:0: C0301: Line too long (103/100) (line-too-long)
+analytics/f500/collecting/F500.py:197:0: C0301: Line too long (145/100) (line-too-long)
+analytics/f500/collecting/F500.py:198:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:199:0: C0301: Line too long (201/100) (line-too-long)
+analytics/f500/collecting/F500.py:201:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:204:0: C0301: Line too long (126/100) (line-too-long)
+analytics/f500/collecting/F500.py:213:0: C0301: Line too long (113/100) (line-too-long)
+analytics/f500/collecting/F500.py:215:0: C0301: Line too long (115/100) (line-too-long)
+analytics/f500/collecting/F500.py:222:0: C0301: Line too long (113/100) (line-too-long)
+analytics/f500/collecting/F500.py:237:41: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:242:41: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:247:41: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:253:41: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:259:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:262:46: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:265:0: C0301: Line too long (189/100) (line-too-long)
+analytics/f500/collecting/F500.py:267:16: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:274:80: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:280:78: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:286:78: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:292:78: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:302:46: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:303:0: C0301: Line too long (147/100) (line-too-long)
+analytics/f500/collecting/F500.py:306:83: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:307:0: C0301: Line too long (169/100) (line-too-long)
+analytics/f500/collecting/F500.py:309:67: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:313:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:318:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:321:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:323:0: C0301: Line too long (112/100) (line-too-long)
+analytics/f500/collecting/F500.py:326:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:333:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:335:0: C0301: Line too long (103/100) (line-too-long)
+analytics/f500/collecting/F500.py:336:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:339:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:344:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:345:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:348:0: C0301: Line too long (108/100) (line-too-long)
+analytics/f500/collecting/F500.py:349:104: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:349:0: C0301: Line too long (104/100) (line-too-long)
+analytics/f500/collecting/F500.py:354:77: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:358:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:365:0: C0301: Line too long (109/100) (line-too-long)
+analytics/f500/collecting/F500.py:366:0: C0301: Line too long (152/100) (line-too-long)
+analytics/f500/collecting/F500.py:379:0: C0301: Line too long (138/100) (line-too-long)
+analytics/f500/collecting/F500.py:380:0: C0301: Line too long (168/100) (line-too-long)
+analytics/f500/collecting/F500.py:381:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:384:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:387:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:388:0: C0301: Line too long (108/100) (line-too-long)
+analytics/f500/collecting/F500.py:389:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:391:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:393:91: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:394:0: C0301: Line too long (133/100) (line-too-long)
+analytics/f500/collecting/F500.py:398:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:400:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:415:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:419:0: C0301: Line too long (106/100) (line-too-long)
+analytics/f500/collecting/F500.py:422:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:424:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:431:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:444:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:448:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:449:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:458:0: C0301: Line too long (138/100) (line-too-long)
+analytics/f500/collecting/F500.py:461:0: C0301: Line too long (181/100) (line-too-long)
+analytics/f500/collecting/F500.py:463:0: C0301: Line too long (126/100) (line-too-long)
+analytics/f500/collecting/F500.py:465:68: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:466:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:471:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:473:95: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:474:0: C0301: Line too long (146/100) (line-too-long)
+analytics/f500/collecting/F500.py:483:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:488:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:494:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:497:0: C0301: Line too long (124/100) (line-too-long)
+analytics/f500/collecting/F500.py:507:0: C0301: Line too long (180/100) (line-too-long)
+analytics/f500/collecting/F500.py:511:0: C0301: Line too long (139/100) (line-too-long)
+analytics/f500/collecting/F500.py:513:0: C0301: Line too long (114/100) (line-too-long)
+analytics/f500/collecting/F500.py:514:0: C0301: Line too long (112/100) (line-too-long)
+analytics/f500/collecting/F500.py:515:0: C0301: Line too long (111/100) (line-too-long)
+analytics/f500/collecting/F500.py:516:0: C0301: Line too long (113/100) (line-too-long)
+analytics/f500/collecting/F500.py:517:0: C0301: Line too long (109/100) (line-too-long)
+analytics/f500/collecting/F500.py:519:0: C0301: Line too long (141/100) (line-too-long)
+analytics/f500/collecting/F500.py:520:0: C0301: Line too long (152/100) (line-too-long)
+analytics/f500/collecting/F500.py:523:0: C0301: Line too long (121/100) (line-too-long)
+analytics/f500/collecting/F500.py:524:43: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:525:0: C0301: Line too long (158/100) (line-too-long)
+analytics/f500/collecting/F500.py:549:0: C0301: Line too long (142/100) (line-too-long)
+analytics/f500/collecting/F500.py:554:0: C0301: Line too long (135/100) (line-too-long)
+analytics/f500/collecting/F500.py:559:0: C0301: Line too long (137/100) (line-too-long)
+analytics/f500/collecting/F500.py:574:0: C0301: Line too long (111/100) (line-too-long)
+analytics/f500/collecting/F500.py:577:36: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:585:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:588:40: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:589:0: C0301: Line too long (106/100) (line-too-long)
+analytics/f500/collecting/F500.py:595:0: C0301: Line too long (146/100) (line-too-long)
+analytics/f500/collecting/F500.py:600:0: C0301: Line too long (113/100) (line-too-long)
+analytics/f500/collecting/F500.py:603:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:606:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:608:16: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:613:27: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:618:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:619:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:630:0: C0301: Line too long (106/100) (line-too-long)
+analytics/f500/collecting/F500.py:633:0: C0301: Line too long (103/100) (line-too-long)
+analytics/f500/collecting/F500.py:637:0: C0301: Line too long (143/100) (line-too-long)
+analytics/f500/collecting/F500.py:638:86: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:643:81: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:644:98: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:650:106: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:650:0: C0301: Line too long (106/100) (line-too-long)
+analytics/f500/collecting/F500.py:651:0: C0301: Line too long (127/100) (line-too-long)
+analytics/f500/collecting/F500.py:653:0: C0301: Line too long (168/100) (line-too-long)
+analytics/f500/collecting/F500.py:654:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:655:0: C0301: Line too long (117/100) (line-too-long)
+analytics/f500/collecting/F500.py:657:0: C0301: Line too long (158/100) (line-too-long)
+analytics/f500/collecting/F500.py:658:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:659:0: C0301: Line too long (117/100) (line-too-long)
+analytics/f500/collecting/F500.py:661:0: C0301: Line too long (158/100) (line-too-long)
+analytics/f500/collecting/F500.py:663:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:664:0: C0301: Line too long (117/100) (line-too-long)
+analytics/f500/collecting/F500.py:666:0: C0301: Line too long (158/100) (line-too-long)
+analytics/f500/collecting/F500.py:668:0: C0301: Line too long (115/100) (line-too-long)
+analytics/f500/collecting/F500.py:670:0: C0301: Line too long (157/100) (line-too-long)
+analytics/f500/collecting/F500.py:672:0: C0301: Line too long (144/100) (line-too-long)
+analytics/f500/collecting/F500.py:673:0: C0301: Line too long (154/100) (line-too-long)
+analytics/f500/collecting/F500.py:674:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:674:0: C0325: Unnecessary parens after 'if' keyword (superfluous-parens)
+analytics/f500/collecting/F500.py:675:0: C0301: Line too long (154/100) (line-too-long)
+analytics/f500/collecting/F500.py:678:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:686:0: C0301: Line too long (174/100) (line-too-long)
+analytics/f500/collecting/F500.py:687:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:705:0: C0301: Line too long (110/100) (line-too-long)
+analytics/f500/collecting/F500.py:714:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:715:67: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:717:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:721:58: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:724:0: C0301: Line too long (108/100) (line-too-long)
+analytics/f500/collecting/F500.py:728:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/F500.py:729:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:736:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:737:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500.py:557:41: W0511: TODO: get this from the metadata file (fixme)
+analytics/f500/collecting/F500.py:1:0: C0103: Module name "F500" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:14:0: W0401: Wildcard import isatools.model (wildcard-import)
+analytics/f500/collecting/F500.py:25:0: E0401: Unable to import 'Fairdom' (import-error)
+analytics/f500/collecting/F500.py:30:0: C0115: Missing class docstring (missing-class-docstring)
+analytics/f500/collecting/F500.py:46:8: C0103: Attribute name "columnsToDrop" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:53:8: C0103: Attribute name "assaysDone" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:156:8: C0103: Attribute name "investigationPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:540:8: C0103: Attribute name "sourceContainer" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:543:8: C0103: Attribute name "studyName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:551:12: C0103: Attribute name "NPEC" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:628:12: C0103: Attribute name "sampleName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:30:0: R0902: Too many instance attributes (20/7) (too-many-instance-attributes)
+analytics/f500/collecting/F500.py:57:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:57:4: C0103: Method name "commandLineInterface" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:162:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:162:4: C0103: Method name "setLogger" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:169:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:169:4: C0103: Method name "removeAfterSpaceFromDataMatrix" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:172:8: W0702: No exception type(s) specified (bare-except)
+analytics/f500/collecting/F500.py:176:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:176:4: C0103: Method name "createISA" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:187:11: C0121: Comparison 'self.studyName != None' should be 'self.studyName is not None' (singleton-comparison)
+analytics/f500/collecting/F500.py:202:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:202:4: C0103: Method name "writeISAJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:203:8: C0103: Variable name "jsonOutput" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:203:21: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/F500.py:203:21: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/F500.py:209:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:209:4: C0103: Method name "copyPots" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:216:8: W0702: No exception type(s) specified (bare-except)
+analytics/f500/collecting/F500.py:217:29: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:221:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:221:4: C0103: Method name "measurementsToFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:228:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:228:4: C0103: Method name "rawMeasurementsToFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:235:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:235:4: C0103: Method name "addPointClouds" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:236:19: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:241:19: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:246:19: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:252:19: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:261:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:261:4: C0103: Method name "copyPointcloudFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:261:38: C0103: Argument name "fullPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:263:12: C0103: Variable name "AB" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:264:12: C0103: Variable name "pointcloudPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:265:29: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:269:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:270:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:275:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:276:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:281:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:282:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:287:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:288:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:293:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:294:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:301:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:301:4: C0103: Method name "copyPlotPointcloudFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:301:42: C0103: Argument name "fullPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:301:4: R0914: Too many local variables (20/15) (too-many-locals)
+analytics/f500/collecting/F500.py:305:12: C0103: Variable name "AB" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:306:12: C0103: Variable name "pointcloudPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:353:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:309:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:314:20: C0103: Variable name "MS" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:323:61: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:325:65: E0602: Undefined variable 'full_file_path' (undefined-variable)
+analytics/f500/collecting/F500.py:302:8: R1702: Too many nested blocks (6/5) (too-many-nested-blocks)
+analytics/f500/collecting/F500.py:348:41: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:354:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:361:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:361:4: C0103: Method name "createSample" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:361:4: R0913: Too many arguments (6/5) (too-many-arguments)
+analytics/f500/collecting/F500.py:366:85: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:374:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:374:4: C0103: Method name "createAssay" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:393:8: C0103: Variable name "fullPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:395:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:401:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:404:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:407:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:410:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:414:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:418:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:425:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:428:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:432:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:436:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:440:8: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:374:4: R0915: Too many statements (54/50) (too-many-statements)
+analytics/f500/collecting/F500.py:453:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:453:4: C0103: Method name "createAssayPlot" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:463:29: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:464:11: C0121: Comparison 'sample != None' should be 'sample is not None' (singleton-comparison)
+analytics/f500/collecting/F500.py:473:12: C0103: Variable name "fullPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:475:12: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:479:12: C0103: Variable name "dataFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:453:35: W0613: Unused argument 'path' (unused-argument)
+analytics/f500/collecting/F500.py:453:49: W0613: Unused argument 'title' (unused-argument)
+analytics/f500/collecting/F500.py:486:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:486:4: C0103: Method name "correctDataMatrix" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:486:4: E0213: Method 'correctDataMatrix' should have "self" as first argument (no-self-argument)
+analytics/f500/collecting/F500.py:487:40: E1136: Value 'row' is unsubscriptable (unsubscriptable-object)
+analytics/f500/collecting/F500.py:487:66: E1136: Value 'row' is unsubscriptable (unsubscriptable-object)
+analytics/f500/collecting/F500.py:492:12: E1137: 'row' does not support item assignment (unsupported-assignment-operation)
+analytics/f500/collecting/F500.py:495:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:521:27: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:522:44: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:522:44: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:523:20: C0103: Variable name "assayPlot" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:527:41: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:527:24: W4902: Using deprecated method warn() (deprecated-method)
+analytics/f500/collecting/F500.py:527:41: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:535:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:535:4: C0103: Method name "getDirectoryListing" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:535:34: C0103: Argument name "rootFolder" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:538:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:544:11: C0121: Comparison 'self.metadata == None' should be 'self.metadata is None' (singleton-comparison)
+analytics/f500/collecting/F500.py:547:11: C0121: Comparison 'self.args.metadata != None' should be 'self.args.metadata is not None' (singleton-comparison)
+analytics/f500/collecting/F500.py:549:34: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:549:34: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:550:16: R1722: Consider using 'sys.exit' instead (consider-using-sys-exit)
+analytics/f500/collecting/F500.py:554:34: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:554:34: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:555:16: R1722: Consider using 'sys.exit' instead (consider-using-sys-exit)
+analytics/f500/collecting/F500.py:562:8: C0103: Variable name "previousAssay" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:565:29: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:565:29: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:570:24: C0103: Variable name "assayName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:571:27: C0121: Comparison 'self.checkAssayName.match(assayName) != None' should be 'self.checkAssayName.match(assayName) is not None' (singleton-comparison)
+analytics/f500/collecting/F500.py:574:49: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:574:32: W4902: Using deprecated method warn() (deprecated-method)
+analytics/f500/collecting/F500.py:574:49: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:599:39: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:586:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:586:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:587:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:587:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:596:36: W0702: No exception type(s) specified (bare-except)
+analytics/f500/collecting/F500.py:564:8: R1702: Too many nested blocks (9/5) (too-many-nested-blocks)
+analytics/f500/collecting/F500.py:589:40: C0103: Variable name "currentMeasurements" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:597:57: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:597:40: W4902: Using deprecated method warn() (deprecated-method)
+analytics/f500/collecting/F500.py:597:57: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:598:36: C0103: Variable name "previousAssay" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:600:56: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:600:56: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:605:45: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:605:45: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:610:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:611:36: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:611:36: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:538:4: R0912: Too many branches (23/12) (too-many-branches)
+analytics/f500/collecting/F500.py:538:4: R0915: Too many statements (70/50) (too-many-statements)
+analytics/f500/collecting/F500.py:566:22: W0612: Unused variable 'dirs' (unused-variable)
+analytics/f500/collecting/F500.py:620:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:620:4: C0103: Method name "processPointclouds" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:620:4: R0914: Too many local variables (19/15) (too-many-locals)
+analytics/f500/collecting/F500.py:621:8: E0401: Unable to import 'PointCloud' (import-error)
+analytics/f500/collecting/F500.py:621:8: C0415: Import outside toplevel (PointCloud.PointCloud) (import-outside-toplevel)
+analytics/f500/collecting/F500.py:623:25: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:623:25: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:624:42: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/F500.py:624:42: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/F500.py:630:29: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:630:29: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:634:41: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:634:41: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:689:31: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:637:28: C0103: Variable name "outputBase" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:638:32: R0916: Too many boolean expressions in if statement (8/5) (too-many-boolean-expressions)
+analytics/f500/collecting/F500.py:646:32: C0103: Variable name "tmpFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:651:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:651:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:655:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:655:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:659:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:659:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:664:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:664:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:668:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:668:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:672:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:672:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:675:53: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:675:53: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:690:48: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:690:48: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:620:4: R0912: Too many branches (14/12) (too-many-branches)
+analytics/f500/collecting/F500.py:620:4: R0915: Too many statements (58/50) (too-many-statements)
+analytics/f500/collecting/F500.py:627:8: R1702: Too many nested blocks (8/5) (too-many-nested-blocks)
+analytics/f500/collecting/F500.py:646:42: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/F500.py:693:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:693:4: C0103: Method name "combineHistograms" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:694:25: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:694:25: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:695:25: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:695:25: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:696:42: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/F500.py:696:42: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/F500.py:698:8: C0103: Variable name "histogramLabels" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:699:12: C0103: Variable name "hLabel" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:701:12: C0103: Variable name "histogramList" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:705:33: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:705:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:709:45: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:709:45: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:710:28: C0103: Variable name "histogramFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:716:35: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:712:32: C0103: Variable name "histogramData" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:714:36: C0103: Variable name "histogramData" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:717:52: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:717:52: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:727:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500.py:699:8: R1702: Too many nested blocks (7/5) (too-many-nested-blocks)
+analytics/f500/collecting/F500.py:719:16: C0103: Variable name "histogramFilename" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500.py:724:33: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:724:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:728:36: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:728:36: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:730:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500.py:731:25: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:731:25: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:732:25: W1202: Use lazy % formatting in logging functions (logging-format-interpolation)
+analytics/f500/collecting/F500.py:732:25: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500.py:733:42: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/F500.py:733:42: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/F500.py:153:8: W0201: Attribute 'args' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:156:8: W0201: Attribute 'investigationPath' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:163:8: W0201: Attribute 'logger' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:539:8: W0201: Attribute 'source' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:540:8: W0201: Attribute 'sourceContainer' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:542:8: W0201: Attribute 'datamatrix_file' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:543:8: W0201: Attribute 'studyName' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:559:12: W0201: Attribute 'studyName' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:551:12: W0201: Attribute 'NPEC' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:552:12: W0201: Attribute 'NPEC' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:560:8: W0201: Attribute 'start' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:583:36: W0201: Attribute 'file' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:584:36: W0201: Attribute 'root' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:628:12: W0201: Attribute 'sampleName' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:703:16: W0201: Attribute 'sampleName' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:629:12: W0201: Attribute 'timepoint' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:704:16: W0201: Attribute 'timepoint' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500.py:30:0: R0904: Too many public methods (21/20) (too-many-public-methods)
+analytics/f500/collecting/F500.py:10:0: C0411: standard import "pprint" should be placed before third party import "pandas" (wrong-import-order)
+analytics/f500/collecting/F500.py:11:0: C0411: standard import "json" should be placed before third party import "pandas" (wrong-import-order)
+analytics/f500/collecting/F500.py:17:0: C0411: standard import "re" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/F500.py:18:0: C0411: standard import "collections.defaultdict" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/F500.py:19:0: C0411: standard import "logging" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/F500.py:20:0: C0411: standard import "shutil" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/F500.py:21:0: C0411: standard import "glob" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/F500.py:22:0: C0411: standard import "gzip" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/F500.py:23:0: C0411: standard import "tempfile" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/F500.py:27:0: C0411: standard import "datetime" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools" (...) "numpy", "Fairdom.Fairdom", "matplotlib.cm" (wrong-import-order)
+analytics/f500/collecting/F500.py:28:0: C0411: standard import "string" should be placed before third party imports "pandas", "isatools.isajson.ISAJSONEncoder", "isatools" (...) "numpy", "Fairdom.Fairdom", "matplotlib.cm" (wrong-import-order)
+analytics/f500/collecting/F500.py:7:0: W0611: Unused import sys (unused-import)
+analytics/f500/collecting/F500.py:10:0: W0611: Unused import pprint (unused-import)
+analytics/f500/collecting/F500.py:13:0: W0611: Unused import isatools (unused-import)
+analytics/f500/collecting/F500.py:15:0: W0611: Unused isatab imported from isatools (unused-import)
+analytics/f500/collecting/F500.py:14:0: W0614: Unused import(s) Commentable, Comment, set_context, RawDataFile, DerivedDataFile, RawSpectralDataFile, DerivedArrayDataFile, ArrayDataFile, DerivedSpectralDataFile, ProteinAssignmentFile, PeptideAssignmentFile, DerivedArrayDataMatrixFile, PostTranslationalModificationAssignmentFile, AcquisitionParameterDataFile, FreeInductionDecayDataFile, FactorValue, StudyFactor, log, Material, Extract, LabeledExtract, MetadataMixin, StudyAssayMixin, OntologySource, ParameterValue, Person, Process, ProcessSequenceNode, Protocol, load_protocol_types_info, ProtocolComponent, ProtocolParameter, Publication, plink, batch_create_assays and batch_create_materials from wildcard import of isatools.model (unused-wildcard-import)
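The dominant finding for `F500.py` is W0201 (attribute defined outside `__init__`). This is a generic sketch of the fix pattern pylint suggests, not the project's actual class: pre-declare every attribute in the constructor, then let helper methods only assign to it.

```python
class F500:
    """Hypothetical sketch of the W0201 fix: every attribute is declared in __init__."""

    def __init__(self):
        # Pylint's attribute-defined-outside-init fires when an attribute first
        # appears inside a helper method; pre-declaring them here resolves it.
        self.source = None
        self.source_container = None
        self.datamatrix_file = None
        self.study_name = None

    def load(self, source):
        # This assignment no longer triggers W0201 because the attribute
        # already exists on the instance.
        self.source = source
```

The same pattern covers `sampleName`, `timepoint`, and the other flagged attributes; the C0411 import-order warnings are fixed separately by listing standard-library imports before `pandas` and `isatools`.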
+************* Module f500.collecting.processPointClouds
+analytics/f500/collecting/processPointClouds.py:17:32: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/processPointClouds.py:30:0: C0301: Line too long (225/100) (line-too-long)
+analytics/f500/collecting/processPointClouds.py:67:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/processPointClouds.py:70:0: C0305: Trailing newlines (trailing-newlines)
+analytics/f500/collecting/processPointClouds.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/f500/collecting/processPointClouds.py:1:0: C0103: Module name "processPointClouds" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/processPointClouds.py:6:0: W0401: Wildcard import isatools.model (wildcard-import)
+analytics/f500/collecting/processPointClouds.py:16:29: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/processPointClouds.py:16:29: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/processPointClouds.py:20:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/processPointClouds.py:20:0: C0103: Function name "writeHistogram" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/processPointClouds.py:20:25: W0621: Redefining name 'filename' from outer scope (line 36) (redefined-outer-name)
+analytics/f500/collecting/processPointClouds.py:22:8: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/processPointClouds.py:22:8: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/processPointClouds.py:28:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/processPointClouds.py:28:18: W0621: Redefining name 'pcd' from outer scope (line 35) (redefined-outer-name)
+analytics/f500/collecting/processPointClouds.py:29:4: E0602: Undefined variable 'np' (undefined-variable)
+analytics/f500/collecting/processPointClouds.py:30:13: E0602: Undefined variable 'np' (undefined-variable)
+analytics/f500/collecting/processPointClouds.py:30:46: E0602: Undefined variable 'np' (undefined-variable)
+analytics/f500/collecting/processPointClouds.py:30:87: E0602: Undefined variable 'np' (undefined-variable)
+analytics/f500/collecting/processPointClouds.py:30:122: E0602: Undefined variable 'np' (undefined-variable)
+analytics/f500/collecting/processPointClouds.py:30:156: E0602: Undefined variable 'np' (undefined-variable)
+analytics/f500/collecting/processPointClouds.py:30:191: E0602: Undefined variable 'np' (undefined-variable)
+analytics/f500/collecting/processPointClouds.py:9:0: C0411: standard import "collections.defaultdict" should be placed before third party imports "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/processPointClouds.py:10:0: C0411: standard import "shutil" should be placed before third party imports "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/processPointClouds.py:11:0: C0411: standard import "gzip" should be placed before third party imports "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson" (wrong-import-order)
+analytics/f500/collecting/processPointClouds.py:13:0: C0411: standard import "tempfile" should be placed before third party imports "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*", "isatools.isatab", "isatools.isajson", "open3d" (wrong-import-order)
+analytics/f500/collecting/processPointClouds.py:3:0: W0611: Unused import json (unused-import)
+analytics/f500/collecting/processPointClouds.py:4:0: W0611: Unused ISAJSONEncoder imported from isatools.isajson (unused-import)
+analytics/f500/collecting/processPointClouds.py:5:0: W0611: Unused import isatools (unused-import)
+analytics/f500/collecting/processPointClouds.py:7:0: W0611: Unused isatab imported from isatools (unused-import)
+analytics/f500/collecting/processPointClouds.py:6:0: W0614: Unused import(s) Assay, Characteristic, Commentable, Comment, set_context, DataFile, RawDataFile, DerivedDataFile, RawSpectralDataFile, DerivedArrayDataFile, ArrayDataFile, DerivedSpectralDataFile, ProteinAssignmentFile, PeptideAssignmentFile, DerivedArrayDataMatrixFile, PostTranslationalModificationAssignmentFile, AcquisitionParameterDataFile, FreeInductionDecayDataFile, FactorValue, StudyFactor, Investigation, log, Material, Extract, LabeledExtract, MetadataMixin, StudyAssayMixin, OntologyAnnotation, OntologySource, ParameterValue, Person, Process, ProcessSequenceNode, Protocol, load_protocol_types_info, ProtocolComponent, ProtocolParameter, Publication, Sample, Source, Study, plink, batch_create_assays and batch_create_materials from wildcard import of isatools.model (unused-wildcard-import)
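Several `processPointClouds.py` findings cluster on one line of file handling: R1732 (use `with`), W1514 (no explicit encoding), C0103 (camelCase name), and C0116 (missing docstring). A minimal sketch of the combined fix, using a hypothetical signature rather than the script's real one (the E0602 `np` errors are resolved separately by adding `import numpy as np`):

```python
def write_histogram(counts, filename):
    """Write (label, count) pairs as tab-separated lines (snake_case fixes C0103)."""
    # 'with' closes the handle on every exit path (R1732) and the explicit
    # encoding removes the platform-dependent default (W1514).
    with open(filename, "w", encoding="utf-8") as handle:
        for label, count in counts:
            handle.write(f"{label}\t{count}\n")
```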
+************* Module f500.collecting.Fairdom
+analytics/f500/collecting/Fairdom.py:19:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:27:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:37:0: C0301: Line too long (128/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:47:0: C0301: Line too long (150/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:50:0: C0301: Line too long (126/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:52:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:61:0: C0301: Line too long (126/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:62:0: C0301: Line too long (128/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:65:0: C0301: Line too long (103/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:67:0: C0301: Line too long (122/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:69:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:76:168: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:76:0: C0301: Line too long (168/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:77:0: C0301: Line too long (104/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:81:0: C0301: Line too long (124/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:83:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:86:62: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:88:0: C0301: Line too long (110/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:92:65: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:94:0: C0301: Line too long (119/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:99:74: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:101:0: C0301: Line too long (134/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:103:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:113:0: C0301: Line too long (120/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:115:129: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:115:0: C0301: Line too long (129/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:117:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:125:0: C0301: Line too long (110/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:126:13: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:127:0: C0301: Line too long (104/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:129:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:143:0: C0301: Line too long (104/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:144:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:145:0: C0301: Line too long (112/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:148:76: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:149:0: C0301: Line too long (133/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:150:29: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:151:0: C0301: Line too long (112/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:158:0: C0301: Line too long (161/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:163:0: C0301: Line too long (143/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:164:29: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:165:0: C0301: Line too long (116/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:172:0: C0301: Line too long (105/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:176:0: C0301: Line too long (130/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:177:29: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:178:0: C0301: Line too long (113/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:189:0: C0301: Line too long (102/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:190:21: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:191:0: C0301: Line too long (104/100) (line-too-long)
+analytics/f500/collecting/Fairdom.py:204:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/Fairdom.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/f500/collecting/Fairdom.py:1:0: C0103: Module name "Fairdom" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:3:0: W0401: Wildcard import isatools.model (wildcard-import)
+analytics/f500/collecting/Fairdom.py:12:0: C0115: Missing class docstring (missing-class-docstring)
+analytics/f500/collecting/Fairdom.py:131:12: C0103: Attribute name "currentStudies" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:23:28: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:28:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:28:4: C0103: Method name "createInvestigationJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:29:8: C0103: Variable name "investigationJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:40:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:40:4: C0103: Method name "createStudyJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:40:37: C0103: Argument name "investigationID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:41:8: C0103: Variable name "studyJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:53:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:53:4: C0103: Method name "createAssayJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:53:37: C0103: Argument name "studyID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:54:8: C0103: Variable name "assayJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:70:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:70:4: C0103: Method name "createDataFileJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:71:8: C0103: Variable name "data_fileJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:84:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:84:4: C0103: Method name "addSampleToAssayJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:84:35: C0103: Argument name "sampleID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:84:45: C0103: Argument name "assayJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:90:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:90:4: C0103: Method name "addDataFileToAssayJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:90:37: C0103: Argument name "data_fileID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:90:50: C0103: Argument name "assayJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:96:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:96:4: C0103: Method name "addDataFilesToSampleJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:96:39: C0103: Argument name "assayJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:96:50: C0103: Argument name "sampleJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:104:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:104:4: C0103: Method name "createSampleJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:105:8: C0103: Variable name "sampleJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:118:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/Fairdom.py:118:4: R0914: Too many local variables (18/15) (too-many-locals)
+analytics/f500/collecting/Fairdom.py:120:8: C0103: Variable name "investigationJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:121:25: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:123:11: R1714: Consider merging these comparisons with 'in' by using 'r.status_code in (201, 200)'. Use a set instead if elements are hashable. (consider-using-in)
+analytics/f500/collecting/Fairdom.py:124:12: C0103: Variable name "investigationID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:125:29: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:127:30: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:128:12: R1722: Consider using 'sys.exit' instead (consider-using-sys-exit)
+analytics/f500/collecting/Fairdom.py:133:12: C0103: Variable name "countFiles" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:134:12: C0103: Variable name "countAssays" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:136:16: R1723: Unnecessary "else" after "break", remove the "else" and de-indent the code inside it (no-else-break)
+analytics/f500/collecting/Fairdom.py:139:20: C0103: Variable name "countAssays" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:140:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:146:27: R1714: Consider merging these comparisons with 'in' by using 'r.status_code in (201, 200)'. Use a set instead if elements are hashable. (consider-using-in)
+analytics/f500/collecting/Fairdom.py:147:28: C0103: Variable name "studyID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:149:45: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:151:46: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:152:28: R1722: Consider using 'sys.exit' instead (consider-using-sys-exit)
+analytics/f500/collecting/Fairdom.py:154:20: C0103: Variable name "studyJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:155:16: C0103: Variable name "assayJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:159:24: C0103: Variable name "data_fileJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:161:27: R1714: Consider merging these comparisons with 'in' by using 'r.status_code in (201, 200)'. Use a set instead if elements are hashable. (consider-using-in)
+analytics/f500/collecting/Fairdom.py:162:28: C0103: Variable name "data_fileID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:163:45: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:165:46: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:166:28: R1722: Consider using 'sys.exit' instead (consider-using-sys-exit)
+analytics/f500/collecting/Fairdom.py:173:27: R1714: Consider merging these comparisons with 'in' by using 'r.status_code in (201, 200)'. Use a set instead if elements are hashable. (consider-using-in)
+analytics/f500/collecting/Fairdom.py:174:28: C0103: Variable name "sampleID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:176:45: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:178:46: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:182:28: R1722: Consider using 'sys.exit' instead (consider-using-sys-exit)
+analytics/f500/collecting/Fairdom.py:183:20: C0103: Variable name "sampleID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:185:20: C0103: Variable name "sampleJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:130:8: R1702: Too many nested blocks (6/5) (too-many-nested-blocks)
+analytics/f500/collecting/Fairdom.py:187:19: R1714: Consider merging these comparisons with 'in' by using 'r.status_code in (201, 200)'. Use a set instead if elements are hashable. (consider-using-in)
+analytics/f500/collecting/Fairdom.py:188:20: C0103: Variable name "assayID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/Fairdom.py:189:37: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:191:38: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/Fairdom.py:201:24: R1722: Consider using 'sys.exit' instead (consider-using-sys-exit)
+analytics/f500/collecting/Fairdom.py:118:4: R0912: Too many branches (23/12) (too-many-branches)
+analytics/f500/collecting/Fairdom.py:118:4: R0915: Too many statements (73/50) (too-many-statements)
+analytics/f500/collecting/Fairdom.py:133:12: W0612: Unused variable 'countFiles' (unused-variable)
+analytics/f500/collecting/Fairdom.py:185:20: W0612: Unused variable 'sampleJSON' (unused-variable)
+analytics/f500/collecting/Fairdom.py:131:12: W0201: Attribute 'currentStudies' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/Fairdom.py:132:12: W0201: Attribute 'samples' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/Fairdom.py:8:0: C0411: standard import "json" should be placed before third party imports "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*" (...) "isatools.isajson", "pandas", "requests" (wrong-import-order)
+analytics/f500/collecting/Fairdom.py:9:0: C0411: standard import "time" should be placed before third party imports "isatools.isajson.ISAJSONEncoder", "isatools", "isatools.model.*" (...) "isatools.isajson", "pandas", "requests" (wrong-import-order)
+analytics/f500/collecting/Fairdom.py:1:0: W0611: Unused ISAJSONEncoder imported from isatools.isajson (unused-import)
+analytics/f500/collecting/Fairdom.py:2:0: W0611: Unused import isatools (unused-import)
+analytics/f500/collecting/Fairdom.py:4:0: W0611: Unused isatab imported from isatools (unused-import)
+analytics/f500/collecting/Fairdom.py:5:0: W0611: Unused isajson imported from isatools (unused-import)
+analytics/f500/collecting/Fairdom.py:6:0: W0611: Unused import pandas (unused-import)
+analytics/f500/collecting/Fairdom.py:8:0: W0611: Unused import json (unused-import)
+analytics/f500/collecting/Fairdom.py:3:0: W0614: Unused import(s) Assay, Characteristic, Commentable, Comment, set_context, DataFile, RawDataFile, DerivedDataFile, RawSpectralDataFile, DerivedArrayDataFile, ArrayDataFile, DerivedSpectralDataFile, ProteinAssignmentFile, PeptideAssignmentFile, DerivedArrayDataMatrixFile, PostTranslationalModificationAssignmentFile, AcquisitionParameterDataFile, FreeInductionDecayDataFile, FactorValue, StudyFactor, Investigation, log, Material, Extract, LabeledExtract, MetadataMixin, StudyAssayMixin, OntologyAnnotation, OntologySource, ParameterValue, Person, Process, ProcessSequenceNode, Protocol, load_protocol_types_info, ProtocolComponent, ProtocolParameter, Publication, Sample, Source, Study, plink, batch_create_assays and batch_create_materials from wildcard import of isatools.model (unused-wildcard-import)
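The `Fairdom.py` block repeats three refactors throughout its SEEK upload logic: R1714 (merge equality checks into a membership test), C0209 (prefer f-strings), and R1722 (prefer `sys.exit`). A self-contained sketch of all three, with an invented helper name standing in for the module's response handling:

```python
import sys

def check_response(status_code, resource_id):
    """Illustrative only: collapse the repeated status-code pattern pylint flags."""
    # R1714: 'status_code == 201 or status_code == 200' becomes a membership test.
    if status_code in (200, 201):
        # C0209: an f-string replaces %-style formatting.
        return f"created resource {resource_id}"
    # R1722: sys.exit raises SystemExit cleanly, unlike the exit() builtin.
    sys.exit(f"request failed with status {status_code}")
```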
+************* Module f500.collecting.F500Azure
+analytics/f500/collecting/F500Azure.py:6:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:10:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:11:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:29:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:34:0: C0301: Line too long (103/100) (line-too-long)
+analytics/f500/collecting/F500Azure.py:35:0: C0301: Line too long (107/100) (line-too-long)
+analytics/f500/collecting/F500Azure.py:37:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:42:0: C0301: Line too long (103/100) (line-too-long)
+analytics/f500/collecting/F500Azure.py:43:0: C0301: Line too long (107/100) (line-too-long)
+analytics/f500/collecting/F500Azure.py:47:0: C0301: Line too long (126/100) (line-too-long)
+analytics/f500/collecting/F500Azure.py:51:0: C0301: Line too long (113/100) (line-too-long)
+analytics/f500/collecting/F500Azure.py:65:46: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:68:0: C0301: Line too long (189/100) (line-too-long)
+analytics/f500/collecting/F500Azure.py:69:16: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:71:76: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:72:74: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:78:28: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:84:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/F500Azure.py:85:0: C0305: Trailing newlines (trailing-newlines)
+analytics/f500/collecting/F500Azure.py:1:0: C0114: Missing module docstring (missing-module-docstring)
+analytics/f500/collecting/F500Azure.py:1:0: C0103: Module name "F500Azure" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:3:0: E0401: Unable to import 'F500' (import-error)
+analytics/f500/collecting/F500Azure.py:5:0: C0115: Missing class docstring (missing-class-docstring)
+analytics/f500/collecting/F500Azure.py:9:8: C0103: Attribute name "experimentID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:31:8: C0103: Attribute name "sourceConnectionString" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:32:8: C0103: Attribute name "sourceContainerName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:33:8: C0103: Attribute name "sourceBlobName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:34:8: C0103: Attribute name "sourceBlobServiceClient" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:35:8: C0103: Attribute name "sourceContainerClient" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:39:8: C0103: Attribute name "targetConnectionString" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:40:8: C0103: Attribute name "targetContainerName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:41:8: C0103: Attribute name "targetBlobName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:42:8: C0103: Attribute name "targetBlobServiceClient" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:43:8: C0103: Attribute name "targetContainerClient" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:5:0: R0902: Too many instance attributes (13/7) (too-many-instance-attributes)
+analytics/f500/collecting/F500Azure.py:7:23: C0103: Argument name "experimentID" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:12:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500Azure.py:12:4: C0103: Method name "initAzure" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:27:26: E0602: Undefined variable 'enviroment' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:30:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500Azure.py:30:4: C0103: Method name "connectToSource" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:30:30: C0103: Argument name "sourceConnectionString" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:30:54: C0103: Argument name "sourceContainerName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:30:75: C0103: Argument name "sourceBlobName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:38:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500Azure.py:38:4: C0103: Method name "connectToTarget" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:38:30: C0103: Argument name "targetConnectionString" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:38:54: C0103: Argument name "targetContainerName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:38:75: C0103: Argument name "targetBlobName" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:45:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500Azure.py:45:4: C0103: Method name "writeISAJSON" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:46:8: C0103: Variable name "jsonOutput" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:46:21: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/F500Azure.py:47:25: E0602: Undefined variable 'json' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:47:60: E0602: Undefined variable 'ISAJSONEncoder' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:46:21: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/F500Azure.py:50:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500Azure.py:50:4: C0103: Method name "measurementsToFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:53:8: E0602: Undefined variable 'os' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:57:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500Azure.py:57:4: C0103: Method name "rawMeasurementsToFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:58:8: E0602: Undefined variable 'os' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:59:13: E0602: Undefined variable 'pandas' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:64:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500Azure.py:64:4: C0103: Method name "copyPointcloudFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:64:38: C0103: Argument name "fullPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:66:12: C0103: Variable name "AB" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:67:12: C0103: Variable name "pointcloudPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:68:29: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500Azure.py:75:19: W0718: Catching too general exception Exception (broad-exception-caught)
+analytics/f500/collecting/F500Azure.py:70:16: E0602: Undefined variable 'os' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:71:16: E0602: Undefined variable 'shutil' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:72:16: E0602: Undefined variable 'shutil' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:73:16: E0602: Undefined variable 'shutil' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:74:16: E0602: Undefined variable 'shutil' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:76:33: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/F500Azure.py:82:4: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/F500Azure.py:82:4: C0103: Method name "getDirectoryListing" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:82:28: C0103: Argument name "rootFolder" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/F500Azure.py:82:4: E0213: Method 'getDirectoryListing' should have "self" as first argument (no-self-argument)
+analytics/f500/collecting/F500Azure.py:83:15: E0602: Undefined variable 'os' (undefined-variable)
+analytics/f500/collecting/F500Azure.py:13:8: W0201: Attribute 'logger' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:28:8: W0201: Attribute 'metadata' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:31:8: W0201: Attribute 'sourceConnectionString' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:32:8: W0201: Attribute 'sourceContainerName' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:33:8: W0201: Attribute 'sourceBlobName' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:34:8: W0201: Attribute 'sourceBlobServiceClient' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:35:8: W0201: Attribute 'sourceContainerClient' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:39:8: W0201: Attribute 'targetConnectionString' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:40:8: W0201: Attribute 'targetContainerName' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:41:8: W0201: Attribute 'targetBlobName' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:42:8: W0201: Attribute 'targetBlobServiceClient' defined outside __init__ (attribute-defined-outside-init)
+analytics/f500/collecting/F500Azure.py:43:8: W0201: Attribute 'targetContainerClient' defined outside __init__ (attribute-defined-outside-init)
+************* Module f500.collecting.fairdom
+analytics/f500/collecting/fairdom.py:38:0: C0301: Line too long (114/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:39:0: C0301: Line too long (104/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:40:0: C0301: Line too long (111/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:41:20: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:45:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:46:0: C0301: Line too long (136/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:48:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:49:0: C0301: Line too long (101/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:67:0: C0301: Line too long (102/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:74:0: C0301: Line too long (114/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:81:51: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:93:0: C0301: Line too long (110/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:98:37: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:101:0: C0301: Line too long (103/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:102:0: C0301: Line too long (125/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:109:0: C0301: Line too long (124/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:119:0: C0301: Line too long (110/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:137:0: C0301: Line too long (120/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:149:0: C0301: Line too long (105/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:151:0: C0301: Line too long (107/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:165:0: C0325: Unnecessary parens after 'return' keyword (superfluous-parens)
+analytics/f500/collecting/fairdom.py:169:37: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:179:0: C0301: Line too long (170/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:185:0: C0301: Line too long (128/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:190:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:195:0: C0301: Line too long (114/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:196:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:199:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:206:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:208:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:210:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:212:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:213:0: C0301: Line too long (124/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:215:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:222:0: C0301: Line too long (115/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:223:0: C0301: Line too long (118/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:229:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:237:0: C0301: Line too long (120/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:243:0: C0301: Line too long (166/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:248:0: C0301: Line too long (133/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:249:0: C0301: Line too long (129/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:250:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:251:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:259:12: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:262:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:268:0: C0301: Line too long (112/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:272:0: C0301: Line too long (111/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:273:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/fairdom.py:286:0: C0301: Line too long (199/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:287:0: C0301: Line too long (103/100) (line-too-long)
+analytics/f500/collecting/fairdom.py:295:0: C0305: Trailing newlines (trailing-newlines)
+analytics/f500/collecting/fairdom.py:22:0: C0103: Constant name "base_url" doesn't conform to UPPER_CASE naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:34:0: C0103: Constant name "containing_project_id" doesn't conform to UPPER_CASE naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:53:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/fairdom.py:53:0: C0103: Function name "removeAfterSpaceFromDataMatrix" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:102:0: C0103: Constant name "metadata_csv" doesn't conform to UPPER_CASE naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:137:36: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/fairdom.py:137:36: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/fairdom.py:146:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/fairdom.py:146:0: C0103: Function name "copyPots" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:155:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/fairdom.py:155:0: C0103: Function name "measurementsToFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:155:23: W0621: Redefining name 'investigation' from outer scope (line 63) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:155:54: W0621: Redefining name 'measurements' from outer scope (line 144) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:155:23: W0613: Unused argument 'investigation' (unused-argument)
+analytics/f500/collecting/fairdom.py:160:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/fairdom.py:160:0: C0103: Function name "rawMeasurementsToFile" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:160:26: W0621: Redefining name 'investigation' from outer scope (line 63) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:160:57: W0621: Redefining name 'measurements' from outer scope (line 144) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:160:26: W0613: Unused argument 'investigation' (unused-argument)
+analytics/f500/collecting/fairdom.py:167:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/fairdom.py:167:0: C0103: Function name "addPointClouds" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:168:15: C0209: Formatting a regular string which could be an f-string (consider-using-f-string)
+analytics/f500/collecting/fairdom.py:175:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/fairdom.py:175:0: C0103: Function name "createAssay" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:175:0: R0914: Too many local variables (16/15) (too-many-locals)
+analytics/f500/collecting/fairdom.py:175:21: W0621: Redefining name 'investigation' from outer scope (line 63) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:175:42: W0621: Redefining name 'study_id' from outer scope (line 95) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:176:4: W0621: Redefining name 'data_file' from outer scope (line 105) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:191:4: W0621: Redefining name 'local_blob' from outer scope (line 115) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:197:4: W0621: Redefining name 'r' from outer scope (line 77) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:200:4: W0621: Redefining name 'populated_data_file' from outer scope (line 124) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:204:4: W0621: Redefining name 'data_file_id' from outer scope (line 128) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:205:4: W0621: Redefining name 'data_file_url' from outer scope (line 129) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:209:4: W0621: Redefining name 'blob_url' from outer scope (line 133) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:213:4: W0621: Redefining name 'upload' from outer scope (line 137) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:178:28: E0601: Using variable 'assay' before assignment (used-before-assignment)
+analytics/f500/collecting/fairdom.py:179:4: C0103: Variable name "fullPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:179:78: E0602: Undefined variable 'sample' (undefined-variable)
+analytics/f500/collecting/fairdom.py:179:104: E0602: Undefined variable 'description' (undefined-variable)
+analytics/f500/collecting/fairdom.py:179:135: E0602: Undefined variable 'description' (undefined-variable)
+analytics/f500/collecting/fairdom.py:180:4: C0103: Variable name "fullFilename" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:202:4: W0105: String statement has no effect (pointless-string-statement)
+analytics/f500/collecting/fairdom.py:207:4: W0105: String statement has no effect (pointless-string-statement)
+analytics/f500/collecting/fairdom.py:211:4: W0105: String statement has no effect (pointless-string-statement)
+analytics/f500/collecting/fairdom.py:213:40: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
+analytics/f500/collecting/fairdom.py:213:40: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
+analytics/f500/collecting/fairdom.py:204:4: W0612: Unused variable 'data_file_id' (unused-variable)
+analytics/f500/collecting/fairdom.py:205:4: W0612: Unused variable 'data_file_url' (unused-variable)
+analytics/f500/collecting/fairdom.py:235:0: C0116: Missing function or method docstring (missing-function-docstring)
+analytics/f500/collecting/fairdom.py:235:42: C0103: Argument name "investigationPath" doesn't conform to snake_case naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:235:0: R0913: Too many arguments (6/5) (too-many-arguments)
+analytics/f500/collecting/fairdom.py:235:13: W0621: Redefining name 'investigation' from outer scope (line 63) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:235:28: W0621: Redefining name 'measurements' from outer scope (line 144) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:235:42: W0621: Redefining name 'investigationPath' from outer scope (line 18) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:235:68: W0621: Redefining name 'metadata' from outer scope (line 51) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:235:78: W0621: Redefining name 'study_id' from outer scope (line 95) (redefined-outer-name)
+analytics/f500/collecting/fairdom.py:253:0: C0103: Constant name "previousAssay" doesn't conform to UPPER_CASE naming style (invalid-name)
+analytics/f500/collecting/fairdom.py:258:7: C0121: Comparison 'checkAssayName.match(assayName) != None' should be 'checkAssayName.match(assayName) is not None' (singleton-comparison)
+analytics/f500/collecting/fairdom.py:276:8: W0702: No exception type(s) specified (bare-except)
+analytics/f500/collecting/fairdom.py:286:63: E1101: Instance of 'dict' has no 'title' member (no-member)
+analytics/f500/collecting/fairdom.py:286:84: E1101: Instance of 'dict' has no 'studies' member (no-member)
+analytics/f500/collecting/fairdom.py:286:131: E1101: Instance of 'dict' has no 'studies' member (no-member)
+analytics/f500/collecting/fairdom.py:286:172: E1101: Instance of 'dict' has no 'measurements' member (no-member)
+analytics/f500/collecting/fairdom.py:9:0: C0411: standard import "pprint" should be placed before third party import "pandas" (wrong-import-order)
+analytics/f500/collecting/fairdom.py:10:0: C0411: standard import "re" should be placed before third party import "pandas" (wrong-import-order)
+analytics/f500/collecting/fairdom.py:11:0: C0411: standard import "collections.defaultdict" should be placed before third party import "pandas" (wrong-import-order)
+analytics/f500/collecting/fairdom.py:13:0: C0411: standard import "json" should be placed before third party imports "pandas", "requests" (wrong-import-order)
+analytics/f500/collecting/fairdom.py:14:0: C0411: standard import "string" should be placed before third party imports "pandas", "requests" (wrong-import-order)
+analytics/f500/collecting/fairdom.py:9:0: W0611: Unused import pprint (unused-import)
+analytics/f500/collecting/fairdom.py:11:0: W0611: Unused defaultdict imported from collections (unused-import)
+analytics/f500/collecting/fairdom.py:13:0: W0611: Unused import json (unused-import)
+analytics/f500/collecting/fairdom.py:14:0: W0611: Unused import string (unused-import)
+************* Module f500.collecting.toolkit
+analytics/f500/collecting/toolkit.py:12:0: C0303: Trailing whitespace (trailing-whitespace)
+analytics/f500/collecting/toolkit.py:32:0: C0305: Trailing newlines (trailing-newlines)
+analytics/f500/collecting/toolkit.py:10:0: E0401: Unable to import 'F500' (import-error)
+analytics/f500/collecting/toolkit.py:6:0: W0611: Unused import sys (unused-import)
+analytics/f500/collecting/toolkit.py:7:0: W0611: Unused import os (unused-import)
+analytics/f500/collecting/toolkit.py:8:0: W0611: Unused import pandas (unused-import)
+analytics/f500/collecting/toolkit.py:1:0: R0801: Similar lines in 2 files
+==f500.collecting.F500:[261:268]
+==f500.collecting.F500Azure:[64:70]
+        if f500.args.copyPointcloud == "True":
+            AB = f500.root.split("/")[-1]
+            pointcloudPath = "/".join(f500.root.split("/")[:-3]) + "/current/" + AB +'/I/'
+            f500.logger.info("Copying pointclouds from {}{} to {}".format(pointcloudPath, [row["pointcloud_full"],row["pointcloud_mr"],row["pointcloud_sl"],row["pointcloud_mg"]], fullPath))
+
+            try:
+                os.makedirs(fullPath, exist_ok=True) (duplicate-code)
+analytics/f500/collecting/toolkit.py:1:0: R0801: Similar lines in 2 files
+==f500.collecting.F500:[210:215]
+==f500.collecting.fairdom:[146:151]
+    row["Pot"] = pots[ (pots["x"] == row["x"]) & (pots["y"] == row["y"]) ]["Pot"].iloc[0]
+    if "Treatment" in pots.columns:
+        row["Treatment"] = pots[ (pots["x"] == row["x"]) & (pots["y"] == row["y"]) ]["Treatment"].iloc[0]
+    if "Experimemt" in pots.columns:
+        row["Experiment"] = pots[ (pots["x"] == row["x"]) & (pots["y"] == row["y"]) ]["Experiment"].iloc[0] (duplicate-code)
+
+-----------------------------------
+Your code has been rated at 0.92/10
+
diff --git a/reports/pylint_report_summary_ai.md b/reports/pylint_report_summary_ai.md
new file mode 100644
index 0000000000000000000000000000000000000000..58d9068b52757debc1039af06955b1409ea74595
--- /dev/null
+++ b/reports/pylint_report_summary_ai.md
@@ -0,0 +1,61 @@
+### Summary of Linting Issues
+
+The pylint report highlights several types of issues across the codebase. These can be grouped into the following categories:
+
+1. **Convention Violations (C):**
+   - **Trailing Whitespace and Newlines:** Frequent occurrences of trailing whitespace and newlines across multiple files.
+   - **Naming Conventions:** Many variables, functions, and module names do not conform to PEP 8 naming conventions (e.g., snake_case for variables and functions, UPPER_CASE for constants).
+   - **Missing Docstrings:** Many modules, classes, and functions lack docstrings, which are essential for understanding the purpose and usage of the code.
+   - **Line Length:** Numerous lines exceed the recommended maximum line length of 100 characters.
+
+2. **Warnings (W):**
+   - **Unused Imports and Variables:** Several imports and variables are declared but not used, leading to unnecessary clutter.
+   - **Redefining Names:** Some variables are redefined from an outer scope, which can lead to confusion and potential errors.
+   - **Risky Runtime Patterns:** Broad or bare `except` clauses, attributes defined outside `__init__`, and calls to `open` without an explicit encoding.
+
+3. **Errors (E):**
+   - **Import Errors:** Some modules are unable to import certain packages, indicating potential issues with dependencies or incorrect paths.
+   - **Undefined Variables:** Usage of variables that are not defined within the scope.
+   - **No-Member Errors:** Attempting to access members that do not exist on an object (e.g. reading `.title` or `.studies` from a plain `dict`), which usually indicates incorrect usage of a data structure or an outdated library.
+
+4. **Refactor Suggestions (R):**
+   - **Too Many Arguments/Attributes:** Functions and classes with too many arguments or attributes, suggesting a need for refactoring to improve readability and maintainability.
+   - **Too Many Branches/Statements:** Functions with excessive branching or statements, indicating complex logic that could be simplified.
+
+5. **Specific Issues:**
+   - **Consider Using 'with' for Resource Management:** Several instances where resource-allocating operations like file handling should use context managers for better resource management.
+   - **Consider Using f-Strings:** Recommendations to use f-strings for string formatting for better readability and performance.
+
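The last two points map to a small before/after pattern. A minimal sketch using a hypothetical `write_isa_json` helper (the function name and path layout are illustrative, not taken from the codebase):

```python
import json

def write_isa_json(path: str, data: dict) -> str:
    """Write `data` as JSON and return the output path."""
    out_path = f"{path}/isa.json"  # f-string instead of %-formatting (C0209)
    # A context manager closes the file even if serialization raises (R1732),
    # and the explicit encoding resolves W1514.
    with open(out_path, "w", encoding="utf-8") as handle:
        json.dump(data, handle)
    return out_path
```

The same pattern applies to the `open` calls flagged in `F500Azure.writeISAJSON` and `fairdom.py`.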
+### Impact on Code Quality
+
+- **Readability and Maintainability:** The lack of adherence to naming conventions and missing docstrings significantly impacts the readability and maintainability of the code. Developers may find it challenging to understand and modify the code.
+- **Potential Bugs:** Undefined variables, import errors, and no-member errors can lead to runtime errors, affecting the reliability of the software.
+- **Performance and Efficiency:** Unused imports and variables, along with inefficient string formatting, can lead to unnecessary memory usage and reduced performance.
+
+### Suggested Fixes and Refactoring Strategies
+
+1. **Adopt PEP 8 Standards:**
+   - Rename variables, functions, and modules to conform to PEP 8 naming conventions.
+   - Ensure all lines are within the recommended length.
+
+2. **Improve Documentation:**
+   - Add docstrings to all modules, classes, and functions to describe their purpose and usage.
+
+3. **Optimize Imports:**
+   - Remove unused imports and variables to clean up the code.
+
+4. **Refactor Complex Functions:**
+   - Break down functions with too many arguments or complex logic into smaller, more manageable functions.
+
+5. **Use Context Managers:**
+   - Implement context managers for file operations to ensure proper resource management.
+
+6. **Update String Formatting:**
+   - Replace old string formatting methods with f-strings for improved readability and performance.
+
+7. **Resolve Errors:**
+   - Investigate and resolve import errors and undefined variables to ensure the code runs correctly.
+
+### Overall Quality Assessment
+
+Based on the pylint report, the codebase currently has a low quality score of 0.92/10. The code suffers from poor adherence to coding standards, lack of documentation, and potential runtime errors. Addressing the highlighted issues will significantly improve the code's readability, maintainability, and reliability. Prioritizing the most critical issues, such as errors and convention violations, will be essential in enhancing the overall quality of the codebase.
\ No newline at end of file
diff --git a/reports/radon_cc_report.txt b/reports/radon_cc_report.txt
new file mode 100644
index 0000000000000000000000000000000000000000..97509dc13bcd6040456c539e2c276912a4d0c58d
--- /dev/null
+++ b/reports/radon_cc_report.txt
@@ -0,0 +1,87 @@
+analytics/lib/rescaleWavelength.py
+    F 8:0 rescale_wavelengths - A (1)
+analytics/lib/computePhenotypes.py
+    F 6:0 get_ndvi_for_visualization - A (1)
+    F 13:0 get_ndvi - A (1)
+    F 19:0 get_npci - A (1)
+    F 24:0 get_greenness - A (1)
+analytics/visualizations/histograms_ply.py
+    F 23:0 createPNG - A (1)
+analytics/visualizations/animate_ply.py
+    F 28:0 play_motion - A (1)
+analytics/f500/collecting/PointCloud.py
+    M 45:4 PointCloud.get_hue - B (6)
+    M 15:4 PointCloud.writeHistogram - A (4)
+    C 5:0 PointCloud - A (3)
+    M 98:4 PointCloud.trim - A (3)
+    M 25:4 PointCloud.getWavelengths - A (2)
+    M 92:4 PointCloud.render_image - A (2)
+    M 165:4 PointCloud.render_image_rescale - A (2)
+    M 11:4 PointCloud.__init__ - A (1)
+    M 33:4 PointCloud.get_psri - A (1)
+    M 69:4 PointCloud.get_greenness - A (1)
+    M 77:4 PointCloud.get_ndvi - A (1)
+    M 83:4 PointCloud.get_npci - A (1)
+    M 88:4 PointCloud.setColors - A (1)
+    M 130:4 PointCloud.render_image_no_rescale - A (1)
+analytics/f500/collecting/F500.py
+    M 620:4 F500.processPointclouds - E (31)
+    M 538:4 F500.restructure - D (21)
+    M 261:4 F500.copyPointcloudFile - C (11)
+    M 693:4 F500.combineHistograms - B (10)
+    M 301:4 F500.copyPlotPointcloudFile - B (9)
+    M 495:4 F500.finalize - B (9)
+    C 30:0 F500 - B (6)
+    M 453:4 F500.createAssayPlot - A (5)
+    M 209:4 F500.copyPots - A (4)
+    M 162:4 F500.setLogger - A (2)
+    M 169:4 F500.removeAfterSpaceFromDataMatrix - A (2)
+    M 176:4 F500.createISA - A (2)
+    M 361:4 F500.createSample - A (2)
+    M 486:4 F500.correctDataMatrix - A (2)
+    M 44:4 F500.__init__ - A (1)
+    M 57:4 F500.commandLineInterface - A (1)
+    M 202:4 F500.writeISAJSON - A (1)
+    M 221:4 F500.measurementsToFile - A (1)
+    M 228:4 F500.rawMeasurementsToFile - A (1)
+    M 235:4 F500.addPointClouds - A (1)
+    M 374:4 F500.createAssay - A (1)
+    M 535:4 F500.getDirectoryListing - A (1)
+    M 730:4 F500.upload - A (1)
+analytics/f500/collecting/processPointClouds.py
+    F 20:0 writeHistogram - A (1)
+    F 28:0 get_greenness - A (1)
+analytics/f500/collecting/Fairdom.py
+    M 118:4 Fairdom.upload - D (24)
+    C 12:0 Fairdom - A (4)
+    M 96:4 Fairdom.addDataFilesToSampleJSON - A (3)
+    M 84:4 Fairdom.addSampleToAssayJSON - A (2)
+    M 90:4 Fairdom.addDataFileToAssayJSON - A (2)
+    M 13:4 Fairdom.__init__ - A (1)
+    M 28:4 Fairdom.createInvestigationJSON - A (1)
+    M 40:4 Fairdom.createStudyJSON - A (1)
+    M 53:4 Fairdom.createAssayJSON - A (1)
+    M 70:4 Fairdom.createDataFileJSON - A (1)
+    M 104:4 Fairdom.createSampleJSON - A (1)
+analytics/f500/collecting/F500Azure.py
+    M 64:4 F500Azure.copyPointcloudFile - A (3)
+    C 5:0 F500Azure - A (2)
+    M 7:4 F500Azure.__init__ - A (1)
+    M 12:4 F500Azure.initAzure - A (1)
+    M 30:4 F500Azure.connectToSource - A (1)
+    M 38:4 F500Azure.connectToTarget - A (1)
+    M 45:4 F500Azure.writeISAJSON - A (1)
+    M 50:4 F500Azure.measurementsToFile - A (1)
+    M 57:4 F500Azure.rawMeasurementsToFile - A (1)
+    M 82:4 F500Azure.getDirectoryListing - A (1)
+analytics/f500/collecting/fairdom.py
+    F 235:0 finalize - B (7)
+    F 146:0 copyPots - A (3)
+    F 53:0 removeAfterSpaceFromDataMatrix - A (1)
+    F 155:0 measurementsToFile - A (1)
+    F 160:0 rawMeasurementsToFile - A (1)
+    F 167:0 addPointClouds - A (1)
+    F 175:0 createAssay - A (1)
+
+74 blocks (classes, functions, methods) analyzed.
+Average complexity: A (3.135135135135135)
diff --git a/reports/radon_cc_report_summary_ai.md b/reports/radon_cc_report_summary_ai.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f4587735d9002a1ebcc95167ba80d597283e121
--- /dev/null
+++ b/reports/radon_cc_report_summary_ai.md
@@ -0,0 +1,40 @@
+### Highlight of Functions/Methods with Highest Complexity Scores
+
+1. **F500.processPointclouds - E (31)**
+   - **Impact on Maintainability**: This method has the highest cyclomatic complexity score of 31, indicating a very high level of complexity. Such a high score suggests that the method likely contains numerous conditional statements and branches, making it difficult to understand, test, and maintain. This complexity can lead to increased chances of bugs and errors, as well as making future modifications challenging.
+
+2. **Fairdom.upload - D (24)**
+   - **Impact on Maintainability**: With a complexity score of 24, this method is also quite complex. It may contain multiple decision points and nested logic, which can obscure the flow of the code and make it harder to follow. This complexity can hinder the ability to quickly identify and fix issues or to extend the functionality.
+
+3. **F500.restructure - D (21)**
+   - **Impact on Maintainability**: This method's complexity score of 21 suggests it is also quite intricate. Similar to the above methods, it likely involves numerous branches and conditions, which can complicate understanding and maintenance.
+
+### Suggestions for Refactoring or Simplifying the Most Complex Functions/Methods
+
+1. **F500.processPointclouds**
+   - **Break Down into Smaller Functions**: Identify logical sections within the method and extract them into smaller, well-named functions. This will help isolate different functionalities and make the code more readable.
+   - **Reduce Nested Logic**: Simplify nested if-else statements by using guard clauses or early returns where possible.
+   - **Use Design Patterns**: Consider using design patterns like Strategy or Command to encapsulate varying behaviors and reduce complexity.
+
+2. **Fairdom.upload**
+   - **Modularize Code**: Break down the method into smaller, single-responsibility functions. Each function should handle a specific part of the upload process.
+   - **Simplify Conditionals**: Use polymorphism or a configuration-driven approach to handle complex conditional logic.
+   - **Improve Error Handling**: Implement a consistent error-handling strategy to manage exceptions and edge cases more effectively.
+
+3. **F500.restructure**
+   - **Refactor for Clarity**: Extract complex logic into helper functions with descriptive names to clarify the purpose of each code block.
+   - **Use Data Structures**: Consider using more appropriate data structures to simplify data manipulation and reduce the number of operations.
+   - **Optimize Loops**: Review loops for opportunities to simplify or combine them, reducing the overall complexity.
+
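The guard-clause advice above can be sketched in a few lines. This is a generic, hypothetical example; the function and field names are illustrative and not taken from `F500.processPointclouds`:

```python
def process_entry(entry):
    """Hypothetical record handler, flattened with guard clauses."""
    # Guard clauses: reject invalid input early instead of nesting if/else.
    if entry is None:
        return None
    if not entry.get("points"):
        return None
    if entry.get("format") != "ply":
        return None
    # The happy path is now one unindented block, easy to read and test.
    return [p * 2 for p in entry["points"]]

print(process_entry({"points": [1, 2], "format": "ply"}))  # [2, 4]
print(process_entry({"points": [], "format": "ply"}))      # None
```

Each early return removes one level of nesting, which is exactly what drives cyclomatic complexity scores down.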
+### General Summary of the Codebase’s Complexity and Recommendations for Improvement
+
+- **Overall Complexity**: The average complexity score of the codebase is A (3.135), which indicates that most of the code is relatively simple and maintainable. However, there are a few outliers with significantly higher complexity scores that need attention.
+
+- **Recommendations**:
+  - **Regular Code Reviews**: Implement regular code reviews focusing on complexity and maintainability to catch potential issues early.
+  - **Automated Testing**: Increase the coverage of automated tests, especially for complex methods, to ensure that changes do not introduce new bugs.
+  - **Continuous Refactoring**: Encourage continuous refactoring practices to gradually improve the codebase's structure and readability.
+  - **Documentation**: Improve documentation for complex methods to aid understanding and future maintenance efforts.
+  - **Training and Best Practices**: Provide training on best practices for writing clean, maintainable code, and encourage the use of design patterns where appropriate.
+
+By addressing the most complex areas and promoting a culture of clean code, the maintainability and quality of the codebase can be significantly improved.
\ No newline at end of file
diff --git a/reports/radon_mi_report.txt b/reports/radon_mi_report.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c271690e47f0fd05967f07d51549a9d0b7530e64
--- /dev/null
+++ b/reports/radon_mi_report.txt
@@ -0,0 +1,16 @@
+analytics/lib/rescaleWavelength.py - A (100.00)
+analytics/lib/computePhenotypes.py - A (82.28)
+analytics/visualizations/histograms_ply.py - A (64.27)
+analytics/visualizations/animate_ply.py - A (73.84)
+analytics/visualizations/visualization_ply.py - A (100.00)
+analytics/rgbsideview/clearWhites.py - A (100.00)
+analytics/f500/__init__.py - A (100.00)
+analytics/f500/collecting/PointCloud.py - A (59.55)
+analytics/f500/collecting/deleteFAIRObject.py - A (59.92)
+analytics/f500/collecting/F500.py - B (13.87)
+analytics/f500/collecting/processPointClouds.py - A (57.88)
+analytics/f500/collecting/Fairdom.py - A (36.73)
+analytics/f500/collecting/F500Azure.py - A (51.75)
+analytics/f500/collecting/fairdom.py - A (47.99)
+analytics/f500/collecting/__init__.py - A (100.00)
+analytics/f500/collecting/toolkit.py - A (84.06)
diff --git a/reports/radon_mi_report_summary_ai.md b/reports/radon_mi_report_summary_ai.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ed665f4732f81daab91b079b18ba1c19cfe3010
--- /dev/null
+++ b/reports/radon_mi_report_summary_ai.md
@@ -0,0 +1,37 @@
+### Files with the Lowest Maintainability Index Scores
+
+1. **analytics/f500/collecting/F500.py** - B (13.87)
+2. **analytics/f500/collecting/Fairdom.py** - A (36.73)
+3. **analytics/f500/collecting/fairdom.py** - A (47.99)
+4. **analytics/f500/collecting/F500Azure.py** - A (51.75)
+5. **analytics/f500/collecting/processPointClouds.py** - A (57.88)
+6. **analytics/f500/collecting/PointCloud.py** - A (59.55)
+7. **analytics/f500/collecting/deleteFAIRObject.py** - A (59.92)
+
+### Common Patterns or Reasons for Low Scores
+
+- **High Complexity**: Files with low maintainability scores often have complex logic, which can be due to deeply nested loops, conditionals, or large functions.
+- **Lack of Comments**: Insufficient documentation can make the code difficult to understand and maintain.
+- **Large File Size**: Files that contain too much code can be hard to navigate and understand.
+- **Poor Modularization**: Functions or classes that try to do too much can lead to low maintainability.
+- **Code Duplication**: Repeated code blocks can increase the complexity and reduce maintainability.
+
+### Specific Improvements for Increasing Maintainability
+
+1. **Refactor Complex Functions**: Break down large functions into smaller, more manageable pieces. This can help reduce complexity and improve readability.
+
+2. **Add Comments and Documentation**: Ensure that each function and class has a clear docstring explaining its purpose, inputs, outputs, and any side effects. Inline comments can also help clarify complex logic.
+
+3. **Modularize Code**: Consider splitting large files into smaller modules or packages. Each module should have a single responsibility.
+
+4. **Reduce Code Duplication**: Identify repeated code blocks and refactor them into reusable functions or classes.
+
+5. **Simplify Logic**: Look for opportunities to simplify complex logic, such as using helper functions or more straightforward algorithms.
+
+6. **Use Descriptive Naming**: Ensure that variables, functions, and classes have descriptive names that convey their purpose.
+
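As a small illustration of points 1, 2, and 4, the near-identical channel-scaling loops in `histograms_ply.py` could collapse into a single documented helper. This is a hedged sketch, not the project's actual code:

```python
def scaled_channel(values, scale=500):
    """Return the first component of each entry multiplied by *scale*.

    One documented helper replaces several copy-pasted loops that each
    built a single histogram channel (r, g, b, nir, ...).
    """
    return [v[0] * scale for v in values]

# Each channel is now a one-liner instead of its own for-loop.
nir = scaled_channel([[0.5], [1.0]])
print(nir)  # [250.0, 500.0]
```

Extracting the helper shrinks the file, removes duplication, and gives the logic a docstring, all of which push the maintainability index upward.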
+### Overall Assessment of the Codebase’s Maintainability
+
+The codebase appears to have a mix of highly maintainable files and some that require significant improvement. The files with the lowest maintainability scores are concentrated in the `analytics/f500/collecting` directory, suggesting that this part of the codebase may have been developed with less emphasis on maintainability. 
+
+Overall, the codebase could benefit from a focused effort to refactor and document the lower-scoring files. By addressing the specific issues identified, the maintainability of the entire codebase can be improved, making it easier to understand, modify, and extend in the future.
\ No newline at end of file
diff --git a/reports/vulture_analysis_summary_ai.md b/reports/vulture_analysis_summary_ai.md
new file mode 100644
index 0000000000000000000000000000000000000000..865f381b54e7738ba42038296b70cb3f19136b22
--- /dev/null
+++ b/reports/vulture_analysis_summary_ai.md
@@ -0,0 +1,45 @@
+### Summary of Unused Code
+
+#### Unused Imports
+- `pprint` in `F500.py` and `fairdom.py`
+- `isatools` in `F500.py`, `Fairdom.py`, and `processPointClouds.py`
+- `isatab` in `F500.py`, `Fairdom.py`, and `processPointClouds.py`
+- `math` in `histograms_ply.py`
+
+#### Unused Variables
+- `ISA`, `currentFile`, `currentRoot`, `dirs` in `F500.py`
+- `technology_platform`, `technology_type` in `F500.py`
+- `data_file_id`, `data_file_url`, `countFiles` in `Fairdom.py`
+- `fig` in `histograms_ply.py`
+
+#### Unused Attributes
+- `technology_platform`, `technology_type` in `F500.py`
+- `sourceContainerClient`, `targetContainerClient` in `F500Azure.py`
+- `auth` in `Fairdom.py`
+
+#### Unused Classes and Methods
+- Class `F500Azure` and its methods `initAzure`, `connectToSource`, `connectToTarget` in `F500Azure.py`
+- Method `addDataFilesToSampleJSON` in `Fairdom.py`
+
+#### Unused Functions
+- `get_ndvi_for_visualization` in `computePhenotypes.py`
+- `rescale_wavelengths` in `rescaleWavelength.py`
+- `reset_motion` in `animate_ply.py`
+
+### Critical-Looking Unused Code
+- **Class `F500Azure` and its methods**: This class and its methods are entirely unused, which might indicate a significant portion of functionality that is not being utilized. It could be critical if Azure integration is expected in the project.
+- **Attributes `technology_platform` and `technology_type`**: These attributes appear multiple times in `F500.py`, suggesting they might be part of a larger, potentially critical feature that is not being used.
+
+### Suggestions for Code Removal or Review
+
+#### Likely Safe to Remove
+- **Unused Imports**: These can generally be removed safely as they do not affect the program's execution.
+- **Unused Variables**: Variables like `ISA`, `currentFile`, `currentRoot`, `dirs`, `data_file_id`, `data_file_url`, `countFiles`, and `fig` can likely be removed if they are not used elsewhere in the code.
+
+#### Needs Review for Hidden Dependencies
+- **Class `F500Azure` and its methods**: Review the project requirements to ensure Azure functionality is not needed before removing.
+- **Attributes `technology_platform` and `technology_type`**: Investigate if these attributes are part of a larger feature that might be needed in the future.
+- **Method `addDataFilesToSampleJSON`**: Check if this method is part of a planned feature or if it has been replaced by another implementation.
+- **Functions `get_ndvi_for_visualization`, `rescale_wavelengths`, `reset_motion`**: Review their intended use and check if they are part of any planned features or documentation.
+
+By carefully reviewing the critical-looking unused code and safely removing the rest, the project can be streamlined and made more maintainable.
\ No newline at end of file
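To double-check vulture's findings before deleting anything, a minimal static scan with Python's `ast` module can list imports that are never referenced by name. This sketch only inspects `ast.Name` nodes, so like vulture it misses dynamic uses (strings, `__all__`, `getattr`); treat its hits as candidates, not certainties:

```python
import ast

def unused_imports(source):
    """Return imported names that never appear as a bare name in *source*."""
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                # 'import a.b' binds 'a'; honour 'as' renames.
                imported.add((alias.asname or alias.name).split(".")[0])
    used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return sorted(imported - used)

print(unused_imports("import pprint\nimport sys\nprint(sys.argv)"))  # ['pprint']
```

Running such a check (or vulture itself with a higher `--min-confidence`) in CI keeps dead imports from accumulating again.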
diff --git a/reports/vulture_report.txt b/reports/vulture_report.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e2e78410e54b171b233607b0739f26325620a7cc
--- /dev/null
+++ b/reports/vulture_report.txt
@@ -0,0 +1,34 @@
+analytics/f500/collecting/F500.py:10: unused import 'pprint' (90% confidence)
+analytics/f500/collecting/F500.py:13: unused import 'isatools' (90% confidence)
+analytics/f500/collecting/F500.py:15: unused import 'isatab' (90% confidence)
+analytics/f500/collecting/F500.py:33: unused variable 'ISA' (60% confidence)
+analytics/f500/collecting/F500.py:38: unused variable 'currentFile' (60% confidence)
+analytics/f500/collecting/F500.py:39: unused variable 'currentRoot' (60% confidence)
+analytics/f500/collecting/F500.py:378: unused attribute 'technology_platform' (60% confidence)
+analytics/f500/collecting/F500.py:379: unused attribute 'technology_type' (60% confidence)
+analytics/f500/collecting/F500.py:457: unused attribute 'technology_platform' (60% confidence)
+analytics/f500/collecting/F500.py:458: unused attribute 'technology_type' (60% confidence)
+analytics/f500/collecting/F500.py:566: unused variable 'dirs' (60% confidence)
+analytics/f500/collecting/F500Azure.py:5: unused class 'F500Azure' (60% confidence)
+analytics/f500/collecting/F500Azure.py:12: unused method 'initAzure' (60% confidence)
+analytics/f500/collecting/F500Azure.py:30: unused method 'connectToSource' (60% confidence)
+analytics/f500/collecting/F500Azure.py:35: unused attribute 'sourceContainerClient' (60% confidence)
+analytics/f500/collecting/F500Azure.py:38: unused method 'connectToTarget' (60% confidence)
+analytics/f500/collecting/F500Azure.py:43: unused attribute 'targetContainerClient' (60% confidence)
+analytics/f500/collecting/Fairdom.py:2: unused import 'isatools' (90% confidence)
+analytics/f500/collecting/Fairdom.py:4: unused import 'isatab' (90% confidence)
+analytics/f500/collecting/fairdom.py:9: unused import 'pprint' (90% confidence)
+analytics/f500/collecting/fairdom.py:30: unused attribute 'auth' (60% confidence)
+analytics/f500/collecting/Fairdom.py:96: unused method 'addDataFilesToSampleJSON' (60% confidence)
+analytics/f500/collecting/fairdom.py:128: unused variable 'data_file_id' (60% confidence)
+analytics/f500/collecting/fairdom.py:129: unused variable 'data_file_url' (60% confidence)
+analytics/f500/collecting/Fairdom.py:133: unused variable 'countFiles' (60% confidence)
+analytics/f500/collecting/fairdom.py:204: unused variable 'data_file_id' (60% confidence)
+analytics/f500/collecting/fairdom.py:205: unused variable 'data_file_url' (60% confidence)
+analytics/f500/collecting/processPointClouds.py:5: unused import 'isatools' (90% confidence)
+analytics/f500/collecting/processPointClouds.py:7: unused import 'isatab' (90% confidence)
+analytics/lib/computePhenotypes.py:6: unused function 'get_ndvi_for_visualization' (60% confidence)
+analytics/lib/rescaleWavelength.py:8: unused function 'rescale_wavelengths' (60% confidence)
+analytics/visualizations/animate_ply.py:32: unused function 'reset_motion' (60% confidence)
+analytics/visualizations/histograms_ply.py:4: unused import 'math' (90% confidence)
+analytics/visualizations/histograms_ply.py:43: unused variable 'fig' (60% confidence)
diff --git a/rgbsideview/clearWhites.py b/rgbsideview/clearWhites.py
index eb81bb00bfe67f61a630aaab6bb7824b9160b687..41b2e3197cf50ba91b061697ba89c0bc453537f8 100644
--- a/rgbsideview/clearWhites.py
+++ b/rgbsideview/clearWhites.py
@@ -1,7 +1,50 @@
+"""
+This script reads a PLY file containing a 3D point cloud and loads it into an Open3D PointCloud object.
+
+The script uses the Open3D library to handle the point cloud data. It reads a PLY file specified as a command-line argument and loads the data into a PointCloud object for further processing or visualization.
+
+Usage:
+    python clearWhites.py <path_to_ply_file>
+
+Dependencies:
+    - Open3D: Install via `pip install open3d`
+    - NumPy: Install via `pip install numpy`
+
+Note:
+    Ensure that the Open3D library is correctly installed and that the PLY file path is valid.
+"""
+
 import open3d as o3d
 import numpy
 import sys
 
-ply = o3d.io.read.read_point_cloud(sys.argv[1], format="ply")
+def main():
+    """
+    Main function to read a PLY file and load it into an Open3D PointCloud object.
+
+    This function reads a PLY file specified as a command-line argument and loads it into a PointCloud object using the Open3D library.
+
+    Raises:
+        Nothing directly. Open3D does not raise FileNotFoundError for a
+        missing file: it logs a warning and returns an empty point cloud,
+        so the result is checked with has_points() instead.
+
+    Side Effects:
+        Loads the point cloud data into memory.
 
+    Future Work:
+        - Add error handling for unsupported file formats.
+        - Implement visualization of the point cloud data.
+    """
+    if len(sys.argv) < 2:
+        print("Error: No PLY file path provided. Please specify the path to a PLY file as a command-line argument.")
+        return
+    try:
+        ply = o3d.io.read_point_cloud(sys.argv[1], format="ply")
+    except Exception as e:
+        print(f"An error occurred while reading the PLY file: {e}")
+        return
+    if ply.has_points():
+        print("Point cloud successfully loaded.")
+    else:
+        print(f"Error: no points read from '{sys.argv[1]}'. Check that the file exists and is a valid PLY file.")
 
+if __name__ == "__main__":
+    main()
diff --git a/visualizations/animate_ply.py b/visualizations/animate_ply.py
index bf46de64d5a88d426324259e373c5a6fc44e03f8..6abaad323570d5367c93324a9c77da80a88d0891 100644
--- a/visualizations/animate_ply.py
+++ b/visualizations/animate_ply.py
@@ -1,19 +1,18 @@
 import open3d as o3d
 import numpy as np
-import copy
 import sys
 import time
 
-# Reads a list of Phenospex PLY PointClouds and creates 
-# an animation. 
-# In /tmp a set of pngs are created which you can stitch into a animated gif.
-# For example: convert -delay 20 -loop 0 *.png animation.gif
-# Requires the NPEC Open3d additions (https://github.com/swarris/Open3D/tree/NPEC) 
-# In the first for-loop you can set colors to:
-# pcd.colors
-# pcd.nir
-# pcd.ndvi
-
+"""
+This script reads a list of Phenospex PLY PointClouds and creates an animation.
+In the /tmp directory, a set of PNG images are created, which can be stitched into an animated GIF.
+For example: convert -delay 20 -loop 0 *.png animation.gif
+Requires the NPEC Open3D additions (https://github.com/swarris/Open3D/tree/NPEC).
+In the first for-loop, you can set colors to:
+pcd.colors
+pcd.nir
+pcd.ndvi
+"""
 
 pcds = []
 
@@ -24,12 +23,38 @@ for i in sys.argv[1:]:
     pcd.colors = pcd.colors
     pcds.append(pcd)
 
-    
 def play_motion(list_of_pcds):
+    """
+    Plays an animation of a list of point clouds using Open3D's visualization tools.
+
+    This function initializes a visualizer and iterates through the provided list of point clouds,
+    updating the visualizer with each point cloud to create an animation effect. It captures each
+    frame as a PNG image in the /tmp directory.
+
+    Args:
+        list_of_pcds (list): A list of Open3D point cloud objects to be animated.
+
+    Side Effects:
+        - Creates PNG images in the /tmp directory for each frame of the animation.
+        - Opens a visualization window to display the animation.
+
+    Notes:
+        - The function uses a nested callback structure to handle forward and backward animation.
+        - Future work could include adding more control over the animation speed and direction.
+    """
     play_motion.vis = o3d.visualization.Visualizer()
     play_motion.index = 0
 
     def reset_motion(vis):
+        """
+        Resets the animation to the first frame.
+
+        Args:
+            vis: The Open3D visualizer instance.
+
+        Returns:
+            bool: Always returns False to indicate the animation should not stop.
+        """
         play_motion.index = 0
         pcd.points = list_of_pcds[0].points
         pcd.colors = list_of_pcds[0].colors
@@ -41,6 +66,12 @@ def play_motion(list_of_pcds):
         return False
 
     def backward(vis):
+        """
+        Moves the animation one frame backward.
+
+        Args:
+            vis: The Open3D visualizer instance.
+        """
         pm = play_motion
 
         if pm.index > 0:
@@ -55,6 +86,15 @@ def play_motion(list_of_pcds):
             vis.register_animation_callback(forward)
 
     def forward(vis):
+        """
+        Moves the animation one frame forward and captures the frame as a PNG image.
+
+        Args:
+            vis: The Open3D visualizer instance.
+
+        Returns:
+            bool: Always returns False to indicate the animation should not stop.
+        """
         pm = play_motion
         if pm.index < len(list_of_pcds) - 1:
             pm.index += 1
@@ -65,16 +105,12 @@ def play_motion(list_of_pcds):
             vis.poll_events()
             vis.update_renderer()
             vis.capture_screen_image("/tmp/animation_{}.png".format(pm.index))
-
         else:
-            # vis.register_animation_callback(reset_motion)
             vis.register_animation_callback(backward)
         return False
 
     # Geometry of the initial frame
     pcd = list_of_pcds[0]
-    #pcd.points = o3d.utility.Vector3dVector(list_of_pcds[0].reshape(-1, 3))
-    #pcd.colors = o3d.utility.Vector3dVector(np.ones(list_of_pcds[0].view(-1, 3).shape) * orange)
 
     # Initialize Visualizer and start animation callback
     vis = play_motion.vis
@@ -84,4 +120,4 @@ def play_motion(list_of_pcds):
     vis.run()
     vis.destroy_window()
 
-play_motion(pcds)
+play_motion(pcds)
\ No newline at end of file
diff --git a/visualizations/histograms_ply.py b/visualizations/histograms_ply.py
index de23793004d5a6ac3a56201928e499d5036b97c7..10ae782a07e0899bf180a26e8ff24f8c60b20b84 100644
--- a/visualizations/histograms_ply.py
+++ b/visualizations/histograms_ply.py
@@ -5,11 +5,12 @@ import math
 import matplotlib.pyplot as plt
 import matplotlib.image as mpimg
 
-# reads a Phenospex PLY PointCloud (first CLI argument) and
-# displays RGB, nir, ndvi and intensity histograms.
-# It will open 3D images of RGB, nir and ndvi for you to show.
-# Requires the NPEC Open3d additions (https://github.com/swarris/Open3D/tree/NPEC) 
-
+"""
+This script reads a Phenospex PLY PointCloud file specified as a command-line argument
+and displays histograms for RGB, NIR, NDVI, and intensity values. It also generates and
+displays 3D images for RGB, NIR, and NDVI. The script requires the NPEC Open3d additions
+(https://github.com/swarris/Open3D/tree/NPEC).
+"""
 
 r = []
 g = []
@@ -21,6 +22,16 @@ bins = 100
 scale = 500
 
 def createPNG(pcd, filename):
+    """
+    Creates a PNG image from a point cloud.
+
+    Parameters:
+    pcd (open3d.geometry.PointCloud): The point cloud to visualize and capture.
+    filename (str): The filename where the image will be saved.
+
+    Side Effects:
+    Opens a visualization window and saves a screenshot as a PNG file.
+    """
     vis = o3d.visualization.Visualizer()
     vis.create_window(visible=True)
     vis.add_geometry(pcd)
@@ -28,10 +39,11 @@ def createPNG(pcd, filename):
     vis.run()
     vis.capture_screen_image(filename)
     vis.destroy_window()
-    
 
+# Read the point cloud from the file specified in the command-line argument
 pcd = o3d.io.read_point_cloud(sys.argv[1])
-# calculate colors+nir based on wavelength data
+
+# Calculate colors and NIR based on wavelength data
 pcd.wavelengthsToData(scale)
 
 for c in pcd.colors:
@@ -39,7 +51,6 @@ for c in pcd.colors:
     g.append(c[1] * scale)
     b.append(c[2] * scale)
 
-
 fig, axs = plt.subplots(3, 3, sharey=False, tight_layout=True)
 axs[0][0].hist(r, bins=bins, color="red")
 axs[0][0].set_title("Red", color="red")
@@ -48,17 +59,16 @@ axs[0][1].set_title("Green", color="green")
 axs[1][0].hist(b, bins=bins, color="blue")
 axs[1][0].set_title("Blue", color="blue")
 
-
 for n in pcd.nir:
     nir.append(n[0] * scale)
 
 axs[1][1].hist(nir, bins=bins, color="m")
 axs[1][1].set_title("Near-IR", color="m")
 
-# compute NDVI based on wavelength data (R + nir)
+# Compute NDVI based on wavelength data (R + NIR)
 pcd.computeNDVI()
 for n in pcd.ndvi:
-    ndvi.append((n[0]-0.5)*2.0)
+    ndvi.append((n[0] - 0.5) * 2.0)
 
 axs[2][0].hist(ndvi, bins=bins, color="olive")
 axs[2][0].set_title("NDVI", color="olive")
@@ -69,14 +79,12 @@ for n in pcd.intensity:
 axs[2][1].hist(intensity, bins=bins, color="grey")
 axs[2][1].set_title("Intensity", color="grey")
 
-createPNG(pcd,"/tmp/ply_rgb.png")
-pcd.colors=pcd.nir
-createPNG(pcd,"/tmp/ply_nir.png")
+createPNG(pcd, "/tmp/ply_rgb.png")
+pcd.colors = pcd.nir
+createPNG(pcd, "/tmp/ply_nir.png")
 pcd.colorizeNDVI()
-pcd.colors=pcd.ndvi
-createPNG(pcd,"/tmp/ply_ndvi.png")
-
-
+pcd.colors = pcd.ndvi
+createPNG(pcd, "/tmp/ply_ndvi.png")
 
 axs[0][2].imshow(mpimg.imread("/tmp/ply_rgb.png"))
 axs[0][2].set_title("RGB image")
@@ -85,6 +93,4 @@ axs[1][2].set_title("NIR image")
 axs[2][2].imshow(mpimg.imread("/tmp/ply_ndvi.png"))
 axs[2][2].set_title("NDVI image")
 
-plt.show()
-
-
+plt.show()
\ No newline at end of file
diff --git a/visualizations/visualization_ply.py b/visualizations/visualization_ply.py
index 734a83b77f8566da2afcfc57fbb0f1fc96751b28..cc451d8b7c5e2940e3861b3d9c2f7303b714b2ad 100644
--- a/visualizations/visualization_ply.py
+++ b/visualizations/visualization_ply.py
@@ -2,24 +2,78 @@ import open3d as o3d
 import sys
 import numpy
 
-# Opens a Phenospex PLY PointCount file
-# Example file to see the added functionalities op Open3D
-# https://github.com/swarris/Open3D/tree/NPEC
-
-pcd = o3d.io.read_point_cloud(sys.argv[1])
-o3d.visualization.draw_geometries([pcd])
-# store current colors
-colors = pcd.colors
-# calculate colors+nir based on wavelength data
-pcd.wavelengthsToData()
-o3d.visualization.draw_geometries([pcd])
-
-# compute NDVI based on wavelength data (R + nir)
-pcd.computeNDVI()
-pcd.colors = pcd.ndvi
-o3d.visualization.draw_geometries([pcd])
-
-# scale single-channel ndvi into RGB
-pcd.colorizeNDVI()
-pcd.colors = pcd.ndvi
-o3d.visualization.draw_geometries([pcd])
+"""
+This script demonstrates the use of Open3D to process and visualize point cloud data from a Phenospex PLY PointCount file.
+It includes functionalities to read point cloud data, visualize it, compute NDVI (Normalized Difference Vegetation Index),
+and colorize the NDVI data for better visualization.
+
+Usage:
+    python visualization_ply.py <path_to_ply_file>
+
+Dependencies:
+    - Open3D
+    - NumPy
+
+Note:
+    This script assumes that the point cloud data contains wavelength information necessary for NDVI computation.
+    The script also assumes the existence of certain methods like `wavelengthsToData`, `computeNDVI`, and `colorizeNDVI`
+    which may not be part of the standard Open3D library.
+"""
+
+def main():
+    """
+    Main function to execute the point cloud processing and visualization.
+
+    This function reads a point cloud file specified by the command line argument, visualizes it, computes NDVI,
+    and visualizes the NDVI data. It also colorizes the NDVI data for enhanced visualization.
+
+    Raises:
+        SystemExit: If no file path is provided, or if the file yields no points.
+        AttributeError: If the point cloud object lacks the required methods
+            (they come from the NPEC Open3D fork, not stock Open3D).
+    """
+    if len(sys.argv) < 2:
+        sys.exit("Please provide a path to the PLY file as a command line argument.")
+
+    # Opens a Phenospex PLY PointCount file. Open3D does not raise for a
+    # missing file; it logs a warning and returns an empty point cloud.
+    pcd = o3d.io.read_point_cloud(sys.argv[1])
+    if not pcd.has_points():
+        sys.exit(f"No points read from {sys.argv[1]}; check that the file exists and is a valid PLY file.")
+
+    # Visualize the original point cloud
+    o3d.visualization.draw_geometries([pcd])
+
+    # Store current colors
+    colors = pcd.colors
+
+    try:
+        # Calculate colors+nir based on wavelength data
+        pcd.wavelengthsToData()
+    except AttributeError:
+        raise AttributeError("The point cloud object does not have the method 'wavelengthsToData'.")
+    
+    # Visualize the point cloud with updated data
+    o3d.visualization.draw_geometries([pcd])
+
+    try:
+        # Compute NDVI based on wavelength data (R + nir)
+        pcd.computeNDVI()
+    except AttributeError:
+        raise AttributeError("The point cloud object does not have the method 'computeNDVI'.")
+    
+    # Update colors to NDVI and visualize
+    pcd.colors = pcd.ndvi
+    o3d.visualization.draw_geometries([pcd])
+
+    try:
+        # Scale single-channel NDVI into RGB
+        pcd.colorizeNDVI()
+    except AttributeError:
+        raise AttributeError("The point cloud object does not have the method 'colorizeNDVI'.")
+    
+    # Update colors to colorized NDVI and visualize
+    pcd.colors = pcd.ndvi
+    o3d.visualization.draw_geometries([pcd])
+
+if __name__ == "__main__":
+    main()
\ No newline at end of file