ThermoLiDAR Plugin

Introduction

ThermoLiDAR software is designed to provide the tools required to integrate Thermal and LiDAR information for the assessment of forest health and production. The software includes different modules and several settings, offering different levels of processing to suit each user's needs. The main functionalities of the software are:

  1. Tools for raw LiDAR and Thermal data processing, data exploration, quality assessment and visualisation.
  2. Tools to integrate both data sources using suitable data fusion techniques.
  3. Spatial statistical tools to analyse biophysical variables of the vegetation based on the integration of LiDAR and Thermal data.
  4. Tools for mapping forest health condition and forest production dynamics and producing accuracy assessments.

The plug-in

ThermoLiDAR software has been developed as a QGIS plug-in and, once installed, it becomes part of the Processing Toolbox of QGIS. This gives end users free access to a wide range of GIS capabilities. The QGIS version currently supported by the ThermoLiDAR plug-in is QGIS 2.4.0 Chugiak.

The plug-in has been divided into two different packages: Processing and Data Analysis.

The Processing package includes tools for processing raw Thermal and LiDAR data in order to obtain the products required for the parametric analysis of forest health.

  • A.1. LiDAR processing. LiDAR data tool set for the generation of DSMs, DTMs, DVMs and vegetation statistical derivatives thereof. This part includes a graphical user interface for the SPDLib tools.
  • A.2. Thermal image processing. Thermal data tool set for the calibration of RAW airborne thermal imaging. In addition, the tools provide the possibility of calculating a derived indicator from the difference between the crown temperature and the air temperature.

The Data Analysis package includes tools for the simultaneous analysis of thermal and LiDAR data linked to field data measurements, in order to evaluate the state and trends of forest health. A detailed description of the data inputs and processes applied is included in Fig. 2.

  • B.1. Forest Stand Segmentation (FSS). The processed image is decomposed into regions or objects. Object-based delineation algorithms are applied with this tool to define forest stand units for further study.
  • B.2. Health condition levels (HCL). Different physiological indicators from field data measurements are processed with this tool to define the ground truth condition of forest status. Health condition levels are statistically generated based on clustering and subsequently validated by ANOVA.
  • B.3. Structurally homogeneous forest units (SHFU). This option provides the tools required for the classification of structurally different forest stands. The data used to perform this classification is defined by the user. In this report the mean height of the trees estimated from LiDAR data has been used as input data. Alternatively, users can provide external forest maps in shapefile (.shp) format with an attribute indicating the class number.
  • B.4. Forest Health Monitoring (FHM). The final result of this toolkit is the classification of forest health condition levels. This process requires the training sample plots analysed and classified in step 1, the forest stand classification calculated in step 2 (or externally provided) and the thermal image data. The user can then apply a supervised classification of the thermal data, using the training sample plots classified into different health condition levels and based on a specific structural stand level. Optionally, users can apply a non-supervised classification in the absence of field data measurements.

Installation

Requirements

Hardware Requirements

The minimum hardware requirements are:

  • x86–64 or compatible CPU
  • 1 GB RAM
  • 20 GB available hard disk space

For a fully operational system, the recommended hardware requirements are:

  • x86–64 or compatible CPU
  • 4 GB RAM
  • 50 GB available hard disk space

Software Requirements

THERMOLIDAR software has been developed as a QGIS plug-in and, once installed, it becomes part of the Processing Toolbox of QGIS. This gives end users free access to a wide range of GIS capabilities, but it also requires that the Processing Toolbox be installed in QGIS. The QGIS version currently supported by the THERMOLIDAR plug-in is QGIS 2.4.0 Chugiak.

For the THERMOLIDAR software to run, the plug-in requires some external libraries that must be installed beforehand:

R is a free programming language and software environment for statistical computing and is used by the Data Analysis Modules. SPDLib is a set of open source software tools to process LiDAR data and is used by the Data Processing modules.

Fusion, a software package for the manipulation and analysis of LiDAR data, can optionally be installed.

Windows Installer

A THERMOLIDAR installer is available for Windows. The installer guides the user through the installation of QGIS, R, SPDLib and Fusion by launching their respective installers sequentially. If any package is already installed, its installation can be skipped by selecting the software to be installed at the beginning of the process.

For each package, a new Welcome window pops up to start the installation of that package.

_images/ThermoLiDAR_installer.png

THERMOLIDAR installer “Welcome” window

The user is always asked for the installation folder. In addition to the folder path, the user can select between different configuration options during the installation of QGIS and R.

_images/R_installer_options.png

R installer windows

Once the installation of each package is complete and its “Good-bye” window is closed, the next installer pops up.

_images/ThermoLiDAR_installer_Goodbye.png

THERMOLIDAR installer final window

Installing SPDlib

Windows

To install SPDLib on Windows systems, download the Windows binaries that are available in the SPDLib repository and unzip them into any folder in the file system. Make sure the installation path does not contain spaces. A good place to install SPDLib is C:\SPDLib.

Mac OSX, Linux and Solaris

The notes below provide some useful details on the process for installing SPDLib. These notes are intended for people compiling the software on a UNIX platform such as Mac OSX, Linux or Solaris (these are the platforms on which the software has been tested).

To compile the software (and the prerequisites) you will need a C++ compiler; we use the GNU GCC compilers, but the software has also been tested and compiles without problems using the SunPro compiler on Solaris and the Intel x86 compilers.

You will also need Mercurial installed, to download the latest version of the SPDLib source code, and CMake, to configure the source code before compilation.

Getting the SPDlib Source Code

The SPDLib source code is hosted within a Mercurial repository on Bitbucket. To clone the source code into a folder spdlib, run the following command:

hg clone https://bitbucket.org/petebunting/spdlib spdlib
Compiling SPDlib

If the libraries are not installed within /usr/local, their paths need to be specified using the CMake variables listed below.

$ cmake -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D HDF5_INCLUDE_DIR=/usr/local/include \
        -D HDF5_LIB_PATH=/usr/local/lib \
        -D LIBLAS_INCLUDE_DIR=/usr/local/include \
        -D LIBLAS_LIB_PATH=/usr/local/lib \
        -D GSL_INCLUDE_DIR=/usr/local/include \
        -D GSL_LIB_PATH=/usr/local/lib \
        -D CGAL_INCLUDE_DIR=/usr/local/include \
        -D CGAL_LIB_PATH=/usr/local/lib \
        -D BOOST_INCLUDE_DIR=/usr/local/include \
        -D BOOST_LIB_PATH=/usr/local/lib \
        -D GDAL_INCLUDE_DIR=/usr/local/include \
        -D GDAL_LIB_PATH=/usr/local/lib \
        -D XERCESC_INCLUDE_DIR=/usr/local/include \
        -D XERCESC_LIB_PATH=/usr/local/lib \
        -D GMP_INCLUDE_DIR=/usr/local/include \
        -D GMP_LIB_PATH=/usr/local/lib \
        -D MPFR_INCLUDE_DIR=/usr/local/include \
        -D MPFR_LIB_PATH=/usr/local/lib \
        -D CMAKE_VERBOSE_MAKEFILE=ON .

$ make
$ make install
Pre-requisites

The SPDLib software library has a number of software prerequisites, which are required to build the software.

During the development process, the following libraries (with the oldest versions tested) have been used:

  1. Boost (http://www.boost.org) (oldest Version 1.49)
  2. HDF5 (http://www.hdfgroup.org) (oldest Version 1.8.2)
  3. GNU Scientific Library (GSL; http://www.gnu.org/software/gsl) (oldest Version 1.14)
  4. Xerces-C (http://xerces.apache.org/xerces-c) (oldest Version 3.1.1)
  5. GDAL/OGR (http://www.gdal.org) (oldest Version 1.7)
  6. LibLAS (http://www.liblas.org) (oldest Version 1.6)
  7. CGAL (http://www.cgal.org) (oldest Version 3.8)

Installing and Configuring ThermoLiDAR software

Installation

The easiest way to install the ThermoLiDAR plug-in is through the Windows installer (see Windows Installer). However, the installer might not be up to date, or the user may work on Linux or MacOS platforms. If this is the case, the source code of the ThermoLiDAR software can be downloaded from the official and private Bitbucket repository (https://bitbucket.org/thermolidar/thermolidar/downloads). Once the file is downloaded, just unzip it into your system's QGIS plug-ins folder. This plug-ins folder should be located in:

%HOMEDRIVE%%HOMEPATH%\.qgis2\python\plugins

in Windows systems and, for Unix-based systems:

~/.qgis2/python/plugins

Finally, the unzipped folder has to be renamed to thermolidar.

Alternatively, the latest version of the software can be cloned directly from the repository into the local QGIS plug-ins folder using git:

$ git clone http://bitbucket.org/thermolidar/thermolidar.git ~/.qgis2/python/plugins/thermolidar

Configuration

Start QGIS and make sure the Processing Toolbox is enabled and the Advanced interface is selected (at the bottom). The Processing Toolbox should look like this:

_images/Window_ProcessingToolBox.png

Processing Toolbox

Manage and Install Plugins is located in the Plugins menu in QGIS

Now the plug-in must be enabled in the Installed tab within the Plugins > Manage and Install Plugins menu.

From the Manage and Install Plugins... within the Plugins menu, select the ThermoLiDAR plug-in:

_images/Window_ThermoLiDAR_Plugin.png

ThermoLiDAR plug-in is enabled in the Installed tab within the Manage and Install Plugins...

Once QGIS loads ThermoLiDAR, it appears within the Processing Toolbox

_images/Window_ProcessingToolBox_ThermoLiDAR.png

ThermoLiDAR fully integrated into Processing Toolbox

However, to use the plug-in, SPDLib and R have to be enabled and visible from QGIS. Otherwise, the user will get an error message and will not be able to run any tool:

Warning

SPDTools folder is not configured. Please, consider to configure it before running SPDTools algorithms.

To activate them, open the Processing > Options and configuration > Providers menu. First, enable the R statistical package and provide the R Scripts Folder and the R folder. R Scripts Folder specifies where the R scripts are located, C:\Users\admin\.qgis2\processing\scripts, and R folder specifies where R is installed, C:\Program Files\R\R-XXX (XXX stands for the current R version).

Finally, enable the ThermoLiDAR plug-in and supply the folder containing the SPDLib binaries, commonly C:\SPDLib on Windows systems and /usr/local/bin on Unix-based systems.

_images/Window_OptAndConfig_SPDLib_R.png

Visualization of Processing > Options and configuration window

User Manual

Introduction

This section summarizes the description of the software and the practical application of the tools, implemented using thermal and LiDAR data collected in the framework of the THERMOLIDAR project.

The main structure of the software is summarized in this figure.

_images/general_flowchart.png

Detail of the main processes carried out by the software

Software structure

Processing

This package includes tools for processing raw Thermal and LiDAR data in order to obtain the products required for the parametric analysis of forest health.

  • A.1. LiDAR processing. LiDAR data tool set for the generation of DSMs, DTMs, DVMs and vegetation statistical derivatives thereof.
  • A.2. Thermal image processing. Thermal data tool set for the calibration of RAW airborne thermal imaging. In addition, the tools provide the possibility of calculating a derived indicator from the difference between the crown temperature and the air temperature.
Data Analysis

This package includes tools for the simultaneous analysis of thermal and LiDAR data linked to field data measurements, in order to evaluate the state and trends of forest health. A detailed description of the data inputs and processes applied is included in Fig. 2.

  • B.1. Forest Stand Segmentation (FSS). The processed image is decomposed into regions or objects. Object-based delineation algorithms are applied with this tool to define forest stand units for further study.
  • B.2. Health condition levels (HCL). Different physiological indicators from field data measurements are processed with this tool to define the ground truth condition of forest status. Health condition levels are statistically generated based on clustering and subsequently validated by ANOVA.
  • B.3. Structurally homogeneous forest units (SHFU). This option provides the tools required for the classification of structurally different forest stands. The data used to perform this classification is defined by the user. In this report the mean height of the trees estimated from LiDAR data has been used as input data. Alternatively, users can provide external forest maps in shapefile (.shp) format with an attribute indicating the class number.
  • B.4. Forest Health Monitoring (FHM). The final result of this toolkit is the classification of forest health condition levels. This process requires the training sample plots analysed and classified in step 1, the forest stand classification calculated in step 2 (or externally provided) and the thermal image data. The user can then apply a supervised classification of the thermal data, using the training sample plots classified into different health condition levels and based on a specific structural stand level. Optionally, users can apply a non-supervised classification in the absence of field data measurements.

Thermal Processing

The thermal processing tools allow the user to perform thermal image calibration using the Emissive Empirical Line Method (EELM), a common method for airborne thermal data processing based on in-scene atmospheric correction. These approaches were developed to remove atmospheric effects from hyper-spectral imaging data using reference targets measured under the same atmospheric conditions as the scene. The advantages of this type of method over model-based methods grounded in radiative transfer theory are that it captures the true state of the atmosphere at the time of data collection, and that the corrections require relatively little computational effort (in comparison with radiative transfer model approaches). The main difficulty for in-scene methods is correctly obtaining the field measurement parameters required by the correction algorithm.

The Emissive Empirical Line Method (EELM) is the infra-red extension of the widely known Empirical Line Method (ELM) atmospheric correction. EELM employs a linear regression for each band to relate at-sensor radiance to the ground-leaving radiance (GLR), via target emissivity and temperature, by generating atmospheric transmission, up-welling radiance and down-welling radiance terms. EELM requires at least one bright target and one dark target, and it is also recommended to measure an intermediate target.

Use the Empirical Line Compute Factors calibration to force spectral data to match selected field reflectance spectra. A linear regression is used for each band to equate DN and reflectance, which is equivalent to removing the solar irradiance and the atmospheric path radiance. The empirical line gain and offset values are obtained from this per-band regression.
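The per-band fit can be sketched as follows (an illustrative Python sketch, not the plug-in's actual code; the sample DN and reference values are hypothetical):

```python
# Least-squares fit of field reference values against raw DNs for one
# band: ref = gain * dn + offset, as in the Empirical Line Method.

def empirical_line_fit(dn, ref):
    """Return (gain, offset) of the per-band linear regression."""
    n = len(dn)
    mean_dn = sum(dn) / n
    mean_ref = sum(ref) / n
    cov = sum((x - mean_dn) * (y - mean_ref) for x, y in zip(dn, ref))
    var = sum((x - mean_dn) ** 2 for x in dn)
    gain = cov / var
    offset = mean_ref - gain * mean_dn
    return gain, offset

def calibrate(dn_values, gain, offset):
    """Apply the fitted line to calibrate a band."""
    return [gain * x + offset for x in dn_values]

# One bright, one dark and one intermediate target, as recommended:
dn  = [40.0, 120.0, 200.0]   # raw digital numbers (hypothetical)
ref = [10.0, 30.0, 50.0]     # field-measured temperatures, deg C
gain, offset = empirical_line_fit(dn, ref)
# With these samples the fit gives gain = 0.25 and offset = 0.
```

The same fit is repeated independently for each band of the thermal image.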

This tool also allows the user to calculate the difference between the crown temperature and the air temperature (Tc - Ta). This indicator has been widely demonstrated to be related to different physiological indicators such as stem water potential, stomatal conductance or sap flow rate. The user needs to select as input the thermal image and the air temperature recorded at the time of the airborne image acquisition.

_images/workflow_thermalproc.png

Thermal Calibration

This module calibrates thermal images based on the calibration data obtained in the field. During the calibration process, spectral data is linearly correlated with the selected field reference spectra.

_images/T_calibration.jpg
Required Input Parameters
  • Temperature layer: Thermal raster
  • Temperature AOIs: Shapefile containing information on temperature field measurements
  • Temperature Field: Vector’s field containing temperature
Output Parameters
  • Output layer: Calibrated thermal raster output.

Tc - Ta

This module normalizes the temperature raster image according to the air temperature during the image acquisition.

_images/Tc_Ta.jpg
Required Input Parameters
  • Input layer: Thermal raster
  • Air temperature: Constant air temperature measured at flight time
Output Parameters
  • Output layer: Thermal raster output including normalized temperature.
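The normalization itself is a simple per-pixel subtraction. A minimal Python sketch (illustrative only; it works on a flat list of pixel temperatures rather than on a raster layer, and `tc_minus_ta` is a hypothetical helper):

```python
# Normalize crown temperatures by subtracting the constant air
# temperature measured at flight time (Tc - Ta).

def tc_minus_ta(crown_temps, air_temp):
    """Return the Tc - Ta value for every pixel."""
    return [tc - air_temp for tc in crown_temps]

# Hypothetical crown temperatures and a 21.5 deg C air temperature:
delta = tc_minus_ta([24.1, 25.3, 23.8], 21.5)
```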

LiDAR Processing

LiDAR data is often provided as a number of tiles or flight lines. Depending on the computer's capacity, in some cases it might be convenient to merge flight files into a single file, or to divide the data into overlapping tiles of an appropriate size.

Because the ThermoLiDAR software uses the SPDLib suite for LiDAR data manipulation, data has to be converted into the native SPDLib file format. This means that LiDAR data must be converted into the Sorted Pulse Data (SPD) format, typically from LAS files, the standard for the interchange of LASer data recommended by the American Society for Photogrammetry and Remote Sensing (ASPRS). Noisy data can optionally be removed before further processing. Once the SPD files have been obtained, ground points are classified and then point heights relative to the ground are inferred. Ground and non-ground points are interpolated to generate the DTM, DSM and CHM. From the height information a range of metrics, mainly (but not only) for forestry applications, can be derived. This workflow is depicted below.

_images/WorkFlow.png

LiDAR workflow using the SPDLib toolset. In this case, SPD format files are supplied as input. Names of SPDLib commands are highlighted in bold font (i.e. spdmerge, spdtranslate, spddeftiles, etc.). Pink boxes represent output products. [Bunting2013]

Convert between formats

Since the LiDAR processing modules make use of the SPDLib tools, the first step is to convert the input dataset into SPD files. There are two types of SPD files: non-indexed and indexed. A format translation module has been included in QGIS for this purpose. The module allows conversion between different formats and can also be used to re-project data.

_images/spdtranslate.png

The module for format conversion is located in the LiDAR submenu of the ThermoLiDAR Toolbox

ThermoLiDAR plug-in supports SPD/UPD, LAS and a wide range of ASCII formats but the current level of support is not intended to encompass all the available formats. For more information about the formats SPDLib supports, please see the Supported File Formats section.

Reprojection

It is possible to define the projection of the SPD file explicitly using the input and output projection options. Both options expect a text file containing the WKT (Well Known Text) string representing the projection information. To change the projection, the input projection option is not required; however, if the input projection is known, it is advantageous to provide it.

Memory Requirements

When converting to a UPD, very little memory is required, as only a few pulses are held in memory at any one time; no sorting of the pulses is needed. On the other hand, when generating an SPD file the data needs to be spatially sorted. Therefore, the whole file is read into memory and sorted into the spatial grid before being written out to the file. This requires enough memory to store the whole dataset and its index data structure. If memory is not sufficient for this operation, the file needs to be split into blocks that fit into memory.

Splitting the file to disk while building the SPD file is controlled by the temporal path option, which specifies the path and base file name where the temporary tiles will be written. The num. rows parameter specifies the number of rows of the final SPD file written to each temporary tile; note that the tile height is binsize x num. rows (in the units in which the data is projected). The num. columns option may be set where datasets are very wide, so that the tiles are not the full width of the output file. When this option is used, the final SPD file will be non-sequential rather than sequential: the data on disk is not ordered left-to-right, top-to-bottom, which has some performance benefits. Building the SPD file in stages is obviously slower, but once completed the file is faster for spatial queries. Moreover, other processing steps (e.g., classification and interpolation) can then be applied to the whole file with relatively small memory requirements.
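The relationship between num. rows, binsize and the temporary tiles can be illustrated with a small Python sketch (a deliberate simplification; `row_block` is a hypothetical helper, not part of spdtranslate):

```python
# Bucket a pulse into a temporary row block while building a spatially
# indexed SPD file. Each block covers binsize * num_rows units of
# height, counted downwards from the top (ymax) of the dataset extent.

def row_block(y, ymax, binsize, num_rows):
    """Return the index of the temporary tile a pulse at `y` falls in."""
    block_height = binsize * num_rows
    return int((ymax - y) // block_height)

# With binsize = 1 and num_rows = 25 each temporary tile is 25 units
# high, so a pulse 30 units below the top lands in the second block:
blk = row_block(970.0, ymax=1000.0, binsize=1.0, num_rows=25)
```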

Parameters
_images/Module_spdtranslate.png

Interface to convert between different data formats

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds.

  • Index: The location used to index the pulses and points (required):
    • FIRST_RETURN
    • LAST_RETURN
  • Input Format: Format of the input file (Default SPD).
    • SPD: SPD input format with or without spatial index
    • ASCII: ASCII input format
    • LAS/LAZ: Both zipped or normal LAS input format
    • LASNP: LAS input without pulse information
  • Output Format: Format of the output file (Default SPD).
    • SPD: SPD output format
    • UPD: SPD output format without spatial index
    • ASCII: ASCII output format
    • LAS: LAS output format
    • LAZ: Zipped LAS output format
Optional Input Parameters
  • Binsize: (float) Bin size for SPD file index (Default 1)
  • Schema: (string) schema for the format of the ASCII file being imported
  • Input Projection: (string) WKT string representing the projection of the input file
  • Output Projection: (string) WKT string representing the projection of the output file
  • Num. Columns: (integer) Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
  • Num. Rows: (integer) Number of rows within a block (Default 25)
  • Temporal Path: (string) Path where temporary files can be written to.
Output Parameters
  • Output: The output SPD file

Merge files

In some situations it might be convenient to merge various files into a single SPD file. The merging module merges compatible files into a single non-indexed SPD file. It is possible to provide the projection information of the output file and input files if known.

This module can display the return IDs and classes of the input files with the list returns IDs and list classes options, respectively. The ignore checks option forces the input files to be merged even if they come from different sources or have different bin sizes.

_images/spdmerge.png

The module for merging files is located in the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spdmerge.png

Interface to merge compatible files into a single non-indexed SPD file

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds (accepts multiple files separated by commas).

  • Index: The location used to index the pulses and points (required):

    • FIRST_RETURN
    • LAST_RETURN
  • Input Format: Format of the input file (Default SPD).

    • SPD: SPD input format with or without spatial index
    • ASCII: ASCII input format
    • LAS/LAZ: Both zipped or normal LAS input format
    • LASNP: LAS input without pulse information
Optional Input Parameters
  • List Returns IDs: (list of files) Lists the return IDs for the files listed (accepts multiple files separated by commas).
  • List Classes: (list of files) Lists the classes for the files listed (accepts multiple files separated by commas).
  • Keep Extent: (Yes/No) Use the extent of the input files as the minimum extent of the output file when indexing the file.
  • Source ID: (Yes/No) Set source ID for each input file
  • Ignore Checks: (Yes/No) Ignore checks between input files to ensure compatibility
  • Schema: (string) schema for the format of the ASCII file being imported
  • Input Projection: (string) WKT string representing the projection of the input file
  • Output Projection: (string) WKT string representing the projection of the output file
Output Parameters
  • Output: The output SPD file

Split data into tiles

LiDAR data is supplied as flight lines or tiles with different shapes and sizes. Nevertheless, it is often useful to divide laser data into equally sized square tiles. This helps to store, manage and access data easily. Each tile should fit within memory limits in order to reduce computation times, which determines the maximum size of each file given an average point density. Overlapping zones between tiles help to prevent border errors and guarantee continuous raster models.

ThermoLiDAR has a built-in tool to create tiles given the tile size and the overlap. Output tiles are saved into the output path (this includes path and prefix) and are named _rowYYcolXX.spd, where YY and XX are the row and column numbers of the corresponding tile. The tiles definition is stored in an output XML file (output xml) containing each tile's column, row, extent and core extent (the tile extent without overlap).

<tiles columns="23" overlap="50" rows="16" xmax="256500" xmin="239343.510" xtilesize="750" ymax="707224.300" ymin="695246.440" ytilesize="750">
   <tile col="1" corexmax="240093.510" corexmin="239343.510" coreymax="695996.440" coreymin="695246.440" file="" row="1" xmax="240143.510" xmin="239293.510" ymax="696046.440" ymin="695196.440"/>
   <tile col="2" corexmax="240843.510" corexmin="240093.510" coreymax="695996.440" coreymin="695246.440" file="" row="1" xmax="240893.510" xmin="240043.510" ymax="696046.440" ymin="695196.440"/>

   ...

   <tile col="22" corexmax="255843.510" corexmin="255093.510" coreymax="707246.440" coreymin="706496.440" file="" row="16" xmax="255893.510" xmin="255043.510" ymax="707296.440" ymin="706446.440"/>
   <tile col="23" corexmax="256593.510" corexmin="255843.510" coreymax="707246.440" coreymin="706496.440" file="" row="16" xmax="256643.510" xmin="255793.510" ymax="707296.440" ymin="706446.440"/>
</tiles>
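The extents in the XML above follow a simple rule: the core extent is the dataset origin offset by whole tile sizes, and the full extent buffers the core by the overlap on every side. A Python sketch reproducing the first tile (illustrative; `tile_extent` is a hypothetical helper, not part of the plug-in):

```python
# Compute the core extent and the overlap-buffered full extent of a
# tile, given its 1-based column/row, the dataset origin, the tile
# size and the overlap (all in map units).

def tile_extent(col, row, xmin, ymin, tilesize, overlap):
    """Return ((corexmin, corexmax, coreymin, coreymax),
               (xmin, xmax, ymin, ymax)) for one tile."""
    corexmin = xmin + (col - 1) * tilesize
    coreymin = ymin + (row - 1) * tilesize
    core = (corexmin, corexmin + tilesize, coreymin, coreymin + tilesize)
    full = (core[0] - overlap, core[1] + overlap,
            core[2] - overlap, core[3] + overlap)
    return core, full

# Values from the XML above (col=1, row=1, 750 m tiles, 50 m overlap):
core, full = tile_extent(col=1, row=1, xmin=239343.510, ymin=695246.440,
                         tilesize=750, overlap=50)
# core -> (239343.510, 240093.510, 695246.440, 695996.440)
# full -> (239293.510, 240143.510, 695196.440, 696046.440)
```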

The module can create a single tile (SINGLE option), given its row and column, or generate the complete set of tiles (ALL option).

The module also creates an auxiliary file listing the input LiDAR files, which can optionally be kept (keep file list) once the module has finished. Some tiles may be empty; in that case, those files can be removed by enabling the delete tiles option.

_images/spdtiling.png

The module for tiling LiDAR data is within the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spdtiling.png

Interface for tiling a set of SPD files

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds (accepts multiple files separated by commas).
Optional Input Parameters
  • Extract Tiles: Where to extract
    • ALL: Create all tiles
    • SINGLE: Extract an individual tile given its row and column
  • Delete Tiles: (Yes/No) If shapefile exists delete it and then run

  • Keep File List: (Yes/No) Keep auxiliary file containing a list of the input files to be tiled

  • Tile Size: (float) Size (in units of the coordinate system) of the square tiles (Default: 1000)

  • Output Path: (string) Path and prefix where the output tiles will be saved

Output Parameters
  • Output XML: The output XML file that contains the tiles definition

Remove Noise

Many factors may introduce errors in LiDAR point clouds, including water vapour clouds, multipath effects, poor equipment calibration, or even a flock of birds. In order to avoid errors and artefacts in the final digital models and a poor assessment of height metrics, these points have to be removed.

This module removes vertical noise from LiDAR datasets by means of three different strategies. Upper and lower absolute thresholds clip the file to fit these values. The relative threshold removes, for each bin within an SPD file, points outside the upper and lower values relative to the median height, whilst the global threshold uses the whole SPD file to calculate the median height and removes points relative to it.
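The absolute and median-relative strategies can be sketched in Python (illustrative only; the real spdrmnoise module operates on binned SPD pulses, and these helpers are hypothetical):

```python
# Two of the noise-removal strategies: absolute elevation clipping and
# thresholds relative to the median height.
from statistics import median

def remove_absolute(heights, lower, upper):
    """Keep returns whose elevation lies inside absolute thresholds."""
    return [z for z in heights if lower <= z <= upper]

def remove_relative(heights, lower, upper):
    """Drop returns further than lower/upper from the median height.

    Applied per bin this is the relative threshold; applied to the
    whole file it is the global threshold.
    """
    m = median(heights)
    return [z for z in heights if m - lower <= z <= m + upper]

# Hypothetical heights with a 250 m cloud return as vertical noise:
heights = [102.0, 104.0, 103.0, 250.0, 101.0]
cleaned = remove_relative(heights, lower=10.0, upper=10.0)
# the 250 m outlier is removed; the ground-level returns survive
```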

_images/spdrmnoise.png

The module for removing noise from data is located in the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spdtiling.png

Interface for removing noise from an SPD file

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds.
Optional Input Parameters
  • Global Rel. Upper Threshold: (float) Global relative to median upper threshold for returns which are to be removed
  • Global Rel. Lower Threshold: (float) Global relative to median lower threshold for returns which are to be removed
  • Relative Upper Threshold: (float) Relative to median upper threshold for returns which are to be removed
  • Relative Lower Threshold: (float) Relative to median lower threshold for returns which are to be removed
  • Absolute Upper Threshold: (float) Absolute upper threshold for returns which are to be removed
  • Absolute Lower Threshold: (float) Absolute lower threshold for returns which are to be removed
Output Parameters
  • Output: The output SPD file without noise

Classify Ground Returns

Two different classification algorithms have been implemented in the plug-in. These algorithms, also called filters, classify the LiDAR points, identifying which points belong to the ground. The filters implement the Progressive Morphology (Zhang et al., 2003; [Zhang2003]) and Multiscale Curvature (Evans and Hudak, 2007; [EvansHudak2007]) methodologies.

Progressive Morphology filter

To classify ground returns, an implementation of the Progressive Morphology filter (PMF) has been provided. The algorithm's QGIS interface has only three options to be set. Under most circumstances the default parameters will be fit for purpose, and it is recommended to use the simplest parameter configuration given by default.

The class option allows the filter to be applied to particular classes (i.e., if ground returns have already been classified but need tidying up).
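The idea behind the PMF can be illustrated with a minimal one-dimensional Python sketch (a simplification under stated assumptions; the plug-in uses the full two-dimensional SPDLib implementation, and the window and threshold values here are hypothetical):

```python
# 1-D sketch of the Progressive Morphology filter idea (Zhang et al.,
# 2003): repeatedly "open" the elevation surface with growing windows
# and flag returns that rise above the opened surface by more than an
# elevation-difference threshold.

def morph_open(z, half_win):
    """Grey-scale opening: erosion (min) followed by dilation (max)."""
    n = len(z)
    eroded = [min(z[max(0, i - half_win):i + half_win + 1]) for i in range(n)]
    return [max(eroded[max(0, i - half_win):i + half_win + 1]) for i in range(n)]

def classify_ground(z, windows=(1, 2, 4), thresholds=(0.3, 0.5, 1.0)):
    """Return a ground (True) / non-ground (False) flag per sample."""
    ground = [True] * len(z)
    surface = list(z)
    for w, t in zip(windows, thresholds):
        opened = morph_open(surface, w)
        for i, (a, b) in enumerate(zip(surface, opened)):
            if a - b > t:          # rises above the opened surface
                ground[i] = False
        surface = opened
    return ground

# Flat terrain with a 10 m tree crown at index 3:
flags = classify_ground([0.0, 0.1, 0.0, 10.0, 0.1, 0.0])
# only the crown return is flagged as non-ground
```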

_images/spdpmfgrd.png

The module that implements the PMF algorithm to classify ground is within the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spdpmfgrd.png

Interface of the Progressive Morphology Algorithm to classify ground points

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds.
Optional Input Parameters
  • Binsize: (float) Bin size for SPD file index (Default 1)
  • Class: (integer) Only use points of particular class
  • Ground Threshold: (float) Threshold for deviation from identified ground surface for classifying the ground returns (Default 0.3)
  • Median Filter: (integer) Size of the median filter (half size i.e., 3x3 is 1) (Default 2)
  • No Median: (Yes/No) Do not run a median filter on generated surface
  • Max. Elevation: (float) Maximum elevation difference threshold (Default 5)
  • Initial Elevation: (float) Initial elevation difference threshold (Default 0.3)
  • Slope: (float) Slope parameter related to terrain (Default 0.3)
  • Max. Filter: (float) Maximum size of the filter (Default 7)
  • Initial Filter: (float) Initial size of the filter (half size i.e., 3x3 is 1) (Default 1)
Output Parameters
  • Output: The output SPD file containing classification
Multiscale Curvature filter

The plug-in integrates an implementation of the Multiscale Curvature algorithm (MCC). As before, the QGIS interface has only three options to be set: input, output and the class argument. Under most circumstances the default parameters will be fit for purpose, but make sure the bin size used within the SPD file is not too large, as the processing will be carried out at this resolution.

_images/spdmccgrd.png

The module that implements the MCC algorithm to classify ground is within the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spdmccgrd.png

Interface of the Multiscale Curvature Algorithm to classify ground points

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds.
Optional Input Parameters
  • Binsize: (float) Bin size for SPD file index (Default 1)
  • Class: (integer) Only use points of particular class
  • Median: (Yes/No) Use a median filter to smooth the generated raster instead of a (mean) averaging filter.
  • Filter Size: (integer) The size of the smoothing filter (half size i.e., 3x3 is 1; Default = 1)
  • Num. Points Tps: (integer) The number of points used for the TPS interpolation (Default = 16)
  • Max. Radius Tps: (float) Maximum search radius for the TPS interpolation (Default = 20)
  • Step Curve Tolerance: (float) Iteration step curvature tolerance parameter (Default = 0.5)
  • Min. Curve Tolerance: (float) Minimum curvature tolerance parameter (Default = 0.1)
  • Initial Curve Tolerance: (float) Initial curvature tolerance parameter (Default = 1)
  • Scale Gaps: (float) Gap between increments in scale (Default = 0.5)
  • Num. Scales Below: (integer) The number of scales below the init scale to be used (Default = 1)
  • Num. Scales Above: (integer) The number of scales above the init scale to be used (Default = 1)
  • Initial Scale: (float) Initial processing scale, this is usually the native resolution of the data.
  • Max. Elevation Threshold: (float) Maximum elevation difference threshold (Default 5)
  • Initial Elevation Threshold: (float) Initial elevation difference threshold (Default 0.3)
  • Slope: (float) Slope parameter related to terrain (Default 0.3)
  • Max. Filter: (float) Maximum size of the filter (Default 7)
  • Initial Filter: (float) Initial size of the filter (half size i.e., 3x3 is 1) (Default 1)
Output Parameters
  • Output: The output SPD file containing classification
Filter points depending on class

The class option applies the filter to returns of a particular class only (e.g., ground returns). One way to improve the ground return classification is to combine more than one filtering algorithm, taking advantage of their particular strengths and weaknesses. A particularly useful combination is to first run the PMF algorithm taking a thick slice (e.g., 1 or 2 metres above the raster surface) and then apply the MCC algorithm to find the ground returns (setting the class option to 3).

This can also be useful for TLS data: take a thick slice with the MCC algorithm and then use the PMF algorithm to tidy up that result, obtaining a good overall ground classification.

Normalise heights

SPD files support both an elevation relative to a vertical datum and an above-ground height for each discrete return. Before the data can be used to generate a Canopy Height Model (CHM) or any height-related metric, the height field has to be populated. This can be done in two ways. The simplest is to use a DTM of the same resolution as the SPD file bin size (Image option). The disadvantage is that an inaccurate DTM can introduce artefacts. With this method the only parameters are the input files, i.e. the LiDAR file and the DTM, and an output file. The raster DTM needs to be the same resolution as the SPD grid and can be in any raster format supported by the GDAL library.

The other option is to interpolate a value for each point generating a continuous surface and reducing any artefacts (Interpolate option). The recommended approach for the interpolation is to use the Natural Neighbour method, as demonstrated by Bater and Coops (2009; [BaterCoops2009]).
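The Image option amounts to a grid lookup and a subtraction. The sketch below illustrates the idea on toy arrays; the function name, array layout and grid origin are assumptions for this example, not the plug-in's internals:

```python
import numpy as np

def normalise_heights(points, dtm, origin, binsize):
    """Subtract the DTM elevation under each point from its elevation.

    points : (N, 3) array of x, y, z coordinates
    dtm    : 2-D array of ground elevations, indexed as dtm[row, col]
             with row = y index and col = x index for simplicity
    origin : (x0, y0) lower-left corner of the DTM grid
    binsize: DTM cell size, which must match the SPD bin size
    """
    cols = ((points[:, 0] - origin[0]) // binsize).astype(int)
    rows = ((points[:, 1] - origin[1]) // binsize).astype(int)
    return points[:, 2] - dtm[rows, cols]

# Toy example: flat ground at 100 m elevation, one return at 112.5 m
dtm = np.full((10, 10), 100.0)
pts = np.array([[4.2, 3.7, 112.5]])
print(normalise_heights(pts, dtm, (0.0, 0.0), 1.0))  # [12.5]
```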

_images/spddefheight.png

The module that defines the height above ground is located in the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spddefheight.png

Interface of the module to define points heights from the ground

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds.
Optional Input Parameters
  • Reference Surface: Select the reference surface used to normalise heights
    • Interpolation
    • Image
  • Elevation: (raster) The input elevation image

  • Binsize: (float) Bin size for SPD file index (Default 1)

  • Interpolator: Different interpolation methods to choose from
    • Natural Neighbour
    • Nearest Neighbour
    • TIN Plate
Output Parameters
  • Output: The output SPD file

Interpolation Module

The most common products created from a LiDAR dataset are Digital Terrain Models (DTMs), Digital Surface Models (DSMs) and Canopy Height Models (CHMs). To produce them, a raster surface is interpolated from the classified ground returns and the top surface points. The Create Digital Model module of the ThermoLiDAR plug-in generates these products via the model option.

A key parameter is the output raster resolution, or binsize, which needs to be a multiple of the SPD input file spatial index. Different interpolators can be selected with the interpolator option. This module supports many raster formats by means of the GDAL library.
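To illustrate what an interpolation of this kind does, the sketch below rasterizes scattered ground returns onto a regular 1 m grid with SciPy's `griddata` (a TIN-style linear interpolation); the data and grid are made up for the example and do not reproduce the plug-in's exact interpolators:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# Hypothetical classified ground returns: x, y and elevation on a gentle slope
xy = rng.uniform(0, 50, size=(500, 2))
z = 100.0 + 0.1 * xy[:, 0]

# Output grid at a 1 m binsize (cell centres)
gx, gy = np.meshgrid(np.arange(0.5, 50, 1.0), np.arange(0.5, 50, 1.0))

# Linear (TIN-style) interpolation of the returns onto the grid;
# cells outside the convex hull of the points are left as NaN
dtm = griddata(xy, z, (gx, gy), method='linear')
print(dtm.shape)  # (50, 50)
```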

_images/spdinterpolate.png

The interpolation module is in the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spdinterp.png

Interface to interpolate data

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds.

  • MODEL:
    • DTM: Digital Terrain Model
    • DSM: Digital Surface Model
    • CHM: Canopy Height Model
Optional Input Parameters
  • Binsize: (float) Bin size for SPD file index (Default 1)

  • Interpolator: Different interpolation methods to choose from
    • Natural Neighbour
    • Nearest Neighbour
    • TIN Plate
Output Parameters
  • Output: The raster file containing the interpolated model
Examples

Digital Surface Models (DSMs) are easily generated by setting model to DSM. The module interpolates the elevation information of all points in the LiDAR file. The result is a surface representing the ground and all the objects attached to it, as can be seen in the figure below.

_images/QGIS_dsm.png

Example of visualization in grey scales of a DSM (1m resolution) generated with the ThermoLiDAR plug-in

Once ground returns have been classified, interpolating the elevation information of ground points generates a Digital Terrain Model (DTM). For this purpose, model has to be set to DTM. The output raster represents the bare ground surface (see next figure).

_images/QGIS_dtm.png

Example of visualization in grey scales of a DTM (1m resolution) generated with the ThermoLiDAR plug-in

In a forest environment, points not classified as ground are commonly classified as vegetation. Interpolating the height-above-ground information of vegetation points produces a Canopy Height Model (CHM). In this case, the model option is set to CHM. The result is a raster where canopies are clearly depicted above the ground (see figure below).

_images/QGIS_chm.png

Example of visualization in grey scales of a CHM (1m resolution) generated with the ThermoLiDAR plug-in

Generate metrics

The ThermoLiDAR plug-in is able to calculate several metrics at the same time. Metrics can be simple statistical moments, percentiles of point heights, or count ratios. Mathematical operators can also be applied to other metrics or operators.

The XML file required by the metrics option has to be defined a priori with a hierarchical list of metrics and operators. Metrics can be defined manually by editing the XML file, which allows the user to adapt it to particular purposes (for more information about XML metrics files, see the How to define metrics? document). Alternatively, the user can generate the XML file automatically by means of the Create metrics XML file module.

The module supports different output data formats: raster (all GDAL formats) and vector. The raster option extends the output to the entire input file, computing the metrics for each pixel of the final raster and creating as many bands as metrics defined within the XML file. The vector option requires an input shapefile containing polygon entities; the output shapefile database will be populated with the metrics computed inside the polygons.
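The kinds of metrics mentioned above (percentiles, simple statistics, count ratios) can be sketched for a single bin or plot as follows. The metric names mirror those used later in this document, but the function and the canopy-cover definition (ratio of returns above a 2 m threshold) are illustrative assumptions, not the plug-in's exact formulas:

```python
import numpy as np

def plot_metrics(heights, canopy_thresh=2.0):
    """Compute a few example metrics from the point heights of one bin/plot."""
    heights = np.asarray(heights, dtype=float)
    return {
        'pH95': np.percentile(heights, 95),       # 95th height percentile
        'maxHeight': heights.max(),
        'meanHeight': heights.mean(),
        # count ratio: returns above a canopy threshold over all returns
        'canopyCover': np.mean(heights > canopy_thresh),
    }

m = plot_metrics([0.1, 0.3, 1.5, 8.0, 12.2, 14.9, 15.3, 16.0])
print(m['canopyCover'])  # 0.625 (5 of 8 returns above 2 m)
```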

_images/spdmetrics.png

The module to generate metrics is located in the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spdmetrics.png

Interface to calculate metrics

Required Input Parameters
  • Input: SPD file that contains the LiDAR point clouds.

  • Metrics: (file) XML file containing the metrics template

  • Output Data Format:
    • Image: Raster output
    • Vector: Vector output
Optional Input Parameters
  • Binsize: (float) Bin size for processing and the resolution of the output image. Note: 0 will use the native SPD file bin size
  • Vector File: (shapefile) Input shapefile (only with vector output).
Output Parameters
  • Output: The file containing the computed metrics

The figure below shows the height 95th percentile computed for the same dataset as in the previous examples:

_images/QGIS_metrics_pH95.png

Height 95th percentile computed into a raster image (10m resolution)

Create metrics XML file

The module offers the possibility to create the file with a single metric selected from a list. The list also accepts the options ALL, FOREST and PERCENTILES:

  • ALL: Selects all the metrics listed in the option list.
  • FOREST: Includes metrics that are commonly used in forest applications: percentiles from the 99th to the 50th, groundCover, canopyCover, maxHeight and meanHeight.
  • PERCENTILES: Includes all percentiles from 99th to 10th.

If metrics other than those available in the list are needed, the user can specify them by writing their names separated by semicolons (';', without spaces) in the field labelled string format. For example:

pH99;pH98;pH95;pH90;pH80;maxHeight;canopyCover;NumberReturnsNoGround

In this case, the module will ignore any selection made in the first input parameter and create the XML file with the metrics supplied by the user. If any metric in the string is misspelled, the module will inform the user of the mistake and omit that metric; the file will be created with the metrics that are correctly written.
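The validation behaviour described above can be sketched as follows; the set of known metric names is a small illustrative subset taken from this document, not the plug-in's full list:

```python
# Illustrative subset of valid metric names (not the plug-in's full list)
KNOWN_METRICS = {'pH99', 'pH98', 'pH95', 'pH90', 'pH80',
                 'maxHeight', 'canopyCover', 'groundCover', 'meanHeight'}

def parse_metric_string(s):
    """Split a semicolon-separated metric string, warn about unknown names
    and keep only the correctly written metrics."""
    requested = s.split(';')
    kept = [m for m in requested if m in KNOWN_METRICS]
    for m in requested:
        if m not in KNOWN_METRICS:
            print(f'Warning: unknown metric "{m}" omitted')
    return kept

# 'maxHieght' is misspelled on purpose and gets dropped with a warning
print(parse_metric_string('pH95;pH90;maxHieght;canopyCover'))
```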

_images/spdmetrics.png

The module to generate metrics XML file is located in the LiDAR submenu of the ThermoLiDAR toolbox

Parameters
_images/Module_spddefmetrics.png

Interface of the module to generate XML file containing metrics

Optional Input Parameters
  • List of metrics: Metrics to be used
  • String of metrics: Metrics to be used
Output Parameters
  • Output: The output XML file containing metric definitions

References

[Bunting2013] Bunting, P., Armston, J., Clewley, D., Lucas, R. M., 2013. Sorted pulse data (SPD) library. Part II: A processing framework for LiDAR data from pulsed laser systems in terrestrial environments. Computers and Geosciences 56, pp. 207–215.
[BaterCoops2009] Bater, C. W., Coops, N. C., 2009. Evaluating error associated with lidar-derived DEM interpolation. Computers and Geosciences 35 (2), pp. 289–300.
[EvansHudak2007] Evans, J. S., Hudak, A. T., 2007. A multi-scale curvature algorithm for classifying discrete return lidar in forested environments. IEEE Transactions on Geoscience and Remote Sensing 45 (4), pp. 1029–1038.
[Naesset1997a] Naesset, E., 1997. Determination of mean tree height of forest stands using airborne laser scanner data. ISPRS Journal of Photogrammetry and Remote Sensing 52, pp. 49–56.
[Naesset1997b] Naesset, E., 1997. Estimating timber volume of forest stands using airborne laser scanner data. Remote Sensing of Environment 61, pp. 246–253.
[Naesset2002] Naesset, E., 2002. Predicting forest stands characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sensing of Environment 80, pp. 88–99.
[Zhang2003] Zhang, K., Chen, S., Whitman, D., Shyu, M., Yan, J., Zhang, C., 2003. A progressive morphological filter for removing nonground measurements from airborne LIDAR data. IEEE Transactions on Geoscience and Remote Sensing 41 (4), pp. 872–882.

Forest Health Assessment

The Forest Health Assessment module consists of four different tools: Forest Stand Segmentation (FSS), Health Condition Level (HCL), Structurally Homogeneous Forest Units (SHFU) and Forest Health Monitoring (FHM). The following figure summarizes the main structure of this module and the input required throughout the process.

_images/FHA.png

Forest stands segmentation

Within this tool, users can choose between performing a semi-automatic segmentation and using a pre-defined object feature. The segmentation tools are based on algorithms that partition an image into areas of connected pixels according to their DN value. ThermoLiDAR image segmentation is based on region growing algorithms. The basic approach of a region growing algorithm is to start from a seed region (typically one or more pixels) considered to be inside the object to be segmented. The pixels neighbouring this region are evaluated to determine whether they should also be considered part of the object; if so, they are added to the region, and the process continues as long as new pixels are added. Region growing algorithms vary in the criteria used to decide whether a pixel should be included in the region, the type of connectivity used to determine neighbours, and the strategy used to visit neighbouring pixels.

Image segmentation is a crucial step within the object-based remote sensing information retrieval process. As a step prior to classification, the quality assessment of the segmentation result is of fundamental significance for the recognition process, as well as for choosing the appropriate approach and parameters for a given segmentation task.

Alternatively, the user may be interested in using a pre-defined object feature. This can be a segmentation shapefile provided from another source or any other land cover mapping. The user can also use a pre-defined regular object, defining the size of the square beforehand.
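A minimal sketch of region growing as described above, using 4-connectivity and a tolerance on the distance to the running region mean; both choices are illustrative assumptions, and the plug-in's actual inclusion criteria may differ:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-connected neighbours whose DN
    value is within `tol` of the current region mean."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    total, n = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc]
                    and abs(img[nr, nc] - total / n) <= tol):
                mask[nr, nc] = True
                total += img[nr, nc]
                n += 1
                queue.append((nr, nc))
    return mask

# Toy image: a low-DN object (left) next to a high-DN object (right)
img = np.array([[10, 11, 50],
                [10, 12, 52],
                [11, 11, 51]], dtype=float)
print(region_grow(img, (0, 0), tol=3).sum())  # 6 pixels in the low-DN region
```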

_images/example_FSS.png

Forest Condition Levels

The most critical part in applying forest health condition indicators is the user's accuracy in defining forest degradation levels. Besides user training, another critical factor is to select a robust physiological indicator for the analysis and to carry out an accurate field measurement campaign.

Potential physiological indicators of forest decline, such as pigment concentration, photosynthesis, respiration and transpiration rate, hold great potential to shed light on the mechanisms and processes that occur as a result of drought stress. In the short term, climate can change the physiological condition of the forest resulting in acute damage, but chronic exposure usually results in cumulative effects on physiological processes. These factors affect the plants' light reactions or enzymatic functions and increase respiration from reparative activities. Gradual decreases in photosynthesis, stomatal conductance, carbon fixation, water use efficiency, resistance to insects and cold resistance were found in most trees, which are very typical symptoms of stress conditions.

Long-term exposure to water stress combined with high light levels and high temperatures causes a depression of photosynthesis and photosystem II efficiency that is not easily reversed, even in water-stress-resistant forest species. The decrease in the photochemical efficiency of photosystem II (ΦPSII) is related to the conversion of violaxanthin to antheraxanthin and zeaxanthin, produced by an increase in harmless non-radiative energy dissipation (qN) that provides photo-protection from oxidative damage. One of the most widely applied physiological indicators in the analysis of long-term effects on forest health condition is the Leaf Area Index (LAI). The following is an example of the statistical analysis performed on LAI values measured in an oak forest inventoried in the framework of the THERMOLIDAR project.

_images/example_FCL.png

Structurally Homogeneous Forest Units

This tool provides what is required for the classification of structurally different forest stands. The classification is based on two main structural parameters: average tree height and density. The input data needed to run this process are obtained from LiDAR data. Alternatively, the user can provide external forest maps in .shp format with an attribute holding the class number. The following figure shows an example of the units defined for the oak forest under analysis. Using a grey scale, trees were grouped into 3 classes with significant differences in terms of structural composition.

_images/example_SHFU.png

Forest Health Monitoring

The main function of this tool is to identify health condition differences in the vegetation at the stand level. The input parameters defined by users should mainly contain thermal imaging data and the FSC polygons (a vector file with structurally homogeneous stands). Forest stands included in this analysis should each be based on a single species. The user can perform a supervised or an unsupervised classification depending on the availability of field data measurements to define training areas. It should be highlighted that, at this point of the analysis, users obtain an integrated map of forest health distribution levels based on thermal data, standardized by the forest stand units defined from LiDAR-based metrics. The following figure shows an example of the units defined for the assessment of forest condition. Using a colour palette, trees were grouped into different classes with significant differences in terms of structural composition and physiological status. The palette ranges from red to green, where red relates to trees with a high level of damage and green represents trees in optimum health condition.

_images/example_FHM.png

Data Analysis

This section provides the tools for analysis and interpretation of results. Once thermal and LiDAR data have been processed in the previous sections, the user can generate the mapping needed to interpret the physiological state of the forest mass analysed.

First, the user takes the physiological data collected in the field, which are analysed and grouped using the tools available in the Health Condition Level module.

From the tools available in the Structurally Homogeneous Forest Units module, the user can perform a preliminary classification of the stands based on structural homogeneity. This factor is important because thermal values behave differently according to the structure of objects.

Finally, the Forest Health Classification tools provide what is necessary to perform a classification based on the thermal values of the various homogeneous units. To improve the classification, the user can define training plots according to the physiological data collected in the field, from which different damage levels are visually established.

_images/workflow_analysis.png

Health Condition Levels

Shapiro Test

Before proceeding with the classification of items by level of damage according to several variables taken in the field, we verify that each physiological variable follows a normal distribution. For this we use the Shapiro test, located in the toolbox under [Analysis] Health Condition Level > Shapiro Test.

_images/shapiro.jpg
Required Input Parameters
  • Input vector: Vector file that contains information on physiological data.
  • Parameter: Vector’s field to analyse if it follows the normal distribution
Output Parameters
  • R Console Output: File with the output result. The output is composed of the following values
    • Statistic - The value of the Shapiro-Wilk statistic.
    • p.value - An approximate p-value for the test. This is said in Royston (1995) to be adequate for p.value < 0.1.
    • Method - The character string “Shapiro-Wilk normality test”.
    • data.name - A character string giving the name(s) of the data.
Interpretation

The null hypothesis of this test is that the population is normally distributed. Thus, if the p-value is less than the chosen alpha level, the null hypothesis is rejected and there is evidence that the data tested are not from a normally distributed population; in other words, the data are not normal. Conversely, if the p-value is greater than the chosen alpha level, the null hypothesis that the data came from a normally distributed population cannot be rejected. For example, at an alpha level of 0.05, a data set with a p-value of 0.02 rejects the null hypothesis that the data are from a normally distributed population. However, since the test is sensitive to sample size, even small deviations from normality may be statistically significant in large samples.
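The same test can be reproduced outside the plug-in, for example with SciPy; the two samples below are synthetic (one drawn from a normal distribution, one deliberately skewed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
normal_sample = rng.normal(loc=5.0, scale=1.0, size=60)   # e.g. LAI values
skewed_sample = rng.exponential(scale=1.0, size=60)       # clearly non-normal

for name, sample in [('normal', normal_sample), ('skewed', skewed_sample)]:
    stat, p = stats.shapiro(sample)
    verdict = ('cannot reject normality' if p > 0.05
               else 'reject normality')
    print(f'{name}: W={stat:.3f}, p={p:.4f} -> {verdict}')
```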

Standardize

To be used in our analysis, the variables should follow a normal distribution. One way to put them on a common scale is standardization. Standardization converts a variable that follows a distribution N(μ, σ) into a new variable with distribution N(0, 1). This tool is situated in [Analysis] Health Condition Level > Standardize.
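Standardization is a one-line transformation: subtract the mean and divide by the standard deviation, yielding z-scores with mean 0 and standard deviation 1. A sketch with made-up LAI values:

```python
import numpy as np

def standardize(x):
    """Convert a variable ~ N(mu, sigma) into z-scores ~ N(0, 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

lai = np.array([2.1, 2.8, 3.0, 3.5, 4.1])  # hypothetical field LAI values
z = standardize(lai)
print(z)  # mean ~ 0, sample standard deviation ~ 1
```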

_images/standardize.jpg
Required Input Parameters
  • Input vector: Shape that contains information on physiological data.
  • Variable: Shapefile’s field to standardize
Output Parameters
  • Output vector: A new shapefile with the standardized variable.
Clustering

This tool allows grouping one or more physiological variables according to the degree of similarity between individuals in the sample. The goal of clustering is to determine the intrinsic grouping in a set of unlabelled data. But how to decide what constitutes a good clustering? It can be shown that there is no absolute best criterion independent of the final aim of the clustering. Consequently, it is the user who must supply this criterion, in such a way that the result of the clustering suits their needs.

A hierarchical approach has been chosen for this tool. The user-supplied items are categorized into levels and sublevels within a class hierarchy, forming a hierarchical tree structure.
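A hierarchical clustering of standardized physiological variables can be sketched with SciPy. The data are synthetic, and Ward's linkage is an illustrative choice, not necessarily the linkage the plug-in uses:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Hypothetical standardized physiological variables for 30 sample trees:
# 15 "healthy" and 15 "stressed" individuals in two variables
healthy = rng.normal([0.8, 0.8], 0.3, size=(15, 2))
stressed = rng.normal([-0.8, -0.8], 0.3, size=(15, 2))
data = np.vstack([healthy, stressed])

tree = linkage(data, method='ward')                  # hierarchical tree
groups = fcluster(tree, t=2, criterion='maxclust')   # cut into 2 groups
print(sorted(np.bincount(groups)[1:]))               # sizes of the groups
```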

_images/clustering.jpg
Required Input Parameters
  • Input vector: Shape containing information on physiological data.
  • Cols names: Names of the columns containing physiological data (separated by semicolons ';')
  • Number of groups: Number of groups
Output Parameters
  • R plots: File with the R output result.
  • Output vector: Output shapefile with a new variable (group) indicating the group each item belongs to.
ANOVA

An ANOVA test is conducted for each variable to indicate how well the variable discriminates between clusters.

The hypothesis tested in the ANOVA is that the population means (the average of the dependent variable at each level of the independent variable) are equal. If the population means are equal, the groups do not differ in the dependent variable and, consequently, the independent variable is independent of the dependent variable.

_images/anova.jpg
Required Input Parameters
  • Input vector: Shapefile that contains information about the cluster
  • Dependent variable: Field shape that acts as a dependent variable
  • Independent variable: Field shape that acts as an independent variable
Output Parameters
  • R Console Output: File with the R output result. The result will be a list of ANOVA tables, one for each response (even if there is only one response). They have columns "Df", "Sum Sq" and "Mean Sq", as well as "F value" and "Pr(>F)" if there are non-zero residual degrees of freedom. There is a row for each term in the model, plus one for "Residuals" if there are any.
Interpretation

If the critical level associated with the F statistic (i.e., the probability of obtaining values as large as or larger than the one obtained) is less than 0.05, we reject the hypothesis of equal means and conclude that not all the population means being compared are equal. Otherwise, we cannot reject the hypothesis of equality and we cannot claim that the groups being compared differ in their population averages.
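A one-way ANOVA on a per-cluster variable can be reproduced with SciPy; the LAI values for the three clusters below are made up for the example:

```python
from scipy import stats

# Hypothetical LAI measurements for three clusters of sample plots
g1 = [3.9, 4.2, 4.0, 4.4, 4.1]
g2 = [3.1, 3.3, 2.9, 3.4, 3.2]
g3 = [2.0, 2.3, 1.9, 2.2, 2.1]

f_value, p_value = stats.f_oneway(g1, g2, g3)
if p_value < 0.05:
    print('Reject equal means: the variable discriminates between clusters')
else:
    print('Cannot reject equal means')
```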

Structurally Homogeneous Forest Units

The analytical purpose of this tool is the definition of structurally homogeneous stands that minimize the effects of structure on the thermal information, and therefore allow health outcomes related to the woodland to be obtained. To do this, the software allows the user to define the structure of the stand from the height data. The calculation of uniformity is therefore a function of two variables directly derived from LiDAR data: the 95th percentile, obtained from the DVM, and the penetration rate, obtained from the density of points that penetrate the forest canopy.

_images/SHFU.jpg
Required Input Parameters
  • Polygons: Vector file that contains the polygons to classify.
  • ID: Vector's field that indicates the id of each item
  • Equation: Operator used to classify the features.
  • Clusters: Number of output groups. The value is 3 by default.
  • Cutoff: Stop threshold of the algorithm. The value is 0.5 by default.
Output Parameters
  • Output: Vector file name containing the classification.

Forest Health Classification

Unsupervised Pixel-based Classification

The user has a stratification of the study area, classified on the basis of structure. Through this tool, a classification of the temperature pixels is performed, based on the defined structural classification. Thus, the output will be a temperature classification raster for each of the homogeneity groups.

_images/UP_classification.jpg
Required Input Parameters
  • Temperature raster layer: Input temperature raster
  • SHFU Polygons: Vector layer that contains structurally homogeneous stands.
  • SHFU Field: SHFU layer field containing the group to which each item belongs.
  • Clusters: Number of output classes.
  • Cutoff: Stop threshold. The value is 0.5 by default.
Output Parameters
  • Output: Raster file name containing the classification of temperatures based on homogeneous stand units.
  • Statistics: CSV File containing statistics for groups.
Unsupervised Object-oriented Classification

The user has a stratification of the study area, classified on the basis of structure. Through this tool, a classification of the mean temperatures of the objects is performed, based on the defined structural classification. Thus, the output will be a temperature classification raster for each of the homogeneity groups.

_images/UO_classification.jpg
Required Input Parameters
  • Temperature raster layer: Input temperature raster
  • SHFU Polygons: Vector layer that contains structurally homogeneous forest units.
  • SHFU Field: SHFU layer field containing the group to which each item belongs.
  • Clusters: Number of output classes.
  • Cutoff: Stop threshold. The value is 0.5 by default.
Output Parameters
  • Output: Raster file name containing the classification of temperatures based on homogeneous stand units.
  • Statistics: CSV File containing statistics for groups.
Supervised Pixel-based Classification

As in the previous sections, the user can perform a classification of the temperature response for the homogeneous stands. Unlike the unsupervised classification, the user has a number of ROIs that guide the classification process.

_images/SP_classification.jpg
Required Input Parameters
  • Temperature raster layer: Input temperature raster
  • SHFU Polygons: Vector layer that contains structurally homogeneous stands.
  • SHFU Field: SHFU layer field containing the group to which each item belongs.
  • ROIs vector: Vector file with training areas.
  • ROIs - SHFU Identifier: Vector's field that contains the homogeneity group to which each training-area item belongs.
  • ROIs - FSC Identifier: Vector's field that contains the "Forest health level" group to which each item belongs within its homogeneity group.
  • k: Number k nearest neighbours. The value is 3 by default.
Output Parameters
  • Output: Raster file name containing the classification of temperatures based on homogeneous stand units.
  • Statistics: CSV File containing statistics for groups.
Supervised Object-oriented Classification
_images/SO_classification.jpg
Required Input Parameters
  • Temperature raster layer: Input temperature raster
  • SHFU Polygons: Vector layer that contains structurally homogeneous stands.
  • SHFU Field: SHFU layer field containing the group to which each item belongs.
  • ROIs vector: Vector file with training areas.
  • ROIs - SHFU Identifier: Vector's field that contains the homogeneity group to which each training-area item belongs.
  • ROIs - FSC Identifier: Vector's field that contains the "Forest health level" group to which each item belongs within its homogeneity group.
  • k: Number k nearest neighbours. The value is 3 by default.
Output Parameters
  • Output: Raster file name containing the classification of temperatures based on homogeneous stand units.
  • Statistics: CSV File containing statistics for groups.

Forest Models

Measuring biological and physical field properties (e.g. dominant height, mean diameter, stem number, basal area, timber volume, etc.) throughout entire woodlands is impossible. However, thanks to the characteristics of LiDAR technology it is possible to assess these properties over wide areas.

Two different models have been implemented in the ThermoLiDAR plug-in for this purpose. The first implements an empirical method where field measurements are correlated with LiDAR data. In this case, only a few sample plots are measured in the field in order to relate these measurements to canopy height metrics derived from LiDAR data. These relationships are then used to estimate and extend those characteristics to the area covered by LiDAR, creating forest inventory cartography. This is commonly called the Scandinavian method.

The second is a hybrid method where equations relating LiDAR canopy height and forest variables have been defined a priori. This method does not require field measurements and, once calibrated, it can be used anywhere else.

Besides, ...

Warning

Some introduction to Stomatal conductance module is missing!

Stepwise Multivariate Regression Model

The Stepwise Multivariate Regression model was first introduced by Naesset (1997a, 1997b) to estimate tree heights ([Naesset1997a]) and volumes ([Naesset1997b]). The methodology assumes that stands have been classified before the sample plots are measured. Plots must have the same size and be regularly distributed throughout the study area. Height metrics derived from LiDAR have to be calculated within each single sample plot (see `Generate metrics`_), excluding points lower than 2 metres so that stones and shrubs are avoided. The metrics have to include (Naesset 2002; [Naesset2002]):

  • Quantiles corresponding to the 0th, 5th, 10th, 15th, ..., 90th, 95th, 98th percentiles of the distribution.
  • The maximum values
  • The mean values
  • The coefficients of variation
  • Measures of canopy density

Naesset defines canopy density as the proportion of first-echo laser hits above the 10th, ..., 90th percentiles of the first-echo height distribution to the total number of first echoes.
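As an illustrative sketch (not part of the plug-in), the Naesset canopy densities can be computed from a sample of first-echo heights as follows; the function names and the nearest-rank percentile convention are our own assumptions:

```python
def percentile(sorted_vals, p):
    """Nearest-rank percentile of an ascending list (p in [0, 100])."""
    if not sorted_vals:
        raise ValueError("empty sample")
    k = int(round(p / 100.0 * (len(sorted_vals) - 1)))
    return sorted_vals[max(0, min(len(sorted_vals) - 1, k))]

def canopy_densities(first_echo_heights, percentiles=range(10, 100, 10)):
    """Proportion of first echoes above each height-percentile threshold."""
    hs = sorted(first_echo_heights)
    n = len(hs)
    densities = {}
    for p in percentiles:
        threshold = percentile(hs, p)
        densities[p] = sum(1 for h in hs if h > threshold) / n
    return densities
```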

For each sample plot a logarithmic regression equation is formulated:

\[Y = \beta_0 h_0^{\beta_1} h_{10}^{\beta_2} \ldots h_{90}^{\beta_{11}} h_{max}^{\beta_{12}} h_{mean}^{\beta_{13}} d_{10}^{\beta_{14}} \ldots d_{90}^{\beta_{23}} \ldots\]

which, in linear form is expressed as:

\[\begin{split}\ln Y &= \ln\beta_0 + \beta_1 \ln h_0 + \beta_2 \ln h_{10} + \ldots + \beta_{11} \ln h_{90} + \\ &+ \beta_{12} \ln h_{max} + \beta_{13} \ln h_{mean} + \beta_{14} \ln d_{10} + \ldots + \beta_{23} \ln d_{90} + \ldots\end{split}\]

where \(Y\) is the field value (dependent variable); \(h_{i}\) are the height percentiles; \(h_{max}\) and \(h_{mean}\) are the maximum and mean height, respectively; and \(d_{j}\) are the Naesset canopy densities.
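A minimal sketch of fitting one term of this log-linear model (a field variable against a single LiDAR height metric) by ordinary least squares on the log-transformed values is shown below; the full stepwise procedure repeats this with several metrics, keeping only the significant terms. The function name is illustrative:

```python
import math

def fit_loglinear(y, h):
    """Fit ln(y) = ln(b0) + b1*ln(h) by simple least squares.

    Returns (b0, b1) so that y is approximated by b0 * h**b1. A stepwise
    multivariate fit would extend this to several metrics at once.
    """
    ly = [math.log(v) for v in y]
    lh = [math.log(v) for v in h]
    n = len(ly)
    mh = sum(lh) / n
    my = sum(ly) / n
    b1 = sum((a - mh) * (b - my) for a, b in zip(lh, ly)) \
        / sum((a - mh) ** 2 for a in lh)
    b0 = math.exp(my - b1 * mh)
    return b0, b1
```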

_images/stepwise_regression.png

The module to calibrate the forest model is located in the [Analysis] Forest Model submenu of the ThermoLiDAR toolbox.

Parameters
_images/Module_multivariate_regression.png

Interface to calibrate the forest model

Required Input Parameters
  • Layer: Shapefile layer that contains the LiDAR metrics.
  • Dependent Variable: Vector field containing the observations of the variable to predict
  • Independent Variable i: Vector field containing the LiDAR metric that will be used to characterise the model
Output Parameters
  • Output: (html) File containing the final report of the multivariate regression

Hybrid model

Warning

Juan to write a small theoretical introduction about what the model does

This module implements a hybrid model to infer some forestry variables. The equations of the model can be written as:

\[TH = H_c \cdot 1.10\]

\[SI = \alpha_1 \dfrac{TH}{\left[1-\exp(-\alpha_2 \cdot age)\right]^{\alpha_3}}\]

\[Yield = \beta_1 + \beta_2 \cdot SI\]

\[Mean_{DBH} = b_1 \cdot TH^{b_4}\]

\[Trees_{ha} = \exp(c_1 - c_4 \cdot \log(Mean_{DBH}))\]

\[Volume_{ha} = g_1 \cdot TH^{g_3}\]

\[Basal\_Area_{ha} = h_1 \cdot TH^{h_4}\]

where \(TH\) is Top Height, \(SI\) is Site Index, \(Yield\) is Yield class, \(Mean_{DBH}\) is Mean DBH, \(Trees_{ha}\) is the number of trees per hectare, \(Volume_{ha}\) is the volume of biomass per hectare, \(Basal\_Area_{ha}\) is the basal area per hectare; and \(\left\{\alpha_1, \alpha_2, \alpha_3, \beta_1, \beta_2, b_1, b_4, c_1, c_4, g_1, g_3, h_1, h_4\right\}\) depend on the species. \(TH\) is well correlated with the \(95^{th}\) height percentile for Norway Spruce and Sitka Spruce, and with the \(90^{th}\) height percentile for Scots Pine.

If no information about tree age is provided, a singularity occurs in the Site Index and Yield Class equations, and these variables cannot then be computed.

If the study area contains several stands with different species and ages, users may want to use a layer with sub-compartmentation information. In this case, both age and vegetation layers can be provided in order to override the default constant values of age and vegetation. For each layer, the name of the attribute column must be specified (Age column and Vegetation column).

The output raster will contain 7 bands, one per estimated variable: Top Height, Mean DBH, number of trees per hectare, Volume per hectare, Basal Area per hectare, Site Index and Yield Class. However, when age keeps its default value of 0, only 5 bands are created in the output raster due to the restrictions in the equations mentioned above.
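As a worked sketch, the hybrid-model equations above can be evaluated per pixel as follows. The coefficient values here are PLACEHOLDERS for illustration only; the real, species-specific values are built into the module and are not reproduced here:

```python
import math

# PLACEHOLDER coefficients; the real values depend on the species.
COEF = dict(a1=1.0, a2=0.05, a3=1.5, beta1=0.0, beta2=1.0,
            b1=1.2, b4=0.9, c1=9.0, c4=1.3, g1=0.5, g3=2.0, h1=0.8, h4=1.1)

def hybrid_model(h_pct, age=0, c=COEF):
    """Estimate the forest variables from a canopy-height percentile (m)."""
    th = h_pct * 1.10                                  # Top Height
    out = {
        "TH": th,
        "Mean_DBH": c["b1"] * th ** c["b4"],
        "Volume_ha": c["g1"] * th ** c["g3"],
        "BasalArea_ha": c["h1"] * th ** c["h4"],
    }
    out["Trees_ha"] = math.exp(c["c1"] - c["c4"] * math.log(out["Mean_DBH"]))
    if age > 0:  # age = 0 makes SI singular, hence the 5-band output
        si = c["a1"] * th / (1 - math.exp(-c["a2"] * age)) ** c["a3"]
        out["SI"] = si
        out["Yield"] = c["beta1"] + c["beta2"] * si
    return out
```

Note how the age guard mirrors the module's behaviour: with the default age of 0 only five variables are produced, and Site Index and Yield Class are skipped.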

_images/hybrid.png

This figure should depict where to find the module

Parameters
_images/Module_hybrid_method.png

This figure should show the module graphical interface

Required Input Parameters
  • Canopy: (raster) Canopy height layer
  • Age: (integer) Age of the vegetation (years) (Default: 0)
Optional Input Parameters
  • Age layer: (vector) Sub-compartmentation at stand level with age layer

  • Age column: (string) Name of the column attribute for age sub-compartmentation

  • Vegetation layer: (vector) Sub-compartmentation at stand level with species information

  • Vegetation column: (string) Name of the column attribute for species sub-compartmentation

  • Vegetation type:
    • Sitka Spruce
    • Norway Spruce
    • Scots Pine
Output Parameters
  • Output: Output raster

Canopy stomatal conductance

The module calculates the canopy stomatal conductance of an area based on the information supplied by canopy temperature, terrain, canopy model, LAI, vegetation type and weather conditions, among other parameters. The model implements the methodology proposed by Blonquist et al., 2009 ([blonquist2009]).

The user is required to provide raster layers for canopy temperature, digital terrain model and canopy height model as input parameters, an output raster layer for the output and the day of the year (DOY). DOY is needed in order to calculate the solar radiation of the particular day and hour of the year when measurements were taken, and must be provided as DDMMYYYY:hhmm, where DD is the day of the month, MM the month, YYYY the year, hh the hour of the day (24h format) and mm the minutes.
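As a small sketch of how the DDMMYYYY:hhmm stamp resolves into the day-of-year and time used for the solar-radiation calculation (the helper name is ours, not part of the plug-in):

```python
from datetime import datetime

def parse_doy(stamp):
    """Parse a DDMMYYYY:hhmm timestamp into (day_of_year, hour, minute)."""
    dt = datetime.strptime(stamp, "%d%m%Y:%H%M")
    return dt.timetuple().tm_yday, dt.hour, dt.minute
```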

LAI, wind speed and air temperature parameters have default values that will be used as constants throughout the whole area. Users can provide different values or even raster layers that continuously cover the study area. Latitude, precipitation, pressure and humidity should be set to match the location and weather conditions of the thermal survey. Parameters such as the quantum yield efficiency, fraction of photosynthetically active radiation, light extinction coefficient or atmospheric turbidity factor are optional and their default values can be used in most cases.

Ground coverage reflects energy in different ways depending on the coverage type. The vegetation type can be selected from a list in order to set the corresponding energy reflection coefficient. The types available are:

  • Crop: 0.31
  • Deciduous low sun: 0.29
  • Deciduous high sun: 0.23
  • Conifers: 0.312
  • Grass: 0.24
  • Snow: 0.85
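A minimal sketch of how these reflection (albedo) coefficients enter an energy balance: the fraction of incoming shortwave radiation the surface absorbs is one minus the coefficient. The lookup table below simply restates the values listed above; the function name is our own:

```python
# Energy-reflection coefficients by vegetation type, as listed in the manual.
ALBEDO = {
    "Crop": 0.31, "Deciduous low sun": 0.29, "Deciduous high sun": 0.23,
    "Conifers": 0.312, "Grass": 0.24, "Snow": 0.85,
}

def absorbed_shortwave(incoming_w_m2, vegetation_type):
    """Shortwave radiation absorbed by the surface (W m-2)."""
    return (1.0 - ALBEDO[vegetation_type]) * incoming_w_m2
```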

Warning

Georgios to write a small theoretical introduction about what the model does (Yes, you can use LaTeX equations :) )

_images/cs_conductance.png

This figure should depict where to find the module

Parameters
_images/Module_conductance.png

This figure should show the module graphical interface

Required Input Parameters
  • Canopy temperature: (raster) Canopy temperature layer
  • Digital terrain model: (raster) Digital terrain model layer
  • Canopy height model: (raster) Canopy height model layer
Optional Input Parameters
  • LAI layer: (vector) Leaf area index layer

  • LAI: (float) Leaf area index constant value (m2/m2) (Default: 3.0)

  • Wind layer: (vector) Wind speed layer

  • Wind: (float) Wind speed (m s-1) (Default: 2.0)

  • Air temperature layer: (vector) Air temperature layer

  • Temperature: (float) Air temperature constant value (Default: 0.0)

  • Vegetation type:
    • Conifers
    • Crop
    • Deciduous low sun
    • Deciduous high sun
    • Grass
    • Snow
  • DOY: (string) Day of the year to calculate sun radiance (Format: DDMMYYYY:hhmm)

  • Latitude: (float) Approximate latitude of study area (degrees)

  • Pressure: (float) Barometric pressure (mb) (Default: 1013.25)

  • Precipitation: (float) Precipitation (mm) (Default: 0.0)

  • Humidity: (float) Relative humidity (%) (Default: 0.00)

  • Wind height: (float) Wind reference height (m) (Default: 0.0)

  • Temperature height: (float) Temperature reference height (m) (Default: 0.0)

  • Yield: (float) Quantum yield efficiency (mol C mol-1 PAR) (Default: 0.05)

  • Active radiation: (float) Fraction of photosynthetic active radiation (Default: 0.45)

  • Respiration: (float) Fraction of respiration (Default: 0.5)

  • Light extinction coefficient: (float) Light extinction coefficient (Default: 0.5)

  • Linke: (float) Atmospheric turbidity factor (Default: 3.0)

Output Parameters
  • Output: Output raster containing stomatal conductance estimation

References

[blonquist2009]Blonquist, J. M., Norman, J. M., Bugbee, B., 2009. Automated measurements of canopy stomatal conductance based on infrared temperature. Agricultural and Forest Meteorology, 149, 1931-1945

Known bugs

There are some known bugs related to SPDLib tools. Some of them have been reported in the SPDLib distribution list.
  1. spdmetrics tool crashes with particular values of metrics in Windows systems (thread)
  2. spdmetrics does not work with vector output (in both Linux and Windows systems, see thread)
  3. SPD files written on Windows are larger than they should be
  4. SPD files written on Linux cannot be read on Windows systems

Examples and tutorials

LiDAR tutorial

Convert File Formats - spdtranslate

The spdtranslate command is one of the key commands associated with SPDLib as it allows for the conversion between the various supported file formats, while it also supports coordinate system conversion.

The most common use of this tool is converting data to the SPD format. The simplest command for converting to UPD (SPD without a spatial index) is shown below, where the input and output file formats have been specified alongside the field used to attribute each pulse with a location, to be used if the pulses are later indexed (i.e., into an SPD file):

spdtranslate --if LAS --of UPD -x FIRST_RETURN -i QueenElisabeth_example.las -o QueenElisabeth_example.spd

If you wish to explicitly define the projection of the SPD file then use the --input_proj and --output_proj switch to specify a text file containing the OGC WKT string representing the projection. The following command provides an example where the coordinate system (UK Ordnance Survey national grid) has been specified when converting data to an SPD file:

spdtranslate --if LAS --of UPD -x FIRST_RETURN --input_proj ./OSGB1936.wkt -i QueenElisabeth_example.las -o QueenElisabeth_example.spd

While the following command converts from WGS84 to UK Ordnance Survey national grid while reading the file and converting to SPD:

spdtranslate --if LAS --of SPD --convert_proj --input_proj WGS84.wkt --output_proj OSGB1936.wkt -x FIRST_RETURN -b 1 -i QueenElisabeth_example.las -o QueenElisabeth_example.spd

To convert data to the SPD format, where the data is indexed onto a 10 m grid, the following command is the simplest form:

spdtranslate --if LAS --of SPD -x FIRST_RETURN -b 10 -i QueenElisabeth_example.las -o QueenElisabeth_example.spd

ThermoLiDAR plug-in supports SPD/UPD, LAS and a wide range of ASCII formats but the current level of support is not intended to encompass all the available formats. For more information about the formats SPDLib supports, please see the Supported File Formats section.

In the examples given above the whole input file is read into memory and sorted into the spatial grid before being written out to the file. This requires sufficient memory to store the whole dataset and index data structure in memory. If you do not have sufficient memory you can use the --temppath option, which tiles the data to disk while building the SPD file.

Format conversion can easily be done in QGIS with the Processing Toolbox ‣ [Processing] LiDAR ‣ Convert between formats tool:

_images/Module_spdtranslate1.png

Some options are set by default, such as the format of the input (LAS) and output (SPD) files and the location used to index pulses (FIRST_RETURN); the user has to set the other options manually. The only two required options are the input and output files. If the absolute path of the input file is known it can be written directly; otherwise it can be searched for by clicking the dotbutton button on the right, which opens another interface.

_images/Window_OpenFile_LAS.png

The output file path is specified in the same way by clicking the dotbutton button on the right of the output file field. In this case, the user searches for the path where the output file should be saved and writes the name of the file. Note that the file extension has to be set and should agree with the output format.

_images/Window_SaveFile_SPD.png

Once both files have been set, the graphical interface should look like this

_images/Module_spdtranslate_example.png

and it should be ready to be executed by clicking OK. To be sure it worked properly, the user can visualise the new spd file with the SPD Points Viewer or load it into QGIS by selecting Processing Toolbox ‣ [Processing] LiDAR ‣ Visualise SPD file:

_images/Module_spdimport.png

To load the spd file click on the dotbutton button and you will be asked to select a file.

_images/Window_OpenFile_SPD.png

Warning

Be sure to select the spd file, otherwise the visualization module will not work

Once the spd file has been loaded, QGIS will show the result in the canvas:

_images/QGIS_import.png

Classify Ground Returns

Progressive Morphology Filter - spdpmfgrd

The spdpmfgrd command is an implementation of the progressive morphological filter algorithm of [Zhang2003]. The algorithm works by generating an initial minimum-return raster surface at the bin resolution of the SPD file. Circular morphological operators are then applied over a range of scales: at each scale a morphological opening (erosion followed by dilation) operation is performed, and the new height value from the morphological operator is kept if it is above the elevation difference threshold, where the threshold is increased between scales using a slope value.
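A minimal sketch of the morphological opening at the heart of this filter, reduced to a 1-D row of minimum-return elevations with a flat window (the real filter works on a 2-D raster with circular elements of increasing scale; the helper names are ours):

```python
def erode(vals, w):
    """Grey-scale erosion: minimum within a window of half-width w."""
    n = len(vals)
    return [min(vals[max(0, i - w):min(n, i + w + 1)]) for i in range(n)]

def dilate(vals, w):
    """Grey-scale dilation: maximum within a window of half-width w."""
    n = len(vals)
    return [max(vals[max(0, i - w):min(n, i + w + 1)]) for i in range(n)]

def opening(vals, w):
    """Erosion followed by dilation: removes bumps narrower than the window
    (e.g. isolated above-ground returns) while keeping wider terrain."""
    return dilate(erode(vals, w), w)
```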

Under most circumstances the default parameters for the algorithm will be fit for purpose, but be careful that the bin size used within SPD is not too large as the processing will be at this resolution:

spdpmfgrd -i QueenElisabeth_example.spd -o QueenElisabeth_example_pmfgrd.spd

The QGIS interface of this tool (Processing Toolbox ‣ [Processing] LiDAR ‣ Classify points (Progressive Morphology algorithm)) has only three options that are required. Apart from the input and output files, you can select the class the filter will be applied to.

_images/Module_spdpmfgrd_example.png
Filter points depending on class

The --class option allows the filter to be applied to returns of a particular class (e.g., if ground returns are already classified but need tidying up). It is useful for TLS data, as you can take a thick slice with the MCC algorithm and then use the PMF algorithm to tidy that result up to get a good overall ground classification.

Multi-Curvature Classifier – spdmccgrd

The spdmccgrd command is an implementation of the multi-scale curvature algorithm [EvansHudak2007]. This algorithm was created at the US Forest Service and does a good job of classifying ground returns under a forest canopy while retaining the terrain, but it does not differentiate buildings. Under most circumstances the default parameters for the algorithm will be fit for purpose, and it is recommended that you try these first with the following command:

$ spdmccgrd -i QueenElisabeth_example.spd -o QueenElisabeth_example_mccgrd.spd

The QGIS interface of this tool (Processing Toolbox ‣ [Processing] LiDAR ‣ Classify points (Multiscale Curvature Algorithm)) has only three options that are required. Apart from the input and output files, you can select the class the filter will be applied to.

_images/Module_spdmccgrd_example1.png
Filter points depending on class

As in the spdpmfgrd case, the --class option allows the filter to be applied to returns of a particular class. It’s useful as it can take a thick slice with PMF algorithm and then use the MCC algorithm to tidy that result up to get a good overall ground classification.

Combining filters

Another option which can improve the ground return classification is to combine more than one filtering algorithm to take advantage of their particular strengths and weaknesses. A particularly useful combination is to first run the PMF algorithm where a ‘thick’ slice is taken (e.g., 1 or 2 metres above the raster surface) and then apply the MCC algorithm to find the ground returns (using the --class 3 option):

spdmccgrd --class 3 -i QueenElisabeth_example_pmfgrd.spd -o QueenElisabeth_example_mccgrd.spd
_images/Module_spdmccgrd_example2.png

Normalise Heights - spddefheight

The spddefheight command is used to define the height field within both the pulse and point fields of the SPD data file. This can be done in two ways, the simplest is the use of a DTM of the same resolution as the SPD file bin size. The disadvantage of using a DTM is that it is in effect using a series of spot heights and this can introduce artefacts. Therefore, interpolating a value for each point/pulse generates a continuous surface reducing any artefacts. The recommended approach is to use Natural Neighbour interpolation, as demonstrated in the paper of [BaterCoops2009].

Without Interpolation, using DTM

Using the DTM method, the only parameters are the input and output files. The raster DTM needs to be the same resolution as the SPD grid and can be in any raster format supported by the GDAL library.

Interpolation Mode

The interpolators are the same as those defined within the spdinterp command, so see the Interpolate DTM and DSM section for details on their use. The recommended command, using the natural neighbour interpolation algorithm along with the default parameters, is:

spddefheight --interp --in NATURAL_NEIGHBOR -i QueenElisabeth_example_mccgrd.spd -o QueenElisabeth_example_height.spd

Within QGIS just select Processing Toolbox ‣ [Processing] LiDAR ‣ Normalise heights:

_images/Module_spddefheight_example.png

Interpolate DTM and DSM

The most common products to be created from a LiDAR SPD dataset are the Digital Terrain Model (DTM), Digital Surface Model (DSM) and Canopy Height Model (CHM). To produce these products you need to interpolate a raster surface from the classified ground returns and top surface points. A key parameter is the resolution of the generated raster: within SPDLib the SPD bin size must be a whole-number multiple of the output raster resolution; for example, if the SPD file has a bin size of 10 m then the output raster resolution can be 1, 2 or 5 m but not 3 m.
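This divisibility constraint can be sketched as a simple check, consistent with the 10 m example above (the helper is illustrative, not part of SPDLib):

```python
def valid_resolutions(bin_size, candidates=range(1, 11)):
    """Output resolutions (m) that divide the SPD bin size a whole
    number of times, and are therefore accepted by the interpolator."""
    return [r for r in candidates if bin_size % r == 0]
```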

The same spdinterp module can generate DTMs, DSMs and CHMs by choosing the output model option and the type of point elevation. Interpolating topographic point elevations with the --dsm option generates a DSM as follows:

$ spdinterp --dsm --topo --in NATURAL_NEIGHBOR -f GTiff -b 1 -i QueenElisabeth_example.spd -o DSM.tif

The graphical interface of this tool is located in Processing Toolbox ‣ [Processing] LiDAR ‣ Create digital models:

_images/Module_spdinterp_DSM.png

The interpolation module can produce a DSM

The result can be visualised in QGIS:

_images/QGIS_dsm1.png

Visualisation of a DSM in QGIS

Note that importing the spd files generated so far would not show any result different from the one imported at the beginning of this tutorial. The DTM and CHM are generated with the following commands:

$ spdinterp --dtm --topo --in NATURAL_NEIGHBOR -f GTiff -b 1 -i QueenElisabeth_example_mccgrd.spd -o DTM.tif

$ spdinterp --chm --height --in NATURAL_NEIGHBOR -f GTiff -b 1 -i QueenElisabeth_example_mccgrd.spd -o CHM.tif
_images/Module_spdinterp_DTM.png

The interpolation module can produce a DTM

_images/QGIS_dtm1.png

Visualisation of a DTM in QGIS

_images/Module_spdinterp_CHM.png

The interpolation module can produce a CHM

_images/QGIS_chm1.png

Visualisation of a CHM in QGIS

Generate metrics

spdmetrics calculates metrics, which can be simple statistical moments, percentiles of the point height or return amplitude, or even count ratios. Mathematical operators can be applied to either other operators or metric primitives to allow a range of LiDAR metrics to be derived.

_images/Module_spdmetrics1.png

Multiple metrics can be calculated at the same time if they are listed within an XML file. This XML file has to be defined a priori with a hierarchical list of metrics and operators. Within the metrics tags a list of metrics can be provided via the metric tag. Within each metric the field attribute is used to name the raster band or vector attribute. Here is an example of an SPD metrics XML file template containing the maximum, the mean, the median, the number of pulses, the canopy cover and the 95th percentile.

_images/Module_spdmetrics_example.png

References

[Bunting2013b]Bunting, P., Armston, J., Clewley, D., Lucas, R. M., 2013. Sorted pulse data (SPD) library. Part II: A processing framework for LiDAR data from pulsed laser systems in terrestrial environments. Computers and Geosciences 56, 207 – 215.
[BaterCoops2009]Bater, C. W., Coops, N. C., 2009. Evaluating error associated with lidar-derived DEM interpolation. Computers and Geosciences 35 (2), pp. 289–300.
[EvansHudak2007]Evans, J. S., Hudak, A. T., 2007. A multiscale curvature algorithm for classifying discrete return lidar in forested environments. IEEE Transactions on Geoscience and Remote Sensing 45 (4), pp. 1029 – 1038.
[Zhang2003]Zhang, K., Chen, S., Whitman, D., Shyu, M., Yan, J., Zhang, C., 2003. A progressive morphological filter for removing nonground measurements from airborne LIDAR data. IEEE Transactions on Geoscience and Remote Sensing 41 (4), pp. 872 – 882.

Case Study 1 - Huelva (Spain)

Thermal Processing

Thermal Calibration

For calibration of the thermal image we have the following data:

  • Thermal image of the study area in Huelva; each pixel of the image represents the temperature value in kelvin. thld_training\huelva\Z1_holmOak\Temperature\t_huelvaZ1.tif
  • Set of calibration values stored in a vector file. Each item has the id and the collected temperature value (in kelvin). thld_training\huelva\Z1_holmOak\Temperature\t_aois\t_calibration.shp

We load both the image and the shapefile into the QGIS interface: the image via the main menu (Layer - Add Raster Layer ...) and the point shapefile via (Layer - Add Vector Layer ...).

_images/thermal_img_Huelva.png

Thermal image of Huelva and attribute table containing the field thermal data

We perform the calibration of the thermal image through the ThermoLiDAR toolbox tool [Processing] Thermal - Thermal Calibration.

_images/thermal_calibration.png

Thermal calibration tool

We introduce the values as shown in the interface:

_images/iu_thermal_calibration.png

Interface of the Thermal Calibration module

We obtain as output the calibrated thermal image, as shown in the following figure:

_images/thermal_calibration_out.png

Thermal calibration output

Tc - Ta

We proceed to subtract the air temperature from the calibrated temperature raster. We have the following information available.

  • Raster temperature calibrated in the previous section. thld_training\huelva\Z1_holmOak\out\thermal_processing\t_calibration.tif
  • Air temperature acquired at flight time. In our case we will use the temperature value 298.15 K

We compute the difference through the ThermoLiDAR toolbox tool [Processing] Thermal - Tc-Ta.

_images/tc_ta.png

Tc Ta module is located in the “Thermal” submenu of the THERMOLIDAR plugin toolbox

We introduce the input parameters in the user interface:

_images/iu_tc_ta.png

Interface of the Tc-Ta module

Obtaining as output file the following result:

_images/thermal_tcta_out.png

Output thermal image of Huelva

Data analysis

Health condition levels

We have the physiology data of Leaf Area Index (LAI) for 25 of the 216 trees inventoried in the area of Huelva, stored in the vector file: thld_training\huelva\Z1_holmOak\FieldData\LAI\lai_huelva.shp.

First, load the shape of polygons in QGIS (Layer - Add vector layer ...)

_images/hcl_lai_plots.png

LAI measurements in Huelva

Before proceeding with the classification of items by level of damage according to several variables taken in the field, we verify that the set of physiological variables follow a normal distribution. For this we use the Shapiro test, located in the toolbox [Analysis] Health Condition Level > Shapiro Test.

Warning

The first time you use these tools, you will need to start QGIS in administrator mode (R has to install the required dependencies)

Shapiro Test
_images/hcl_shapiro.png

The Shapiro Test is located in the “Health Condition Level” submenu of the THERMOLIDAR plugin toolbox

  • Input vector: Vector file that contains information on physiological data.
  • Var: Vector field to test for normality. In this case, the LAI parameter

We introduce the parameters as shown in the following figure:

_images/hcl_shapiro1.png

Interface of the “Shapiro Test” module

Obtaining the following output:

_images/hcl_shaphiro_out.png

In the example, the p-value is much higher than 0.05, so we conclude that the LAI data follow a normal distribution. If the p-value were less than 0.05, the data would have to be discarded or normalised.

If the variable does not follow a normal distribution, it is necessary to standardise it using [Analysis] Health Condition Level > Standarize.

Clustering

This tool allows us to group one or more physiological variables according to their degree of similarity between individuals in the sample. In this case we will group by the LAI variable, which we have previously verified follows a normal distribution. The tool creates as many damage-level groups as specified for the chosen physiological variable; here, we will select 3 levels as a function of the LAI variable. We can find the tool in [Analysis] Health Condition Level > Clustering
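To make the grouping idea concrete, here is a stand-in sketch that partitions a single variable (e.g. LAI) into k groups with a simple 1-D k-means; the module itself uses hierarchical clustering (it reports a dendrogram), so this is only an illustration of the concept, with our own function name:

```python
def kmeans_1d(values, k, iters=100):
    """Group a 1-D variable into k clusters (k >= 2) by Lloyd's algorithm.

    Returns (centres, groups): the final cluster centres and the sorted
    values assigned to each centre.
    """
    vals = sorted(values)
    # spread the initial centres across the observed range
    centres = [vals[int(i * (len(vals) - 1) / (k - 1))] for i in range(k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vals:
            groups[min(range(k), key=lambda j: abs(v - centres[j]))].append(v)
        new = [sum(g) / len(g) if g else centres[j]
               for j, g in enumerate(groups)]
        if new == centres:
            break
        centres = new
    return centres, groups
```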

_images/hcl_clustering.png

Clustering is located in the “Health Condition Level” submenu of the THERMOLIDAR plugin toolbox

We introduce the values as shown in the following figure:

_images/hcl_clustering1.png

The vector output file will be stored as thld_training\huelva\Z1_holmOak\out\hcl.shp and will be used subsequently. Each of the trees will be classified as a region of interest to guide the supervised classification of the thermal image.

We obtain the results in the dendrogram with the clustered data:

_images/hcl_clustering_out.png

LAI data grouped into 3 health condition categories, with the following results:

_images/hcl_clustering_out2.png

Finally, we must ensure that the groups are significantly different according to the variables used. This is done through the tool [Analysis] Health Condition Level > ANOVA.

ANOVA
_images/hcl_anova.png

ANOVA is located in the “Health Condition Level” submenu of the THERMOLIDAR plugin toolbox

Select as the factor the group to which each plot belongs, and as the dependent variable the one whose differences between groups we want to test.

_images/hcl_anova1.png

We get the following output:

_images/hcl_anova_out.png

If the critical level associated with the F statistic (i.e., the probability of obtaining values as large as or larger than the one obtained) is less than 0.05, we reject the hypothesis of equal means and conclude that not all the population means being compared are equal. Otherwise, we cannot reject the hypothesis of equality and we cannot claim that the groups being compared differ in their population means.

Structurally homogenous forest units

The analytical purpose of this tool is the definition of structurally homogeneous stands, which allow us to minimise the effects of structure on the thermal information and therefore obtain health outcomes related to the woodland condition.

We start from the metrics of the objects to be classified. This vector file was generated when obtaining the LiDAR metrics with [Processing] LiDAR > Calculate metrics.

  • thld_training\huelva\Z1_holmOak\out\metrics.shp

The user must enter the equation used to group the stands, for example the dominant height equation. In our case, we use the equation obtained from the 95th percentile.

The tool used in this section is located in [Analysis] Structurally homogeneous forest units

_images/shfu.png

SHFU is located in the “Structurally Homogeneous Forest Units” submenu of the THERMOLIDAR plugin toolbox

In this example, the polygons will be classified into 3 different groups of homogeneity (Cluster parameter).

The Cutoff parameter determines the accuracy of the fit. A value of 0 gives the highest precision, but a larger number of iterations will be needed to reach the final solution.

_images/shfu1.png

Interface of the “SHFU” module

We get as output:

_images/shfu_out.png

Now, we must assign the SHFU group to the objects (regions of interest) described in the Forest Health Level section.

Thus we will use the vector files:

  • thld_training\huelva\Z1_holmOak\out\hcl.shp
  • thld_training\huelva\Z1_holmOak\out\shfu.shp

We will use the QGIS tool Geoalgorithms > Vector general tools > Join attributes table

_images/shfu_joinAttr.png

We insert the parameters as shown in the following figure:

_images/shfu_joinAttr2.png

We will obtain a new output file (thld_training\huelva\Z1_holmOak\out\aois.shp) that will be used to guide the classification of the thermal image. The fields of interest are:

  • SHFU. Homogeneous group to which the object belongs.
  • HCL. Health condition level group.

Forest Health Classification

We proceed to perform the classification of the thermal image, based on the following information available:

  • thld_training\huelva\Z1_holmOak\Temperature\t_huelvaZ1.tif
  • thld_training\huelva\Z1_holmOak\out\shfu.shp

For supervised classification we will use health condition levels obtained in previous sections.

  • thld_training\huelva\Z1_holmOak\out\aois.shp
Unsupervised pixel-based classification

This tool classifies the temperature raster into as many classes as specified. The classification is performed per homogeneous unit, so as many output rasters are obtained as SHFUs have been defined.

In our case, we obtain three classified rasters. Each raster contains as many categories as temperature classes were defined.

_images/fhc_unsupervised_pixel.png

Interface of the “Unsupervised pixel-based classification” module

We get as output:

_images/fhc_unsupervised_pixel_out.png
_images/fhc_unsupervised_pixel_out2.png
_images/fhc_unsupervised_pixel_out3.png
_images/fhc_unsupervised_pixel_out4.png
Unsupervised object-based classification

In the same way as the pixel-based tool, this tool classifies the temperature raster into as many classes as specified. The classification is performed per homogeneous unit, so as many output rasters are obtained as SHFUs have been defined.

In our case, we obtain three classified rasters. Each raster contains as many categories as temperature classes were defined.

The difference from the previous tool is that objects, rather than pixels, are classified. The classification is based on the mean value of each object.

_images/fhc_unsupervised_object.png

Interface of the “Unsupervised object-based classification” module

We get as output:

_images/fhc_unsupervised_object_out.png
_images/fhc_unsupervised_object_out2.png
_images/fhc_unsupervised_object_out3.png
_images/fhc_unsupervised_object_out4.png
Supervised pixel-based classification

Through this tool the temperature raster will be classified based on the condition levels defined in the regions of interest (AOIs): thld_training\huelva\Z1_holmOak\out\aois.shp

The classification is performed per homogeneous unit, so as many output rasters are obtained as SHFUs have been defined.

In our case, we obtain three classified rasters. Each raster contains as many categories as condition levels were defined.

_images/fhc_supervised_pixel.png

Interface of the “Supervised pixel-based classification” module

We get as output:

_images/fhc_supervised_pixel_out.png
_images/fhc_supervised_pixel_out2.png
_images/fhc_supervised_pixel_out3.png
_images/fhc_supervised_pixel_out4.png
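
As a rough mental model of the supervised case, each pixel can be assigned the condition level whose AOI training temperatures have the nearest mean. This is a hedged sketch, not the plugin's actual classifier; the level names and all temperature values are invented.

```python
# Sketch of supervised pixel classification by nearest class mean.
# Training temperatures are sampled inside each condition level's AOI
# (values invented for illustration).

def train_class_means(training):
    """training: {condition_level: [temperatures inside that AOI]}"""
    return {level: sum(v) / len(v) for level, v in training.items()}

def classify(temps, means):
    """Assign each pixel temperature to the level with the nearest mean."""
    return [min(means, key=lambda lvl: abs(t - means[lvl])) for t in temps]

training = {"healthy": [292.0, 292.6], "damaged": [296.1, 296.9], "dead": [301.3, 302.1]}
means = train_class_means(training)
print(classify([292.4, 297.0, 300.8], means))
```
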
Supervised object-based classification

This tool classifies the temperature raster according to the condition levels defined in the regions of interest (AOIs):

  • thld_training\huelva\Z1_holmOak\out\aois.shp

The classification is performed per homogeneous unit, so as many output rasters are obtained as SHFUs have been defined.

In our case, we obtain three classified rasters, each with the specified number of temperature categories.

The difference from the previous tool is that objects, rather than individual pixels, are classified. The classification is based on the mean value of each object.

_images/fhc_supervised_object.png

Interface of the “Supervised object-based classification” module

We obtain the following output:

_images/fhc_supervised_object_out.png
_images/fhc_supervised_object_out2.png
_images/fhc_supervised_object_out3.png
_images/fhc_supervised_object_out4.png

Case Study 2 - Almería (Spain)

Thermal Processing

Thermal Calibration

First, we have a thermal image of the study area. Each digital value in the image represents a temperature in kelvin.

_images/thermal_img_Filabres.png

Thermal image of Sierra de los Filabres

Several field temperature measurements over invariant surfaces (black and white cloth targets) were collected, together with the GPS position of each sample.

_images/thermal_field_data.png

Attribute table of shapefile containing the field thermal data.

We introduce the input parameters in the user interface:

_images/iu_thermal_calibration1.png

Interface of the Thermal Calibration module

As a result the software generates a calibrated image of the study area.
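A common way to perform such a calibration (assumed here; the module's exact method is not described in this manual) is an ordinary least-squares line relating the raw digital values at the field sample positions to the measured target temperatures. All numbers below are invented.

```python
# Least-squares calibration sketch: fit T = gain * DN + offset from
# invariant-target samples, then apply the line to the raw values.

def linear_fit(x, y):
    """Return (gain, offset) of the least-squares line y = gain*x + offset."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    gain = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return gain, my - gain * mx

raw_dn = [1210.0, 1385.0, 1540.0]   # raw digital values at the targets (invented)
field_t = [290.1, 296.2, 301.6]     # measured temperatures in kelvin (invented)
gain, offset = linear_fit(raw_dn, field_t)
calibrated = [gain * dn + offset for dn in raw_dn]
```

In the real workflow the fitted line would be applied to every cell of the raw thermal raster, not just the sample positions.
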

Tc - Ta

We introduce the input parameters in the user interface:

  • Input layer: Calibrated thermal raster (generated in the previous section)
  • Air temperature: Constant air temperature measured at flight time. In this case, the air temperature is taken to be 293.15 K
_images/iu_tc_ta1.png

Interface of the Tc-Ta module

_images/thermal_img_Filabres.png

Output thermal image of Sierra de los Filabres
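The Tc - Ta product is a per-cell subtraction of the constant air temperature from the calibrated crown temperature; a minimal sketch using the 293.15 K value from this example (the nodata handling is an assumption):

```python
# Tc - Ta: subtract the constant air temperature (293.15 K at flight
# time in this example) from every valid cell of the calibrated raster.

AIR_T = 293.15  # kelvin, measured at flight time

def tc_minus_ta(calibrated, air_t=AIR_T, nodata=None):
    """Per-cell Tc - Ta; nodata cells are passed through unchanged."""
    return [t - air_t if t != nodata else nodata for t in calibrated]

deltas = tc_minus_ta([292.15, 293.15, 296.65])  # crowns cooler than, equal to, warmer than air
```
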

Data analysis

Health condition levels

In the following example, several physiological field variables (LAI) have been acquired for a number of control plots in Filabres.

_images/hcl_lai_plots1.png

LAI measurements in Filabres

Before proceeding with the classification of plots by damage level according to the variables measured in the field, we verify that the physiological variables follow a normal distribution. For this we use the Shapiro test, located in the toolbox under [Analysis] Health Condition Level > Shapiro Test.

Warning

The first time you use these tools, you will need to start QGIS in administrator mode (so that R can install the required dependencies).

Shapiro Test
  • Input vector: Vector file that contains information on physiological data.
  • Var: Field of the vector layer to test for normality. In this case, the field lai_LICOR2
_images/hcl_shapiro2.png

Interface of the “Shapiro Test” module

We obtain the following output:

_images/hcl_shaphiro_out1.png

In the example, the p-value is much higher than 0.05, so we conclude that the LAI data follow a normal distribution. If the p-value were less than 0.05, the data would have to be discarded or normalized.

If the variable does not follow a normal distribution, it must be standardized using [Analysis] Health Condition Level > Standarize.

Standarize

We have already verified that the lai_LICOR2 variable follows a normal distribution.

  • Input vector: Vector file that contains information on physiological data.
  • Var: Field of the shapefile to standardize. In this case, the field lai_LICOR2
_images/hcl_standarize.png
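
Standardization here presumably means the usual z-score transform (zero mean, unit standard deviation); a sketch with invented lai_LICOR2 values:

```python
# z-score standardization: subtract the mean, divide by the sample
# standard deviation. Input values are invented lai_LICOR2 readings.

def standardize(values):
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return [(v - mean) / sd for v in values]

z = standardize([1.8, 2.4, 3.1, 2.9, 2.2])
```
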
Clustering

This tool groups the individuals in the sample according to the similarity of one or more physiological variables. In this case we group by the variable lai_LICOR2, which we have previously verified follows a normal distribution. The tool creates as many damage-level groups as specified for the chosen physiological variable; here, we select 3 levels as a function of the LAI variable.

_images/hcl_clustering2.png

The LAI data are grouped into 3 health condition categories, with the following results:

_images/hcl_clustering_out1.png
_images/hcl_clustering_out21.png
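
Grouping a single variable into 3 levels can be done with a plain one-dimensional k-means; this is one plausible method, not necessarily the plugin's, and the LAI values are invented.

```python
# 1-D k-means sketch: group LAI values into k damage levels by
# iteratively re-assigning values to the nearest group mean.

def kmeans_1d(values, k, iters=50):
    step = max(1, len(values) // k)
    centers = sorted(values)[::step][:k]          # spread the initial centers
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
    return centers, labels

lai = [1.1, 1.3, 2.4, 2.6, 3.8, 4.0]              # invented plot LAI values
centers, labels = kmeans_1d(lai, 3)
```
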

Finally, we must ensure that the groups are significantly different according to the variables used. This is done with the tool [Analysis] Health Condition Level > ANOVA.

ANOVA

Select as the independent variable (factor) the group to which each plot belongs, and as the dependent variable the one we want to test for significant differences between the groups.

_images/hcl_anova2.png

We get the following output:

_images/hcl_anova_out1.png

If the critical level associated with the F statistic (i.e., the probability of obtaining values as extreme as or more extreme than the one observed) is less than 0.05, we reject the hypothesis of equal means and conclude that not all the population means being compared are equal. Otherwise, we cannot reject the hypothesis of equality and cannot claim that the groups being compared differ in their population means.
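
The decision rule can be made concrete by computing the F statistic by hand for three invented LAI groups; a large F (small p-value) means the equal-means hypothesis would be rejected. This is the standard one-way ANOVA formula, not code taken from the plugin.

```python
# One-way ANOVA F statistic: between-group mean square over
# within-group mean square. Groups below are invented LAI clusters.

def f_statistic(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Well-separated groups give a very large F, so the equal-means
# hypothesis would be rejected at the 0.05 level.
F = f_statistic([[1.1, 1.3], [2.4, 2.6], [3.8, 4.0]])
```
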

Structurally homogenous forest units

The purpose of this tool is the delineation of structurally homogeneous stands, which minimizes the effect of forest structure on the thermal information and therefore lets us relate the thermal response to the health status of the woodland.

The user must enter the equation by which the stands are to be grouped, for example the dominant height equation. In our case, we use the equation obtained from the 95th height percentile. In this example, the polygons are classified into 3 homogeneity groups.

_images/shfu2.png

Interface of the “SHFU” module

We obtain the following output:

_images/shfu_out1.png
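
As a toy version of the SHFU step: evaluate the grouping variable per stand (here the 95th height percentile, as in this example) and split the stands into 3 equal-width homogeneity groups. The percentile interpolation rule and the stand heights are assumptions for illustration.

```python
# SHFU sketch: compute the 95th height percentile of each stand and
# bin the stands into k equal-width homogeneity groups. Heights invented.

def percentile_95(heights):
    """Linearly interpolated 95th percentile."""
    s = sorted(heights)
    idx = 0.95 * (len(s) - 1)
    lo = int(idx)
    if lo + 1 >= len(s):
        return s[-1]
    return s[lo] + (idx - lo) * (s[lo + 1] - s[lo])

def group_stands(stand_heights, k=3):
    p95 = [percentile_95(h) for h in stand_heights]
    lo, hi = min(p95), max(p95)
    width = (hi - lo) / k or 1.0     # guard against identical stands
    return [min(int((p - lo) / width), k - 1) for p in p95]

stands = [[8, 9, 10, 11], [14, 15, 16, 17], [20, 21, 22, 24]]
groups = group_stands(stands)
```
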
Forest Health Classification
Unsupervised pixel-based classification

This tool classifies the temperature raster into as many classes as you specify. The classification is performed per structurally homogeneous forest unit (SHFU), so as many output rasters are obtained as SHFUs have been defined.

In our case, we obtain three classified rasters, each with the specified number of temperature categories.

_images/fhc_unsupervised_pixel1.png

Interface of the “Unsupervised pixel-based classification” module

We obtain the following output:

_images/fhc_unsupervised_pixel_out1.png
_images/fhc_unsupervised_pixel_out21.png
_images/fhc_unsupervised_pixel_out31.png
_images/fhc_unsupervised_pixel_out41.png
Unsupervised object-based classification

As with the pixel-based version, this tool classifies the temperature raster into as many classes as you specify. The classification is performed per homogeneous unit, so as many output rasters are obtained as SHFUs have been defined.

In our case, we obtain three classified rasters, each with the specified number of temperature categories.

The difference from the previous tool is that objects, rather than individual pixels, are classified. The classification is based on the mean value of each object.

_images/fhc_unsupervised_object1.png

Interface of the “Unsupervised object-based classification” module

We obtain the following output:

_images/fhc_unsupervised_object_out1.png
_images/fhc_unsupervised_object_out21.png
_images/fhc_unsupervised_object_out31.png
_images/fhc_unsupervised_object_out41.png

SPDLib tool help files

spdclearclass

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdclearclass  -o <String> -i <String>  [-c <unsigned int>] [-r <unsigned  int>] [--] [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Clear the classification of an SPD file: spdclearclass

spdcopy

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdcopy  -o <String> -i <String> [-c  <unsigned int>] [-r <unsigned int>]  [--] [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input file.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Makes a copy of an indexed SPD file: spdcopy

spddecomp

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spddecomp  -o <String> -i <String> [-a]  [-d <float>] [-w <uint_fast32_t>]  [-e <uint_fast32_t>] [-n] [-t  <uint_fast32_t>] [-c <unsigned  int>] [-r <unsigned int>] [--]  [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input file.
-a, --all Fit all Gaussian at once
-d <float>, --decay <float>
 Decay value for ignoring ringing artifacts (Default 5)
-w <uint_fast32_t>, --window <uint_fast32_t>
 Window for the values taken either side of the peak for fitting (Default 5)
-e <uint_fast32_t>, --decaythres <uint_fast32_t>
 Intensity threshold above which a decay function is used (Default 100)
-n, --noise Estimate noise. Only applicable when --all is set. Note an initial estimate is required for peak detection (see -t)
-t <uint_fast32_t>, --threshold <uint_fast32_t>
 Noise threshold below which peaks are ignored (Default: Value in pulse->waveNoiseThreshold)
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Decompose full waveform data to create discrete points: spddecomp

spddefheight

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spddefheight  {--interp|--image} -o  <String> [-e <String>] -i <String>  [--idxres <float>] [--thinres  <float>] [--ptsperbin  <uint_fast16_t>] [--thin]  [--tpsnopts <uint_fast16_t>]  [--tpsRadius <float>]  [--stdDevRadius <float>]  [--largeRadius <float>]  [--smallRadius <float>]  [--stddevThreshold <float>] [--in  <TIN_PLANE|NEAREST_NEIGHBOR|NATURAL_NEIGHBOR|STDEV_MULTISCALE|TPS_RAD|TPS_PTNO>] [-b <float>]  [--overlap <uint_fast16_t>] [-c  <unsigned int>] [-r <unsigned int>]  [--] [--version] [-h]

Where
--interp (OR required) Use interpolation of the ground returns to calculate ground elevation

-- or -- --image (OR required) Use an image which defines the ground elevation.

-o <String>, --output <String>
 (required) The output file.
-e <String>, --elevation <String>
 The input elevation image.
-i <String>, --input <String>
 (required) The input SPD file.
--idxres <float>
 Resolution of the grid index used for some interpolators
--thinres <float>
 Resolution of the grid used to thin the point cloud
--ptsperbin <uint_fast16_t>
 The number of points allowed within a grid cell following thinning
--thin Thin the point cloud when interpolating
--tpsnopts <uint_fast16_t>
 TPS: (TPS_RAD - minimum) Number of points to be used by TPS algorithm
--tpsRadius <float>
 TPS: (TPS_PTNO - maximum) Radius used to retrieve data in TPS algorithm
--stdDevRadius <float>
 STDEV_MULTISCALE: Radius used to calculate the standard deviation
--largeRadius <float>
 STDEV_MULTISCALE: Large radius to be used when standard deviation is low
--smallRadius <float>
 STDEV_MULTISCALE: Smaller radius to be used when standard deviation is high
--stddevThreshold <float>
 STDEV_MULTISCALE: Standard Deviation threshold
--in <TIN_PLANE|NEAREST_NEIGHBOR|NATURAL_NEIGHBOR|STDEV_MULTISCALE|TPS_RAD|TPS_PTNO>
 The interpolator to be used.
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
--overlap <uint_fast16_t>
 Size (in bins) of the overlap between processing blocks (Default 10)
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Define the height field within pulses and points: spddefheight

spddefrgb

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spddefrgb  {--define|--stretch} -o  <String> [--image <String>] -i  <String> [--independ] [--coef  <float>] [--blue <uint_fast16_t>]  [--green <uint_fast16_t>] [--red  <uint_fast16_t>] [-c <unsigned  int>] [-r <unsigned int>]  [--stddev] [--linear] [--]  [--version] [-h]

Where
--define (OR required) Define the RGB values on an SPD file from an input image.

-- or -- --stretch (OR required) Stretch existing RGB values to a range of 0 to 255.

-o <String>, --output <String>
 (required) The output SPD file.
--image <String>
 The input image file.
-i <String>, --input <String>
 (required) The input SPD file.
--independ Stretch the RGB values independently.
--coef <float> The coefficient for the standard deviation stretch (Default is 2)
--blue <uint_fast16_t>
 Image band for blue channel
--green <uint_fast16_t>
 Image band for green channel
--red <uint_fast16_t>
 Image band for red channel
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--stddev Use a linear 2 standard deviation stretch.
--linear Use a linear stretch between the min and max values.
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Define the RGB values on the SPDFile: spddefrgb

spddeftiles

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spddeftiles  {-t|-e} [-i <String>] [-o  <String>] [--ymax <double>] [--xmax  <double>] [--ymin <double>] [--xmin  <double>] [--overlap <double>]  [--ysize <double>] [--xsize  <double>] [--] [--version] [-h]

Where
-t, --tiles (OR required) Define a set of tiles for a region.

-- or -- -e, --extent (OR required) Calculate the extent of a set of files.

-i <String>, --input <String>
 Input file listing the set of input files (--extent).
-o <String>, --output <String>
 Output XML file defining the tiles (--tiles).
--ymax <double>
 Y max (in units of coordinate systems) of the region to be tiled (--tiles).
--xmax <double>
 X max (in units of coordinate systems) of the region to be tiled (--tiles).
--ymin <double>
 Y min (in units of coordinate systems) of the region to be tiled (--tiles).
--xmin <double>
 X min (in units of coordinate systems) of the region to be tiled (--tiles).
--overlap <double>
 Size (in units of coordinate systems) of the overlap for tiles (Default 100) (--tiles).
--ysize <double>
 Y size (in units of coordinate systems) of the tiles (Default 1000) (--tiles).
--xsize <double>
 X size (in units of coordinate systems) of the tiles (Default 1000) (--tiles).
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Tools for defining a set of tiles: spddeftiles

spdelevation

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdelevation  {--constant <double>|--variable <string>} {--add|--minus} -o <String> -i <String>  [-c <unsigned int>] [-r <unsigned  int>] [--] [--version] [-h]

Where
--constant <double>
 (OR required) Alter pulse elevation by a constant amount

-- or -- --variable <string> (OR required) Alter pulse elevation by a variable amount defined using an image

--add (OR required) Add offset

-- or -- --minus (OR required) Remove offset

-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Alter the elevation of the pulses: spdelevation

spdextract

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdextract  -o <String> -i <String>  [--max] [--min] [--return <ALL|FIRST|LAST|NOTFIRST|FIRSTLAST>]  [--class <unsigned int>] [-b  <float>] [-c <unsigned int>] [-r  <unsigned int>] [--] [--version]  [-h]

Where
-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
--max Extract only the maximum returns (within the bin and therefore only available for SPD file, not UPD).
--min Extract only the minimum returns (within the bin and therefore only available for SPD file, not UPD).
--return <ALL|FIRST|LAST|NOTFIRST|FIRSTLAST>
 The return(s) of interest
--class <unsigned int>
 Class of interest (Ground == 3; Not Ground == 104)
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Extract returns and pulses which meet a set of criteria: spdextract

spdgrdtidy

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdgrdtidy  {--negheights} -o <String> -i  <String> [-c <unsigned int>] [-r  <unsigned int>] [--] [--version]  [-h]

Where
--negheights (OR required) Classify negative height as ground
-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Attempt to tidy up the ground return classification: spdgrdtidy

spdinfo

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdinfo  [--] [--version] [-h] <string>  ...

Where
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.
<string>
(accepted multiple times) Input file

NAME

Print header info for an SPD File: spdinfo

spdinterp

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdinterp  {--dtm|--chm|--dsm|--amp}  {--topo|--height|--other} -o  <String> -i <String> [--rbflayers  <unsigned int>] [--rbfradius  <double>] [--idxres <float>]  [--thinres <float>] [--ptsperbin  <uint_fast16_t>] [--thin]  [--tpsnopts <uint_fast16_t>]  [--tpsRadius <float>]  [--stdDevRadius <float>]  [--largeRadius <float>]  [--smallRadius <float>]  [--stddevThreshold <float>] [--in  <TIN_PLANE|NEAREST_NEIGHBOR|NATURAL_NEIGHBOR|STDEV_MULTISCALE|TPS_RAD|TPS_PTNO|RBF>] [-f  <string>] [-b <float>] [--overlap  <uint_fast16_t>] [-c <unsigned  int>] [-r <unsigned int>] [--]  [--version] [-h]

Where
--dtm (OR required) Interpolate a DTM image

-- or -- --chm (OR required) Interpolate a CHM image.

-- or -- --dsm (OR required) Interpolate a DSM image.

-- or -- --amp (OR required) Interpolate an amplitude image.

--topo (OR required) Use topographic elevation

-- or -- --height (OR required) Use height above ground elevation.

-- or -- --other (OR required) Interpolator is not using height.

-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
--rbflayers <unsigned int>
 The number of layers used within the RBF interpolator
--rbfradius <double>
 The radius used within the RBF interpolator
--idxres <float>
 Resolution of the grid index used for some interpolators
--thinres <float>
 Resolution of the grid used to thin the point cloud
--ptsperbin <uint_fast16_t>
 The number of points allowed within a grid cell following thinning
--thin Thin the point cloud when interpolating
--tpsnopts <uint_fast16_t>
 TPS: (TPS_RAD - minimum) Number of points to be used by TPS algorithm
--tpsRadius <float>
 TPS: (TPS_PTNO - maximum) Radius used to retrieve data in TPS algorithm
--stdDevRadius <float>
 STDEV_MULTISCALE: Radius used to calculate the standard deviation
--largeRadius <float>
 STDEV_MULTISCALE: Large radius to be used when standard deviation is low
--smallRadius <float>
 STDEV_MULTISCALE: Smaller radius to be used when standard deviation is high
--stddevThreshold <float>
 STDEV_MULTISCALE: Standard Deviation threshold
--in <TIN_PLANE|NEAREST_NEIGHBOR|NATURAL_NEIGHBOR|STDEV_MULTISCALE|TPS_RAD|TPS_PTNO|RBF>
 The interpolator to be used.
-f <string>, --format <string>
 Image format (GDAL driver string), Default is ENVI.
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
--overlap <uint_fast16_t>
 Size (in bins) of the overlap between processing blocks (Default 10)
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Interpolate a raster elevation surface: spdinterp

spdlastest

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdlastest  {-p|-c|-f} -i <String> [-n  <unsigned int>] [-s <unsigned int>]  [--] [--version] [-h]

Where
-p, --print (OR required) Print a selection of pulses from LAS file.

-- or -- -c, --count (OR required) Count the number of pulses in LAS file.

-- or -- -f, --notfirst (OR required) Print the returns which start a pulse with point IDs greater than 1

-i <String>, --input <String>
 (required) The input SPD file.
-n <unsigned int>, --number <unsigned int>
 Number of pulses to be printed out (Default 10)
-s <unsigned int>, --start <unsigned int>
 Starting pulse index (Default 0)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Print data pulses from a LAS file - for debugging: spdlastest

spdmaskgen

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdmaskgen  -o <String> -i <String> [-f  <string>] [-b <float>] [-c  <unsigned int>] [-r <unsigned int>]  [-p <unsigned int>] [--]  [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
-f <string>, --format <string>
 Image format (GDAL driver string), Default is ENVI.
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
-p <unsigned int>, --numpulses <unsigned int>
 Number of pulses for a bin to be included in mask (Default 1)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Generate a binary mask for an input SPD file: spdmaskgen

spdmccgrd

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdmccgrd  -o <String> -i <String>  [--thresofchangemultireturn]  [--class <uint_fast16_t>]  [--median] [--thresofchange  <float>] [--filtersize  <uint_fast16_t>] [--interpnumpts  <uint_fast16_t>] [--interpmaxradius  <float>] [--stepcurvetol <float>]  [--mincurvetol <float>]  [--initcurvetol <float>]  [--scalegaps <float>]  [--numofscalesbelow  <uint_fast16_t>]  [--numofscalesabove  <uint_fast16_t>] [--initscale  <float>] [--overlap  <uint_fast16_t>] [-b <float>] [-c  <unsigned int>] [-r <unsigned int>]  [--] [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
--thresofchangemultireturn
 Use only multiple return pulses to calculate the amount of change between iterations.
--class <uint_fast16_t>
 Only use points of particular class
--median Use a median filter to smooth the generated raster instead of a (mean) averaging filter.
--thresofchange <float>
 The threshold for the amount of change between iterations (Default = 0.1)
--filtersize <uint_fast16_t>
 The size of the smoothing filter (half size i.e., 3x3 is 1; Default = 1).
--interpnumpts <uint_fast16_t>
 The number of points used for the TPS interpolation (Default = 16)
--interpmaxradius <float>
 Maximum search radius for the TPS interpolation (Default = 20)
--stepcurvetol <float>
 Iteration step curvature tolerance parameter (Default = 0.5)
--mincurvetol <float>
 Minimum curvature tolerance parameter (Default = 0.1)
--initcurvetol <float>
 Initial curvature tolerance parameter (Default = 1)
--scalegaps <float>
 Gap between increments in scale (Default = 0.5)
--numofscalesbelow <uint_fast16_t>
 The number of scales below the init scale to be used (Default = 1)
--numofscalesabove <uint_fast16_t>
 The number of scales above the init scale to be used (Default = 1)
--initscale <float>
 Initial processing scale, this is usually the native resolution of the data.
--overlap <uint_fast16_t>
 Size (in bins) of the overlap between processing blocks (Default 10)
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Classifies the ground returns using the multiscale curvature algorithm: spdmccgrd

spdmerge

SPDLib 3.1.0, Copyright (C) 2013 Sorted Pulse Library (SPD) This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; See website (http://www.spdlib.org). Bugs are to be reported on the trac or directly to spdlib-develop@lists.sourceforge.net

USAGE

spdmerge  -o <String> [--keepextent] [-s  <std::string>] [--classes  <uint_fast16_t>] ...  [--returnIDs  <uint_fast16_t>] ...  [--source]  [--ignorechecks] [-c] [-r  <std::string>] [-p <std::string>]  [--wavebitres <8BIT|16BIT|32BIT>]  [-x <FIRST_RETURN|LAST_RETURN|START_WAVEFORM|END_WAVEFORM|ORIGIN|MAX_INTENSITY|UNCHANGED>] -f <SPD|ASCIIPULSEROW|ASCII|FWF_DAT|DECOMPOSED_DAT|LAS|LASNP|LASSTRICT|DECOMPOSED_COO|ASCIIMULTILINE>  [--] [--version] [-h] <std::string>  ...

Where
-o <String>, --output <String>
 (required) The output SPD file.
--keepextent When indexing the file use the extent of the input file as the minimum extent of the output file.
-s <std::string>, --schema <std::string>
 A schema for the format of the file being imported (Note, most importers do not require a schema)
--classes <uint_fast16_t>
 (accepted multiple times) Lists the classes for the files listed.
--returnIDs <uint_fast16_t>
 (accepted multiple times) Lists the return IDs for the files listed.
--source Set source ID for each input file
--ignorechecks Ignore checks between input files to ensure compatibility
-c, --convert_proj
 Convert the projection of the input data to the output projection
-r <std::string>, --output_proj <std::string>
 WKT std::string representing the projection of the output file
-p <std::string>, --input_proj <std::string>
 WKT std::string representing the projection of the input file
--wavebitres <8BIT|16BIT|32BIT>
 The bit resolution used for storing the waveform data (Default: 32BIT)
-x <FIRST_RETURN|LAST_RETURN|START_WAVEFORM|END_WAVEFORM|ORIGIN|MAX_INTENSITY|UNCHANGED>, --indexfield <FIRST_RETURN|LAST_RETURN|START_WAVEFORM|END_WAVEFORM|ORIGIN|MAX_INTENSITY|UNCHANGED>
 The location used to index the pulses
-f <SPD|ASCIIPULSEROW|ASCII|FWF_DAT|DECOMPOSED_DAT|LAS|LASNP|LASSTRICT|DECOMPOSED_COO|ASCIIMULTILINE>, --inputformat <SPD|ASCIIPULSEROW|ASCII|FWF_DAT|DECOMPOSED_DAT|LAS|LASNP|LASSTRICT|DECOMPOSED_COO|ASCIIMULTILINE>
 (required) Format of the input file
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

<std::string> (accepted multiple times) (required) The list of input files

NAME

Merge compatible files into a single non-indexed SPD file: spdmerge
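As a minimal sketch, a typical invocation merges two flight strips into one SPD file; the file names are hypothetical, and the flags (-f for the input format, -o for the output, input files listed positionally) are as documented above:

```shell
# Merge two LAS flight strips into a single non-indexed SPD file.
# File names are illustrative only.
spdmerge -f LAS -o merged.spd strip1.las strip2.las
```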

spdmetrics


USAGE

spdmetrics  {--image|--vector|--ascii}  [-v <String>] -m <String> -o  <String> -i <String> [-f <string>]  [-b <float>] [-c <unsigned int>]  [-r <unsigned int>] [--]  [--version] [-h]

Where
--image (OR required) Run metrics with image output

-- or -- --vector (OR required) Run metrics with vector output

-- or -- --ascii (OR required) Run metrics with ASCII output

-v <String>, --vectorfile <String>
 The input vector file.
-m <String>, --metricsxml <String>
 (required) The input XML file defining the metrics to be calculated.
-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input SPD file.
-f <string>, --format <string>
 Image format (GDAL driver string), Default is ENVI.
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Calculate metrics : spdmetrics
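For instance, a hypothetical invocation (file names are illustrative, and it is assumed here that -m names the XML file of metric definitions) writing a metrics image with 10 m bins as a GeoTIFF:

```shell
# Compute the metrics defined in metrics.xml over input.spd and
# write the result as a GeoTIFF image with a 10 m bin size.
spdmetrics --image -m metrics.xml -i input.spd -o metrics.tif -f GTiff -b 10
```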

spdoverlap


USAGE

spdoverlap  {-c|-s} [-o <String>] [--]  [--version] [-h] <string> ...

Where
-c, --cartesian
 (OR required) Find cartesian overlap.

-- or -- -s, --spherical (OR required) Find spherical overlap.

-o <String>, --output <String>
 The output file.
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.
<string>
(accepted multiple times) (required) File names for the output (if required) and input files

NAME

Calculate the overlap between UPD and SPD files: spdoverlap

spdpffgrd


USAGE

spdpffgrd  -o <String> -i <String>  [--gdal <string>] [--class  <uint_fast16_t>] [--morphmin]  [--image] [-m <uint_fast16_t>] [-f  <uint_fast32_t>] [--tophatscales  <bool>] [-t <uint_fast32_t>] [-s  <uint_fast32_t>] [-k  <uint_fast32_t>] [--grd <float>]  [--overlap <uint_fast16_t>] [-b  <float>] [-c <unsigned int>] [-r  <unsigned int>] [--] [--version]  [-h]

Where
-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input SPD file.
--gdal <string>
 Provide the GDAL driver format (Default ENVI), Erdas Imagine is HFA, KEA is KEA
--class <uint_fast16_t>
 Only use points of particular class
--morphmin Apply morphological opening and closing to remove multiple path returns (note this can remove real ground returns).
--image If set an image of the output surface will be generated rather than classifying the points (useful for debugging and parameter selection)
-m <uint_fast16_t>, --mpd <uint_fast16_t>
 Minimum point density in block to use for surface estimation - default 40
-f <uint_fast32_t>, --tophatfactor <uint_fast32_t>
 How quickly the tophat window reduces through the resolution, higher numbers reduce size quicker - default 2
--tophatscales <bool>
 Whether the tophat window size decreases through the resolutions - default true
-t <uint_fast32_t>, --tophatstart <uint_fast32_t>
 Starting window size (actually second, first is always 1) for tophat transforms, must be >= 2, setting this too big can cause segfault! - default 4
-s <uint_fast32_t>, --stddev <uint_fast32_t>
 Number of standard deviations used in classification threshold - default 3
-k <uint_fast32_t>, --kvalue <uint_fast32_t>
 Number of stddevs used for control point filtering - default 3
--grd <float> Threshold for deviation from identified ground surface for classifying the ground returns (Default 0.3)
--overlap <uint_fast16_t>
 Size (in bins) of the overlap between processing blocks (Default 10)
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Classifies the ground returns using a parameter-free filtering algorithm: spdpffgrd

spdpmfgrd


USAGE

spdpmfgrd  -o <String> -i <String>  [--gdal <string>] [--class  <uint_fast16_t>] [--image]  [--medianfilter <uint_fast16_t>]  [--nomedian] [--grd <float>]  [--maxelev <float>] [--initelev  <float>] [--slope <float>]  [--maxfilter <uint_fast16_t>]  [--initfilter <uint_fast16_t>]  [--overlap <uint_fast16_t>] [-b  <float>] [-c <unsigned int>] [-r  <unsigned int>] [--] [--version]  [-h]

Where
-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input SPD file.
--gdal <string>
 Provide the GDAL driver format (Default ENVI), Erdas Imagine is HFA, KEA is KEA
--class <uint_fast16_t>
 Only use points of particular class
--image If set an image of the output surface will be generated rather than classifying the points (useful for debugging and parameter selection)
--medianfilter <uint_fast16_t>
 Size of the median filter (half size i.e., 3x3 is 1) (Default 2)
--nomedian Do not run a median filter on generated surface (before classifying ground point or export)
--grd <float> Threshold for deviation from identified ground surface for classifying the ground returns (Default 0.3)
--maxelev <float>
 Maximum elevation difference threshold (Default 5)
--initelev <float>
 Initial elevation difference threshold (Default 0.3)
--slope <float>
 Slope parameter related to terrain (Default 0.3)
--maxfilter <uint_fast16_t>
 Maximum size of the filter (Default 7)
--initfilter <uint_fast16_t>
 Initial size of the filter (note this is half the filter size so a 3x3 will be 1 and 5x5 will be 2) (Default 1)
--overlap <uint_fast16_t>
 Size (in bins) of the overlap between processing blocks (Default 10)
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Classifies the ground returns using the progressive morphology algorithm: spdpmfgrd
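A minimal sketch of a ground classification run, with the documented defaults written out explicitly so they are easy to tune; file names are hypothetical:

```shell
# Classify ground returns with the progressive morphology filter.
# The threshold values shown are the documented defaults.
spdpmfgrd -i input.spd -o ground.spd --grd 0.3 --initelev 0.3 --slope 0.3 --maxfilter 7
```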

spdpolygrd


USAGE

spdpolygrd  {--global|--local} -o  <String> -i <String> [--class  <uint_fast16_t>] [--iters <int>]  [--degree <int>] [--grdthres  <float>] [-b <float>] [--overlap  <uint_fast16_t>] [-c <unsigned  int>] [-r <unsigned int>] [--]  [--version] [-h]

Where
--global (OR required) Classify negative height as ground

-- or -- --local (OR required) Remove falsely classified ground returns using plane fitting

-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input SPD file.
--class <uint_fast16_t>
 Only use points of particular class (Ground is class == 3, Default is All classes)
--iters <int> Number of iterations for polynomial surface to converge on ground (Default = 2).
--degree <int> Order of polynomial surface (Default = 1).
--grdthres <float>
 Threshold for how far above the interpolated ground surface a return can be and be reclassified as ground (Default = 0.25).
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
--overlap <uint_fast16_t>
 Size (in bins) of the overlap between processing blocks (Default 10)
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Classify ground returns using a surface fitting algorithm: spdpolygrd

spdprofile


USAGE

spdprofile  -o <String> -i <String> [-f  <std::string>] [-b <float>] [-c  <unsigned int>] [-r <unsigned int>]  [-n <unsigned int>] [-t <unsigned  int>] [-m <float>] [-w <unsigned  int>] [--order <unsigned int>]  [--smooth] [--] [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input SPD file.
-f <std::string>, --format <std::string>
 Image format (GDAL driver string), Default is ENVI.
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
-n <unsigned int>, --numbins <unsigned int>
 The number of bins within the profile (Default: 20).
-t <unsigned int>, --topheight <unsigned int>
 The highest bin of the profile (Default: 40).
-m <float>, --minheight <float>
 The height below which points are ignored (Default: 0).
-w <unsigned int>, --window <unsigned int>
 The window size ((w*2)+1) used for the smoothing filter (Default: 3).
--order <unsigned int>
 The order of the polynomial used to smooth the profile (Default: 3).
--smooth Apply Savitzky-Golay smoothing to the profiles.
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Generate vertical profiles: spdprofile

spdproj


USAGE

spdproj  {--proj4 <string>|--proj4pretty  <string>|--image <string>|--imagepretty <string>|--spd  <string>|--spdpretty <string>|--epsg <int>|--epsgpretty <int>|--shp <string>|--shppretty  <string>} [--] [--version] [-h]

Where
--proj4 <string>
 (OR required) Enter a proj4 string (to print WKT)

-- or -- --proj4pretty <string> (OR required) Enter a proj4 string (to print pretty WKT)

-- or -- --image <string> (OR required) Print the WKT string associated with the input image.

-- or -- --imagepretty <string> (OR required) Print the pretty WKT string associated with the input image.

-- or -- --spd <string> (OR required) Print the WKT string associated with the input SPD file.

-- or -- --spdpretty <string> (OR required) Print the pretty WKT string associated with the input SPD file.

-- or -- --epsg <int> (OR required) Print the WKT string associated with the EPSG code provided.

-- or -- --epsgpretty <int> (OR required) Print the pretty WKT string associated with the EPSG code provided.

-- or -- --shp <string> (OR required) Print the WKT string associated with the input ESRI shapefile.

-- or -- --shppretty <string> (OR required) Print the pretty WKT string associated with the input ESRI shapefile.

--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Print and convert projection strings: spdproj

spdrmnoise


USAGE

spdrmnoise  -o <String> -i <String>  [--grellow <float>] [--grelup  <float>] [--rellow <float>]  [--relup <float>] [--abslow  <float>] [--absup <float>] [-c  <unsigned int>] [-r <unsigned int>]  [--] [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
--grellow <float>
 Global relative (to median) lower threshold for returns which are to be removed.
--grelup <float>
 Global relative (to median) upper threshold for returns which are to be removed.
--rellow <float>
 Relative (to median) lower threshold for returns which are to be removed.
--relup <float>
 Relative (to median) upper threshold for returns which are to be removed.
--abslow <float>
 Absolute lower threshold for returns which are to be removed.
--absup <float>
 Absolute upper threshold for returns which are to be removed.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Remove vertical noise from LiDAR datasets: spdrmnoise

spdstats


USAGE

spdstats  {--image|--overall} -o <String>  -i <String> [-f <string>] [-b  <float>] [-c <unsigned int>] [-r  <unsigned int>] [--] [--version]  [-h]

Where
--image (OR required) Create a point / pulses density statistics image

-- or -- --overall (OR required) Create overall statistics for point / pulse density

-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input SPD file.
-f <string>, --format <string>
 Image format (GDAL driver string), Default is ENVI.
-b <float>, --binsize <float>
 Bin size for processing and output image (Default 0) - Note 0 will use the native SPD file bin size.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Provides statistics on the point and pulse density of an SPD file: spdstats

spdsubset


USAGE

spdsubset  -o <String> -i <String> [--num  <uint_fast32_t>] [--start  <uint_fast32_t>] [--shpfile  <string>] [--ignorez]  [--ignorerange] [--txtfile  <string>] [--spherical] [--height]  [--ranmax <double>] [--ranmin  <double>] [--zenmax <double>]  [--zenmin <double>] [--azmax  <double>] [--azmin <double>]  [--hmax <double>] [--hmin <double>]  [--zmax <double>] [--zmin <double>]  [--ymax <double>] [--ymin <double>]  [--xmax <double>] [--xmin <double>]  [--] [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
--num <uint_fast32_t>
 Number of pulses to be exported
--start <uint_fast32_t>
 First pulse in the block
--shpfile <string>
 A shapefile to which the dataset should be subset
--ignorez Specifies that Z should be ignored when subsetting using a text file.
--ignorerange Specifies that range should be ignored when subsetting using a text file.
--txtfile <string>
 A text file containing the extent to which the file should be cut.
--spherical Subset a spherically indexed SPD file.
--height Threshold the height of each pulse (currently only valid with SPD to SPD subsetting)
--ranmax <double>
 Maximum range threshold
--ranmin <double>
 Minimum range threshold
--zenmax <double>
 Maximum zenith threshold
--zenmin <double>
 Minimum zenith threshold
--azmax <double>
 Maximum azimuth threshold
--azmin <double>
 Minimum azimuth threshold
--hmax <double>
 Maximum Height threshold
--hmin <double>
 Minimum Height threshold
--zmax <double>
 Maximum Z threshold
--zmin <double>
 Minimum Z threshold
--ymax <double>
 Maximum Y threshold
--ymin <double>
 Minimum Y threshold
--xmax <double>
 Maximum X threshold
--xmin <double>
 Minimum X threshold
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Subset point cloud data: spdsubset
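As an illustrative sketch, cutting a 1 km x 1 km tile out of an SPD file by Cartesian extent; the coordinates and file names are hypothetical:

```shell
# Subset input.spd to a 1 km x 1 km window defined by projected
# X/Y bounds (values are placeholders for real map coordinates).
spdsubset -i input.spd -o tile.spd --xmin 400000 --xmax 401000 --ymin 4500000 --ymax 4501000
```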

spdtest

spdthin


USAGE

spdthin  -o <String> -i <String> [-n  <unsigned int>] [-c <unsigned int>]  [-r <unsigned int>] [--]  [--version] [-h]

Where
-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
-n <unsigned int>, --numpulses <unsigned int>
 Number of pulses within the bin (Default 1).
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequential SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Thin a point cloud to a defined bin spacing: spdthin

spdtileimg


USAGE

spdtileimg  {-m|-c} [-i <String>] [-t  <String>] [-o <String>] [-w  <String>] [-b <double>] [-r  <double>] [-f <String>] [--]  [--version] [-h]

Where
-m, --mosaic (OR required) Mosaic the images (within the input list) together.

-- or -- -c, --clump (OR required) Create a clumps image specifying the location of the tiles.

-i <String>, --input <String>
 The text file with a list of input files.
-t <String>, --tiles <String>
 The input XML file defining the tiles.
-o <String>, --output <String>
 The output image.
-w <String>, --wkt <String>
 A file containing the WKT string representing the projection (--clump only).
-b <double>, --background <double>
 The output image background value (--mosaic only).
-r <double>, --resolution <double>
 The output image pixel size (--clump only).
-f <String>, --format <String>
 The output image format.
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Tools for mosaicing raster results following tiling: spdtileimg

spdtiling


USAGE

spdtiling  {--all|--extract|--extractcore|--tilespdfile|--builddirs|--rmdirs|--upxml|--xml2shp} [--deleteshp]  [-u] [-d] [--useprefix]  [--usedirstruct] [-c <Unsigned  int>] [-r <Unsigned int>] [--wkt  <String>] [-i <String>] [-o  <String>] -t <String> [--]  [--version] [-h]

Where
--all (OR required) Create all tiles.

-- or -- --extract (OR required) Extract an individual tile as specified in the XML file.

-- or -- --extractcore (OR required) Extract the core of a tile as specified in the XML file.

-- or -- --tilespdfile (OR required) Tile an input SPD file into the tiles specified in the XML file.

-- or -- --builddirs (OR required) Creates a directory structure (rowXX/colXX/tiles) for the tiles specified in the XML file.

-- or -- --rmdirs (OR required) Removes the directories within the structure (rowXX/colXX/tiles) which do not have any tiles.

-- or -- --upxml (OR required) Using a list of input files, this option updates the XML file to only contain tiles which exist.

-- or -- --xml2shp (OR required) Create a polygon shapefile for the tiles within the XML file.

--deleteshp If shapefile exists delete it and then run.
-u, --updatexml
 Update the tiles XML file.
-d, --deltiles Remove tiles which have no data.
--useprefix Use a prefix of the input file name within the output tile name (Only available for --tilespdfile).
--usedirstruct Use the prebuilt directory structure in the output base path for outputs (Only available for --tilespdfile).
-c <Unsigned int>, --col <Unsigned int>
 The column of the tile to be extracted (--extract).
-r <Unsigned int>, --row <Unsigned int>
 The row of the tile to be extracted (--extract).
--wkt <String> A WKT file with the projection of the output shapefile (--xml2shp requires a WKT file to be provided).
-i <String>, --input <String>
 A text file with a list of input files, one per line (--extractcore and --tilespdfile expect a single input SPD file; --xml2shp does not require an input).
-o <String>, --output <String>
 The base path for the tiles (--extractcore expects a single output SPD file; --xml2shp expects a single shapefile; --upxml does not have an output).
-t <String>, --tiles <String>
 (required) XML file defining the tile regions
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Tools for tiling a set of SPD files using predefined tile areas: spdtiling

spdtranslate


USAGE

spdtranslate  -o <String> -i <String>  [--keepextent] [--pulseversion  <unsigned int>] [--pointversion  <unsigned int>] [--wavenoise  <float>] [--Oz <float>] [--Oy  <double>] [--Ox <double>]  [--defineOrigin] [--tly <double>]  [--tlx <double>] [--defineTL]  [--keeptemp] [--convert_proj]  [--output_proj <string>]  [--input_proj <string>] [-b  <float>] [-c <unsigned int>] [-r  <unsigned int>] [-s <string>] [-t  <string>] [--scan] [--polar]  [--spherical] [--wavebitres <8BIT|16BIT|32BIT>] [-x <FIRST_RETURN|LAST_RETURN|START_WAVEFORM|END_WAVEFORM|ORIGIN|MAX_INTENSITY|UNCHANGED>] --of <SPD|UPD|ASCII|LAS|LAZ> --if <SPD|ASCIIPULSEROW|ASCII|FWF_DAT|DECOMPOSED_DAT|LAS|LASNP|LASSTRICT|DECOMPOSED_COO|ASCIIMULTILINE> [--] [--version]  [-h]

Where
-o <String>, --output <String>
 (required) The output file.
-i <String>, --input <String>
 (required) The input file.
--keepextent When indexing the file use the extent of the input file as the minimum extent of the output file.
--pulseversion <unsigned int>
 Specify the pulse version to be used within the SPD file (Default: 2)
--pointversion <unsigned int>
 Specify the point version to be used within the SPD file (Default: 2)
--wavenoise <float>
 Waveform noise threshold (Default 0)
--Oz <float> Origin Z coordinate
--Oy <double> Origin Y coordinate
--Ox <double> Origin X coordinate.
--defineOrigin Define the origin coordinate for the SPD.
--tly <double> Top left Y coordinate for defining the SPD file index.
--tlx <double> Top left X coordinate for defining the SPD file index.
--defineTL Define the top left (TL) coordinate for the SPD file index
--keeptemp Keep the temporary files generated during the conversion.
--convert_proj Convert the projection of the input data to the output projection
--output_proj <string>
 WKT string representing the projection of the output file
--input_proj <string>
 WKT string representing the projection of the input file
-b <float>, --binsize <float>
 Bin size for SPD file index (Default 1)
-c <unsigned int>, --numofcols <unsigned int>
 Number of columns within a tile (Default 0), using this option generates a non-sequential SPD file.
-r <unsigned int>, --numofrows <unsigned int>
 Number of rows within a tile (Default 25)
-s <string>, --schema <string>
 A schema for the format of the file being imported (Note, most importers do not require a schema)
-t <string>, --temppath <string>
 A path where temporary files can be written to
--scan Index the pulses using a scan coordinate system
--polar Index the pulses using a polar coordinate system
--spherical Index the pulses using a spherical coordinate system
--wavebitres <8BIT|16BIT|32BIT>
 The bit resolution used for storing the waveform data (Default: 32BIT)
-x <FIRST_RETURN|LAST_RETURN|START_WAVEFORM|END_WAVEFORM|ORIGIN|MAX_INTENSITY|UNCHANGED>, --indexfield <FIRST_RETURN|LAST_RETURN|START_WAVEFORM|END_WAVEFORM|ORIGIN|MAX_INTENSITY|UNCHANGED>
 The location used to index the pulses (Default: UNCHANGED)
--of <SPD|UPD|ASCII|LAS|LAZ>
 (required) Format of the output file (Default SPD)
--if <SPD|ASCIIPULSEROW|ASCII|FWF_DAT|DECOMPOSED_DAT|LAS|LASNP|LASSTRICT|DECOMPOSED_COO|ASCIIMULTILINE>
 (required) Format of the input file (Default SPD)
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Convert between file formats: spdtranslate
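A minimal sketch of the most common conversion, LAS to SPD, indexing the pulses on the first return with a 1 m bin size; file names are hypothetical:

```shell
# Convert a LAS file into a sequential SPD file, indexed on the
# first return with a 1 m spatial bin.
spdtranslate --if LAS --of SPD -x FIRST_RETURN -b 1 -i flight.las -o flight.spd
```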

spdversion


USAGE

spdversion  [--] [--version] [-h]  <string> ...

Where
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.
<string>
(accepted multiple times) File names for the input files

NAME

Prints version information: spdversion

spdwarp


USAGE

spdwarp  {--shift|--warp} -o <String> -i  <String> [-c <unsigned int>] [-r  <unsigned int>] [-y <float>] [-x  <float>] [-g <std::string>]  [--order <unsigned int>] [-p <>]  [-t <POLYNOMIAL|NEAREST_NEIGHBOR|TRIANGULATION>] [--] [--version]  [-h]

Where
--shift (OR required) Apply a linear shift to the SPD file.

-- or -- --warp (OR required) Apply a nonlinear warp to the SPD file defined by a set of GCPs.

-o <String>, --output <String>
 (required) The output SPD file.
-i <String>, --input <String>
 (required) The input SPD file.
-c <unsigned int>, --blockcols <unsigned int>
 Number of columns within a block (Default 0) - Note values greater than 1 result in a non-sequencial SPD file.
-r <unsigned int>, --blockrows <unsigned int>
 Number of rows within a block (Default 100)
-y <float>, --yshift <float>
 SHIFT: The y shift in the units of the dataset (probably metres).
-x <float>, --xshift <float>
 SHIFT: The x shift in the units of the dataset (probably metres).
-g <std::string>, --gcps <std::string>
 WARP: The path and file name of the gcps file.
--order <unsigned int>
 POLY TRANSFORM (Default=3): The order of the polynomial fitted.

-p <>, --pulsewarp <>
 WARP (Default=PULSE_IDX): The eastings and northings used to calculate the warp. ALL_RETURNS recalculates the offsets for each X,Y while PULSE_IDX and PULSE_ORIGIN use a single offset for the whole pulse.
-t <POLYNOMIAL|NEAREST_NEIGHBOR|TRIANGULATION>, --transform <POLYNOMIAL|NEAREST_NEIGHBOR|TRIANGULATION>
 WARP (Default=POLYNOMIAL): The transformation model to be fitted to the GCPs and used to warp the data.
--ignore_rest Ignores the rest of the labeled arguments following this flag.
--version Displays version information and exits.
-h, --help Displays usage information and exits.

NAME

Shift or warp the geolocation of an SPD file: spdwarp
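Both modes can be sketched from the usage line above; all flags shown are taken from that line, while the file names and numeric values are made up for illustration:

```shell
# Hedged sketch: apply a fixed planar shift of +5 m east, +10 m north.
spdwarp --shift -x 5.0 -y 10.0 -i flight1.spd -o flight1_shifted.spd

# Hedged sketch: nonlinear warp from a GCP file, fitting a
# second-order polynomial transformation.
spdwarp --warp -g gcps.txt -t POLYNOMIAL --order 2 -i flight1.spd -o flight1_warped.spd
```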

SHFU Equation

To define an equation that characterizes the structure of objects, we use a model consisting of the combination of variables derived from LiDAR, using the following functions and operators:

Operators

**, Power operator

+, Addition operator

-, Minus operator

*, Multiplication operator

/, Division operator

%, Modulo operator
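These operators map directly onto Python's arithmetic operators, so a candidate equation string can be sanity-checked before use. A minimal sketch, in which the metric name E_P95 and its value are hypothetical:

```python
# Hypothetical 95th-percentile canopy height, in metres.
E_P95 = 18.4

# An equation string using the SHFU operators (**, /, %, +, -).
equation = "E_P95 ** 2 / 10 + E_P95 % 5 - 3"

# Evaluate with an empty builtins table so only the metric is visible.
result = eval(equation, {"__builtins__": {}}, {"E_P95": E_P95})
print(result)
```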

Representation functions

  • fabs(x) Return the absolute value of x.
  • factorial(x) Return x factorial. Raises ValueError if x is not integral or is negative.
  • fmod(x, y) Return fmod(x, y), as defined by the platform C library. Note that the Python expression x % y may not return the same result. The intent of the C standard is that fmod(x, y) be exactly (mathematically; to infinite precision) equal to x - n*y for some integer n such that the result has the same sign as x and magnitude less than abs(y). Python’s x % y returns a result with the sign of y instead, and may not be exactly computable for float arguments. For example, fmod(-1e-100, 1e100) is -1e-100, but the result of Python’s -1e-100 % 1e100 is 1e100-1e-100, which cannot be represented exactly as a float, and rounds to the surprising 1e100. For this reason, function fmod() is generally preferred when working with floats, while Python’s x % y is preferred when working with integers.
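The sign difference between fmod() and Python's % operator can be seen with opposite-sign operands:

```python
import math

# fmod(x, y) takes the sign of x, while Python's x % y takes the
# sign of y, so the two disagree when the operands' signs differ.
print(math.fmod(-7.0, 3.0))  # sign of x: -1.0
print(-7.0 % 3.0)            # sign of y:  2.0
```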

Power and logarithmic functions

  • exp(x) Return e**x.
  • expm1(x) Return e**x - 1. For small floats x, the subtraction in exp(x) - 1 can result in a significant loss of precision; the expm1() function provides a way to compute this quantity to full precision.
  • log(x[, base]) With one argument, return the natural logarithm of x (to base e). With two arguments, return the logarithm of x to the given base, calculated as log(x)/log(base).
  • log1p(x) Return the natural logarithm of 1+x (base e). The result is calculated in a way which is accurate for x near zero.
  • log10(x) Return the base-10 logarithm of x. This is usually more accurate than log(x, 10).
  • pow(x, y) Return x raised to the power y. Exceptional cases follow Annex ‘F’ of the C99 standard as far as possible. In particular, pow(1.0, x) and pow(x, 0.0) always return 1.0, even when x is a zero or a NaN. If both x and y are finite, x is negative, and y is not an integer then pow(x, y) is undefined, and raises ValueError.
  • sqrt(x) Return the square root of x.
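The precision advantage of expm1() over the naive subtraction is easy to demonstrate near zero:

```python
import math

x = 1e-12
naive = math.exp(x) - 1   # subtraction loses precision for tiny x
accurate = math.expm1(x)  # computed to full precision
print(naive, accurate)

# log10 is usually more accurate than log(x, 10).
print(math.log10(1000.0))
```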

Trigonometric functions

  • atan(x) Return the arc tangent of x, in radians.
  • atan2(y, x) Return atan(y / x), in radians. The result is between -pi and pi. The vector in the plane from the origin to point (x, y) makes this angle with the positive X axis. The point of atan2() is that the signs of both inputs are known to it, so it can compute the correct quadrant for the angle. For example, atan(1) and atan2(1, 1) are both pi/4, but atan2(-1, -1) is -3*pi/4.
  • cos(x) Return the cosine of x radians.
  • hypot(x, y) Return the Euclidean norm, sqrt(x*x + y*y). This is the length of the vector from the origin to point (x,y).
  • sin(x) Return the sine of x radians.
  • tan(x) Return the tangent of x radians.
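The quadrant behaviour of atan2() described above can be verified directly:

```python
import math

# atan sees only the ratio -1/-1 == 1 and lands in the first quadrant,
# while atan2 uses both signs and places (-1, -1) in the third quadrant.
a = math.atan(1.0)          # ~  pi/4
b = math.atan2(-1.0, -1.0)  # ~ -3*pi/4
print(a, b)

# hypot returns the Euclidean norm of the vector (x, y).
print(math.hypot(3.0, 4.0))
```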

Constant

  • pi The mathematical constant π = 3.141592..., to available precision.
  • e The mathematical constant e = 2.718281..., to available precision.

Examples

  • E_P95
  • E_P80*2 + E_CV*2.34 + 5.89
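Evaluating the second example is plain arithmetic once metric values are available. A minimal sketch, in which the numeric values of E_P80 (80th-percentile height) and E_CV (coefficient of variation) are hypothetical:

```python
# Hypothetical LiDAR metric values for one stand.
metrics = {"E_P80": 14.2, "E_CV": 0.31}

# The second example equation: E_P80*2 + E_CV*2.34 + 5.89
prediction = metrics["E_P80"] * 2 + metrics["E_CV"] * 2.34 + 5.89
print(prediction)
```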
