Impact Toolbox User Guide



Getting started

IMPACT Toolbox was designed for analysing and assessing forest degradation using satellite imagery. Processing satellite imagery requires many technical operations: IMPACT Toolbox provides a series of modules that simplify these tasks, as many intermediate steps are wrapped into single functions.

Downloading and launching Impact

Impact runs on Microsoft Windows 64-bit (XP, Vista, Win7, Win8, Win10). You can download the self-extracting package (.exe) or the compressed archive (.zip) from our server http://forobs.jrc.ec.europa.eu/products/software/: save it to your disk and run it.

Choose a place on your hard drive to extract and save the software. Once the extraction is done, you will find a directory named IMPACT (default), which has the following directories inside:

- DATA folder: contains user’s vector and raster data (for further details see the "getting data section" below);

- GUI folder: contains the graphical user interface, dependencies and map editing functions;

- LIBS folder: contains the engine and other software such as Apache, Python, GDAL, Openlayers, Mapserver, GeoExt, Javascript and HTML;

- TOOLS folder: contains a dedicated folder for each processing module or external library/package used within the tool such as a portable version of Firefox under “Browser”, python scripts for image classification, segmentation, clipping etc;

- START_Impact.bat: Impact launching command.

To start Impact, double-click "START_Impact.bat". Two windows open: a shell window (a black window showing only text) and the Impact graphical user interface, showing a map and various icons. Note that the shell window must not be closed, otherwise Impact will not work properly.

To close Impact, close the shell window. If you accidentally close the graphical user interface, close the shell window and then restart Impact.

Updates

When launched, Impact checks our server over your internet connection. If a newer version is found, you will be prompted to update. Simply accept the update and follow the on-screen instructions.

Description of the interface

The Main Panel is IMPACT's desktop, from which it is possible to monitor the available raster and vector layers (left panel), visualize them on the map (central panel) and execute the processing modules available in the right panel.


The 60-second auto-refresh (adjustable in the "Settings" panel, see the dedicated chapter) guarantees prompt visualization of user datasets and processing outputs; available layers are grouped into the following categories:


Base Layers: contains five background map options: blank, Blue Marble, Sentinel-2, Google Satellite and Streets maps; Streets is the default and is automatically selected if an internet connection is available.
DATA: any vector or raster layer in DATA and its subfolders is visible under this group; it should contain reference data such as administrative boundaries, areas of interest, etc.

Drawing an area of interest


To draw an area of interest, first activate the function by clicking "Draw Rectangle" or "Draw Polygon" in the upper-left corner of the map. When selected, Edit/Delete/Save buttons appear. Once the button is enabled, the area of interest can be drawn on the map. The draw tool creates a rectangular/polygonal overlay with an orange border (while it is being drawn, the border is blue).

Right mouse button menu for layers


Right click on the DATA files to display the context menu, where you can select:

  1. Layer Info
  2. Zoom to Layer Extent
  3. Set Opacity
  4. Start Editing. With the multiband color renderer, three selected bands from the image will be rendered, each band representing the red, green or blue component that will be used to create a color image. The user can also modify the appearance and the data range from the image (Stretch).
  5. Rebuild image statistics and pyramids
  6. Rename dataset
  7. Delete dataset

Processing Modules

Zip/DN to TOA reflectance


By executing this module, Landsat/Sentinel-2/RapidEye zipped (.tar.gz or .tar.bz) archives placed in the DATA/DATA_RAW directory will be: 1) converted into a single Geo Tiff file, and 2) converted to top-of-atmosphere (TOA) reflectance and placed in the DATA directory. 

Standard satellite data products provided by space agencies consist of quantized and calibrated scaled Digital Numbers (DN) representing multispectral image data. The products are generally delivered in 16-bit unsigned integer format and can be rescaled to the Top Of Atmosphere (TOA) reflectance and/or radiance using radiometric rescaling coefficients provided in the product metadata file, as briefly described below.  

Geo Tiff conversion

Only the (R,G,B,NIR,SWIR1,SWIR2)# bands are extracted, renamed and layer-stacked; thermal and panchromatic bands are zipped and stored within the same folder. The output directory contains the resulting files with the following naming convention:

Multispectral GeoTiff file:  [sensor]_[path]_[row]_[ddmmyyyy].tif

Multispectral quick look :  [sensor]_[path]_[row]_[ddmmyyyy].gif

Metadata file: [sensor]_[path]_[row]_[ddmmyyyy].met

Zipped files for archive: [sensor]_[path]_[row]_[ddmmyyyy]_[band{1*,61,62,8,9,10,11,BQA}].tif.gz

Projection and spatial resolution:  as derived from the source data. 

# Landsat 4/5/7 : bands 1,2,3,4,5,7    Landsat 8 : bands 2,3,4,5,6,7

* Landsat 8 pre-blue band 1.

TOA Reflectance Conversion

By converting the raw digital number (DN) values to top-of-atmosphere (TOA) reflectance, data from different sensors/platforms are calibrated to a common radiometric scale, minimizing spectral differences caused by acquisition time, sun elevation and sun–earth distance.

Calibration coefficients:
The Landsat 8-bit (or 12-bit for OLI) DN to TOA correction formula is as follows:

ρλ = π * Lλ * d² / (ESUNλ * cos θSZ)

where

  • ρλ = TOA reflectance for band λ
  • Lλ = radiance for band λ = Mλ * Qcal + Aλ
  • Mλ = band-specific multiplicative rescaling factor
  • Aλ = band-specific additive rescaling factor
  • Qcal = quantized and calibrated standard product pixel values
  • d = 1 - 0.01672 * cos(0.01745 * 0.9856 * (Julian Day - 4)) (earth–sun distance)
  • θSZ = local solar zenith angle
  • TM ESUN = [1957.0, 1826.0, 1554.0, 1036.0, 215.0, 80.67]
  • ETM+ ESUN = [1969.0, 1840.0, 1551.0, 1044.0, 225.70, 82.07]
  • OLI ESUN = [2067.0, 1893.0, 1603.0, 972.6, 245.0, 79.72]

Multiplicative and additive rescaling factors are extracted from the metadata file. In order to reduce the size of the calibrated data, 32-bit float reflectance values [0-1] are then rescaled to 8-bit byte [0-255] with a linear multiplication factor of 255.
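The two-step conversion described above (DN to radiance, then radiance to TOA reflectance rescaled to 8-bit) can be sketched as follows. This is an illustrative snippet, not the code shipped with IMPACT; the function name is hypothetical and the real Mλ, Aλ, ESUN and sun elevation values come from the product metadata.

```python
import numpy as np

def dn_to_toa(dn, m_l, a_l, esun, sun_elevation_deg, julian_day):
    """Convert quantized DN to TOA reflectance, rescaled to 8-bit [0-255]."""
    radiance = m_l * dn.astype(np.float32) + a_l                  # L = M*Qcal + A
    d = 1 - 0.01672 * np.cos(0.01745 * 0.9856 * (julian_day - 4))  # earth-sun distance
    theta_sz = np.deg2rad(90.0 - sun_elevation_deg)               # solar zenith angle
    rho = np.pi * radiance * d**2 / (esun * np.cos(theta_sz))     # TOA reflectance [0-1]
    # rescale 32-bit float reflectance to 8-bit byte with a factor of 255
    return np.clip(rho * 255.0, 0, 255).astype(np.uint8)
```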


RapidEye data are usually provided as 5-band layer-stacked GeoTiff files. To convert the 16-bit digital numbers (DN) to radiance, it is necessary to multiply by the radiometric scale factor, as follows:

Lλ = DNλ * ScaleFactor(λ)

where ScaleFactor(λ) = 0.01


The resulting value is the at-sensor radiance of that pixel in watts per steradian per square meter (W/m² sr μm). The TOA correction formula for RapidEye data is as follows:

ρλ = π * Lλ * d² / (ESUNλ * cos θSZ)

where

  • ρλ= TOA reflectance for band λ
  • Lλ = Radiance for band λ
  • θSZ= Local solar zenith angle
  • d = 1 - 0.01672 * cos(0.01745 * 0.9856 * (Julian Day - 4)) (earth–sun distance)
  • ESUN = [1997.8,1863.5,1560.4,1395.0, 1124.4]


32-bit float reflectance values [0-1] are then rescaled to 16-bit unsigned integer [0-10000] with a linear multiplication factor of 10000. Formulas and parameters are derived from [1].
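A minimal sketch of the RapidEye conversion under the same assumptions (illustrative function name; the 0.01 scale factor and the rescaling to [0-10000] are those given above, while the ESUN and sun-elevation values are taken per scene from the metadata):

```python
import numpy as np

def rapideye_dn_to_toa(dn, esun, sun_elevation_deg, julian_day, scale_factor=0.01):
    """Convert RapidEye 16-bit DN to TOA reflectance rescaled to [0-10000]."""
    radiance = dn.astype(np.float32) * scale_factor               # L = DN * scale
    d = 1 - 0.01672 * np.cos(0.01745 * 0.9856 * (julian_day - 4))  # earth-sun distance
    theta_sz = np.deg2rad(90.0 - sun_elevation_deg)               # solar zenith angle
    rho = np.pi * radiance * d**2 / (esun * np.cos(theta_sz))     # TOA reflectance [0-1]
    # rescale float reflectance to 16-bit unsigned integer with a factor of 10000
    return np.clip(rho * 10000.0, 0, 10000).astype(np.uint16)
```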

Sentinel2

to be completed


Note: processed band names are saved in the metadata tag "Impact_bands"


Image Clip


The Image clip tool can be used with any raster layer to create a clipped layer using: 1) a selected GeoTiff image, and 2) a selected vector layer. Image clipping is a crucial step to reduce processing time and data volume. The user can clip any GeoTiff file from the input directory using predefined vector layer(s), each containing one or more features. The vector projection is converted on the fly to match the raster one. Note that clipping can only be done after the Zip/DN to TOA reflectance conversion.



Image Filters allow users to filter raster files by satellite type (e.g. "Oli" will keep only Landsat 8 OLI images).

Use individual features: this option creates as many output files as there are individual features in the vector. When "No" is selected, a single output file covering the bounding box of all features is created.

EXAMPLE
A test vector file has 4 features with ID = "feat_1-4". If "Use individual features" is set to "Yes", the ID is used as a prefix for the 4 output filenames, e.g. Clip_plot{1-4}_oli_226-068_03072014.tif. If no ID field is available, one is generated on the fly using a sequential number.


NOTE

If you have collected data from a variety of sources, chances are that not all layers contain the same coordinate system/projection. For example, the coordinate system of a shapefile created with the "Draw Rectangle" option can differ from the coordinate system of the raster data. Specifically, shapefiles created with the "Draw Rectangle" option are automatically saved in the LatLong geographic projection, while most Sentinel-2 / Landsat rasters are in other projections. The user must check the coherence between the projections of the shapefile and the rasters before clipping. The figure below shows that the difference in projections causes small 'edge effects'. Changing the shapefile projection using the fishnet option (see the General tool/Fishnet section) should give you a raster clipped to the same shape as your polygon.





Image Classification


The classification tools allow the user to carry out: 1) automatic classification and 2) K-means classification. It is also possible to reclassify a classification according to a desired list of classes, producing a new raster classification, using the 3) recoding tool.


Automatic Classification

The aim of this tool is to offer a fully automatic pixel-based classification product to be used in further processing steps such as segmentation and land cover mapping. The Single Date Classification (SDC) algorithm, as described and implemented in [2], is based on pre-defined knowledge-based "fuzzy" rules aiming to convert the TOA reflectance input bands into discrete thematic classes (Table 1). In brief, the classification chain consists of 2 steps: 1) NDVI partition into 3 broad categories as follows: [-1,0] = water; ]0,0.45] = soil; ]0.45,1] = vegetation; 2) ad-hoc band conditions (e.g. NIR > RED > 0.5) to split each category into sub-classes and, eventually, promote pixels to other categories, as may happen e.g. for turbid water with NDVI values > 0 (falling into the soil range).

Classification rules and satellite bands involved

The current implementation performs best when using the B,G,R,NIR,SWIR1-2 bands (Landsat TM/ETM+/OLI, Sentinel-2 and Landsat-like imagery); however, sensors such as RapidEye, DMC, ALOS/AVNIR-2, SPOT4/5 and Kompsat are fully supported, although yielding reduced accuracy in water/dark-soil discrimination due to the missing SWIR bands, as indicated in Figure 8 and Figure 9.

Figure 8
Figure 9
Similar SDC algorithm robustness and scalability among the aforementioned sensors have been confirmed by [3] and [4]; however, SDC's accuracy is not easily quantifiable, since the algorithm delivers broad thematic categories derived from spectral properties observed at a precise time in the vegetation cycle; it is therefore possible to classify leaf-off deciduous forest as grass or soil. [5] better explains how to combine and analyze SDC time series in order to produce more accurate land cover maps. It is worth noting that SDC is capable of retrieving the sun azimuth from the corresponding metadata in order to apply post-classification 3D models and morphological filters (opening and closing) for better cloud/shadow masking and "salt and pepper" reduction.
Class Id and description

As in Table 1, cloudy pixels (ID 1 and 2) and potential shadow pixels (ID = 10, 35, 40, 41, 42) are initially treated using a morphological 'closing' filter of 500 m; afterwards, cloudy pixels are projected along the sun azimuth and possible overlaps are automatically recoded as Shadow/Low Illumination (ID = 42). Please note that for off-nadir acquisition sensors like RapidEye, the relative positions of clouds and their shadows do not match the provided sun azimuth angle. The apparent cloud shift distance, relative to its true position, depends on the off-nadir angle and on the cloud height. Whereas the satellite off-nadir angle is well known, the height of imaged clouds is unknown [6]. For this reason the cloud and shadow masking provided by SDC might not be optimal for RapidEye imagery. Ideally, the user could replace the 'real' sun azimuth with the apparent one within the .xml metadata.

A pop-up interface eases data selection (single or multiple files) and settings such as filters, overwrite or evergreen forest normalization. The latter option performs the so-called "dark object subtraction", an image normalization towards predefined median forest values that improves classification accuracy. See the dedicated chapter for more details.

The SDC output legend is shown in Table 1. SDC does not aim to offer a detailed and reliable land cover map, since it relies on spectral properties observed at a precise time in the vegetation cycle; however, for leaf-on acquisitions a good match between the proposed class description and the actual land cover type is more likely.


K-means Classification

K-means is one of the simplest unsupervised learning algorithms that solve the clustering problem. The procedure classifies a given data set through a certain number of clusters fixed a priori [7]. The main idea is to define k centroids, one for each cluster. These centroids should be placed carefully, because different locations cause different results; the better choice is to place them as far away from each other as possible. The next step is to take each point of the data set and associate it with the nearest centroid. When no point is pending, the first step is completed and an early grouping is done. At this point, k new centroids are re-calculated as barycenters of the clusters resulting from the previous step. After these k new centroids are obtained, a new binding is done between the data set points and the nearest new centroid, generating a loop. As a result of this loop, the k centroids change their location step by step until no more changes occur; in other words, the centroids no longer move.

Specifically, the algorithm is composed of the following steps:

1) Place K points into the space represented by the objects that are being clustered. These points represent initial group centroids.
2) Assign each object to the group that has the closest centroid.
3) When all objects have been assigned, recalculate the positions of the K centroids.
4) Repeat Steps 2 and 3 until the centroids no longer move or for a given number of iterations. This produces a separation of the objects into groups from which the metric to be minimized can be calculated.
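The four steps above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the code shipped with IMPACT; `pixels` is assumed to be a flattened (N, bands) array.

```python
import numpy as np

def kmeans(pixels, k, n_iter=10, seed=0):
    """Minimal k-means: pixels is an (N, bands) array; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    # step 1: initial centroids picked from the data points
    centroids = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        # step 2: assign each pixel to the nearest centroid (Euclidean distance)
        dist = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # step 3: recompute each centroid as the barycenter of its members
        new = np.array([pixels[labels == c].mean(axis=0) if np.any(labels == c)
                        else centroids[c] for c in range(k)])
        if np.allclose(new, centroids):  # step 4: stop when centroids settle
            break
        centroids = new
    return labels, centroids
```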



The user has the possibility to select:

1) the number of clusters,

2) the number of iterations (i.e. step 4 of the kmeans procedure),

3) the suffix of the output file.



Recoding

The Recoding tool changes the pixel values of an image and works with float or integer values. Recoding directly modifies the image; it is however possible to save the recoding to another file, in order not to alter the original image.

Right-click on the raster and select "Start editing"; the editing toolbox will pop up. First, the user is asked to select the band to recode. There are two ways to define how the values are reclassified in the output raster: recode by intervals and recode by values. Either ranges of input values or individual values can be assigned to a new output value.

Using recoding by intervals, the user is asked to type the reclassification values: "from", "to", and "recode to". "from" and  "to" define the reclassification range for the new value ("recode to").

Using recoding by values, the user is asked to type the exact reclassification values: "value" and "recode to". The "use unique values" button displays the unique values of the raster.
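The two recoding modes can be sketched as follows. These helper names are hypothetical (the actual tool works through the editing toolbox interface); they only illustrate the interval and value mappings described above.

```python
import numpy as np

def recode_by_intervals(band, rules):
    """rules: list of (low, high, new_value); pixels in [low, high) -> new_value."""
    out = band.copy()
    for low, high, new in rules:
        out[(band >= low) & (band < high)] = new
    return out

def recode_by_values(band, mapping):
    """mapping: {old_value: new_value}; unmapped pixels keep their value."""
    out = band.copy()
    for old, new in mapping.items():
        out[band == old] = new
    return out
```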







Analysis & Enhancement


The tool provides options to apply processes, such as Linear Spectral Unmixing, Evergreen Forest Normalization, Pansharpening, Principal Component Analysis and Index Builder.


Linear Spectral Unmixing

Linear Spectral Unmixing (LSU) is a tool to decompose a source reflectance spectrum into a set of given endmember spectra. The result of the unmixing is a measure of the contribution of each individual endmember to the source spectrum, called the endmember's abundance. The proposed model, inspired by [8],[9], makes use of predefined endmembers for estimating soil, vegetation and water fraction images. Prior to the LSU, it is possible to perform the "Evergreen Forest Normalization" to minimize spectral differences across images acquired at different times and places. The adopted endmembers have the following values, expressed in TOA reflectance [0-1] for bands [B,G,R,NIR,SWIR1,SWIR2]:

Soil = [0.14, 0.16, 0.22, 0.39, 0.45, 0.27]

Vegetation = [0.086,0.062,0.043,0.247,0.109,0.039]

Water = [0.07, 0.039, 0.023, 0.031, 0.011, 0.007]

When processing RapidEye data, only the B,G,R,NIR bands are used; further adjustments could be made to better fit the sensor properties, also using the Red-Edge band. The following LSU formula has been implemented in Python (RUN_image_unmixing.py):

Unmix = I x (E+)^T

where:

Unmix = per-pixel 3x1 vector of endmember fractions computed using the unconstrained LSU

I = input image (6 bands) reshaped into a 2D array by a) flattening the pixels of each band into a 1D vector and b) concatenating all band vectors

E = 3x6 matrix of endmember spectra [Soil, Vegetation, Water]

(E+)^T = transpose of the pseudo-inverse of E
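A sketch of the unmixing formula in NumPy, using the endmember values listed above. This is an illustrative snippet, not the shipped RUN_image_unmixing.py; the least-squares fractions are obtained with the pseudo-inverse of the endmember matrix.

```python
import numpy as np

# Endmember spectra from the text: rows are Soil, Vegetation, Water,
# columns are the six reflective bands [B, G, R, NIR, SWIR1, SWIR2].
E = np.array([
    [0.14,  0.16,  0.22,  0.39,  0.45,  0.27 ],   # soil
    [0.086, 0.062, 0.043, 0.247, 0.109, 0.039],   # vegetation
    [0.07,  0.039, 0.023, 0.031, 0.011, 0.007],   # water
])

def unmix(image):
    """Unconstrained LSU: image is (rows, cols, 6); returns (rows, cols, 3) fractions."""
    pixels = image.reshape(-1, E.shape[1])       # flatten to (N, bands)
    fractions = pixels @ np.linalg.pinv(E)       # least-squares endmember fractions
    return fractions.reshape(image.shape[:2] + (3,))
```

A pixel that matches an endmember exactly yields a fraction of 1 for that endmember and 0 for the others.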

The user is asked to select the input image and choose whether to use or not the “EVG (Evergreen) Forest Normalization”.



Evergreen Forest Normalization

The normalization of multi-temporal data can set the radiometric measurements to a common relative scale and, consequently, ensure the spectral homogeneity of such data. Relative normalization adjusts the spectral values of all images to the values of one reference image. Commonly the reference image is selected as the most recent image or the least affected by atmospheric effects. This approach relies on the ability to identify stable targets between dates, named Pseudo-Invariant Features (PIF), and assumes that reflectance differences in these stable targets are due to atmospheric perturbations. A simple linear relationship among images across time is generally used to normalize images to the same reference level. 

Using this specific approach, dense evergreen forest pixels are considered as PIFs. Evergreen forest normalisation searches each band for the median value of dense evergreen forest pixels. Each pixel in the band is then corrected by (1) subtracting this median value and (2) adding a reference dense evergreen value.

Each reflective band (λ) is normalized, i.e. rescaled to the same reference forest value as follows:

ρnorm(λ) = ρ(λ) - Fmedian(λ) + Fref(λ)

where:

Fmedian(λ) = median value of dense evergreen forest of the sample site for band λ,

Fref(λ) = reference dense evergreen forest value for band λ, computed from representative areas selected visually in 100 images across all continents (22, 16, 11, 63, 28, 10 for bands B,G,R,NIR,SWIR1,SWIR2).
 
The median forest value parameter is extracted from a forest mask (/Tools/Evergreen_normalization/Global_EVG_map_ll.tif). The map was created by intersecting the Global Forest Change product [10] (tree cover > 65% in 2013) with Globcover 2009 product [11] classes 40, 65, 70, 160.
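The per-band correction can be sketched as follows. This is an illustrative version, assuming the scene forest median is computed from the pixels flagged by the evergreen forest mask; the function name is hypothetical.

```python
import numpy as np

def evergreen_normalize(image, forest_mask, ref_forest):
    """image: (rows, cols, bands); forest_mask: boolean (rows, cols);
    ref_forest: per-band reference dense-evergreen-forest values."""
    out = image.astype(np.float32)
    for b in range(image.shape[2]):
        med = np.median(image[..., b][forest_mask])  # scene forest median for band b
        out[..., b] += ref_forest[b] - med           # shift band to the reference level
    return out
```

After normalization the median of the forest pixels in each band equals the reference value, e.g. 63 for the NIR band in the 8-bit reference above.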



Evergreen forest mask as derived from the Global Forest Change Product and Globcover 2009 intersection


Pansharpen

Pansharpening is the process of combining the spatial information of the high-resolution grayscale band (panchromatic, or pan band) with the color information of the multispectral bands to create a single high-resolution color image.

All of the low-resolution bands are scaled up to match the resolution of the panchromatic band, using the selected resampling method (e.g. cubic interpolation). The 'Exponential stretch' option applies non-linear scaling with a power function; 1.5 is the exponent of the power function (which must be positive). This power function stretches pixel values between mean - 1.5*sd and mean + 1.5*sd, where sd is the standard deviation.


Principal Component Analysis

Principal components analysis (PCA) is a technique applied to multispectral remotely sensed data. Adjacent bands in a multispectral remotely sensed image are often highly correlated. If DN values of adjacent bands are plotted against each other, a high correlation may exist, meaning thereby that the two datasets are not statistically independent.  

Principal Components Analysis (PCA) is related to another statistical technique called factor analysis and can be used to transform a set of image bands such that the new bands (called principal components) are uncorrelated with one another and are ordered in terms of the amount of image variation they explain. Thus the first principal component (PC1) contains the highest variance in a scene, followed by PC2, PC3 and so on. The components are thus a statistical abstraction of the variability inherent in the original band set.

For an n dimensional dataset, n principal components can be produced. In addition to PC images, the PCA also produces eigenvalues. Eigenvalues contain information about percent of total variance explained by each PC.

An important advantage of PCA is that most of the information dispersed throughout the X bands may be compressed into a few bands with virtually no loss of information. The first three principal components typically contain over 90% of the variance in the data and hence the information in the scene. Using the principal components we may prepare a new raster in which the correlation between the bands (now PCs) is zero. A false color composite produced by using the first principal component (PC1) as red, the second (PC2) as green and the third (PC3) as blue will thus contain almost all the information in a scene. It must be remembered, however, that although the high PC images (PC6 for example) contain little variance, they must not be discarded without thorough examination, because they may well contain information not present in the lower principal components.
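The band-space PCA described above can be sketched with an eigendecomposition of the band covariance matrix. This is illustrative code, not the shipped module; it returns the PC images and the explained-variance ratios derived from the eigenvalues.

```python
import numpy as np

def pca_bands(image):
    """image: (rows, cols, bands). Returns PC images and explained-variance ratios."""
    r, c, b = image.shape
    X = image.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)                              # centre each band
    cov = np.cov(X, rowvar=False)                    # bands x bands covariance
    eigval, eigvec = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = eigval.argsort()[::-1]                   # sort by variance, descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    pcs = (X @ eigvec).reshape(r, c, b)              # project pixels onto components
    return pcs, eigval / eigval.sum()
```

On a two-band image where one band is nearly a multiple of the other, PC1 captures almost all the variance and the components are uncorrelated.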


The user is just asked to select the Geotiff file and the output suffix.

The figure below shows a principal component analysis in RGB (PC1, PC2, PC3) of a Sentinel-2 image.




ND(V,W,S)I Threshold (Index Builder)

The purpose of the index builder is to cluster images based on a normalized index. In remote sensing, a normalized index is an enhancement technique in which a raster pixel value from one spectral band is divided by the corresponding value in another band. The choice of bands is what makes such indexes appropriate for a broad range of applications, from minerals to soil to vegetation.

The user is asked to choose:

1) the raster image,

2) the two bands (e.g. band 4 and band 3 to compute NDVI using Landsat TM)

3) the number of clusters,

4) the suffix of the output image.



Although the formula

Index = (Band 1 - Band 2 ) / (Band 1 + Band 2 ) 

remains the same, the generated index can assume a variety of meanings by selecting different band combinations, as reported in the table below.

A non-exhaustive list of possible indexes to be used for image thresholding and clustering.

The partition (clustering) of the index is done by dividing its histogram into a number of equal-width bins in the given range (the number of clusters is defined by the user); the calculated bins are then used to determine the cluster ID and the associated color ramp [blue-red-green].
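The index computation and equal-width binning can be sketched as follows (illustrative function name; the bin edges are derived from the index range, as in the histogram partition described above).

```python
import numpy as np

def index_clusters(band1, band2, n_clusters):
    """Normalized index (b1-b2)/(b1+b2), partitioned into equal-width bins."""
    b1 = band1.astype(np.float32)
    b2 = band2.astype(np.float32)
    index = np.where(b1 + b2 != 0, (b1 - b2) / (b1 + b2), 0.0)
    # equal-width bins over the index range; inner edges define the clusters
    edges = np.linspace(index.min(), index.max(), n_clusters + 1)
    cluster_id = np.clip(np.digitize(index, edges[1:-1]), 0, n_clusters - 1)
    return index, cluster_id
```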

Delta NBR disturbance detection module

Methods:
Changes in canopy cover (signs of forest disturbance and degradation) are monitored by applying the Delta Normalized Burned Ratio (ΔNBR) approach.

Basically, the Normalized Burned Ratio (NBR) index is computed for two different periods, and pixels that display a difference greater than a specific threshold are labeled as disturbed forest.

The Normalized Burned Ratio (NBR) is defined as:

NBR = (NIR - SWIR) / (NIR + SWIR)

where: 

NIR is the Near InfraRed band (Band 4 for Landsat 7, Band 5 for Landsat 8, Band 8 for Sentinel 2)

SWIR is the Short Wave InfraRed band (Band 7 for Landsat 7/8, Band 12 for Sentinel 2)

However, atmospheric influences as well as other effects (e.g. sun incidence angle) can result in artifacts and outliers. To circumvent this issue, the "self-reference" version of the NBR is computed:

NBRself-referenced = NBR - NBRn_median

where NBRn_median is an n-pixel moving-window median filter of the NBR. The kernel size (radius of n pixels) of the circular median filter depends on the spatial resolution of the satellite sensor: 7 pixels for Landsat and 21 pixels for Sentinel-2.

The difference of Self-referenced NBR between two periods allows the assessment of forest disturbance:

ΔNBR = NBRself‐referenced_time1 – NBRself‐referenced_time2

Thresholds are applied to the self-referenced ΔNBR to distinguish undisturbed and disturbed forest. Specifically:

0 ≥ ΔNBR > -0.05: undisturbed forest
-0.05 ≥ ΔNBR > -0.1: medium disturbed
ΔNBR ≤ -0.1: strongly disturbed
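The NBR computation and the ΔNBR thresholds can be sketched as follows. These are illustrative functions (the self-referencing median-filter step is omitted for brevity); the class codes 0/1/2 are an assumption for the example.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burned Ratio from NIR and SWIR reflectance bands."""
    nir = nir.astype(np.float32)
    swir = swir.astype(np.float32)
    return np.where(nir + swir != 0, (nir - swir) / (nir + swir), 0.0)

def classify_delta_nbr(dnbr):
    """Label ΔNBR: 0 = undisturbed, 1 = medium disturbed, 2 = strongly disturbed."""
    return np.select([dnbr <= -0.1, dnbr <= -0.05], [2, 1], default=0)
```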





Segmentation

Image segmentation is the process of delineating objects in an image, such as crop fields, woods, roads, houses, etc. Segmentation algorithms usually exploit spectral signatures to delineate the objects within the image.

Impact uses INPE's TerraLib [12] libraries, which offer two algorithms: region growing, which considers only the spectral signatures of the objects, and Baatz [13], which also takes into account object compactness. The algorithms try to delineate objects that are sufficiently homogeneous in terms of color (region growing and Baatz) and compactness (Baatz). The image is then cut into a series of contiguous polygons called "segments". A second step allows re-uniting small segments that are sufficiently similar.

The image segmentation tool is launched by clicking on the following icon:


It launches the segmentation interface for selecting the image(s) to segment and allows control of the different parameters of the algorithms.


Segmentation Options:

- Multi-date segmentation:

  • “No”: selected images are treated individually and the corresponding vector file (ESRI Shapefile format) is saved within the image directory; please ensure that the segmentation parameters are applicable to all selected images, the band selection above all.
  • “Yes”: selected images are layer-stacked (only the selected bands) into a single image using the selected bands and weights. Please ensure the images overlap geographically. The first file is used to derive the output location and reference projection.

- Use classification to pre-label objects: selected raster files (thematic or not) are used to pre-label objects according to occurrence rules (majority, min, max) in either single or multi-date mode. In the latter case, the .dbf file reflects the top-down order in which the files were selected in the interface, saving (after the ID field) as many fields as there are input images, as in Table 3.

- Overwrite: set to “Yes” to automatically delete output files (if any)
- Optimization: if “Yes”, input images are processed using a tiling approach, reducing the total amount of memory required; however, the final segmentation might reflect the tiling pattern; do not use for small images.

Segmentation Parameters:

- Bands and weights: raster bands and associated weights to be used
- Scale factor: this factor controls the spectral heterogeneity of the image objects and is therefore correlated with their average size; the smaller it is, the more objects you will get
- Color: Baatz spectral component [0.0, 1.0]
- Compactness: Baatz morphological component [0.0, 1.0]
- Similarity [0,1]: the minimum Euclidean distance (expressed in DN values) used while merging segments; low values allow aggregation of heterogeneous objects.
- Suffix: user-defined string (alphanumeric only) to be added to the output filename

NOTE:

- Please ensure that the selected band numbers are available within the raster(s), otherwise a generic “Baatz Failure” error message is raised
- The multi-date segmentation creates an ancillary file within the directory of the first selected image (master), containing the names of the processed images and the order of the _class.tif fields within the .DBF file
- Object pre-labeling requires a raster (e.g. the result of the automatic classification) from which to extract statistical information (e.g. the “majority”); values are saved in the “T(n)_class” attribute and, as a backup, in “T(n)_cluster”
- Big TIFF files are now supported
- Segmentation results can be visualized by loading the appropriate Vector Legend using the "Context Menu" on the vector file

Load Classification Layers

TODO





Examples

Suggesting a default set of parameters for image segmentation is often not possible; image size, resolution, data type (byte, integer, float) and, not least, landscape fragmentation may vary significantly from biome to biome. However, this chapter gives an overview of the main parameters involved and the different results obtained with different acquisition sensors.




Degradation and deforestation detection and reporting

Impact provides two degradation methods: a pixel-based deforestation and disturbance reporting tool and the CarbEF module at Minimum Mapping Unit.




Pixel based deforestation and disturbance reporting



CarbEF: estimation of Carbon Emissions from deforestation and forest degradation at a Minimum Area

The CarbEF module estimates carbon emissions from deforestation and forest degradation, on the basis of a forest activity map, describing the loss of tree cover for two different time periods. For the sake of consistency, FAO and UNFCCC define a forest as being made up of homogeneous forest units of a minimum area, so as to fix the limit between isolated trees and groups of trees that are actually parts of a forest.

The module outputs a report of deforested and degraded areas and their corresponding carbon emissions. It also generates a map of carbon emissions for the two time periods, and a map summarising the status (degradation, deforestation) of each forest unit. These data can be disaggregated by land use change if required.


Inputs: activity data

The generic FAO and UNFCCC definition of a forest indicates that a forest unit is defined by a minimum ground area (in general 1 ha or 0.5 ha), a minimal tree cover (typically 30%, but some countries consider minimal covers as low as 10%) and a minimal tree height (typically 5 m). In remote sensing, the tree height is generally inferred from different ancillary sources of information and incorporated as a land cover class, often not updated dynamically. Conversely, time series of observations allow the detection of tree crown loss. As a result, the detection of forest degradation is mostly linked to the disappearance of tree crowns from satellite images, while the replacement of trees with smaller species between two observations is more difficult to assess.

In CarbEF, a forest unit is therefore defined with:

  • a minimum forest unit, approximating the minimum area of the forest definition

  • a minimum tree cover per forest unit.


CarbEF requires an activity map as input, indicating how each pixel of the image changed between the two periods of time, namely:

0 - no data: unclassified pixel

1 - T-T: tree presence found in both periods

2 - NT-NT: no tree presence found in either period

3 - T-NT1: tree presence found to be lost in the first period

4 - T-NT2: tree presence found to be lost in the second period.

If a forest unit loses tree-presence pixels between the two periods but still has more than the minimum tree cover left in period 2, it is degraded. If its tree cover falls below the minimum allowed, it is declared deforested (it no longer matches the definition of a forest unit).
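The decision rule above can be sketched in pure Python. This is an illustration of the rule as described, not CarbEF's actual code; the function name and thresholds are assumptions, while the pixel codes follow the activity map classes.

```python
# Hypothetical sketch of the CarbEF decision rule for one Minimum
# Forest Unit (MFU). Pixel codes follow the activity map:
# 0 = no data, 1 = T-T, 2 = NT-NT, 3 = T-NT1, 4 = T-NT2.

def classify_mfu(pixels, min_tree_cover=0.3):
    """Classify an MFU from its activity-map pixel codes."""
    valid = [p for p in pixels if p != 0]          # drop no-data pixels
    if not valid:
        return "no data"
    n = len(valid)
    cover_initial = (valid.count(1) + valid.count(3) + valid.count(4)) / n
    cover_final = valid.count(1) / n               # tree cover left after both periods
    lost = valid.count(3) + valid.count(4)
    if cover_initial < min_tree_cover:
        return "non-forest"                        # never met the forest definition
    if cover_final < min_tree_cover:
        return "deforested"                        # dropped below the minimum cover
    if lost > 0:
        return "degraded"                          # lost trees but still a forest
    return "stable forest"

# Example: 100 pixels, 60 stable trees, 20 lost in period 2, 20 non-tree
mfu = [1] * 60 + [4] * 20 + [2] * 20
print(classify_mfu(mfu))  # -> degraded (60% cover left, above the 30% minimum)
```

With 10% stable trees and 80% lost, the same unit would instead come out as deforested, since its remaining cover falls below the 30% threshold.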

CarbEF outputs a report, giving for each period of time:

  • the total area degraded

  • the total area deforested

  • corresponding carbon emissions


Carbon emissions are computed by associating with each tree pixel an amount of carbon released by deforestation or degradation. Technically, CarbEF allows:

  • the use of a map of biomass conversion factor for estimating the carbon emission

or

  • the use of a single biomass conversion factor for all pixels.

Carbon emissions due to degradation are computed as a percentage of the deforestation emission. A typical value is 50%, but CarbEF accepts anything in the 0-100% range.
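The arithmetic can be sketched as follows. The function and parameter names are illustrative, not the CarbEF API; the only fixed fact is that one tonne of carbon corresponds to 44/12 tonnes of CO2.

```python
# Hedged sketch of the emission arithmetic described above;
# names are invented for the example, not CarbEF's actual code.

C_TO_CO2 = 44.0 / 12.0   # tonnes of carbon -> tonnes of CO2 equivalent

def pixel_emission(biomass_tC_per_ha, pixel_area_ha,
                   degradation=False, degradation_fraction=0.5):
    """Emission of one deforested (or degraded) tree pixel, in t CO2 eq."""
    e = biomass_tC_per_ha * pixel_area_ha * C_TO_CO2
    return e * degradation_fraction if degradation else e

# A 30 m pixel covers 0.09 ha; assume a biomass factor of 50 tC/ha
print(round(pixel_emission(50, 0.09), 2))                    # deforestation
print(round(pixel_emission(50, 0.09, degradation=True), 2))  # degradation at 50%
```

Per-MFU totals would then be the sum of the per-pixel emissions, consistent with the per-pixel (not per-MFU-averaged) handling of conversion factors described later.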

Input: report control

CarbEF also offers to use two additional layers for controlling the output report:

  • A layer (shapefile) of land-cover types. It allows the breakdown of the area and emission counts per type of land cover (e.g. protected areas, concessions, etc.). It is up to the user to decide how to organize this breakdown;

  • A layer (raster) of exceptions: it identifies areas of activity that should not be taken into account in the counts of deforested and degraded areas and their emissions. Typically, the user may want to exclude an area where the change does not represent a loss of trees (e.g. a burn of the herbaceous layer or a flooded area) and treat it separately.

Algorithm

The processing algorithm is described in the following document: [Document under revision, coming soon]

Using CarbEF

Module CarbEF is found under "Degradation".

Carbef interface.png

The interface has mandatory and optional inputs. Mandatory inputs:

  • Activity map. This is the reference map; it must contain pixels with values from 0 to 4.

  • An output prefix name, that will be used to name all the generated files.

  • The report language. For now, only French [FR] and English [EN] are supported.

  • Overwrite output: Yes/No

  • Minimum forest units (MFU) definition:

    • the size: MFUs are square-shaped; the size is given in pixels of the activity map;

    • the minimum tree crown cover per MFU.

  • Start and End year of the two periods. This information is used to write the output report and compute the various rates. A period starts at the beginning of a year and ends at the end of a year: a period of one year (2010) starts in 2010 and ends in 2010; a period of two years (2010-2011) starts in 2010 and ends in 2011.

  • Biomass conversion factor, used to estimate the emissions due to deforestation. This factor is expressed in tonnes of carbon per hectare; CarbEF converts it into tonnes of CO2 equivalent. One can either use a constant or a map. CarbEF reprojects and resamples the map to align it with the activity map. Emissions are computed by summing each individual pixel contribution within each MFU (conversion factors are not averaged per MFU but considered separately for each pixel);

  • The emissions due to degradation are expressed as a fraction of the emissions due to deforestation.


The optional input layers are:

  • a disaggregation layer

  • an exception map

The disaggregation layer is defined with:

  • a disaggregation layer. This is a shapefile, with polygons.

  • the field of the disaggregation layer to use for breaking down the various counts of the report. In general, the field value is a type of land cover, say "State concession", "Private concession", "Protected areas", "Other";

CarbEF reports the areas and emissions separately for each value of the selected field, showing the corresponding total counts for each field value. CarbEF reprojects the shapefile to match the activity map projection. As shapefiles may have loosely defined projections (the projection is not part of the shapefile norm, and some software may generate a shapefile with the .prj ancillary file missing), it is safer to ensure the shapefile is already in the right projection.
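The breakdown can be pictured as grouping per-unit counts by the selected field value. The records and field names below are invented for the example; they do not reflect CarbEF's internal data structures.

```python
# Illustrative sketch of the disaggregation breakdown: each MFU carries
# the land-use value of the polygon it falls in, and counts are summed
# per value.

from collections import defaultdict

mfus = [
    {"landuse": "Protected areas", "deforested_ha": 2.0, "emission_tCO2": 33.0},
    {"landuse": "State concession", "deforested_ha": 5.5, "emission_tCO2": 90.0},
    {"landuse": "Protected areas", "deforested_ha": 1.0, "emission_tCO2": 16.5},
]

report = defaultdict(lambda: {"deforested_ha": 0.0, "emission_tCO2": 0.0})
for m in mfus:
    for key in ("deforested_ha", "emission_tCO2"):
        report[m["landuse"]][key] += m[key]

print(dict(report)["Protected areas"])  # -> {'deforested_ha': 3.0, 'emission_tCO2': 49.5}
```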

The exception map is a raster. Pixels set to 0 are not part of the exception and are used in the normal processing. Pixels with a different value are counted separately and discarded from the general processing.

Output images

Three images are created and added to Impact:

  • [output name]_class.tif: MFU classification

  • [output_name]_change.tif: change proportion per MFU

  • [output_name]_biomass.tif: carbon emission, period 1, period 2, and their sum, in separate bands.


The MFU classification [output name]_class.tif is a single band raster with the classes of the MFU boxes, displayed with the following color table:

Carbef outputclasses.png

The change proportions per MFU, [output_name]_change.tif, form a five-band raster:

- Band 1: OUT_ND, proportion of no-data pixels per MFU;

- Band 2: OUT_FF, proportion of tree pixels per MFU, stable between period 1 and period 2;

- Band 3: OUT_NFNF, proportion of pixels that were not trees in either period;

- Band 4: OUT_FNF1, proportion of pixels lost in period 1;

- Band 5: OUT_FNF2, proportion of pixels lost in period 2.

The carbon emission estimates, [output name]_biomass.tif, form a three-band raster expressed in tonnes of CO2 equivalent:

- Band 1: emissions per MFU, for period 1;

- Band 2: emissions per MFU, for period 2;

- Band 3: sum of band 1 and band 2.

General Tools


Raster conversion

Users often need to convert raster images in order to change the projection, rescale values, or select a given subset of bands.
The Raster conversion tool allows users to:
1) Reproject rasters
2) Set the nodata value
3) Set the pixel size
4) Set the data type (e.g. byte)
5) Rescale values
6) Select the bands of interest
7) Choose the output format (i.e. GeoTiff/JPEG/PNG)
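Under the hood, operations like these correspond closely to standard GDAL command-line flags (reprojection itself is handled by gdalwarp's -t_srs option). As an illustration only, the sketch below assembles a hypothetical gdal_translate command covering points 2-7; it builds the argument list but does not invoke GDAL, and the default values are assumptions.

```python
# Build a gdal_translate command line equivalent to the listed options:
# -a_nodata (nodata), -tr (pixel size), -ot (data type), -scale (rescale),
# -b (band selection), -of (output format). Illustrative defaults only.

def translate_cmd(src, dst, bands=(1, 2, 3), nodata=0, pixel_size=30,
                  dtype="Byte", scale=(0, 10000, 0, 255), fmt="GTiff"):
    cmd = ["gdal_translate", "-of", fmt, "-ot", dtype,
           "-a_nodata", str(nodata),
           "-tr", str(pixel_size), str(pixel_size),
           "-scale", *map(str, scale)]
    for b in bands:                      # keep only the bands of interest
        cmd += ["-b", str(b)]
    return cmd + [src, dst]

print(" ".join(translate_cmd("input.tif", "output.tif")))
```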



Mosaic

A common task is to stitch the available satellite scenes together to create a much larger image. It thus becomes necessary to build a mosaic from many tiles, with a single large file as output.


The first processing option, "Place each file into separate bands", controls how bands are combined. When "No" is selected, data are not duplicated and the bands of the scenes are merged: the output file contains X bands, where X is the number of bands of the input images. Conversely, when "Yes" is selected, the output file contains N*X bands, where N is the number of scenes and X is the number of bands.

The resampling algorithm defaults to nearest neighbour ("near").
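The geometry behind a mosaic can be illustrated with a toy sketch: the output extent is the union of the scene bounding boxes, and the output size in pixels follows from the common resolution. The function below is an assumption-laden illustration, not the tool's actual implementation (IMPACT wraps the equivalent GDAL steps).

```python
# Toy sketch: compute the extent and pixel size of a mosaic from the
# bounding boxes of the input scenes at a common resolution.

def mosaic_extent(bboxes, res):
    """bboxes: iterable of (xmin, ymin, xmax, ymax) in map units."""
    xmin = min(b[0] for b in bboxes)
    ymin = min(b[1] for b in bboxes)
    xmax = max(b[2] for b in bboxes)
    ymax = max(b[3] for b in bboxes)
    cols = int(round((xmax - xmin) / res))
    rows = int(round((ymax - ymin) / res))
    return (xmin, ymin, xmax, ymax), (cols, rows)

# Two adjacent, slightly overlapping 30 m scenes
scenes = [(0, 0, 3000, 3000), (2700, 0, 5700, 3000)]
extent, size = mosaic_extent(scenes, res=30)
print(extent, size)  # -> (0, 0, 5700, 3000) (190, 100)
```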


Fishnet

Create Fishnet creates a feature class containing a net of rectangular cells. Creating a fishnet requires two basic pieces of information: the spatial extent of the fishnet and the number of rows and columns. 

The first parameter of the tool is 'Use template for extent'. If selected, the spatial extent of the fishnet will be that of the selected raster/shapefile. Otherwise, the user is asked to manually supply the minimum and maximum x- and y-coordinates.

The second set of parameters is the number of rows and columns, or the height and width of each cell in the fishnet.

It is also possible to select a random percentage of the total cells by checking the random parameter.

Note that if the user selects a raster template for the extent and then sets one row and one column, the tool returns a shapefile covering the whole selected raster.


This tool can be used to divide a raster into smaller tiles. In this case the user has to select 'Use template for extent' and specify the number of rows and columns to split the raster into. Once the fishnet shapefile has been created, the user should use the "Image Clip" tool, selecting the raster, the fishnet shapefile as clipping boundary, and "use individual features".
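The fishnet logic can be sketched in a few lines. Here cell geometries are plain bounding-box tuples and the random option keeps a sample of cells; this is an illustration of the idea, not the tool's actual implementation.

```python
# Minimal fishnet sketch: split an extent into rows x cols rectangular
# cells, optionally keeping only a random percentage of them.

import random

def fishnet(xmin, ymin, xmax, ymax, rows, cols, random_pct=None, seed=0):
    dx = (xmax - xmin) / cols
    dy = (ymax - ymin) / rows
    cells = [(xmin + c * dx, ymin + r * dy,
              xmin + (c + 1) * dx, ymin + (r + 1) * dy)
             for r in range(rows) for c in range(cols)]
    if random_pct is not None:            # keep only a random sample of cells
        k = max(1, round(len(cells) * random_pct / 100))
        cells = random.Random(seed).sample(cells, k)
    return cells

cells = fishnet(0, 0, 100, 100, rows=2, cols=2)
print(len(cells), cells[0])  # -> 4 (0.0, 0.0, 50.0, 50.0)
```

Note that with one row and one column, the single cell coincides with the whole extent, matching the behaviour described above for a raster template.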


Statistics

The Statistics tool allows the user to analyse a raster against a vector dataset. Specifically, it calculates several statistics of the pixels of a raster layer that fall within each polygon of a vector layer: amongst others, the sum, the mean and the total count. The tool writes the results as new columns in the vector layer, with a user-defined prefix. The statistics are defined as follows:

- Count: Number of pixels.

- Percent: The percentage of valid pixels.

- Area: The area of each feature of the shapefile.

- i_para: Perimeter-area ratio is the ratio of the patch perimeter (m) to area (m2).

- Mean: The mean (average) value, i.e. the sum of the values divided by the number of values.

- Median

- Mode

- Standard Deviation

- Min: The minimum value.

- Max: The maximum value.

The processing option allows users to set a value to be treated as nodata, in addition to the one declared in the metadata.

Note that users must check the coordinate system of the data. If the data are in decimal degrees, the areas are also given in degrees. To obtain the area of each polygon of the shapefile in square metres, the data must be in a metric projection, so the user may need to reproject them first.
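Most of the statistics above can be illustrated with a minimal pure-Python sketch operating on the pixel values that fall within one polygon (how pixels are assigned to polygons is omitted here). The function name and the 30 m pixel area are assumptions for the example.

```python
# Per-polygon statistics for one feature's pixel values, with nodata
# excluded, as in the definition list above (i_para is omitted as it
# needs the feature geometry).

import statistics

def zonal_stats(values, nodata=None, pixel_area=900.0):  # 30 m pixel = 900 m2
    valid = [v for v in values if v != nodata]
    return {
        "count": len(valid),
        "percent": 100.0 * len(valid) / len(values) if values else 0.0,
        "area": len(valid) * pixel_area,    # valid area in map units (m2 here)
        "mean": statistics.mean(valid),
        "median": statistics.median(valid),
        "min": min(valid),
        "max": max(valid),
        "std": statistics.pstdev(valid),
    }

print(zonal_stats([10, 20, 30, 0, 40], nodata=0)["mean"])  # -> 25
```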

How to download Sentinel 2 and Landsat data

Sentinel 2

Here we briefly review two different ways of getting the data: one is the official ESA channel, the Sentinels Hub; the other is the JRC repository, through the Sentinel2 portal.

Downloading through the Sentinels Hub

We start by visiting the Sentinels Data Hub web site (https://scihub.copernicus.eu/dhus/#/home). A login is necessary to access the data, so if you do not have an account you will have to create one first. Once this is done, simply start searching by clicking the search button, or the advanced search button, at the top left:


After we perform the search, the results are shown in this way:


Then, by clicking the "eye" icon, we are shown the details of each scene. If the scene satisfies our needs, we can download it by pressing the "arrow" button at the bottom right. Beware that scenes are packages of about 6 GB in size.


Downloading through JRC Sentinel2 portal

We start by navigating to the JRC Sentinel 2 portal https://cidportal.jrc.ec.europa.eu/forobs/sentinel.py

An information box pops up immediately. The user is asked to enter his/her email address in order to recover the session later on. Entering the email address is not mandatory, but without a valid email address the session cannot be recovered. The email address is not saved; it is converted into a hash code.
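Storing only a hash means the portal keeps a fixed-length fingerprint rather than the address itself. The actual algorithm used by the portal is not documented here; the sketch below assumes SHA-256 purely for illustration.

```python
# Illustrative only: derive a session token from an e-mail address by
# hashing it (the portal's real algorithm is not documented here).

import hashlib

def session_token(email):
    # Normalize before hashing so "User@X" and "user@x" match
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

print(session_token("user@example.org")[:16])  # first 16 hex characters
```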

When CID portal starts, you are presented with the GUI as shown below. The numbers 1 through 5 in blue circles refer to the five major areas of the interface as described below:

1) Layers: this area lists all the layers in the project.
2) Filters: allows users to look for specific content by timespan, orbit and cloud cover.
3) Cart: allows users to download data.
4) Map view: maps are displayed in this area. The bar at the top allows the user to draw areas of interest such as rectangles and polygons.
5) Log Monitor: provides complete monitoring of application logs, log files, event logs and service logs.




Then, by clicking on "Filters" in the menu on the left, we can choose the orbit, time span and admissible cloud cover:


By clicking on the desired cell, we are shown the details of the image. By clicking the Add Image to Cart button (the cart button to the left of the Sentinel2 thumbnails), we can add the selected Sentinel2 scenes to the cart.


The Sentinel 2 scenes' cart allows users to download the desired images.


Two options are available:

1) Download the Sentinel 2 file "as it is" in a zip format.

2) Process the Sentinel 2 file and rescale the image data to Top Of Atmosphere (TOA) reflectance. The processing also covers the selection of the bands of interest and the change of projection and resolution. Additionally, by selecting a vector file as Area Of Interest (AOI) from the ones available in the Layers section, only intersecting images, or parts of them, are processed, reducing processing time and image size.


Images downloaded as a Zip file have to be processed with the Zip/DN to TOA-Reflectance tool (see the dedicated section), whereas images downloaded as "processed" are directly usable as rasters in IMPACT. The next sections review how to open and preprocess the downloaded files.

Landsat

The USGS distributes Landsat imagery through the EarthExplorer data portal (https://earthexplorer.usgs.gov/).

Procedure

1) Downloading data from EarthExplorer requires that you first sign in as a registered user.

2) After you sign in, use the “Search Criteria” tab in the upper left to specify an area of interest (see Figure). You can type a place name in the search field or click on the map to place a pin that you can drag to other places.


3) Click the “Data Sets” tab next. From the long list that appears, select “Landsat Archive” and check the box for the first item: “L8 OLI/TIRS.”


4) Click the “Results” tab to peruse the available images. Then click on the image thumbnails to see larger image previews in false color, which you can evaluate for cloud cover and geographic coverage. Scrolling down the image previews reveals additional metadata.


5) The EarthExplorer portal includes other useful tools. For example, you can filter an image search by acquisition date and percent cloud cover. Landsat image extents are also viewable as transparent map overlays.

6) To download a scene from the search results list, click the “Download Options” icon. Select the last option “Level 1 GeoTIFF Data Product” to download all Landsat data bands. These are very big files (about 1 GB compressed, 2 GB uncompressed).

The data are provided in a compressed (zipped) format. You can use the Zip/DN to TOA-Reflectance tool to unpack the file and create a set of GeoTiff files. The section Zip/DN to TOA-Reflectance Tool goes into more detail on this.

Tutorials

Find tutorials in English and in French (Tutoriels) on this wiki.

Troubleshooting and FAQ

Question: Why is IMPACT not starting?

Answer: There are several factors preventing the correct execution of IMPACT:

  • At launch time, START_Impact.bat does nothing and no DOS prompt appears: ensure the antivirus is not blocking the execution of .bat or .exe files; if so, add an exception or disable it.
  • START_Impact.bat runs and the DOS console opens correctly, but Firefox and the GUI do not appear: other HTTP services (Apache, TomCat, etc.) may be running simultaneously, so that the IMPACT default port 2020 is busy. Edit START_Impact.bat to change the port number, and check whether the DOS console reports error messages.

Question: Why is IMPACT not updating?

Answer: While updating, the DOS console may ask to retry unlinking one or more files; answer yes by typing 'Y' and pressing Enter. If the message persists or the update fails, ensure there is no IMPACT processing running in the background (see the Log Monitor) and/or no file (layer) in pending status (opaque, with a spinning icon); if so, wait until IMPACT has collected all necessary information on your files and try to update the tool again. Otherwise, report any error message to us.

Question: Why are my files not appearing in the GUI?

Answer: At launch time, Impact scans the DATA folder and collects information about GeoTiff (.tif) and Shapefile (.shp) files; only files with a valid coordinate reference system (any projection supported by GDAL/OGR) are then visualized. With many or large files, the scan may take several minutes.

Question: My old version of Impact (before 3.2) is freezing at launch time.

Answer: At launch time, Impact computes the statistics of any new layer (rasters and vectors). Depending on the number of files and their size, this process may consume a significant amount of computer resources. In this case, your user interface may think the software is blocked and propose to kill the script. In version 3.2 we introduced multi-threading, which allows processing the statistics of new layers without blocking the user interface. This new approach also allows the software update mechanism to run in the background (which could be blocked in some cases before version 3.2).

If you are still on an old version and experience some freezing, you need to close Impact, remove your data and restart Impact, so that its resources are not consumed computing data statistics but can be used to update the software. Here are the operations in detail:

  • First kill Impact and any spawned script: open an MS-DOS prompt (type "cmd" in the Windows search tool) and type:
    •  taskkill.exe /F /IM python.exe /T
  • Then move your data folder out of the Impact folder (IMPACT/DATA), the point being to have an empty Impact data folder when launching it again. By doing so there won't be any layer to process, and Impact will have enough resources to run the software update;
  • Launch again Impact, and accept the update (you must be connected to the Internet for that);
  • Once the software is updated, move your data back to the Impact data folder.

Copyright

Copyright (c) 2015,

European Union; All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. The IMPACT Toolbox is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.


License

The IMPACT toolbox is distributed under the GNU General Public License (GPLv3).

Please refer to http://www.gnu.org/licenses/gpl-3.0.en.html to access the official versions of the license together with a preamble explaining the purpose of this Free/Open Source Software License. The toolbox includes a number of subcomponents with separate copyright notices and license terms. Your use of the source code for these subcomponents is subject to the terms and conditions stated in the corresponding License.txt file available at:

Apache: \Libs\Apache2\LICENSE.txt
OpenLayers: \Gui\resources\libs_external\OpenLayers-2.13.1\license.txt
ExtJs: \Gui\resources\libs_external\ext-4.2.1.883\license.txt
Python: \Libs\Python27\LICENSE.txt
PyMorph: \Libs\Python27\Lib\site-packages\pymorph\README.rst
Numpy: \Libs\Python27\Lib\site-packages\numpy\LICENSE.txt
Scipy: \Libs\Python27\Lib\site-packages\scipy\LICENSE.txt
Sklearn: \Libs\Python27\Lib\site-packages\sklearn\
Gdal/Ogr + Libs: \Libs\GDAL\License\
Mapserver + Libs: \Libs\mapserver\Licenses\
Firefox Portable: \Libs\Browser\FirefoxPortable\Other\Source\License.txt


Related Website