Impact Toolbox User Guide

From JRC Impact Toolbox User Guide

Getting started

The IMPACT toolbox was designed for analysing and assessing forest degradation using satellite imagery. Processing satellite imagery requires many technical operations: the IMPACT toolbox provides a series of modules that simplify those tasks, as many intermediate steps are wrapped into single functions.

Downloading and launching Impact

Impact installs on Microsoft Windows (XP, Vista, Win7, Win8, Win10). Download the installer from our server, save it to your disk and run it.

Choose a location on your hard drive to install the software. Once the installation is done, you will find a directory named impact, which contains the following items:

- DATA folder: contains user’s vector and raster data (for further details see the "getting data section" below);

- GUI folder: contains the graphical user interface, dependencies and map editing functions;

- LIBS folder: contains the engine and other software such as Apache, Python, GDAL, Openlayers, Mapserver, GeoExt, Javascript and HTML;

- TOOLS folder: contains a dedicated folder for each processing module or external library/package used within the tool such as a portable version of Firefox under “Browser”, python scripts for image classification, segmentation, clipping etc;

- START_Impact.bat: Impact launching command.

To start Impact, double-click on "START_Impact.bat". Two windows open: a shell window (a black window showing only text) and the Impact graphical user interface, showing a map and various icons. Note that the shell window must not be closed, otherwise Impact will not work properly.

To close Impact, close the shell window. If you accidentally close the graphical user interface, close the shell window and then restart the Impact tool.


When you launch Impact, it contacts our server through your internet connection. If a newer version is found, Impact proposes to update your installation. Simply accept the update and follow the on-screen instructions.

Description of the interface

The Main Panel is IMPACT's desktop, from which it is possible to monitor available raster and vector layers (left panel), visualize them on the map (central panel) and execute the processing modules available on the right panel.


The 60-second auto-refresh (adjustable in the “Settings” panel, see the dedicated chapter) guarantees prompt visualization of user datasets and processing outputs; available layers are grouped into the following categories:


Base Layers: contains five background map options: blank, Blue Marble, Sentinel-2, Google Satellite and Streets maps; Streets is the default and is automatically selected if an internet connection is available.
DATA: any vector or raster layers in DATA and its subfolders are visible under this group; it should contain reference data such as administrative boundaries, areas of interest, etc.

Drawing an area of interest


To draw an area of interest, first activate the function by clicking on "Draw Rectangle" or "Draw Polygon" in the upper left corner of the map. When selected, Edit/Delete/Save buttons appear. Once the button is enabled, the area of interest can be drawn on the map. The draw tool creates a rectangular/polygonal overlay with an orange border (while it is being drawn the border is blue).

Right mouse button menu for layers


Right click on the DATA files to display the context menu, where you can select:

  1. Layer Info
  2. Zoom to Layer Extent
  3. Set Opacity
  4. Start Editing. With the multiband color renderer, three selected bands from the image will be rendered, each band representing the red, green or blue component that will be used to create a color image. The user can also modify the appearance and the data range from the image (Stretch).
  5. Rebuild image statistics and pyramids
  6. Rename dataset
  7. Delete dataset

Processing Modules

Zip/DN to TOA reflectance


By executing this module, zipped Landsat/Sentinel-2/RapidEye archives (.tar.gz and similar) placed in the DATA/DATA_RAW directory will be: 1) converted into a single GeoTiff file, and 2) converted to top-of-atmosphere (TOA) reflectance and placed in the DATA directory.

Standard satellite data products provided by space agencies consist of quantized and calibrated scaled Digital Numbers (DN) representing multispectral image data. The products are generally delivered in 16-bit unsigned integer format and can be rescaled to the Top Of Atmosphere (TOA) reflectance and/or radiance using radiometric rescaling coefficients provided in the product metadata file, as briefly described below.  

Geo Tiff conversion

Only the (R,G,B,NIR,SWIR1,SWIR2)# bands are extracted, renamed and layer-stacked; thermal and panchromatic bands are zipped and stored within the same folder. The output directory contains the resulting files with the following naming convention:

Multispectral GeoTiff file:  [sensor]_[path]_[row]_[ddmmyyyy].tif

Multispectral quick look :  [sensor]_[path]_[row]_[ddmmyyyy].gif

Metadata file: [sensor]_[path]_[row]_[ddmmyyyy].met

Zipped files for archive: [sensor]_[path]_[row]_[ddmmyyyy]_[band{1*,61,62,8,9,10,11,BQA}].tif.gz

Projection and spatial resolution:  as derived from the source data. 

# Landsat 4/5/7 : bands 1,2,3,4,5,7    Landsat 8 : bands 2,3,4,5,6,7

* Landsat 8 pre-blue band 1.

TOA Reflectance Conversion

By converting the raw digital number (DN) values to top-of-atmosphere (TOA) reflectance, data from different sensors/platforms are calibrated to a common radiometric scale, minimizing spectral differences caused by acquisition time, sun elevation, and sun–earth distance.

Calibration coefficients:
The Landsat 8-bit (or 12-bit for OLI) DN to TOA correction formula is as follows:

ρλ = π * Lλ * d² / (ESUNλ * cos θSZ)


  • ρλ= TOA reflectance for band λ
  • Lλ = Radiance for band λ = Mλ * Qcal + Aλ
  • Mλ = Band-specific multiplicative rescaling factor
  • Aλ = Band-specific additive rescaling factor
  • Qcal = Quantized and calibrated standard product pixel values
  • d = 1 - 0.01672*cos(0.01745*(0.9856*(Julian Day of Image - 4)))
  • θSZ= Local solar zenith angle
  • TM ESUN = [1957.0, 1826.0, 1554.0, 1036.0, 215.0, 80.67]
  • ETM+ ESUN = [1969.0, 1840.0, 1551.0, 1044.0, 225.70, 82.07]
  • OLI ESUN = [2067.0, 1893.0, 1603.0, 972.6, 245.0, 79.72]

Multiplicative and additive rescaling factors are extracted from the metadata file. To reduce the size of the calibrated data, 32-bit float reflectance values [0-1] are then rescaled to 8-bit byte [0-255] with a linear multiplication factor of 255.
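As an illustration only (not the toolbox's actual code), the DN-to-TOA conversion and 8-bit rescaling described above can be sketched in Python with NumPy; the helper name and all coefficient values below are made up for the example:

```python
import numpy as np

def dn_to_toa_reflectance(dn, mult, add, d, esun, sun_elev_deg):
    """Hypothetical helper: convert quantized DN values to TOA reflectance
    following the formula above, then rescale [0-1] floats to 8-bit [0-255]."""
    radiance = mult * dn.astype(np.float64) + add          # L = M*Qcal + A
    sz = np.radians(90.0 - sun_elev_deg)                   # local solar zenith angle
    rho = np.pi * radiance * d**2 / (esun * np.cos(sz))    # TOA reflectance
    return np.clip(rho * 255.0, 0, 255).astype(np.uint8)   # 8-bit rescaling

# toy 2x2 DN array with invented rescaling coefficients
dn = np.array([[100, 200], [300, 400]], dtype=np.uint16)
out = dn_to_toa_reflectance(dn, mult=0.01, add=0.0, d=1.0,
                            esun=1957.0, sun_elev_deg=60.0)
```

In the real module, `mult` and `add` come from the product metadata file and `esun` from the per-sensor tables above.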

RapidEye data are usually provided as 5 band layer stacked Geo-Tiff files. To convert 16bit Digital Number (DN) to radiance it is necessary to multiply by the radiometric scale factor, as follows:

Lλ = DNλ * ScaleFactor(λ)

where ScaleFactor(λ) = 0.01

The resulting value is the at-sensor radiance of that pixel in watts per steradian per square metre (W/(m² sr μm)). The TOA correction formula for RapidEye data is as follows:

ρλ = π * Lλ * d² / (ESUNλ * cos θSZ)


  • ρλ= TOA reflectance for band λ
  • Lλ = Radiance for band λ
  • θSZ= Local solar zenith angle
  • d = 1 - 0.01672*cos(0.01745*(0.9856*(Julian Day of Image - 4)))
  • ESUN = [1997.8,1863.5,1560.4,1395.0, 1124.4]

32-bit float reflectance values [0-1] are then rescaled to 16-bit unsigned integer [0-10000] with a linear multiplication factor of 10000. Formulas and parameters are derived from [1].


to be completed

Note: processed band names are saved in the metadata tag "Impact_bands"

Image Clip


The Image clip tool can be used with any raster layer to create a clipped layer from: 1) a selected GeoTiff image, and 2) a selected vector layer. Image clipping is a crucial step to reduce processing time and data volume. The user can clip any GeoTiff file from the input directory using predefined vector layer(s), each containing one or more features. The vector projection is converted on the fly to match the raster one. Note that clipping can only be done after Zip/DN to TOA-Reflectance.


Image Filters allow users to filter raster files by satellite type (e.g. Oli keeps only Landsat 8 OLI images).

Use individual features: this option creates as many output files as there are individual features in the vector. When "No" is selected, a single output file covering the bounding box of all features is produced.

A test vector file has 4 features with ID=”feat_1-4”. If “Use individual features” is flagged as ‘Yes‘, the ‘ID’ is used as prefix for the 4 output filenames, e.g. Clip_plot{1-4}_oli_226-068_03072014.tif. If the ‘ID’ field is not available, it is generated on the fly using a sequential number.



If you have collected data from a variety of sources, chances are that not all layers contain the same coordinate system information/projection. For example, the coordinate system of a shapefile created with the "Draw Rectangle" option can be different from the coordinate system of the raster data. Specifically, shapefiles created with the "Draw Rectangle" option are automatically saved in the LatLong geographic projection, while most Sentinel-2 / Landsat rasters are in other projections. The user must check the coherence between the projection of the shapefile and the rasters before clipping. The figure below shows that the difference in projections causes small 'edge effects'. Changing the shapefile projection using the fishnet option (see the General tool/Fishnet section) should give you a raster clipped to the same shape as your polygon.
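The core of the clipping step is converting a geographic bounding box into a raster pixel window; the actual module relies on GDAL for cutting and for the on-the-fly reprojection of the vector. A minimal sketch of the window computation, with a hypothetical helper name and an invented 30 m Landsat-like geotransform:

```python
def geo_window_to_pixels(gt, xmin, ymin, xmax, ymax):
    """Hypothetical helper: convert a geographic bounding box to a pixel
    window (x_off, y_off, x_size, y_size). `gt` is a GDAL-style geotransform
    (x_origin, pixel_width, 0, y_origin, 0, -pixel_height)."""
    x0, dx, _, y0, _, dy = gt              # dy is negative for north-up rasters
    col0 = int((xmin - x0) / dx)           # leftmost column
    col1 = int(round((xmax - x0) / dx))
    row0 = int((y0 - ymax) / -dy)          # topmost row (y decreases downwards)
    row1 = int(round((y0 - ymin) / -dy))
    return col0, row0, col1 - col0, row1 - row0

# a 600 m x 300 m box on a 30 m grid -> a 20 x 10 pixel window
win = geo_window_to_pixels((300000.0, 30.0, 0.0, 8300000.0, 0.0, -30.0),
                           300300.0, 8299400.0, 300900.0, 8299700.0)
```

This is why the shapefile and raster projections must agree: the box coordinates only make sense in the raster's own coordinate system.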


Image Classification


The classification tools allow the user to carry out: 1) automatic classification and 2) K-means classification. It is also possible to reclassify a classification according to a desired list of classes, producing a new raster classification with 3) the recoding tool.

Automatic Classification

The aim of this tool is to offer a fully automatic pixel-based classification product to be used in further processing steps like segmentation and land cover mapping. The Single Date Classification (SDC) algorithm, as described and implemented in [2], is based on pre-defined knowledge-based “fuzzy” rules aiming to convert the TOA reflectance input bands into discrete thematic classes (Table 1). In brief, the classification chain is based on 2 steps: 1) NDVI partition into 3 broad categories as follows: [-1,0] = water; ]0,0.45] = soil; ]0.45,1] = vegetation; 2) ad-hoc band conditions (e.g. NIR > RED > 0.5) to split each category into sub-classes and, eventually, promote pixels to other categories, as may happen e.g. for turbid water with NDVI values > 0 (falling into the soil range).
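Step 1 of the chain (the NDVI partition) can be sketched as follows; the class codes used here (1 = water, 2 = soil, 3 = vegetation) are illustrative and do not reproduce the official SDC legend of Table 1:

```python
import numpy as np

def sdc_broad_classes(red, nir):
    """Sketch of SDC step 1: partition NDVI into the three broad categories
    described above. Class codes are illustrative only."""
    ndvi = (nir - red) / (nir + red + 1e-9)            # avoid division by zero
    cls = np.full(ndvi.shape, 2, dtype=np.uint8)       # soil: ]0, 0.45]
    cls[ndvi <= 0] = 1                                  # water: [-1, 0]
    cls[ndvi > 0.45] = 3                                # vegetation: ]0.45, 1]
    return cls

# three pixels: water-like, soil-like, vegetation-like reflectances
cls = sdc_broad_classes(np.array([0.2, 0.1, 0.05]),
                        np.array([0.1, 0.15, 0.4]))
```

Step 2 then refines these broad categories with per-band conditions such as NIR > RED > 0.5.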

Classification rules and satellite bands involved

The current implementation performs best when using B,G,R,NIR,SWIR1-2 bands (Landsat TM/ETM+/OLI, Sentinel-2 and Landsat-like imagery); however, sensors like RapidEye, DMC, ALOS/AVNIR2, SPOT4/5 and Kompsat are fully supported, although yielding reduced accuracy in water/dark-soil discrimination due to the missing SWIR bands, as indicated in Figure 8 and Figure 9.

Figure 8
Figure 9
Similar SDC algorithm robustness and scalability among the aforementioned sensors have been confirmed by [3] and [4]; however, SDC's accuracy is not easily quantifiable, since the algorithm delivers broad thematic categories derived from spectral properties observed at a precise time in the vegetation cycle; it is therefore possible to classify leaf-off deciduous forest as grass or soil. [5] better explains how to combine and analyze SDC time series in order to produce more accurate land cover maps. It is worth noting that SDC is capable of retrieving the sun azimuth from the corresponding metadata in order to apply post-classification 3D models and morphological filters (opening and closing) for better cloud/shadow masking and “salt and pepper” reduction.
Class Id and description

As in Table 1, cloudy pixels (ID 1 and 2) and potential shadow pixels (ID = 10,35,40,41,42) are initially treated using a morphological ‘closing’ filter of 500 m; afterwards, cloudy pixels are projected along the sun azimuth and possible overlaps are automatically recoded as Shadow/Low Illumination (ID=42). Please note that with off-nadir acquisition sensors like RapidEye, the relative position of clouds and their shadows does not match the provided sun azimuth angle. The apparent cloud shift distance, in relation to its true position, depends on the off-nadir angle and on the cloud height. Whereas the satellite off-nadir angle is well known, the height of imaged clouds is unknown [6]. For this latter reason, the cloud and shadow masking provided by SDC might not be optimal for RapidEye imagery. Ideally the user could replace the ‘real’ sun azimuth with the apparent one within the .xml metadata.

A pop-up interface eases data selection (single or multiple files) and settings such as filters, overwrite or evergreen forest normalization. This latter option performs the so-called “dark object subtraction”, an image normalization towards predefined median forest values that improves classification accuracy. See the dedicated chapter for more details.

The SDC output legend is shown in Table 1. SDC does not aim to offer a detailed and reliable land cover map, since it relies on spectral properties observed at a precise time in the vegetation cycle; however, for leaf-on acquisitions a good match between the proposed class description and the actual land cover type is more likely.

K-means Classification

K-means is one of the simplest unsupervised learning algorithms that solve the clustering problem. The procedure follows a simple and easy way to classify a given data set through a certain number of clusters fixed a priori [7]. The main idea is to define k centroids, one for each cluster. These centroids should be placed carefully, because different locations produce different results; the better choice is to place them as far away from each other as possible. The next step is to take each point of the data set and associate it with the nearest centroid. When no point is pending, the first step is completed and an early grouping is done. At this point we re-calculate k new centroids as barycenters of the clusters resulting from the previous step. After we have these k new centroids, a new binding is done between the same data set points and the nearest new centroid. A loop has been generated: as a result of this loop the k centroids change their location step by step until no more changes occur, in other words until the centroids no longer move.

Specifically, the algorithm is composed of the following steps:

1) Place K points into the space represented by the objects that are being clustered. These points represent initial group centroids.
2) Assign each object to the group that has the closest centroid.
3) When all objects have been assigned, recalculate the positions of the K centroids.
4) Repeat Steps 2 and 3 until the centroids no longer move or for a given number of iterations. This produces a separation of the objects into groups from which the metric to be minimized can be calculated.
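The four steps above can be sketched in a few lines of NumPy; this is a minimal illustration of the algorithm, not the implementation used by the module:

```python
import numpy as np

def kmeans(points, k, iterations=10, seed=0):
    """Minimal k-means following steps 1-4 above (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # 1) place K initial centroids among the objects
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iterations):
        # 2) assign each object to the closest centroid
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # 3) recalculate the centroid positions (keep empty clusters in place)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        # 4) stop early when the centroids no longer move
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# two well-separated groups of 2D points
labels, centroids = kmeans(
    np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.1, 10.0]]), k=2)
```

In the module, `points` would be the pixel spectra and `k` and `iterations` the user-selected number of clusters and iterations.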


The user has the possibility to select:

1) the number of clusters,

2) the number of iterations (i.e. step 4 of the kmeans procedure),

3) the suffix of the output file.


The Recoding tool changes the pixel values of an image; it works with float or integer values. Recoding directly modifies the image, but it is possible to save the recoding to another file in order not to alter the original image.

Right click on the raster and select "start editing". Then the Editing toolbox will pop up. Firstly, the user is asked to select the band to recode. There are two ways to define how the values will be reclassified in the output raster: Recode by intervals and Recode by values. Either ranges of input values can be assigned to a new output value, or individual values can be assigned to a new output value.

Using recoding by intervals, the user is asked to type the reclassification values: "from", "to", and "recode to". "from" and  "to" define the reclassification range for the new value ("recode to").

Using recoding by values, the user is asked to type the exact reclassification values: "value" and "recode to". The button "use unique values" lets the user visualize the unique values of the raster.
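The two recoding modes can be sketched as below; the function names are hypothetical, and the assumption that intervals are applied as "from" inclusive, "to" exclusive is mine, not stated in the interface:

```python
import numpy as np

def recode_by_intervals(band, rules):
    """Recode raster values by intervals: each rule is (from, to, recode_to),
    applied here as from <= value < to (assumed closure)."""
    out = band.copy()
    for lo, hi, new in rules:
        out[(band >= lo) & (band < hi)] = new
    return out

def recode_by_values(band, mapping):
    """Recode exact values via a {value: recode_to} lookup."""
    out = band.copy()
    for old, new in mapping.items():
        out[band == old] = new
    return out

band = np.array([1, 2, 3, 4])
by_interval = recode_by_intervals(band, [(1, 3, 9)])  # 1 and 2 become 9
by_value = recode_by_values(band, {4: 0})             # 4 becomes 0
```

Note that both functions work on a copy, mirroring the option of saving the recoding to another file.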


Analysis & Enhancement


The tool provides options to apply processes, such as Linear Spectral Unmixing, Evergreen Forest Normalization, Pansharpening, Principal Component Analysis and Index Builder.

Linear Spectral Unmixing

The Linear Spectral Unmixing (LSU) is a tool to decompose a reflectance source spectrum into a set of given endmember spectra. The result of the unmixing is a measure of the membership of each endmember in the source spectrum, called the endmember's abundance. The proposed model, inspired by [8],[9], makes use of predefined endmembers for estimating soil, vegetation and water fraction images. Prior to the LSU, it is possible to perform the “Evergreen Forest Normalization” to minimize spectral differences across images acquired at different times and places. The adopted endmembers have the following values, expressed in TOA reflectance [0-1] for bands [B,G,R,NIR,SWIR1,SWIR2]:

Soil = [0.14, 0.16, 0.22, 0.39, 0.45, 0.27]

Vegetation = [0.086,0.062,0.043,0.247,0.109,0.039]

Water = [0.07, 0.039, 0.023, 0.031, 0.011, 0.007]

When processing RapidEye data, only the B,G,R,NIR bands are used; further adjustments could be made to better fit the sensor properties, including use of the Red-Edge band. The following LSU formula has been implemented in Python:

Unmix = I × (E⁺)ᵀ


Unmix = array of the 3 endmember fractions per pixel, computed using the unconstrained LSU

I = input image (6 bands) reshaped into a 2D array by a) concatenating the pixels of each band into a 1D vector and b) concatenating all band vectors

E = 3x6 array of endmembers [Soil, Vegetation, Water]

(E⁺)ᵀ = transpose of the pseudo-inverse of E

The user is asked to select the input image and choose whether or not to use the “EVG (Evergreen) Forest Normalization”.
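The unmixing formula can be reproduced with NumPy's pseudo-inverse, using the endmember values listed above; this is a sketch of the computation, not the module's own code (here pixels are stored as rows, so the transpose in Unmix = I × (E⁺)ᵀ is absorbed into the row-vector convention):

```python
import numpy as np

# endmember TOA reflectances [B, G, R, NIR, SWIR1, SWIR2] from the text
E = np.array([
    [0.14, 0.16, 0.22, 0.39, 0.45, 0.27],        # soil
    [0.086, 0.062, 0.043, 0.247, 0.109, 0.039],  # vegetation
    [0.07, 0.039, 0.023, 0.031, 0.011, 0.007],   # water
])

def unmix(pixels):
    """Unconstrained LSU: least-squares endmember fractions for an (n, 6)
    array of TOA reflectance pixels; returns an (n, 3) fraction array."""
    return pixels @ np.linalg.pinv(E)   # (n, 6) @ (6, 3) -> (n, 3)

# a pure vegetation pixel should unmix to fractions close to [0, 1, 0]
fractions = unmix(E[1:2])
```

Being unconstrained, the fractions are not forced to be non-negative or to sum to one.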


Evergreen Forest Normalization

The normalization of multi-temporal data can set the radiometric measurements to a common relative scale and, consequently, ensure the spectral homogeneity of such data. Relative normalization adjusts the spectral values of all images to the values of one reference image. Commonly the reference image is selected as the most recent image or the least affected by atmospheric effects. This approach relies on the ability to identify stable targets between dates, named Pseudo-Invariant Features (PIF), and assumes that reflectance differences in these stable targets are due to atmospheric perturbations. A simple linear relationship among images across time is generally used to normalize images to the same reference level. 

Using this specific approach, dense evergreen forest pixels are considered as PIFs. Evergreen forest normalisation searches each band for the median value of dense evergreen forest pixels. The image is then corrected by (1) subtracting this value from and (2) adding a reference dense evergreen value to every pixel in the band.

Each reflective band (λ) is normalized, i.e. rescaled to the same reference forest value, as follows:

ρnorm(λ) = ρ(λ) − Fmedian(λ) + Fref(λ)

Fmedian(λ) = median value of dense evergreen forest of the sample site for band λ,
Fref(λ) = reference dense evergreen forest value for band λ, computed from representative areas selected visually in 100 images across all continents (22, 16, 11, 63, 28, 10 for bands B,G,R,NIR,SWIR1,SWIR2).
The median forest value parameter is extracted from a forest mask (/Tools/Evergreen_normalization/Global_EVG_map_ll.tif). The mask was created using the Global Forest Change product [10] (tree cover > 65% in 2013) intersected with Globcover 2009 product [11] classes 40, 65, 70, 160.
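The per-band shift can be sketched as follows, using the reference values quoted above; the helper name is hypothetical and the example band values are invented:

```python
import numpy as np

# reference dense evergreen forest values per band [B,G,R,NIR,SWIR1,SWIR2]
F_REF = np.array([22.0, 16.0, 11.0, 63.0, 28.0, 10.0])

def normalize_band(band, forest_mask, band_idx):
    """Sketch of the evergreen normalization step: shift the band so that
    its median dense-forest value matches the reference one."""
    f_median = np.median(band[forest_mask])      # Fmedian(lambda)
    return band - f_median + F_REF[band_idx]     # subtract, then add Fref

# toy NIR band (index 3) whose forest median is 45 -> shifted up by 18
nir = np.array([[30.0, 40.0], [50.0, 60.0]])
out = normalize_band(nir, np.ones_like(nir, dtype=bool), band_idx=3)
```

After the shift, the median of the forest pixels equals the reference value for that band.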


Evergreen forest mask as derived from the Global Forest Change Product and Globcover 2009 intersection


Pansharpening is the process of using the spatial information of the high-resolution grayscale band (panchromatic, or pan band) and the color information of the multispectral bands to create a single high-resolution color image. The program is invoked as follows:

All low-resolution bands are scaled up to match the resolution of the panchromatic band, using the selected resampling method (e.g. cubic interpolation). The 'Exponential stretch' option applies a non-linear scaling with a power function; 1.5 is the exponent of the power function (which must be positive). This power function stretches pixel values between mean - 1.5*sd and mean + 1.5*sd, where sd is the standard deviation.

Principal Component Analysis

Principal components analysis (PCA) is a technique applied to multispectral remotely sensed data. Adjacent bands in a multispectral remotely sensed image are often highly correlated. If DN values of adjacent bands are plotted against each other, a high correlation may exist, meaning thereby that the two datasets are not statistically independent.  

Principal Components Analysis (PCA) is related to another statistical technique called factor analysis and can be used to transform a set of image bands such that the new bands (called principal components) are uncorrelated with one another and are ordered in terms of the amount of image variation they explain. Thus the first principal component (PC1) contains the highest variance in a scene, followed by PC2, PC3 and so on. The components are thus a statistical abstraction of the variability inherent in the original band set.

For an n dimensional dataset, n principal components can be produced. In addition to PC images, the PCA also produces eigenvalues. Eigenvalues contain information about percent of total variance explained by each PC.

An important advantage of PCA is that most of the information dispersed throughout the n bands may be compressed into a few bands with virtually no loss of information. The first three principal components typically contain over 90% of the variance in the data and hence of the information in the scene. Using the principal components we may prepare a new raster in which the correlation between the bands (now the PCs) is zero. A false color composite produced by using the first principal component (PC1) as red, the second (PC2) as green and the third (PC3) as blue will thus contain almost all the information in the scene. It must be remembered, however, that although the higher PC images (PC6 for example) contain little variance, they must not be discarded without thorough examination, because they may well contain information not present in the lower principal components.
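The band transformation can be sketched via the eigen-decomposition of the band covariance matrix; this is a generic illustration of PCA on an image stack, not the toolbox's own implementation:

```python
import numpy as np

def pca_bands(stack):
    """PCA of an (n_bands, h, w) image stack via eigen-decomposition of the
    band covariance matrix (illustrative sketch)."""
    n, h, w = stack.shape
    X = stack.reshape(n, -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)          # center each band
    cov = X @ X.T / (X.shape[1] - 1)            # n x n band covariance
    eigval, eigvec = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(eigval)[::-1]            # PC1 = largest variance
    pcs = (eigvec[:, order].T @ X).reshape(n, h, w)
    return pcs, eigval[order]

# two perfectly correlated synthetic bands: all variance ends up in PC1
b = np.arange(9.0).reshape(3, 3)
pcs, eigval = pca_bands(np.stack([b, 2.0 * b + 1.0]))
```

The eigenvalues returned alongside the components give the variance explained by each PC, as described above.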


The user is just asked to select the Geotiff file and the output suffix.

The figure below shows the principal component analysis of a Sentinel-2 image displayed in RGB (PC1, PC2, PC3).


ND(V,W,S)I Threshold (Index Builder)

The purpose of the index builder is to cluster images based on normalized index. In remote sensing, normalized index is an enhancement technique in which a raster pixel from one spectral band is divided by the corresponding value in another band. The choice of bands used is what makes them appropriate for a broad range of applications, from minerals to soil to vegetation. 

The user is asked to choose:

1) the raster image,

2) the two bands (e.g. band 4 and band 3 to compute NDVI using Landsat TM)

3) the number of clusters,

4) the suffix of the output image.


Although the formula

Index = (Band 1 - Band 2 ) / (Band 1 + Band 2 ) 

remains the same, the generated index can assume a variety of meanings by selecting different band combinations, as reported in the table below.

Non-exhaustive list of possible indices to be used for image thresholding and clustering.

The partition (clustering) of the index is done by dividing its histogram into a number of equal-width bins in the given range (number of clusters defined by the user); the calculated bins are then used to determine the cluster ID and the associated color ramp [blue-red-green].
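The index computation and equal-width binning can be sketched as below; the function name is hypothetical, and an NDVI-like band pair is used as the example:

```python
import numpy as np

def index_clusters(band1, band2, n_clusters):
    """Normalized index partitioned into equal-width histogram bins, as
    described above; cluster IDs run from 1 to n_clusters."""
    index = (band1 - band2) / (band1 + band2 + 1e-9)   # avoid division by zero
    edges = np.linspace(index.min(), index.max(), n_clusters + 1)
    # only the inner edges are used, so the extremes fall into bins 1 and n
    return np.digitize(index, edges[1:-1]) + 1

# NDVI-like example (band1 = NIR, band2 = RED), split into 2 clusters
nir = np.array([0.8, 0.5, 0.1])
red = np.array([0.1, 0.5, 0.8])
ids = index_clusters(nir, red, n_clusters=2)
```

With two clusters, vegetated pixels (high index) end up in the upper bin and bare/water pixels (low index) in the lower one.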


Image segmentation is the process of delineating objects in an image, such as crop fields, woods, roads, houses, etc. Segmentation algorithms usually exploit spectral signatures to delineate the objects within the image.

Impact uses INPE's TerraLib [12] libraries, which offer two algorithms: Region Growing, which considers only the spectral signatures of the objects, and Baatz [13], which also takes into account the objects' compactness. The algorithms try to delineate objects that are sufficiently homogeneous in terms of color (Region Growing and Baatz) and compactness (Baatz). The image is then cut into a series of contiguous polygons called "segments". A second step allows small segments that are sufficiently similar to be merged.

The image segmentation tool is launched by clicking on the following icon:


It launches the segmentation interface for selecting the image to segment and allows control of the different parameters of the algorithms.


Segmentation Options:

- Multi-date segmentation:

  • “No”: selected images are treated individually and the corresponding vector file (ESRI Shapefile format) is saved within the image directory; please ensure that the segmentation parameters are applicable to all selected images, band selection above all others.
  • “Yes”: the selected bands of the selected images are layer-stacked into a single image using the given weights. Please ensure the images overlap geographically. The first file is used to extract the output location and reference projection.

- Use classification to pre-label objects: “_class.tif” and “_cluster.tif” files can be used to pre-label objects according to occurrence rules (currently the ‘mode’) and a lookup table (described hereafter), either in single or multi-date mode. In the latter case, the .dbf file reflects the top-down order in which files are organized in the tree panel, saving (after the ID field) as many fields as there are input images, as in Table 3. It is possible to drag and drop a layer to the right position / chronological order if needed.

- Overwrite: set to “Yes” to automatically delete output files (if any)
- Optimization: if “Yes”, input images are processed using a tiling approach that reduces the total amount of memory used; however, the final segmentation might reflect the tiling patterns


Proposed lookup table (SDC2Class) as implemented in Tools/JRC_SD_libs/

Lookup table showing the recoding strategy adopted

Segmentation Parameters:

- Bands and weights: raster bands and associated weight to be used
- Scale factor: this factor controls the spectral heterogeneity of the image objects and is therefore correlated with their average size; the smaller it is, the more objects you will get
- Color: Baatz spectral component [0.0, 1.0]
- Compactness: Baatz morphological component [0.0, 1.0]
- Euclidean distance: used only if the memory “optimization” flag is enabled; represents the minimum Euclidean Distance (expressed in DN values) to be used while merging segments crossing two adjacent tiles; higher values will allow aggregation of heterogeneous objects; lower values will keep the straight edges of the tiles.
- Suffix: user-defined string (alphanumeric only) to be added to the output filename


- Please ensure that the selected band numbers are available within the raster(s); otherwise a generic “Baatz Failure” error message is raised
- The multi-date segmentation creates an ancillary file within the directory of the first selected image (master), containing the names of the processed images and the order of the _class.tif within the .DBF file
- Objects pre-labeling requires a classified raster _class.tif (e.g. the result of the automatic classification) from which to extract statistical information (e.g. the “mode”), together with a lookup table to convert it into the adopted land cover/use legend. Clustered images _cluster.tif or user-defined classified rasters/maps are accepted but are only used to fill the “T(n)_cluster” attribute, since the conversion from cluster ID to land cover/use cannot be defined a priori. Currently it is possible to overcome this limitation by changing the Python classes MYmode2D and SDC2Class within the file
- Big TIFF files are not supported
Segmentation results can be visualized using the Map Validation Panel

Load Classification Layers






Impact provides two degradation methods: Degradation NBR and the Forest change assessment at Minimum Mapping Unit.

Degradation NBR

The Degradation tool allows the user to monitor forest degradation and to detect short-lived signals of crown cover disturbance.


Changes in canopy cover (signs of forest disturbance and degradation) are monitored by applying the Delta Normalized Burned Ratio (∆NBR) approach.

Basically, the Normalized Burned Ratio (NBR) index is computed for two different periods, and pixels that display a difference greater than a specific threshold are labeled as disturbed forest.

The Normalized Burned Ratio (NBR) is defined as:

NBR = (NIR − SWIR) / (NIR + SWIR)

NIR is the Near InfraRed band (Band 4 for Landsat 7, Band 5 for Landsat 8, Band 8 for Sentinel 2)

SWIR is the Short Wave InfraRed band (Band 7 for Landsat 7/8, Band 12 for Sentinel 2)

However, atmospheric influences as well as other effects (e.g. sun incidence angle) can result in artifacts and outliers. To circumvent this issue, the "self-reference" version of the NBR is computed:

NBRself-referenced = NBR - NBRn_median

Where NBRn_median is a median filter applied over a moving circular kernel. The choice of the kernel size (radius of n pixels) depends on the spatial resolution of the satellite sensor: 7 pixels for Landsat and 21 pixels for Sentinel-2.

The difference of Self-referenced NBR between two periods allows the assessment of forest disturbance:

ΔNBR = NBRself‐referenced_time1 – NBRself‐referenced_time2

Thresholds are defined on the self-referenced ∆NBR to define undisturbed and disturbed forests. Specifically, the thresholds are:

0 ≥ ΔNBR > -0.05 undisturbed forest
-0.05 ≥ ΔNBR > -0.1 medium disturbed
ΔNBR ≤ -0.1 strongly disturbed
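The NBR formula and the thresholding step can be sketched as follows; the self-referencing with the moving-window median filter is omitted here for brevity, and the function names are illustrative:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burned Ratio."""
    return (nir - swir) / (nir + swir + 1e-9)   # avoid division by zero

def classify_dnbr(dnbr):
    """Apply the disturbance thresholds above:
    0 = undisturbed, 1 = medium disturbed, 2 = strongly disturbed."""
    cls = np.zeros(dnbr.shape, dtype=np.uint8)
    cls[(dnbr <= -0.05) & (dnbr > -0.1)] = 1
    cls[dnbr <= -0.1] = 2
    return cls

# synthetic self-referenced delta-NBR values spanning the three classes
dnbr = np.array([-0.01, -0.07, -0.2])
cls = classify_dnbr(dnbr)
```

In the full chain, `dnbr` would be the difference of the two self-referenced NBR images rather than raw values.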

Forest change assessment at Minimum Mapping Unit

The Forest change assessment at Minimum Mapping Unit module estimates the conversion of forest areas (deforestation and forest degradation) and the associated carbon emissions.

The module intends to translate the FAO and UNFCCC generic definition of forest (which requires defining a minimum tree height, ground area and crown cover) by allowing the user to control the following parameters:

  • The size on the ground of the sampling grid (called Minimum Mapping Unit, MMU);
  • The minimum percentage of tree cover within an MMU required to classify it as a forest.

For most applications, we strongly suggest leaving the minimum percentage of trees at 30% (the software's default setting).

Following our definition, when tree cover is lost inside an MMU, the term degradation applies if the remaining percentage of tree cover is above the minimum percentage required, and the term deforestation applies if the tree cover is brought below this percentage.

For now, the persistence of the tree cover loss is not taken into account.

The information about tree cover loss comes from a pixel-based disturbance map, such as the Global Forest Change maps [10], or a country-specific map illustrating the changes in tree cover over several years.

So far, the module uses a single number to convert tree loss into carbon emissions (the carbon emission factor, in tons per hectare). It is therefore recommended that users process homogeneous areas of forest and stratify a country into separate processing runs.

The program is invoked in this way:


1) Input Image: the input image must be a raster with 5 classes illustrating the trajectory of a pixel in time (2 periods of time). To visualize the colors in Impact Tool, the raster needs a colormap; a colormap is suggested in the table below:


2) Output Name: Choose the prefix for the output product names.

3) Report Language: Choose the language for the report: English or French.

4) MMU width (px): Choose the minimum mapping unit width, in pixels. Choose it according to the spatial resolution of the input image and the minimum ground area of the forest definition. An MMU width of at least 3 pixels is recommended for a Landsat-derived map (30 m resolution).

5) Periods of change: Indicate the start and end years of the two periods of forest cover change (the historical and the recent period) to be compared. Years must be entered in YYYY format. The module automatically computes the length of each period. The periods cannot overlap.

Note that the historical and recent periods must have been decided beforehand, when the raster input image was produced: the module can only process a pre-classified image.

6) Carbon emission factor: Choose a Carbon emission factor corresponding to the forest type present in the study area. The unit is tons per hectare.

7) Minimum Forest per MMU: Choose a value (as a percentage of tree cover within an MMU) for the Minimum Forest per MMU. It corresponds to the crown cover threshold of the forest definition. It is set by default to 30%, and it is highly recommended not to go below 30% (3 pixels) for a 3x3-pixel MMU. Think of this variable in terms of the number of pixels in the MMU that the percentage represents.

Note on good practice for these variables: the module was initially designed to process forest cover change maps derived from Landsat images, with an MMU width of three pixels (boxes of 9 pixels) and a Minimum Forest per MMU of 30%.
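The relation between these parameters can be worked through numerically (a pure-Python sketch; the helper names are hypothetical, used only for illustration):

```python
import math

def mmu_ground_area_ha(mmu_width_px, pixel_size_m):
    """Ground area covered by one MMU, in hectares."""
    side_m = mmu_width_px * pixel_size_m
    return side_m * side_m / 10_000

def min_forest_pixels(mmu_width_px, min_forest_pct):
    """Smallest whole number of forest pixels satisfying the percentage."""
    return math.ceil(mmu_width_px ** 2 * min_forest_pct / 100)

# Landsat defaults: 3x3 MMU at 30 m resolution, 30% minimum forest
print(mmu_ground_area_ha(3, 30))  # 0.81 ha
print(min_forest_pixels(3, 30))   # 3 pixels (30% of 9 = 2.7, rounded up)
```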


The figure below illustrates the general flowchart of the module in Impact Tool.


For each MMU, the pixels of the input image are counted in this way:

FF = count(class == 1)

NF = count(class == 2)

FNF1 = count(class == 3)

FNF2 = count(class == 4)

ND = count(class == 0)

F0 = FF + FNF1 + FNF2
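The counts above can be reproduced in a few lines of Python (a stand-alone sketch over a single 3x3 MMU; the class codes follow the count expressions above: 0 = ND, 1 = FF, 2 = NF, 3 = FNF1, 4 = FNF2):

```python
from collections import Counter

# One 3x3 MMU from the input raster (class codes as above)
mmu = [1, 1, 3,
       1, 4, 3,
       2, 1, 0]

counts = Counter(mmu)
FF, NF = counts[1], counts[2]
FNF1, FNF2, ND = counts[3], counts[4], counts[0]
F0 = FF + FNF1 + FNF2  # pixels that were forest at the start of the period

print(FF, NF, FNF1, FNF2, ND, F0)  # 4 1 2 1 1 7
```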

Decision rules (Figure below) are applied to each MMU to determine its forest conversion class. A new raster is created with a spatial resolution corresponding to the MMU width; its classes follow the decision rules explained in the figure below.


The classes of this new raster are detailed in the Table below:


Some examples of conversion classes attributed to a 3x3 MMU are illustrated in the figure below:


For each class representing a forest conversion (deforestation classes 21, 22, 23 and 24; forest degradation classes 31, 32 and 33), the pixels representing a forest loss (FNF1 or FNF2 trajectory) inside the MMU are counted and attributed to the period they represent (FNF1 to the historical period, FNF2 to the recent one).

The carbon emissions are calculated by multiplying this surface by the emission factor.
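This calculation can be sketched as follows (an illustrative function, not toolbox code; the pixel size and emission factor values are example inputs):

```python
def carbon_emissions_t(lost_pixels, pixel_size_m, emission_factor_t_per_ha):
    """Tons of carbon emitted for a given count of forest-loss pixels."""
    area_ha = lost_pixels * (pixel_size_m ** 2) / 10_000  # m^2 -> hectares
    return area_ha * emission_factor_t_per_ha

# e.g. 250 Landsat pixels (30 m) lost, with an emission factor of 150 t/ha
print(carbon_emissions_t(250, 30, 150))  # 3375.0 t
```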


Three output images are generated, together with a report estimating the forest area changes and the resulting carbon emissions.

Output images
1.  Classification of the MMU. “output name” _class.tif

A single-band raster containing the MMU classes, with a colormap corresponding to the legend in the Table.

2. Proportion of change in Forest MMU. “output name” _change.tif

A five-band raster:

Band 1  OUT_ND

Band 2 OUT_FF

Band 3  OUT_NFNF

Band 4  OUT_FNF1

Band 5  OUT_FNF2

 3. Biomass estimation. “output name” _biomass.tif

A three-band raster:

Band 1  OUT_FNF1 * biomass_value

Band 2  OUT_FNF2 * biomass_value

Band 3  (OUT_FNF1+OUT_FNF2)*biomass_value

Report on the forest area change and resulting emission
The report is created as an HTML file. To display it once processing is done: click on “Logs Monitor”, open the log file of your processing (click on “show info”), then click on the blue file link (“view report result”); the report is displayed in a new tab.
You can either copy the whole report and paste it into a text processor (such as MS-Word), or get a copy of the file from the Impact data directory, under DATA/USER_data. If you entered “example” as the output name and chose French, you should see a file named example_FR.html, which you can copy and edit.

General Tools

Raster conversion

Users often need to convert raster images in order to change the projection, rescale values, or select a given subset of bands.
The Raster conversion tool allows you to:
1) Reproject rasters
2) Set NoData
3) Set pixel size
4) Set data type (e.g. Byte)
5) Rescale values
6) Select the bands of interest
7) Modify the output format (e.g. GeoTIFF/JPEG/PNG)
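Options 4 and 5 usually go together: rescaling a larger source range into the 0–255 Byte range. The sketch below illustrates the arithmetic for a single value (the 0–10000 source range is a hypothetical example, not a toolbox default):

```python
def rescale_to_byte(value, src_min=0, src_max=10_000):
    """Linearly rescale a source value into the Byte range 0-255, clamped."""
    scaled = (value - src_min) * 255 / (src_max - src_min)
    return max(0, min(255, round(scaled)))

print(rescale_to_byte(0))       # 0
print(rescale_to_byte(5_000))   # 128
print(rescale_to_byte(12_000))  # 255 (clamped to the Byte maximum)
```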



A common task is to stitch the available satellite scenes together into a much larger image. It thus becomes necessary to create a mosaic from many tiles, with a single large file as output.


The first processing option, "Place each file into separate bands", controls band handling. When "No" is selected, data duplication is avoided by combining the bands per scene: the output file contains X bands, where X is the number of bands per image. Conversely, when "Yes" is selected, the output file contains N*X bands, where N is the number of scenes and X the number of bands.

The resampling algorithm defaults to nearest neighbour ("near").


Create Fishnet creates a feature class containing a net of rectangular cells. Creating a fishnet requires two basic pieces of information: the spatial extent of the fishnet and the number of rows and columns. 

The first parameter of the tool is 'Use template for extent'. If selected, the spatial extent of the fishnet will be the spatial extent of the selected raster/shapefile. Otherwise, the user is asked to manually supply the minimum and maximum x- and y-coordinates.

The second set of parameters is the number of rows and columns, or the height and width of each cell in the fishnet.

It is also possible to select a random percentage of the total samples by checking the random parameter.

Note that if the user selects a raster template for extent and then sets one row and one column, the tool returns a shapefile covering the extent of the selected raster.
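The geometry of the fishnet is simple to reproduce: given the extent and the number of rows and columns, each cell is a rectangle. A minimal sketch (illustrative only; the real tool writes a shapefile rather than coordinate tuples):

```python
def fishnet(xmin, ymin, xmax, ymax, rows, cols):
    """Yield (xmin, ymin, xmax, ymax) for each rectangular fishnet cell."""
    dx = (xmax - xmin) / cols
    dy = (ymax - ymin) / rows
    for r in range(rows):
        for c in range(cols):
            yield (xmin + c * dx, ymin + r * dy,
                   xmin + (c + 1) * dx, ymin + (r + 1) * dy)

cells = list(fishnet(0, 0, 100, 100, rows=2, cols=2))
print(len(cells))  # 4
print(cells[0])    # (0.0, 0.0, 50.0, 50.0)
```

With rows=1 and cols=1 the single cell covers the whole extent, matching the note above.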


This tool can also be used to divide a raster into smaller tiles. In this case, select 'Use template for extent' and specify the number of rows and columns to split the raster into. Once the fishnet shapefile has been created, use the "Image Clip" tool, selecting the raster, the fishnet shapefile as clipping boundary, and "use individual features".


The Statistics tool allows the user to analyze and understand a given dataset. Specifically, it calculates several statistics of the pixels of a raster layer within each feature of a vector layer. The user can calculate (among others) the sum, the mean and the total count of the pixels within each polygon of the selected shapefile. The tool generates output columns in the vector layer with a user-defined prefix. The statistics are defined as follows:

- Count: Number of pixels.

- Percent: The percentage of valid pixels.

- Area: The area of each feature of the shapefile.

- i_para: Perimeter-area ratio is the ratio of the patch perimeter (m) to area (m2).

- Mean: The mean (average) value: the sum of the values divided by the number of values.

- Median

- Mode

- Standard Deviation

- Min: The minimum value.

- Max: The maximum value.

The processing option allows users to set a value to be treated as NoData, in addition to the one declared in the metadata.

Note that users must check the coordinate system of the data. If the data are in decimal degrees, the areas are also given in degrees. To obtain the area of each polygon of the shapefile in square meters, the data must be in a projected (metric) coordinate system, so the user needs to reproject it first.
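For one polygon's pixel values, the per-feature statistics above amount to the following (a pure-Python illustration with a user-set NoData value; the function name and example values are hypothetical):

```python
import statistics

def zonal_stats(pixels, nodata=None):
    """Compute basic zonal statistics for one polygon's pixel values."""
    valid = [p for p in pixels if p != nodata]
    return {
        "count": len(valid),                       # number of valid pixels
        "percent": 100 * len(valid) / len(pixels), # share of valid pixels
        "sum": sum(valid),
        "mean": statistics.mean(valid),
        "median": statistics.median(valid),
        "min": min(valid),
        "max": max(valid),
    }

stats = zonal_stats([1, 2, 2, 4, 255, 255], nodata=255)
print(stats["count"], stats["mean"])  # 4 2.25
```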

How to download Sentinel 2 and Landsat data

Sentinel 2

We briefly review two different ways of getting the data: one is the official ESA channel, the Sentinels Hub; the other is the JRC repository at the Sentinel2 portal.

Downloading through the Sentinels Hub

We start by visiting the Sentinels Data Hub web site ( ). A login is necessary to access the data, so if you do not have an account you will have to create one first. Once this is done, searching is simple: just click on the search button, or the advanced search button, at the top left:


After we perform the search, the results are shown in this way:


Then, by clicking on the “eye” icon, we are shown the details of each scene. If a scene satisfies our needs, we can download it by pressing the “arrow” button at the bottom right. Beware that scenes are packages of about 6 GB in size.


Downloading through JRC Sentinel2 portal

We start by navigating to the JRC Sentinel 2 portal ( ).

An information box pops up immediately, asking the user to enter his/her email address in order to recover the session later on. Entering the email address is not mandatory, but without a valid address the session cannot be recovered. The email address is not saved but converted into a hash code.

When CID portal starts, you are presented with the GUI as shown below. The numbers 1 through 5 in blue circles refer to the five major areas of the interface as described below:

1) This area lists all the layers in the project.
2) Filters: allows users to look for specific information about content such as timespan, orbit and cloud cover
3) Cart: allows users to download data
4) Map view: maps are displayed in this area. The bar at the top allows users to draw areas of interest such as rectangles and polygons.
5) Log Monitor: Log Monitor provides complete monitoring of application logs, log files, event logs and service logs.


Then, by clicking on “Filters” in the menu on the left, we can choose the orbit, time span and admissible cloud cover:


By clicking on the desired cell, we are shown the details of the image. Clicking the Add Image to Cart button (the cart button to the left of the Sentinel2 thumbnails) adds the selected Sentinel2 scenes to the cart.


The Sentinel 2 scenes' cart allows users to download the desired images.


Two options are available:

1) Download the Sentinel 2 file "as it is" in a zip format.

2) Process the Sentinel 2 file and rescale image data to Top Of Atmosphere (TOA) reflectance. The processing also entails selecting the bands of interest and changing the projection and resolution. Additionally, by selecting a vector file as Area Of Interest (AOI) from those available in the Layers section, only intersecting images and/or parts of them are processed, reducing processing time and image size.
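For standard Sentinel-2 L1C products, the DN-to-TOA-reflectance rescaling is a division by the product's quantification value (10000 in the standard L1C format). A minimal sketch of the per-pixel arithmetic (illustrative only; this is not the toolbox's internal code):

```python
QUANTIFICATION_VALUE = 10_000  # standard Sentinel-2 L1C quantification value

def dn_to_toa_reflectance(dn):
    """Convert a Sentinel-2 L1C digital number to TOA reflectance."""
    return dn / QUANTIFICATION_VALUE

print(dn_to_toa_reflectance(1_500))  # 0.15
```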


Images downloaded as Zip files have to be processed with the Zip/DN to TOA-Reflectance tool (see the dedicated section), whereas images downloaded as "processed" are directly usable as rasters in IMPACT. In the next sections we review how to open and preprocess the downloaded files.


The USGS distributes Landsat imagery through the EarthExplorer data portal ( ).


1) Downloading data from EarthExplorer requires that you first sign in as a registered user.

2) After you sign in, use the “Search Criteria” tab in the upper left to specify an area of interest (see Figure). You can type a place name in the search field or click on the map to place a pin that you can drag to other places.


3) Click the “Data Sets” tab next. From the long list that appears, select “Landsat Archive” and check the box for the first item: “L8 OLI/TIRS.”


4) Click the “Results” tab to peruse the available images. Then click on the image thumbnails to see larger image previews in false color, which you can evaluate for cloud cover and geographic coverage. Scrolling down the image previews reveals additional metadata.


5) The EarthExplorer portal includes other useful tools. For example, you can filter an image search by acquisition date and percent cloud cover. Landsat image extents are also viewable as transparent map overlays.

6) To download a scene from the search results list, click the “Download Options” icon. Select the last option “Level 1 GeoTIFF Data Product” to download all Landsat data bands. These are very big files (about 1 GB compressed, 2 GB uncompressed).

The data is provided in a compressed (zipped) format. You can use Zip/DN to TOA-Reflectance Tool to unpack the file and create a set of GeoTiff files. The section Zip/DN to TOA-Reflectance Tool goes into more detail on this. 

Troubleshooting and FAQ

Question: What does IMPACT do?

Answer: TBD

Question: My old version of Impact (before 3.2) is freezing at launch time.

Answer: At launch time, Impact computes statistics for any new layer (rasters and vectors). Depending on the number of files and their size, this process may consume a significant amount of computer resources; Windows may then report the application as not responding and propose to kill the script. In version 3.2 we introduced multi-threading, which computes the new layer statistics without blocking the user interface. This new approach also lets the software update mechanism run in the background (which could be blocked in some cases before version 3.2).

If you are still on an old version and experience freezing, you need to close Impact, remove your data and restart Impact so that its resources are not consumed computing data statistics and can instead be used to update the software. Here are the operations in detail:

  • First kill Impact and any spawned scripts: open an MS-DOS prompt (type "cmd" in the Windows search tool) and type:
    •  taskkill.exe /F /IM python.exe /T
  • Then move your data folder out of the Impact folder (impact/DATA); the point is to have an empty Impact data folder when launching it again. This way there will be no layers to process, and Impact should have enough resources to run the software update;
  • Launch Impact again and accept the update (you must be connected to the Internet for that);
  • Once the software has been updated, move your data back into the Impact data folder.


Copyright (c) 2015, European Union. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. The IMPACT Toolbox is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.


The IMPACT toolbox is distributed under the GNU General Public License (GPLv3).

Please refer to to access the official versions of the license together with a preamble explaining the purpose of this Free/Open Source Software License. The toolbox includes a number of subcomponents with separate copyright notices and license terms. Your use of the source code for these subcomponents is subject to the terms and conditions stated in the corresponding License.txt file available at:

\Libs\Apache2\LICENSE.txt
\Libs\Python27\LICENSE.txt
Numpy \Libs\Python27\Lib\site-packages\numpy\LICENSE.txt
Scipy \Libs\Python27\Lib\site-packages\scipy\LICENSE.txt
Sklearn \Libs\Python27\Lib\site-packages\sklearn\
Gdal/Ogr + Libs
Mapserver + Libs \Libs\mapserver\Licenses\
Firefox Portable


  1. RapidEye Satellite Imagery Product Specifications, "," [Online].
  2. Simonetti, D.; Simonetti, E. et al., "First results from the phenology-based synthesis classifier using Landsat 8 imagery," IEEE Geoscience and Remote Sensing Letters, 2015.
  3. Szantoi, Z.; Simonetti, D., "Fast and robust topographic correction method for medium resolution satellite imagery using a stratified approach," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 6, pp. 1921–1933, 2013.
  4. Baraldi, A. et al., "Automatic spectral-rule-based preliminary classification of radiometrically calibrated SPOT-4/-5/IRS, AVHRR/MSG, AATSR, IKONOS/QuickBird/OrbView/GeoEye and DMC/SPOT-1/-2 imagery; Part I: System design and implementation," IEEE Trans. Geosci. Remote Sens., vol. 48, pp. 1299–1325, 2010.
  5. Simonetti, D.; Simonetti, E. et al., "First results from the phenology-based synthesis classifier using Landsat 8 imagery," IEEE Geoscience and Remote Sensing Letters, 2015.
  6. Apparent Cloud Shift in RapidEye Imagery, "," [Online].
  7. MacQueen, J. B. (1967). Some Methods for Classification and Analysis of Multivariate Observations. Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability. University of California Press. pp. 281–297.
  8. Forrest, G. et al., "Remote Sensing of Forest Biophysical Structure Using Mixture Decomposition and Geometric Reflectance Models," Ecological Applications, vol. 5, no. 4, pp. 993–1013, 1995.
  9. Shimabukuro, Y. E., "Landsat derived shade images of forested areas," in International Society for Photogrammetry and Remote Sensing, 1988.
  10. Hansen, M. et al., "High-Resolution Global Maps of 21st-Century Forest Cover Change," Science, vol. 342, pp. 850–853, 2013.
  11. Arino, O. et al., "GlobCover 2009," in ESA Living Planet Symposium, Bergen, Norway, 2010.
  12. Câmara, G.; Vinhas, L.; Ferreira, K.; Queiroz, G.; Souza, R., "TerraLib: An open source GIS library for large-scale environmental and socio-economic applications," Open Source, pp. 247–270, 2008.
  13. Baatz, M.; Schäpe, A., "Multiresolution segmentation: an optimization approach for high quality multi-scale image segmentation," in XII Angewandte Geographische Informationsverarbeitung, Heidelberg, 2000.

Related Website