Glossary

GOOD AGRICULTURAL AND ENVIRONMENTAL CONDITIONS (GAEC)

Farmers are obliged to maintain their land in ʽgood agricultural and environmental conditionʼ. This concept includes the protection of soil against erosion, the maintenance of soil organic matter and soil structure, and the safeguarding of landscape features. The exact specification of these parameters is decided by the member states, not by the European Union.

N.01 – Arable land is used for cultivation or kept as fallow.

N.03 – Fallow land is mowed and/or cultivated at least once a year, before 31 July, to prevent the occurrence and spread of weeds.

N.04 – On meadows and pastures, the vegetation cover is mowed and removed at least once a year, before 31 July, or the land is grazed. For meadows and pastures declared in an application for financial assistance under (1) payments for NATURA 2000 areas and areas related to the Water Framework Directive, or (2) the agri-environmental programme, the vegetation cover is not mowed and removed where the separate regulations on support for rural development with the participation of the European Agricultural Fund for Rural Development so provide.

N.05 – On areas at risk of water erosion, ground cover must be maintained from 1 December to 15 February on at least 40% of the farm's total arable area.

N.06 – Arable land shows no signs of burning.

N.07 – On arable land, heavy equipment is not used for field operations while the soil profile is saturated with water.

N.12 – Arable land located on slopes greater than 20 degrees is not used for crops that require ridges running along the slope, nor kept as black fallow.

N.13 – On arable land located on slopes greater than 20 degrees and used for the cultivation of perennial crops, vegetation cover is maintained.

N.14 – The farmer has not converted permanent grassland to another land use without the permission required under the relevant article of the direct support schemes.

N.15 – The farmer has converted arable land to permanent pasture in accordance with Art. 28(6) of the Act of 26 January 2007 on payments under the direct support schemes.

COMMON AGRICULTURAL POLICY (CAP)

The Common Agricultural Policy (CAP) is the agricultural policy of the European Union. It implements a system of agricultural subsidies and other programmes. It was introduced in 1962 and has undergone several changes since then. It has been criticised on the grounds of its cost, and its environmental and humanitarian impacts.

The policy has evolved significantly since it was created by the Treaty of Rome (1957). Substantial reforms over the years have moved the CAP away from a production-oriented policy. The 2003 reform introduced the Single Payment Scheme (SPS), also known as the Single Farm Payment (SFP). The most recent reform was made in 2013 by Commissioner Dacian Ciolos and applies for the period 2014 to 2020.

Each country can choose whether the payment is established at the farm level or at the regional level. Farmers receiving the SFP have the flexibility to produce any commodity on their land except fruit, vegetables and table potatoes. In addition, they are obliged to keep their land in good agricultural and environmental condition (cross-compliance), and to respect environmental, food safety, phytosanitary and animal welfare standards. This is enforced as a penalty measure: if farmers do not respect these standards, their payment is reduced.

CLASSIFICATION

Digital image classification techniques group pixels to represent land cover features. Land cover may be forest, urban areas, agriculture or other types of features. There are three main image classification techniques.

Image Classification Techniques in Remote Sensing:

  1. Unsupervised image classification
  2. Supervised image classification
  3. Object-based image analysis

Pixels are the smallest unit represented in an image. Image classification uses the reflectance statistics for individual pixels. Unsupervised and supervised image classification are the two most common approaches, but object-based classification has been gaining ground in recent years.

Unsupervised Classification

In unsupervised classification, pixels are grouped based on their reflectance properties. These groupings are called “clusters”. The user specifies the number of clusters to generate and which bands to use, and with this information the image classification software generates the clusters; common clustering algorithms include K-means and ISODATA. The user then manually identifies each cluster with a land cover class. It is often the case that multiple clusters represent a single land cover class, in which case the user merges them into one land cover type. Unsupervised classification is commonly used when no sample sites exist.

Unsupervised Classification Steps:

  1. Generate clusters
  2. Assign classes
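
The two steps above can be sketched as a minimal K-means clustering in NumPy. This is a toy illustration, not production software: the two "spectral bands" and their reflectance values are hypothetical, and a real ISODATA implementation would additionally split and merge clusters.

```python
import numpy as np

def kmeans_clusters(pixels, n_clusters, n_iter=20, seed=0):
    """Step 1: group pixel reflectance vectors into clusters.
    `pixels` is an (n_pixels, n_bands) array."""
    rng = np.random.default_rng(seed)
    # Initialise cluster centres from randomly chosen pixels.
    centres = pixels[rng.choice(len(pixels), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to the nearest centre (Euclidean distance).
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of the pixels assigned to it.
        for k in range(n_clusters):
            if np.any(labels == k):
                centres[k] = pixels[labels == k].mean(axis=0)
    return labels, centres

# Two well-separated synthetic "bands" (hypothetical reflectance values).
rng = np.random.default_rng(1)
water = rng.normal([0.05, 0.02], 0.01, size=(50, 2))       # low reflectance
vegetation = rng.normal([0.10, 0.45], 0.01, size=(50, 2))  # strong band-2 response
pixels = np.vstack([water, vegetation])
labels, centres = kmeans_clusters(pixels, n_clusters=2)
# Step 2 (assign classes) remains manual: the analyst inspects each
# cluster's mean spectrum and names it, e.g. "water" or "vegetation".
```
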

Supervised Classification

In supervised classification, the user selects representative samples for each land cover class in the digital image. These sample areas are called “training sites”. The classification of land cover is then based on the spectral signatures defined in this training set: the image classification software assigns each pixel of the image to the class it resembles most. Common supervised classification algorithms are maximum likelihood and minimum-distance classification.

Supervised Classification Steps:

  1. Select training areas
  2. Generate signature file
  3. Classify
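
The minimum-distance algorithm mentioned above can be sketched as follows: each pixel is assigned to the class whose mean training-site signature is nearest. The class names and band values below are hypothetical examples, not real training data.

```python
import numpy as np

def minimum_distance_classify(pixels, signatures):
    """Assign each pixel to the class whose mean spectral signature
    (derived from the training sites) is nearest in Euclidean distance."""
    names = list(signatures)
    means = np.array([signatures[n] for n in names])  # (n_classes, n_bands)
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]

# Mean reflectance per band, as if computed from training sites (hypothetical).
signatures = {
    "water":      [0.05, 0.02],
    "vegetation": [0.10, 0.45],
    "bare soil":  [0.25, 0.30],
}
pixels = np.array([[0.06, 0.03], [0.11, 0.44], [0.24, 0.29]])
result = minimum_distance_classify(pixels, signatures)
print(result)  # → ['water', 'vegetation', 'bare soil']
```

Maximum likelihood differs in that it also models the variance and covariance of each class signature rather than only its mean.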

SPATIAL RESOLUTION

Spatial resolution describes how much detail in an image is visible. The ability to “resolve,” or separate, small details is one way of describing what we call spatial resolution.

Spatial resolution of images acquired by satellite sensor systems is usually expressed in meters. For example, we often speak of Landsat as having “30-meter” resolution, which means that two objects, thirty meters long or wide, sitting side by side, can be separated (resolved) on a Landsat image. Other sensors have lower or higher spatial resolutions.
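
As a back-of-the-envelope illustration of what a given resolution implies, one can count how many pixels cover a given ground area. The 1 km × 1 km field below is a hypothetical example:

```python
def pixels_per_area(area_m2: float, resolution_m: float) -> float:
    """Number of pixels needed to cover `area_m2` square metres,
    assuming square pixels `resolution_m` metres on a side."""
    return area_m2 / resolution_m ** 2

# A 1 km x 1 km field seen at Landsat's 30 m versus a 10 m sensor:
print(round(pixels_per_area(1_000_000, 30)))  # → 1111
print(round(pixels_per_area(1_000_000, 10)))  # → 10000
```
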

SPECTRAL RESOLUTION

In the first instance, a sensor’s spectral resolution specifies the number of spectral bands in which the sensor can collect reflected radiance. But the number of bands is not the only important aspect of spectral resolution. The position of bands in the electromagnetic spectrum is important, too.

Spectral Bands

High spectral resolution: ~220 bands

Medium spectral resolution: 3–15 bands

Low spectral resolution: ~3 bands

 

Panchromatic – 1 band (B&W)

Color – 3 bands (RGB)

Multispectral – 4+ bands (e.g. RGBNIR)

Hyperspectral – hundreds of bands

TEMPORAL RESOLUTION

Temporal resolution is the revisit period: the length of time it takes a satellite to complete one entire orbit cycle and return to the exact same area at the same viewing angle. For example, Landsat needs 16 days, MODIS needs one day, and NEXRAD needs 6 minutes.

Temporal resolution depends on several factors: how long it takes for a satellite to return to (approximately) the same location in space, the swath of the sensor (related to its ‘footprint’), and whether or not the sensor can be directed off-nadir.

Temporal coverage is the time span over which a sensor has collected data, from the start to the end of its operation.
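
Using the revisit periods quoted above, the approximate number of acquisitions of the same scene per year is a simple division. This rough sketch ignores off-nadir pointing and swath overlap, which in practice increase revisit frequency:

```python
def scenes_per_year(revisit_days: float) -> float:
    """Approximate acquisitions of the same scene per year,
    given the sensor's revisit period in days."""
    return 365 / revisit_days

print(round(scenes_per_year(16), 1))  # Landsat, 16-day revisit → 22.8
print(scenes_per_year(1))             # MODIS, daily revisit → 365.0
```
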

RADIOMETRIC RESOLUTION

Radiometric resolution is determined by a sensor's sensitivity to the magnitude of the electromagnetic energy it records each time an image is acquired.

The finer the radiometric resolution of a sensor, the more sensitive it is to detecting small differences in reflected or emitted energy. Imagery data are represented by positive digital numbers ranging from 0 up to one less than a selected power of 2; this range corresponds to the number of bits used for coding the numbers in binary format. Each additional bit doubles the number of values that can be recorded.

The maximum number of brightness levels available depends on the number of bits used in representing the energy recorded. Thus, if a sensor used 8 bits to record the data, there would be 2⁸ = 256 digital values available, ranging from 0 to 255.

1 bit (2¹) = 2

2 bits (2²) = 4

3 bits (2³) = 8

4 bits (2⁴) = 16

8 bits (2⁸) = 256

16 bits (2¹⁶) = 65 536
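
The table above is just successive powers of two, which a few lines of code make explicit:

```python
def brightness_levels(bits: int) -> int:
    """Number of digital values available at a given radiometric
    resolution: 2 raised to the number of bits."""
    return 2 ** bits

for bits in (1, 2, 3, 4, 8, 16):
    print(f"{bits:>2} bits -> {brightness_levels(bits):>6} values "
          f"(digital numbers 0..{brightness_levels(bits) - 1})")
```
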

REMOTE SENSING

Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with it, in contrast to on-site observation. In modern usage, the term generally refers to the use of aerial sensor technologies to detect and classify objects on Earth (both on the surface, and in the atmosphere and oceans) by means of propagated signals (e.g. electromagnetic radiation). It may be split into active remote sensing, in which a signal is first emitted from aircraft or satellites, and passive remote sensing, in which naturally available radiation (e.g. sunlight) is merely recorded.

Passive sensors gather natural radiation that is emitted or reflected by the object or surrounding areas. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infrared, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas whereupon a sensor then detects and measures the radiation that is reflected or backscattered from the target. RADAR and LiDAR are examples of active remote sensing where the time delay between emission and return is measured, establishing the location, speed and direction of an object.
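
The time-delay principle behind RADAR and LiDAR ranging reduces to the two-way travel-time formula, distance = c·Δt/2, since the pulse travels to the target and back. The 10-microsecond delay below is a hypothetical example:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_delay(delay_s: float) -> float:
    """Target range from a pulse's round-trip time delay;
    divided by 2 because the pulse travels out and back."""
    return C * delay_s / 2

# A return pulse arriving 10 microseconds after emission:
print(round(range_from_delay(10e-6), 1))  # → 1499.0 metres
```
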

Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, glacial features in Arctic and Antarctic regions, and depth sounding of coastal and ocean depths. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing also replaces costly and slow data collection on the ground, ensuring in the process that areas or objects are not disturbed.