Spatial interpolation is a method for estimating the values of unknown pixels from measured values at known pixels in their vicinity. Following Tobler's First Law of Geography, that points close together in space are more likely to have similar values than points that are far apart, this method uses a neighborhood of sample points to estimate a value at an unsampled location. It can be used to predict unknown values for any geographic point data, such as elevation, rainfall, chemical concentrations, and noise levels.
There are various interpolation techniques for predicting values at unsampled points, and which method to use depends on the available data, the spatial and temporal scale, the desired output, and the required accuracy. Here is a list of some popular interpolation techniques:
1. Trend Surface Analysis
Trend Surface Analysis is a global polynomial interpolation method that fits a smooth surface defined by a mathematical function (a polynomial) to the input sample points. The resulting surface changes gradually and captures coarse-scale patterns in the data.
This technique is mostly used to produce a smooth surface that represents gradual trends over the area of interest, where the lower the root mean square (RMS) error, the more closely the interpolated surface represents the input points. It is a simple way of describing large-scale variation: its purpose is to capture general tendencies in the sample data rather than to model the surface precisely.
Trend interpolation creates a gradually varying surface using low-order polynomials that describe a physical process—for example, pollution and wind direction. However, the more complex the polynomial, the more difficult it is to ascribe physical meaning to it. Furthermore, the calculated surfaces are highly susceptible to outliers (extremely high and low values), especially at the edges.
The aim of this method is to describe the general spatial distribution of a phenomenon. The surface can be modeled as a linear trend or a higher-order trend surface. A linear trend describes only the major direction and rate of change, while higher-order surfaces provide progressively more complex descriptions of spatial patterns.
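As an illustration, a first-order trend surface z = a + b·x + c·y can be fit to the sample points with ordinary least squares. The following is a minimal pure-Python sketch; the sample coordinates and values are invented for the example:

```python
def solve3(A, b):
    """Solve a 3x3 linear system with Gaussian elimination and
    partial pivoting (small helper, no external libraries)."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_linear_trend(points):
    """Least-squares fit of the first-order trend surface
    z = a + b*x + c*y to sample points [(x, y, z), ...]."""
    n = len(points)
    sx = sum(x for x, _, _ in points)
    sy = sum(y for _, y, _ in points)
    sxx = sum(x * x for x, _, _ in points)
    syy = sum(y * y for _, y, _ in points)
    sxy = sum(x * y for x, y, _ in points)
    sz = sum(z for _, _, z in points)
    sxz = sum(x * z for x, _, z in points)
    syz = sum(y * z for _, y, z in points)
    # Normal equations of the least-squares problem.
    A = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]
    rhs = [sz, sxz, syz]
    return solve3(A, rhs)

# Hypothetical elevation samples lying on the plane z = 1 + 2x + 3y.
pts = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
a, b, c = fit_linear_trend(pts)
# The fitted trend can now be evaluated anywhere, e.g. at (0.5, 0.5):
z = a + b * 0.5 + c * 0.5
```

Higher-order trend surfaces work the same way, with extra polynomial terms (x², xy, y², and so on) added as columns of the least-squares problem.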
2. Inverse Distance Weighting (IDW)
The IDW (Inverse Distance Weighted) tool uses a method of interpolation that estimates cell values by averaging the values of sample data points in the neighborhood of each processing cell. The closer a point is to the center of the cell being estimated, the more influence, or weight, it has in the averaging process. IDW is very popular in GIS and is considered one of the simplest interpolation methods. It belongs to a family of methods that use weighted moving averages of points within a zone of influence.
IDW relies mainly on the inverse of the distance raised to a mathematical power. The Power parameter lets you control the significance of known points on the interpolated values based on their distance from the output point. It is a positive, real number, and its default value is 2. By defining a higher power value, more emphasis can be put on the nearest points. Thus, nearby data will have the most influence, and the surface will have more detail (be less smooth). As power increases, the interpolated values begin to approach the value of the nearest sample point. Specifying a lower value for power will give more influence to surrounding points that are farther away, resulting in a smoother surface.
Since the IDW formula is not linked to any real physical process, there is no way to determine that a particular power value is too large. As a general guideline, a power of 30 would be considered extremely large and thus of questionable use. Also keep in mind that if the distances or the power value are large, the results may be incorrect. An optimal power value can be taken as the one at which the mean absolute error is at its lowest. The ArcGIS Geostatistical Analyst extension provides a way to investigate this.
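The weighted average behind IDW takes only a few lines of code. Here is a minimal pure-Python sketch; the station coordinates and rainfall values are invented for illustration:

```python
import math

def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from known samples
    [(sx, sy, value), ...]. `power` is the IDW power parameter."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(sx - x, sy - y)
        if d == 0.0:               # query coincides with a sample point
            return v
        w = 1.0 / d ** power       # closer points receive larger weights
        num += w * v
        den += w
    return num / den

# Hypothetical rainfall measurements at three stations.
pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
e2 = idw(0.2, 0.2, pts)            # default power of 2
e8 = idw(0.2, 0.2, pts, power=8)   # higher power: nearest station dominates
```

Comparing `e2` and `e8` shows the effect of the Power parameter described above: the power-8 estimate lies much closer to the value of the nearest station than the power-2 estimate does. A production tool would also restrict the search to a neighborhood rather than using every sample.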
3. Global Polynomial (GP)
Global Polynomial (GP) interpolation fits a smooth surface defined by a polynomial to the input sample points, such as the TDS field in the attribute table of a well layer. GP is similar to taking a piece of paper and fitting it between the raised TDS values. The result is a smooth surface that represents gradual trends over the area of interest. It is used to fit a surface to the sample points when the surface varies slowly from region to region, or when examining and removing the effects of long-range or global trends. In such circumstances, the technique is often referred to as trend surface analysis.
4. Kriging
Kriging is an advanced geostatistical procedure that generates an estimated surface from a scattered set of points with z-values. Because it is based on statistics, you can create surfaces that incorporate the statistical properties of the measured data. These techniques produce not only prediction surfaces but also error or uncertainty surfaces, giving you an indication of how good the predictions are. More than with other interpolation methods, a thorough investigation of the spatial behavior of the phenomenon represented by the z-values should be carried out before you select the best estimation method for generating the output surface.
Many kriging methods are associated with geostatistics, but they are all in the kriging family. Ordinary, simple, universal, probability, indicator, and disjunctive kriging, along with their counterparts in cokriging, are all available in the Geostatistical Analyst. Not only do these kriging methods create predictions and error surfaces, but they can also produce probability and quantile output maps depending on user needs.
One of the main issues concerning ordinary kriging is whether the assumption of a constant mean is reasonable. Sometimes there are good scientific reasons to reject this assumption. However, as a simple prediction method, it has remarkable flexibility.
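To make the mechanics concrete, ordinary kriging reduces to solving one linear system: pairwise variogram values between samples, bordered by the unbiasedness constraint that the weights sum to 1. The following is a minimal pure-Python sketch; the exponential variogram model and its sill and range parameters are illustrative assumptions, not values from the text, and a real workflow would first fit the variogram to the data:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def gamma(h, sill=1.0, rng=10.0):
    """Exponential variogram model; sill and range are illustrative."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))

def ordinary_kriging(x, y, samples):
    """Ordinary kriging prediction at (x, y) for samples [(sx, sy, z), ...].
    Builds the kriging system from pairwise variogram values plus the
    unbiasedness constraint (weights sum to 1), then solves for weights."""
    n = len(samples)
    A = [[gamma(math.hypot(samples[i][0] - samples[j][0],
                           samples[i][1] - samples[j][1]))
          for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])           # unbiasedness constraint row
    b = [gamma(math.hypot(sx - x, sy - y)) for sx, sy, _ in samples] + [1.0]
    w = solve(A, b)                        # last entry is the Lagrange multiplier
    return sum(wi * z for wi, (_, _, z) in zip(w[:n], samples))

# Hypothetical measurements; with no nugget effect, kriging honors them exactly.
pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
z = ordinary_kriging(0.2, 0.2, pts)
```

The same solved system also yields the kriging variance (the error surface mentioned above), which this sketch omits for brevity.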
5. Natural Neighbor
Natural Neighbor interpolation finds the closest subset of input samples to a query point and applies weights to them based on proportionate areas to interpolate a value (Sibson, 1981). It is also known as Sibson or “area-stealing” interpolation.
Its basic properties are that it is local, using only a subset of samples that surround a query point, and that interpolated heights are guaranteed to be within the range of the samples used. It does not infer trends and will not produce peaks, pits, ridges, or valleys that are not already represented by the input samples. The surface passes through the input samples and is smooth everywhere except at the locations of the input samples.
6. Spline
The Spline tool uses an interpolation method that estimates values using a mathematical function that minimizes overall surface curvature, resulting in a smooth surface that passes exactly through the input points.
The basic form of the minimum curvature Spline interpolation imposes the following two conditions on the interpolant:
- The surface must pass exactly through the data points.
- The surface must have minimum curvature. The cumulative sum of the squares of the second derivative terms of the surface taken over each point on the surface must be a minimum.
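In two dimensions, these two conditions are commonly satisfied by the thin-plate spline, which combines radial basis terms with an affine part and is fit by solving one linear system. This is a minimal pure-Python sketch of that formulation (one common minimum-curvature spline, not necessarily the exact variant a given tool implements); the sample data are invented for illustration:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def U(r):
    """Thin-plate radial basis function r^2 * log(r), with U(0) = 0."""
    return 0.0 if r == 0.0 else r * r * math.log(r)

def fit_tps(points):
    """Fit a thin-plate spline through points [(x, y, z), ...]: radial
    weights plus an affine part, with side conditions on the weights."""
    n = len(points)
    A = [[U(math.hypot(points[i][0] - points[j][0],
                       points[i][1] - points[j][1])) for j in range(n)]
         + [1.0, points[i][0], points[i][1]] for i in range(n)]
    for k in range(3):   # side conditions: sum(w) = sum(w*x) = sum(w*y) = 0
        A.append([[1.0, points[j][0], points[j][1]][k] for j in range(n)]
                 + [0.0, 0.0, 0.0])
    b = [z for _, _, z in points] + [0.0, 0.0, 0.0]
    return solve(A, b)

def eval_tps(coef, points, x, y):
    """Evaluate the fitted spline at (x, y)."""
    n = len(points)
    z = coef[n] + coef[n + 1] * x + coef[n + 2] * y
    for wi, (sx, sy, _) in zip(coef[:n], points):
        z += wi * U(math.hypot(sx - x, sy - y))
    return z

pts = [(0.0, 0.0, 10.0), (2.0, 0.0, 20.0), (0.0, 2.0, 30.0), (2.0, 1.0, 25.0)]
coef = fit_tps(pts)   # the surface passes exactly through every sample
```

Note how the two conditions show up directly: the first n rows of the system force the surface through the data points, while the choice of basis function minimizes the bending energy (the integral of squared second derivatives).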
Some other spatial interpolation techniques to learn about are Spline with Barriers, Topo to Raster, and Topo to Raster by File. Happy learning!