Interpolation


Summary

This note discusses the various ways that irregularly spaced data points can be interpolated onto a regularly spaced grid to facilitate the generation of contours of a variable. In particular, the data set considered here consists of bathymetric measurements made of Deep Creek Lake. The emphasis is on understanding the best methodology to use on the most recent measurements made of the whole lake, the bathymetric data obtained in the spring of 2012 by DNR. An extensive analysis performed by this author is reported on the deepcreekanswers.com website. Because of computational limitations at that time, and the need for detail in the coves around the lake, the Akima method was chosen to interpolate the irregularly spaced data. There has always been the question: “How good are the results?” In this paper three independent data sets applicable to one cove are evaluated with different methodologies and the results compared. The emphasis is on providing the best interpolation methodology for the bathymetric data obtained in 2012 by DNR that define the water depths of Deep Creek Lake, MD.

1. Introduction

The introduction to bathymetric work by this author was in 2012. The result of that work is presented on the deepcreekanswers.com website. The bathymetric data were obtained by DNR in 2012 with a shallow-draft boat traversing the lake while making simultaneous measurements of water depth and GPS location. This 7-day sampling period collected over 600,000 data points from around the lake. Since these measurements are spaced irregularly, the question arises as to how they can be translated onto a regular grid, from which contours of constant depth can then be computed. Several methods were initially investigated to determine which is best for producing detailed maps of coves and areas around Deep Creek Lake. The Akima method was chosen mostly because it performed consistently, the contours would go through measured data points, and the computer could handle the detailed calculations that were required. At that time, the purpose of doing the bathymetry was to identify which boat slips would cause potential stranding of boats on the lake bottom at lower water levels. Water levels could drop that much because of operations conducted by the Deep Creek Hydro power generating facility. Another application for good bathymetric maps is to understand dredging requirements. For this one would want to determine how much material to remove from the bottom of an area under consideration. Hence good bathymetry is essential, and the question of whether Akima was the way to go has always lingered. This note investigates various interpolation methods applied to three independent bathymetric measurement sets made for the Arrowhead cove. The data sets are the 2012 DNR data, the 2005 USGS data, and the 2017 EMS data.

2. Approach

The approach used consists of the following steps:

  1. Extract a subset of the data from each of the three sources mentioned in the Introduction for the same geographical region. This is not as simple as it sounds, since one has to treat the open-water boundary values in a consistent manner.
  2. Review the literature for existing ways to interpolate data onto a regular grid from irregularly spaced measurements. While many such reviews have been conducted, our interest is specifically in bathymetric measurements, rather than air quality, soil quality, or weather-related variables.
  3. Choose methods and computational tools that can address the generation of bathymetric maps.
  4. Perform the necessary analyses and sensitivity studies that are deemed appropriate to develop an understanding of how to use each method. Develop evaluation standards, such as predicting the bathymetry with one data set and evaluating the agreement with the other two data sets.
  5. Assess performance and develop recommendations.

3. The Data and the Problem

There are several sources in the literature that discuss and/or review methods for dealing with the interpolation of irregularly spaced data, namely how to generate a bathymetry from a set of measurements. First, however, let’s see what kind of data we have to work with. DNR made measurements on April 12, 13, 16, 17, 19, 20, and 21 of 2012. Every measurement taken on April 17 is plotted in Figure 1.

Figure 1. All Data Points from April 17, 2012.

What appear as lines are in reality closely spaced points. This is easily verified by plotting every 50th point, with the results shown in Figure 2.

Figure 2. Every 50th Data Point from April 17, 2012.
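As an aside, this kind of thinning is straightforward to do in R. The following is a minimal sketch; the data frame name dnr and its column names x, y, and depth are assumptions for illustration, not the names used in my actual scripts.

    # Plot every 50th sounding; 'dnr' is assumed to be a data frame holding the
    # 2012 DNR measurements with columns x, y (position) and depth.
    thinned <- dnr[seq(1, nrow(dnr), by = 50), ]
    plot(thinned$x, thinned$y, pch = ".", asp = 1,
         xlab = "Easting", ylab = "Northing")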

These data points have a depth measurement associated with them. This value can be color coded. A plot of all the color-coded data points is shown in Figure 3.

Figure 3. All Color-Coded Data Points from the 2012 Measurements.

In the following section we’ll be reviewing methods that can be used to generate contours of constant depth. It’s clear from Figure 3 that, in order to obtain a resolution at which individual features can be discriminated, the lake must be divided into smaller areas. This has been done by concentrating on the various coves that feed into the main body of the lake.
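As an illustration of that subdivision, one can clip the full data set to a bounding box around a single cove. The sketch below continues with the hypothetical dnr data frame; the corner coordinates are placeholders, not the actual limits of any cove.

    # Clip the full data set to a hypothetical cove bounding box
    # (corner coordinates are placeholders).
    box  <- list(x = c(625000, 627000), y = c(4379000, 4381000))
    cove <- subset(dnr, x >= box$x[1] & x <= box$x[2] &
                        y >= box$y[1] & y <= box$y[2])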

As far as the water depths that matter, one has to resort to the ‘lower rule band’ curve that lake operations are subject to. Generally, the lowest level that the water in the lake can be drawn down to is a lake level of 2455 ft AMSL (Above Mean Sea Level). For boating, the important level must also include the draft of the boat. Clearly this varies from boat to boat and with how the boat is used.

There are basically two types of boats with significant draft seen on Deep Creek Lake, namely ‘pontoon’ boats and ‘power’ boats. The former have drafts of up to 24 inches, while the latter may have drafts of up to 37 inches. At the 2455 ft rule-band level, for example, a pontoon boat with a 24-inch draft needs a bottom elevation of 2453 ft AMSL or lower to stay afloat.

4. The Literature

There is a vast literature on methods for interpolating data from an irregular network of data points. All, to some extent, are influenced by the general principle underlying spatial interpolation, that is, the First Law of Geography. This law was formulated by Waldo Tobler in 1970. He stated that ‘everything is related to everything else, but near things are more related than distant things.’

Here are some of the ways that the literature refers to the various interpolation methods:

  • TIN - Triangulated irregular network
  • IDW - Inverse distance weighting
  • RBF - Radial basis functions
  • TSA - Trend surface analysis
  • Kriging - global and local kriging, Block, Optimum, Ordinary, Simple, Cokriging, Universal, Residual, Indicator, Probability, Disjunctive, Stratified
  • Angular distance weighting
  • Natural neighbor interpolation
  • Local Polynomial and Global Polynomial Interpolation
  • Regression
  • 2D and 3D thin plate splines
  • Conditional interpolation

For variables involving temperature, precipitation, etc. the most common methods explored appear to be IDW, ordinary kriging, and spline. Ordinary kriging often appears to be the best.

The following is a brief overview of some of the more popular approaches.

TIN - The TIN technique is probably one of the simplest spatial interpolation techniques (interpolating within some earth-based coordinate system). The approach relies on the construction of a triangular network between the known sample locations. The TIN method aims at creating non-overlapping triangles (as equilateral as possible) whose vertices pass through the locations of the observations. Once the network is constructed, each position where an attribute value is needed is associated with the triangle in which it lies. The value of the attribute is computed from the weighted contributions of the values at the apexes of the triangle. Clearly, there are many variants of this approach, since there are multiple ways to construct a triangular network and multiple ways to compute the weights.
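To make the weighting step concrete, here is a minimal R sketch of interpolating inside a single triangle using barycentric (area) weights; the function name and the numbers in the example call are purely illustrative. A full TIN interpolator must also build the triangulation itself; packages such as akima handle that step internally.

    # Depth at point p inside a triangle with vertices v1, v2, v3 (each c(x, y))
    # and measured depths z1, z2, z3, using barycentric (area) weights.
    tin_interp <- function(p, v1, v2, v3, z1, z2, z3) {
      tri_area <- function(a, b, c) {
        abs((b[1] - a[1]) * (c[2] - a[2]) - (c[1] - a[1]) * (b[2] - a[2])) / 2
      }
      A  <- tri_area(v1, v2, v3)
      w1 <- tri_area(p,  v2, v3) / A   # weight of vertex 1
      w2 <- tri_area(v1, p,  v3) / A   # weight of vertex 2
      w3 <- tri_area(v1, v2, p)  / A   # weight of vertex 3
      w1 * z1 + w2 * z2 + w3 * z3
    }

    # Example: a point inside a right triangle with vertex depths 10, 20, 30 ft.
    tin_interp(c(0.25, 0.25), c(0, 0), c(1, 0), c(0, 1), 10, 20, 30)   # 17.5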

IDW - Inverse distance weighting algorithms are also relatively simple. The original basis of this method is due to Shepard. The value of an attribute at a location where no measurement exists is based on the spatial distance to known observations. Known observations closer to the point of interest are given more weight, while distant observations have a small or perhaps negligible impact. As one can imagine, there are multiple ways to select nearest neighbors and multiple ways to assign weighting factors.
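A minimal IDW sketch in R, written against the hypothetical cove data frame (columns x, y, depth) assumed earlier; the power and the neighbor count are arbitrary choices.

    # Inverse-distance-weighted depth estimate at (x0, y0) from a data frame
    # 'obs' with columns x, y, depth (column names assumed for illustration).
    idw_at <- function(x0, y0, obs, power = 2, n_nearest = 12) {
      d <- sqrt((obs$x - x0)^2 + (obs$y - y0)^2)
      if (any(d == 0)) return(obs$depth[which.min(d)])    # exact hit on a sounding
      nearest <- order(d)[seq_len(min(n_nearest, nrow(obs)))]
      w <- 1 / d[nearest]^power                           # closer points weigh more
      sum(w * obs$depth[nearest]) / sum(w)
    }

    # e.g. idw_at(626000, 4380000, cove)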

RBF - Radial Basis Function methods are a family of deterministic interpolation methods. These methods consist of interpolation functions that pass through the data points and, at the same time, must be as smooth as possible. There are five different basis functions: 1) Thin-plate spline; 2) Spline with tension; 3) Completely regularized spline; 4) Multi-quadric functions; and 5) Inverse multi-quadric functions. Each basis function has a different shape and results in a slightly different interpolation surface. As with other techniques, there are multiple variants of these approaches.
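As a sketch of one RBF variant, a thin-plate spline can be fitted with the fields package; the grid resolution and the cove data frame layout are assumptions.

    # Thin-plate spline surface through the cove soundings (fields package).
    library(fields)
    tps_fit <- Tps(cbind(cove$x, cove$y), cove$depth)
    grid_xy <- expand.grid(x = seq(min(cove$x), max(cove$x), length.out = 100),
                           y = seq(min(cove$y), max(cove$y), length.out = 100))
    grid_xy$depth_hat <- as.vector(predict(tps_fit, as.matrix(grid_xy)))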

TSA techniques - Trend surface analysis fits a low-order polynomial in the spatial coordinates to the observations by least squares, capturing broad trends rather than local detail. Given the incredible number of approaches available to fit irregularly spaced data, one has to depend on other people’s experience and the literature to select a tool that fits the need.

Kriging - Kriging is a geostatistical approach that models the spatial correlation of the data through a variogram and produces a best linear unbiased prediction, together with an estimate of its uncertainty, at each grid point. An excellent explanation of the use of Kriging is given by Geoff Bohling.
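The following is an ordinary kriging sketch with the gstat and sp packages; the spherical variogram model and its starting values are placeholders that would have to be tuned to the actual coordinate units and depth variability.

    # Ordinary kriging of the cove depths (gstat/sp packages); the starting
    # variogram values are placeholders, not fitted to real data.
    library(sp); library(gstat)
    cove_sp <- cove
    coordinates(cove_sp) <- ~ x + y
    v_emp <- variogram(depth ~ 1, cove_sp)               # empirical semivariogram
    v_fit <- fit.variogram(v_emp, vgm(psill = 1, model = "Sph",
                                      range = 500, nugget = 0.1))
    grd <- expand.grid(x = seq(min(cove$x), max(cove$x), length.out = 100),
                       y = seq(min(cove$y), max(cove$y), length.out = 100))
    coordinates(grd) <- ~ x + y
    gridded(grd) <- TRUE
    ok <- krige(depth ~ 1, cove_sp, grd, model = v_fit)  # predictions in ok$var1.pred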

One of the observations that one can make from Figures 1 and 3 is that there are probably more data points than are needed to obtain a good fit of the lake bottom geometry. Figure 2 provides some insight as to what might be a better subset of the data for use in defining local water depths. Indeed, sensitivity studies have indicated that the differences between every 10th point and every 50th point are very small.

How is this ‘interpolation’ step performed and what is the best way to do it? The answer is complicated. The obvious way is to apply a method to a subset of the data and test it by comparing its predictions with the data points that were withheld for the ‘validation’ step.
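A hedged sketch of such a hold-out check, using the interpp function from the akima package on the assumed cove data frame; the 80/20 split and the RMSE score are illustrative choices, not what was actually done for this study.

    # Hold out 20% of the soundings, interpolate from the rest, and score the
    # predictions at the withheld points with an RMSE.
    library(akima)
    set.seed(1)
    hold  <- sample(nrow(cove), size = round(0.2 * nrow(cove)))
    train <- cove[-hold, ]
    test  <- cove[ hold, ]
    pred  <- interpp(train$x, train$y, train$depth,
                     xo = test$x, yo = test$y, duplicate = "mean")
    sqrt(mean((pred$z - test$depth)^2, na.rm = TRUE))    # NA outside the data hull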

From Reference 14, “While a large number of ideas have been proposed for solution of the problem, a much smaller number of working computer programs are readily available.”

A further quote “… most perform adequately in a variety of cases; None of them seems to have a clear edge over all the others, or to be entirely satisfactory. For certain applications, each has its good points. The choice of a method for most users will be based on subjective criteria which vary from person to person and application to application.”

The list of references at the end of this report is a small selection of documents that one can explore to determine the validity of the approach(es) selected here.

5. Methodology Selection

Since all of the work is performed using “R”, the interpolation methodology selected must be in one of its many ‘packages’. The following methods are found:

  ID       Package Name    Function Name
  Akima    akima           interpp

As a testbed I’ll use Arrowhead cove, because it is at the top of the list for possible dredging, and hence depths in the cove are of particular interest to determine how much material is to be removed. The location of the cove can be found on the following map of Deep Creek Lake. Arrowhead is located almost smack in the middle of the map.

Figure 4. Location of Various Coves at Deep Creek Lake.

Since this part of the website is mostly about ‘interpolation’ of the raw data, I will defer to another writeup how the data for the cove were generated. For here it must be assumed that we have:

  1. The outline of the cove - at least the area that contains the cove
  2. The bathymetry data for the cove - extracted from the whole data set
  3. The boat slips that are in the cove
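Given these inputs, a minimal gridding-and-contouring sketch with the akima package might look like the following; the grid resolution is an assumed value and the column names continue the hypothetical cove layout used above.

    # Interpolate the cove soundings onto a regular grid and draw depth contours.
    library(akima)
    surf <- interp(cove$x, cove$y, cove$depth,
                   xo = seq(min(cove$x), max(cove$x), length.out = 200),
                   yo = seq(min(cove$y), max(cove$y), length.out = 200),
                   linear = TRUE, duplicate = "mean")
    contour(surf$x, surf$y, surf$z, nlevels = 10, asp = 1,
            xlab = "Easting", ylab = "Northing")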

6. Final Note

Despite any claims for performance and accuracy, there is usually no quantitative way of measuring the quality of the approximation to irregularly spaced data. Hence, final judgement must be made on the basis of subjective measures, whatever they are.

The following select references discuss the various issues dealing with how to fit irregularly spaced data into a form from which contours can be produced.

7. List of References

  1. Interpolating spatial observations:… 2016
  2. Scattered Data Interpolation: Test of Some Methods 1982
  3. Scattered Data Interpolation and Applications: A Tutorial and Survey 1990
  4. Triangulated irregular network 2016
  5. Scattered Data Interpolation for Computer Graphics 2014
  6. Spatial interpolation 1999
  7. Single beam bathymetric data modeling techniques for accurate maintenance dredging 2014
  8. What’s the point? Interpolation and extrapolation with a regular grid DEM 1999
  9. How Spline works
  10. Comparing interpolation methods
  11. Spatial Display, Modeling and Interpolation
  12. Understanding interpolation analysis
  13. An introduction to interpolation methods
  14. Smooth Interpolation of Scattered Data by Local Thin Plate Splines 1982
  15. Least Squares Approximation of Scattered Data with B-splines 2002

NOTE: There is no specific script associated with this section.

PLV: 9/4/2017

© 2017 PLV Some Rights Reserved - To comment on any of my work please email me at: moc.hcetsnes@etep