REPRESENTATION OF HYDROGRAPHIC SURVEYS AND OCEAN BOTTOM TOPOGRAPHY BY ANALYTICAL MODELS

Alan J. Pickrell

NAVAL POSTGRADUATE SCHOOL
Monterey, California

THESIS

Representation of Hydrographic Surveys and Ocean Bottom Topography by Analytical Models

Alan J. Pickrell

September 1979

Thesis Advisors: R. W. Garwood, Jr.
                 R. H. Franke

Approved for public release; distribution unlimited

REPORT DOCUMENTATION PAGE

Title: Representation of Hydrographic Surveys and Ocean Bottom Topography by Analytical Models
Type of Report and Period Covered: Master's Thesis; September 1979
Author: Alan J. Pickrell, LCDR, NOAA
Performing Organization: Naval Postgraduate School, Monterey, California 93940
Controlling Office: Naval Postgraduate School, Monterey, California 93940
Number of Pages: 110
Security Classification (of this report): Unclassified
Distribution Statement: Approved for public release; distribution unlimited
Key Words: Hydrography, surface modeling, topographic modeling, analytical surfaces, ocean bottom topography

Abstract: Hydrographic surveys for nautical charting contain many discrete data points. Analytical models for ocean bottom topography could save computer storage and reduce the complexity of automating the nautical charting process, but they must meet stringent accuracy requirements. Polynomials, double Fourier series, finite elements, Duchon's analysis, Shepard's formula, and Hardy's multiquadric analysis were investigated as possible modeling techniques. Multiquadric analysis, in which the surface is represented by an analytical summation of mathematical surfaces such as cones and hyperboloids, was the only method found to be suitable. An iterative method of model point selection was found to give the best results. Smooth and unambiguous junctions of adjacent models were made by using a Hermite polynomial weighted sum of overlapping areas. Highly irregular surfaces can be represented by about 20% of the original survey data points; more regular bottom topography can be represented by a smaller percentage.

Approved for public release; distribution unlimited

Representation of Hydrographic Surveys and Ocean Bottom Topography by Analytical Models

by

Alan J. Pickrell
Lieutenant Commander, NOAA
B.A.
, University of California at Los Angeles, 1971 Siibmitted in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN OCEANOGRAPHY (HYDROGRAPHY) from the NAVAL POSTGRADUATE SCHOOL September 1979 "VieS'S P^9 ABSTRACT Hydro graphic surveys for nautical charting contain many discrete data points. Analytical models for ocean bottom topography could save computer storage and reduce the com- plexity of automating the nautical charting process, hut they must meet stringent accuracy requirements. Polynomials, double Fourier series, finite elements, Duchon's analysis, Shepard's formula and Hardy's multiquadric analysis were investigated as possible modeling techniques. Multiquadric analysis in which the surface is represented by an analyti- cal summation of mathematical surfaces such as cones and hyperboloids was the only method found to be suitable. An iterative method of model point selection was found to give the best results. Smooth and unambiguous junctions of adjacent models were made by using a Hermite polynomial weighted sum of overlapping areas. Highly irregular surfaces can be represented by about 20% of the original survey data points; more regular bottom topography can be represented by a smaller percentage. TABLE OP CONTENTS I. INTRODUCTION 12 A. BACKGROUND 12 B. MATHEMATICAL MODELS FOR OCEAN BOTTOM TOPOGRAPHY 15 C. SCOPE OP WORK 17 II. SURFACE MODELING METHODS 18 A. POLYNOMIALS 20 B. DOUBLE FOURIER SERIES 21 C. FINITE ELEMENTS 21 D. SHEPARD'S FORMULA . 22 E. DUCHON'S METHOD 26 F. HARDY'S MULTIOUADRIC ANALYSIS 28 III. HYDROGRAPHIC SURVEYS 33 A. GENERAL DESCRIPTION 33 1. Survey Scale 35 2. Horizontal Position Accuracy 35 3. Depth Accuracy 36 B. DATA SETS 38 1. Monterey Bay, California 40 2. Morro Bay, California 40 3. Auke Bay, Alaska 43 4. Gulf Coast 43 IV. RESEARCH PROCEDURES 47 A. COMPUTER SYSTEM 47 B. DATA SET PREPARATION 47 1. Original Data Condition 47 2. Program TAPCTTV - Tape Conversion 48 3. Program DATPLT - Data Plotting 43 4. Program CONDAT - Data Contouring 49 C. MODEL DEVELOPMENT AND ANALYSIS 49 1. Coefficient Computation - Subroutine LEQ2S 50 2. Quantitative Analysis - Subroutine STAT . 50 3. Qualitative Analysis - Subroutines SET CON and CONTUR 51 V. RESEARCH RESULTS 52 A. SELECTION OF METHODS FOR EXPERIMENTATION. . . 52 3. METHOD COMPARISON PROCEDURES 53 C. RESULTS OF DUCHON'S METHOD 54 1. General Findings 54 2. Dependence on Scale 56 D. RESULTS OF SHEPARD'S METHOD 60 1. Computation of R 60 2. Inverse Distance T.',reighting Function ... 61 3. Inverse Distance Squared Weighting Function 64 S. RESULTS OF HARDY'S MULTIQUADRIC ANALYSIS. . . 64 1. Determination of 0 64 2. Inverse Hyperboloid Kernels 66 3. Hyperboloid and Conic Kernels 68 F. SUMMARY 71 VI. FURTHER TESTING OF MULTIQUADRIC ANALYSIS WITH CONIC AND HYPERBOLOID KERNELS 74 A. SELECTION OF MODEL POINTS 74 1. Regular Spacing Selection 74 2. Iterative Selection 75 3. Complete Selection by Topographic Feature ~. 80 4. Summary 30 B. MODEL JUNCTIONS 31 C. MODEL REFINEMENT 36 VII. 
CONCLUSIONS 102 LIST OF REFERENCES 104 DISTRIBUTION LIST 107 7 LIST OF TABLES Page TABLE I - Depth Measurement Specifications Recommended by 16 the International Hydro graphic Bureau TABLE II - Depth Recording and Correction Intervals 39 TABLE III - Results of Duchon's Method - Horro Bay 56 TABLE IV - Effects of Scale on Duchon's Method - Morro Bay 59 TABLE "V - Shepard's Formula with Inverse Distance Weighting 62 Function - Morro Bay TAELS VI - Shepard's Formula with Inverse Distance Squared 65 Weighting Function - Morro Bay TABLE VII - Multiquadric Analysis with Inverse Hyperboloid 67 Kernels - Morro Bay TABLE VIII - Multiquadric Analysis with Hyperboloid and 70 Conic Kernels - Morro Bay TABLE IX - Selection of Model Points with Even Spacing 76 TABLE X - Selection of Model Points by Iteration 78 TABLE XI - Results of Model Junction 85 TABLE XII - Monterey Bay Model Results 89 TABLE XIII - Morro Bay Model Results 90 TABLE XIV - Alike Bay Model Results 91 TABLE XV - Gulf Coast Model Results 92 8 LIST OP FIGURES Page Figure 1 - Shepard's Formula with Various Weighting Functions' 25 Figure 2 - Hyperboloid Kernels 30 Figure 3 - Inverse Hyperboloid Kernels 31 Figure 4 - Quadric Summations 32 Figure 5 - Portion of a Hydro graphic Survey Sheet 34 Figure 6 - Components of a Depth Measurement 37 Figure 7 - Monterey Bay Data Set Contours 41 Figure S - Morro Bay Data Set Contours 42 Figure 9 - Auke Bay Data Set Contours 45 Figure 10 - Gulf Coast Data Set Contours 46 Figure 11 - 60 Point Duchon Model of Monterey Bay 55 Figure 12 - 93 Point Model of Morro Bay 57 Figure 13 - 98 Point Shepard's Formula Model of Morro Bay- Figure 14 - 60 Point Multiquadric Model of Morro Bay 69 Figure 15 - Comparison of Modeling Methods 72 Figure 16 - Comparison of Model Point Selection Methods 79 Figure 17 - Model Junctions by Overlap 32 Figure 18 - Model Junctions by Hermit e Polynomial 33 Figure 19 - Hermit e Polynomial 34 Figure 20 - Monterey Bay Model Results 94 Figure 21 - Morro Bay Model Results 95 Figure 22 - Aulce Bay Model Results 96 Figure 23 - Gulf Coast Model Results 97 Figure 24 - 226 Point Monterey Bay Model Contours 98 LIST 0? FIGURES (cont) Paffe id Figure 25 - 144 Point Morro Bay Model Contours 99 Figure 26 - 290 Point Auke Bay Model Contours 100 Figure 27 - 165 Point Gulf Coast Model Contours 101 10 ACOOV/LEDGEMMT S I would like to express my sincere gratitude to Dr. R. ':!. Garwood and Dr. R. H. Franke, my advisors, for their assistance; Dr. R. I. Hardy for his timely response to my inquiries; Mr. Larry Mordock for his ide< on the topic and aid in obtaining survey data; and Mr. James Steensland and lieutenant Maureen Kenny for supplying survey data. 11 I. INTRODUCTION A. BACKGROUND The ocean bottom is a continuous but generally irregular surface. In the deep oceans there are vast areas of abyssal plains interrupted by mid-ocean ridges, sea mounts and con- tinents. The continental shelves and coastal areas vary from smooth flat bottoms to highly irregular surfaces with deeply gouged glacial troughs or coral and rock pinnacles. Many geological formations which are found on land such as canyons, mountains, domes, faults, etc., are also found on the continental shelves. The shape of the ocean bottom is difficult to determine since it cannot be seen or photo- graphed except in very shallow areas and, direct measurement requiring occupation of the ocean bottom is costly and often impossible. There are many reasons for which the shape of the ocean bottom must be known. 
Historically, safety of navigation has been the most urgent reason. Nautical charts are com- piled from many sources to aid the navigator. These charts depict the coastlines and ocean bottom features using con- tour lines and selected depths. The primary sources of depth data for nautical charts are hydrographic surveys. These surveys represent ocean bottom topography by discrete data points which are defined 12 by geographic position and depth below a specified water level datura. Until the mid-twentieth century, these depths were determined by lowering a weight on a calibrated line until it touched bottom. The vessel position was usually determined by measurements with sextants. Using these manual methods, data acquisition was very slow and only a* minute percentage of the bottom was sampled. There were many sources of error in the observational procedures. A typical survey had a few hundred data points from which the surface shape between points had to be inferred. Data pro- cessing was easily handled by manual methods. More recently, electronic positioning equipment and depth sounding instru- ments have been used in semi-automated and automated systems. These systems allow almost continuous sampling of the ocean bottom along the vessel track. They have increased the accuracy of the data and the completeness of bottom coverage. As a result, depths need to be inferred between vessel tracks but not along the tracks. A typical survey of this type contains between 2,000 and 20,000 data points. These sys- tems increased the data acquisition rate to such an extent that manual data processing methods could not keep up with data acquisition. Computer aided systems for processing and verifying the data were developed in the 1960's and 1970 »s. Producing a nautical chart requires compilation of many hydrographic surveys, shoreline manuscripts, and other 13 documents. This remained a manual process until the mid 1970' s. At this time, the National Ocean Survey (NOS) of the United States National Oceanic and Atmospheric Adminis- tration (NOAA) began development of a computer assisted chart compilation and production system (Moses and Passauer, 1979). This system requires on-line storage and manipu- lation of large blocks of discrete point data from hydro- graphic surveys. The density of these data from modern surveys make this a complex and costly process. In an effort to produce one hundred percent bottom coverage for critical areas, multi-beam sounding systems (Hopkins and Mobley, 1978), airborne laser depth measuring systems (AVCO Everett Research Laboratory, 1978), and airborne water penetrating photography systems (Keller, 1976) have been developed. Some of these systems have proved that one hundred percent bottom coverage is feasible. They have also created another problem concerning representation of the data and its use in the compilation of nautical charts. The data from the multi-beam sounding systems for a typical survey would be equivalent to several hundred thousand dis- crete data points. Data from a laser system would be even more dense. The photo gramme trie method uses stereographic images produced from aerial photographs. This can be con- sidered to be truly continuous data, but such data is diffi- cult to represent in a digital computer. The usual method to represent this data is to select the most representative and most critical depths for use as if they were from a H conventional survey. 
For a bottom with little relief, this method is satisfactory but as bottom relief increases, considerable detail and completeness is lost. B. MATHEMATICAL MODELS FOR OCEAN BOTTOM TOPOGRAPHY The density of data from modern hydro graphic surveys has made the automation of chart compilation difficult. A possible solution to this problem which is investigated by this thesis is the use of a surface defined by an analytical expression to approximate the ocean bottom topography. Such a mathematical model would be used to compute a depth at any geographic position within the bounds of the model. In order to be useful, such a model must require consi- derably less data storage for the parameters which define the model than was required oy the original set of discrete points. The accuracy of the model is of utmost importance. The United States government can be held liable for vessel groundings or accidents at sea which are due to inaccurate charts. Special Publication 44 of the International Hydro- graphic Bureau (1968) states the accuracy specifications recommended for hydrographic surveys. The depth measure- ment specifications are listed in Table I. 15 Table I - Depth. Measurement Specifications Recommended by the International Hydrographic Bureau Depth Allowable error 0-20 meters (0-11 fathoms) 0.3 meters (1 foot) 20-100 meters (11-55 fathoms) 1.0 meters (0.5 fathoms) Deeper than 100 meters 1% of depth The Hydrographic Manual of the National Ocean Survey (Umbach, 1976) adds that accuracies attained for all hydro- graphic surveys conducted by the National Ocean Survey shall equal or exceed the specifications recommended by the Inter- national Hydrographic Bureau. These standards do not necessarily apply directly to the accuracy requirements for a mathematical model of the bottom, but they are good reference figures. Solution of the dense data problem for nautical charting was the primary motivation for the investigation, but there are other uses for models approximating ocean bottom topo- graphy. Many coastal processes are closely related to bottom topography. These include wave height, wave refrac- tion, energy dissipation, wave runup, storm surge and beach erosion. Design of offshore structures requires input of bottom characteristics. Subsurface, as well as surface navigation, could be aided by an ocean bottom model stored in an onboard computer. The accuracy requirements and model scales for these applications would be different but the modeling methods could be the same. 16 C. SCOPE OP WORK There are several ways to represent surfaces by mathe- matical expressions. Those that seemed most applicable to the problem are discussed in Section II. Three of the models were chosen for experimental analysis. Portions of four hydrographic surveys conducted by the National Ocean Survey were used as experimental data sets for this analysis. These data sets represent a variation from extreme bottom relief to a very flat bottom. The models developed for these areas were analyzed quantitatively by comparing observed survey depths and computed model depths at the same location. Qualitative comparisons of depth contours from the two sources were also made. Por each type of model, the input parameters were varied to investigate minimum requirements for a good representation. Determining the exact location of the shoreline and other boundaries is an important part of any survey, but including this in the models is beyond the scope of this investigation. 
All the areas used for experimentation were restricted so that they do not include shoreline. 17 II. SURFACE MODELING METHODS Analytical expressions have been used previously to approximate topographic surfaces. Some techniques used in map analysis are also applicable to the problem and there are some appealing methods which have been used for other surface approximations but not for terrain models. None of these methods have been used to represent hydrographic surveys. Ocean bottom topography is often similar to land topography but the research on terrain models has generally been for small scale large area maps. The large scale hydrographic surveys which must represent detail on the order of tenths of fathoms or feet are quite different than those large area maps, so modeling techniques which are good for small scale terrain models may not be appropriate for hydrographic survey modeling. Some important properties of the methods which must be considered aside from accuracy are: • ease of computation - Must a large system of equations be solved to develop the model? • dependence of horizontal scale - Hydrographic surveys and marine charts of different scales often overlap or are adjacent. Eor this reason, it is not good if the accuracy of a modeling method varies with horizontal distance scale. • global versus local models - A global model represents a large area with a single expression. A local modeling IS method represents many adjacent small areas with many corresponding expressions. Generally, there is more computation involved in global methods, whereas, local modeling requires more data searching to find the appropriate local parameters. Global models which attain significant data storage savings are of particular interest in this study. • interpolation versus approximation - Interpolation methods generate a surface which fits some data points exactly and is used to interpolate between those points for surface values at other positions. Approximation methods generate a surface which approximates all the data but may not fit any data points exactly. A "best fit" by some criteria such as least squares is usually used. Approximation methods may not represent the least depth in an area accurately or they may move the position of peaks and deeps significantly. It is impera- tive that the model can be controlled to represent criti- cal data points exactly. Interpolation methods are thus more appropriate for this application. The data points which are selected for interpolation will be called model points in this presentation. Quite often they are significant data points such as a least depth or an area of slope change. The following sections discuss methods and previous research which are applicable to the problem. 19 A. POLYNOMIALS Czegledy (1977), Hardy (1971), Krumbein (1966), and VJhitten (1970), discuss the use of polynomials for surface representation. A polynomial mapping equation of two inde- pendent variables with a specified degree can be produced which fits a few data points exactly or approximates all the data in a least squares sense. In either case, the sys- tem of equations which must be solved becomes ill-conditioned as the degree of the polynomial increases. This can be alleviated by using orthogonal polynomials. In the method of orthogonal polynomials, a collocated series of inde- pendent surfaces, linear, quadratic, cubic, etc., is generated. The summation of these surfaces is the mapping equation which defines the model. 
Increasing detail is gained by solving for and adding the surface of next higher order. This method has proven useful for trend analysis of maps. However, it has been rejected by some investigators for applications requiring more accuracy. The reason, as stated by Hardy (1971), is that the "ordinary collocated polynomial series is unmanageable in representing the sometimes rapid and sharp variations of real topographic surfaces." Requiring a high degree polynomial to fit closely spaced irregular surface points in one area causes significant invalid variations in other areas. To avoid these problems, low degree polynomials have been used in a local approximation mode with success, but this does not produce a global surface model.

B. DOUBLE FOURIER SERIES

The double Fourier series model is discussed by James (1966) and Krumbein (1966). It is produced by a series of independent harmonic surfaces having wave forms of diminishing wave length as the order of the surface increases. This technique has proven valuable for trend analysis, particularly when the surface features show oscillating patterns. Unfortunately, the models require high order surfaces to represent sharp terrain features. Such surfaces produce oscillations with large variations between data points and have many of the same drawbacks as the collocated polynomial series.

C. FINITE ELEMENTS

Gold, Charters and Ramsden (1976) discuss a method of surface representation in which a system of triangles with data points at the vertices is imposed on a surface. An interpolating function is used to estimate the surface in each triangular element. The interpolant is developed so that the surface passes through the vertices and makes a smooth transition from one triangle to the next. Peucker, Fowler, Little and Mark (1977) have developed a similar system of surface representation by Triangulated Irregular Networks (TIN). Rather than a smooth interpolant, the TIN system uses the planes defined by the three function values at the vertices of each triangle to represent the surface. Considerable work has been done on automated techniques for selecting appropriate points to be used for vertices and on development of data structures for storage of the vertices, neighboring points, and neighboring triangles. The TIN system was developed specifically for digital representation of topographic surfaces. Finite element systems such as these are local methods. Detail can be easily incorporated into the model by adding points where required without affecting the model elsewhere. Very little computation is required but searching the data structure to find the appropriate element is necessary. Such systems are generally independent of scale unless a scale dependent interpolant is used. A single expression which represents the surface is not generated by these methods.

D. SHEPARD'S FORMULA

Shepard's method, as described by Poeppelmeier (1975), Barnhill (1977) and Franke (1979), has been widely used to interpolate random data but has never been used for topographic surface representation. The model is produced by taking a weighted average of the model points to interpolate the surface value at other points. Shepard's formula is expressed by

    f = \frac{\sum_{i=1}^{n} w_i f_i}{\sum_{i=1}^{n} w_i}   if d_i \ne 0 for all i;        f = f_i   if d_i = 0 for any i        (1)

where the f_i are the depths at the model points; d_i is the distance from the ith model point to the point of computation; and the weight assigned to each model point, w_i, is a function of 1/d_i.
Two such weighting functions used in this project were simply the inverse distance (1/d_i) and the inverse distance squared (1/d_i^2). In this method, all model points contribute to the value of f, but the effect of any model point on the interpolant decreases as the distance from that point increases. Another appealing feature of this method is that the value of f will always be between the minimum and maximum values of the model points.

Franke and Little's modification to Shepard's method restricts the weighted summation to only those model points within a radius R of the computation point. With this modification, the weighting function approaches zero as the distance approaches R and remains zero at distances greater than R. The modified Shepard's formula is expressed by

    f = \frac{\sum_{i=1}^{n} \left[ \frac{(R - d_i)_+}{R d_i} \right]^2 f_i}{\sum_{i=1}^{n} \left[ \frac{(R - d_i)_+}{R d_i} \right]^2}   if d_i \ne 0 for all i;        f = f_i   if d_i = 0 for any i        (2)

or

    f = \frac{\sum_{i=1}^{n} \frac{(R - d_i)_+}{R d_i} f_i}{\sum_{i=1}^{n} \frac{(R - d_i)_+}{R d_i}}   if d_i \ne 0 for all i;        f = f_i   if d_i = 0 for any i        (3)

where

    (R - d_i)_+ = R - d_i   for d_i < R;        (R - d_i)_+ = 0   for d_i \ge R.        (4)

The weighting functions 1/d_i and (R - d_i)_+ / (R d_i) produce surfaces with cusps at the model points. The weighting functions 1/d_i^2 and [(R - d_i)_+ / (R d_i)]^2 produce surfaces with flat spots at those points. For higher order functions of 1/d_i these flat spots increase in size and the slopes between them become steeper. These properties are shown in two dimensions in Figure 1.

Figure 1 - Shepard's Formula with Various Weighting Functions

These formulas do not require solution of systems of equations and are easily modified by simply adding significant data points without recomputing any coefficients. They are independent of scaling, global in nature, and the computation is very simple.

E. DUCHON'S METHOD

The method of Duchon (1976), which was developed as thin plate surface theory, is described by Meinguet (1979) and Harder and Desmarais (1972). It has never been used for topographic surfaces but has been used for other surface analyses. To develop this model, individual surfaces called basis or kernel functions, which are centered at the model points, are summed to yield a global surface. There is a coefficient associated with each kernel function which determines the magnitude of the effect of that kernel function on the total surface. The expression for the model is

    f = \sum_{i=1}^{n} C_i F(X, Y, X_i, Y_i) + A_1 + A_2 X + A_3 Y        (5)

where n is the number of basis functions and model points used. The last three terms represent a plane which is also added into the model. The n+3 coefficients C_i, i = 1, ..., n, A_1, A_2 and A_3 are determined by solving the following system of n+3 equations.

    f_1 = \sum_{i=1}^{n} C_i F(X_1, Y_1, X_i, Y_i) + A_1 + A_2 X_1 + A_3 Y_1
    ...
    f_n = \sum_{i=1}^{n} C_i F(X_n, Y_n, X_i, Y_i) + A_1 + A_2 X_n + A_3 Y_n
    0 = \sum_{i=1}^{n} C_i
    0 = \sum_{i=1}^{n} C_i X_i
    0 = \sum_{i=1}^{n} C_i Y_i        (6)

where f_1, f_2, ..., f_n are the surface values at the model points (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n). Duchon used two basis functions

    F(X, Y, X_i, Y_i) = d_i^3   and   F(X, Y, X_i, Y_i) = d_i^2 \log d_i        (7)

where

    d_i = ((X - X_i)^2 + (Y - Y_i)^2)^{1/2} .        (8)

d_i is the horizontal distance from each model point to the point of computation. Duchon's method using the above basis functions is independent of scale. During experimentation, a third basis function

    F(X, Y, X_i, Y_i) = d_i \log d_i        (9)

was also used. The models using this latter basis function are dependent on scale.
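To make the structure of equations 5 and 6 concrete, the following is a minimal sketch of a thin-plate fit of this kind in Python with NumPy. It is an illustration only, not the FORTRAN implementation (subroutine LEQ2S) used in this study; the function names, the choice of the d_i^2 log d_i basis function, and the assumption that the model points arrive as one-dimensional coordinate and depth arrays are illustrative conventions rather than anything taken from the thesis.

```python
import numpy as np

def fit_duchon(x, y, f):
    """Fit the thin-plate model of equation 5 to n model points.

    x, y, f are 1-D NumPy arrays of model-point coordinates and depths.
    Returns the kernel coefficients C_i and the plane coefficients
    (A1, A2, A3) from the (n + 3) x (n + 3) system of equation 6."""
    n = len(x)
    d = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    # Basis function F = d^2 log d (equation 7), taken as 0 at d = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0.0, d * d * np.log(d), 0.0)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n] = 1.0          # A1 column of equation 6
    A[:n, n + 1] = x        # A2 column
    A[:n, n + 2] = y        # A3 column
    A[n, :n] = 1.0          # sum of C_i       = 0
    A[n + 1, :n] = x        # sum of C_i X_i   = 0
    A[n + 2, :n] = y        # sum of C_i Y_i   = 0
    rhs = np.concatenate([np.asarray(f, dtype=float), np.zeros(3)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]

def eval_duchon(xq, yq, x, y, coef, plane):
    """Evaluate the fitted surface (equation 5) at query points xq, yq."""
    xq, yq = np.asarray(xq, dtype=float), np.asarray(yq, dtype=float)
    d = np.hypot(xq[:, None] - x[None, :], yq[:, None] - y[None, :])
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0.0, d * d * np.log(d), 0.0)
    return K @ coef + plane[0] + plane[1] * xq + plane[2] * yq
```

Hardy's multiquadric model, described next, has the same fit-then-evaluate structure, with a quadric kernel in place of F and without the three plane terms.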
F. HARDY'S MULTIQUADRIC ANALYSIS

Multiquadric analysis, as discussed by Hardy (1971, 1972a, 1972b, 1975 and 1977), resulted from a search for a satisfactory and efficient method to represent topography by an analytical model. As suggested by its name, the method consists of summing many quadric surfaces (cones, hyperboloids, paraboloids, etc.), each associated with a model point, to obtain a global surface. Superficially, this method is similar to Duchon's method except that the kernel functions are quadric surfaces and the additional three terms are not used. The expression for this model is

    f = \sum_{i=1}^{n} C_i Q(X, Y, X_i, Y_i)        (10)

where f is the surface value at the point (X, Y); Q is the quadric surface or kernel function; (X_i, Y_i), i = 1, ..., n are the model points at which the kernels are centered; and C_i, i = 1, ..., n are coefficients assigned to each surface. The following system of equations is used to solve for the unknown coefficients.

    f_1 = \sum_{i=1}^{n} C_i Q(X_1, Y_1, X_i, Y_i)
    ...
    f_n = \sum_{i=1}^{n} C_i Q(X_n, Y_n, X_i, Y_i)        (11)

III. HYDROGRAPHIC SURVEYS

A. GENERAL DESCRIPTION

Figure 5 - Portion of a Hydrographic Survey Sheet

1. Survey Scale

The survey scale is the ratio of distance on the survey sheet to the corresponding distance on the earth. The scale chosen for a survey depends on "the area to be covered and the amount of detail necessary to depict adequately the bottom topography and portray the least depths over critical features." (Umbach, 1976) The survey scale is usually at least twice as large as the scale of any chart published for the area. Large scale surveys cover less area than small scale surveys, but greater detail can be represented. For this reason, large scale surveys are conducted in harbors, anchorages, restricted navigable waterways, and areas where dangers to navigation are numerous. Areas with considerable detail are the most difficult to represent adequately by a mathematical expression. Three of the four data sets used in this project were from large scale surveys.

2. Horizontal Position Accuracy

Umbach (1976) specifies that plotted positions, "whether observed by visual or electronic methods, combined with plotting error shall seldom exceed 1.5 mm (0.05 in.) at the scale of the survey." On a 1:5000 scale survey, the position of each sounding should thus be represented to within 7.5 meters of its actual position on the earth. This is important in evaluating a mathematical model. One of the data sets had some very steep slopes, where an error of a few meters in positioning would produce a depth variation of several fathoms. For areas such as these, a much greater depth discrepancy between the model and the survey data should be tolerable.

3. Depth Accuracy

As seen in Figure 6, there are many components that make up the depths represented in a hydrographic survey.
In addition to the depth recorded by the sounding instrument, there are corrections for velocity of sound in the water column, the stage of the tide, and the dynamic vessel draft. Sometimes surveys have slight inconsistencies where data from two different vessels or two different days are adjacent or intermixed. These might be due to changes in the water column structure that affect the velocity of sound, an error in determining offshore tide corrections from tide gages near the shore, unrecorded changes in vessel speed affecting the dynamic vessel draft, a slight systematic error in vessel positioning, etc. Even more critical is the effect of waves on the sounding vessel. Small vessels t change vertical position rapidly as waves pass while the instruments record the depth of the water column below the vessel. This depth is too great if the vessel is on a wave crest and too small if the vessel is in a trough. The angular orientation of the vessel is also affected by waves. If the vessel rolls to an angle greater than the sounding beam width, the depth recorded may not be under the vessel 36 Datum of reference Q- CU -o «3 3 +J U «=C L Bottom a. -C ■a Q. 0) e a> T3 o S •r" •n™ "O 4-> +-> (U O > OJ -a S- S- 0) U CD -P IS H JS O •H P O CO U U o o s •H O o CD ^ •P ft a v ||I 2 s a 2 a <£ « o3 1 &s g.s s 93 til o 1.11 2^S " | E 2 4 .— 6 o. ^i ci 6 d (N U3 d d w in d d do * d d m in d d r< rH rf u U u > — 5 a j 39 1. Monterey Bay, California This data set was taken from survey registry number H-9808. It was conducted in 1979 by NOAA Ship DAVIDSON and Naval Postgraduate School personnel and equipment. It covers the southernmost part of Monterey Bay including « Monterey Harbor. The survey was conducted at a scale of 1:5000. Only one vessel was used on the portion of the survey chosen for analysis. The sounding units are fathoms and depths range from 0 to 16 fathoms. The bottom has a large amount of detail. It slopes moderately downward from the shore and consists generally of mud and sand. In the middle there is an area thick v/ith kelp which is attached to a rocky irregular bottom. There are a few rocky areas in the deeper part as well. Figure 7 shows the bottom con- tours in one fathom increments. The scale of the plot has been reduced for presentation herein. 2. Morro Bay, California This data set was taken from survey registry number H-9737. It was conducted in 1978 by the NOAA Ship FAIRWEATHER. It covers a small part of Morro Bay and some navigable water- ways open to the bay. The survey was conducted at a scale of 1:5000. The sounding units are feet and depths range from 16 to 82 feet in the portion used for analysis. Figure 8 (reduced scale) shows the bottom contours in three foot incre- ments. There is one major feature near the center and con- siderable irregularity in the northeast corner of the area. Otherwise, the bottom slopes gently offshore. 40 Figure 7 - Monterey 3ay Data Set Contours (fathoms) 41 Figure 8 - Morro Bay Data Set Contours (feet) 42 3. Auke Bay, Alaska This data set came from a thesis project by Seidel (1979), a student at the Naval Postgraduate School, which investigated the affects of using multiple sounding beam widths for hydrographic surveys. The procedures were some- what non-standard since sounding lines were run much closer than normal in an attempt to gain 100% bottom coverage. Specifications for 1:5000 scale surveys were used but due to the dense sounding spacing, it was plotted at a scale of 1:2500. 
The data was incorporated into survey registry number H-9818. It was conducted in 1979 by Seidel and the NOAA Ship RANTER. It covers a small portion of Auke Bay in southeast Alaska. The sounding units are fathoms and depths range from 0 to 24 fathoms. The bottom is mostly mud and rock and shows a tremendous amount of variation due to glacial action. Very steep slopes are encountered in the area. At one point, the depth changes from 7 to 22 fathoms in a horizontal position change of only 30 meters. Figure 9 (reduced scale) shows the bottom contours of the central part of the data set in one fathom increments. 4. Gulf Coast The fourth data set was taken from survey registry number H-9785. It was conducted, in 1978 by the NOAA Ship MT MITCHELL at a scale of 1:20000 and covers an area in the 43 Gulf of Mexico off the coast of Louisiana. The sounding units are feet and depths range from 29 to 37 feet in the portion used for analysis. Figure 10 (reduced scale) shows the bottom contours in one foot increments. The bottom is generally flat with a very gentle slope. It consists mostly of mud and shell fragments. Some of the irregulari- ties seen in the bottom contours are in areas where the work of two vessels overlapped because of crosslines or junctions. The flat bottom and small contour increment make these irregularities stand out. The survey party reported that wave action was also a considerable problem during the con- duct of this survey. 44 CO B o -p a w 14 fJ o -p Pi o o •p 0 CO cd -p 03 R >> CO FP CD M a of the area which contains the data points. Dividing this by N gives a measure of the average area which could be assigned to each point. Multiplying by NPPR gives the area which could be associated with that many data points. Taking the square root of this gives a radius which would define that amount of area centered at the point of compu- tation. On the average there should be 1TPPR model points within a distance R from any point of evaluation. The tabulated statistical results express the radius in terms of NPPR instead of R. 2. Inverse Distance Weighting Function Table V gives the statistical results of the tests using the inverse distance weighting functions. The table shows that use of the modified Shepard method improved the results considerably. In all cases, the best results were obtained by including an average of six model points in the radius of influence. The table also indicates that no statistical improvement was made by increasing from 37 to 98 model points. The contours produced by this method (Figure 13) for both data sets were poor. The basic trend of the bottom can hardly be seen. The contours are quite wavy where they 61 TABLE Y - Shepard's Formula with Inverse Distance Weighting Function - Morro Bay Number of model points NPPR RMS difference Maximum positive difference Maximum negative difference Number of data points 37 6 2.06 8.17 -9.37 936 37 9 2.28 7.33 -9.68 936 37 All* 10.88 24.43 -25.96 936 67 4 2.45 8.50 -7.97 936 67 6 2.17 5.69 -6.94 936 67 9 2.31 6.86 -7.61 936 67 25 3.53 9.61 -12.68 936 67 All* 11.41 22.77 -28.58 936 98 4 2.41 8.11 -8.95 936 98 6 2.17 7.37 -6.73 936 98 9 2.22 8.64 -7.51 936 98 25 3.46 11.38 -12.51 936 98 56 5.03 13.33 -16.50 936 98 All* 12.50 21.33 -32.22 936 * Shepard's formula - All model points contributed to the weighted average. 62 Figure 13 - 98 Point Sheoard's Formula Model of Morro Bay (NPPR = 6) 63 should be straight. 
In some cases, peaks or deeps are pro- duced at the positions of model points which aren't found in the original data. -oJ 3. Inverse Distance Squared Weighting Function Table VI gives the statistical results of similar tests using the inverse distance squared weighting function. There is considerably less variability as NPPR is changed using this weighting function. The results are better for large NPPE. and for the unmodified version, but the best results at smaller NPPR did not improve. S. RESULTS OP HARDY'S MUITIQUADRIC ANALYSIS As indicated in equation 10, Hardy's multiquadric model is generated by summing quadric kernel surfaces, each of which are centered at model points. Hyperboloids, cones and inverse hyperboloids were the kernel surfaces tested. 1. Determination of U Both hyperboloids and inverse hyperboloids require the parameter 0 (Section II. P). Variation of U makes considerable difference in the results. The effect of any value of 0 on the shape of the quadric surfaces with respect to the entire model is related to the scale of the model. Hardy (1977) has indicated that the optimum value of U in his investigations was also related to the distance between model points. The following expression 64 TABLE VI - Shepard's Formula with. Inverse Distance Squared Weighting Function - Morro Bay Number of model points NPPR rms difference Maximum positive difference Maximum negative difference Number of data points 37 4 2.83 9.51 -8.90 936 37 9 2.37 8.75 -9.09 936 37 25 2.36 8.38 -9.57 936 37 All* 5.00 14.62 -17.06 936 67 4 2.81 9.14 -8.87 936 67 9 2.31 6.01 -6.88 936 67 25 2.34 6.71 -9.00 936 67 All* 5.54 15.42 -19.90 936 98 4 2.53 9.33 -9.46 936 98 6 2.33 7.53 -7.84 936 98 9 2.18 7.75 -7.05 936 98 25 2.26 9.56 -8.40 936 98 56 2.72 10.57 -11.66 936 98 All* 6.58 15.50 -22.48 936 * .q Shepard's formula - All model points contributed to the weighted average. 65 was used to relate 0 to the average density of the model points: 2 5 = * \|tt x °-1 x NPPR (17) V/ith this expression, the effect of 0 on- models of different scales will be similar as long as NPPR is the same. The tables in the following sections are expressed in terms of NPPR instead of the absolute value of (j . A cone is a special case of hyperboloid where Q is zero. The tables for hyperboloid kernels include cones by listing NPPR as zero. A zero value of (5 is not valid for inverse hyperboloids since the peak of an inverse hyperboloid increases to infinity as Q approaches zero. 2. Inverse Hyperboloid Kernels For both data sets when Q was small (NPPR=5), the contours showed holes at the model points which were not indicated in the original data. The representation of the actual surface was very poor. Increasing NPPR to 10, 15 and 20 gave somewhat better results and the bottom trends were evident but the representation was still not good. V/ith NPPR greater than 20, very steep slopes were created in large areas where no model points were chosen. The statistical results using the Morro Bay data set are given in Table 711. The results became worse as more model points were added. The best results were con- siderably poorer than the best results from other methods, particularly the maximum differences. 
66 TABLE VII - Multiquadric Analysis with Inverse Hyperboloid Kernels - Morro Bay Number Maximum Majcimum Number of model MS positive negative of data points NPPR difference difference difference points 37 5 4.99 4.71 -33.48 936 37 10 2.05 4.82 -21.09 936 37 15 1.59 4.56 -14.73 936 37 20 1.43 4.24 -10.90 936 67 5 6.11 5.50 -32.06 936 67 10 2.54 11.68 -21.15 936 67 15 2.69 17.74 -15.71 936 67 20 4.09 20.16 -15.60 936 98 5 23.35 2.35 -60.04 936 98 10 3.13 9.92 -23.96 936 98 15 3.98 21.97 -20.13 936 98 20 8.23 67.37 -33.72 936 67 3. Hyperboloid and Conic Kernels Hyperbolic and conic kernels were evaluated on both data sets and NPPR was varied from zero to 25 for each set of model points. With 42 regularly spaced model points on* the Monterey Bay data set, not much detail was evident hut the general bottom trends were well represented for all values of NPPR. Increasing to 60 model points gave more variation with NPPR. For NPPR set to zero and one the results were very good. See Figure 14. The detail was improved and the bottom trends v/ere still accu- rate. For NPPR set to 10 and 20, the results became pro- gressively worse. Very steep slopes were generated which created invalid peaks and deeps in areas where no model points were chosen. The reason for these slopes is apparent when examining the magnitude of the coefficients. For NPPR=1, the mean coefficient magnitude was 0.33; for NPPR=20, the mean coefficient magnitude was 151.63. When NPPR was increased to 25, the system of equations became so ill- conditioned that it could not be solved. This is due to the increased flatness of the hyperboloids when NPPR becomes large. In areas where several model points are very close in order to represent sharp irregularities in the bottom, the flat hyperboloids centered at those points can't produce the detail required. Increasing to 90 model points produced similar results. The statistical results from the Morro Bay data set are given in Table VIII. For small NPPR, the results became 68 Figure 14 - 60 Point Multiquadric Model of Monterey Bay (hyperboloid kernels) 69 TABLE VIII - Multiquadric Analysis with Hyperboloid and Conic Kernels - Morro Bay Number Maximum Maximum Number of model RMS positive negative of data points NPPR difference difference difference points 37 0 1.18 8.37 -6.59 936 37 1 1.14 6.00 -6.76 936 37 10 1.18 3.97 -6.92 936 37 25 1.24 3.90 -6.17 936 67 0 0.75 3.61 -3.09 936 67 1 0.78 3.59 -3.01 936 67 10 2.10 15.41 -6.79 936 67 25 12.04 62.50 -44.52 936 98 0 0.65 1.92 -2.14 936 98 1 0.68 2.06 -2.12 936 98 10 2.84 12.72 -14.11 936 98 20* 14.44 117.96 -43.61 936 * system couldn't be solved for NPPR=25 70 continually better as more points were added, for greater detail. As the model points became more dense, the best results were acquired by using conic kernels (NPPR=0). For the original 37 regularly spaced model points, the best results were for small but non-zero NPPR. The con- tour comparisons reflected the model quality demonstrated by the statistics. P. SUIMARY A graphical comparison of the statistical results of the methods is given in Pigure 15. Duchon's method with ■5 2 basis functions d and d logd gave only a fair representa- tion of the bottom with regularly spaced model points. Additional model points did not improve the results so this technique was rejected. The basis function d logd, was introduced which gave good results (comparable to the multiquadric method) in one case and poor results in another. This was due to a dependence on the horizontal scale of the data. 
Independence of scaling for hydrographic survey modeling is very important since surveys are plotted at various scales. The method with basis function d log d is unacceptable for this reason.

Shepard's formula gave best results in modified form with about six model points in each radius of influence. The inverse distance weighting function was better than the square of the inverse distance. The results were considerably worse than those of Duchon's or Hardy's methods. Improvement could not be gained by increasing the number of model points. For these reasons, Shepard's method was unacceptable.

Figure 15 - Comparison of Modeling Methods (RMS difference versus the percentage of data points used as model points; curves shown include multiquadric analysis with inverse hyperboloid kernels, Shepard's formula with 1/d^2 weighting, and multiquadric analysis with conic kernels)

Hardy's multiquadric analysis with inverse hyperboloids gave very poor results. For small δ, holes were produced at the model points and the depths between model points were not accurate.

Of the methods tested, only multiquadric analysis with conic or sharply pointed hyperboloid kernels gave results which indicated that further tests were warranted. Depiction of detail is improved by adding more model points without adversely affecting the model in other areas, and the method is independent of linear scaling.

VI. FURTHER TESTING OF MULTIQUADRIC ANALYSIS WITH CONIC AND HYPERBOLOID KERNELS

The results from the previous section showed that multiquadric analysis with conic or sharply pointed hyperboloid kernels was the only method tested that could meet the requirements of this application. Additional experimentation was done to determine the best procedures for selecting the model points and for joining models together at the boundaries. Tests were also run to determine how accurately the data sets could be represented with additional model points while still saving significant storage space.

A. SELECTION OF MODEL POINTS

The selection of the data points to be used for the modeling is a critical process in the development of the multiquadric model. Three methods for selection of the points were tested using the Auke Bay, Alaska data set. All point selection was done manually, but consideration was given to the difficulty in automating the process.

1. Regular Spacing Selection

In this method, data points from the survey were chosen at nearly even spacing without regard to depth, bottom features, contour separation or any other factor. To avoid biasing, they were selected from a plot of record numbers rather than a plot of depths or depth contours. Additional points for more detail and accuracy were chosen for subsequent runs, maintaining even spacing as much as possible without considering any factor except the horizontal distribution. The results of this procedure are presented in Table IX. The RMS differences were improved significantly when the number of model points was increased from 53 to 110, but the maximum positive differences were not improved. Additional densification of the model points produced little improvement in either the RMS differences or the maximum differences.

2. Iterative Selection

In the iterative selection process, the results of one model were used to eliminate some model points and select additional ones to produce a better model.
After developing the model with the first set of points, the comparison of survey data points with the model v/as analyzed, Additional model points were selected wherever single point comparisons showed the largest differences or in areas where several points showed relatively large differences of the same sign. Model points which had very small asso- ciated coefficients were eliminated. A small coefficient indicates that the associated basis function has little effect on the model since it remains near zero within the modeled area. The model was then computed with the new set of points. 75 TABLE IX - Selection of Model Points with Even Spacing Number of model points 1TPPR RMS difference Maximum positive difference Maximum negative difference Number of data points 53 0 2.04 9.51 -5.64 1407 53 1 2.00 9.45 -5.78 1407 53 5 1.91 3.92 -6.39 1407 53 10 1.90 8.61 -7.03 1407 53 15 1.91 8.68 -7.66 1407 53 20 1.93 8.68 -8.10 1407 110 0 1.37 9.84 -4.25 1407 110 1 1.33 9.95 -4.71 1407 110 2 1.30 10.04 -5.02 1407 110 5 1.26 10.15 -5.46 1407 110 7 1.26 10.19 -5.60 1407 110 10 1.26 10.28 -5.76 1407 110 15 1.29 10.46 -5.97 1407 144 1 1.25 9.86 -4.26 1407 144 3 1.21 9.95 -4.66 1407 144 5 1.13 10.07 -5.41 1407 144 7 1.11 10.07 -5.58 1407 144 10 1.10 10.06 -5.76 1407 144 15 1.11 10.10 -6.01 1407 76 This procedure could be repeated until the desired accuracy was attained, the maximum number of model points to be used was reached, or the model accuracy no longer improved with further iterations. For this comparison of selection methods, the process was repeated until the number of model points was approximately the same as the maximum number used in the test of the regular spacing selection method. Table X shows the results of these tests. In all tests, the best results were obtained when NPPR=0 (conic kernels). Both RMS and maximum differences improved sig- nificantly as the selection process was repeated. Two iterations yielded approximately the same number of model points as the maximum used in the regular spacing selection method. Points related to features such as peaks, deeps or sharp changes in slope were chosen for the initial set of model points in these comparisons. Regular spaced points for the initial set were used in other tests with the iterative method. The results were good for both methods of initial selection. After a few iterations relatively few points from the initial set remained so the initial point selection method made little difference. ^A comparison of model point selection by regular pacing and by iteration is shown in Figure 16. 77 TABLE X - Selection of Model Points by Iteration Number Maximum Maximum Number of model RMS positive negative of data points NPPR difference difference difference points 82 0 1.48 4.33 -5.52 1407 82 1 1.56 3.35 -5.50 1407 82 5 2.36 8.09 -9.43 1407 107 0 1.06 3.04 -3.29 1407 107 1 1.15 2.81 -3.32 1407 107 3 1.50 4.39 -4.82 1407 152 0 0.72 1.91 -2.53 1407 152 1 0.73 2.03 -2.72 1407 73 2.08-n R M S I F F 1.50 R §1.29 E ' 1 . 00 — F A J0.75J 0 M S 0.50 BY REGULAR SPACING BY ITERATION 1 I ' I ' I ' I ' I 40 60 80 100 120 140 NUMBER OF MODEL POINTS n 160 ^igure 16 - Comparison of Model Point Selection Methods 79 3. Complete Selection "by Topographic Feature While using the iterative selection process, it was found that the additional points were selected where there were significant changes of slope or where there were large areas without any model points. This led to an attempt to select all the model points in one step "based on the following criteria. 
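The iterative procedure compared above can be expressed compactly. The following Python sketch fits the multiquadric model of equation 11 with conic kernels (δ = 0) and then repeatedly adds the survey points showing the largest model-versus-data differences while dropping model points whose coefficients are negligible. It is only an outline of the idea under assumed conventions: the function and parameter names (fit_multiquadric, iterative_selection, n_start, n_add, max_points, drop_tol) are illustrative, the thresholds are arbitrary, and the selection in this study was actually done manually.

```python
import numpy as np

def fit_multiquadric(xm, ym, fm, delta=0.0):
    """Solve equation 11 for the kernel coefficients C_i.

    delta = 0 gives conic kernels; delta > 0 gives hyperboloid kernels."""
    d = np.hypot(xm[:, None] - xm[None, :], ym[:, None] - ym[None, :])
    return np.linalg.solve(np.sqrt(d * d + delta * delta), fm)

def eval_multiquadric(xq, yq, xm, ym, coef, delta=0.0):
    """Evaluate equation 10 at the query points (xq, yq)."""
    d = np.hypot(xq[:, None] - xm[None, :], yq[:, None] - ym[None, :])
    return np.sqrt(d * d + delta * delta) @ coef

def iterative_selection(x, y, f, n_start=50, n_add=25,
                        max_points=150, drop_tol=1e-3):
    """Crude automation of the iterative model-point selection sketched above.

    x, y, f are NumPy arrays holding the full survey data set; the initial
    model points are taken at regular intervals through the arrays.
    Returns the indices of the selected model points and the RMS difference."""
    idx = np.unique(np.linspace(0, len(x) - 1, n_start).astype(int))
    rms = np.inf
    for _ in range(20):                                   # cap on passes
        coef = fit_multiquadric(x[idx], y[idx], f[idx])
        resid = f - eval_multiquadric(x, y, x[idx], y[idx], coef)
        rms = float(np.sqrt(np.mean(resid * resid)))
        # Drop model points whose kernels contribute little to the surface.
        idx = idx[np.abs(coef) > drop_tol]
        if len(idx) >= max_points:
            break
        # Add the survey points where the model disagrees most with the data.
        chosen = set(idx.tolist())
        worst = [j for j in np.argsort(np.abs(resid))[::-1] if j not in chosen]
        if not worst:
            break
        idx = np.sort(np.concatenate([idx, worst[:n_add]])).astype(int)
    return idx, rms
```

The drop step corresponds to eliminating model points with very small associated coefficients, and the add step corresponds to placing new model points where single-point or same-sign differences are largest, as described above.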
• Select points at peaks, deeps, ridges and where slopes change significantly. • Select points to avoid leaving any large areas without model points as a result of the first criterion. A test was done by choosing 145 points to model only half of the Auke Bay data set. The RMS difference was 0.88 and the maximum difference was -3.22. These results show that one-shot selection is not nearly as good as the iterative method and probably not much better than the regular spacing method. 4. Summary The iterative selection method gave by far the best statistical results. It also required the most com- puter time. It would be adaptable to complete automation since the method for point selection has little subjectivity involved. 30 The regular spacing method would be easier to auto- mate hut it doesn't give good results when detail is required. The method of complete selection by feature doesn't give significantly better results than does regu- lar spacing and the method would be difficult to automate. B. MODEL JUNCTIONS Agreement between two or more surveys which have a common boundary or cover a common area is a very important check on the quality of the surveys. Similarly, agreement between the models representing the surveys must be main- tained to avoid ambiguity. The problem is even more acute when several models are joined together to represent a single survey. It would be desirable to represent an entire survey with a single model but in many cases this would be difficult due to the large systems of equations which would have to be solved to generate the model coefficients. Hardy (1971) suggested a simple method of functioning where common points on the boundary were used in adjacent models. This would assure that the models were in agree- ment at these points and if chosen at close intervals, the differences at intermediate points on the boundary would be relatively small. That method is not appropriate for this application since the data points are not along straight lines which could be used as boundaries. Common points on irregular boundaries could be used but this would complicate model boundary definition and storage. 31 Two other more appropriate methods were investigated for this application. Both use overlapping areas in the adjacent models rather than a common boundary. The first method is analogous to the method of using common points on a boundary because the model points within the overlapping area are required to be the same for both models. A line in the middle of the overlapping area is chosen to delineate the areas of model usage. See Figure 17. x x X X X X X X X X * X 1 « a f a " J 8 I* 0 0 0 0 ° 0 0 0 0 0 o |€ model A model B $ use model A — sf— use model E s>| x - model points for model A o - model points for model B a - model points common to models A and 3 , Figure 17 - Model Junctions by Overlap This method could produce small discontinuities at the boundary. The second method eliminates the discontinuity completely but requires more computation. The model points in the over- lapping area are not required to be common in both models. 82 In this area, a weighted sum of the values obtained from each model is used. The weight, w, is determined by the Hermite Polynomial w = 1 - 3s2 + 2s5 where s is the relative distance from the point of compu- tation to the overlap area boundary. The value of s varies from one at the outer model boundary to zero at the inner boundary of the overlap area (see Figure 18). x x x X X x x x X X XX X a x 0 a 0 + o . 
x a 0 0 0 o c o 0 0 0 D 0 « model A * K- model B 1*-use model A — % K-use model B — ^ le-D x 0 a + -use weighted sum model points for model A model poiivts for model B model points common to models A and B point of computation Figure 18 - Model Junction by Hermite Polynomial The weight assigned to the model A value at the point of computation in Figure 18 is determined by using D-d s = D ' (18) 33 The weight assigned to the model E value at the same point is determined "by using | s = % . (19) The sum of the resultant weights is always one. A plot of the Hermit e polynomial is shown in Figure 19. 1 0 1 Figure 19 - Hermit e Polynomial Because the first derivative of this function is zero at s=0 and s=l, the transition from one model to an adjacent one will be smooth and continuous. The Aulce Bay survey was divided into two overlapping models as pictured in Figures 17 and 18. Each was modeled separately using the iterative method of model point selection. The two models were then joined by the two methods dis- cussed above. The results are presented in Table XI. Both junction methods showed improved results over the individual models since some of the largest errors were located near the outer boundaries of the models. The results with the polynomial method were only slightly better. Con- tour comparisons between the junction methods showed little difference. The possible discontinuity at the boundary when not using the Hermite polynomial was not apparent in the contours. 84 CQ •P O -H O u p. 0) 0 -P 3 03 to, in >£> C\J 00 CO o o H O CM in CO • o I H in en H tn o I o •H -P o H 0) T) O S 0 C- CM 3-H FH CTv <* g -P O O tO CM tOv to EH CQ -P PI «H -H O O P !h 0 H ^ 03 s t3 3 O a S tn H , O o t~ C— o> CTi H H CM CM o H 0 o CD X3 •H CO -p CQ CD 0 T3 •H CQ -P 03 03 -p 0 CQ 0 ■P o3 -H -p a 03 T3 0 0 -H U -P-P •H 2 O o -p 0 CQ 03 -p 03 -d 0 u -p o 0 -p •H s R 0 85 These results have shown that transition from one model to another can he done smoothly. The method using the Hermite polynomial is only slightly better than the method using common points in an overlapping area. V/hen joining models • v/ithout common points, e.g. two different surveys, the ^ Hermite polynomial method will give a smooth non-ambiguous transition from one model to another. C. MODEL REFINEMENT Multiquadric analysis and the iterative method of model point selection were used to refine the models of all four data sets. Tables XII, XIII, XIV and XV give the statisti- cal results of these tests. Figures 20, 21, 22 and 23 show [ the effect of each iteration on the PMS differences. The number of model points is expressed as a percentage of the number of data points represented. This is a direct indi- cation of the storage savings attained by each model. All four figures show a similar trend. Initially, the results improve rapidly as the percentage of data points used in the model increases. The improvement then tends to level off and repeated iterations generate less improvement. Contour plots of the final models for each data set are shown in Figures 24, 25, 26 and 27. Comparison of these with Figures 7, 8, 9 and 10 show the agreement with contours of the original data. 86 For the Monterey Bay model, an RMS difference of 0.2 fathoms and a maximum difference of -0.74 fathoms are good considering the irregularity of the bottom. These results were obtained using 17.2% of the data points as model points. 
For the Morro Bay model, an RMS difference of 0.55 feet and a maximum difference of -1.58 feet are good considering that the depths range from 16 to 82 feet. This representation was made using only 15.7% of the data points. The allowable error specification (Section I.B) is one foot in the shallow end of this range and three feet in the deep end.

For the Auke Bay data set, an RMS difference of 0.30 fathoms and a maximum difference of -0.95 fathoms using 20.6% of the data points are good considering the steep slopes in the area. A horizontal positioning error of a few meters (within tolerance for the survey scale) could create several fathoms difference in many of the recorded depths.

Even though the range of depths in the Gulf Coast data set is small, an RMS difference of 0.30 feet and a maximum difference of -1.16 feet using 9.9% of the data points could be considered good. The allowable error (Section I.B) for these depths is one foot. There are places in the data set where crossline and adjacent soundings from multiple vessels disagree by as much as two feet. The model representation tends to smooth out such discrepancies, and this smoothing appears as relatively large differences in the statistics.

The Gulf Coast model gave the only case where a large δ (NPPR=50) gave much better results than smaller values. Only nine model points were used in that case. When more model points were added, a small NPPR was required for good results.

TABLE XII - Monterey Bay Model Results
TABLE XIII - Morro Bay Model Results
TABLE XIV - Auke Bay Model Results
TABLE XV - Gulf Coast Model Results
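For reference, the RMS and maximum differences quoted in these tables are simple functions of the model-minus-survey residuals. The sketch below shows one way to compute them; the function name is illustrative, and the sign convention (model depth minus observed depth) is an assumption, since the thesis does not state which direction of difference is taken as positive.

```python
import numpy as np

def difference_statistics(model_depths, survey_depths):
    """RMS, maximum positive, and maximum negative differences between
    depths computed from a model and the observed survey depths."""
    diff = (np.asarray(model_depths, dtype=float)
            - np.asarray(survey_depths, dtype=float))
    return {
        "rms_difference": float(np.sqrt(np.mean(diff * diff))),
        "maximum_positive_difference": float(diff.max()),
        "maximum_negative_difference": float(diff.min()),
        "number_of_data_points": int(diff.size),
    }
```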