harmonica.EquivalentSourcesGB

class harmonica.EquivalentSourcesGB(damping=None, points=None, depth: float | str = 'default', block_size=None, window_size='default', parallel=True, random_state=None, dtype='float64')

Gradient-boosted equivalent sources for generic harmonic functions.

Gradient-boosted version of harmonica.EquivalentSources, introduced in [Soler2021]. These equivalent sources are intended for fitting very large datasets, where the Jacobian matrices generated by regular equivalent sources (like harmonica.EquivalentSources) are larger than the available memory. They fit the source coefficients iteratively using overlapping windows of equal size, greatly reducing the memory requirements.

Smaller windows lower the memory requirements, but very small windows may impact the accuracy of the interpolations. We recommend using the largest windows that generate Jacobian matrices that fit in the available memory.
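
The basic workflow is to fit on scattered data and then predict or grid. A minimal sketch, assuming synthetic data generated with verde (the data values, damping, and window_size below are illustrative placeholders, not recommendations):

>>> import numpy as np
>>> import verde as vd
>>> import harmonica as hm
>>> # Synthetic scattered observations at 100 m height
>>> coordinates = vd.scatter_points(
...     region=(0, 50e3, 0, 50e3), size=10000, extra_coords=100, random_state=0
... )
>>> data = np.sin(coordinates[0] / 5e3) + np.cos(coordinates[1] / 5e3)
>>> eqs = hm.EquivalentSourcesGB(damping=10, window_size=10e3)
>>> eqs = eqs.fit(coordinates, data)
>>> predicted = eqs.predict(coordinates)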

Parameters:
damping : None or float

The positive damping regularization parameter. Controls how much smoothness is imposed on the estimated coefficients. If None, no regularization is used.

points : None or list of arrays (optional)

List containing the coordinates of the equivalent point sources. Coordinates are assumed to be in the following order: (easting, northing, upward). If None, one point source will be placed below each observation point at a fixed relative depth [Cordell1992]. Defaults to None.

depth : float or "default"

Parameter used to control the depth at which the point sources will be located. If a value is provided, each source is located beneath each data point (or block-averaged location) at a depth equal to its elevation minus the depth value. If set to "default", the depth of the sources will be estimated as 4.5 times the mean distance between first neighboring sources. This parameter is ignored if points is specified. Defaults to "default".

block_size : float, tuple = (s_north, s_east), or None

Size of the blocks used on block-averaged equivalent sources. If a single value is passed, the blocks will have a square shape. Alternatively, the dimensions of the blocks in the South-North and West-East directions can be specified by passing a tuple. If None, no block-averaging is applied. This parameter is ignored if points is specified. Defaults to None.

window_size : float or "default"

Size of the overlapping windows used during the gradient-boosting algorithm. Smaller windows reduce the memory requirements of the source coefficient fitting process, but very small windows may impact the accuracy of the interpolations. Defaults to estimating a window size such that approximately 5000 data points fall in each window.

parallel : bool

If True, predictions and Jacobian building are carried out in parallel through Numba's jit.prange, reducing the computation time. If False, these tasks will be run on a single CPU. Defaults to True.

dtype : data-type

The desired data-type for the predictions and the Jacobian matrix. Defaults to "float64".
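
For illustration, two hypothetical configurations of the parameters above (the numeric values are placeholders, not recommendations):

>>> import harmonica as hm
>>> # Sources 1 km below each block-averaged location, using 500 m blocks
>>> eqs = hm.EquivalentSourcesGB(depth=1000, block_size=500, damping=10)
>>> # Rectangular blocks: 500 m south-north by 250 m west-east
>>> eqs = hm.EquivalentSourcesGB(block_size=(500, 250))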

References

[Soler2021]

Soler, S. R. and Uieda, L. (2021). Gradient-boosted equivalent sources. Geophysical Journal International, 227(3), 1768-1783. doi:10.1093/gji/ggab297

[Cordell1992]

Cordell, L. (1992). A scattered equivalent-source method for interpolation and gridding of potential-field data in three dimensions. Geophysics, 57(4), 629-636. doi:10.1190/1.1443275

Attributes:
points_ : 2d-array

Coordinates of the equivalent point sources.

coefs_ : array

Estimated coefficients of every point source.

region_ : tuple

The boundaries ([W, E, S, N]) of the data used to fit the interpolator. Used as the default region for the grid method.

depth_ : float or None

Estimated depth of the sources, calculated as 4.5 times the mean distance between first neighboring sources. This attribute is set to None if points is passed.

window_size_ : float or None

Size of the overlapping windows used in gradient-boosting equivalent point sources. It will be set to None if window_size="default" and fewer than 5000 data points were used to fit the sources; a single window is used in that case.

Methods

estimate_required_memory(coordinates)

Estimate the memory required for storing the largest Jacobian matrix.

filter(coordinates, data[, weights])

Filter the data through the gridder and produce residuals.

fit(coordinates, data[, weights])

Fit the coefficients of the equivalent sources.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

grid(coordinates[, dims, data_names, projection])

Interpolate the data onto a regular grid.

jacobian(coordinates, points)

Make the Jacobian matrix for the equivalent sources.

predict(coordinates)

Evaluate the estimated equivalent sources on the given set of points.

profile(point1, point2, upward, size[, ...])

Interpolate data along a profile between two points.

scatter([region, size, random_state, dims, ...])

Not implemented; will be deprecated in Verde v2.0.0.

score(coordinates, data[, weights])

Score the gridder predictions against the given data.

set_fit_request(*[, coordinates, data, weights])

Request metadata passed to the fit method.

set_params(**params)

Set the parameters of this estimator.

set_predict_request(*[, coordinates])

Request metadata passed to the predict method.

set_score_request(*[, coordinates, data, ...])

Request metadata passed to the score method.

EquivalentSourcesGB.estimate_required_memory(coordinates)

Estimate the memory required for storing the largest Jacobian matrix.

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, upward, …). Only easting, northing, and upward will be used, all subsequent coordinates will be ignored.

Returns:
memory_required : int

Amount of memory required to store the largest Jacobian matrix in bytes.

Examples

>>> import verde as vd
>>> from harmonica import EquivalentSourcesGB
>>> coordinates = vd.scatter_points(
...     region=(-1e3, 3e3, 2e3, 5e3),
...     size=100,
...     extra_coords=100,
...     random_state=42,
... )
>>> eqs = EquivalentSourcesGB(window_size=2e3)
>>> n_bytes = eqs.estimate_required_memory(coordinates)
>>> int(n_bytes)
9800
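
A possible follow-up, sketched under the assumption that psutil is installed, is to halve the window size until the largest Jacobian fits comfortably in the available memory:

>>> import psutil
>>> window_size = 50e3
>>> while True:
...     eqs = EquivalentSourcesGB(window_size=window_size)
...     required = eqs.estimate_required_memory(coordinates)
...     # Keep the largest Jacobian under half of the free memory
...     if required < 0.5 * psutil.virtual_memory().available:
...         break
...     window_size /= 2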

EquivalentSourcesGB.filter(coordinates, data, weights=None)

Filter the data through the gridder and produce residuals.

Calls fit on the data, evaluates the residuals (data - predicted data), and returns the coordinates, residuals, and weights.

Not very useful by itself but this interface makes gridders compatible with other processing operations and is used by verde.Chain to join them together (for example, so you can fit a spline on the residuals of a trend).

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). For the specific definition of coordinate systems and what these names mean, see the class docstring.

data : array or tuple of arrays

The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).

weights : None or array or tuple of arrays

If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).

Returns:
coordinates, residuals, weights

The coordinates and weights are the same as the input. Residuals are the input data minus the predicted data.
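
A sketch of the verde.Chain use case mentioned above, fitting equivalent sources on the residuals of a first-order trend (the damping value is a placeholder):

>>> import verde as vd
>>> import harmonica as hm
>>> chain = vd.Chain([
...     ("trend", vd.Trend(degree=1)),
...     ("eqs", hm.EquivalentSourcesGB(damping=10)),
... ])
>>> chain = chain.fit(coordinates, data)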

EquivalentSourcesGB.fit(coordinates, data, weights=None)

Fit the coefficients of the equivalent sources.

The fitting process is carried out through the gradient-boosting algorithm. The data region is captured and used as default for the grid method.

All input arrays must have the same shape.

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, upward, …). Only easting, northing, and upward will be used, all subsequent coordinates will be ignored.

data : array

The data values of each data point.

weights : None or array

If not None, then the weights assigned to each data point. Typically, this should be 1 over the data uncertainty squared.

Returns:
self

Returns this estimator instance for chaining operations.
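
For example, weighting by a known per-datum uncertainty (the uncertainty value here is hypothetical):

>>> import numpy as np
>>> uncertainty = np.full(data.size, 1.5)  # hypothetical, in data units
>>> weights = 1 / uncertainty**2  # 1 over the squared uncertainty
>>> eqs = EquivalentSourcesGB(damping=10).fit(coordinates, data, weights=weights)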

EquivalentSourcesGB.get_metadata_routing()

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

EquivalentSourcesGB.get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

EquivalentSourcesGB.grid(coordinates, dims=None, data_names=None, projection=None, **kwargs)

Interpolate the data onto a regular grid.

The coordinates of the regular grid must be passed through the coordinates argument as a tuple containing three arrays in the following order: (easting, northing, upward). They can be easily created through the verde.grid_coordinates function. If the grid points must all be at the same height, it can be specified in the extra_coords argument of verde.grid_coordinates.

Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output xarray.Dataset. Default names will be provided if none are given.

Parameters:
coordinates : tuple of arrays

Tuple of arrays containing the coordinates of the grid in the following order: (easting, northing, upward). The easting and northing arrays can be 1d or 2d arrays; if they are 2d, they must be part of a meshgrid. The upward array should be a 2d array with the same shape as easting and northing (if they are 2d arrays) or with a shape of (northing.size, easting.size) (if they are 1d arrays).

dims : list or None

The names of the northing and easting data dimensions, respectively, in the output grid. Default is determined from the dims attribute of the class. Must be defined in the following order: northing dimension, easting dimension. NOTE: This is an exception to the "easting" then "northing" pattern but is required for compatibility with xarray.

data_names : list or None

The name(s) of the data variables in the output grid. Defaults to ['scalars'].

projection : callable or None

If not None, then should be a callable object projection(easting, northing) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected easting and northing coordinate arrays. This function will be used to project the generated grid coordinates before passing them into predict. For example, you can use this to generate a geographic grid from a Cartesian gridder.

Returns:
grid : xarray.Dataset

The interpolated grid. Metadata about the interpolator is written to the attrs attribute.
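
A sketch of the route described above, assuming a fitted estimator eqs (the spacing, height, and data name are placeholders):

>>> import verde as vd
>>> # Regular grid over the fitted region, all points at 500 m height
>>> grid_coords = vd.grid_coordinates(
...     region=eqs.region_, spacing=500, extra_coords=500
... )
>>> grid = eqs.grid(coordinates=grid_coords, data_names=["gravity_disturbance"])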

EquivalentSourcesGB.jacobian(coordinates, points)

Make the Jacobian matrix for the equivalent sources.

Each column of the Jacobian is the Green’s function for a single point source evaluated on all observation points.

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, upward). Each array must be 1D.

points : tuple of arrays

Tuple of arrays containing the coordinates of the equivalent point sources in the following order: (easting, northing, upward). Each array must be 1D.

Returns:
jacobian : 2D array

The (n_data, n_points) Jacobian matrix.
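
For instance, with a fitted estimator one can build the full Jacobian and inspect it, reusing the coordinates from the fitting example (for large datasets this single matrix may exhaust memory, which is what the gradient-boosted fit avoids):

>>> jacobian = eqs.jacobian(coordinates[:3], tuple(eqs.points_))
>>> # jacobian has shape (n_data, n_points)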

EquivalentSourcesGB.predict(coordinates)

Evaluate the estimated equivalent sources on the given set of points.

Requires a fitted estimator (see fit).

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, upward, …). Only easting, northing, and upward will be used, all subsequent coordinates will be ignored.

Returns:
data : array

The data values evaluated on the given points.
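
For example, evaluating the fitted sources on the same horizontal locations but 1 km higher gives a rough upward continuation:

>>> easting, northing, upward = coordinates[:3]
>>> continued = eqs.predict((easting, northing, upward + 1000))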

EquivalentSourcesGB.profile(point1, point2, upward, size, dims=None, data_names=None, projection=None, **kwargs)

Interpolate data along a profile between two points.

Generates the profile along a straight line assuming Cartesian distances and the same upward coordinate for all points. Point coordinates are generated by verde.profile_coordinates. Other arguments for this function can be passed as extra keyword arguments (kwargs) to this method.

Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output pandas.DataFrame. Default names are provided.

Includes the calculated Cartesian distance from point1 for each data point in the profile.

To specify point1 and point2 in a coordinate system that would require projection to Cartesian (geographic longitude and latitude, for example), use the projection argument. With this option, the input points will be projected using the given projection function prior to computations. The generated Cartesian profile coordinates will be projected back to the original coordinate system. Note that the profile points are evenly spaced in projected coordinates, not the original system (e.g., geographic).

Parameters:
point1 : tuple

The easting and northing coordinates, respectively, of the first point.

point2 : tuple

The easting and northing coordinates, respectively, of the second point.

upward : float

Upward coordinate of the profile points.

size : int

The number of points to generate.

dims : list or None

The names of the northing and easting data dimensions, respectively, in the output dataframe. Default is determined from the dims attribute of the class. Must be defined in the following order: northing dimension, easting dimension. NOTE: This is an exception to the "easting" then "northing" pattern but is required for compatibility with xarray.

data_names : list or None

The name(s) of the data variables in the output dataframe. Defaults to ['scalars'] for scalar data, ['east_component', 'north_component'] for 2D vector data, and ['east_component', 'north_component', 'vertical_component'] for 3D vector data.

projection : callable or None

If not None, then should be a callable object projection(easting, northing, inverse=False) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected easting and northing coordinate arrays. Should also take an optional keyword argument inverse (default to False) that if True will calculate the inverse transform instead. This function will be used to project the profile end points before generating coordinates and passing them into predict. It will also be used to undo the projection of the coordinates before returning the results.

Returns:
table : pandas.DataFrame

The interpolated values along the profile.
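
For example, a 100-point profile at 500 m height between two hypothetical corner points:

>>> table = eqs.profile(point1=(0, 0), point2=(50e3, 50e3), upward=500, size=100)
>>> # The dataframe includes coordinates, the distance from point1, and the data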

EquivalentSourcesGB.scatter(region=None, size=300, random_state=0, dims=None, data_names=None, projection=None, **kwargs)

Warning

This method is not implemented. The scatter method will be deprecated in Verde v2.0.0.

EquivalentSourcesGB.score(coordinates, data, weights=None)

Score the gridder predictions against the given data.

Calculates the R² coefficient of determination between the predicted values and the given data values. A maximum score of 1 means a perfect fit. The score can be negative.

Warning

The default scoring will change from R² to negative root mean squared error (RMSE) in Verde 2.0.0. This may change model selection results slightly. The negative version will be used to maintain the behaviour of larger scores being better, which is more compatible with current model selection code.

If the data has more than 1 component, the scores of each component will be averaged.

Parameters:
coordinates : tuple of arrays

Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). For the specific definition of coordinate systems and what these names mean, see the class docstring.

data : array or tuple of arrays

The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).

weights : None or array or tuple of arrays

If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).

Returns:
score : float

The R² score.
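
A sketch of scoring on held-out data, using verde.train_test_split as one way to produce the split (the damping value is a placeholder):

>>> import verde as vd
>>> train, test = vd.train_test_split(coordinates, data, random_state=0)
>>> eqs = EquivalentSourcesGB(damping=10).fit(*train)
>>> r2 = eqs.score(*test)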

EquivalentSourcesGB.set_fit_request(*, coordinates: bool | None | str = '$UNCHANGED$', data: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') -> EquivalentSourcesGB

Request metadata passed to the fit method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for coordinates parameter in fit.

data : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for data parameter in fit.

weights : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for weights parameter in fit.

Returns:
self : object

The updated object.
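
A minimal sketch, only meaningful once scikit-learn metadata routing is enabled and this estimator is wrapped by a meta-estimator:

>>> import sklearn
>>> sklearn.set_config(enable_metadata_routing=True)
>>> eqs = EquivalentSourcesGB().set_fit_request(weights=True)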

EquivalentSourcesGB.set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
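
For example, a simple damping search with set_params and score, reusing the train/test split from the score example above (the candidate values are placeholders):

>>> eqs = EquivalentSourcesGB()
>>> scores = {}
>>> for damping in [None, 1e-3, 1e-1, 1e1]:
...     eqs.set_params(damping=damping)
...     eqs.fit(*train)
...     scores[damping] = eqs.score(*test)
>>> best_damping = max(scores, key=scores.get)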

EquivalentSourcesGB.set_predict_request(*, coordinates: bool | None | str = '$UNCHANGED$') -> EquivalentSourcesGB

Request metadata passed to the predict method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for coordinates parameter in predict.

Returns:
self : object

The updated object.

EquivalentSourcesGB.set_score_request(*, coordinates: bool | None | str = '$UNCHANGED$', data: bool | None | str = '$UNCHANGED$', weights: bool | None | str = '$UNCHANGED$') -> EquivalentSourcesGB

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
coordinates : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for coordinates parameter in score.

data : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for data parameter in score.

weights : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for weights parameter in score.

Returns:
self : object

The updated object.


Examples using harmonica.EquivalentSourcesGB

Gradient-boosted equivalent sources