API Reference

The heart Module

Core module with functions to calculate Green's Functions and synthetics. Also contains the main classes for setting up problem-specific parameters.

class heart.ArrivalTaper(**kwargs)[source]

Cosine arrival Taper.

a

float, default: -15.0

start of fading in; [s] w.r.t. phase arrival

b

float, default: -10.0

end of fading in; [s] w.r.t. phase arrival

c

float, default: 50.0

start of fading out; [s] w.r.t. phase arrival

d

float, default: 55.0

end of fading out; [s] w.r.t. phase arrival

check_sample_rate_consistency(deltat)[source]

Check if taper durations are consistent with GF sample rate.

get_pyrocko_taper(arrival_time)[source]

Get pyrocko CosTaper object that may be applied to trace operations.

Parameters:

arrival_time (float) – reference time [s] around which the taper will be applied

Return type:

pyrocko.trace.CosTaper

nsamples(sample_rate, chop_bounds=['b', 'c'])[source]

Returns the number of samples a tapered trace would have, given its sample rate and chop_bounds.

Parameters:

sample_rate (float) –
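Example: a minimal usage sketch (the import path beat.heart and the arrival time are assumptions for illustration):

    from beat.heart import ArrivalTaper

    # taper fading in between 15 s and 10 s before the phase arrival,
    # fading out between 50 s and 55 s after it (the defaults above)
    taper = ArrivalTaper(a=-15.0, b=-10.0, c=50.0, d=55.0)

    arrival_time = 120.0  # hypothetical reference arrival [s]
    cos_taper = taper.get_pyrocko_taper(arrival_time)

    # number of samples of the chopped, tapered trace at 2 Hz
    n = taper.nsamples(sample_rate=2.0, chop_bounds=['b', 'c'])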

class heart.BandstopFilter(**kwargs)[source]

Filter object defining suppressed frequency range of traces after time-domain filtering.

lower_corner

float, default: 0.12

Lower corner frequency

upper_corner

float, default: 0.25

Upper corner frequency

order

int, default: 4

order of filter, the higher the steeper

exception heart.CollectionError[source]
class heart.Covariance(**kwargs)[source]

Covariance of an observation. Holds data and model prediction uncertainties for one observation object.

data

numpy.ndarray (pyrocko.guts_array.Array), optional

Data covariance matrix

pred_g

numpy.ndarray (pyrocko.guts_array.Array), optional

Model prediction covariance matrix, fault geometry

pred_v

numpy.ndarray (pyrocko.guts_array.Array), optional

Model prediction covariance matrix, velocity model

check_matrix_init(cov_mat_str='')[source]

Check if the matrix is initialised and, if not, set it to zeros of the size of data.

chol(factor=1.0)[source]

Cholesky factor of the sum of ALL uncertainty covariance matrices.

property chol_inverse

Upper-right Cholesky factor of the inverse of the sum of ALL uncertainty covariance matrices. To be used as weight in the optimization.

Notes

Uses QR factorization on the inverse of the upper right Cholesky decomposed covariance matrix to obtain a proxy for the Cholesky decomposition of the inverse of the covariance matrix in case the inverse of the covariance matrix is not positive definite.

From: https://math.stackexchange.com/questions/3838462/cholesky-factors-of-covariance-and-precision-matrix/3842840#3842840

Return type:

upper triangle of the Cholesky decomposition

inverse(factor=1.0)[source]

Add and invert ALL uncertainty covariance Matrices.

property inverse_d

Invert DATA covariance Matrix.

property inverse_p

Add and invert different MODEL uncertainty covariance Matrices.

property log_pdet

Calculate the log of the determinant of the total matrix.

update_slog_pdet()[source]

Update shared variable with current log_norm_factor (lnf) (for pytensor models).
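Example: a minimal sketch using an identity data covariance (all values are purely illustrative):

    import numpy as num

    from beat.heart import Covariance

    nsamples = 50                             # hypothetical data length
    cov = Covariance(data=num.eye(nsamples))  # only data covariance set
    weight = cov.chol_inverse    # upper-triangular weight for optimization
    lognorm = cov.log_pdet       # log-determinant of the total matrix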

class heart.DataWaveformCollection(stations, waveforms=None, target_deltat=None)[source]

Collection of available datasets, data-weights, waveforms and DynamicTargets used to create synthetics.

Used to return mappings of the waveforms of interest to the involved data, weights and synthetics-generating objects.

Parameters:
  • stations (List of pyrocko.model.Station) – List of station objects that are contained in the dataset

  • waveforms (list) – of strings of tabulated phases that are to be used for misfit calculation

  • target_deltat (float) – sampling interval the data is going to be downsampled to

class heart.DiffIFG(**kwargs)[source]

Differential Interferogram class as geodetic target for the calculation of synthetics and container for SAR data.

unwrapped_phase

numpy.ndarray (pyrocko.guts_array.Array), optional

coherence

numpy.ndarray (pyrocko.guts_array.Array), optional

reference_point

tuple of 2 float objects, optional

reference_value

float, optional, default: 0.0

displacement

numpy.ndarray (pyrocko.guts_array.Array), optional

covariance

Covariance, optional

Covariance that holds data and model prediction covariance matrices

odw

numpy.ndarray (pyrocko.guts_array.Array), optional

Overlapping data weights, additional weight factor to the dataset for overlaps with other datasets

mask

numpy.ndarray (pyrocko.guts_array.Array), optional

Mask values for Euler pole region determination. Click polygon mask in kite!

get_data_mask(corr_conf)[source]

Returns extracted mask from kite scene

class heart.DynamicTarget(**kwargs)[source]

Undocumented.

response

pyrocko.response.PoleZeroResponse, optional

domain

str (pyrocko.guts.StringChoice), default: 'time'

Domain for signal processing and likelihood calculation.

update_target_times(sources=None, taperer=None)[source]

Update the target attributes tmin and tmax to do the stacking only in this interval. Adds twice the taper fade-in time to each taper side.

Parameters:
  • sources (list) – containing pyrocko.gf.seismosizer.Source objects

  • taperer (pyrocko.trace.CosTaper) –

class heart.Filter(**kwargs)[source]

Filter object defining frequency range of traces after time-domain filtering.

order

int, default: 4

order of filter, the higher the steeper

stepwise

bool, default: True

If set to true, the bandpass filter is applied in two consecutive steps: first high-pass, then low-pass.

class heart.FilterBase(**kwargs)[source]

Undocumented.

lower_corner

float, default: 0.001

Lower corner frequency

upper_corner

float, default: 0.1

Upper corner frequency

ffactor

float, optional, default: 1.5

Factor for tapering the corner frequencies in spectral domain.
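Example: a sketch of defining filter objects, assuming Filter and BandstopFilter inherit the corner-frequency attributes listed for FilterBase:

    from beat.heart import BandstopFilter, Filter

    bandpass = Filter(lower_corner=0.01, upper_corner=0.1, order=4)
    bandstop = BandstopFilter(lower_corner=0.12, upper_corner=0.25, order=4)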

class heart.FrequencyFilter(**kwargs)[source]

Undocumented.

tfade

float, default: 20.0

Rise/fall time in seconds of the taper applied in the time domain at both ends of the trace.

class heart.GNSSCompoundComponent(**kwargs)[source]

Collects many GNSS components and merges them into arrays to make synthetics generation more efficient.

los_vector

numpy.ndarray (pyrocko.guts_array.Array), optional

displacement

numpy.ndarray (pyrocko.guts_array.Array), optional

component

str, default: 'east'

direction of measurement, north/east/up

stations

list of pyrocko.model.gnss.GNSSStation objects, default: []

covariance

Covariance, optional

Covariance that holds data and model prediction covariance matrices

odw

numpy.ndarray (pyrocko.guts_array.Array), optional

Overlapping data weights, additional weight factor to the dataset for overlaps with other datasets

class heart.GeodeticDataset(**kwargs)[source]

Overall geodetic data set class

typ

str, default: 'SAR'

Type of geodetic data, e.g. SAR, GNSS, …

name

str, default: 'A'

e.g. GNSS campaign name or InSAR satellite track

get_corrections(hierarchicals, point=None)[source]

Needs to be specified on inherited dataset classes.

setup_corrections(event, correction_configs)[source]

Initialise geodetic dataset corrections such as Ramps or Euler Poles.

update_local_coords(loc)[source]

Calculate local coordinates with respect to given Location.

Parameters:

loc (pyrocko.gf.meta.Location) –

Return type:

numpy.ndarray (n_points, 3)

class heart.GeodeticResult(**kwargs)[source]

Result object assembling different geodetic data.

point

ResultPoint, default: ResultPoint()

processed_obs

numpy.ndarray (pyrocko.guts_array.Array), optional

processed_syn

numpy.ndarray (pyrocko.guts_array.Array), optional

processed_res

numpy.ndarray (pyrocko.guts_array.Array), optional

llk

float, optional, default: 0.0

class heart.IFG(**kwargs)[source]

Interferogram class as a dataset in the optimization.

master

str, optional

Acquisition time of master image YYYY-MM-DD

slave

str, optional

Acquisition time of slave image YYYY-MM-DD

amplitude

numpy.ndarray (pyrocko.guts_array.Array), optional

wrapped_phase

numpy.ndarray (pyrocko.guts_array.Array), optional

incidence

numpy.ndarray (pyrocko.guts_array.Array), optional

heading

numpy.ndarray (pyrocko.guts_array.Array), optional

los_vector

numpy.ndarray (pyrocko.guts_array.Array), optional

satellite

str, default: 'Envisat'

update_los_vector(force=False)[source]

Calculate LOS vector for given attributes incidence and heading angles.

Return type:

numpy.ndarray (n_points, 3)

class heart.Parameter(**kwargs)[source]

Optimization parameter; determines the bounds of the search space.

name

str, default: 'depth'

form

str, default: 'Uniform'

Type of prior distribution to use. Options: “Uniform”, …

lower

numpy.ndarray (pyrocko.guts_array.Array), default: array([0., 0.])

upper

numpy.ndarray (pyrocko.guts_array.Array), default: array([1., 1.])

testvalue

numpy.ndarray (pyrocko.guts_array.Array), default: array([0.5, 0.5])

random(shape=None)[source]

Create random samples within the parameter bounds.

Parameters:

shape (int or list of int) – number of draws from the distribution

Return type:

numpy.ndarray of size (n, m)
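Example: a sketch of a scalar depth parameter with a Uniform prior (bounds are illustrative):

    import numpy as num

    from beat.heart import Parameter

    depth = Parameter(
        name='depth',
        form='Uniform',
        lower=num.array([0.0]),
        upper=num.array([10.0]),
        testvalue=num.array([5.0]))

    draws = depth.random(10)  # 10 random draws within the bounds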

class heart.PolarityResult(**kwargs)[source]

Undocumented.

point

ResultPoint, default: ResultPoint()

processed_obs

numpy.ndarray (pyrocko.guts_array.Array), optional

llk

float, optional, default: 0.0

source_contributions

list of numpy.ndarray (pyrocko.guts_array.Array) objects, default: []

Synthetics of source individual contributions.

class heart.PolarityTarget(**kwargs)[source]

A polarity computation request depending on receiver information.

codes

tuple of 3 str objects, default: ('NW', 'STA', 'L')

network, station and location code

elevation

float, default: 0.0

station surface elevation in [m]

store_id

str (pyrocko.gf.meta.StringID), optional

ID of Green’s function store to use for the computation. If not given, the processor may use a system default.

azimuth_rad

float, optional

azimuth of sensor component in [rad], clockwise from north. If not given, it is guessed from the channel code.

distance

float, optional

epicentral distance of sensor in [m].

takeoff_angle_rad

float, optional

takeoff-angle of the ray emitted from the source with respect to distance and source depth, recorded by the sensor, in [rad]; upward ray when > 90.

phase_id

str, optional, default: 'any_P'

First arrival of seismic phase

exception heart.RayPathError[source]
class heart.ReferenceLocation(**kwargs)[source]

Reference Location for Green’s Function store calculations!

station

str, default: 'Store_Name'

This mimics the station.station attribute which determines the store name!

class heart.ResultPoint(**kwargs)[source]

Containing point in solution space.

events

list of pyrocko.model.event.Event objects, optional

post_llk

str, optional

describes which posterior likelihood value the point belongs to

point

dict of numpy.ndarray (pyrocko.guts_array.Array) objects, default: {}

Point in Solution space for which result is produced.

variance_reductions

dict of float objects, optional, default: {}

Variance reductions for each dataset.

class heart.ResultReport(**kwargs)[source]

Undocumented.

solution_point

dict of pyrocko.guts.Any objects, default: {}

result point

post_llk

str (pyrocko.guts.StringChoice), default: 'max'

Value of point of the likelihood distribution.

mean_point

dict of pyrocko.guts.Any objects, optional

mean of distributions, used for model prediction covariance calculation.

class heart.SeismicDataset(network='', station='STA', location='', channel='', tmin=0.0, tmax=None, deltat=1.0, ydata=None, mtime=None, meta=None, extra='')[source]

Extension to pyrocko.trace.Trace to have Covariance as an attribute.

class heart.SeismicResult(**kwargs)[source]

Result object assembling different traces of misfit.

point

ResultPoint, default: ResultPoint()

processed_obs

Trace, optional

arrival_taper

pyrocko.trace.Taper, optional

llk

float, optional, default: 0.0

taper

pyrocko.trace.Taper, optional

filterer

list of FilterBase objects, default: []

List of Filters that are applied in the order of the list.

source_contributions

list of Trace objects, default: []

synthetics of source individual contributions.

domain

str (pyrocko.guts.StringChoice), default: 'time'

type of trace

class heart.SpectrumDataset(network='', station='STA', location='', channel='', tmin=0.0, tmax=None, deltat=1.0, ydata=None, mtime=None, meta=None, fmin=0.0, fmax=5.0, deltaf=0.1)[source]

Extension of SeismicDataset to represent a spectrum dataset.

fmin

float, default: 0.0

fmax

float, default: 5.0

deltaf

float, default: 0.1

get_xdata()[source]

Create array for the frequency axis.

exception heart.StackingError[source]
class heart.StrainRateTensor(**kwargs)[source]

Undocumented.

exx

float, default: 10

eyy

float, default: 0

exy

float, default: 0

rotation

float, default: 0

property azimuth

Direction of eps2 relative to North [deg].

property eps1

Maximum extension eigenvalue of strain rate tensor, extension positive.

property eps2

Maximum compression eigenvalue of strain rate tensor, extension positive.

class heart.Trace(**kwargs)[source]

Undocumented.

class heart.WaveformMapping(name, config, stations, weights=None, channels=['Z'], datasets=[], targets=[], deltat=None, mapnumber=0)[source]

Maps synthetic waveform parameters to targets, stations and data.

Parameters:
  • name (str) – name of the waveform according to travel time tables

  • stations (list) – of pyrocko.model.Station

  • weights (list) – of pytensor.shared variables

  • channels (list) – of channel names valid for all the stations of this wavemap

  • datasets (list) – of heart.Dataset inherited from pyrocko.trace.Trace

  • targets (list) – of pyrocko.gf.target.Target

property hypersize

Return the size of the related hyperparameters as an integer.

prepare_data(source, engine, outmode='array', chop_bounds=['b', 'c'])[source]

Taper and filter data traces according to the given reference event. Traces are concatenated into a single array.

station_weeding(event, distances, blacklist=[])[source]

Weed stations and related objects based on distances and the blacklist. Works only once after initialisation!

heart.calculate_radiation_weights(takeoff_angles_rad, azimuths_rad, wavename)[source]

Get the wave radiation pattern for a given waveform using station propagation coefficients; numerically more efficient than radiation_matmul.

Notes

Pugh et al., A Bayesian method for microseismic source inversion, 2016, GJI Appendix A

heart.check_problem_stores(problem, datatypes)[source]

Check GF stores for empty traces.

heart.choose_backend(fomosto_config, code, source_model, receiver_model, gf_directory='qseis2d_green', version=None)[source]

Get backend related config.

heart.concatenate_datasets(datasets)[source]

Concatenate datasets into single arrays.

Parameters:

datasets (list) – of GeodeticDataset

Returns:

  • datasets (1d numpy.ndarray, n x 1)

  • los_vectors (2d numpy.ndarray, n x 3)

  • odws (1d numpy.ndarray, n x 1)

  • Bij (utility.ListToArrayBijection)

heart.ensemble_earthmodel(ref_earthmod, num_vary=10, error_depth=0.1, error_velocities=0.1, depth_limit_variation=600000.0)[source]

Create an ensemble of earthmodels that vary around a given input earth model by a Gaussian of 2 sigma (in percent; 0.1 = 10%) for the depth layers and for the P- and S-wave velocities. Vp/Vs is kept unchanged.

Parameters:
  • ref_earthmod (pyrocko.cake.LayeredModel) – Reference earthmodel defining layers, depth, velocities, densities

  • num_vary (scalar, int) – Number of variation realisations

  • error_depth (scalar, float) – 3 sigma error in percent of the depth for the respective layers

  • error_velocities (scalar, float) – 3 sigma error in percent of the velocities for the respective layers

  • depth_limit_variation (scalar, float) – depth threshold [m], layers with depth > than this are not varied

Return type:

List of Varied Earthmodels pyrocko.cake.LayeredModel
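Example: a sketch using the pyrocko built-in default earth model as reference:

    from pyrocko import cake

    from beat import heart

    ref_model = cake.load_model()  # pyrocko built-in default model
    ensemble = heart.ensemble_earthmodel(
        ref_model, num_vary=5, error_depth=0.1, error_velocities=0.1)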

heart.filterer_minmax(filterer)[source]

Get minimum and maximum corner frequencies of list of filterers

heart.geo_construct_gf(event, geodetic_config, crust_ind=0, execute=True, force=False)[source]

Calculate geodetic Green's Functions (GFs) and create a fomosto 'GF store' that is used repeatedly later on to calculate the synthetic displacements. Enables various different source geometries.

Parameters:
  • event (pyrocko.model.Event) – The event is used as a reference point for all the calculations. According to its location the earth model is built.

  • geodetic_config (config.GeodeticConfig) –

  • crust_ind (int) – Index to set to the Greens Function store

  • execute (boolean) – Flag to execute the calculation; if False, only the setup is tested

  • force (boolean) – Flag to overwrite existing GF stores

heart.geo_synthetics(engine, targets, sources, outmode='stacked_array', plot=False, nprocs=1)[source]

Calculate synthetic displacements for a given static fomosto Green's Function database for sources and targets on the earth's surface.

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (list) – containing pyrocko.gf.seismosizer.Source objects; the reference source is the first in the list!

  • targets (list) – containing pyrocko.gf.seismosizer.Target Objects

  • plot (boolean) – flag for looking at synthetics - not implemented yet

  • nprocs (int) – number of processors to use for synthetics calculation (currently no effect!)

  • outmode (string) – output format of synthetics; can be: 'array', 'arrays', 'stacked_array', 'stacked_arrays'

Returns:

  • depends on outmode:

  • 'stacked_array': numpy.ndarray (n_observations; ux-North, uy-East, uz-Down)

  • 'stacked_arrays': list of numpy.ndarray (target.samples; ux-North, uy-East, uz-Down)

  • returns NaN in displacements if the result is invalid!
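Example: a sketch with a hypothetical GF store directory, store id and source/target geometry:

    import numpy as num
    from pyrocko.gf import LocalEngine, RectangularSource, StaticTarget

    from beat import heart

    engine = LocalEngine(store_superdirs=['./gf_stores'])  # assumed path
    source = RectangularSource(
        lat=40.0, lon=30.0, depth=5e3, strike=120.0, dip=45.0, rake=90.0,
        length=10e3, width=5e3, slip=1.0)
    target = StaticTarget(
        lats=num.array([40.1]), lons=num.array([30.1]),
        interpolation='nearest_neighbor',
        store_id='psgrn_green_0')  # hypothetical store id

    # rows: n_observations; columns: ux-North, uy-East, uz-Down
    displacements = heart.geo_synthetics(
        engine, targets=[target], sources=[source], outmode='stacked_array')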

heart.get_fomosto_baseconfig(gfconfig, event, station, waveforms, crust_ind)[source]

Initialise fomosto config.

Parameters:
  • gfconfig (config.NonlinearGFConfig) –

  • event (pyrocko.model.Event) – The event is used as a reference point for all the calculations. According to its location the earth model is built.

  • station (pyrocko.model.Station or heart.ReferenceLocation) –

  • waveforms (List of str) – Waveforms to calculate GFs for, determines the length of traces

  • crust_ind (int) – Index to set to the Greens Function store

heart.get_phase_arrival_time(engine, source, target, wavename=None, snap=True)[source]

Get arrival time from Greens Function store for respective pyrocko.gf.seismosizer.Target, pyrocko.gf.meta.Location pair.

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • source (pyrocko.gf.meta.Location) – can be therefore pyrocko.gf.seismosizer.Source or pyrocko.model.Event

  • target (pyrocko.gf.seismosizer.Target) –

  • wavename (string) – of the tabulated phase that determines the phase arrival; needs to be the ID of a tabulated phase in the respective target.store. If None, the first tabulated phase is used.

  • snap (bool) – if True, forces the arrival time onto discrete samples of the store

Return type:

scalar, float of the arrival time of the wave
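Example: a sketch assuming a GF store with id 'crust_model_0' (hypothetical) that contains the tabulated phase:

    from pyrocko import model
    from pyrocko.gf import LocalEngine, Target

    from beat import heart

    engine = LocalEngine(store_superdirs=['./gf_stores'])  # assumed path
    event = model.Event(lat=40.0, lon=30.0, depth=8e3, time=0.0)
    target = Target(lat=40.5, lon=30.5, store_id='crust_model_0')

    t_arrival = heart.get_phase_arrival_time(
        engine, source=event, target=target, wavename='any_P')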

heart.get_phase_taperer(engine, source, wavename, target, arrival_taper, arrival_time=nan)[source]

Create a phase taperer according to synthetic travel times of the source-target pair and return a pyrocko.trace.CosTaper according to the defined arrival_taper times.

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • source (pyrocko.gf.meta.Location) – can be therefore pyrocko.gf.seismosizer.Source or pyrocko.model.Event

  • wavename (string) – of the tabulated phase that determines the phase arrival

  • target (pyrocko.gf.seismosizer.Target) –

  • arrival_taper (ArrivalTaper) –

  • arrival_time (float, optional) – shift on arrival time

Return type:

pyrocko.trace.CosTaper

heart.get_ramp_displacement(locx, locy, azimuth_ramp, range_ramp, offset)[source]

Get synthetic residual plane in azimuth and range direction of the satellite.

Parameters:
  • locx (shared array-like numpy.ndarray) – local coordinates [km] in east direction

  • locy (shared array-like numpy.ndarray) – local coordinates [km] in north direction

  • azimuth_ramp (pytensor.tensor.Tensor or numpy.ndarray) – vector with ramp parameter in azimuth

  • range_ramp (pytensor.tensor.Tensor or numpy.ndarray) – vector with ramp parameter in range

  • offset (pytensor.tensor.Tensor or numpy.ndarray) – scalar of offset in [m]

heart.get_slowness_taper(fomosto_config, velocity_model, distances)[source]

Calculate slowness taper for backends that determine the wavefield based on the velocity model.

Parameters:
  • fomosto_config (pyrocko.meta.Config) –

  • velocity_model (pyrocko.cake.LayeredModel) –

  • distances (tuple) – minimum and maximum distance [deg]

Return type:

tuple of slownesses

heart.get_velocity_model(location, earth_model_name, crust_ind=0, gf_config=None, custom_velocity_model=None)[source]

Get the velocity model at the specified location; combines given or crustal models with the global model.

Parameters:
  • location (pyrocko.meta.Location) –

  • earth_model_name (str) – Name of the base earth model to be used, check pyrocko.cake.builtin_models() for alternatives, default ak135 with medium resolution

  • crust_ind (int) – Index to set to the Green's Function store; 0 is the reference store, indexes > 0 use the reference model and vary its parameters by a Gaussian

  • gf_config (beat.config.GFConfig) –

  • custom_velocity_model (pyrocko.cake.LayeredModel) –

Return type:

pyrocko.cake.LayeredModel
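Example: a minimal sketch; depending on the configuration flags (e.g. use_crust2), passing a gf_config may be required:

    from pyrocko import model

    from beat import heart

    ref_loc = model.Event(lat=40.0, lon=30.0, depth=8e3)  # any Location works
    velmod = heart.get_velocity_model(
        ref_loc, earth_model_name='ak135-f-average.m', crust_ind=0)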

heart.init_datahandler(seismic_config, seismic_data_path='./', responses_path=None)[source]

Initialise datahandler.

Parameters:
  • seismic_config (config.SeismicConfig) –

  • seismic_data_path (str) – absolute path to the directory of the seismic data

Returns:

datahandler

Return type:

DataWaveformCollection

heart.init_geodetic_targets(datasets, event, earth_model_name='ak135-f-average.m', interpolation='nearest_neighbor', crust_inds=[0], sample_rate=0.0)[source]

Initiate a list of Static target objects given a list of indexes to the respective GF store velocity model variation index (crust_inds).

Parameters:
  • datasets (list) – of heart.GeodeticDataset for which the targets are being initialised

  • event (pyrocko.model.Event) – for geographic referencing of the targets

  • earth_model_name (str) – Name of the earth model that has been used for GF calculation.

  • sample_rate (scalar, float) – sample rate [Hz] of the Greens Functions to use

  • crust_inds (List of int) – Indexes of different velocity model realisations, 0 - reference model

  • interpolation (str) – Method of interpolation for the Greens Functions, can be ‘multilinear’ or ‘nearest_neighbor’

Return type:

List of pyrocko.gf.targets.StaticTarget

heart.init_seismic_targets(stations, earth_model_name='ak135-f-average.m', channels=['T', 'Z'], sample_rate=1.0, crust_inds=[0], interpolation='multilinear', reference_location=None, arrivaltime_config=None)[source]

Initiate a list of target objects given a list of indexes to the respective GF store velocity model variation index (crust_inds).

Parameters:
  • stations (List of pyrocko.model.Station) – List of station objects for which the targets are being initialised

  • earth_model_name (str) – Name of the earth model that has been used for GF calculation.

  • channels (List of str) – Components of the traces to be optimized for if rotated: T - transversal, Z - vertical, R - radial If not rotated: E - East, N- North, U - Up (Vertical)

  • sample_rate (scalar, float) – sample rate [Hz] of the Greens Functions to use

  • crust_inds (List of int) – Indexes of different velocity model realisations, 0 - reference model

  • interpolation (str) – Method of interpolation for the Greens Functions, can be ‘multilinear’ or ‘nearest_neighbor’

  • reference_location (ReferenceLocation or) – pyrocko.model.Station if given, targets are initialised with this reference location

Return type:

List of DynamicTarget

heart.init_wavemap(waveformfit_config, datahandler=None, event=None, mapnumber=0)[source]

Initialise wavemap, which sets targets, datasets and stations into relation to the seismic Phase of interest and allows individual specifications.

Returns:

wmap

Return type:

WaveformMapping

heart.log_determinant(A, inverse=False)[source]

Calculates the natural logarithm of the determinant of the given matrix according to the properties of a triangular matrix.

Parameters:
  • A (n x n numpy.ndarray) –

  • inverse (boolean) –

If true, calculates the log determinant of the inverse of the Cholesky decomposition, which is equivalent to taking the determinant of the inverse of the matrix.

L^T * L = R for inverse=False; L^{-1} * (L^{-1})^T = R^{-1} for inverse=True

Return type:

float, logarithm of the determinant of the input matrix A
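Example: a sketch with a small symmetric positive definite matrix, where the expected values are easy to verify by hand:

    import numpy as num

    from beat.heart import log_determinant

    A = num.diag([2.0, 3.0, 4.0])  # symmetric positive definite
    ld = log_determinant(A)                    # expected: log(2 * 3 * 4)
    ld_inv = log_determinant(A, inverse=True)  # expected: -log(24)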

heart.pol_synthetics(source, takeoff_angles_rad=None, azimuths_rad=None, wavename='any_P', radiation_weights=None)[source]

Calculate synthetic radiation pattern for given source and waveform at receivers defined through azimuths and takeoff-angles.

heart.polarity_construct_gf(stations, event, polarity_config, crust_ind=0, execute=False, force=False, always_raytrace=False)[source]

Calculate polarity Green's Functions (GFs) and create a repository 'store' that is used later on repeatedly to calculate the synthetic radiation patterns.

Parameters:
  • stations (list) – of pyrocko.model.Station objects that define the distances from the event for which the GFs are being calculated

  • event (pyrocko.model.Event) – The event is used as a reference point for all the calculations. According to its location the earth model is built.

  • polarity_config (config.PolarityConfig) –

  • crust_ind (int) – Index to set to the Green's Function store; 0 is the reference store, indexes > 0 use the reference model and vary its parameters by a Gaussian

  • execute (boolean) – Flag to execute the calculation; if False, only the setup is tested

  • force (boolean) – Flag to overwrite existing GF stores

heart.post_process_trace(trace, taper, filterer, taper_tolerance_factor=0.0, deltat=None, outmode=None, chop_bounds=['b', 'c'], transfer_function=None)[source]

Taper, filter and then chop one trace in place.

Parameters:
  • trace (SeismicDataset) –

  • arrival_taper (pyrocko.trace.Taper) –

  • filterer (list) – of Filterer

  • taper_tolerance_factor (float) – default: 0, cut exactly at the taper edges; taper.fadein times this factor determines the added tolerance

  • chop_bounds (str) – determines where to chop the trace on the taper attributes; may be a combination of [a, b, c, d]

heart.proto2zpk(magnification, damping, period, quantity='displacement')[source]

Convert magnification, damping and period of a station to poles and zeros.

Parameters:
  • magnification (float) – gain of station

  • damping (float) – damping ratio, dimensionless []

  • period (float) – in [s]

  • quantity (string) – in which related data are recorded

Return type:

lists of zeros, poles and gain
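Example: a sketch with illustrative instrument constants:

    from beat.heart import proto2zpk

    zeros, poles, gain = proto2zpk(
        magnification=2000.0, damping=0.7, period=1.0,
        quantity='displacement')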

heart.radiation_gamma(takeoff_angles_rad, azimuths_rad)[source]

Radiation weights for the seismic P phase

Return type:

array-like 3 x n_stations

heart.radiation_matmul(m9, takeoff_angles_rad, azimuths_rad, wavename)[source]

Get wave radiation pattern for given waveform, using matrix multiplication.

Notes

Pugh et al., A Bayesian method for microseismic source inversion, 2016, GJI Appendix A

heart.radiation_phi(azimuths_rad)[source]

Radiation weights for the seismic SH phase

Return type:

array-like 3 x n_stations x 3

heart.radiation_theta(takeoff_angles_rad, azimuths_rad)[source]

Radiation weights for the seismic SV phase

Return type:

array-like 3 x n_stations

heart.radiation_weights_p(takeoff_angles, azimuths)[source]

Station dependent propagation coefficients for P waves

Notes

Pugh et al., A Bayesian method for microseismic source inversion, 2016, GJI Appendix A

heart.radiation_weights_sh(takeoff_angles, azimuths)[source]

Station dependent propagation coefficients for SH waves

Notes

Pugh et al., A Bayesian method for microseismic source inversion, 2016, GJI Appendix A

heart.radiation_weights_sv(takeoff_angles, azimuths)[source]

Station dependent propagation coefficients for SV waves

Notes

Pugh et al., A Bayesian method for microseismic source inversion, 2016, GJI Appendix A

heart.seis_construct_gf(stations, event, seismic_config, crust_ind=0, execute=False, force=False)[source]

Calculate seismic Green's Functions (GFs) and create a repository 'store' that is used later on repeatedly to calculate the synthetic waveforms.

Parameters:
  • stations (list) – of pyrocko.model.Station objects that define the distances from the event for which the GFs are being calculated

  • event (pyrocko.model.Event) – The event is used as a reference point for all the calculations. According to its location the earth model is built.

  • seismic_config (config.SeismicConfig) –

  • crust_ind (int) – Index to set to the Green's Function store; 0 is the reference store, indexes > 0 use the reference model and vary its parameters by a Gaussian

  • execute (boolean) – Flag to execute the calculation; if False, only the setup is tested

  • force (boolean) – Flag to overwrite existing GF stores

heart.seis_derivative(engine, sources, targets, arrival_taper, arrival_times, wavename, filterer, h, parameter, stencil_order=3, outmode='tapered_data')[source]

Calculate the numerical derivative with respect to a source or spatial parameter.

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (list) – containing pyrocko.gf.seismosizer.Source objects; the reference source is the first in the list!

  • targets (list) – containing pyrocko.gf.seismosizer.Target Objects

  • arrival_taper (ArrivalTaper) –

  • arrival_times (list or numpy.ndarray) – containing the start times [s] since 1 January 1970 at which to start tapering

  • wavename (string) – of the tabulated phase that determines the phase arrival

  • filterer (Filterer) –

  • h (float) – distance for derivative calculation

  • parameter (str) – parameter with respect to which the derivative is being calculated e.g. ‘strike’, ‘dip’, ‘depth’

  • stencil_order (int) – order N of the numerical stencil differentiation; available: 3 or 5

Return type:

numpy.ndarray (ntargets x nsamples) with the first derivative

heart.seis_synthetics(engine, sources, targets, arrival_taper=None, wavename='any_P', filterer=None, reference_taperer=None, plot=False, nprocs=1, outmode='array', pre_stack_cut=False, taper_tolerance_factor=0.0, arrival_times=None, chop_bounds=['b', 'c'])[source]

Calculate synthetic seismograms for combinations of targets and sources, with filtering and tapering afterwards (filterer); tapering is done according to arrival_taper around the P- or S-wave arrival. If reference_taperer is given, that taper is always used.

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (list) – containing pyrocko.gf.seismosizer.Source objects; the reference source is the first in the list!

  • targets (list) – containing pyrocko.gf.seismosizer.Target Objects

  • arrival_taper (ArrivalTaper) –

  • wavename (string) – of the tabulated phase that determines the phase arrival

  • filterer (Filterer) –

  • plot (boolean) – flag for looking at traces

  • nprocs (int) – number of processors to use for synthetics calculation (currently no effect!)

  • outmode (string) – output format of synthetics; can be 'array', 'stacked_traces', 'data' (returns traces unstacked including post-processing), 'tapered_data' (returns unstacked but tapered traces)

  • pre_stack_cut (boolean) – flag to decide whether, prior to stacking, the Green's Function traces should be cut according to the phase arrival time and the defined taper

  • taper_tolerance_factor (float) – tolerance to chop traces around taper.a and taper.d

  • arrival_times (None or numpy.ndarray) – of the phase to apply the taper; if None, the theoretical arrival from ray tracing is used

  • chop_bounds (list of str) – determines where to chop the trace on the taper attributes; may be a combination of [a, b, c, d]

  • transfer_functions (list) – of transfer functions to convolve the synthetics with

Returns:

depends on outmode (see the outmode parameter above)

heart.taper_filter_traces(traces, arrival_taper=None, filterer=None, deltat=None, arrival_times=None, plot=False, outmode='array', taper_tolerance_factor=0.0, chop_bounds=['b', 'c'])[source]

Taper and filter data_traces according to given taper and filterers. Tapering will start at the given tmin.

Parameters:
  • traces (List) – containing pyrocko.trace.Trace objects

  • arrival_taper (ArrivalTaper) –

  • filterer (list) – of Filterer

  • deltat (float) – if set data is downsampled to that sampling interval

  • arrival_times (list or numpy.ndarray) – containing the start times [s] since 1 January 1970 at which to start tapering

  • outmode (str) – defines the output structure, options: “stacked_traces”, “array”, “data”

  • taper_tolerance_factor (float) – tolerance to chop traces around taper.a and taper.d

  • chop_bounds (list of len 2) – of taper attributes a, b, c, or d

Returns:

with tapered and filtered data traces; rows are different traces, columns are temporal values

Return type:

numpy.ndarray
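Example: a sketch assuming traces loaded from a hypothetical miniSEED file, with one arrival time per trace:

    import numpy as num
    from pyrocko import io

    from beat.heart import ArrivalTaper, Filter, taper_filter_traces

    traces = io.load('data.mseed')  # hypothetical data file
    taper = ArrivalTaper(a=-15.0, b=-10.0, c=50.0, d=55.0)
    bandpass = Filter(lower_corner=0.01, upper_corner=0.1, order=4)

    data = taper_filter_traces(
        traces, arrival_taper=taper, filterer=[bandpass],
        arrival_times=num.array([120.0] * len(traces)),
        outmode='array', chop_bounds=['b', 'c'])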

heart.vary_model(earthmod, error_depth=0.1, error_velocities=0.1, depth_limit_variation=600000.0)[source]

Vary depths and velocities in the given source model by Gaussians with given 2-sigma errors [percent]. Ensures increasing velocity with depth. Stops varying the input model at the given depth_limit_variation [m]. Mantle discontinuity uncertainties are hardcoded based on Mooney et al. 1981 and Woodward et al. 1991.

Parameters:
  • earthmod (pyrocko.cake.LayeredModel) – Earthmodel defining layers, depth, velocities, densities

  • error_depth (scalar, float) – 2 sigma error in percent of the depth for the respective layers

  • error_velocities (scalar, float) – 2 sigma error in percent of the velocities for the respective layers

  • depth_limit_variation (scalar, float) – depth threshold [m], layers with depth > than this are not varied

Returns:

  • Varied Earthmodel (pyrocko.cake.LayeredModel)

  • Cost (int) – Counts repetitions of cycles to ensure increasing layer velocity; unlikely velocities have high Cost. Costs of up to 20 are ok for crustal profiles.

heart.velocities_from_pole(lats, lons, pole_lat, pole_lon, omega, earth_shape='ellipsoid')[source]

Return horizontal velocities at the input locations for a rotation around the given Euler pole.

Parameters:
  • lats (numpy.ndarray) – of geographic latitudes [deg] of points to calculate velocities for

  • lons (numpy.ndarray) – of geographic longitudes [deg] of points to calculate velocities for

  • pole_lat (float) – Euler pole latitude [deg]

  • pole_lon (float) – Euler pole longitude [deg]

  • omega (float) – angle of rotation around Euler pole [deg / million yrs]

Return type:

numpy.ndarray of velocities [m / yr], npoints x 3 (NEU)
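Example: a sketch with a hypothetical Euler pole and three observation points:

    import numpy as num

    from beat.heart import velocities_from_pole

    lats = num.array([35.0, 36.0, 37.0])
    lons = num.array([25.0, 26.0, 27.0])

    v_neu = velocities_from_pole(
        lats, lons, pole_lat=55.0, pole_lon=-100.0, omega=0.3)
    # v_neu has shape (3, 3): npoints x (N, E, U)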

heart.velocities_from_strain_rate_tensor(lats, lons, exx=0.0, eyy=0.0, exy=0.0, rotation=0.0)[source]

Get velocities [m] from 2d area strain rate tensor.

Geographic coordinates are reprojected internally wrt. the centroid of the input locations.

Parameters:
  • lats (array-like numpy.ndarray) – geographic latitudes in [deg]

  • lons (array-like numpy.ndarray) – geographic longitudes in [deg]

  • exx (float) – component of the 2d area strain-rate tensor [nanostrain] x-North

  • eyy (float) – component of the 2d area strain-rate tensor [nanostrain] y-East

  • exy (float) – component of the 2d area strain-rate tensor [nanostrain]

  • rotation (float) – clockwise rotation rate around the centroid of input locations

Returns:

v_xyz – Deformation rate in [m] in x-North, y-East, z-Up direction

Return type:

2d array-like numpy.ndarray

The config Module

The config module contains the classes to build the configuration files that are being read by the beat executable.

So far, configuration files exist for the three main optimization problems implemented: solving for the fault geometry, the static distributed slip and the kinematic distributed slip.

class config.BEATconfig(**kwargs)[source]

BEATconfig is the overarching configuration class, providing all the sub-configuration classes for the problem setup, Green's Function generation, optimization algorithm and the data being used.

name

str

date

str

event

pyrocko.model.event.Event, optional

subevents

list of pyrocko.model.event.Event objects, default: []

Event objects of other events that are supposed to be estimated jointly with the main event. May have large temporal separation.

project_dir

str, default: 'event/'

problem_config

ProblemConfig, default: ProblemConfig()

geodetic_config

GeodeticConfig, optional

seismic_config

SeismicConfig, optional

polarity_config

PolarityConfig, optional

sampler_config

SamplerConfig, default: SamplerConfig()

hyper_sampler_config

SamplerConfig, optional, default: SamplerConfig()

update_hierarchicals()[source]

Evaluate the whole config and initialise necessary hierarchical parameters.

update_hypers()[source]

Evaluate the whole config and initialise necessary hyperparameters.

class config.BEMConfig(**kwargs)[source]

Undocumented.

poissons_ratio

float, default: 0.25

Poisson’s ratio

shear_modulus

float, default: 33000000000.0

Shear modulus [Pa]

earth_model_name

str, default: 'homogeneous-elastic-halfspace'

mesh_size

float, default: 0.5

Determines the size of triangles [km], the smaller the finer the discretization.

check_mesh_intersection

bool, default: True

If meshes intersect reject sample.

boundary_conditions

BoundaryConditions, default: BoundaryConditions()

Boundary conditions for the interaction matrix and imposed traction field.

class config.BoundaryCondition(**kwargs)[source]

Undocumented.

slip_component

str (pyrocko.guts.StringChoice), default: 'normal'

Slip-component for Green's Function calculation; may be strike, dip, normal

source_idxs

list of int objects, default: [0]

Indices for the sources that are causing the stress.

receiver_idxs

list of int objects, default: [0]

Indices for the sources that receive the stress.

class config.BoundaryConditions(**kwargs)[source]

Undocumented.

conditions

dict of BoundaryCondition objects, default: {'strike': BoundaryCondition, 'dip': BoundaryCondition, 'normal': BoundaryCondition}

exception config.ConfigNeedsUpdatingError(errmess='')[source]
class config.CorrectionConfig(**kwargs)[source]

Undocumented.

dataset_names

list of str objects, default: []

Datasets to include in the correction.

enabled

bool, default: False

Flag to enable Correction.

class config.DatasetConfig(**kwargs)[source]

Base config for datasets.

datadir

str, default: './'

Path to directory of the data

names

list of str objects, default: ['Data prefix filenames here ...']

class config.DatatypeParameterMapping(**kwargs)[source]

Undocumented.

sources_variables

list of dict of int objects objects, default: []

n_sources

int

point_to_sources_mapping() Dict[str, List[int]][source]

Mapping for mixed source setups; maps source parameter names to source indexes. Is used by utility.split_point to split the full point into subsource points.

class config.DiscretizationConfig(**kwargs)[source]

Config to determine the discretization of the finite fault(s).

extension_widths

list of float objects, default: [0.1]

Extend reference sources by this factor in each dip-direction. 0.1 means extension of the fault by 10% in each direction, i.e. 20% in total. If patches would intersect with the free surface they are constrained to end at the surface. Each value is applied following the list-order to the respective reference source.

extension_lengths

list of float objects, default: [0.1]

Extend reference sources by this factor in each strike-direction. 0.1 means extension of the fault by 10% in each direction, i.e. 20% in total. Each value is applied following the list-order to the respective reference source.

class config.EulerPoleConfig(**kwargs)[source]

Undocumented.

class config.FFIConfig(**kwargs)[source]

Undocumented.

regularization

str (pyrocko.guts.StringChoice), default: 'none'

Flag for regularization in distributed slip-optimization. Choices: laplacian, none

regularization_config

RegularizationConfig, optional

Additional configuration parameters for regularization

initialization

str (pyrocko.guts.StringChoice), default: 'random'

Initialization of chain starting points, default: random. Choices: random, lsq

npatches

int, optional

Number of patches on full fault. Must not be edited manually! Please edit indirectly through patch_widths and patch_lengths parameters!

subfault_npatches

list of int objects, optional, default: []

Number of patches on each sub-fault. Must not be edited manually! Please edit indirectly through patch_widths and patch_lengths parameters!

class config.GFConfig(**kwargs)[source]

Base config for layered Green's Function calculation parameters.

store_superdir

str, default: './'

Absolute path to the directory where Greens Function stores are located

earth_model_name

str, default: 'ak135-f-continental.f'

Name of the reference earthmodel, see pyrocko.cake.builtin_models() for alternatives.

nworkers

int, default: 1

Number of processors to use for calculating the GFs

class config.GFLibaryConfig(**kwargs)[source]

Base config for GF Libraries

component

str, default: 'uparr'

event

pyrocko.model.event.Event, default: Event()

crust_ind

int, default: 0

reference_sources

list of beat.sources.RectangularSource objects, default: []

Geometry of the reference source(s) to fix

class config.GNSSCorrectionConfig(**kwargs)[source]

Undocumented.

station_blacklist

list of str objects, default: []

GNSS station names to apply no correction.

station_whitelist

list of str objects, default: []

GNSS station names to apply the correction.

class config.GNSSDatasetConfig(**kwargs)[source]

Undocumented.

components

list of str objects, default: ['north', 'east', 'up']

blacklist

list of str objects, default: ['put blacklisted station names here or delete']

GNSS stations to be thrown out.

class config.GeodeticConfig(**kwargs)[source]

Config for geodetic data optimization related parameters.

types

dict of DatasetConfig objects, default: {'SAR': SARDatasetConfig, 'GNSS': GNSSDatasetConfig}

Types of geodetic data, i.e. SAR, GNSS, with their configs

noise_estimator

GeodeticNoiseAnalyserConfig, default: GeodeticNoiseAnalyserConfig()

Determines the structure of the data-covariance matrix.

interpolation

str (pyrocko.guts.StringChoice), default: 'multilinear'

GF interpolation scheme during synthetics generation. Choices: nearest_neighbor, multilinear

corrections_config

GeodeticCorrectionsConfig, default: GeodeticCorrectionsConfig()

Config for additional corrections to apply to geodetic datasets.

dataset_specific_residual_noise_estimation

bool, default: False

If set, a specific hyperparameter is estimated for EACH DATASET. For geodetic data: n_hypers = nimages (SAR) or nstations * ncomponents (GNSS). If false, one hyperparameter for each DATATYPE and displacement COMPONENT.

gf_config

MediumConfig, default: GeodeticGFConfig()

class config.GeodeticCorrectionsConfig(**kwargs)[source]

Config for corrections to geodetic datasets.

euler_poles

list of EulerPoleConfig objects, default: [EulerPoleConfig]

ramp

RampConfig, default: RampConfig()

strain_rates

list of StrainRateConfig objects, default: [StrainRateConfig]

class config.GeodeticGFConfig(**kwargs)[source]

Geodetic GF parameters for Layered Halfspace.

code

str, default: 'psgrn'

Modeling code to use. (psgrn, … others need to be implemented!)

sample_rate

float, default: 1.1574074074074073e-05

Sample rate for the Greens Functions. Mainly relevant for viscoelastic modeling. Default: coseismic-one day

sampling_interval

float, default: 1.0

Distance-dependent sampling spacing coefficient. 1.0 - equidistant.

medium_depth_spacing

float, default: 1.0

Depth spacing [km] for GF medium grid.

medium_distance_spacing

float, default: 10.0

Distance spacing [km] for GF medium grid.

class config.GeodeticGFLibraryConfig(**kwargs)[source]

Config for the linear Geodetic GF Library for dumping and loading.

dimensions

tuple of 2 int objects, default: (0, 0)

datatype

str, default: 'geodetic'

class config.GeodeticLinearGFConfig(**kwargs)[source]

Undocumented.

class config.GeodeticNoiseAnalyserConfig(**kwargs)[source]

Undocumented.

structure

str (pyrocko.guts.StringChoice), default: 'import'

Determines data-covariance matrix structure. Choices: import, non-toeplitz

max_dist_perc

float, default: 0.2

Distance in decimal percent from maximum distance in scene to use for non-Toeplitz noise estimation

exception config.InconsistentParameterNaming(keyname='', parameter='', mode='')[source]
class config.LaplacianRegularizationConfig(**kwargs)[source]

Determines the structure of the Laplacian.

correlation_function

str (pyrocko.guts.StringChoice), default: 'nearest_neighbor'

Determines the correlation function for smoothing across patches. Choices: nearest_neighbor, gaussian, exponential

class config.LinearGFConfig(**kwargs)[source]

Config for linear Green's Function calculation parameters.

reference_sources

list of beat.sources.RectangularSource objects, default: []

Geometry of the reference source(s) to fix

sample_rate

float, default: 2.0

Sample rate for the Greens Functions.

discretization

str (pyrocko.guts.StringChoice), default: 'uniform'

Flag for discretization of finite sources into patches. Choices: uniform, resolution

discretization_config

DiscretizationConfig, default: UniformDiscretizationConfig()

Discretization configuration that allows customization.

class config.MediumConfig(**kwargs)[source]

Base class for subsurface medium configuration

reference_model_idx

int, default: 0

Index to velocity model to use for the optimization. 0 - reference, 1..n - model of variations

sample_rate

float, optional, default: 0

n_variations

tuple of 2 int objects, default: (0, 1)

Start and end index to vary input velocity model. Important for the calculation of the model prediction covariance matrix with respect to uncertainties in the velocity model.

class config.MetropolisConfig(**kwargs)[source]

Config for optimization parameters of the Adaptive Metropolis algorithm.

n_jobs

int, default: 1

Number of processors to use, i.e. chains to sample in parallel.

n_steps

int, default: 25000

Number of steps for the MC chain.

n_chains

int, default: 20

Number of Metropolis chains for sampling.

thin

int, default: 2

Thinning parameter of the sampled trace. Every “thin”th sample is taken.

burn

float, default: 0.5

Burn-in parameter between 0. and 1. to discard fraction of samples from the beginning of the chain.

class config.ModeConfig(**kwargs)[source]

BaseConfig for optimization mode specific configuration.

class config.NoneRegularizationConfig[source]

Dummy class to return None.

class config.NonlinearGFConfig(**kwargs)[source]

Config for non-linear Green's Function calculation parameters. Defines how the grid of Green's Functions in the respective store is created.

use_crust2

bool, default: True

Flag for replacing the crust from the earth model with the crust from the crust2 model.

replace_water

bool, default: True

Flag, for replacing water layers in the crust2 model.

custom_velocity_model

pyrocko.cake.LayeredModel (pyrocko.gf.meta.Earthmodel1D), optional

Custom Earthmodel, in case crust2 and the standard model are not wanted. Needs to be a pyrocko.cake.LayeredModel.

source_depth_min

float, default: 0.0

Minimum depth [km] for GF function grid.

source_depth_max

float, default: 10.0

Maximum depth [km] for GF function grid.

source_depth_spacing

float, default: 1.0

Depth spacing [km] for GF function grid.

source_distance_radius

float, default: 20.0

Radius of distance grid [km] for GF function grid around reference event.

source_distance_spacing

float, default: 1.0

Distance spacing [km] for GF function grid w.r.t reference_location.

error_depth

float, default: 0.1

3-sigma [%/100] error in velocity model layer depth; translates to the interval for varying the velocity model

error_velocities

float, default: 0.1

3-sigma [%/100] error in velocity model layer wave-velocities; translates to the interval for varying the velocity model

depth_limit_variation

float, default: 600.0

Depth limit [km] for varying the velocity model. Below that depth the velocity model is not varied based on the errors defined above!

version

str, default: ''

Version number of the backend codes. If not defined, default versions will be used.

class config.ParallelTemperingConfig(**kwargs)[source]

Undocumented.

n_samples

int, default: 100000

Number of samples of the posterior distribution. Only the samples of processors that sample from the posterior (beta=1) are kept.

n_chains

int, default: 2

Number of PT chains to sample in parallel. A number < 2 will raise an Error, as this is the minimum amount of chains needed.

swap_interval

tuple of 2 int objects, default: (100, 300)

Interval for the uniform random integer that is drawn to determine the length of the MarkovChains on each worker. When a chain is completed, the last sample is returned for swapping state between chains. Consequently, a lower number will result in more state swapping.

beta_tune_interval

int, default: 5000

Sample interval of master chain after which the chain swap acceptance is evaluated. High acceptance will result in closer spaced betas and vice versa.

n_chains_posterior

int, default: 1

Number of chains that sample from the posterior at beta=1.

resample

bool, default: False

If “true” the testvalue of the priors is taken as seed for all Markov Chains.

thin

int, default: 3

Thinning parameter of the sampled trace. Every “thin”th sample is taken.

burn

float, default: 0.5

Burn-in parameter between 0. and 1. to discard fraction of samples from the beginning of the chain.

record_worker_chains

bool, default: False

If True worker chain samples are written to disc using the specified backend trace objects (during sampler initialization). Very useful for debugging purposes. MUST be False for runs on distributed computing systems!

class config.PolarityConfig(**kwargs)[source]

Undocumented.

datadir

str, default: './'

waveforms

list of PolarityFitConfig objects, default: []

Polarity mapping for potentially fitting several phases.

gf_config

GFConfig, default: PolarityGFConfig()

init_waveforms(wavenames=['any_P'])[source]

Initialise waveform configurations.

class config.PolarityFitConfig(**kwargs)[source]

Undocumented.

name

str, default: 'any_P'

Seismic phase name for picked polarities

include

bool, default: True

Whether to include this FitConfig to the estimation.

polarities_marker_path

str, default: './phase_markers.txt'

Path to table of “PhaseMarker” containing polarity of waveforms at station(s) dumped by pyrocko.gui.marker.save_markers.

blacklist

list of str objects, default: ['']

List of Network.Station name(s) for stations to be thrown out.

event_idx

int, optional, default: 0

Index to event from events list for reference time and data extraction. Default is 0 - always use the reference event.

class config.PolarityGFConfig(**kwargs)[source]

Undocumented.

code

str, default: 'cake'

Raytracing code to use for takeoff-angle computations.

always_raytrace

bool, default: True

Set to true for ignoring the interpolation table.

reference_location

beat.heart.ReferenceLocation, optional

Reference location for the midpoint of the Green's Function grid.

sample_rate

float, optional, default: 1.0

Sample rate for the polarity Greens Functions.

class config.ProblemConfig(**kwargs)[source]

Config for optimization problem to setup.

mode

str (pyrocko.guts.StringChoice), default: 'geometry'

Problem to solve. Choices: geometry, ffi, bem

mode_config

ModeConfig, optional

Global optimization mode specific parameters.

source_types

list of str (pyrocko.guts.StringChoice) objects, default: []

stf_type

str (pyrocko.guts.StringChoice), default: 'HalfSinusoid'

Source time function type to use. Choices: Boxcar, Triangular, HalfSinusoid

decimation_factors

dict of pyrocko.guts.Any objects, optional

Determines the reduction of discretization of an extended source.

n_sources

list of int objects, default: [1]

List of number of sub-sources for each source-type

datatypes

list of pyrocko.guts.Any objects, default: ['geodetic']

hyperparameters

dict of pyrocko.guts.Any objects, default: {}

Hyperparameters to estimate the noise in different types of datatypes.

priors

dict of pyrocko.guts.Any objects, default: {}

Priors of the variables in question.

hierarchicals

dict of pyrocko.guts.Any objects, default: {}

Hierarchical parameters that affect the posterior likelihood, but do not affect the forward problem. Implemented: Temporal station corrections, orbital ramp estimation

get_derived_variables_shapes()[source]

Get variable names and shapes of derived variables of the problem.

Returns:

varnames – list of tuples of ints (shapes)

Return type:

list

get_random_variables()[source]

Evaluate problem setup and return random variables dictionary.

Returns:

  • rvs (dict) – random variable names and their kwargs

  • fixed_params (dict) – fixed random parameters

get_slip_variables()[source]

Return a list of slip variable names defined in the ProblemConfig.

get_test_point()[source]

Returns a dict with the test point.

get_variables_mapping()[source]

Return model variables depending on problem config.

init_vars(variables=None, sizes=None)[source]

Initiate priors based on the problem mode and datatypes.

Parameters:

variables (list) – of str of variable names to initialise

set_decimation_factor()[source]

Determines the reduction of discretization of an extended source. So far only influences the RectangularSource.

set_vars(bounds_dict, attribute='priors', init=False)[source]

Set variable bounds to given bounds.

validate_all()[source]

Validate all involved sampling parameters.

validate_hierarchicals()[source]

Check if hierarchicals and their test values do not contradict!

validate_hypers()[source]

Check if hyperparameters and their test values do not contradict!

validate_priors()[source]

Check if priors and their test values do not contradict!

class config.RampConfig(**kwargs)[source]

Undocumented.

class config.RegularizationConfig(**kwargs)[source]

Undocumented.

class config.ResolutionDiscretizationConfig(**kwargs)[source]

Parameters that control the resolution based source discretization.

References

[Atzori2011]

Atzori & Antonioli (2011). Optimal fault resolution in geodetic inversion of coseismic data. Geophysical Journal International, 185(1):529-538

epsilon

float, default: 0.004

Damping constant for Laplacian of Greens Functions. Usually reasonable between: [0.1 to 0.005]

epsilon_search_runs

int, default: 1

If above 1, the algorithm is run iteratively, starting with epsilon as the lower bound, on equal logspace up to epsilon * 100. "epsilon_search_runs" determines the number of repetitions and the spacing between epsilons. If this is 1, only the model for "epsilon" is created! The epsilon that minimises the Resolution spreading (Atzori et al. 2019) is chosen!

resolution_thresh

float, default: 0.999

Resolution threshold: discretization continues until all patches are below this threshold. The lower, the finer the discretization. Reasonable between: [0.95, 0.999]

depth_penalty

float, default: 3.5

The higher the number, the more penalty on the deeper patches, ergo larger patches.

alpha

float, default: 0.3

Decimal percentage of largest patches that are subdivided further. Reasonable: [0.1, 0.3]

patch_widths_min

list of float objects, default: [1.0]

Patch width [km] for min final discretization of patches.

patch_widths_max

list of float objects, default: [5.0]

Patch width [km] for max initial discretization of patches.

patch_lengths_min

list of float objects, default: [1.0]

Patch length [km] for min final discretization of patches.

patch_lengths_max

list of float objects, default: [5.0]

Patch length [km] for max initial discretization of patches.

get_patch_dimensions()[source]
Return type:

List of patch_widths, List of patch_lengths
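Example: a sketch of a resolution-based discretization setup, using values within the reasonable ranges stated above:

    from beat.config import ResolutionDiscretizationConfig

    disc = ResolutionDiscretizationConfig(
        epsilon=0.004,
        epsilon_search_runs=3,
        resolution_thresh=0.999,
        alpha=0.3,
        patch_widths_min=[1.0], patch_widths_max=[5.0],
        patch_lengths_min=[1.0], patch_lengths_max=[5.0])

    widths, lengths = disc.get_patch_dimensions()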

class config.SARDatasetConfig(**kwargs)[source]

Undocumented.

class config.SMCConfig(**kwargs)[source]

Config for optimization parameters of the SMC algorithm.

n_jobs

int, default: 1

Number of processors to use, i.e. chains to sample in parallel.

n_steps

int, default: 100

Number of steps for the MC chain.

n_chains

int, default: 1000

Number of Metropolis chains for sampling.

coef_variation

float, default: 1.0

Coefficient of variation; determines the similarity of the intermediate stage pdfs: low values give small beta steps (slow cooling), high values give wide beta steps (fast cooling).

stage

int, default: 0

Stage at which to start/continue the sampling. Has to be an int; use -1 for the final stage.

proposal_dist

str, default: 'MultivariateNormal'

Multivariate normal proposal distribution for Metropolis steps; alternatives would need to be implemented.

update_covariances

bool, default: False

Update model prediction covariance matrices in transition stages.

class config.SamplerConfig(**kwargs)[source]

Config for the sampler specific parameters.

name

str (pyrocko.guts.StringChoice), default: 'SMC'

Sampler to use for sampling the solution space. Choices: PT, SMC, Metropolis

backend

str (pyrocko.guts.StringChoice), default: 'csv'

File type to store output traces. Binary is fast, csv is good for easy sample inspection. Choices: csv, bin. Default: csv

progressbar

bool, default: True

Display progressbar(s) during sampling.

buffer_size

int, default: 5000

number of samples after which the result buffer is written to disk

buffer_thinning

int, default: 1

Factor by which the result trace is thinned before writing to disc.

parameters

SamplerParameters, default: SMCConfig()

Sampler dependent parameters.

class config.SamplerParameters(**kwargs)[source]

Undocumented.

tune_interval

int, default: 50

Tune interval for adaptive tuning of Metropolis step size.

proposal_dist

str, default: 'Normal'

Normal proposal distribution for Metropolis steps; alternatives: Cauchy, Laplace, Poisson, MultivariateNormal.

check_bnd

bool, default: True

Flag for checking whether proposed step lies within variable bounds.

rm_flag

bool, default: False

Remove existing results prior to sampling.

class config.SeismicConfig(**kwargs)[source]

Config for seismic data optimization related parameters.

datadir

str, default: './'

noise_estimator

SeismicNoiseAnalyserConfig, default: SeismicNoiseAnalyserConfig()

Determines the structure of the data-covariance matrix.

responses_path

str, optional

Path to response file

pre_stack_cut

bool, default: True

Cut the GF traces before stacking around the specified arrival taper

station_corrections

bool, default: False

If set, optimize for time shift for each station.

waveforms

list of WaveformFitConfig objects, default: []

dataset_specific_residual_noise_estimation

bool, default: False

If set, hyperparameters are estimated for EACH DATASET separately: n_hypers = nstations * nchannels. If false, one hyperparameter for each DATATYPE and displacement COMPONENT.

gf_config

GFConfig, default: SeismicGFConfig()

init_waveforms(wavenames=['any_P'])[source]

Initialise waveform configurations.

class config.SeismicGFConfig(**kwargs)[source]

Seismic GF parameters for Layered Halfspace.

reference_location

beat.heart.ReferenceLocation, optional

Reference location for the midpoint of the Green’s Function grid.

code

str, default: 'qssp'

Modeling code to use. (qssp, qseis, coming soon: qseis2d)

sample_rate

float, default: 2.0

Sample rate for the Greens Functions.

rm_gfs

bool, default: True

Flag for removing modeling module GF files after completion.

class config.SeismicGFLibraryConfig(**kwargs)[source]

Config for the linear Seismic GF Library for dumping and loading.

wave_config

WaveformFitConfig, default: WaveformFitConfig()

starttime_sampling

float, default: 0.5

duration_sampling

float, default: 0.5

starttime_min

float, default: 0.0

duration_min

float, default: 0.1

dimensions

tuple of 5 int objects, default: (0, 0, 0, 0, 0)

datatype

str, default: 'seismic'

mapnumber

int, optional

class config.SeismicLinearGFConfig(**kwargs)[source]

Config for seismic linear GreensFunction calculation parameters.

reference_location

beat.heart.ReferenceLocation, optional

Reference location for the midpoint of the Green’s Function grid.

duration_sampling

float, default: 1.0

Calculate Green’s Functions for varying source time function durations, determined by the prior bounds; the discretization between them is set by the duration sampling.

starttime_sampling

float, default: 1.0

Calculate Green’s Functions for varying rupture onset times. These are determined by the (rupture) velocity prior bounds and the hypocenter location.

class config.SeismicNoiseAnalyserConfig(**kwargs)[source]

Undocumented.

structure

str (pyrocko.guts.StringChoice), default: 'variance'

Determines data-covariance matrix structure. Choices: variance, exponential, import, non-toeplitz

pre_arrival_time

float, default: 5.0

Time [s] before synthetic P-wave arrival until variance is estimated

class config.SourcesParameterMapping(**kwargs)[source]

Mapping of source parameters to point variables.

source_types

list of str objects, default: []

n_sources

list of int objects, default: []

datatypes

list of str (pyrocko.guts.StringChoice) objects, default: []

mappings

dict of DatatypeParameterMapping objects, default: {}

unique_variables_sizes() Dict[str, int][source]

Combine source-specific variable dicts into a common setup dict.

Raises:

ValueError – if no source specific dicts exist

Returns:

of variable names and their combined sizes

Return type:

Dict

class config.StrainRateConfig(**kwargs)[source]

Undocumented.

class config.UniformDiscretizationConfig(**kwargs)[source]

Undocumented.

patch_widths

list of float objects, default: [5.0]

List of Patch width [km] to divide reference sources. Each value is applied following the list-order to the respective reference source

patch_lengths

list of float objects, default: [5.0]

Patch length [km] to divide reference sources. Each value is applied following the list order to the respective reference source.

class config.WaveformFitConfig(**kwargs)[source]

Config for specific parameters that are applied to post-process a specific type of waveform and calculate the misfit.

include

bool, default: True

Flag to include waveform into optimization.

preprocess_data

bool, default: True

Flag to filter input data.

name

str, default: 'any_P'

arrivals_marker_path

str, default: './phase_markers.txt'

Path to table of “PhaseMarker” containing arrival times of waveforms at station(s) dumped by pyrocko.gui.marker.save_markers.

blacklist

list of str objects, default: []

Network.Station codes for stations to be thrown out.

quantity

str (pyrocko.guts.StringChoice), default: 'displacement'

Quantity of synthetics to be computed.

channels

list of str objects, default: ['Z']

filterer

list of beat.heart.FilterBase objects, default: []

List of Filters that are applied in the order of the list.

distances

tuple of 2 float objects, default: (30.0, 90.0)

interpolation

str (pyrocko.guts.StringChoice), default: 'multilinear'

GF interpolation scheme. Choices: nearest_neighbor, multilinear

arrival_taper

pyrocko.trace.Taper, default: ArrivalTaper()

Taper a,b/c,d time [s] before/after wave arrival

event_idx

int, optional, default: 0

Index to event from events list for reference time and data extraction. Default is 0 - always use the reference event.

domain

str (pyrocko.guts.StringChoice), default: 'time'

type of trace

config.dump_config(config)[source]

Dump configuration file.

Parameters:

config (BEATConfig) –

config.init_config(name, date=None, min_magnitude=6.0, main_path='./', datatypes=['geodetic'], mode='geometry', source_types=['RectangularSource'], n_sources=[1], waveforms=['any_P'], sampler='SMC', hyper_sampler='Metropolis', use_custom=False, individual_gfs=False)[source]

Initialise a BEATconfig file and write it to main_path/name. Finer parameters have to be edited manually in the .yaml config file.

Parameters:
  • name (str) – Name of the event

  • date (str) – ‘YYYY-MM-DD’, date of the event

  • min_magnitude (scalar, float) – approximate minimum Mw of the event

  • datatypes (List of strings) – data sets to include in the optimization: either ‘geodetic’ and/or ‘seismic’

  • mode (str) – type of optimization problem: ‘geometry’ / ‘static’ / ‘kinematic’

  • n_sources (list of int) – number of sources to solve for / discretize depending on mode parameter

  • waveforms (list) – of strings of wavenames to include in the misfit function and GF calculation

  • sampler (str) – Optimization algorithm to use to sample the solution space Options: ‘SMC’, ‘Metropolis’

  • use_custom (boolean) – Flag to setup manually a custom velocity model.

  • individual_gfs (boolean) – Flag to use individual Green’s Functions for each specific station. If false a reference location will be initialised in the config file. If true the reference locations will be taken from the imported station objects.

Return type:

BEATconfig
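For illustration, a hedged call following the documented signature (event name, date and paths are hypothetical):

from beat import config

c = config.init_config(
    'Laquila2009',                      # hypothetical project name
    date='2009-04-06',                  # hypothetical event date
    min_magnitude=6.3,
    main_path='./projects',
    datatypes=['seismic', 'geodetic'],
    mode='geometry',
    source_types=['RectangularSource'],
    n_sources=[1],
    waveforms=['any_P'],
    sampler='SMC',
)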

config.init_reference_sources(source_points, n_sources, source_type, stf_type, event=None)[source]

Initialise sources of specified geometry

Parameters:

source_points (list) – of dicts or kite sources

config.load_config(project_dir, mode)[source]

Load configuration file.

Parameters:
  • project_dir (str) – path to the directory of the configuration file

  • mode (str) – type of optimization problem: ‘geometry’ / ‘static’/ ‘kinematic’

  • update (list) – of strings to update parameters ‘hypers’ or/and ‘hierarchicals’

Return type:

BEATconfig
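For example, to reload the configuration written by init_config (the attribute access at the end is an assumption for illustration):

from beat import config

c = config.load_config('./projects/Laquila2009', 'geometry')
print(c.problem_config.source_types)   # assumed attribute path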

The sampler Module

metropolis

Metropolis algorithm module, wrapping the pymc implementation.

Provides the possibility to update the involved covariance matrixes within the course of sampling the chain.

class sampler.metropolis.Metropolis(*args, **kwargs)[source]

Metropolis-Hastings sampler

Parameters:
  • value_vars (list) – List of variables for sampler

  • n_chains (int) – Number of chains per stage; has to be a large multiple of the number of n_jobs (processors to be used) on the machine.

  • scaling (float) – Factor applied to the proposal distribution i.e. the step size of the Markov Chain

  • likelihood_name (string) – name of the pymc deterministic variable that contains the model likelihood; defaults to ‘like’

  • backend (str) – type of backend to use for sample results storage, for alternatives see backend.backend:catalog

  • proposal_dist (beat.sampler.base.Proposal) – Type of proposal distribution

  • tune (boolean) – Flag for adaptive scaling based on the acceptance rate

  • model (pymc.model.Model) – Optional model for sampling step. Defaults to None (taken from context).

apply_sampler_state(state)[source]

Update sampler state to given state (obtained by ‘get_sampler_state’)

Parameters:

state (dict) – with sampler parameters

get_sampler_state()[source]

Return dictionary of sampler state.

Return type:

dict of sampler state

sampler.metropolis.get_trace_stats(mtrace, step, burn=0.5, thin=2, n_jobs=1)[source]

Get mean value of trace variables and return point.

Parameters:
  • mtrace (pymc.backends.base.MultiTrace) – Multitrace sampling result

  • step (initialised smc.SMC sampler object) –

  • burn (float) – Burn-in parameter to throw out samples from the beginning of the trace

  • thin (int) – Thinning of samples in the trace

Return type:

dict with points, covariance matrix

sampler.metropolis.metropolis_sample(n_steps=10000, homepath=None, start=None, progressbar=False, rm_flag=False, buffer_size=5000, buffer_thinning=1, step=None, model=None, n_jobs=1, update=None, burn=0.5, thin=2)[source]

Execute Metropolis algorithm repeatedly depending on the number of chains.

smc

Sequential Monte Carlo Sampler module;

Runs on any pymc model.

class sampler.smc.SMC(*args, **kwargs)[source]

Sequential Monte-Carlo sampler class.

Parameters:
  • vars (list) – List of variables for sampler

  • out_vars (list) – List of output variables for trace recording. If empty unobserved_RVs are taken.

  • n_chains (int) – Number of chains per stage; has to be a large multiple of the number of n_jobs (processors to be used) on the machine.

  • scaling (float) – Factor applied to the proposal distribution i.e. the step size of the Markov Chain

  • likelihood_name (string) – name of the pymc deterministic variable that contains the model likelihood; defaults to ‘like’

  • proposal_dist (pymc.metropolis.Proposal) – Type of proposal distribution, see pymc.step_methods.metropolis for options

  • tune (boolean) – Flag for adaptive scaling based on the acceptance rate

  • coef_variation (scalar, float) – Coefficient of variation; determines the change of beta from stage to stage, i.e. indirectly the number of stages: low coef_variation –> slow beta change, resulting in many stages, and vice versa (default: 1.0)

  • check_bound (boolean) – Check if the current sample lies outside of the variable definition; speeds up computation as the forward model won’t be executed. Default: True

  • model (pymc.Model) – Optional model for sampling step. Defaults to None (taken from context).

  • backend (str) – type of backend to use for sample results storage, for alternatives see backend.backend:catalog

References

[Ching2007]

Ching, J. and Chen, Y. (2007). Transitional Markov Chain Monte Carlo Method for Bayesian Model Updating, Model Class Selection, and Model Averaging. J. Eng. Mech., 133(7), 816-832. doi:10.1061/(ASCE)0733-9399(2007)133:7(816)

calc_beta()[source]

Calculate next tempering beta and importance weights based on current beta and sample likelihoods.

Returns:

  • beta(m+1) (scalar, float) – tempering parameter of the next stage

  • beta(m) (scalar, float) – tempering parameter of the current stage

  • weights (numpy.ndarray) – Importance weights (floats)

calc_covariance()[source]

Calculate trace covariance matrix based on importance weights.

Returns:

cov – weighted covariances (NumPy >= 1.10 required)

Return type:

numpy.ndarray

get_chain_previous_lpoint(mtrace)[source]

Read trace results, take the end point of each chain, and set it as the previous chain result for comparison in the Metropolis select step.

Parameters:

mtrace (pymc.backend.base.MultiTrace) –

Returns:

chain_previous_lpoint – all unobservedRV values, including dataset likelihoods

Return type:

list

get_map_end_points()[source]

Calculate mean of the end-points and return point.

Return type:

Dictionary of trace variables

resample()[source]

Resample the pdf based on importance weights, using Kitagawa’s deterministic resampling algorithm.

Returns:

outindex – Array of resampled trace indexes

Return type:

numpy.ndarray

select_end_points(mtrace)[source]

Read trace results (variables and model likelihood) and take end points for each chain and set as start population for the next stage.

Parameters:

mtrace (pymc.backend.base.MultiTrace) –

Returns:

  • population (list) – of pymc.Point() dictionaries

  • array_population (numpy.ndarray) – Array of trace end-points

  • likelihoods (numpy.ndarray) – Array of likelihoods of the trace end-points

sampler.smc.smc_sample(n_steps, step=None, start=None, homepath=None, stage=0, n_jobs=1, progressbar=False, buffer_size=5000, buffer_thinning=1, model=None, update=None, random_seed=None, rm_flag=False)[source]

Sequential Monte Carlo sampling

Samples the solution space with n_chains of Metropolis chains, where each chain has n_steps iterations. Once finished, the sampled traces are evaluated:

  1. Based on the likelihoods of the final samples, chains are weighted

  2. the weighted covariance of the ensemble is calculated and set as new proposal distribution

  3. the variation in the ensemble is calculated and the next tempering parameter (beta) calculated

  4. New n_chains Metropolis chains are seeded on the traces with high weight for n_steps iterations

  5. Repeat until beta > 1.

Parameters:
  • n_steps (int) – The number of samples to draw for each Markov-chain per stage

  • step (SMC) – SMC initialisation object

  • start (List of dictionaries) – of length n_chains; starting points in parameter space (or partial points). Defaults to random draws from variables (defaults to empty dict).

  • stage (int) – Stage where to start or continue the calculation. It is possible to continue after completed stages (stage should be the number of the completed stage + 1). If None the start will be at stage = 0.

  • n_jobs (int) – The number of cores to be used in parallel. Be aware that pytensor has internal parallelisation. Sometimes this is more efficient especially for simple models. step.n_chains / n_jobs has to be an integer number!

  • homepath (string) – Result_folder for storing stages, will be created if not existing.

  • progressbar (bool) – Flag for displaying a progress bar

  • buffer_size (int) – this is the number of samples after which the buffer is written to disk or if the chain end is reached

  • buffer_thinning (int) – every nth sample of the buffer is written to disk default: 1 (no thinning)

  • model (pymc.Model) – (optional if in with context) has to contain a deterministic variable with the name defined under step.likelihood_name that contains the model likelihood

  • update (models.Problem) – Problem object that contains all the observed data and (if applicable) covariances to be updated each transition step.

  • rm_flag (bool) – If True existing stage result folders are being deleted prior to sampling.

References

[Minson2013]

Minson, S. E., Simons, M. and Beck, J. L. (2013). Bayesian inversion for finite fault earthquake source models I – Theory and algorithm. Geophysical Journal International, 194(3), pp. 1701-1726.
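A hedged sketch of how the documented arguments fit together; step, model and problem are placeholders that BEAT normally builds from a Problem object (e.g. via models.load_model):

from beat.sampler.smc import smc_sample

smc_sample(
    n_steps=100,
    step=step,                  # initialised SMC step object (placeholder)
    homepath='./projects/Laquila2009/geometry',
    stage=0,
    n_jobs=4,                   # step.n_chains / n_jobs must be an integer
    progressbar=True,
    buffer_size=5000,
    model=model,                # pymc model containing the 'like' variable
    update=problem,             # covariances updated at each transition
    rm_flag=False,
)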

pt

Parallel Tempering algorithm with mpi4py

class sampler.pt.TemperingManager(step, n_workers, model, progressbar, buffer_size, swap_interval, beta_tune_interval, n_workers_posterior)[source]

Manages worker-related work attributes, holds mappings between workers and betas, and counts acceptances of chain swaps.

Provides methods for chain_swapping and beta adaptation.

property betas

Inverse of the sampler temperatures. The lower the beta, the more likely a step is accepted.

get_acceptance_swap(beta, beta_tune_interval)[source]

Returns acceptance rate for swapping states between chains.

get_package(source, trace=None, resample=False, burnin=1000)[source]

Register the worker with the manager and get assigned the annealing parameter and the work package. If the worker was registered previously, it continues its old task. Ensures book-keeping of workers and their sampler states.

Parameters:
  • source (int) – MPI source id from a worker message

  • trace (beat.backend.BaseTrace) – Trace object keeping the samples of the Markov Chain

  • resample (bool) – If True, all the Markov Chains start sampling at the testvalue

  • burnin (int) – Number of samples the worker is taking before updating the proposal covariance matrix based on the trace samples

Returns:

step – object that contains the step method how to sample the solution space

Return type:

beat.sampler.Metropolis

get_posterior_workers(idxs=False)[source]

Worker indexes that are sampling from the posterior (beta == 1.0). If idxs is True, return indexes into sample and acceptance arrays.

get_workers_ge_beta(beta, idxs=False)[source]

Get worker source indexes greater than or equal to the given beta. If idxs is True, return indexes into sample and acceptance arrays.

propose_chain_swap(m1, m2, source1, source2)[source]

Propose a swap between chain samples.

tune_betas()[source]

Evaluate the acceptance rate of the posterior workers and of the lowest tempered worker. This scaling has the inverse behaviour of the Metropolis step scaling: if there is little acceptance, more exploration is needed and lower beta values are desired.

update_betas(t_scale=None)[source]

Update annealing schedule for all the workers.

Parameters:
  • t_scale (float) – factor to adjust the step size in the temperatures; the base step size is 1.e1

  • update (bool) – if true, the current scale factor is updated by the given t_scale

Return type:

list of inverse temperatures (betas) in decreasing beta order

worker_a2l(m, source)[source]

Map m from array to list space, worker dependent.

worker_beta_updated(source, check=False)[source]

Check if source beta is updated.

Parameters:
  • source (int) – mpi worker index

  • check (boolean) – if True worker beta status is set to “updated”

Return type:

boolean, True if beta is updated

sampler.pt.pt_sample(step, n_chains, n_samples=100000, start=None, swap_interval=(100, 300), beta_tune_interval=10000, n_workers_posterior=1, homepath='', progressbar=True, buffer_size=5000, buffer_thinning=1, model=None, rm_flag=False, resample=False, keep_tmp=False, record_worker_chains=False)[source]

Parallel Tempering algorithm

(adaptive) Metropolis sampling over n_jobs of MC chains. Half (floor) of these are sampling at beta = 1 (the posterior). The other half of the MC chains are tempered linearly down to beta = 1e-6. Randomly, the states of chains are swapped based on the Metropolis-Hastings acceptance criterion to the power of the differences in beta of the involved chains. The samples are written to disk only by the master process. Once the specified number of samples is reached sampling is stopped.

Parameters:
  • step (beat.sampler.Metropolis) – sampler object

  • n_chains (int) – number of Markov Chains to use

  • n_samples (int) – number of samples in the result trace, if reached sampling stops

  • swap_interval (tuple) – interval for uniform random integer that determines the length of each MarkovChain on each worker. The chain end values of workers are proposed for swapping state and are written in the final trace

  • beta_tune_interval (int) – Evaluate acceptance rate of chain swaps and tune betas similar to proposal step tuning

  • n_workers_posterior (int) – number of workers that sample from the posterior distribution at beta=1

  • homepath (string) – Result_folder for storing stages, will be created if not existing

  • progressbar (bool) – Flag for displaying a progress bar

  • buffer_size (int) – this is the number of samples after which the buffer is written to disk or if the chain end is reached

  • buffer_thinning (int) – every nth sample of the buffer is written to disk, default: 1 (no thinning)

  • model (pymc.Model) – (optional if in with context) has to contain a deterministic variable with the name defined under step.likelihood_name that contains the model likelihood

  • rm_flag (bool) – If True existing stage result folders are being deleted prior to sampling.

  • resample (bool) – If True all the Markov Chains are starting sampling at the testvalue

  • keep_tmp (bool) – If True the execution directory (under ‘/tmp/’) is not being deleted after process finishes

  • record_worker_chains (bool) – If True worker chain samples are written to disc using the specified backend trace objects (during sampler initialization). Very useful for debugging purposes. MUST be False for runs on distributed computing systems!
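A hedged sketch of the documented call; step and model are placeholders (an initialised beat.sampler.Metropolis object and the pymc model), and BEAT normally launches this under MPI:

from beat.sampler.pt import pt_sample

pt_sample(
    step,                       # beat.sampler.Metropolis (placeholder)
    n_chains=8,
    n_samples=100000,
    swap_interval=(100, 300),
    beta_tune_interval=10000,
    n_workers_posterior=2,
    homepath='./projects/Laquila2009/geometry',
    model=model,                # pymc model (placeholder)
)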

sampler.pt.sample_pt_chain(draws, step=None, start=None, trace=None, chain=0, tune=None, progressbar=True, model=None, random_seed=-1)[source]

Sample a single chain of the Parallel Tempering algorithm and return the last sample of the chain. Depending on the step object the MarkovChain can have various step behaviour, e.g. Metropolis, NUTS, …

Parameters:
  • draws (int or beat.sampler.base.Proposal) – The number of samples to draw for each Markov-chain per stage or a Proposal distribution

  • step (sampler.metropolis.Metropolis) – Metropolis initialisation object

  • start (dict) – Starting point in parameter space (or partial point) Defaults to random draws from variables (defaults to empty dict)

  • chain (int) – Chain number used to store sample in backend.

  • stage (int) – Stage where to start or continue the calculation. It is possible to continue after completed stages (stage should be the number of the completed stage + 1). If None the start will be at stage = 0.

  • tune (int) – Number of iterations to tune, if applicable (defaults to None)

  • progressbar (bool) – Flag for displaying a progress bar

  • model (pymc.Model) – (optional if in with context) has to contain a deterministic variable with the name defined under step.likelihood_name that contains the model likelihood

Return type:

numpy.ndarray with the end point of the MarkovChain

sampler.pt.tune(scale, acc_rate)[source]

Tunes the temperature scaling parameter according to the acceptance rate over the last tune_interval:

Rate      Variance adaptation
------    -------------------
< 0.001   x 0.8
< 0.05    x 0.9
< 0.2     x 0.95
> 0.5     x 1.05
> 0.75    x 1.1
> 0.95    x 1.2
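Read as pseudocode, the schedule amounts to the following hedged re-implementation (not the library source):

def tune_scale(scale, acc_rate):
    # multiply the temperature scaling by the tabulated factor
    if acc_rate < 0.001:
        return scale * 0.8
    elif acc_rate < 0.05:
        return scale * 0.9
    elif acc_rate < 0.2:
        return scale * 0.95
    elif acc_rate > 0.95:
        return scale * 1.2
    elif acc_rate > 0.75:
        return scale * 1.1
    elif acc_rate > 0.5:
        return scale * 1.05
    return scale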

The ffi Module

The parallel Module

exception parallel.TimeoutException(jobstack=[])[source]

Exception raised if a per-task timeout fires.

class parallel.WatchedWorker(task, work, initializer=None, initargs=(), timeout=65535)[source]

Wrapper class for parallel execution of a task.

Parameters:
  • task (function to execute) –

  • work (List) – of arguments to specified function

  • timeout (int) – time [s] after which the worker is fired; default: 65535 s

run()[source]

Start working on the task!

parallel.borrow_all_memories(shared_params, memshared_instances)[source]

Run pytensor_borrow_memory on a list of params and shared memory sharedctypes.

Parameters:
  • shared_params (list) – of pytensor.tensor.sharedvar.TensorSharedVariable; the pytensor shared variables where shared memory should be used instead.

  • memshared_instances (dict of tuples) – of multiprocessing.RawArray and their shapes; the memory shared across processes (e.g. from memshare_sparams)

Notes

Same as borrow_memory but for lists of shared memories and pytensor variables. See borrow_memory

parallel.borrow_memory(shared_param, memshared_instance, shape)[source]

Spawn different processes with the shared memory of your pytensor model’s variables.

Parameters:
  • shared_param (pytensor.tensor.sharedvar.TensorSharedVariable) – the pytensor shared variable where shared memory should be used instead.

  • memshared_instance (multiprocessing.RawArray) – the memory shared across processes (e.g. from memshare_sparams)

  • shape (tuple) – of shape of shared instance

Notes

Modified from: https://github.com/JonathanRaiman/pytensor_lstm/blob/master/pytensor_lstm/shared_memory.py

For each process in the target function run the pytensor_borrow_memory method on the parameters you want to have share memory across processes. In this example we have a model called “mymodel” with parameters stored in a list called “params”. We loop through each pytensor shared variable and call borrow_memory on it to share memory across processes.

Examples

>>> def spawn_model(path, memshared_instances):
...     # prevent recompilation and arbitrary locks
...     pytensor.config.reoptimize_unpickled_function = False
...     pytensor.gof.compilelock.set_lock_status(False)
...     # load your function from its pickled instance (from path)
...     myfunction = MyFunction.load(path)
...     # for each parameter of the function apply the borrow-memory
...     # strategy to replace the internal parameter's memory with the
...     # across-process memory:
...     for param, (memshared_instance, shape) in zip(
...             myfunction.get_shared(), memshared_instances):
...         borrow_memory(param, memshared_instance, shape)
...     # acquire your dataset (either through some smart shared memory
...     # or by reloading it for each process)
...     dataset, dataset_labels = acquire_dataset()
...     # then run your model forward in this process
...     for epoch in range(20):
...         myfunction.update_fun(dataset, dataset_labels)

See borrow_all_memories for list usage.

parallel.check_available_memory(filesize)[source]

Checks if the system memory can handle the given filesize.

Parameters:

filesize (float) – in [MB] megabytes

parallel.exception_tracer(func)[source]

Function decorator that returns a traceback if an Error is raised in a child process of a pool.

parallel.get_process_id()[source]

Returns the process id of the current process

parallel.memshare(parameternames)[source]

Add parameters to set of variables that are to be put into shared memory.

Parameters:

parameternames (list of str) – of names of pytensor.tensor.sharedvar.TensorSharedVariable

parallel.memshare_sparams(shared_params)[source]

For each parameter in a list of pytensor TensorSharedVariable we substitute the memory with a sharedctype using the multiprocessing library.

The wrapped memory can then be used by other child processes thereby synchronising different instances of a model across processes (e.g. for multi cpu gradient descent using single cpu pytensor code).

Parameters:

shared_params (list) – of pytensor.tensor.sharedvar.TensorSharedVariable

Returns:

memshared_instances – of multiprocessing.sharedctypes.RawArray; shared memory arrays that point to the memory used by the current process’s pytensor variables.

Return type:

list

Notes

Modified from: https://github.com/JonathanRaiman/pytensor_lstm/blob/master/pytensor_lstm/shared_memory.py

# define some pytensor function:
myfunction = MyFunction(20, 50, ...)

# wrap the memory of the pytensor variables:
memshared_instances = memshare_sparams(myfunction.get_shared())

Then you can use this memory in child processes (See usage of borrow_memory)

parallel.overseer(timeout)[source]

Function decorator that raises a TimeoutException exception after timeout seconds, if the decorated function did not return.

parallel.paripool(function, workpackage, nprocs=None, chunksize=1, timeout=65535, initializer=None, initargs=(), worker_initializer=None, winitargs=())[source]

Initialises a pool of workers and executes a function in parallel by forking the process. Forks only once, during initialisation.

Parameters:
  • function (function) – python function to be executed in parallel

  • workpackage (list) – of iterables that are to be looped over / executed in parallel; usually these objects are different for each task.

  • nprocs (int) – number of processors to be used in parallel process

  • chunksize (int) – number of work packages to throw at workers in each instance

  • timeout (int) – time [s] after which processes are killed; default: 65535 s

  • initializer (function) – to initialise the pool with; may be a container for shared arrays

  • initargs (tuple) – of arguments for the initializer

  • worker_initializer (function) – to initialize each worker process

  • winitargs (tuple) – of arguments to worker_initializer
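A minimal usage sketch; the worker function and work package are made up for illustration, and each work item is assumed to be a tuple of arguments:

from beat.parallel import paripool

def misfit(i):
    # stand-in for an expensive forward calculation
    return i * i

work = [(i,) for i in range(8)]     # one argument tuple per task

# assumption: paripool yields one result per work item
for result in paripool(misfit, work, nprocs=4, chunksize=2):
    print(result)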

The backend Module

File trace backends modified from pymc to work efficiently with SMC

Store sampling values as CSV or binary files.

File format

Sampling values for each chain are saved in a separate file (under a directory specified by the dir_path argument). The rows correspond to sampling iterations. The column names consist of variable names and index labels. For example, the heading

x,y__0_0,y__0_1,y__1_0,y__1_1,y__2_0,y__2_1

represents two variables, x and y, where x is a scalar and y has a shape of (3, 2).
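The flat column names simply encode the array index of each element; mirroring the heading above in plain Python:

# a scalar x contributes one column; y with shape (3, 2) contributes
# one column y__i_j per element y[i, j]
columns = ['x'] + ['y__%i_%i' % (i, j) for i in range(3) for j in range(2)]
print(','.join(columns))
# x,y__0_0,y__0_1,y__1_0,y__1_1,y__2_0,y__2_1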

class backend.ArrayStepSharedLLK(*args, **kwargs)[source]

Modified ArrayStepShared To handle returned larger point including the likelihood values. Takes additionally a list of output vars including the likelihoods.

Parameters:
  • value_vars (list) – variables to be sampled

  • out_vars (list) – variables to be stored in the traces

  • shared (dict) – pytensor variable -> shared variables

step(point)[source]

Perform a single step of the sampler.

class backend.BaseChain(model=None, value_vars=None, buffer_size=5000, buffer_thinning=1)[source]

Base chain object, independent of file or memory output.

Parameters:
  • model (Model) – If None, the model is taken from the with context.

  • value_vars (list of variables) – Sampling values will be stored for these variables. If None, model.unobserved_RVs is used.

buffer_write(lpoint, draw)[source]

Write sampling results into the buffer. If the buffer is full, throw an error.

get_sample_covariance(step)[source]

Return sample Covariance matrix from buffer.

class backend.FileChain(dir_path='', model=None, value_vars=None, buffer_size=5000, buffer_thinning=1, progressbar=False, k=None)[source]

Base class for a trace written to a file, with buffer functionality and progressbar. The buffer is a list of tuples of lpoints and a draw index. Inheriting classes must define the methods ‘_write_data_to_file’ and ‘_load_df’.

clear_data()[source]

Clear the data loaded from file.

write(lpoint, draw)[source]

Write sampling results into the buffer. If the buffer is full, write the samples to file.

class backend.MemoryChain(buffer_size=5000)[source]

Slim memory trace object. Keeps points in a list in memory.

Parameters:
  • draws (int) – Number of samples

  • chain (int) – Chain number

class backend.NumpyChain(dir_path, model=None, value_vars=None, buffer_size=5000, progressbar=False, k=None, buffer_thinning=1)[source]

Numpy binary trace object based on ‘.bin’ files. Fast in reading and writing. Bad for debugging.

Parameters:
  • dir_path (str) – Name of directory to store text files

  • model (Model) – If None, the model is taken from the with context.

  • vars (list of variables) – Sampling values will be stored for these variables. If None, model.unobserved_RVs is used.

  • buffer_size (int) – this is the number of samples after which the buffer is written to disk or if the chain end is reached

  • buffer_thinning (int) – every nth sample of the buffer is written to disk

  • progressbar (boolean) – flag whether a progressbar is active; if not, a log message is printed every time the buffer is written to disk

  • k (int, optional) – if given, don’t use the shape from the test point as the size of trans-d variables

construct_data_structure()[source]

Create a dtype to store the data based on varnames in a numpy array.

Return type:

A numpy.dtype

point(idx)[source]

Get point of current chain with variable names as keys.

Parameters:

idx (int) – Index of the nth step of the chain

Return type:

dictionary of point values

setup(draws, chain, overwrite=False)[source]

Perform chain-specific setup. Creates a file with a header. Existing files are not overwritten unless the overwrite flag is set.

Parameters:
  • draws (int) – Expected number of draws

  • chain (int) – Chain number

  • overwrite (bool, optional) – if True, overwrite an existing file; default: False.

class backend.TextChain(dir_path, model=None, value_vars=None, buffer_size=5000, buffer_thinning=1, progressbar=False, k=None)[source]

Text trace object based on ‘.csv’ files. Slow in reading and writing. Good for debugging.

Parameters:
  • dir_path (str) – Name of directory to store text files

  • model (Model) – If None, the model is taken from the with context.

  • vars (list of variables) – Sampling values will be stored for these variables. If None, model.unobserved_RVs is used.

  • buffer_size (int) – this is the number of samples after which the buffer is written to disk or if the chain end is reached

  • buffer_thinning (int) – every nth sample of the buffer is written to disk

  • progressbar (boolean) – flag whether a progressbar is active; if not, a log message is printed every time the buffer is written to disk

  • k (int, optional) – if given, don’t use the shape from the test point as the size of trans-d variables

get_values(varname, burn=0, thin=1)[source]

Get values from trace.

Parameters:
  • varname (str) – Variable name for which values are to be retrieved.

  • burn (int) – Burn-in samples from trace. This is the number of samples to be thrown out from the start of the trace

  • thin (int) – Thinning factor: keep only every ‘thin’-th sample of the trace.

Return type:

numpy.array

point(idx)[source]

Get point of current chain with variable names as keys.

Parameters:

idx (int) – Index of the nth step of the chain

Return type:

dictionary of point values

setup(draws, chain, overwrite=False)[source]

Perform chain-specific setup.

Parameters:
  • draws (int) – Expected number of draws

  • chain (int) – Chain number

class backend.TransDTextChain(name, model=None, value_vars=None, buffer_size=5000, progressbar=False)[source]

Result Trace object for trans-d problems. Manages several TextChains one for each dimension.

point(idx)[source]

Get point of current chain with variable names as keys.

Parameters:

idx (int) – Index of the nth step of the chain

Return type:

dict of point values

backend.check_multitrace(mtrace, draws, n_chains, buffer_thinning=1)[source]

Check multitrace for incomplete sampling and return indexes from chains that need to be resampled.

Parameters:
  • mtrace (pymc.backend.base.MultiTrace) – Multitrace object containing the sampling traces

  • draws (int) – Number of steps (i.e. chain length for each Markov Chain)

  • n_chains (int) – Number of Markov Chains

Return type:

list of indexes for chains that need to be resampled

backend.concatenate_traces(mtraces)[source]

Concatenate a List of MultiTraces with same chain indexes.

backend.extract_bounds_from_summary(summary, varname, shape, roundto=None, alpha=0.01)[source]

Extract lower and upper bound of random variable.

Return type:

list of numpy.ndarray

backend.extract_variables_from_df(dataframe)[source]

Extract random variables and their shapes from the pymc-pandas data-frame

Parameters:

dataframe (pandas.DataFrame) –

Returns:

  • flat_names (dict) – with variable-names and respective flat-name indexes to data-frame

  • var_shapes (dict) – with variable names and shapes

backend.get_highest_sampled_stage(homedir, return_final=False)[source]

Return stage number of stage that has been sampled before the final stage.

Parameters:

homedir (str) – Directory to the sampled stage results

Returns:

stage number

Return type:

int

backend.load_multitrace(dirname, varnames=[], chains=None, backend='csv')[source]

Load TextChain database.

Parameters:
  • dirname (str) – Name of directory with files (one per chain)

  • varnames (list) – of strings with variable names

  • chains (list, optional) –

Return type:

A pymc.backend.base.MultiTrace instance
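For example (project layout and variable names are hypothetical):

from beat import backend

mtrace = backend.load_multitrace(
    './projects/Laquila2009/geometry/stage_0',   # hypothetical directory
    varnames=['east_shift', 'north_shift', 'depth'],
    backend='csv',
)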

backend.load_sampler_params(project_dir, stage_number, mode)[source]

Load saved parameters from given smc stage.

Parameters:
  • project_dir (str) – absolute path to directory of BEAT project

  • stage_number (int or str) – stage number, or ‘final’ for the last stage

  • mode (str) – problem mode that has been solved (‘geometry’, ‘static’, ‘kinematic’)

backend.thin_buffer(buffer, buffer_thinning, ensure_last=True)[source]

Thin a list of objects by a given factor.

Parameters:
  • buffer (list) – of objects to be thinned

  • buffer_thinning (int) – every nth object in list is returned

  • ensure_last (bool) – if enabled, ensure that the last object in the list is returned
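For instance (the exact boundary handling is an implementation detail):

from beat.backend import thin_buffer

draws = list(range(10))
# keep every 3rd buffered object; ensure_last=True also keeps the final
# element so the chain end is never dropped
thinned = thin_buffer(draws, buffer_thinning=3, ensure_last=True)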

The models Module

problems

class models.problems.DistributionOptimizer(config, hypers=False)[source]

Defines the model setup to solve the linear slip-distribution and returns the model object.

Parameters:

config (config.BEATconfig) – Contains all the information about the model setup and optimization boundaries, as well as the sampler parameters.

lsq_solution(point, plot=False)[source]

Returns the non-negative least-squares solution for a given input point.

Parameters:

point (dict) – in solution space

Return type:

point with least-squares solution

class models.problems.GeometryOptimizer(config, hypers=False)[source]

Defines the model setup to solve for the non-linear fault geometry.

Parameters:

config (config.BEATconfig) – Contains all the information about the model setup and optimization boundaries, as well as the sampler parameters.

models.problems.load_model(project_dir, mode, hypers=False, build=True)[source]

Load config from project directory and return BEAT problem including model.

Parameters:
  • project_dir (string) – path to beat model directory

  • mode (string) – problem name to be loaded

  • hypers (boolean) – flag to return hyper parameter estimation model instead of main model.

  • build (boolean) – flag to build models

Returns:

problem

Return type:

Problem
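Typical usage, assuming an existing project directory:

from beat import models

# rebuild the geometry problem, including the underlying pymc model
problem = models.load_model('./projects/Laquila2009', 'geometry', build=True)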

seismic

class models.seismic.SeismicDistributerComposite(sc, project_dir, events, hypers=False)[source]

Comprises how to solve the seismic (kinematic) linear forward model for distributed slip.

get_synthetics(point, **kwargs)[source]

Get synthetics for given point in solution space.

Parameters:
  • point (pymc.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the forward model; outmode: stacked_traces / tapered_data / array

Return type:

list with heart.SeismicDataset synthetics for each target

load_fault_geometry()[source]

Load fault-geometry, i.e. discretized patches.

Return type:

heart.FaultGeometry

load_gfs(crust_inds=None, make_shared=True)[source]

Load Greens Function matrices for each variable to be inverted for. Updates the gfs and gf_names attributes.

Parameters:
  • crust_inds (list) – of int to indexes of Green’s Functions

  • make_shared (bool) – if True transforms gfs to pytensor.shared variables

point2sources(point)[source]

Returns the fault source patch(es) with the point values updated.

Parameters:

point (dict) – with random variables from solution space

update_weights(point, n_jobs=1, plot=False, chop_bounds=['b', 'c'])[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters:

point (dict) – with numpy array-like items and variable name keys

class models.seismic.SeismicGeometryComposite(sc, project_dir, sources, mapping, events, hypers=False)[source]

Comprises how to solve the non-linear seismic forward model.

Parameters:
  • sc (config.SeismicConfig) – configuration object containing seismic setup parameters

  • project_dir (str) – directory of the model project, where to find the data

  • sources (list) – of pyrocko.gf.seismosizer.Source

  • mapping (list) – of dict of varnames and their sizes

  • events (list) – of pyrocko.model.Event contains information of reference event(s), coordinates of reference point(s) and source time(s)

  • hypers (boolean) – if true initialise object for hyper parameter optimization

get_formula(input_rvs, fixed_rvs, hyperparams, problem_config)[source]

Get seismic likelihood formula for the model built. Has to be called within a with model context.

Parameters:
  • input_rvs (list) – of pymc.distribution.Distribution of source parameters

  • fixed_rvs (dict) – of numpy.array

  • hyperparams (dict) – of pymc.distribution.Distribution

  • problem_config (config.ProblemConfig) –

Returns:

posterior_llk

Return type:

pytensor.tensor.Tensor

get_pyrocko_events(point=None)[source]

Transform sources to pyrocko events.

Returns:

events – of pyrocko.model.Event

Return type:

list

get_synthetics(point, **kwargs)[source]

Get synthetics for given point in solution space.

Parameters:
  • point (pymc.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the seismic forward model; outmode = ‘traces’ / ‘array’ / ‘data’

Returns:

default

Return type:

array of synthetics for all targets

point2sources(point)[source]

Updates the composite source(s) (in place) with the point values.

Parameters:

point (dict) – with random variables from solution space

update_weights(point, n_jobs=1, plot=False, chop_bounds=['b', 'c'])[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters:

point (dict) – with numpy array-like items and variable name keys

geodetic

class models.geodetic.GeodeticBEMComposite(gc, project_dir, sources, mapping, events, hypers=False)[source]
get_synthetics(point)[source]

Get synthetics for given point in solution space.

Parameters:
  • point (pymc.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the forward model

Return type:

list with numpy.ndarray synthetics for each target

update_weights(point, n_jobs=1, plot=False)[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters:

point (dict) – with numpy array-like items and variable name keys

class models.geodetic.GeodeticDistributerComposite(gc, project_dir, events, hypers=False)[source]

Comprises how to solve the geodetic (static) linear forward model for distributed slip.

get_formula(input_rvs, fixed_rvs, hyperparams, problem_config)[source]

Formulation of the distribution problem for the model built. Has to be called within a with-model-context.

Parameters:
  • input_rvs (list) – of pymc.distribution.Distribution

  • hyperparams (dict) – of pymc.distribution.Distribution

Returns:

llk – log-likelihood for the distributed slip

Return type:

pytensor.tensor.Tensor

get_synthetics(point)[source]

Get synthetics for given point in solution space.

Parameters:
  • point (pymc.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the forward model

Return type:

list with numpy.ndarray synthetics for each target

load_fault_geometry()[source]

Load fault-geometry, i.e. discretized patches.

Return type:

heart.FaultGeometry

load_gfs(crust_inds=None, make_shared=True)[source]

Load Greens Function matrices for each variable to be inverted for. Updates the gfs and gf_names attributes.

Parameters:
  • crust_inds (list) – of int to indexes of Green’s Functions

  • make_shared (bool) – if True transforms gfs to pytensor.shared variables

point2sources(point)[source]

Returns the fault source patch(es) with the point values updated.

Parameters:

point (dict) – with random variables from solution space

update_weights(point, n_jobs=1, plot=False)[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters:

point (dict) – with numpy array-like items and variable name keys

class models.geodetic.GeodeticGeometryComposite(gc, project_dir, sources, mapping, events, hypers=False)[source]
get_synthetics(point)[source]

Get synthetics for given point in solution space.

Parameters:

point (pymc.Point()) – Dictionary with model parameters

Return type:

list with numpy.ndarray synthetics for each target

update_weights(point, n_jobs=1, plot=False)[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters:

point (dict) – with numpy array-like items and variable name keys

The covariance Module

class covariance.SeismicNoiseAnalyser(structure='variance', pre_arrival_time=5.0, engine=None, events=None, sources=None, chop_bounds=['b', 'c'])[source]

Seismic noise analyser

Parameters:
  • structure (string) – one of: variance, exponential, import, non-toeplitz

  • pre_arrival_time (float) – in [s], time before P arrival until variance is estimated

  • engine (pyrocko.gf.seismosizer.LocalEngine) – processing object for synthetics calculation

  • events (list) – of pyrocko.meta.Event reference event(s) from catalog

  • chop_bounds (list of len 2) – of taper attributes a, b, c, or d

get_data_covariances(wmap, sample_rate, results=None, chop_bounds=None)[source]

Estimate data covariances of seismic traces.

Parameters:
  • wmap (beat.WaveformMapping) –

  • results

  • sample_rate (float) – sampling rate of data_traces and GreensFunction stores

Return type:

numpy.ndarray

covariance.geodetic_cov_velocity_models(engine, sources, targets, dataset, plot=False, event=None, n_jobs=1)[source]

Calculate model prediction uncertainty matrix with respect to uncertainties in the velocity model for geodetic targets using fomosto GF stores.

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) – contains synthetics generation machine

  • target (pyrocko.gf.targets.StaticTarget) – dataset and observation points to calculate covariance for

  • sources (list) – of pyrocko.gf.seismosizer.Source determines the covariance matrix

  • plot (boolean) – if set, a plot is produced and no covariance matrix is returned

Return type:

numpy.ndarray with Covariance due to velocity model uncertainties

covariance.geodetic_cov_velocity_models_pscmp(store_superdir, crust_inds, target, sources)[source]

Calculate model prediction uncertainty matrix with respect to uncertainties in the velocity model for geodetic targets based on pscmp. Deprecated!!!

Parameters:
  • store_superdir (str) – Absolute path to the geodetic GreensFunction directory

  • crust_inds (list) – of int of indices for respective GreensFunction store indexes

  • target (heart.GeodeticDataset) – dataset and observation points to calculate covariance for

  • sources (list) – of pscmp.PsCmpRectangularSource determines the covariance matrix

Return type:

numpy.ndarray with Covariance due to velocity model uncertainties

covariance.seismic_cov_velocity_models(engine, sources, targets, arrival_taper, arrival_time, wavename, filterer, plot=False, n_jobs=1, chop_bounds=['b', 'c'])[source]

Calculate model prediction uncertainty matrix with respect to uncertainties in the velocity model for station and channel.

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) – contains synthetics generation machine

  • sources (list) – of pyrocko.gf.seismosizer.Source

  • targets (list) – of pyrocko.gf.seismosizer.Targets

  • arrival_taper – determines tapering around phase Arrival

  • arrival_time (None or numpy.ndarray or float) – of phase to apply taper; if None, the theoretical arrival from ray tracing is used

  • filterer (list) – of heart.Filter determining the filtering corner frequencies of various filters

  • plot (boolean) – open snuffler and browse traces if True

  • n_jobs (int) – number of processors to be used for calculation

Return type:

numpy.ndarray with Covariance due to velocity model uncertainties

The pytensorf Module

Package for wrapping various functions into pytensor-Ops to be able to include them into pytensor graphs as is needed by the pymc models.

Far future:

include a ‘def grad()’ method in each Op in order to enable the use of gradient-based optimization algorithms

class pytensorf.EulerPole(lats, lons, data_mask)[source]

pytensor Op for rotation of geodetic observations around Euler Pole.

Parameters:
  • lats (float, vector) – of Latitudes [deg] of points to be rotated

  • lons (float, vector) – of Longitudes [deg] of points to be rotated

  • data_mask (bool, vector) – of indexes to points to be masked

make_node(inputs)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:

node – The constructed Apply node.

Return type:

Apply

perform(node, inputs, output)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.

  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class pytensorf.GeoSynthesizer(engine, sources, targets, mapping)[source]

pytensor wrapper for a geodetic forward model computing synthetic displacements. Uses the pyrocko engine and fomosto GF stores. Input order does not matter anymore (it did in previous versions).

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (List) – containing pyrocko.gf.seismosizer.Source Objects

  • targets (List) – containing pyrocko.gf.targets.StaticTarget Objects

  • mapping (Dict) – variable names and lists of integers describing how they map to the source objects

make_node(inputs)[source]

Transforms pytensor tensors to node and allocates variables accordingly.

Parameters:

inputs (dict) – keys are strings of source attributes of the pscmp.RectangularSource that was used to initialise the Operator; values are pytensor.tensor.Tensor

perform(node, inputs, output)[source]

Perform method of the Operator to calculate synthetic displacements.

class pytensorf.PolaritySynthesizer(engine, source, pmap, is_location_fixed, always_raytrace)[source]
make_node(inputs)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:

node – The constructed Apply node.

Return type:

Apply

perform(node, inputs, output)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.

  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class pytensorf.SeisDataChopper(sample_rate, traces, arrival_taper, filterer)[source]

Deprecated!

make_node(*inputs)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:

node – The constructed Apply node.

Return type:

Apply

perform(node, inputs, output)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.

  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class pytensorf.SeisSynthesizer(engine, sources, mapping, targets, event, arrival_taper, arrival_times, wavename, filterer, pre_stack_cut, station_corrections, domain)[source]

pytensor wrapper for a seismic forward model computing synthetic waveforms. Input order does not matter anymore (it did in previous versions).

Parameters:
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (List) – containing pyrocko.gf.seismosizer.Source Objects

  • targets (List) – containing pyrocko.gf.seismosizer.Target Objects

  • arrival_taper (heart.ArrivalTaper) –

  • arrival_times (numpy.ndarray) – with synthetic arrival times w.r.t. the reference event

  • filterer (heart.Filterer) –

make_node(inputs)[source]

Transforms pytensor tensors to node and allocates variables accordingly.

Parameters:

inputs (dict) – keys are strings of source attributes of the pscmp.RectangularSource that was used to initialise the Operator; values are pytensor.tensor.Tensor

perform(node, inputs, output)[source]

Perform method of the Operator to calculate synthetic displacements.

class pytensorf.StrainRateTensor(lats, lons, data_mask)[source]

pytensor Op for internal block deformation through a 2d area strain rate tensor.

Parameters:
  • lats (float, vector) – of Latitudes [deg] of points to be strained

  • lons (float, vector) – of Longitudes [deg] of points to be strained

  • data_mask (bool, vector) – of indexes to points to be masked

make_node(inputs)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:

node – The constructed Apply node.

Return type:

Apply

perform(node, inputs, output)[source]

Calculate the function on the inputs and put the variables in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.

  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of the corresponding Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class pytensorf.Sweeper(patch_size, n_patch_dip, n_patch_strike, implementation)[source]

pytensor Op for C implementation of the fast sweep algorithm.

Parameters:
  • patch_size (float) – size of fault patches [km]

  • n_patch_dip (int) – number of patches in dip direction

  • n_patch_strike (int) – number of patches in strike direction

make_node(*inputs)[source]

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:

node – The constructed Apply node.

Return type:

Apply

perform(node, inputs, output)[source]

Return start-times of rupturing patches with respect to given hypocenter.

Parameters:
  • slownesses (float, vector) – inverse of the rupture velocity across each patch

  • nuc_dip (int, scalar) – rupture nucleation point on the fault in dip-direction, index to patch

  • nuc_strike (int, scalar) – rupture nucleation point on the fault in strike-direction, index to patch

Returns:

starttimes

Return type:

float, vector

Notes

The C-implementation is deliberately called with swapped strike and dip directions, because the fault has to dip along the row direction of the array, whereas the C-implementation has it along the columns.
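For orientation, a hypothetical usage sketch of the Op in a pytensor graph follows; the import path, input order and dtypes are assumptions inferred from the perform() documentation above, not a verified interface:

    import pytensor.tensor as pt

    from beat.pytensorf import Sweeper  # import path assumed

    # 2 km patches on a 10 x 20 (dip x strike) fault grid
    sweep = Sweeper(patch_size=2.0, n_patch_dip=10, n_patch_strike=20,
                    implementation='c')

    slownesses = pt.vector('slownesses')    # [s/km] inverse rupture velocity
    nuc_dip = pt.lscalar('nuc_dip')         # nucleation patch index, dip direction
    nuc_strike = pt.lscalar('nuc_strike')   # nucleation patch index, strike direction

    # calling the Op builds the Apply node; evaluation yields patch start-times
    starttimes = sweep(slownesses, nuc_dip, nuc_strike)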

The utility Module

This module provides a namespace for various functions: coordinate transformations, loading and storing objects, book-keeping of indexes in arrays that relate to defined variable names, manipulation of various pyrocko objects and many more …

class utility.Counter[source]

Counts calls of types with string_ids. Repeated calls with the same string id increase the count.

class utility.DataMap(list_ind, slc, shp, dtype, name)

dtype

Alias for field number 3

list_ind

Alias for field number 0

name

Alias for field number 4

shp

Alias for field number 2

slc

Alias for field number 1

class utility.ListArrayOrdering(list_arrays, intype='numpy')[source]

An ordering for a list to an array space. Also takes non-pytensor tensors. Modified from pymc blocking.

Parameters:
  • list_arrays (list) – numpy.ndarray or pytensor.tensor.Tensor

  • intype (str) – defining the input type ‘tensor’ or ‘numpy’

class utility.ListToArrayBijection(ordering, list_arrays, blacklist=[])[source]

A mapping between a List of arrays and an array space

Parameters:
  • ordering (ListArrayOrdering) –

  • list_arrays (list) – of numpy.ndarray

a2l(array)[source]

Maps values from array space to List space. Inverse operation of fmap.

Parameters:

array (numpy.ndarray) –

Returns:

a_list – of numpy.ndarray

Return type:

list

a_nd2l(array)[source]

Maps values from ndarray space (ndims, data) to List space. Inverse operation of fmap.

Parameters:

array (numpy.ndarray) –

Returns:

a_list – of numpy.ndarray

Return type:

list

d2l(dpt)[source]

Maps values from dict space to List space. If a variable expected from the ordering is not in the point, it is filled with the low dummy value -999999.

Parameters:

dpt (dict) – of variable names and numpy.ndarray values

Return type:

lpoint

f3map(list_arrays)[source]

Maps values from List space to array space with 3 columns

Parameters:

list_arrays (list) – of numpy.ndarray with size: n x 3

Returns:

array – single array comprising all the input arrays

Return type:

numpy.ndarray

l2a(list_arrays)[source]

Maps values from List space to array space

Parameters:

list_arrays (list) – of numpy.ndarray

Returns:

array – single array comprising all the input arrays

Return type:

numpy.ndarray

l2d(a_list)[source]

Maps values from List space to dict space

Parameters:

a_list (list) – of numpy.ndarray

Return type:

pymc.model.Point

srmap(tarray)[source]

Maps values from symbolic variable array space to List space.

Parameters:

tarray (pytensor.tensor.Tensor) –

Returns:

a_list – of pytensor.tensor.Tensor

Return type:

list
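Conceptually, the bijection packs a list of arrays into one flat array and recovers the list from remembered shapes. A minimal numpy sketch of the l2a/a2l round trip (illustrative only; the class itself works through a ListArrayOrdering and also handles symbolic tensors):

    import numpy as np

    def l2a(list_arrays):
        # pack a list of arrays into a single 1-d array
        return np.concatenate([a.ravel() for a in list_arrays])

    def a2l(array, shapes):
        # unpack the flat array into a list, given the original shapes
        a_list, start = [], 0
        for shp in shapes:
            size = int(np.prod(shp))
            a_list.append(array[start:start + size].reshape(shp))
            start += size
        return a_list

    arrays = [np.zeros((2, 3)), np.ones(4)]
    flat = l2a(arrays)                               # shape (10,)
    restored = a2l(flat, [a.shape for a in arrays])  # round trip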

utility.PsGrnArray2LayeredModel(psgrn_input_path)[source]

Read PsGrn Input file and return velocity model.

Parameters:

psgrn_input_path (str) – Absolute path to the psgrn input file.

Return type:

LayeredModel

utility.RS_center(source)[source]

Get 3d fault center coordinates. Depth attribute is top depth!

Parameters:

source (RectangularSource) –

Returns:

numpy.ndarray with x, y, z coordinates of the center of the fault

utility.RS_dipvector(source)[source]

Get 3 dimensional dip-vector of a planar fault.

Parameters:

source (RectangularSource) –

Return type:

numpy.ndarray

utility.RS_strikevector(source)[source]

Get 3 dimensional strike-vector of a planar fault.

Parameters:

source (RectangularSource) –

Return type:

numpy.ndarray
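For a planar fault with strike phi and dip delta, both unit vectors follow from the angles alone. A small sketch in north-east-down coordinates with depth positive downwards (the axis convention is an assumption; the actual functions read the angles from the source object):

    import numpy as np

    def strike_vector(strike_deg):
        phi = np.deg2rad(strike_deg)
        return np.array([np.cos(phi), np.sin(phi), 0.0])  # horizontal

    def dip_vector(strike_deg, dip_deg):
        phi, delta = np.deg2rad(strike_deg), np.deg2rad(dip_deg)
        # dip direction is strike + 90 deg, inclined downwards by delta
        return np.array([-np.sin(phi) * np.cos(delta),
                         np.cos(phi) * np.cos(delta),
                         np.sin(delta)])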

class utility.StencilOperator(**kwargs)[source]

Undocumented.

h

float, default: 0.1

step size left and right of the reference value

order

int, default: 3

number of points of central differences

utility.adjust_fault_reference(source, input_depth='top')[source]

Adjusts source depth and east/north-shifts variables of fault according to input_depth mode ‘top/center’.

Parameters:
  • source (RectangularSource or pscmp.RectangularSource or pyrocko.gf.seismosizer.RectangularSource) –

  • input_depth (string) – if ‘top’ the depth in the source is interpreted as top depth; if ‘center’ the depth in the source is interpreted as center depth

Return type:

Updated input source object
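The underlying geometry: going from a top-edge reference to a center reference shifts the point half the fault width down-dip. A sketch of that bookkeeping, assuming north-east-down coordinates with depth positive downwards (not the function’s verbatim implementation):

    import numpy as np

    def top_to_center(depth_top, width, strike_deg, dip_deg):
        # shift a top-edge reference point half the fault width down-dip
        phi, delta = np.deg2rad(strike_deg), np.deg2rad(dip_deg)
        half_w = 0.5 * width
        depth_center = depth_top + half_w * np.sin(delta)
        # the horizontal offset points in the dip direction (strike + 90 deg)
        north_shift = -half_w * np.cos(delta) * np.sin(phi)
        east_shift = half_w * np.cos(delta) * np.cos(phi)
        return depth_center, north_shift, east_shift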

utility.adjust_point_units(point)[source]

Transform variables with [km] units to [m]

Parameters:

point (dict) – pymc.model.Point() of model parameter units as keys

Returns:

mpoint – pymc.model.Point()

Return type:

dict

utility.apply_station_blacklist(stations, blacklist)[source]

Weed stations listed in the blacklist.

Parameters:
  • stations (list) – pyrocko.model.Station

  • blacklist (list) – strings of station names

Returns:

stations

Return type:

list of pyrocko.model.Station
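In essence this is a name filter; a minimal sketch, assuming plain station-code matching (real identifiers may also include network codes):

    def apply_station_blacklist(stations, blacklist):
        # keep only stations whose code is not blacklisted
        return [st for st in stations if st.station not in blacklist]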

utility.biggest_common_divisor(a, b)[source]

Find the biggest common divisor of two float numbers a and b.

Parameters:
  • a (float) –

  • b (float) –

Return type:

int
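A GCD is only defined for integers, so the floats first have to be mapped to an integer grid; one possible realisation (the scaling convention is an assumption, not the module’s verified behaviour):

    from math import gcd

    def biggest_common_divisor(a, b, precision=1000):
        # scale the floats to integers at fixed precision, then take the gcd
        return gcd(int(round(a * precision)), int(round(b * precision)))

    biggest_common_divisor(0.5, 0.3)  # -> 100, i.e. 0.1 on the scaled grid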

utility.check_hyper_flag(problem)[source]

Check problem setup for type of model standard/hyperparameters.

Parameters:

problem (models.Problem) –

Returns:

flag

Return type:

boolean

utility.check_point_keys(point, phrase)[source]

Searches point keys for a phrase, returns list of keys with the phrase.

utility.distances(points, ref_points)[source]

Calculate distances in Cartesian coordinates between points and reference points in N-D.

Parameters:
  • points (numpy.ndarray (n points x n spatial dimensions)) –

  • ref_points (numpy.ndarray (m points x n spatial dimensions)) –

Return type:

ndarray (n_points x n_ref_points)
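This is the textbook broadcasting pattern; an equivalent self-contained sketch:

    import numpy as np

    def distances(points, ref_points):
        # (n, 1, d) - (1, m, d) -> (n, m, d); norm over the last axis
        diff = points[:, None, :] - ref_points[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))

    pts = np.array([[0.0, 0.0], [1.0, 1.0]])
    refs = np.array([[1.0, 0.0]])
    distances(pts, refs)  # -> [[1.0], [1.0]]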

utility.downsample_trace(data_trace, deltat=None, snap=False)[source]

Downsample data_trace to given sampling interval ‘deltat’.

Parameters:
  • data_trace (pyrocko.trace.Trace) –

  • deltat (sampling interval [s] to which trace should be downsampled) –

Returns:

new instance

Return type:

pyrocko.trace.Trace

utility.dump_objects(outpath, outlist)[source]

Dump objects in outlist into pickle file.

Parameters:
  • outpath (str) – absolute path and file name for the file to be stored

  • outlist (list) – of objects to be saved in the pickle file

utility.ensure_cov_psd(cov)[source]

Ensure that the input covariance matrix is positive definite. If not, find the nearest positive semi-definite matrix.

Parameters:

cov (numpy.ndarray) – symmetric covariance matrix

Returns:

cov – positive definite covariance matrix

Return type:

numpy.ndarray

utility.error_not_whole(f, errstr='')[source]

Test if a float is a whole number; if not, raise an error.

utility.find_elbow(data, theta=None, rotate_left=False)[source]

Get point closest to turning point in data by rotating it by theta.

Adapted from: https://datascience.stackexchange.com/questions/57122/in-elbow-curve-how-to-find-the-point-from-where-the-curve-starts-to-rise

Parameters:
  • data (array_like) – [n, 2]

  • theta (float) – rotation angle

Returns:

  • Index (int) – closest to elbow.

  • rotated_data (array-like [n, 2])
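The idea is that after rotating the curve by theta, the elbow becomes (approximately) the minimum of the rotated y-values. A sketch that ignores the rotate_left option and the automatic choice of theta:

    import numpy as np

    def find_elbow(data, theta):
        # rotate the [n, 2] curve so the elbow turns into a minimum
        c, s = np.cos(theta), np.sin(theta)
        rotated = np.asarray(data) @ np.array([[c, -s], [s, c]]).T
        return int(rotated[:, 1].argmin()), rotated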

utility.gather(any_list, key, sort=None, filter=None)[source]

Return dictionary of the input list grouped by key.

utility.get_data_radiant(data)[source]

Data needs to be [n, 2]

utility.get_fit_indexes(llk)[source]

Find indexes of various likelihoods in a likelihood distribution.

Parameters:

llk (numpy.ndarray) –

Return type:

dict with array indexes

utility.get_random_uniform(lower, upper, dimension=1)[source]

Get uniform random values between given bounds

Parameters:
  • lower (float) –

  • upper (float) –

  • dimension (size of result vector) –

utility.get_rotation_matrix(axes=['x', 'y', 'z'])[source]

Return a function producing a 3-d rotation matrix about a specified axis.

Parameters:

axes (str or list of str) – x, y or z for the axis

Return type:

func that takes an angle [rad]
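A sketch of the closure pattern with the standard right-handed rotation matrices (single-axis case only; how the function handles a list of axes is not shown here):

    import numpy as np

    def get_rotation_matrix(axis='z'):
        # return a function mapping an angle [rad] to a 3-d rotation matrix
        def rotation(angle):
            c, s = np.cos(angle), np.sin(angle)
            if axis == 'x':
                return np.array([[1., 0., 0.], [0., c, -s], [0., s, c]])
            if axis == 'y':
                return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])
            if axis == 'z':
                return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
            raise ValueError('unknown axis %s' % axis)
        return rotation

    Rz = get_rotation_matrix('z')
    Rz(np.pi / 2)  # 90 deg rotation about the z-axis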

utility.get_valid_spectrum_data(deltaf, taper_frequencies=[0, 1.0])[source]

Extract the valid frequency range of a spectrum.

utility.join_models(global_model, crustal_model)[source]

Replace the part of the global_model that is covered by the crustal_model.

Parameters:
  • global_model (pyrocko.cake.LayeredModel) –

  • crustal_model (pyrocko.cake.LayeredModel) –

Returns:

joined_model

Return type:

cake.LayeredModel

utility.join_points(ldicts)[source]

Join a list of dicts into one dict, concatenating the values of keys that are present in multiple dicts.

utility.line_intersect(e1, e2, n1, n2)[source]

Get intersection points of n lines, given the end points of each line as (n x 2) arrays.

Parameters:
  • e1 (numpy.array (n x 2)) – east coordinates of first line

  • e2 (numpy.array (n x 2)) – east coordinates of second line

  • n1 (numpy.array (n x 2)) – north coordinates of first line

  • n2 (numpy.array (n x 2)) – north coordinates of second line

Return type:

numpy.array (n x 2) of intersection points (easts, norths)

utility.list2string(any_list, fill=', ')[source]

Convert a list of strings to a single string.

Parameters:

any_list (list) – of strings

utility.load_objects(loadpath)[source]

Load (unpickle) saved (pickled) objects from specified loadpath.

Parameters:

loadpath (absolute path and file name to the file to be loaded) –

Returns:

objects – of saved objects

Return type:

list

utility.mod_i(i, cycle)[source]

Calculates the modulus of a number and returns the number of full cycles and the remainder.

Parameters:
  • i (int or float) – Number to be cycled over

  • cycle (int or float) – Cycle length

Returns:

  • fullc (int or float depending on input)

  • rest (int or float depending on input)
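This is essentially Python’s built-in divmod:

    def mod_i(i, cycle):
        fullc, rest = divmod(i, cycle)
        return fullc, rest

    mod_i(7.5, 2.0)  # -> (3.0, 1.5): three full cycles, remainder 1.5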

utility.near_psd(x, epsilon=2.220446049250313e-16)[source]

Calculates the nearest positive semi-definite matrix for a correlation/covariance matrix.

Parameters:
  • x (numpy.ndarray) – Covariance/correlation matrix

  • epsilon (float) – Eigenvalue limit, here set to the precision of numpy floats; otherwise the resulting matrix is likely still not positive definite

Returns:

near_cov – closest positive definite covariance/correlation matrix

Return type:

numpy.ndarray

Notes

Numpy number precision is not high enough to resolve this for low-valued covariance matrices; the result may still have very small negative eigenvalues!

See repair_covariance below for a simpler implementation that can resolve the numbers!

Algorithm after Rebonato & Jaekel 1999

utility.positions2idxs(positions, cell_size, min_pos=0.0, backend=numpy, dtype='int16')[source]

Return index to a grid with a given cell size.

Parameters:
  • positions (numpy.NdArray float) – of positions [km]

  • cell_size (float) – size of grid cells

  • backend (module) – array backend to use for the computation, default: numpy

  • dtype (str) – data type of returned array, default: int16

utility.repair_covariance(x, epsilon=2.220446049250313e-16)[source]

Make covariance input matrix A positive definite by setting eigenvalues that are lower than the precision of numpy floats to at least that precision, and back-transforming.

Parameters:
  • x (numpy.ndarray) – Covariance/correlation matrix

  • epsilon (float) – Eigenvalue limit, here set to the precision of numpy floats; otherwise the resulting matrix is likely still not positive definite

Returns:

near_cov – closest positive definite covariance/correlation matrix

Return type:

numpy.ndarray

Notes

Algorithm after Gilbert Strang, ‘Introduction to Linear Algebra’
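A minimal numpy sketch of this eigenvalue-clipping idea, assuming a symmetric input matrix (not necessarily line-for-line what the module does):

    import numpy as np

    def repair_covariance(x, epsilon=np.finfo(float).eps):
        # symmetrise, lift eigenvalues to at least machine precision, rebuild
        vals, vecs = np.linalg.eigh((x + x.T) / 2.0)
        return vecs @ np.diag(np.clip(vals, epsilon, None)) @ vecs.T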

utility.running_window_rms(data, window_size, mode='valid')[source]

Calculate the root mean square (RMS) of a running window over data.

Parameters:
  • data (numpy.ndarray 1-d) – containing data to calculate the running RMS from

  • window_size (int) – sample size of running window

  • mode (str) – see numpy.convolve for modes

Returns:

with RMS values, size data.size - window_size + 1

Return type:

numpy.ndarray 1-d
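The standard convolution recipe for a running RMS; a sketch that should be functionally equivalent:

    import numpy as np

    def running_window_rms(data, window_size, mode='valid'):
        # mean of the squares over the window, then the square root
        window = np.ones(window_size) / float(window_size)
        return np.sqrt(np.convolve(data ** 2, window, mode=mode))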

utility.search_catalog(date, min_magnitude, dayrange=1.0)[source]

Search the GCMT catalog around the specified date (± dayrange days), filtering events with the given magnitude threshold.

Parameters:
  • date (str) – ‘YYYY-MM-DD’, date of the event

  • min_magnitude (float) – approximate minimum Mw of the event

  • dayrange (float) – temporal search interval [days] around date

Returns:

event

Return type:

pyrocko.model.Event

utility.setup_logging(project_dir, levelname, logfilename='BEAT_log.txt')[source]

Setup function for handling BEAT logging. The logfile ‘BEAT_log.txt’ is saved in the ‘project_dir’.

Parameters:
  • project_dir (str) – absolute path to the output directory for the Log file

  • levelname (str) – defining the level of logging

utility.slice2string(slice_obj)[source]

Wrapper for a better-formatted string representation of slices.

Return type:

str

utility.split_off_list(any_list, off_length)[source]

Split a list of length ‘off_length’ from the beginning of the input list. Modifies the input list!

Parameters:
  • any_list (list) – of objects to be separated

  • off_length (int) – number of elements to be split off from the beginning

Return type:

list

utility.split_point(point, mapping=None, n_sources_total=None, weed_params=False)[source]

Split point in solution space into List of dictionaries with source parameters for each source.

Parameters:
  • point (dict) – pymc.model.Point()

  • mapping

  • n_sources_total (int) – total number of sources for each type in setup

  • weed_params (bool) – if True, only source-related parameters are kept in the point; if False, an error may be raised.

Returns:

source_points – of pymc.model.Point()

Return type:

list

utility.string2slice(slice_string)[source]

Convert a string in slice form to a Python slice object.

Parameters:

slice_string (str) – of form “0:2” i.e. two integer numbers separated by colon
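A sketch of the parsing, assuming exactly two colon-separated integers:

    def string2slice(slice_string):
        start, stop = (int(s) for s in slice_string.split(':'))
        return slice(start, stop)

    string2slice('0:2')  # -> slice(0, 2, None)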

utility.swap_columns(array, index1, index2)[source]

Swaps the columns of the input array based on the given indexes.

utility.transform_sources(sources, datatypes, decimation_factors=None)[source]

Transforms a list of heart.RectangularSource to a dictionary of sources pscmp.PsCmpRectangularSource for geodetic data and pyrocko.gf.seismosizer.RectangularSource for seismic data.

Parameters:
  • sources (list) – heart.RectangularSource

  • datatypes (list) – of strings with the datatypes to be included ‘geodetic’ or ‘seismic’

  • decimation_factors (dict) – of datatypes and their respective decimation factor

Returns:

d – of transformed sources with datatypes as keys

Return type:

dict

utility.unique_list(any_list)[source]

Find unique entries in a list and return them in a list. Keeps the original order.

Parameters:

any_list (list) –

Return type:

list with only unique elements

utility.update_source(source, **point)[source]

Update source keeping stf and source params separate. Modifies input source Object!

Parameters:
  • source (pyrocko.gf.seismosizer.Source) –

  • point (dict) – pymc.model.Point()

utility.weed_data_traces(data_traces, stations)[source]

Throw out data traces belonging to stations that are not in the stations list. Keeps list orders!

Parameters:
  • data_traces (list) – of pyrocko.trace.Trace

  • stations (list) – of pyrocko.model.Station

Returns:

weeded_data_traces – of pyrocko.trace.Trace

Return type:

list

utility.weed_input_rvs(input_rvs, mode, datatype)[source]

Throw out random variables (RVs) from the input list that are not used by the respective synthetics generating functions.

Parameters:
  • input_rvs (dict) – of pymc.Distribution or set of variable names

  • mode (str) – ‘geometry’, ‘static’, ‘kinematic’, ‘interseismic’ determining the discarded RVs

  • datatype (str) – ‘seismic’ or ‘geodetic’ determining the discarded RVs

Returns:

weeded_input_rvs – of pymc.Distribution

Return type:

dict

utility.weed_stations(stations, event, distances=(30.0, 90.0), remove_duplicate=False)[source]

Weed stations that are not within the given distance range (min, max) of a reference event.

Parameters:
  • stations (list) – of pyrocko.model.Station

  • event (pyrocko.model.Event) –

  • distances (tuple) – of minimum and maximum distance [deg] for station-event pairs

Returns:

weeded_stations – of pyrocko.model.Station

Return type:

list

utility.weed_targets(targets, stations, discard_targets=[])[source]

Throw out targets belonging to stations that are not in the stations list. Keeps list orders and returns new list!

Parameters:
  • targets (list) – of pyrocko.gf.targets.Target

  • stations (list) – of pyrocko.model.Station

Returns:

weeded_targets – of pyrocko.gf.targets.Target

Return type:

list

The plotting Module

common

class plotting.common.PlotOptions(**kwargs)[source]

Undocumented.

post_llk

str, default: 'max'

Which model to plot on the specified plot; Default: “max”; Options: “max”, “min”, “mean”, “all”

plot_projection

str (pyrocko.guts.StringChoice), default: 'local'

Projection to use for plotting geodetic data; options: “latlon”, “local”, “utm”

utm_zone

int, optional, default: 36

Only relevant if plot_projection is “utm”

load_stage

int, default: -1

Which stage to select for plotting

figure_dir

str, default: 'figures'

Name of the output directory of plots

reference

dict of pyrocko.guts.Any objects, optional, default: {}

Reference point for example from a synthetic test.

outformat

str, default: 'pdf'

dpi

int, default: 300

force

bool, default: False

varnames

list of pyrocko.guts.Any objects, optional, default: []

Names of variables to plot

source_idxs

list of pyrocko.guts.Any objects, optional

Indexes to patches of slip distribution to draw marginals for

nensemble

int, default: 1

Number of draws from the PPD to display fuzzy results.

plotting.common.draw_line_on_array(X, Y, grid=None, extent=[], grid_resolution=(400, 400), linewidth=1)[source]

Draw line on given array by adding 1 to its fields.

Parameters:
  • X (array_like) – timeseries on xcoordinate (columns of array)

  • Y (array_like) – timeseries on ycoordinate (rows of array)

  • grid (array_like 2d) – input array that is used for drawing

  • extent (array extent) – [xmin, xmax, ymin, ymax] (cols, rows)

  • grid_resolution (tuple) – shape of given grid or grid that is being used for allocation

  • linewidth (int) – weight (width) of line drawn on grid

Return type:

grid, extent

plotting.common.format_axes(ax, remove=['right', 'top', 'left'], linewidth=None, visible=False)[source]

Removes box top, left and right.

plotting.common.get_llk_idx_to_trace(mtrace, point_llk='max')[source]

Return the index of the Point in the multitrace.

Parameters:
  • mtrace (pm.MultiTrace) – sampled result trace containing the posterior ensemble

  • point_llk (str) – returning according point with ‘max’, ‘min’, ‘mean’ likelihood

plotting.common.get_nice_plot_bounds(dmin, dmax, override_mode='min-max')[source]

Get nice min, max and increment for plots

plotting.common.get_result_point(mtrace, point_llk='max')[source]

Return Point dict from multitrace

Parameters:
  • mtrace (pm.MultiTrace) – sampled result trace containing the posterior ensemble

  • point_llk (str) – returning according point with ‘max’, ‘min’, ‘mean’ likelihood

Returns:

point – keys varnames, values numpy ndarrays

Return type:

dict

plotting.common.hide_ticks(ax, axis='yaxis')[source]

Hide ticks from plot axes. Still draws grid.

plotting.common.histplot_op(ax, data, reference=None, alpha=0.35, color=None, cmap=None, bins=None, tstd=None, qlist=[0.01, 99.99], cbounds=None, kwargs={})[source]

Modified from pymc3. Additional color argument.

Parameters:

data (array_like) – samples of one group for the histogram; row-wise ordering is expected

plotting.common.plot_matrix(A)[source]

Very simple plot of a matrix for fast inspections.

plotting.common.set_axes_equal_3d(ax, axes='xyz')[source]

Make axes of a 3D plot have equal scale so that spheres appear as spheres, cubes as cubes, etc. This is one possible workaround for Matplotlib’s ax.set_aspect(‘equal’) and ax.axis(‘equal’) not working for 3D.

Parameters:

ax (a matplotlib axis, e.g., as output from plt.gca().) –

plotting.common.str_dist(dist)[source]

Return string representation of distance.

plotting.common.str_duration(t)[source]

Convert time to str representation.

plotting.common.str_unit(quantity)[source]

Return string representation of waveform unit.

seismic

plotting.seismic.draw_data_stations(gmt, stations, data, dist, data_cpt=None, scale_label=None, *args)[source]

Draw MAP time-shifts at station locations as colored triangles

plotting.seismic.draw_hudson(problem, po)[source]

Modified from grond. Plot the Hudson graph for the reference event (grey) and the best solution (red beachball). Additionally, a random selection of models from the selected stage is plotted as smaller beachballs on the Hudson graph.

plotting.seismic.draw_station_map_gmt(problem, po)[source]

Draws a distance-dependent station map for teleseismic vs. regional/local setups.

plotting.seismic.extract_mt_components(problem, po, include_magnitude=False)[source]

Extract Moment Tensor components from problem results for plotting.

plotting.seismic.fuzzy_mt_decomposition(axes, list_m6s, labels=None, colors=None, fontsize=12)[source]

Plot fuzzy moment tensor decompositions for list of mt ensembles.

plotting.seismic.fuzzy_waveforms(ax, traces, linewidth, zorder=0, extent=None, grid_size=(500, 500), cmap=None, alpha=0.6)[source]

Fuzzy waveforms

Parameters:
  • traces (list) – of pyrocko.trace.Trace; the times of the traces should not vary too much

  • zorder (int) – the higher number is drawn above the lower number

  • extent (list) – of [xmin, xmax, ymin, ymax] (tmin, tmax, min/max of amplitudes); if None, the default is to determine it from the traces list

plotting.seismic.gmt_station_map_azimuthal(gmt, stations, event, data_cpt=None, data=None, max_distance=90, width=20, bin_width=15, fontsize=12, font='1', plot_names=True, scale_label='time-shifts [s]')[source]

Azimuthal equidistant station map; if data is given, stations are colored accordingly.

Parameters:
  • gmt (pyrocko.plot.gmtpy.GMT) –

  • stations (list) – of pyrocko.model.station.Station

  • event (pyrocko.model.event.Event) –

  • data_cpt (str) – path to gmt *.cpt file for coloring

  • data (numpy.ndarray) – 1d vector, length of stations, to color stations

  • max_distance (float) – maximum distance [deg] of event to map bound

  • width (float) – plot width [cm]

  • bin_width (float) – grid spacing [deg] for distance/azimuth grid

  • fontsize (int) – font-size in points for station labels

  • font (str) – GMT font specification (number or name)

plotting.seismic.n_model_plot(models, axes=None, draw_bg=True, highlightidx=[])[source]

Plot cake layered earth models.

plotting.seismic.plot_fuzzy_beachball_mpl_pixmap(mts, axes, best_mt=None, beachball_type='deviatoric', wavename='any_P', position=(0.0, 0.0), size=None, zorder=0, color_t='red', color_p='white', edgecolor='black', best_color='red', linewidth=2, alpha=1.0, projection='lambert', size_units='data', grid_resolution=100, method='imshow', view='top')[source]

Plot fuzzy beachball from a list of given MomentTensors

Parameters:
  • mts – list of pyrocko.moment_tensor.MomentTensor object or an array or sequence which can be converted into an MT object

  • best_mt (pyrocko.moment_tensor.MomentTensor) – object or an array or sequence which can be converted into an MT object; the most likely or minimum-misfit solution to extra highlight

  • best_color – mpl color for best MomentTensor edges, polygons are not plotted

See plot_beachball_mpl for other arguments

plotting.seismic.point2array(point, varnames, idx_source=1, rpoint=None)[source]

Concatenate values of point according to order of given varnames.

plotting.seismic.seismic_fits(problem, stage, plot_options)[source]

Modified from grond. Plot synthetic and data waveforms and the misfit for the selected posterior model.

geodetic

plotting.geodetic.scene_fits(problem, stage, plot_options)[source]

Plot geodetic data, synthetics and residuals.

plotting.geodetic.shaded_displacements(displacement, shad_data, cmap='RdBu', shad_lim=(0.4, 0.98), tick_step=0.01, contrast=1.0, mask=None, data_limits=(-0.5, 0.5))[source]

Map color data (displacement) on shaded relief.

marginals

plotting.marginals.correlation_plot(mtrace, varnames=None, figsize=None, cmap=None, grid=200, point=None, point_style='.', point_color='white', point_size='8')[source]

Plot 2d marginals (with kernel density estimation) showing the correlations of the model parameters.

Parameters:
  • mtrace (base.MultiTrace) – Multitrace instance containing the sampling results

  • varnames (list of variable names) – Variables to be plotted; if None all variables are plotted

  • figsize (figure size tuple) – If None, size is (12, num of variables * 2) inch

  • cmap (matplotlib colormap) –

  • grid (resolution of kernel density estimation) –

  • point (dict) – Dictionary of variable name / value to be overplotted as marker to the posteriors e.g. mean of posteriors, true values of a simulation

  • point_style (str) – style of marker according to matplotlib conventions

  • point_color (str or tuple of 3) – color according to matplotlib convention

  • point_size (str) – marker size according to matplotlib conventions

Returns:

  • fig (figure object)

  • axs (subplot axis handles)

plotting.marginals.correlation_plot_hist(mtrace, varnames=None, figsize=None, hist_color=None, cmap=None, grid=50, chains=None, ntickmarks=2, point=None, point_style='.', point_color='red', point_size=4, alpha=0.35, unify=True)[source]

Plot 2d marginals (with kernel density estimation) showing the correlations of the model parameters. The main diagonal shows the parameter histograms.

Parameters:
  • mtrace (pymc.backends.base.MultiTrace) – Multitrace instance containing the sampling results

  • varnames (list of variable names) – Variables to be plotted; if None all variables are plotted

  • figsize (figure size tuple) – If None, size is (12, num of variables * 2) inch

  • cmap (matplotlib colormap) –

  • hist_color (str or tuple of 3) – color according to matplotlib convention

  • grid (resolution of kernel density estimation) –

  • chains (int or list of ints) – chain indexes to select from the trace

  • ntickmarks (int) – number of ticks at the axis labels

  • point (dict) – Dictionary of variable name / value to be overplotted as marker to the posteriors e.g. mean of posteriors, true values of a simulation

  • point_style (str) – style of marker according to matplotlib conventions

  • point_color (str or tuple of 3) – color according to matplotlib convention

  • point_size (str) – marker size according to matplotlib conventions

  • unify (bool) – If true axis units that belong to one group e.g. [km] will have common axis increments

Returns:

  • fig (figure object)

  • axs (subplot axis handles)

plotting.marginals.draw_correlation_hist(problem, plot_options)[source]

Draw parameter correlation plot and histograms for a model result ensemble. Only feasible for ‘geometry’ problem.

plotting.marginals.draw_posteriors(problem, plot_options)[source]

Identify which stage is the last complete stage and plot posteriors.

plotting.marginals.traceplot(trace, varnames=None, lines={}, chains=None, combined=False, grid=False, varbins=None, nbins=40, color=None, source_idxs=None, alpha=0.35, priors=None, prior_alpha=1, prior_style='--', posterior=None, plot_style='kde', prior_bounds={}, unify=True, qlist=[0.1, 99.9], kwargs={})[source]

Plots posterior pdfs as histograms from multiple mtrace objects.

Modified from pymc3.

Parameters:
  • trace (result of MCMC run) –

  • varnames (list of variable names) – Variables to be plotted; if None all variables are plotted

  • posterior (str) – To mark posterior value in distribution ‘max’, ‘min’, ‘mean’, ‘all’

  • lines (dict) – Dictionary of variable name / value to be overplotted as vertical lines to the posteriors and horizontal lines on sample values e.g. mean of posteriors, true values of a simulation

  • chains (int or list of ints) – chain indexes to select from the trace

  • combined (bool) – Flag for combining multiple chains into a single chain. If False (default), chains will be plotted separately.

  • source_idxs (list) – array like, indexes to sources to plot marginals

  • grid (bool) – Flag for adding gridlines to histogram. Defaults to False.

  • varbins (list of arrays) – List containing the binning arrays for the variables, if None they will be created.

  • nbins (int) – Number of bins for each histogram

  • color (tuple) – mpl color tuple

  • alpha (float) – Alpha value for plot line. Defaults to 0.35.

  • unify (bool) – If true axis units that belong to one group e.g. [km] will have common axis increments

  • kwargs (dict) – for histplot op

  • qlist (list) – of quantiles to plot. Default: [0.1, 99.9] (almost all)

Returns:

ax

Return type:

matplotlib axes

plotting.marginals.unify_tick_intervals(axs, varnames, ntickmarks_max=5, axis='x')[source]

Take figure axes objects and determine unit ranges between common unit classes (see utility.grouped_vars). Assures that the number of increments is not larger than ntickmarks_max and will thus overwrite the tick intervals of the axes.

Returns:

dict with types_sets as keys and (min_range, max_range) as values

Return type:

dict

The inputf Module

inputf.load_SAR_data(datadir, names)[source]

Load SAR data from the given directory and filenames. Returns Diff_IFG objects.

inputf.load_and_blacklist_gnss(datadir, filename, blacklist, campaign=False, components=['north', 'east', 'up'])[source]

Load ascii GNSS data from GLOBK, apply blacklist and initialise targets.

Parameters:
  • datadir (string) – of path to the directory

  • filename (string) – filename to load

  • blacklist (list) – of strings with station names to blacklist

  • campaign (boolean) – if True return gnss.GNSSCampaign otherwise list of heart.GNSSCompoundComponent

  • components (list) – of strings (‘north’, ‘east’, ‘up’) for displacement components to return

inputf.load_and_blacklist_stations(datadir, blacklist)[source]

Load stations from autokiwi output and apply blacklist

inputf.load_ascii_gnss_globk(filedir, filename, components=['east', 'north', 'up'])[source]

Load ascii file with columns: station name, Lon, Lat, ve, vn, vu, sigma_ve, sigma_vn, sigma_vu; location in [decimal deg], measurements in [mm/yr].

Return type:

pyrocko.model.gnss.GNSSCampaign

inputf.load_data_traces(datadir, stations, load_channels=[], name_prefix=None, name_suffix=None, data_format='mseed', divider='-', convert=False, no_network=False)[source]

Load data traces for the given stations from datadir.

inputf.load_kite_scenes(datadir, names)[source]

Load SAR data from the kite format.

inputf.load_obspy_data(datadir)[source]

Load data from the directory through obspy and convert to pyrocko objects.

Parameters:

datadir (string) – absolute path to the data directory

Return type:

data_traces, stations

inputf.rotate_traces_and_stations(datatraces, stations, event)[source]

Rotate traces and stations into RTZ with respect to the event. Updates channels of stations in place!

Parameters:
  • datatraces (list) – of pyrocko.trace.Trace

  • stations (list) – of pyrocko.model.Station

  • event (pyrocko.model.Event) –

Return type:

traces rotated to RTZ

inputf.setup_stations(lats, lons, names, networks, event, rotate=True)[source]

Setup station objects, based on station coordinates and reference event.

Parameters:
  • lats (num.ndarray) – of station location latitude

  • lons (num.ndarray) – of station location longitude

  • names (list) – of strings of station names

  • networks (list) – of strings of network names for each station

  • event (pyrocko.model.Event) –

Returns:

stations – of pyrocko.model.Station

Return type:

list