API Reference

The heart Module

Core module with functions to calculate Green's Functions and synthetics. Also contains the main classes for setup-specific parameters.

class heart.ArrivalTaper(**kwargs)[source]

Cosine arrival Taper.

a

float, default: -15.0

start of fading in; [s] w.r.t. phase arrival

b

float, default: -10.0

end of fading in; [s] w.r.t. phase arrival

c

float, default: 50.0

start of fading out; [s] w.r.t. phase arrival

d

float, default: 55.0

end of fading out; [s] w.r.t. phase arrival

check_sample_rate_consistency(deltat)[source]

Check if taper durations are consistent with GF sample rate.

get_pyrocko_taper(arrival_time)[source]

Get pyrocko CosTaper object that may be applied to trace operations.

Parameters

arrival_time (float) – [s] of the reference time around which the taper will be applied

Returns

Return type

pyrocko.trace.CosTaper

nsamples(sample_rate, chop_bounds=['b', 'c'])[source]

Returns the number of samples a tapered trace would have given its sample rate and chop_bounds

Parameters

sample_rate (float) –
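
As a quick usage sketch (names as documented above; values are examples), a taper spanning 15 s before to 55 s after a phase arrival:

    from beat.heart import ArrivalTaper

    taper = ArrivalTaper(a=-15.0, b=-10.0, c=50.0, d=55.0)
    # CosTaper centered around an arrival at t = 100 s (absolute trace time)
    cos_taper = taper.get_pyrocko_taper(arrival_time=100.0)
    # number of samples between the 'b' and 'c' markers at 2 Hz
    n = taper.nsamples(sample_rate=2.0, chop_bounds=['b', 'c'])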

class heart.BandstopFilter(**kwargs)[source]

Filter object defining suppressed frequency range of traces after time-domain filtering.

lower_corner

float, default: 0.12

Lower corner frequency

upper_corner

float, default: 0.25

Upper corner frequency

order

int, default: 4

order of filter, the higher the steeper

exception heart.CollectionError[source]
class heart.Covariance(**kwargs)[source]

Covariance of an observation. Holds data and model prediction uncertainties for one observation object.

data

numpy.ndarray (pyrocko.guts_array.Array), optional

Data covariance matrix

pred_g

numpy.ndarray (pyrocko.guts_array.Array), optional

Model prediction covariance matrix, fault geometry

pred_v

numpy.ndarray (pyrocko.guts_array.Array), optional

Model prediction covariance matrix, velocity model

check_matrix_init(cov_mat_str='')[source]

Check if the matrix is initialised; if not, set it to zeros of the size of the data.

property chol

Cholesky factor of the sum of ALL uncertainty covariance matrices.

property chol_inverse

Upper-right Cholesky factor of the inverse of the sum of ALL uncertainty covariance matrices. To be used as weight in the optimization.

Notes

Uses QR factorization on the inverse of the upper right Cholesky decomposed covariance matrix to obtain a proxy for the Cholesky decomposition of the inverse of the covariance matrix in case the inverse of the covariance matrix is not positive definite.

From: https://math.stackexchange.com/questions/3838462/cholesky-factors-of-covariance-and-precision-matrix/3842840#3842840

Returns

Return type

upper triangle of the Cholesky decomposition

property inverse

Add and invert ALL uncertainty covariance Matrices.

property inverse_d

Invert DATA covariance Matrix.

property inverse_p

Add and invert different MODEL uncertainty covariance Matrices.

property log_pdet

Calculate the log of the determinant of the total matrix.

update_slog_pdet()[source]

Update shared variable with current log_norm_factor (lnf) (for theano models).
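
A minimal sketch of how a Covariance might serve as a weight, assuming a small data covariance matrix (pred_g and pred_v are zero-initialised via check_matrix_init):

    import numpy as np
    from beat.heart import Covariance

    cov = Covariance(data=np.eye(3) * 0.25)
    cov.check_matrix_init('pred_g')   # fill missing matrices with zeros
    cov.check_matrix_init('pred_v')

    w = cov.chol_inverse    # upper-triangular weight for the optimization
    lnf = cov.log_pdet      # log determinant of the summed covariance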

class heart.DataWaveformCollection(stations, waveforms=None, target_deltat=None)[source]

Collection of available datasets, data-weights, waveforms and DynamicTargets used to create synthetics.

Used to return mappings of the waveforms of interest to the involved data, weights and synthetics-generating objects.

Parameters
  • stations (List of pyrocko.model.Station) – List of station objects that are contained in the dataset

  • waveforms (list) – of strings of tabulated phases that are to be used for misfit calculation

  • target_deltat (float) – sampling interval the data is going to be downsampled to

class heart.DiffIFG(**kwargs)[source]

Differential Interferogram class as geodetic target for the calculation of synthetics and container for SAR data.

unwrapped_phase

numpy.ndarray (pyrocko.guts_array.Array), optional

coherence

numpy.ndarray (pyrocko.guts_array.Array), optional

reference_point

tuple of 2 float objects, optional

reference_value

float, optional, default: 0.0

displacement

numpy.ndarray (pyrocko.guts_array.Array), optional

covariance

Covariance, optional

Covariance that holds data and model prediction covariance matrices

odw

numpy.ndarray (pyrocko.guts_array.Array), optional

Overlapping data weights, additional weight factor to the dataset for overlaps with other datasets

mask

numpy.ndarray (pyrocko.guts_array.Array), optional

Mask values for Euler pole region determination. Click polygon mask in kite!

get_data_mask(corr_conf)[source]

Extract the mask from the kite scene and return the mask indexes.

class heart.DynamicTarget(**kwargs)[source]

Undocumented.

response

pyrocko.trace.PoleZeroResponse, optional

update_target_times(sources=None, taperer=None)[source]

Update the target attributes tmin and tmax to do the stacking only in this interval. Adds twice the taper fade-in time to each taper side.

Parameters
  • sources (list) – containing pyrocko.gf.seismosizer.Source objects

  • taperer (pyrocko.trace.CosTaper) –

class heart.Filter(**kwargs)[source]

Filter object defining frequency range of traces after time-domain filtering.

lower_corner

float, default: 0.001

Lower corner frequency

upper_corner

float, default: 0.1

Upper corner frequency

order

int, default: 4

order of filter, the higher the steeper

stepwise

bool, default: True

If set to true the bandpass filter is done in two consecutive steps: first high-pass, then low-pass.
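
For illustration (values are examples only), a 10-100 s bandpass for teleseismic body waves could be configured as:

    from beat.heart import Filter

    bandpass = Filter(
        lower_corner=0.01,   # [Hz], 100 s period
        upper_corner=0.1,    # [Hz], 10 s period
        order=4,
        stepwise=True)       # high-pass first, then low-pass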

class heart.FilterBase(**kwargs)[source]

Undocumented.

class heart.FrequencyFilter(**kwargs)[source]

Undocumented.

freqlimits

tuple of 4 float objects, default: (0.005, 0.006, 166.0, 200.0)

Corner frequencies 4-tuple [Hz] for frequency domain filter.

tfade

float, default: 20.0

Rise/fall time in seconds of the taper applied in the time domain at both ends of the trace.

class heart.GNSSCompoundComponent(**kwargs)[source]

Collects many GNSS components and merges them into arrays to make synthetics generation more efficient.

los_vector

numpy.ndarray (pyrocko.guts_array.Array), optional

displacement

numpy.ndarray (pyrocko.guts_array.Array), optional

component

str, default: 'east'

direction of measurement, north/east/up

stations

list of pyrocko.model.gnss.GNSSStation objects, default: []

covariance

Covariance, optional

Covariance that holds data and model prediction covariance matrices

odw

numpy.ndarray (pyrocko.guts_array.Array), optional

Overlapping data weights, additional weight factor to the dataset for overlaps with other datasets

class heart.GeodeticDataset(**kwargs)[source]

Overall geodetic dataset class.

typ

str, default: 'SAR'

Type of geodetic data, e.g. SAR, GNSS, …

name

str, default: 'A'

e.g. GNSS campaign name or InSAR satellite track

get_corrections(hierarchicals, point=None)[source]

Needs to be specified on inherited dataset classes.

setup_corrections(event, correction_configs)[source]

Initialise geodetic dataset corrections such as Ramps or Euler Poles.

update_local_coords(loc)[source]

Calculate local coordinates with respect to given Location.

Parameters

loc (pyrocko.gf.meta.Location) –

Returns

Return type

numpy.ndarray (n_points, 3)

class heart.GeodeticResult(**kwargs)[source]

Result object assembling different geodetic data.

point

ResultPoint, default: ResultPoint()

processed_obs

numpy.ndarray (pyrocko.guts_array.Array), optional

processed_syn

numpy.ndarray (pyrocko.guts_array.Array), optional

processed_res

numpy.ndarray (pyrocko.guts_array.Array), optional

llk

float, optional, default: 0.0

class heart.IFG(**kwargs)[source]

Interferogram class as a dataset in the optimization.

master

str, optional

Acquisition time of master image YYYY-MM-DD

slave

str, optional

Acquisition time of slave image YYYY-MM-DD

amplitude

numpy.ndarray (pyrocko.guts_array.Array), optional

wrapped_phase

numpy.ndarray (pyrocko.guts_array.Array), optional

incidence

numpy.ndarray (pyrocko.guts_array.Array), optional

heading

numpy.ndarray (pyrocko.guts_array.Array), optional

los_vector

numpy.ndarray (pyrocko.guts_array.Array), optional

satellite

str, default: 'Envisat'

update_los_vector(force=False)[source]

Calculate the LOS vector from the given incidence and heading angle attributes.

Returns

Return type

numpy.ndarray (n_points, 3)

class heart.Parameter(**kwargs)[source]

Optimization parameter determines the bounds of the search space.

name

str, default: 'depth'

form

str, default: 'Uniform'

Type of prior distribution to use. Options: “Uniform”, …

lower

numpy.ndarray (pyrocko.guts_array.Array), default: array([0., 0.])

upper

numpy.ndarray (pyrocko.guts_array.Array), default: array([1., 1.])

testvalue

numpy.ndarray (pyrocko.guts_array.Array), default: array([0.5, 0.5])

random(shape=None)[source]

Create random samples within the parameter bounds.

Parameters

shape (int or list of int) – number of draws from the distribution

Returns

Return type

numpy.ndarray of size (n, m)
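
A sketch of defining a two-dimensional uniform prior and drawing from it (bounds are example values):

    import numpy as np
    from beat.heart import Parameter

    depth = Parameter(
        name='depth',
        form='Uniform',
        lower=np.array([0.0, 0.0]),
        upper=np.array([30.0, 30.0]),
        testvalue=np.array([10.0, 10.0]))

    draws = depth.random(10)   # random samples within [lower, upper]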

exception heart.RayPathError[source]
class heart.ReferenceLocation(**kwargs)[source]

Reference Location for Green’s Function store calculations!

station

str, default: 'Store_Name'

This mimics the station.station attribute which determines the store name!

class heart.ResultPoint(**kwargs)[source]

Containing point in solution space.

post_llk

str, optional

describes which posterior likelihood value the point belongs to

point

dict of numpy.ndarray (pyrocko.guts_array.Array) objects, default: {}

Point in Solution space for which result is produced.

variance_reductions

dict of float objects, optional, default: {}

Variance reductions for each dataset.

class heart.ResultReport(**kwargs)[source]

Undocumented.

solution_point

dict of pyrocko.guts.Any objects, default: {}

result point

post_llk

str (pyrocko.guts.StringChoice), default: 'max'

Value of point of the likelihood distribution.

mean_point

dict of pyrocko.guts.Any objects, optional

mean of distributions, used for model prediction covariance calculation.

class heart.SeismicDataset(network='', station='STA', location='', channel='', tmin=0.0, tmax=None, deltat=1.0, ydata=None, mtime=None, meta=None)[source]

Extension to pyrocko.trace.Trace to have Covariance as an attribute.

class heart.SeismicResult(**kwargs)[source]

Result object assembling different traces of misfit.

point

ResultPoint, default: ResultPoint()

processed_obs

Trace, optional

arrival_taper

pyrocko.trace.Taper, optional

llk

float, optional, default: 0.0

taper

pyrocko.trace.Taper, optional

source_contributions

list of Trace objects, default: []

synthetics of individual source contributions.

exception heart.StackingError[source]
class heart.StrainRateTensor(**kwargs)[source]

Undocumented.

exx

float, default: 10

eyy

float, default: 0

exy

float, default: 0

rotation

float, default: 0

property azimuth

Direction of eps2 compared towards North [deg].

property eps1

Maximum extension eigenvalue of strain rate tensor, extension positive.

property eps2

Maximum compression eigenvalue of strain rate tensor, extension positive.

class heart.Trace(**kwargs)[source]

Undocumented.

class heart.WaveformMapping(name, stations, weights=None, channels=['Z'], datasets=[], targets=[], deltat=None)[source]

Maps synthetic waveform parameters to targets, stations and data

Parameters
  • name (str) – name of the waveform according to travel time tables

  • stations (list) – of pyrocko.model.Station

  • weights (list) – of theano.shared variables

  • channels (list) – of channel names valid for all the stations of this wavemap

  • datasets (list) – of heart.Dataset inherited from pyrocko.trace.Trace

  • targets (list) – of pyrocko.gf.target.Target

get_highest_frequency()[source]

Loop over filterers and return highest frequency.

get_station_names()[source]

Returns list of strings of station names

property hypersize

Return the size of the related hyperparameters as an integer.

prepare_data(source, engine, outmode='array', chop_bounds=['b', 'c'])[source]

Taper, filter data traces according to given reference event. Traces are concatenated to one single array.

station_weeding(event, distances, blacklist=[])[source]

Weed stations and related objects based on distances and blacklist. Works only a single time after init!

heart.check_problem_stores(problem, datatypes)[source]

Check GF stores for empty traces.

heart.choose_backend(fomosto_config, code, source_model, receiver_model, gf_directory='qseis2d_green')[source]

Get backend related config.

heart.concatenate_datasets(datasets)[source]

Concatenate datasets to single arrays

Parameters

datasets (list) – of GeodeticDataset

Returns

  • datasets (1d numpy.ndarray, n x 1)

  • los_vectors (2d numpy.ndarray, n x 3)

  • odws (1d numpy.ndarray, n x 1)

  • Bij (utility.ListToArrayBijection)

heart.ensemble_earthmodel(ref_earthmod, num_vary=10, error_depth=0.1, error_velocities=0.1, depth_limit_variation=600000.0)[source]

Create an ensemble of earthmodels that vary around a given input earth model by a Gaussian of 2 sigma (in percent, 0.1 = 10%) for the depth layers and for the P- and S-wave velocities. Vp/Vs is kept unchanged.

Parameters
  • ref_earthmod (pyrocko.cake.LayeredModel) – Reference earthmodel defining layers, depth, velocities, densities

  • num_vary (scalar, int) – Number of variation realisations

  • error_depth (scalar, float) – 3 sigma error in percent of the depth for the respective layers

  • error_velocities (scalar, float) – 3 sigma error in percent of the velocities for the respective layers

  • depth_limit_variation (scalar, float) – depth threshold [m]; layers deeper than this are not varied

Returns

Return type

List of Varied Earthmodels pyrocko.cake.LayeredModel
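
A usage sketch, assuming the built-in model name below is available via pyrocko.cake.builtin_models():

    from pyrocko import cake
    from beat.heart import ensemble_earthmodel

    ref = cake.load_model('ak135-f-continental.m')
    models = ensemble_earthmodel(
        ref, num_vary=5, error_depth=0.1, error_velocities=0.1,
        depth_limit_variation=600e3)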

heart.geo_construct_gf(event, geodetic_config, crust_ind=0, execute=True, force=False)[source]

Calculate geodetic Green's Functions (GFs) and create a fomosto 'GF store' that is used repeatedly later on to calculate the synthetic displacements. Enables various different source geometries.

Parameters
  • event (pyrocko.model.Event) – The event is used as a reference point for all the calculations. According to its location the earth model is built.

  • geodetic_config (config.GeodeticConfig) –

  • crust_ind (int) – Index to assign to the Green's Function store

  • execute (boolean) – Flag to execute the calculation; if False only the setup is tested

  • force (boolean) – Flag to overwrite existing GF stores

heart.geo_construct_gf_psgrn(event, geodetic_config, crust_ind=0, execute=True, force=False)[source]

Calculate geodetic Green's Functions (GFs) and create a repository 'store' that is used later on repeatedly to calculate the synthetic displacements.

Parameters
  • event (pyrocko.model.Event) – The event is used as a reference point for all the calculations. According to its location the earth model is built.

  • geodetic_config (config.GeodeticConfig) –

  • crust_ind (int) – Index to assign to the Green's Function store

  • execute (boolean) – Flag to execute the calculation; if False only the setup is tested

  • force (boolean) – Flag to overwrite existing GF stores

heart.geo_layer_synthetics_pscmp(store_superdir, crust_ind, lons, lats, sources, keep_tmp=False, outmode='data')[source]

Calculate synthetic displacements for given sources and observation points on the earth's surface, based on a Green's Function database.

Parameters
  • store_superdir (str) – main path to the directory containing the different Green's Function stores

  • crust_ind (int) – index of the Green's Function store to use

  • lons (List of floats) – Longitudes [decimal deg] of observation points

  • lats (List of floats) – Latitudes [decimal deg] of observation points

  • sources (List of pscmp.PsCmpRectangularSource) – Sources to calculate synthetics for

  • keep_tmp (boolean) – Flag to keep directories (in ‘/tmp’) where calculated synthetics are stored.

  • outmode (str) – determines type of output

Returns

Return type

numpy.ndarray (n_observations; ux-North, uy-East, uz-Down)

heart.geo_synthetics(engine, targets, sources, outmode='stacked_array', plot=False, nprocs=1)[source]

Calculate synthetic displacements for a given static fomosto Green's Function database for sources and targets on the earth's surface.

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (list) – containing pyrocko.gf.seismosizer.Source objects; the reference source is the first in the list!

  • targets (list) – containing pyrocko.gf.seismosizer.Target Objects

  • plot (boolean) – flag for looking at synthetics - not implemented yet

  • nprocs (int) – number of processors to use for the synthetics calculation (currently no effect!)

  • outmode (string) – output format of synthetics; can be: 'array', 'arrays', 'stacked_array', 'stacked_arrays'

Returns

depends on outmode:

  • 'stacked_array': numpy.ndarray (n_observations; ux-North, uy-East, uz-Down)

  • 'stacked_arrays': list of numpy.ndarray (target.samples; ux-North, uy-East, uz-Down)
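
A sketch for the 'stacked_array' mode; engine, targets and sources are assumed to be set up as described above:

    from beat.heart import geo_synthetics

    # (n_observations, 3): ux-North, uy-East, uz-Down
    disp = geo_synthetics(engine, targets, sources, outmode='stacked_array')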

heart.get_fomosto_baseconfig(gfconfig, event, station, waveforms, crust_ind)[source]

Initialise fomosto config.

Parameters
  • gfconfig (config.NonlinearGFConfig) –

  • event (pyrocko.model.Event) – The event is used as a reference point for all the calculations. According to its location the earth model is built.

  • station (pyrocko.model.Station or heart.ReferenceLocation) –

  • waveforms (List of str) – Waveforms to calculate GFs for, determines the length of traces

  • crust_ind (int) – Index to assign to the Green's Function store

heart.get_phase_arrival_time(engine, source, target, wavename=None, snap=True)[source]

Get the arrival time from the Green's Function store for a respective pyrocko.gf.seismosizer.Target, pyrocko.gf.meta.Location pair.

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • source (pyrocko.gf.meta.Location) – can therefore be a pyrocko.gf.seismosizer.Source or pyrocko.model.Event

  • target (pyrocko.gf.seismosizer.Target) –

  • wavename (string) – name of the tabulated phase that determines the phase arrival; needs to be the Id of a tabulated phase in the respective target.store. If None, the first tabulated phase is used.

  • snap (bool) – if True, force the arrival time onto discrete samples of the store

Returns

Return type

scalar, float of the arrival time of the wave
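
A hypothetical sketch (store paths and store_id are placeholders) querying the first-arriving tabulated phase for an event-target pair:

    from pyrocko import model
    from pyrocko.gf import LocalEngine, Target
    from beat.heart import get_phase_arrival_time

    engine = LocalEngine(store_superdirs=['./gf_stores'])
    event = model.Event(lat=40.0, lon=30.0, depth=10e3, time=0.0)
    target = Target(lat=42.0, lon=33.0, store_id='my_store')

    t_arr = get_phase_arrival_time(engine, event, target, wavename='any_P')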

heart.get_phase_taperer(engine, source, wavename, target, arrival_taper, arrival_time=nan)[source]

Create a phase taperer according to synthetic travel times of the source-target pair and return a pyrocko.trace.CosTaper according to the defined arrival_taper times.

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • source (pyrocko.gf.meta.Location) – can therefore be a pyrocko.gf.seismosizer.Source or pyrocko.model.Event

  • wavename (string) – of the tabulated phase that determines the phase arrival

  • target (pyrocko.gf.seismosizer.Target) –

  • arrival_taper (ArrivalTaper) –

  • arrival_time (float, optional) – shift on arrival time

Returns

Return type

pyrocko.trace.CosTaper

heart.get_ramp_displacement(locx, locy, azimuth_ramp, range_ramp, offset)[source]

Get synthetic residual plane in azimuth and range direction of the satellite.

Parameters
  • locx (shared array-like numpy.ndarray) – local coordinates [km] in east direction

  • locy (shared array-like numpy.ndarray) – local coordinates [km] in north direction

  • azimuth_ramp (theano.tensor.Tensor or numpy.ndarray) – vector with ramp parameter in azimuth

  • range_ramp (theano.tensor.Tensor or numpy.ndarray) – vector with ramp parameter in range

  • offset (theano.tensor.Tensor or numpy.ndarray) – scalar of offset in [m]

heart.get_slowness_taper(fomosto_config, velocity_model, distances)[source]

Calculate slowness taper for backends that determine the wavefield based on the velocity model.

Parameters
  • fomosto_config (pyrocko.meta.Config) –

  • velocity_model (pyrocko.cake.LayeredModel) –

  • distances (tuple) – minimum and maximum distance [deg]

Returns

Return type

tuple of slownesses

heart.get_velocity_model(location, earth_model_name, crust_ind=0, gf_config=None, custom_velocity_model=None)[source]

Get the velocity model at the specified location; combines given or crustal models with the global model.

Parameters
  • location (pyrocko.meta.Location) –

  • earth_model_name (str) – Name of the base earth model to be used, check pyrocko.cake.builtin_models() for alternatives, default ak135 with medium resolution

  • crust_ind (int) – Index to assign to the Green's Function store; 0 is the reference store, indexes > 0 use the reference model and vary its parameters by a Gaussian

  • gf_config (beat.config.GFConfig) –

  • custom_velocity_model (pyrocko.cake.LayeredModel) –

Returns

Return type

pyrocko.cake.LayeredModel

heart.init_datahandler(seismic_config, seismic_data_path='./', responses_path=None)[source]

Initialise datahandler.

Parameters
  • seismic_config (config.SeismicConfig) –

  • seismic_data_path (str) – absolute path to the directory of the seismic data

Returns

datahandler

Return type

DataWaveformCollection

heart.init_geodetic_targets(datasets, earth_model_name='ak135-f-average.m', interpolation='nearest_neighbor', crust_inds=[0], sample_rate=0.0)[source]

Initiate a list of Static target objects given a list of indexes to the respective GF store velocity model variation index (crust_inds).

Parameters
  • datasets (list) – of heart.GeodeticDataset for which the targets are being initialised

  • earth_model_name (str) – Name of the earth model that has been used for GF calculation.

  • sample_rate (scalar, float) – sample rate [Hz] of the Greens Functions to use

  • crust_inds (List of int) – Indexes of different velocity model realisations, 0 - reference model

  • interpolation (str) – Method of interpolation for the Greens Functions, can be ‘multilinear’ or ‘nearest_neighbor’

Returns

Return type

List of pyrocko.gf.targets.StaticTarget

heart.init_seismic_targets(stations, earth_model_name='ak135-f-average.m', channels=['T', 'Z'], sample_rate=1.0, crust_inds=[0], interpolation='multilinear', reference_location=None)[source]

Initiate a list of target objects given a list of indexes to the respective GF store velocity model variation index (crust_inds).

Parameters
  • stations (List of pyrocko.model.Station) – List of station objects for which the targets are being initialised

  • earth_model_name (str) – Name of the earth model that has been used for GF calculation.

  • channels (List of str) – Components of the traces to be optimized for. If rotated: T - transversal, Z - vertical, R - radial. If not rotated: E - East, N - North, U - Up (vertical).

  • sample_rate (scalar, float) – sample rate [Hz] of the Greens Functions to use

  • crust_inds (List of int) – Indexes of different velocity model realisations, 0 - reference model

  • interpolation (str) – Method of interpolation for the Greens Functions, can be ‘multilinear’ or ‘nearest_neighbor’

  • reference_location (ReferenceLocation or pyrocko.model.Station) – if given, targets are initialised with this reference location

Returns

Return type

List of DynamicTarget

heart.init_wavemap(waveformfit_config, datahandler=None, event=None, mapnumber=0)[source]

Initialise the wavemap, which sets targets, datasets and stations into relation to the seismic phase of interest and allows individual specifications.

Parameters
  • waveformfit_config (config.WaveformFitConfig) –

  • datahandler (DataWaveformCollection) –

  • event (pyrocko.model.Event) –

  • mapnumber (int) –

Returns

wmap

Return type

WaveformMapping
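
A sketch of the typical initialisation chain, assuming a filled config.SeismicConfig sc and a reference event (paths are placeholders):

    from beat.heart import init_datahandler, init_wavemap

    datahandler = init_datahandler(sc, seismic_data_path='./seismic_data')
    wmap = init_wavemap(
        sc.waveforms[0], datahandler=datahandler, event=event, mapnumber=0)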

heart.log_determinant(A, inverse=False)[source]

Calculate the natural logarithm of the determinant of the given matrix according to the properties of a triangular matrix.

Parameters
  • A (n x n numpy.ndarray) –

  • inverse (boolean) –

If true, calculates the log determinant of the inverse of the Cholesky decomposition, which is equivalent to taking the determinant of the inverse of the matrix.

    L.T * L = R (inverse=False); L^-1 * (L^-1).T = R^-1 (inverse=True)

Returns

Return type

float logarithm of the determinant of the input Matrix A
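
A toy numeric check, assuming A is a symmetric positive-definite matrix handled internally via its triangular factorisation:

    import numpy as np
    from beat.heart import log_determinant

    A = np.diag([2.0, 2.0, 2.0])              # |A| = 8
    print(log_determinant(A))                 # ~ log(8) = 2.079
    print(log_determinant(A, inverse=True))   # ~ -2.079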

heart.post_process_trace(trace, taper, filterer, taper_tolerance_factor=0.0, deltat=None, outmode=None, chop_bounds=['b', 'c'], transfer_function=None)[source]

Taper, filter and then chop one trace in place.

Parameters
  • trace (SeismicDataset) –

  • taper (pyrocko.trace.Taper) –

  • filterer (list) – of Filterer

  • taper_tolerance_factor (float) – default 0 cuts exactly at the taper edges; taper.fadein times this factor determines the added tolerance

  • chop_bounds (str) – determines where to chop the trace on the taper attributes; may be a combination of [a, b, c, d]

heart.proto2zpk(magnification, damping, period, quantity='displacement')[source]

Convert magnification, damping and period of a station to poles and zeros.

Parameters
  • magnification (float) – gain of station

  • damping (float) – in []

  • period (float) – in [s]

  • quantity (string) – in which related data are recorded

Returns

Return type

lists of zeros, poles and gain

heart.seis_construct_gf(stations, event, seismic_config, crust_ind=0, execute=False, force=False)[source]

Calculate seismic Green's Functions (GFs) and create a repository 'store' that is used later on repeatedly to calculate the synthetic waveforms.

Parameters
  • stations (list) – of pyrocko.model.Station objects that define the distances from the event for which the GFs are calculated

  • event (pyrocko.model.Event) – The event is used as a reference point for all the calculations. According to its location the earth model is built.

  • seismic_config (config.SeismicConfig) –

  • crust_ind (int) – Index to assign to the Green's Function store; 0 is the reference store, indexes > 0 use the reference model and vary its parameters by a Gaussian

  • execute (boolean) – Flag to execute the calculation; if False only the setup is tested

  • force (boolean) – Flag to overwrite existing GF stores

heart.seis_synthetics(engine, sources, targets, arrival_taper=None, wavename='any_P', filterer=None, reference_taperer=None, plot=False, nprocs=1, outmode='array', pre_stack_cut=False, taper_tolerance_factor=0.0, arrival_times=None, chop_bounds=['b', 'c'])[source]

Calculate synthetic seismograms for combinations of targets and sources; filtering and tapering afterwards (filterer), tapering according to arrival_taper around the P- or S-wave arrival. If reference_taperer is given, that taper is always used.

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (list) – containing pyrocko.gf.seismosizer.Source objects; the reference source is the first in the list!

  • targets (list) – containing pyrocko.gf.seismosizer.Target Objects

  • arrival_taper (ArrivalTaper) –

  • wavename (string) – of the tabulated phase that determines the phase arrival

  • filterer (Filterer) –

  • plot (boolean) – flag for looking at traces

  • nprocs (int) – number of processors to use for the synthetics calculation (currently no effect!)

  • outmode (string) – output format of synthetics; can be 'array', 'stacked_traces', 'data' (returns unstacked traces including post-processing) or 'tapered_data' (returns unstacked but tapered traces)

  • pre_stack_cut (boolean) – flag to decide whether, prior to stacking, the Green's Function traces should be cut according to the phase arrival time and the defined taper

  • taper_tolerance_factor (float) – tolerance to chop traces around taper.a and taper.d

  • arrival_times (None or numpy.ndarray) – of the phase to apply the taper to; if None the theoretical arrival from ray tracing is used

  • chop_bounds (list of str) – determines where to chop the trace on the taper attributes; may be a combination of [a, b, c, d]

  • transfer_functions (list) – of transfer functions to convolve the synthetics with

Returns

depends on outmode
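
A sketch for tapered, filtered synthetics around the P arrival; engine, sources and targets as prepared above. The return structure depends on outmode:

    from beat.heart import ArrivalTaper, Filter, seis_synthetics

    taper = ArrivalTaper(a=-15.0, b=-10.0, c=50.0, d=55.0)
    result = seis_synthetics(
        engine, sources, targets,
        arrival_taper=taper,
        wavename='any_P',
        filterer=[Filter(lower_corner=0.01, upper_corner=0.1)],
        outmode='array')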

heart.taper_filter_traces(traces, arrival_taper=None, filterer=None, deltat=None, arrival_times=None, plot=False, outmode='array', taper_tolerance_factor=0.0, chop_bounds=['b', 'c'])[source]

Taper and filter data_traces according to given taper and filterers. Tapering will start at the given tmin.

Parameters
  • traces (List) – containing pyrocko.trace.Trace objects

  • arrival_taper (ArrivalTaper) –

  • filterer (list) – of Filterer

  • deltat (float) – if set data is downsampled to that sampling interval

  • arrival_times (list or numpy.ndarray) – containing the start times [s] since 1 January 1970 at which to start tapering

  • outmode (str) – defines the output structure, options: “stacked_traces”, “array”, “data”

  • taper_tolerance_factor (float) – tolerance to chop traces around taper.a and taper.d

  • chop_bounds (list of len 2) – of taper attributes a, b, c, or d

Returns

with tapered and filtered data traces; rows are different traces, columns are temporal values

Return type

numpy.ndarray

heart.vary_model(earthmod, error_depth=0.1, error_velocities=0.1, depth_limit_variation=600000.0)[source]

Vary depths and velocities in the given source model by Gaussians with given 2-sigma errors [percent]. Ensures increasing velocity with depth. Stops varying the input model at the given depth_limit_variation [m]. Mantle discontinuity uncertainties are hardcoded based on Mooney et al. 1981 and Woodward et al. 1991.

Parameters
  • earthmod (pyrocko.cake.LayeredModel) – Earthmodel defining layers, depth, velocities, densities

  • error_depth (scalar, float) – 2 sigma error in percent of the depth for the respective layers

  • error_velocities (scalar, float) – 2 sigma error in percent of the velocities for the respective layers

  • depth_limit_variation (scalar, float) – depth threshold [m]; layers deeper than this are not varied

Returns

  • Varied Earthmodel (pyrocko.cake.LayeredModel)

  • Cost (int) – Counts repetitions of cycles to ensure increasing layer velocity; unlikely velocities have a high Cost. Costs of up to 20 are ok for crustal profiles.

heart.velocities_from_pole(lats, lons, pole_lat, pole_lon, omega, earth_shape='ellipsoid')[source]

Return horizontal velocities at the input locations for a rotation around the given Euler pole.

Parameters
  • lats (numpy.ndarray) – of geographic latitudes [deg] of points to calculate velocities for

  • lons (numpy.ndarray) – of geographic longitudes [deg] of points to calculate velocities for

  • pole_lat (float) – Euler pole latitude [deg]

  • pole_lon (float) – Euler pole longitude [deg]

  • omega (float) – angle of rotation around Euler pole [deg / million yrs]

Returns

Return type

numpy.ndarray of velocities [m / yr], npoints x 3 (NEU)
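
A sketch with made-up pole values, predicting plate velocities at two sites:

    import numpy as np
    from beat.heart import velocities_from_pole

    lats = np.array([35.0, 36.0])
    lons = np.array([140.0, 141.0])
    v = velocities_from_pole(lats, lons,
                             pole_lat=50.0, pole_lon=-80.0, omega=0.3)
    # v: (2, 3) array of velocities [m / yr] in N, E, U order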

heart.velocities_from_strain_rate_tensor(lats, lons, exx=0.0, eyy=0.0, exy=0.0, rotation=0.0)[source]

Get velocities [m] from 2d area strain rate tensor.

Geographic coordinates are reprojected internally wrt. the centroid of the input locations.

Parameters
  • lats (array-like numpy.ndarray) – geographic latitudes in [deg]

  • lons (array-like numpy.ndarray) – geographic longitudes in [deg]

  • exx (float) – component of the 2d area strain-rate tensor [nanostrain] x-North

  • eyy (float) – component of the 2d area strain-rate tensor [nanostrain] y-East

  • exy (float) – component of the 2d area strain-rate tensor [nanostrain]

  • rotation (float) – clockwise rotation rate around the centroid of input locations

Returns

v_xyz – Deformation rate in [m] in x - East, y - North, z - Up Direction

Return type

2d array-like numpy.ndarray

The config Module

The config module contains the classes to build the configuration files that are being read by the beat executable.

So far there are configuration files for the three main optimization problems implemented: solving the fault geometry, the static distributed slip and the kinematic distributed slip.

class config.BEATconfig(**kwargs)[source]

BEATconfig is the overarching configuration class, providing all the sub-configuration classes for the problem setup, Green's Function generation, optimization algorithm and the data being used.

name

str

date

str

event

pyrocko.model.event.Event, optional

subevents

list of pyrocko.model.event.Event objects, default: []

Event objects of other events that are supposed to be estimated jointly with the main event. May have large temporal separation.

project_dir

str, default: 'event/'

problem_config

ProblemConfig, default: ProblemConfig()

geodetic_config

GeodeticConfig, optional

seismic_config

SeismicConfig, optional

sampler_config

SamplerConfig, default: SamplerConfig()

hyper_sampler_config

SamplerConfig, optional, default: SamplerConfig()

update_hierarchicals()[source]

Evaluate the whole config and initialise necessary hierarchical parameters.

update_hypers()[source]

Evaluate the whole config and initialise necessary hyperparameters.

exception config.ConfigNeedsUpdatingError(errmess='')[source]
class config.CorrectionConfig(**kwargs)[source]

Undocumented.

dataset_names

list of str objects, default: []

Datasets to include in the correction.

enabled

bool, default: False

Flag to enable Correction.

class config.DatasetConfig(**kwargs)[source]

Base config for datasets.

datadir

str, default: './'

Path to directory of the data

names

list of str objects, default: ['Data prefix filenames here ...']

class config.DiscretizationConfig(**kwargs)[source]

Config to determine the discretization of the finite fault(s)

extension_widths

list of float objects, default: [0.1]

Extend reference sources by this factor in each dip-direction. 0.1 means extension of the fault by 10% in each direction, i.e. 20% in total. If patches would intersect with the free surface they are constrained to end at the surface. Each value is applied following the list-order to the respective reference source.

extension_lengths

list of float objects, default: [0.1]

Extend reference sources by this factor in each strike-direction. 0.1 means extension of the fault by 10% in each direction, i.e. 20% in total. Each value is applied following the list-order to the respective reference source.

class config.EulerPoleConfig(**kwargs)[source]

Undocumented.

class config.FFIConfig(**kwargs)[source]

Undocumented.

regularization

str (pyrocko.guts.StringChoice), default: 'none'

Flag for regularization in distributed slip-optimization. Choices: laplacian, none

regularization_config

RegularizationConfig, optional

Additional configuration parameters for regularization

initialization

str (pyrocko.guts.StringChoice), default: 'random'

Initialization of chain starting points, default: random. Choices: random, lsq

npatches

int, optional

Number of patches on full fault. Must not be edited manually! Please edit indirectly through patch_widths and patch_lengths parameters!

subfault_npatches

list of int objects, optional, default: []

Number of patches on each sub-fault. Must not be edited manually! Please edit indirectly through patch_widths and patch_lengths parameters!

class config.GFConfig(**kwargs)[source]

Base config for GreensFunction calculation parameters.

store_superdir

str, default: './'

Absolute path to the directory where Greens Function stores are located

reference_model_idx

int, default: 0

Index to velocity model to use for the optimization. 0 - reference, 1..n - model of variations

n_variations

tuple of 2 int objects, default: (0, 1)

Start and end index to vary input velocity model. Important for the calculation of the model prediction covariance matrix with respect to uncertainties in the velocity model.

earth_model_name

str, default: 'ak135-f-continental.f'

Name of the reference earthmodel, see pyrocko.cake.builtin_models() for alternatives.

nworkers

int, default: 1

Number of processors to use for calculating the GFs

class config.GFLibaryConfig(**kwargs)[source]

Baseconfig for GF Libraries

component

str, default: 'uparr'

event

pyrocko.model.event.Event, default: Event()

crust_ind

int, default: 0

reference_sources

list of beat.sources.RectangularSource objects, default: []

Geometry of the reference source(s) to fix

class config.GNSSCorrectionConfig(**kwargs)[source]

Undocumented.

station_blacklist

list of str objects, default: []

GNSS station names to apply no correction.

station_whitelist

list of str objects, default: []

GNSS station names to apply the correction.

class config.GNSSDatasetConfig(**kwargs)[source]

Undocumented.

components

list of str objects, default: ['north', 'east', 'up']

blacklist

list of str objects, default: ['put blacklisted station names here or delete']

GNSS station to be thrown out.

class config.GeodeticConfig(**kwargs)[source]

Config for geodetic data optimization related parameters.

types

dict of DatasetConfig objects, default: {'SAR': SARDatasetConfig(), 'GNSS': GNSSDatasetConfig()}

Types of geodetic data, i.e. SAR, GNSS, with their configs

calc_data_cov

bool, default: True

Flag for calculating the data covariance matrix, outsourced to “kite”

interpolation

str (pyrocko.guts.StringChoice), default: 'multilinear'

GF interpolation scheme during synthetics generation. Choices: nearest_neighbor, multilinear

corrections_config

GeodeticCorrectionsConfig, default: GeodeticCorrectionsConfig()

Config for additional corrections to apply to geodetic datasets.

dataset_specific_residual_noise_estimation

bool, default: False

If set, for EACH DATASET a specific hyperparameter is estimated. For geodetic data: n_hypers = nimages (SAR) or nstations * ncomponents (GNSS). If false, one hyperparameter for each DATATYPE and displacement COMPONENT.

gf_config

GFConfig, default: GeodeticGFConfig()

class config.GeodeticCorrectionsConfig(**kwargs)[source]

Config for corrections to geodetic datasets.

euler_poles

list of EulerPoleConfig objects, default: [EulerPoleConfig()]

ramp

RampConfig, default: RampConfig()

strain_rates

list of StrainRateConfig objects, default: [StrainRateConfig()]

class config.GeodeticGFConfig(**kwargs)[source]

Geodetic GF parameters for Layered Halfspace.

code

str, default: 'psgrn'

Modeling code to use. (psgrn, … others need to be implemented!)

sample_rate

float, default: 1.1574074074074073e-05

Sample rate for the Green's Functions. Mainly relevant for viscoelastic modeling. Default: coseismic, one sample per day.

sampling_interval

float, default: 1.0

Distance-dependent sampling spacing coefficient. 1.0 - equidistant.

medium_depth_spacing

float, default: 1.0

Depth spacing [km] for GF medium grid.

medium_distance_spacing

float, default: 10.0

Distance spacing [km] for GF medium grid.

class config.GeodeticGFLibraryConfig(**kwargs)[source]

Config for the linear Geodetic GF Library for dumping and loading.

dimensions

tuple of 2 int objects, default: (0, 0)

datatype

str, default: 'geodetic'

class config.GeodeticLinearGFConfig(**kwargs)[source]

Undocumented.

exception config.InconsistentParameterNaming(keyname='', parameter='', mode='')[source]
class config.LaplacianRegularizationConfig(**kwargs)[source]

Determines the structure of the Laplacian.

correlation_function

str (pyrocko.guts.StringChoice), default: 'nearest_neighbor'

Determines the correlation function for smoothing across patches. Choices: nearest_neighbor, gaussian, exponential

class config.LinearGFConfig(**kwargs)[source]

Config for linear GreensFunction calculation parameters.

reference_sources

list of beat.sources.RectangularSource objects, default: []

Geometry of the reference source(s) to fix

sample_rate

float, default: 2.0

Sample rate for the Greens Functions.

discretization

str (pyrocko.guts.StringChoice), default: 'uniform'

Flag for discretization of finite sources into patches. Choices: uniform, resolution

discretization_config

DiscretizationConfig, default: UniformDiscretizationConfig()

Discretization configuration that allows customization.

class config.MetropolisConfig(**kwargs)[source]

Config for optimization parameters of the Adaptive Metropolis algorithm.

n_jobs

int, default: 1

Number of processors to use, i.e. chains to sample in parallel.

n_steps

int, default: 25000

Number of steps for the MC chain.

n_chains

int, default: 20

Number of Metropolis chains for sampling.

thin

int, default: 2

Thinning parameter of the sampled trace. Every “thin”th sample is taken.

burn

float, default: 0.5

Burn-in parameter between 0. and 1. to discard fraction of samples from the beginning of the chain.

class config.ModeConfig(**kwargs)[source]

BaseConfig for optimization mode specific configuration.

class config.NoneRegularizationConfig[source]

Dummy class to return None.

class config.NonlinearGFConfig(**kwargs)[source]

Config for non-linear GreensFunction calculation parameters. Defines how the grid of Green’s Functions in the respective store is created.

use_crust2

bool, default: True

Flag for replacing the crust from the earthmodel with the crust from the crust2 model.

replace_water

bool, default: True

Flag, for replacing water layers in the crust2 model.

custom_velocity_model

pyrocko.cake.LayeredModel (pyrocko.gf.meta.Earthmodel1D), optional

Custom earthmodel, in case crust2 and the standard model are not wanted. Needs to be a pyrocko.cake.LayeredModel.

source_depth_min

float, default: 0.0

Minimum depth [km] for GF function grid.

source_depth_max

float, default: 10.0

Maximum depth [km] for GF function grid.

source_depth_spacing

float, default: 1.0

Depth spacing [km] for GF function grid.

source_distance_radius

float, default: 20.0

Radius of distance grid [km] for GF function grid around reference event.

source_distance_spacing

float, default: 1.0

Distance spacing [km] for GF function grid w.r.t reference_location.

error_depth

float, default: 0.1

3sigma [%/100] error in velocity model layer depth, translates to interval for varying the velocity model

error_velocities

float, default: 0.1

3sigma [%/100] in velocity model layer wave-velocities, translates to interval for varying the velocity model

depth_limit_variation

float, default: 600.0

Depth limit [km] for varying the velocity model. Below that depth the velocity model is not varied based on the errors defined above!

class config.ParallelTemperingConfig(**kwargs)[source]

Undocumented.

n_samples

int, default: 100000

Number of samples of the posterior distribution. Only the samples of processors that sample from the posterior (beta=1) are kept.

n_chains

int, default: 2

Number of PT chains to sample in parallel. A number < 2 will raise an Error, as this is the minimum amount of chains needed.

swap_interval

tuple of 2 int objects, default: (100, 300)

Interval for uniform random integer that is drawn to determine the length of MarkovChains on each worker. When chain is completed the last sample is returned for swapping state between chains. Consequently, lower number will result in more state swapping.

beta_tune_interval

int, default: 5000

Sample interval of master chain after which the chain swap acceptance is evaluated. High acceptance will result in closer spaced betas and vice versa.

n_chains_posterior

int, default: 1

Number of chains that sample from the posterior at beta=1.

resample

bool, default: False

If “true” the testvalue of the priors is taken as seed for all Markov Chains.

thin

int, default: 3

Thinning parameter of the sampled trace. Every “thin”th sample is taken.

burn

float, default: 0.5

Burn-in parameter between 0. and 1. to discard fraction of samples from the beginning of the chain.

record_worker_chains

bool, default: False

If True worker chain samples are written to disc using the specified backend trace objects (during sampler initialization). Very useful for debugging purposes. MUST be False for runs on distributed computing systems!

class config.ProblemConfig(**kwargs)[source]

Config for optimization problem to setup.

mode

str (pyrocko.guts.StringChoice), default: 'geometry'

Problem to solve. Choices: geometry, ffi

mode_config

ModeConfig, optional

Global optimization mode specific parameters.

source_type

str (pyrocko.guts.StringChoice), default: 'RectangularSource'

Source type to optimize for. Choices: ExplosionSource, RectangularExplosionSource, DCSource, CLVDSource, MTSource, MTQTSource, RectangularSource, DoubleDCSource, RingfaultSource

stf_type

str (pyrocko.guts.StringChoice), default: 'HalfSinusoid'

Source time function type to use. Choices: Boxcar, Triangular, HalfSinusoid

decimation_factors

dict of pyrocko.guts.Any objects, optional

Determines the reduction of discretization of an extended source.

n_sources

int, default: 1

Number of Sub-sources to solve for

datatypes

list of pyrocko.guts.Any objects, default: ['geodetic']

hyperparameters

dict of pyrocko.guts.Any objects, default: {}

Hyperparameters to estimate the noise in different types of datatypes.

priors

dict of pyrocko.guts.Any objects, default: {}

Priors of the variables in question.

hierarchicals

dict of pyrocko.guts.Any objects, default: {}

Hierarchical parameters that affect the posterior likelihood, but do not affect the forward problem. Implemented: Temporal station corrections, orbital ramp estimation

get_random_variables()[source]

Evaluate problem setup and return random variables dictionary.

Returns

  • rvs (dict) – variable random variables

  • fixed_params (dict) – fixed random parameters

get_slip_variables()[source]

Return a list of slip variable names defined in the ProblemConfig.

get_test_point()[source]

Returns dict with test point

init_vars(variables=None, nvars=None)[source]

Initiate priors based on the problem mode and datatypes.

Parameters

variables (list) – of str of variable names to initialise

select_variables()[source]

Return model variables depending on problem config.

set_decimation_factor()[source]

Determines the reduction of discretization of an extended source. So far influences only the RectangularSource.

set_vars(bounds_dict, attribute='priors', init=False)[source]

Set variable bounds to given bounds.

validate_all()[source]

Validate all involved sampling parameters.

validate_hierarchicals()[source]

Check if hierarchicals and their test values do not contradict!

validate_hypers()[source]

Check if hyperparameters and their test values do not contradict!

validate_priors()[source]

Check if priors and their test values do not contradict!
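
A hypothetical sketch of interacting with a ProblemConfig obtained from a loaded configuration:

    pc = cfg.problem_config           # cfg: a loaded BEATconfig
    point = pc.get_test_point()       # dict with the test point
    rvs, fixed_params = pc.get_random_variables()
    pc.validate_priors()              # raises if priors and test values contradict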

class config.RampConfig(**kwargs)[source]

Undocumented.

class config.RegularizationConfig(**kwargs)[source]

Undocumented.

class config.ResolutionDiscretizationConfig(**kwargs)[source]

Parameters that control the resolution based source discretization.

References

Atzori2011

Atzori & Antonioli (2011). Optimal fault resolution in geodetic inversion of coseismic data. Geophysical Journal International, 185(1):529-538

epsilon

float, default: 0.004

Damping constant for the Laplacian of the Green's Functions. Usually reasonable within [0.005, 0.1].

epsilon_search_runs

int, default: 1

If above 1, the algorithm is run iteratively, starting with epsilon as the lower bound, on an equal logspace up to epsilon * 100. epsilon_search_runs determines the number of repetitions and the spacing between epsilons. If this is 1, only the model for epsilon is created! The epsilon that minimises the resolution spreading (Atzori et al. 2019) is chosen!

resolution_thresh

float, default: 0.999

Resolution threshold: discretization continues until all patches are below this threshold. The lower, the finer the discretization. Reasonable between: [0.95, 0.999]

depth_penalty

float, default: 3.5

The higher the number, the more penalty on the deeper patches, ergo larger patches.

alpha

float, default: 0.3

Decimal percentage of largest patches that are subdivided further. Reasonable: [0.1, 0.3]

patch_widths_min

list of float objects, default: [1.0]

Patch width [km] for min final discretization of patches.

patch_widths_max

list of float objects, default: [5.0]

Patch width [km] for max initial discretization of patches.

patch_lengths_min

list of float objects, default: [1.0]

Patch length [km] for min final discretization of patches.

patch_lengths_max

list of float objects, default: [5.0]

Patch length [km] for max initial discretization of patches.

get_patch_dimensions()[source]

Returns

Return type

List of patch_widths, List of patch_lengths

class config.SARDatasetConfig(**kwargs)[source]

Undocumented.

class config.SMCConfig(**kwargs)[source]

Config for optimization parameters of the SMC algorithm.

n_jobs

int, default: 1

Number of processors to use, i.e. chains to sample in parallel.

n_steps

int, default: 100

Number of steps for the MC chain.

n_chains

int, default: 1000

Number of Metropolis chains for sampling.

coef_variation

float, default: 1.0

Coefficient of variation; determines the similarity of the intermediate stage pdfs. Low - small beta steps (slow cooling), high - wide beta steps (fast cooling).

stage

int, default: 0

Stage where to start/continue the sampling. Has to be an int; -1 for the final stage.

proposal_dist

str, default: 'MultivariateNormal'

Multivariate Normal proposal distribution for Metropolis steps; alternatives need to be implemented.

update_covariances

bool, default: False

Update model prediction covariance matrices in transition stages.

class config.SamplerConfig(**kwargs)[source]

Config for the sampler specific parameters.

name

str (pyrocko.guts.StringChoice), default: 'SMC'

Sampler to use for sampling the solution space. Choices: PT, SMC, Metropolis

backend

str (pyrocko.guts.StringChoice), default: 'csv'

File type to store output traces. Binary is fast, csv is good for easy sample inspection. Choices: csv, bin. Default: csv

progressbar

bool, default: True

Display progressbar(s) during sampling.

buffer_size

int, default: 5000

number of samples after which the result buffer is written to disk

buffer_thinning

int, default: 1

Factor by which the result trace is thinned before writing to disc.

parameters

SamplerParameters, default: SMCConfig()

Sampler-dependent parameters.

class config.SamplerParameters(**kwargs)[source]

Undocumented.

tune_interval

int, default: 50

Tune interval for adaptive tuning of Metropolis step size.

proposal_dist

str, default: 'Normal'

Normal proposal distribution for Metropolis steps; alternatives: Cauchy, Laplace, Poisson, MultivariateNormal.

check_bnd

bool, default: True

Flag for checking whether proposed step lies within variable bounds.

rm_flag

bool, default: False

Remove existing results prior to sampling.

class config.SeismicConfig(**kwargs)[source]

Config for seismic data optimization related parameters.

datadir

str, default: './'

noise_estimator

SeismicNoiseAnalyserConfig, default: SeismicNoiseAnalyserConfig()

Determines the structure of the data-covariance matrix.

responses_path

str, optional

Path to response file

pre_stack_cut

bool, default: True

Cut the GF traces before stacking around the specified arrival taper

station_corrections

bool, default: False

If set, optimize for time shift for each station.

waveforms

list of WaveformFitConfig objects, default: []

dataset_specific_residual_noise_estimation

bool, default: False

If set, for EACH DATASET a specific hyperparameter is estimated; n_hypers = nstations * nchannels. If false, one hyperparameter for each DATATYPE and displacement COMPONENT.

gf_config

GFConfig, default: SeismicGFConfig()

init_waveforms(wavenames=['any_P'])[source]

Initialise waveform configurations.

class config.SeismicGFConfig(**kwargs)[source]

Seismic GF parameters for Layered Halfspace.

reference_location

beat.heart.ReferenceLocation, optional

Reference location for the midpoint of the Green’s Function grid.

code

str, default: 'qssp'

Modeling code to use. (qssp, qseis, coming soon: qseis2d)

sample_rate

float, default: 2.0

Sample rate for the Greens Functions.

rm_gfs

bool, default: True

Flag for removing modeling module GF files after completion.

class config.SeismicGFLibraryConfig(**kwargs)[source]

Config for the linear Seismic GF Library for dumping and loading.

wave_config

WaveformFitConfig, default: WaveformFitConfig()

starttime_sampling

float, default: 0.5

duration_sampling

float, default: 0.5

starttime_min

float, default: 0.0

duration_min

float, default: 0.1

dimensions

tuple of 5 int objects, default: (0, 0, 0, 0, 0)

datatype

str, default: 'seismic'

mapnumber

int, optional

class config.SeismicLinearGFConfig(**kwargs)[source]

Config for seismic linear GreensFunction calculation parameters.

reference_location

beat.heart.ReferenceLocation, optional

Reference location for the midpoint of the Green’s Function grid.

duration_sampling

float, default: 1.0

Calculate Green's Functions for varying source time function durations determined by the prior bounds. The discretization in between is determined by duration_sampling.

starttime_sampling

float, default: 1.0

Calculate Green's Functions for varying rupture onset times. These are determined by the (rupture) velocity prior bounds and the hypocenter location.

class config.SeismicNoiseAnalyserConfig(**kwargs)[source]

Undocumented.

structure

str (pyrocko.guts.StringChoice), default: 'variance'

Determines data-covariance matrix structure. Choices: variance, exponential, import, non-toeplitz

pre_arrival_time

float, default: 5.0

Time [s] before synthetic P-wave arrival until variance is estimated

class config.StrainRateConfig(**kwargs)[source]

Undocumented.

class config.UniformDiscretizationConfig(**kwargs)[source]

Undocumented.

patch_widths

list of float objects, default: [5.0]

List of patch widths [km] to divide the reference sources. Each value is applied following the list order to the respective reference source.

patch_lengths

list of float objects, default: [5.0]

Patch lengths [km] to divide the reference sources. Each value is applied following the list order to the respective reference source.

class config.WaveformFitConfig(**kwargs)[source]

Config for specific parameters that are applied to post-process a specific type of waveform and calculate the misfit.

include

bool, default: True

Flag to include waveform into optimization.

preprocess_data

bool, default: True

Flag to filter input data.

name

str, default: 'any_P'

blacklist

list of str objects, default: []

Station name for stations to be thrown out.

quantity

str (pyrocko.guts.StringChoice), default: 'displacement'

Quantity of synthetics to be computed.

channels

list of str objects, default: ['Z']

filterer

list of beat.heart.FilterBase objects, default: []

List of Filters that are applied in the order of the list.

distances

tuple of 2 float objects, default: (30.0, 90.0)

interpolation

str (pyrocko.guts.StringChoice), default: 'multilinear'

GF interpolation scheme. Choices: nearest_neighbor, multilinear

arrival_taper

pyrocko.trace.Taper, default: ArrivalTaper()

Taper a,b/c,d time [s] before/after wave arrival

event_idx

int, optional, default: 0

Index to event from events list for reference time and data extraction. Default is 0 - always use the reference event.
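
An illustrative configuration for a teleseismic P-wave fit, combining classes documented here (values are examples only):

    from beat.config import WaveformFitConfig
    from beat.heart import ArrivalTaper, Filter

    wc = WaveformFitConfig(
        name='any_P',
        channels=['Z'],
        filterer=[Filter(lower_corner=0.01, upper_corner=0.2)],
        distances=(30.0, 90.0),
        arrival_taper=ArrivalTaper(a=-15.0, b=-10.0, c=50.0, d=55.0),
        interpolation='multilinear')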

config.dump_config(config)[source]

Dump configuration file.

Parameters

config (BEATconfig) –

config.init_config(name, date=None, min_magnitude=6.0, main_path='./', datatypes=['geodetic'], mode='geometry', source_type='RectangularSource', n_sources=1, waveforms=['any_P'], sampler='SMC', hyper_sampler='Metropolis', use_custom=False, individual_gfs=False)[source]

Initialise a BEATconfig file and write it to main_path/name. Fine parameters have to be edited manually in the config .yaml file.

Parameters
  • name (str) – Name of the event

  • date (str) – ‘YYYY-MM-DD’, date of the event

  • min_magnitude (scalar, float) – approximate minimum Mw of the event

  • datatypes (List of strings) – data sets to include in the optimization: either ‘geodetic’ and/or ‘seismic’

  • mode (str) – type of optimization problem: ‘Geometry’ / ‘Static’/ ‘Kinematic’

  • n_sources (int) – number of sources to solve for / discretize depending on mode parameter

  • waveforms (list) – of strings of wavenames to include in the misfit function and GF calculation

  • sampler (str) – Optimization algorithm to use to sample the solution space Options: ‘SMC’, ‘Metropolis’

  • use_custom (boolean) – Flag to setup manually a custom velocity model.

  • individual_gfs (boolean) – Flag to use individual Green’s Functions for each specific station. If false a reference location will be initialised in the config file. If true the reference locations will be taken from the imported station objects.

Returns

Return type

BEATconfig

config.init_reference_sources(source_points, n_sources, source_type, stf_type, event=None)[source]

Initialise sources of specified geometry

Parameters

source_points (list) – of dicts or kite sources

config.load_config(project_dir, mode)[source]

Load configuration file.

Parameters
  • project_dir (str) – path to the directory of the configuration file

  • mode (str) – type of optimization problem: ‘geometry’ / ‘static’/ ‘kinematic’

  • update (list) – of strings to update parameters ‘hypers’ or/and ‘hierarchicals’

Returns

Return type

BEATconfig

The sampler Module

metropolis

Metropolis algorithm module, wrapping the pymc3 implementation. Provides the possibility to update the involved covariance matrices during the course of sampling the chain.

class sampler.metropolis.Metropolis(*args, **kwargs)[source]

Metropolis-Hastings sampler

Parameters
  • vars (list) – List of variables for sampler

  • out_vars (list) – List of output variables for trace recording. If empty unobserved_RVs are taken.

  • n_chains (int) – Number of chains per stage; has to be a large multiple of n_jobs (the number of processors to be used) on the machine.

  • scaling (float) – Factor applied to the proposal distribution i.e. the step size of the Markov Chain

  • covariance (numpy.ndarray) – Initial covariance matrix for the proposal distribution; (n_chains x n_chains) for MultivariateNormal, otherwise (n_chains). If None, the identity matrix is used.

  • likelihood_name (string) – name of the pymc3.deterministic variable that contains the model likelihood; defaults to ‘like’

  • backend (str) – type of backend to use for storing sample results; for alternatives see the catalog in the backend module

  • proposal_dist (pymc3.metropolis.Proposal) – Type of proposal distribution; see pymc3.step_methods.metropolis for options

  • tune (boolean) – Flag for adaptive scaling based on the acceptance rate

  • model (pymc3.Model) – Optional model for sampling step. Defaults to None (taken from context).

apply_sampler_state(state)[source]

Update sampler state to given state (obtained by ‘get_sampler_state’)

Parameters

state (dict) – with sampler parameters

get_sampler_state()[source]

Return dictionary of sampler state.

Returns

Return type

dict of sampler state

sampler.metropolis.get_final_stage(homepath, n_stages, model)[source]

Combine Metropolis results into final stage to get one single chain for plotting results.

sampler.metropolis.get_trace_stats(mtrace, step, burn=0.5, thin=2, n_jobs=1)[source]

Get mean value of trace variables and return point.

Parameters
  • mtrace (pymc3.backends.base.MultiTrace) – Multitrace sampling result

  • step (initialised smc.SMC sampler object) –

  • burn (float) – Burn-in fraction; samples from the first burn fraction of the trace are thrown out

  • thin (int) – Thinning of samples in the trace

Returns

Return type

dict with points, covariance matrix

sampler.metropolis.metropolis_sample(n_steps=10000, homepath=None, start=None, progressbar=False, rm_flag=False, buffer_size=5000, buffer_thinning=1, step=None, model=None, n_jobs=1, update=None, burn=0.5, thin=2)[source]

Execute Metropolis algorithm repeatedly depending on the number of chains.

smc

Sequential Monte Carlo Sampler module;

Runs on any pymc3 model.

class sampler.smc.SMC(*args, **kwargs)[source]

Sequential Monte-Carlo sampler class.

Parameters
  • vars (list) – List of variables for sampler

  • out_vars (list) – List of output variables for trace recording. If empty unobserved_RVs are taken.

  • n_chains (int) – Number of chains per stage; has to be a large multiple of n_jobs (the number of processors to be used) on the machine.

  • scaling (float) – Factor applied to the proposal distribution i.e. the step size of the Markov Chain

  • covariance (numpy.ndarray) – Initial covariance matrix for the proposal distribution; (n_chains x n_chains) for MultivariateNormal, otherwise (n_chains). If None, the identity matrix is used.

  • likelihood_name (string) – name of the pymc3.deterministic variable that contains the model likelihood; defaults to ‘like’

  • proposal_dist (pymc3.metropolis.Proposal) – Type of proposal distribution; see pymc3.step_methods.metropolis for options

  • tune (boolean) – Flag for adaptive scaling based on the acceptance rate

  • coef_variation (scalar, float) – Coefficient of variation; determines the change of beta from stage to stage, i.e. indirectly the number of stages. A low coef_variation means slow beta change, resulting in many stages, and vice versa (default: 1.)

  • check_bound (boolean) – Check if the current sample lies outside of the variable definition; speeds up computation as the forward model won't be executed for such samples. Default: True

  • model (pymc3.Model) – Optional model for sampling step. Defaults to None (taken from context).

  • backend (str) – type of backend to use for storing sample results; for alternatives see the catalog in the backend module

References

Ching2007

Ching, J. and Chen, Y. (2007). Transitional Markov Chain Monte Carlo Method for Bayesian Model Updating, Model Class Selection, and Model Averaging. J. Eng. Mech., 10.1061/(ASCE)0733-9399(2007)133:7(816), 816-832.

calc_beta()[source]

Calculate next tempering beta and importance weights based on current beta and sample likelihoods.

Returns

  • beta(m+1) (scalar, float) – tempering parameter of the next stage

  • beta(m) (scalar, float) – tempering parameter of the current stage

  • weights (numpy.ndarray) – Importance weights (floats)

calc_covariance()[source]

Calculate trace covariance matrix based on importance weights.

Returns

cov – weighted covariances (NumPy > 1.10 required)

Return type

numpy.ndarray

get_chain_previous_lpoint(mtrace)[source]

Read the trace results, take the end point of each chain and set it as the previous chain result for comparison in the Metropolis select step.

Parameters

mtrace (pymc3.backend.base.MultiTrace) –

Returns

chain_previous_lpoint – all unobserved RV values, including dataset likelihoods

Return type

list

get_map_end_points()[source]

Calculate mean of the end-points and return point.

Returns

Return type

Dictionary of trace variables

resample()[source]

Resample the pdf based on importance weights, using Kitagawa's deterministic resampling algorithm.

Returns

outindex – Array of resampled trace indexes

Return type

numpy.ndarray

select_end_points(mtrace)[source]

Read trace results (variables and model likelihood) and take end points for each chain and set as start population for the next stage.

Parameters

mtrace (pymc3.backend.base.MultiTrace) –

Returns

  • population (list) – of pymc3.Point() dictionaries

  • array_population (numpy.ndarray) – Array of trace end-points

  • likelihoods (numpy.ndarray) – Array of likelihoods of the trace end-points

sampler.smc.smc_sample(n_steps, step=None, start=None, homepath=None, stage=0, n_jobs=1, progressbar=False, buffer_size=5000, buffer_thinning=1, model=None, update=None, random_seed=None, rm_flag=False)[source]

Sequential Monte Carlo sampling

Samples the solution space with n_chains of Metropolis chains, where each chain has n_steps iterations. Once finished, the sampled traces are evaluated:

  1. Based on the likelihoods of the final samples, chains are weighted

  2. The weighted covariance of the ensemble is calculated and set as the new proposal distribution

  3. The variation in the ensemble is calculated and the next tempering parameter (beta) determined

  4. New n_chains Metropolis chains are seeded on the traces with high weight for n_steps iterations

  5. Repeat until beta > 1.

Parameters
  • n_steps (int) – The number of samples to draw for each Markov-chain per stage

  • step (SMC) – SMC initialisation object

  • start (list of dict) – of length n_chains; starting points in parameter space (or partial points). Defaults to random draws from the variables (empty dict)

  • stage (int) – Stage where to start or continue the calculation. It is possible to continue after completed stages (stage should be the number of the completed stage + 1). If None the start will be at stage = 0.

  • n_jobs (int) – The number of cores to be used in parallel. Be aware that theano has internal parallelisation. Sometimes this is more efficient especially for simple models. step.n_chains / n_jobs has to be an integer number!

  • homepath (string) – Result folder for storing stages; will be created if it does not exist

  • progressbar (bool) – Flag for displaying a progress bar

  • buffer_size (int) – number of samples after which the buffer is written to disk (also written when the chain end is reached)

  • buffer_thinning (int) – every nth sample of the buffer is written to disk default: 1 (no thinning)

  • model (pymc3.Model) – (optional if in a with-model context) has to contain a deterministic variable with the name defined under step.likelihood_name that contains the model likelihood

  • update (models.Problem) – Problem object that contains all the observed data and (if applicable) covariances to be updated each transition step.

  • rm_flag (bool) – If True existing stage result folders are being deleted prior to sampling.

References

Minson2013

Minson, S. E., Simons, M. and Beck, J. L. (2013). Bayesian inversion for finite fault earthquake source models I - Theory and algorithm. Geophysical Journal International, 194(3), pp. 1701-1726.

pt

Parallel Tempering algorithm with mpi4py

class sampler.pt.TemperingManager(step, n_workers, model, progressbar, buffer_size, swap_interval, beta_tune_interval, n_workers_posterior)[source]

Manages worker-related work attributes, holds mappings between workers and betas, and counts the acceptance of chain swaps.

Provides methods for chain_swapping and beta adaptation.

property betas

Inverse of the sampler temperatures. The lower the beta, the more likely a step is accepted.

get_acceptance_swap(beta, beta_tune_interval)[source]

Returns acceptance rate for swapping states between chains.

get_package(source, trace=None, resample=False, burnin=1000)[source]

Register a worker with the manager and get assigned the annealing parameter and the work package. If the worker was registered previously, it continues its old task; this ensures book-keeping of workers and their sampler states.

Parameters
  • source (int) – MPI source id from a worker message

  • trace (:class:beat.backend.BaseTrace) – Trace object keeping the samples of the Markov Chain

  • resample (bool) – If True all the Markov Chains start sampling at the test value

  • burnin (int) – Number of samples the worker is taking before updating the proposal covariance matrix based on the trace samples

Returns

step – object that contains the step method defining how to sample the solution space

Return type

beat.sampler.Metropolis

get_posterior_workers(idxs=False)[source]

Worker indexes that are sampling from the posterior (beta == 1.). If idxs is True, return indexes to sample and acceptance arrays.

get_workers_ge_beta(beta, idxs=False)[source]

Get worker source indexes greater than or equal to the given beta. If idxs is True, return indexes to sample and acceptance arrays.

propose_chain_swap(m1, m2, source1, source2)[source]

Propose a swap between chain samples.

tune_betas()[source]

Evaluate the acceptance rate of the posterior workers and the lowest tempered worker. This scaling here has the inverse behaviour of the Metropolis step scaling! If there is little acceptance, more exploration is needed and lower beta values are desired.

update_betas(t_scale=None)[source]

Update annealing schedule for all the workers.

Parameters
  • t_scale (float) – factor to adjust the step size of the temperatures; the base step size is 1.e1

  • update (bool) – if True the current scale factor is updated by the given t_scale

Returns

Return type

list of inverse temperatures (betas) in decreasing beta order

worker_a2l(m, source)[source]

Map m from array space to list space, worker dependent.

worker_beta_updated(source, check=False)[source]

Check if source beta is updated.

Parameters
  • source (int) – mpi worker index

  • check (boolean) – if True worker beta status is set to “updated”

Returns

Return type

boolean, True if beta is updated

sampler.pt.pt_sample(step, n_chains, n_samples=100000, start=None, swap_interval=(100, 300), beta_tune_interval=10000, n_workers_posterior=1, homepath='', progressbar=True, buffer_size=5000, buffer_thinning=1, model=None, rm_flag=False, resample=False, keep_tmp=False, record_worker_chains=False)[source]

Parallel Tempering algorithm

(adaptive) Metropolis sampling over n_chains MC chains. Half (floor) of these are sampling at beta = 1 (the posterior). The other half of the MC chains are tempered linearly down to beta = 1e-6. Randomly, the states of chains are swapped based on the Metropolis-Hastings acceptance criterion raised to the power of the difference in beta of the involved chains. The samples are written to disk only by the master process. Once the specified number of samples is reached, sampling is stopped.

Parameters
  • step (beat.sampler.Metropolis) – sampler object

  • n_chains (int) – number of Markov Chains to use

  • n_samples (int) – number of samples in the result trace, if reached sampling stops

  • swap_interval (tuple) – interval for uniform random integer that determines the length of each MarkovChain on each worker. The chain end values of workers are proposed for swapping state and are written in the final trace

  • beta_tune_interval (int) – Evaluate acceptance rate of chain swaps and tune betas similar to proposal step tuning

  • n_workers_posterior (int) – number of workers that sample from the posterior distribution at beta=1

  • homepath (string) – Result folder for storing stages; will be created if it does not exist

  • progressbar (bool) – Flag for displaying a progress bar

  • buffer_size (int) – number of samples after which the buffer is written to disk (also written when the chain end is reached)

  • buffer_thinning (int) – every nth sample of the buffer is written to disk, default: 1 (no thinning)

  • model (pymc3.Model) – (optional if in a with-model context) has to contain a deterministic variable with the name defined under step.likelihood_name that contains the model likelihood

  • rm_flag (bool) – If True existing stage result folders are being deleted prior to sampling.

  • resample (bool) – If True all the Markov Chains start sampling at the test value

  • keep_tmp (bool) – If True the execution directory (under ‘/tmp/’) is not deleted after the process finishes

  • record_worker_chains (bool) – If True worker chain samples are written to disk using the specified backend trace objects (during sampler initialization). Very useful for debugging purposes. MUST be False for runs on distributed computing systems!

sampler.pt.sample_pt_chain(draws, step=None, start=None, trace=None, chain=0, tune=None, progressbar=True, model=None, random_seed=- 1)[source]

Sample a single chain of the Parallel Tempering algorithm and return the last sample of the chain. Depending on the step object the MarkovChain can have various step behaviour, e.g. Metropolis, NUTS, …

Parameters
  • draws (int or beat.sampler.base.Proposal) – The number of samples to draw for each Markov-chain per stage or a Proposal distribution

  • step (sampler.metropolis.Metropolis) – Metropolis initialisation object

  • start (dict) – Starting point in parameter space (or partial point) Defaults to random draws from variables (defaults to empty dict)

  • chain (int) – Chain number used to store sample in backend.

  • stage (int) – Stage where to start or continue the calculation. It is possible to continue after completed stages (stage should be the number of the completed stage + 1). If None the start will be at stage = 0.

  • tune (int) – Number of iterations to tune, if applicable (defaults to None)

  • progressbar (bool) – Flag for displaying a progress bar

  • model (pymc3.Model) – (optional if in with context) has to contain deterministic variable name defined under step.likelihood_name’ that contains the model likelihood

Returns

Return type

numpy.ndarray with the end point of the MarkovChain

sampler.pt.tune(scale, acc_rate)[source]

Tunes the temperature scaling parameter according to the acceptance rate over the last tune_interval:

Rate     Variance adaptation
-------  -------------------
< 0.001  x 0.8
< 0.05   x 0.9
< 0.2    x 0.95
> 0.5    x 1.05
> 0.75   x 1.1
> 0.95   x 1.2

The ffi Module

The parallel Module

exception parallel.TimeoutException(jobstack=[])[source]

Exception raised if a per-task timeout fires.

class parallel.WatchedWorker(task, work, initializer=None, initargs=(), timeout=65535)[source]

Wrapper class for parallel execution of a task.

Parameters
  • task (function to execute) –

  • work (List) – of arguments to specified function

  • timeout (int) – time [s] after which the worker is fired; default: 65535 s

run()[source]

Start working on the task!

parallel.borrow_all_memories(shared_params, memshared_instances)[source]

Run borrow_memory on a list of params and shared-memory sharedctypes.

Parameters
  • shared_params (list) – of theano.tensor.sharedvar.TensorSharedVariable; the Theano shared variables whose memory should be replaced by shared memory

  • memshared_instances (dict of tuples) – of multiprocessing.RawArray and their shapes; the memory shared across processes (e.g. from memshare_sparams)

Notes

Same as borrow_memory but for lists of shared memories and theano variables. See borrow_memory

parallel.borrow_memory(shared_param, memshared_instance, shape)[source]

Spawn different processes with the shared memory of your theano model’s variables.

Parameters
  • shared_param (theano.tensor.sharedvar.TensorSharedVariable) – the Theano shared variable whose memory should be replaced by shared memory

  • memshared_instance (multiprocessing.RawArray) – the memory shared across processes (e.g. from memshare_sparams)

  • shape (tuple) – of shape of shared instance

Notes

Modified from: https://github.com/JonathanRaiman/theano_lstm/blob/master/theano_lstm/shared_memory.py

In the target function of each process, run borrow_memory on the parameters that should share memory across processes. In the example below, a pickled function is loaded and each of its Theano shared variables is paired with its shared-memory instance, calling borrow_memory on each pair.

Examples

>>> def spawn_model(path, wrapped_params):
...     # prevent recompilation and arbitrary locks
...     theano.config.reoptimize_unpickled_function = False
...     theano.gof.compilelock.set_lock_status(False)
...     # load your function from its pickled instance (from path)
...     myfunction = MyFunction.load(path)
...     # for each parameter of the function, replace the internal
...     # parameter's memory with the across-process shared memory
...     for param, (memshared_instance, shape) in zip(
...             myfunction.get_shared(), wrapped_params):
...         borrow_memory(param, memshared_instance, shape)
...     # acquire your dataset (either through some smart shared memory
...     # or by reloading it for each process)
...     dataset, dataset_labels = acquire_dataset()
...     # then run your model forward in this process
...     for epoch in range(20):
...         myfunction.update_fun(dataset, dataset_labels)

See borrow_all_memories for list usage.

parallel.check_available_memory(filesize)[source]

Checks if the system memory can handle the given filesize.

Parameters

filesize (float) – in [MB] megabytes

parallel.exception_tracer(func)[source]

Function decorator that returns a traceback if an Error is raised in a child process of a pool.

parallel.get_process_id()[source]

Returns the process id of the current process

parallel.memshare(parameternames)[source]

Add parameters to set of variables that are to be put into shared memory.

Parameters

parameternames (list of str) – of names of theano.tensor.sharedvar.TensorSharedVariable

parallel.memshare_sparams(shared_params)[source]

For each parameter in a list of Theano TensorSharedVariable we substitute the memory with a sharedctype using the multiprocessing library.

The wrapped memory can then be used by other child processes thereby synchronising different instances of a model across processes (e.g. for multi cpu gradient descent using single cpu Theano code).

Parameters

shared_params (list) – of theano.tensor.sharedvar.TensorSharedVariable

Returns

memshared_instances – list of multiprocessing.sharedctypes.RawArray (shared-memory arrays) that point to the memory used by the current process’s Theano variables.

Return type

list

Notes

Modified from: https://github.com/JonathanRaiman/theano_lstm/blob/master/theano_lstm/shared_memory.py

>>> # define some theano function
>>> myfunction = myfunction(20, 50, ...)
>>> # wrap the memory of the Theano variables
>>> memshared_instances = memshare_sparams(myfunction.get_shared())

Then you can use this memory in child processes (see the usage of borrow_memory).

parallel.overseer(timeout)[source]

Function decorator that raises a TimeoutException exception after timeout seconds, if the decorated function did not return.

parallel.paripool(function, workpackage, nprocs=None, chunksize=1, timeout=65535, initializer=None, initargs=(), worker_initializer=None, winitargs=())[source]

Initialises a pool of workers and executes a function in parallel by forking the process. Forks only once, during initialisation.

Parameters
  • function (function) – python function to be executed in parallel

  • workpackage (list) – of iterables that are to be looped over / executed in parallel; usually these objects differ for each task

  • nprocs (int) – number of processors to be used in the parallel process

  • chunksize (int) – number of work packages to throw at workers in each instance

  • timeout (int) – time [s] after which processes are killed; default: 65535 s

  • initializer (function) – to initialise the pool with; may be a container for shared arrays

  • initargs (tuple) – of arguments for the initializer

  • worker_initializer (function) – to initialize each worker process

  • winitargs (tuple) – of arguments for worker_initializer

The backend Module

Text file trace backend modified from pymc3 to work efficiently with SMC

Store sampling values as CSV files.

File format

Sampling values for each chain are saved in a separate file (under a directory specified by the dir_path argument). The rows correspond to sampling iterations. The column names consist of variable names and index labels. For example, the heading

x,y__0_0,y__0_1,y__1_0,y__1_1,y__2_0,y__2_1

represents two variables, x and y, where x is a scalar and y has a shape of (3, 2).

class backend.ArrayStepSharedLLK(*args, **kwargs)[source]

Modified ArrayStepShared to handle a returned larger point that includes the likelihood values. Additionally takes a list of output variables including the likelihoods.

Parameters
  • vars (list) – variables to be sampled

  • out_vars (list) – variables to be stored in the traces

  • shared (dict) – theano variable -> shared variables

  • blocked (boolean) – (default: True)

class backend.BaseChain(model=None, vars=None, buffer_size=5000, buffer_thinning=1)[source]

Base chain object, independent of file or memory output.

Parameters
  • model (Model) – If None, the model is taken from the with context.

  • vars (list of variables) – Sampling values will be stored for these variables. If None, model.unobserved_RVs is used.

buffer_write(lpoint, draw)[source]

Write sampling results into the buffer. If the buffer is full, throw an error.

get_sample_covariance(step)[source]

Return sample Covariance matrix from buffer.

class backend.FileChain(dir_path='', model=None, vars=None, buffer_size=5000, buffer_thinning=1, progressbar=False, k=None)[source]

Base class for a trace written to a file, with buffer functionality and progressbar. The buffer is a list of tuples of lpoints and a draw index. Inheriting classes must define the methods ‘_write_data_to_file’ and ‘_load_df’.

clear_data()[source]

Clear the data loaded from file.

write(lpoint, draw)[source]

Write sampling results into buffer. If buffer is full write samples to file.

class backend.MemoryChain(buffer_size=5000)[source]

Slim memory trace object. Keeps points in a list in memory.

Parameters
  • draws (int) – Number of samples

  • chain (int) – Chain number

class backend.NumpyChain(dir_path, model=None, vars=None, buffer_size=5000, progressbar=False, k=None, buffer_thinning=1)[source]

Numpy binary trace object based on ‘.bin’ files. Fast in reading and writing. Bad for debugging.

Parameters
  • dir_path (str) – Name of directory to store text files

  • model (Model) – If None, the model is taken from the with context.

  • vars (list of variables) – Sampling values will be stored for these variables. If None, model.unobserved_RVs is used.

  • buffer_size (int) – number of samples after which the buffer is written to disk (also written when the chain end is reached)

  • buffer_thinning (int) – every nth sample of the buffer is written to disk

  • progressbar (boolean) – flag whether a progressbar is shown; if not, a log message is printed every time the buffer is written to disk

  • k (int, optional) – if given, don't use the shape from the test point as the size of the trans-d variables

construct_data_structure()[source]

Create a dtype to store the data based on varnames in a numpy array.

Returns

Return type

A numpy.dtype

point(idx)[source]

Get the point of the current chain with variable names as keys.

Parameters

idx (int) – Index of the nth step of the chain

Returns

Return type

dictionary of point values

setup(draws, chain, overwrite=False)[source]

Perform chain-specific setup. Creates the file with a header. An existing file is not overwritten unless the flag is set.

Parameters
  • draws (int) – Expected number of draws

  • chain (int) – Chain number

  • overwrite (bool, optional) – if True an existing file is overwritten; default: False

class backend.TextChain(dir_path, model=None, vars=None, buffer_size=5000, buffer_thinning=1, progressbar=False, k=None)[source]

Text trace object based on ‘.csv’ files. Slow in reading and writing. Good for debugging.

Parameters
  • dir_path (str) – Name of directory to store text files

  • model (Model) – If None, the model is taken from the with context.

  • vars (list of variables) – Sampling values will be stored for these variables. If None, model.unobserved_RVs is used.

  • buffer_size (int) – number of samples after which the buffer is written to disk (also written when the chain end is reached)

  • buffer_thinning (int) – every nth sample of the buffer is written to disk

  • progressbar (boolean) – flag whether a progressbar is shown; if not, a log message is printed every time the buffer is written to disk

  • k (int, optional) – if given, don't use the shape from the test point as the size of the trans-d variables

get_values(varname, burn=0, thin=1)[source]

Get values from trace.

Parameters
  • varname (str) – Variable name for which values are to be retrieved.

  • burn (int) – Number of burn-in samples; this many samples are thrown out from the start of the trace

  • thin (int) – Thinning interval; only every ‘thin’-th sample of the trace is kept

Returns

Return type

numpy.array

point(idx)[source]

Get the point of the current chain with variable names as keys.

Parameters

idx (int) – Index of the nth step of the chain

Returns

Return type

dictionary of point values

setup(draws, chain, overwrite=False)[source]

Perform chain-specific setup.

Parameters
  • draws (int) – Expected number of draws

  • chain (int) – Chain number

class backend.TransDTextChain(name, model=None, vars=None, buffer_size=5000, progressbar=False)[source]

Result trace object for trans-d problems. Manages several TextChains, one for each dimension.

point(idx)[source]

Get the point of the current chain with variable names as keys.

Parameters

idx (int) – Index of the nth step of the chain

Returns

Return type

dict of point values

backend.check_multitrace(mtrace, draws, n_chains, buffer_thinning=1)[source]

Check multitrace for incomplete sampling and return indexes from chains that need to be resampled.

Parameters
  • mtrace (pymc3.backend.base.MultiTrace) – Multitrace object containing the sampling traces

  • draws (int) – Number of steps (i.e. chain length for each Markov Chain)

  • n_chains (int) – Number of Markov Chains

Returns

Return type

list of indexes for chains that need to be resampled

backend.concatenate_traces(mtraces)[source]

Concatenate a List of MultiTraces with same chain indexes.

backend.extract_bounds_from_summary(summary, varname, shape, roundto=None, alpha=0.01)[source]

Extract lower and upper bound of random variable.

Returns

Return type

list of numpy.ndarray

backend.extract_variables_from_df(dataframe)[source]

Extract random variables and their shapes from the pymc3-pandas data-frame

Parameters

dataframe (pandas.DataFrame) –

Returns

  • flat_names (dict) – with variable-names and respective flat-name indexes to data-frame

  • var_shapes (dict) – with variable names and shapes

backend.get_highest_sampled_stage(homedir, return_final=False)[source]

Return the number of the stage that has been sampled before the final stage.

Parameters

homedir (str) – Directory to the sampled stage results

Returns

stage number

Return type

int

backend.load_multitrace(dirname, varnames=[], chains=None, backend='csv')[source]

Load TextChain database.

Parameters
  • dirname (str) – Name of directory with files (one per chain)

  • varnames (list) – of strings with variable names

  • chains (list, optional) –

Returns

Return type

A pymc3.backend.base.MultiTrace instance

backend.load_sampler_params(project_dir, stage_number, mode)[source]

Load saved parameters from given ATMIP stage.

Parameters
  • project_dir (str) – absolute path to directory of BEAT project

  • stage_number (str) – stage number or ‘final’ for the last stage

  • mode (str) – problem mode that has been solved (‘geometry’, ‘static’, ‘kinematic’)

backend.thin_buffer(buffer, buffer_thinning, ensure_last=True)[source]

Thin a list of objects by a given factor.

Parameters
  • buffer (list) – of objects to be thinned

  • buffer_thinning (int) – every nth object in list is returned

  • ensure_last (bool) – if enabled, ensure that the last object in the list is returned

The models Module

problems

class models.problems.DistributionOptimizer(config, hypers=False)[source]

Defines the model setup to solve the linear slip-distribution and returns the model object.

Parameters

config (config.BEATconfig) – Contains all the information about the model setup and optimization boundaries, as well as the sampler parameters.

lsq_solution(point, plot=False)[source]

Returns the non-negative least-squares solution for a given input point.

Parameters

point (dict) – in solution space

Returns

Return type

point with least-squares solution

class models.problems.GeometryOptimizer(config, hypers=False)[source]

Defines the model setup to solve for the non-linear fault geometry.

Parameters

config (config.BEATconfig) – Contains all the information about the model setup and optimization boundaries, as well as the sampler parameters.

models.problems.load_model(project_dir, mode, hypers=False, build=True)[source]

Load config from project directory and return BEAT problem including model.

Parameters
  • project_dir (string) – path to beat model directory

  • mode (string) – problem name to be loaded

  • hypers (boolean) – flag to return hyper parameter estimation model instead of main model.

  • build (boolean) – flag to build models

Returns

problem

Return type

Problem

seismic

class models.seismic.SeismicDistributerComposite(sc, project_dir, events, hypers=False)[source]

Comprises how to solve the seismic (kinematic) linear forward model for distributed slip.

get_synthetics(point, **kwargs)[source]

Get synthetics for given point in solution space.

Parameters
  • point (pymc3.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the forward model: outmode = ‘stacked_traces’ / ‘tapered_data’ / ‘array’

Returns

Return type

list with heart.SeismicDataset synthetics for each target

load_fault_geometry()[source]

Load fault-geometry, i.e. discretized patches.

Returns

Return type

heart.FaultGeometry

load_gfs(crust_inds=None, make_shared=True)[source]

Load Green's Function matrices for each variable to be inverted for. Updates the gfs and gf_names attributes.

Parameters
  • crust_inds (list) – of int to indexes of Green’s Functions

  • make_shared (bool) – if True transforms gfs to theano.shared variables

point2sources(point)[source]

Returns the fault source patch(es) with the point values updated.

Parameters

point (dict) – with random variables from solution space

update_weights(point, n_jobs=1, plot=False, chop_bounds=['b', 'c'])[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters

point (dict) – with numpy array-like items and variable name keys

class models.seismic.SeismicGeometryComposite(sc, project_dir, sources, events, hypers=False)[source]

Comprises how to solve the non-linear seismic forward model.

Parameters
  • sc (config.SeismicConfig) – configuration object containing seismic setup parameters

  • project_dir (str) – directory of the model project, where to find the data

  • sources (list) – of pyrocko.gf.seismosizer.Source

  • events (list) – of pyrocko.model.Event contains information of reference event(s), coordinates of reference point(s) and source time(s)

  • hypers (boolean) – if true initialise object for hyper parameter optimization

get_formula(input_rvs, fixed_rvs, hyperparams, problem_config)[source]

Get the seismic likelihood formula for the model built. Has to be called within a with-model context.

Parameters
  • input_rvs (list) – of pymc3.distribution.Distribution of source parameters

  • fixed_rvs (dict) – of numpy.array

  • hyperparams (dict) – of pymc3.distribution.Distribution

  • problem_config (config.ProblemConfig) –

Returns

posterior_llk

Return type

theano.tensor.Tensor

get_synthetics(point, **kwargs)[source]

Get synthetics for given point in solution space.

Parameters
  • point (pymc3.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the seismic forward model: outmode = ‘traces’ / ‘array’ / ‘data’

Returns

default

Return type

array of synthetics for all targets

point2sources(point)[source]

Updates the composite source(s) (in place) with the point values.

Parameters

point (dict) – with random variables from solution space

update_weights(point, n_jobs=1, plot=False, chop_bounds=['b', 'c'])[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters

point (dict) – with numpy array-like items and variable name keys

geodetic

class models.geodetic.GeodeticDistributerComposite(gc, project_dir, events, hypers=False)[source]

Comprises how to solve the geodetic (static) linear forward model for distributed slip.

get_formula(input_rvs, fixed_rvs, hyperparams, problem_config)[source]

Formulation of the distribution problem for the model built. Has to be called within a with-model-context.

Parameters
  • input_rvs (list) – of pymc3.distribution.Distribution

  • hyperparams (dict) – of pymc3.distribution.Distribution

Returns

llk – log-likelihood for the distributed slip

Return type

theano.tensor.Tensor

get_synthetics(point, outmode='data')[source]

Get synthetics for given point in solution space.

Parameters
  • point (pymc3.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the forward model

Returns

Return type

list with numpy.ndarray synthetics for each target

load_fault_geometry()[source]

Load fault-geometry, i.e. discretized patches.

Returns

Return type

heart.FaultGeometry

load_gfs(crust_inds=None, make_shared=True)[source]

Load Green's Function matrices for each variable to be inverted for. Updates the gfs and gf_names attributes.

Parameters
  • crust_inds (list) – of int to indexes of Green’s Functions

  • make_shared (bool) – if True transforms gfs to theano.shared variables

point2sources(point)[source]

Returns the fault source patch(es) with the point values updated.

Parameters

point (dict) – with random variables from solution space

update_weights(point, n_jobs=1, plot=False)[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters

point (dict) – with numpy array-like items and variable name keys

class models.geodetic.GeodeticGeometryComposite(gc, project_dir, sources, events, hypers=False)[source]
get_synthetics(point, **kwargs)[source]

Get synthetics for given point in solution space.

Parameters
  • point (pymc3.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the forward model

Returns

Return type

list with numpy.ndarray synthetics for each target

update_weights(point, n_jobs=1, plot=False)[source]

Updates weighting matrices (in place) with respect to the point in the solution space.

Parameters

point (dict) – with numpy array-like items and variable name keys

class models.geodetic.GeodeticInterseismicComposite(gc, project_dir, sources, events, hypers=False)[source]
get_synthetics(point, **kwargs)[source]

Get synthetics for given point in solution space.

Parameters
  • point (pymc3.Point()) – Dictionary with model parameters

  • kwargs – especially to change the output of the forward model

Returns

Return type

list with numpy.ndarray synthetics for each target

The psgrn Module

This module got merged with the pscmp module and is now part of the pyrocko library and may be removed from the beat repository in the future.

exception psgrn.Interrupted[source]
class psgrn.PsGrnConfig(**kwargs)[source]

Undocumented.

psgrn_version

str, default: '2008a'

n_snapshots

int, default: 1

max_time

float, default: 1.0

observation_depth

float, default: 0.0

class psgrn.PsGrnConfigFull(**kwargs)[source]

Undocumented.

earthmodel_1d

pyrocko.cake.LayeredModel (pyrocko.gf.meta.Earthmodel1D), optional

psgrn_outdir

str, default: 'psgrn_green/'

sampling_interval

float, default: 1.0

sw_source_regime

int, default: 1

sw_gravity

int, default: 0

accuracy_wavenumber_integration

float, default: 0.025

displ_filenames

tuple of 3 str objects, default: ('uz', 'ur', 'ut')

stress_filenames

tuple of 6 str objects, default: ('szz', 'srr', 'stt', 'szr', 'srt', 'stz')

tilt_filenames

tuple of 3 str objects, default: ('tr', 'tt', 'rot')

gravity_filenames

tuple of 2 str objects, default: ('gd', 'gr')

exception psgrn.PsGrnError[source]
class psgrn.PsGrnSpatialSampling(**kwargs)[source]

Undocumented.

n_steps

int, default: 10

start_distance

float, default: 0.0

end_distance

float, default: 100.0

The pscmp Module

This module got merged with the psgrn module and is now part of the pyrocko library and may be removed from the beat repository in the future.

exception pscmp.Interrupted[source]
class pscmp.PsCmpArray(**kwargs)[source]

Undocumented.

n_steps_x

int, default: 10

n_steps_y

int, default: 10

start_distance_x

float, default: 0.0

minimum distance in x-direction (E) from source [m]

end_distance_x

float, default: 100000.0

maximum distance in x-direction (E) from source [m]

start_distance_y

float, default: 0.0

minimum distance in y-direction (N) from source [m]

end_distance_y

float, default: 100000.0

maximum distance in y-direction (N) from source [m]

class pscmp.PsCmpConfig(**kwargs)[source]

Undocumented.

pscmp_version

str, default: '2008a'

observation

PsCmpObservation, default: PsCmpScatter()

pscmp_outdir

str, default: './'

psgrn_outdir

str, default: './psgrn_functions/'

los_vector

tuple of 3 float objects, optional

times_snapshots

list of float objects, optional

rectangular_source_patches

list of pyrocko.guts.Any objects, default: [PsCmpRectangularSource()]

class pscmp.PsCmpConfigFull(**kwargs)[source]

Undocumented.

sw_los_displacement

int, default: 0

sw_coulomb_stress

int, default: 0

coulomb_master_field

PsCmpCoulombStress, optional, default: PsCmpCoulombStressMasterFault()

displ_sw_output_types

tuple of 3 int objects, default: (1, 1, 1)

stress_sw_output_types

tuple of 6 int objects, default: (0, 0, 0, 0, 0, 0)

tilt_sw_output_types

tuple of 3 int objects, default: (0, 0, 0)

gravity_sw_output_types

tuple of 2 int objects, default: (0, 0)

displ_filenames

tuple of 3 str objects, default: ('uz', 'ur', 'ut')

stress_filenames

tuple of 6 str objects, default: ('szz', 'srr', 'stt', 'szr', 'srt', 'stz')

tilt_filenames

tuple of 3 str objects, default: ('tr', 'tt', 'rot')

gravity_filenames

tuple of 2 str objects, default: ('gd', 'gr')

snapshot_basefilename

str, default: 'snapshot'

class pscmp.PsCmpCoulombStress(**kwargs)[source]

Undocumented.

class pscmp.PsCmpCoulombStressMasterFault(**kwargs)[source]

Undocumented.

friction

float, default: 0.7

skempton_ratio

float, default: 0.0

master_fault_strike

float, default: 300.0

master_fault_dip

float, default: 15.0

master_fault_rake

float, default: 90.0

sigma1

float, default: 1000000.0

sigma2

float, default: -1000000.0

sigma3

float, default: 0.0

exception pscmp.PsCmpError[source]
class pscmp.PsCmpObservation(**kwargs)[source]

Undocumented.

class pscmp.PsCmpProfile(**kwargs)[source]

Undocumented.

n_steps

int, default: 10

start_distance

float, default: 0.0

minimum distance from source [m]

end_distance

float, default: 100000.0

maximum distance from source [m]

class pscmp.PsCmpRectangularSource(**kwargs)[source]

Input parameters have to be given in: [deg] for the reference point (lat, lon) and the angles (rake, strike, dip); [m] for shifts with respect to the reference position; and [m] for fault dimensions and source depth. The default shift of the origin (pos_s, pos_d) with respect to the reference coordinates (lat, lon) is zero. The depth parameter can refer to either the top edge or the center; this can be changed with the update method (flag: top_depth=True/False).

length

float, default: 6000.0

width

float, default: 5000.0

strike

float, default: 0.0

dip

float, default: 90.0

rake

float, default: 0.0

torigin

float, default: 0.0

slip

float, optional, default: 1.0

pos_s

float, optional

pos_d

float, optional

opening

float, default: 0.0

update(top_depth=True, **kwargs)[source]
Change some of the source model's parameters.

Input update depth per default: top depth!

Example:

>>> from pyrocko import gf
>>> s = gf.DCSource()
>>> s.update(strike=66., dip=33.)
>>> print(s)
--- !pf.DCSource
depth: 0.0
time: 1970-01-01 00:00:00
magnitude: 6.0
strike: 66.0
dip: 33.0
rake: 0.0

class pscmp.PsCmpScatter(**kwargs)[source]

Undocumented.

lats

list of float objects, optional, default: [10.4, 10.5]

lons

list of float objects, optional, default: [12.3, 13.4]

pscmp.distributed_fault_patches_to_config(patches)[source]

Input: List of PsCmpRectangularSource(s)

The interseismic Module

Module for interseismic models.

Block-backslip model

The fault is assumed to be locked above a certain depth (“locking_depth”) and is creeping at the rate of the defined plate, which is handled as a rigid block.

STILL EXPERIMENTAL!

References

Savage & Prescott 1978 Metzger et al. 2011

interseismic.geo_backslip_synthetics(engine, sources, targets, lons, lats, reference, amplitude, azimuth, locking_depth)[source]

Interseismic backslip model: forward model for synthetic displacements (n, e, d) [m] caused by a rigid moving block defined by the bounding geometry of rectangular faults. The reference location determines the stable regions. The amplitude and azimuth determine the amount and direction of the block movement. Based on this block movement, the upper part of the crust that is not locked is assumed to slip back. Thus the final synthetics are the superposition of the block movement and the backslip.

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (list) – of pyrocko.gf.seismosizer.RectangularSource Sources to calculate synthetics for

  • targets (list) – of pyrocko.gf.targets.StaticTarget

  • lons (list of floats, or numpy.ndarray) – longitudes [deg] of observation points

  • lats (list of floats, or numpy.ndarray) – latitudes [deg] of observation points

  • amplitude (float) – slip [m] of the moving block

  • azimuth (float) – azimuth angle [deg], i.e. the direction of the moving block with respect to north

  • locking_depth (numpy.ndarray) – locking depth [km] of the fault(s); below it there is no movement

  • reference (heart.ReferenceLocation) – reference location that determines the stable block

Returns

(n x 3) [North, East, Down] displacements [m]

Return type

numpy.ndarray

The covariance Module

class covariance.SeismicNoiseAnalyser(structure='identity', pre_arrival_time=5.0, engine=None, events=None, sources=None, chop_bounds=['b', 'c'])[source]

Seismic noise analyser

Parameters
  • structure (string) – either identity, exponential, import

  • pre_arrival_time (float) – in [s], time before P arrival until variance is estimated

  • engine (pyrocko.gf.seismosizer.LocalEngine) – processing object for synthetics calculation

  • events (list) – of pyrocko.model.Event; reference event(s) from the catalog

  • chop_bounds (list of len 2) – of taper attributes a, b, c, or d

get_data_covariances(wmap, sample_rate, results=None, chop_bounds=None)[source]

Estimated data covariances of seismic traces

Parameters
  • wmap (beat.WaveformMapping) –

  • results

  • sample_rate (float) – sampling rate of data_traces and GreensFunction stores

Returns

Return type

numpy.ndarray

covariance.geodetic_cov_velocity_models(engine, sources, targets, dataset, plot=False, event=None, n_jobs=1)[source]

Calculate model prediction uncertainty matrix with respect to uncertainties in the velocity model for geodetic targets using fomosto GF stores.

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) – contains synthetics generation machine

  • targets (pyrocko.gf.targets.StaticTarget) – dataset and observation points to calculate covariance for

  • sources (list) – of pyrocko.gf.seismosizer.Source determines the covariance matrix

  • plot (boolean) – if set, a plot is produced and no covariance matrix is returned

Returns

Return type

numpy.ndarray with Covariance due to velocity model uncertainties

covariance.geodetic_cov_velocity_models_pscmp(store_superdir, crust_inds, target, sources)[source]

Calculate model prediction uncertainty matrix with respect to uncertainties in the velocity model for geodetic targets based on pscmp. Deprecated!!!

Parameters
  • store_superdir (str) – Absolute path to the geodetic GreensFunction directory

  • crust_inds (list) – of int of indices for respective GreensFunction store indexes

  • target (heart.GeodeticDataset) – dataset and observation points to calculate covariance for

  • sources (list) – of pscmp.PsCmpRectangularSource determines the covariance matrix

Returns

Return type

numpy.ndarray with Covariance due to velocity model uncertainties

covariance.seismic_cov_velocity_models(engine, sources, targets, arrival_taper, arrival_time, wavename, filterer, plot=False, n_jobs=1, chop_bounds=['b', 'c'])[source]

Calculate model prediction uncertainty matrix with respect to uncertainties in the velocity model for station and channel.

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) – contains synthetics generation machine

  • sources (list) – of pyrocko.gf.seismosizer.Source

  • targets (list) – of pyrocko.gf.seismosizer.Targets

  • arrival_taper – determines tapering around phase Arrival

  • arrival_time (None or numpy.ndarray or float) – of the phase to apply the taper to; if None, the theoretical arrival from ray tracing is used

  • filterer (list) – of heart.Filter determining the filtering corner frequencies of various filters

  • plot (boolean) – open snuffler and browse traces if True

  • n_jobs (int) – number of processors to be used for calculation

Returns

Return type

numpy.ndarray with Covariance due to velocity model uncertainties

The theanof Module

Package for wrapping various functions into Theano-Ops to be able to include them into theano graphs as is needed by the pymc3 models.

Far future:

include a ‘def grad()’ method in each Op in order to enable the use of gradient-based optimization algorithms

class theanof.EulerPole(lats, lons, data_mask)[source]

Theano Op for rotation of geodetic observations around Euler Pole.

Parameters
  • lats (float, vector) – of Latitudes [deg] of points to be rotated

  • lons (float, vector) – of Longitudes [deg] of points to be rotated

  • data_mask (bool, vector) – of indexes to points to be masked

make_node(inputs)[source]

Create an “apply” node for the inputs in that order.

perform(node, inputs, output)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could be allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theanof.GeoInterseismicSynthesizer(lats, lons, engine, targets, sources, reference)[source]

Theano wrapper to transform the parameters of the block model to parameters of a fault.

make_node(inputs)[source]

Transforms theano tensors to node and allocates variables accordingly.

Parameters

inputs (dict) – keys being strings of source attributes of the pyrocko.gf.seismosizer.RectangularSource that was used to initialise the Operator; values are theano.tensor.Tensor

perform(node, inputs, output)[source]

Perform method of the Operator to calculate synthetic displacements.

class theanof.GeoLayerSynthesizerPsCmp(lats, lons, store_superdir, crust_ind, sources)[source]

Theano wrapper for a geodetic forward model for static observation points. Direct call to PsCmp; needs a PsGrn Green's Function store! Deprecated, currently not used in composites.

Parameters
  • lats (n x 1 numpy.ndarray) – with latitudes of observation points

  • lons (n x 1 numpy.ndarray) – with longitudes of observation points

  • store_superdir (str) – with absolute path to the GF store super directory

  • crust_ind (int) – with the index to the GF store

  • sources (pscmp.RectangularSource) – to be used in generating the synthetic displacements

make_node(inputs)[source]

Transforms theano tensors to node and allocates variables accordingly.

Parameters

inputs (dict) – keys being strings of source attributes of the pscmp.RectangularSource that was used to initialise the Operator; values are theano.tensor.Tensor

perform(node, inputs, output)[source]

Perform method of the Operator to calculate synthetic displacements.

class theanof.GeoSynthesizer(engine, sources, targets)[source]

Theano wrapper for a geodetic forward model with synthetic displacements. Uses the pyrocko engine and fomosto GF stores. Input order does not matter anymore (it did in a previous version).

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (List) – containing pyrocko.gf.seismosizer.Source Objects

  • targets (List) – containing pyrocko.gf.targets.StaticTarget Objects

make_node(inputs)[source]

Transforms theano tensors to node and allocates variables accordingly.

Parameters

inputs (dict) – keys being strings of source attributes of the pscmp.RectangularSource that was used to initialise the Operator; values are theano.tensor.Tensor

perform(node, inputs, output)[source]

Perform method of the Operator to calculate synthetic displacements.

class theanof.SeisDataChopper(sample_rate, traces, arrival_taper, filterer)[source]

Deprecated!

make_node(*inputs)[source]

Create an “apply” node for the inputs in that order.

perform(node, inputs, output)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could be allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theanof.SeisSynthesizer(engine, sources, targets, event, arrival_taper, arrival_times, wavename, filterer, pre_stack_cut, station_corrections)[source]

Theano wrapper for a seismic forward model with synthetic waveforms. Input order does not matter anymore (it did in a previous version).

Parameters
  • engine (pyrocko.gf.seismosizer.LocalEngine) –

  • sources (List) – containing pyrocko.gf.seismosizer.Source Objects

  • targets (List) – containing pyrocko.gf.seismosizer.Target Objects

  • arrival_taper (heart.ArrivalTaper) –

  • arrival_times (numpy.ndarray) – with synthetic arrival times w.r.t. the reference event

  • filterer (heart.Filterer) –

make_node(inputs)[source]

Transforms theano tensors to node and allocates variables accordingly.

Parameters

inputs (dict) – keys being strings of source attributes of the pscmp.RectangularSource that was used to initialise the Operator; values are theano.tensor.Tensor

perform(node, inputs, output)[source]

Perform method of the Operator to calculate synthetic displacements.

class theanof.StrainRateTensor(lats, lons, data_mask)[source]

TheanoOp for internal block deformation through 2d area strain rate tensor.

Parameters
  • lats (float, vector) – of Latitudes [deg] of points to be strained

  • lons (float, vector) – of Longitudes [deg] of points to be strained

  • data_mask (bool, vector) – of indexes to points to be masked

make_node(inputs)[source]

Create an “apply” node for the inputs in that order.

perform(node, inputs, output)[source]

Required: Calculate the function on the inputs and put the variables in the output storage. Return None.

Parameters
  • node (Apply instance) – Contains the symbolic inputs and outputs.

  • inputs (list) – Sequence of inputs (immutable).

  • output_storage (list) – List of mutable 1-element lists (do not change the length of these lists)

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that it was produced by a previous call to impl; it could be allocated by another Op. impl is free to reuse it as it sees fit, or to discard it and allocate new memory.

Raises

MethodNotDefined – The subclass does not override this method.

class theanof.Sweeper(patch_size, n_patch_dip, n_patch_strike, implementation)[source]

Theano Op for C implementation of the fast sweep algorithm.

Parameters
  • patch_size (float) – size of fault patches [km]

  • n_patch_strike (int) – number of patches in strike direction

  • n_patch_dip (int) – number of patches in dip-direction

make_node(*inputs)[source]

Create an “apply” node for the inputs in that order.

perform(node, inputs, output)[source]

Return start-times of rupturing patches with respect to given hypocenter.

Parameters
  • slownesses (float, vector) – inverse of the rupture velocity across each patch

  • nuc_dip (int, scalar) – rupture nucleation point on the fault in dip-direction, index to patch

  • nuc_strike (int, scalar) – rupture nucleation point on the fault in strike-direction, index to patch

Returns

starttimes

Return type

float, vector

Notes

Here we call the C-implementation on purpose with swapped strike and dip directions, because we need the fault dipping in row directions of the array. The C-implementation has it along columns!!!

The utility Module

This module provides a namespace for various functions: coordinate transformations, loading and storing objects, book-keeping of indexes in arrays that relate to defined variable names, manipulation of various pyrocko objects and many more …

class utility.Counter[source]

Counts calls of types with string_ids. Repeated calls with the same string id increase the count.

class utility.DataMap(list_ind, slc, shp, dtype, name)
property dtype

Alias for field number 3

property list_ind

Alias for field number 0

property name

Alias for field number 4

property shp

Alias for field number 2

property slc

Alias for field number 1

class utility.ListArrayOrdering(list_arrays, intype='numpy')[source]

An ordering for a list to an array space. Also accepts non-theano tensors. Modified from pymc3 blocking.

Parameters
  • list_arrays (list) – numpy.ndarray or theano.tensor.Tensor

  • intype (str) – defining the input type ‘tensor’ or ‘numpy’

class utility.ListToArrayBijection(ordering, list_arrays, blacklist=[])[source]

A mapping between a List of arrays and an array space.

Parameters
  • ordering (ListArrayOrdering) –

  • list_arrays (list) – of numpy.ndarray

a2l(array)[source]

Maps values from array space to List space. Inverse operation of fmap.

Parameters

array (numpy.ndarray) –

Returns

a_list – of numpy.ndarray

Return type

list

a_nd2l(array)[source]

Maps values from ndarray space (ndims, data) to List space. Inverse operation of fmap.

Parameters

array (numpy.ndarray) –

Returns

a_list – of numpy.ndarray

Return type

list

d2l(dpt)[source]

Maps values from dict space to List space. If a variable expected from the ordering is not in the point, it is filled with a low dummy value (-999999).

Parameters

dpt (dict) – pymc3.model.Point()

Returns

Return type

lpoint

f3map(list_arrays)[source]

Maps values from List space to array space with 3 columns

Parameters

list_arrays (list) – of numpy.ndarray with size: n x 3

Returns

array – single array comprising all the input arrays

Return type

numpy.ndarray

l2a(list_arrays)[source]

Maps values from List space to array space

Parameters

list_arrays (list) – of numpy.ndarray

Returns

array – single array comprising all the input arrays

Return type

numpy.ndarray

l2d(a_list)[source]

Maps values from List space to dict space

Parameters

a_list (list) – of numpy.ndarray

Returns

Return type

pymc3.model.Point

srmap(tarray)[source]

Maps value from symbolic variable array space to List space

Parameters

tarray (theano.tensor.Tensor) –

Returns

a_list – of theano.tensor.Tensor

Return type

list
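A minimal round-trip sketch of the bijection, assuming plain numpy input arrays; the array shapes are illustrative:

    import numpy as num
    from beat.utility import ListArrayOrdering, ListToArrayBijection

    arrays = [num.zeros(3), num.zeros((2, 2))]           # template arrays
    ordering = ListArrayOrdering(arrays, intype='numpy')
    bij = ListToArrayBijection(ordering, arrays)

    # flatten a list of arrays into a single array and map it back
    flat = bij.l2a([num.arange(3), num.arange(4).reshape(2, 2)])
    a_list = bij.a2l(flat)   # list of numpy.ndarray with original shapes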

utility.PsGrnArray2LayeredModel(psgrn_input_path)[source]

Read PsGrn Input file and return velocity model.

Parameters

psgrn_input_path (str) – Absolute path to the psgrn input file.

Returns

Return type

LayeredModel

utility.RS_center(source)[source]

Get 3d fault center coordinates. Depth attribute is top depth!

Parameters

source (RectangularSource) –

Returns

numpy.ndarray with x, y, z coordinates of the center of the fault

utility.RS_dipvector(source)[source]

Get 3 dimensional dip-vector of a planar fault.

Parameters

source (RectangularSource) –

Returns

Return type

numpy.ndarray

utility.RS_strikevector(source)[source]

Get 3 dimensional strike-vector of a planar fault.

Parameters

source (RectangularSource) –

Returns

Return type

numpy.ndarray

utility.adjust_fault_reference(source, input_depth='top')[source]

Adjusts source depth and east/north-shifts variables of fault according to input_depth mode ‘top/center’.

Parameters
  • source (RectangularSource or pscmp.RectangularSource or pyrocko.gf.seismosizer.RectangularSource) –

  • input_depth (string) – if ‘top’ the depth in the source is interpreted as top depth; if ‘center’ the depth in the source is interpreted as center depth

Returns

Return type

Updated input source object
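A short usage sketch with a pyrocko source; the parameter values are arbitrary placeholders:

    from pyrocko.gf.seismosizer import RectangularSource
    from beat.utility import adjust_fault_reference

    source = RectangularSource(
        depth=2000., width=5000., length=10000., dip=60., strike=30.)

    # interpret source.depth as top depth and adjust depth and
    # east/north shifts accordingly
    source = adjust_fault_reference(source, input_depth='top')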

utility.adjust_point_units(point)[source]

Transform variables with [km] units to [m]

Parameters

point (dict) – pymc3.model.Point() with model parameter names as keys

Returns

mpoint – pymc3.model.Point()

Return type

dict

utility.apply_station_blacklist(stations, blacklist)[source]

Weed stations listed in the blacklist.

Parameters
  • stations (list) – pyrocko.model.Station

  • blacklist (list) – strings of station names

Returns

stations

Return type

list of pyrocko.model.Station

utility.biggest_common_divisor(a, b)[source]

Find the biggest common divisor of two float numbers a and b.

Parameters
  • a (float) –

  • b (float) –

Returns

Return type

int

utility.check_hyper_flag(problem)[source]

Check problem setup for type of model standard/hyperparameters.

Parameters

problem (models.Problem) –

Returns

flag

Return type

boolean

utility.check_point_keys(point, phrase)[source]

Searches point keys for a phrase, returns list of keys with the phrase.

utility.downsample_trace(data_trace, deltat=None, snap=False)[source]

Downsample data_trace to given sampling interval ‘deltat’.

Parameters
  • data_trace (pyrocko.trace.Trace) –

  • deltat (float) – sampling interval [s] to which the trace should be downsampled

Returns

new instance

Return type

pyrocko.trace.Trace

utility.dump_objects(outpath, outlist)[source]

Dump objects in outlist into pickle file.

Parameters
  • outpath (str) – absolute path and file name for the file to be stored

  • outlist (list) – of objects to be saved as pickle

utility.ensure_cov_psd(cov)[source]

Ensure that the input covariance matrix is positive definite. If not, find the nearest positive semi-definite matrix.

Parameters

cov (numpy.ndarray) – symmetric covariance matrix

Returns

cov – positive definite covariance matrix

Return type

numpy.ndarray
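A quick sketch; the 2 x 2 matrix is illustrative:

    import numpy as num
    from beat.utility import ensure_cov_psd

    cov = num.array([[1.0, 0.9],
                     [0.9, 1.0]])   # already positive definite
    cov = ensure_cov_psd(cov)       # returned unchanged in this case
    num.linalg.cholesky(cov)        # succeeds for a positive definite matrix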

utility.error_not_whole(f, errstr='')[source]

Test if a float is a whole number; if not, raise an error.

utility.find_elbow(data, theta=None, rotate_left=False)[source]

Get point closest to turning point in data by rotating it by theta.

Adapted from: https://datascience.stackexchange.com/questions/57122/in-elbow-curve-how-to-find-the-point-from-where-the-curve-starts-to-rise

Parameters
  • data (array-like) – of shape [n, 2]

  • theta (float) – rotation angle

Returns

  • Index (int) – of the point closest to the elbow.

  • rotated_data (array-like [n, 2])
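A small sketch on synthetic data with an obvious bend; the values are illustrative and theta is left at its default:

    import numpy as num
    from beat.utility import find_elbow

    x = num.arange(10.)
    y = num.concatenate([num.ones(7), num.array([2., 5., 12.])])
    data = num.vstack([x, y]).T        # shape [n, 2]
    idx, rotated = find_elbow(data)    # idx points near the bend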

utility.gather(l, key, sort=None, filter=None)[source]

Return dictionary of input l grouped by key.

utility.get_data_radiant(data)[source]

Data needs to be of shape [n, 2].

utility.get_fit_indexes(llk)[source]

Find indexes of various likelihoods in a likelihood distribution.

Parameters

llk (numpy.ndarray) –

Returns

Return type

dict with array indexes

utility.get_random_uniform(lower, upper, dimension=1)[source]

Get uniform random values between given bounds

Parameters
  • lower (float) –

  • upper (float) –

  • dimension (int) – size of the result vector

utility.get_rotation_matrix(axes=['x', 'y', 'z'])[source]

Return a function for 3-d rotation matrix for a specified axis.

Parameters

axes (str or list of str) – x, y or z for the axis

Returns

Return type

func that takes an angle [rad]
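A hedged sketch, assuming that passing a single axis string returns one function of the rotation angle (for the default list input the return would cover several axes):

    import numpy as num
    from beat.utility import get_rotation_matrix

    rotz = get_rotation_matrix('z')   # function of an angle [rad]
    R = rotz(num.pi / 2.)             # 3 x 3 rotation matrix
    print(R.dot(num.array([1., 0., 0.])))   # rotate the x unit vector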

utility.join_models(global_model, crustal_model)[source]

Replace the part of the global_model that is covered by the crustal_model.

Parameters
  • global_model (pyrocko.cake.LayeredModel) –

  • crustal_model (pyrocko.cake.LayeredModel) –

Returns

joined_model

Return type

cake.LayeredModel

utility.join_points(ldicts)[source]

Join a list of dicts into one dict, concatenating the values of keys that appear in multiple dicts.

utility.line_intersect(e1, e2, n1, n2)[source]

Get intersection points of n lines. End points of each line are given in (n x 2) arrays.

Parameters
  • e1 (numpy.array (n x 2)) – east coordinates of first line

  • e2 (numpy.array (n x 2)) – east coordinates of second line

  • n1 (numpy.array (n x 2)) – north coordinates of first line

  • n2 (numpy.array (n x 2)) – north coordinates of second line

Returns

Return type

numpy.array (n x 2) of intersection points (easts, norths)

utility.list2string(l, fill=', ')[source]

Convert a list of strings to a single string.

Parameters

l (list) – of strings

utility.load_objects(loadpath)[source]

Load (unpickle) saved (pickled) objects from specified loadpath.

Parameters

loadpath (str) – absolute path and file name of the file to be loaded

Returns

objects – of saved objects

Return type

list

utility.mod_i(i, cycle)[source]

Calculates the modulus of i with respect to cycle and returns the number of full cycles and the rest.

Parameters
  • i (int or float) – Number to be cycled over

  • cycle (int or float) – Cycle length

Returns

  • fullc (int or float depending on input)

  • rest (int or float depending on input)
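A minimal example of the documented semantics:

    from beat.utility import mod_i

    fullc, rest = mod_i(370., 360.)   # -> 1.0 full cycle, rest 10.0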

utility.near_psd(x, epsilon=2.220446049250313e-16)[source]

Calculates the nearest positive semi-definite matrix for a correlation/covariance matrix.

Parameters
  • x (numpy.ndarray) – Covariance/correlation matrix

  • epsilon (float) – Eigenvalue limit, here set to the precision of numpy floats; otherwise the resulting matrix is likely still not positive definite

Returns

near_cov – closest positive definite covariance/correlation matrix

Return type

numpy.ndarray

Notes

Numpy number precision is not high enough to resolve this for covariance matrices with small values; the result will still have very small negative eigenvalues.

See repair_covariance below for a simpler implementation that can resolve the numbers.

Algorithm after Rebonato & Jaekel 1999

utility.positions2idxs(positions, cell_size, min_pos=0.0, backend=numpy, dtype='int16')[source]

Return indexes to a grid with a given cell size.

Parameters
  • positions (numpy.ndarray, float) – of positions [km]

  • cell_size (float) – size of grid cells

  • backend (str) –

  • dtype (str) – data type of returned array, default: int16
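A brief sketch; the positions are illustrative and the exact snapping of positions onto cells follows the implementation:

    import numpy as num
    from beat.utility import positions2idxs

    positions = num.array([0.5, 1.5, 2.5])            # [km]
    idxs = positions2idxs(positions, cell_size=1.0)   # int16 grid indexes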

utility.repair_covariance(x, epsilon=2.220446049250313e-16)[source]

Make covariance input matrix A positive definite. Eigenvalues lower than the precision of numpy floats are set to at least that precision, and the matrix is back-transformed.

Parameters
  • x (numpy.ndarray) – Covariance/correlation matrix

  • epsilon (float) – Eigenvalue limit, here set to the precision of numpy floats; otherwise the resulting matrix is likely still not positive definite

Returns

near_cov – closest positive definite covariance/correlation matrix

Return type

numpy.ndarray

Notes

Algorithm after Gilbert Strang, ‘Introduction to Linear Algebra’.
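A sketch with a slightly indefinite matrix (one eigenvalue is marginally negative); the values are illustrative:

    import numpy as num
    from beat.utility import repair_covariance

    cov = num.array([[1.0e-8, 1.2e-8],
                     [1.2e-8, 1.0e-8]])   # eigenvalues: 2.2e-8, -2.0e-9
    cov_pd = repair_covariance(cov)
    num.linalg.cholesky(cov_pd)           # succeeds after the repair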

utility.running_window_rms(data, window_size, mode='valid')[source]

Calculate the standard deviations of a running window over data.

Parameters
  • data (numpy.ndarray 1-d) – containing data to calculate stds from

  • window_size (int) – sample size of running window

  • mode (str) – see numpy.convolve for modes

Returns

with stds, size data.size - window_size + 1

Return type

numpy.ndarray 1-d
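For instance:

    import numpy as num
    from beat.utility import running_window_rms

    data = num.random.randn(1000)
    rms = running_window_rms(data, window_size=100)
    # with mode='valid' the result has 1000 - 100 + 1 = 901 samples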

utility.search_catalog(date, min_magnitude, dayrange=1.0)[source]

Search the GCMT catalog for events around the specified date (± dayrange days), filtering with the given magnitude threshold.

Parameters
  • date (str) – ‘YYYY-MM-DD’, date of the event

  • min_magnitude (float) – approximate minimum Mw of the event

  • dayrange (float) – temporal search interval [days] around date

Returns

event

Return type

pyrocko.model.Event
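A hedged usage sketch; date and magnitude are placeholders and the call needs access to the GCMT catalog:

    from beat.utility import search_catalog

    event = search_catalog(date='2010-04-04', min_magnitude=7.0,
                           dayrange=1.0)
    print(event.name, event.magnitude)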

utility.setup_logging(project_dir, levelname, logfilename='BEAT_log.txt')[source]

Setup function for handling BEAT logging. The logfile ‘BEAT_log.txt’ is saved in the ‘project_dir’.

Parameters
  • project_dir (str) – absolute path to the output directory for the Log file

  • levelname (str) – defining the level of logging

utility.split_off_list(l, off_length)[source]

Split off a list of length ‘off_length’ from the beginning of the input list l. Modifies the input list!

Parameters
  • l (list) – of objects to be separated

  • off_length (int) – number of elements from l to be split off

Returns

Return type

list

utility.split_point(point)[source]

Split point in solution space into List of dictionaries with source parameters for each source.

Parameters

point (dict) – pymc3.model.Point()

Returns

source_points – of pymc3.model.Point()

Return type

list
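A minimal sketch for two sources; the keys and values are illustrative:

    import numpy as num
    from beat.utility import split_point

    point = {'east_shift': num.array([1., 2.]),
             'depth': num.array([5., 8.])}
    source_points = split_point(point)
    # one point per source, roughly:
    # [{'east_shift': 1., 'depth': 5.}, {'east_shift': 2., 'depth': 8.}]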

utility.swap_columns(array, index1, index2)[source]

Swaps the columns of the input array based on the given indexes.

utility.transform_sources(sources, datatypes, decimation_factors=None)[source]

Transforms a list of heart.RectangularSource to a dictionary of sources pscmp.PsCmpRectangularSource for geodetic data and pyrocko.gf.seismosizer.RectangularSource for seismic data.

Parameters
  • sources (list) – heart.RectangularSource

  • datatypes (list) – of strings with the datatypes to be included ‘geodetic’ or ‘seismic’

  • decimation_factors (dict) – of datatypes and their respective decimation factor

Returns

d – of transformed sources with datatypes as keys

Return type

dict

utility.unique_list(l)[source]

Find unique entries in a list and return them in a list. Keeps the original order.

Parameters

l (list) –

Returns

Return type

list with only unique elements
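For example:

    from beat.utility import unique_list

    unique_list(['P', 'S', 'P', 'pP'])   # -> ['P', 'S', 'pP'], order kept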

utility.update_source(source, **point)[source]

Update source, keeping stf and source params separate. Modifies input source object!

utility.weed_data_traces(data_traces, stations)[source]

Throw out data traces belonging to stations that are not in the stations list. Keeps list orders!

Parameters
  • data_traces (list) – of pyrocko.trace.Trace

  • stations (list) – of pyrocko.model.Station

Returns

weeded_data_traces – of pyrocko.trace.Trace

Return type

list

utility.weed_input_rvs(input_rvs, mode, datatype)[source]

Throw out random variables (RV)s from input list that are not included by the respective synthetics generating functions.

Parameters
  • input_rvs (dict) – of pymc3.Distribution or set of variable names

  • mode (str) – ‘geometry’, ‘static’, ‘kinematic’ or ‘interseismic’; determines the discarded RVs

  • datatype (str) – ‘seismic’ or ‘geodetic’ determining the discarded RVs

Returns

weeded_input_rvs – of pymc3.Distribution

Return type

dict

utility.weed_stations(stations, event, distances=(30.0, 90.0), remove_duplicate=False)[source]

Weed stations that are not within the given distance range (min, max) to a reference event.

Parameters
  • stations (list) – of pyrocko.model.Station

  • event (pyrocko.model.Event) –

  • distances (tuple) – of minimum and maximum distance [deg] for station-event pairs

Returns

weeded_stations – of pyrocko.model.Station

Return type

list

utility.weed_targets(targets, stations, discard_targets=[])[source]

Throw out targets belonging to stations that are not in the stations list. Keeps list orders and returns new list!

Parameters
  • targets (list) – of pyrocko.gf.targets.Target

  • stations (list) – of pyrocko.model.Station

Returns

weeded_targets – of pyrocko.gf.targets.Target

Return type

list

The plotting Module

class plotting.PlotOptions(**kwargs)[source]

Undocumented.

post_llk

str, default: 'max'

Which model to plot on the specified plot; Default: “max”; Options: “max”, “min”, “mean”, “all”

plot_projection

str (pyrocko.guts.StringChoice), default: 'local'

Projection to use for plotting geodetic data; options: “latlon”, “local”, “utm”

utm_zone

int, optional, default: 36

Only relevant if plot_projection is “utm”

load_stage

int, default: -1

Which stage to select for plotting

figure_dir

str, default: 'figures'

Name of the output directory of plots

reference

dict of pyrocko.guts.Any objects, optional, default: {}

Reference point for example from a synthetic test.

outformat

str, default: 'pdf'

dpi

int, default: 300

force

bool, default: False

varnames

list of pyrocko.guts.Any objects, optional, default: []

Names of variables to plot

source_idxs

list of pyrocko.guts.Any objects, optional

Indexes to patches of slip distribution to draw marginals for

nensemble

int, default: 1

Number of draws from the PPD to display fuzzy results.

plotting.correlation_plot(mtrace, varnames=None, transform=<function <lambda>>, figsize=None, cmap=None, grid=200, point=None, point_style='.', point_color='white', point_size='8')[source]

Plot 2d marginals (with kernel density estimation) showing the correlations of the model parameters.

Parameters
  • mtrace (pymc3.base.MultiTrace) – MultiTrace instance containing the sampling results

  • varnames (list of variable names) – Variables to be plotted; if None all variables are plotted

  • transform (callable) – Function to transform data (defaults to identity)

  • figsize (figure size tuple) – If None, size is (12, num of variables * 2) inch

  • cmap (matplotlib colormap) –

  • grid (resolution of kernel density estimation) –

  • point (dict) – Dictionary of variable name / value to be overplotted as marker to the posteriors e.g. mean of posteriors, true values of a simulation

  • point_style (str) – style of marker according to matplotlib conventions

  • point_color (str or tuple of 3) – color according to matplotlib convention

  • point_size (str) – marker size according to matplotlib conventions

Returns

  • fig (figure object)

  • axs (subplot axis handles)

plotting.correlation_plot_hist(mtrace, varnames=None, transform=<function <lambda>>, figsize=None, hist_color='orange', cmap=None, grid=50, chains=None, ntickmarks=2, point=None, point_style='.', point_color='red', point_size=4, alpha=0.35, unify=True)[source]

Plot 2d marginals (with kernel density estimation) showing the correlations of the model parameters. The main diagonal shows the parameter histograms.

Parameters
  • mtrace (pymc3.base.MultiTrace) – MultiTrace instance containing the sampling results

  • varnames (list of variable names) – Variables to be plotted; if None all variables are plotted

  • transform (callable) – Function to transform data (defaults to identity)

  • figsize (figure size tuple) – If None, size is (12, num of variables * 2) inch

  • cmap (matplotlib colormap) –

  • hist_color (str or tuple of 3) – color according to matplotlib convention

  • grid (resolution of kernel density estimation) –

  • chains (int or list of ints) – chain indexes to select from the trace

  • ntickmarks (int) – number of ticks at the axis labels

  • point (dict) – Dictionary of variable name / value to be overplotted as marker to the posteriors e.g. mean of posteriors, true values of a simulation

  • point_style (str) – style of marker according to matplotlib conventions

  • point_color (str or tuple of 3) – color according to matplotlib convention

  • point_size (int) – marker size according to matplotlib conventions

  • unify (bool) – If True, axes with units that belong to one group, e.g. [km], will have common axis increments

Returns

  • fig (figure object)

  • axs (subplot axis handles)

plotting.histplot_op(ax, data, reference=None, alpha=0.35, color=None, cmap=None, bins=None, tstd=None, qlist=[0.01, 99.99], cbounds=None, kwargs={})[source]

Modified from pymc3. Additional color argument.

plotting.scene_fits(problem, stage, plot_options)[source]

Plot geodetic data, synthetics and residuals.

plotting.seismic_fits(problem, stage, plot_options)[source]

Modified from grond. Plot synthetic and data waveforms and the misfit for the selected posterior model.

plotting.traceplot(trace, varnames=None, transform=<function <lambda>>, figsize=None, lines={}, chains=None, combined=False, grid=False, varbins=None, nbins=40, color=None, source_idxs=None, alpha=0.35, priors=None, prior_alpha=1, prior_style='--', axs=None, posterior=None, fig=None, plot_style='kde', prior_bounds={}, unify=True, qlist=[0.1, 99.9], kwargs={})[source]

Plots posterior pdfs as histograms from multiple mtrace objects.

Modified from pymc3.

Parameters
  • trace (result of MCMC run) –

  • varnames (list of variable names) – Variables to be plotted; if None all variables are plotted

  • transform (callable) – Function to transform data (defaults to identity)

  • posterior (str) – To mark posterior value in distribution ‘max’, ‘min’, ‘mean’, ‘all’

  • figsize (figure size tuple) – If None, size is (12, num of variables * 2) inch

  • lines (dict) – Dictionary of variable name / value to be overplotted as vertical lines to the posteriors and horizontal lines on sample values e.g. mean of posteriors, true values of a simulation

  • chains (int or list of ints) – chain indexes to select from the trace

  • combined (bool) – Flag for combining multiple chains into a single chain. If False (default), chains will be plotted separately.

  • source_idxs (list) – array like, indexes to sources to plot marginals

  • grid (bool) – Flag for adding gridlines to the histogram. Defaults to False.

  • varbins (list of arrays) – List containing the binning arrays for the variables, if None they will be created.

  • nbins (int) – Number of bins for each histogram

  • color (tuple) – mpl color tuple

  • alpha (float) – Alpha value for plot line. Defaults to 0.35.

  • axs (axes) – Matplotlib axes. Defaults to None.

  • fig (figure) – Matplotlib figure. Defaults to None.

  • unify (bool) – If True, axes with units that belong to one group, e.g. [km], will have common axis increments

  • kwargs (dict) – for histplot op

  • qlist (list) – of quantiles to plot. Default: [0.1, 99.9]

Returns

ax

Return type

matplotlib axes
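A hedged plotting sketch, assuming a MultiTrace from a finished sampling run is at hand; mtrace and the variable names are placeholders:

    from matplotlib import pyplot as plt
    from beat.plotting import traceplot

    axs = traceplot(mtrace, varnames=['east_shift', 'depth'],
                    combined=True, posterior='max')
    plt.show()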

The inputf Module

inputf.load_SAR_data(datadir, names)[source]

Load SAR data from the given directory and filenames. Returns Diff_IFG objects.

inputf.load_and_blacklist_gnss(datadir, filename, blacklist, campaign=False, components=['north', 'east', 'up'])[source]

Load ascii GNSS data from GLOBK, apply blacklist and initialise targets.

Parameters
  • datadir (string) – of path to the directory

  • filename (string) – filename to load

  • blacklist (list) – of strings with station names to blacklist

  • campaign (boolean) – if True return gnss.GNSSCampaign otherwise list of heart.GNSSCompoundComponent

  • components (tuple) – of strings (‘north’, ‘east’, ‘up’) for displacement components to return

inputf.load_and_blacklist_stations(datadir, blacklist)[source]

Load stations from autokiwi output and apply blacklist

inputf.load_ascii_gnss_globk(filedir, filename, components=['east', 'north', 'up'])[source]

Load an ascii file with columns: station name, Lon, Lat, ve, vn, vu, sigma_ve, sigma_vn, sigma_vu; location in [decimal deg], measurement unit [mm/yr].

Returns

Return type

pyrocko.model.gnss.GNSSCampaign

inputf.load_data_traces(datadir, stations, load_channels=[], name_prefix=None, name_suffix=None, data_format='mseed', divider='-', convert=False, no_network=False)[source]

Load data traces for the given stations from datadir.

inputf.load_kite_scenes(datadir, names)[source]

Load SAR data from the kite format.

inputf.load_obspy_data(datadir)[source]

Load data from the directory through obspy and convert to pyrocko objects.

Parameters

datadir (string) – absolute path to the data directory

Returns

Return type

data_traces, stations

inputf.rotate_traces_and_stations(datatraces, stations, event)[source]

Rotate traces and stations into RTZ with respect to the event. Updates channels of stations in place!

Parameters
  • datatraces (list) – of pyrocko.trace.Trace

  • stations (list) – of pyrocko.model.Station

  • event (pyrocko.model.Event) –

Returns

Return type

list of pyrocko.trace.Trace, rotated to RTZ

inputf.setup_stations(lats, lons, names, networks, event, rotate=True)[source]

Setup station objects, based on station coordinates and reference event.

Parameters
  • lats (num.ndarray) – of station location latitude

  • lons (num.ndarray) – of station location longitude

  • names (list) – of strings of station names

  • networks (list) – of strings of network names for each station

  • event (pyrocko.model.Event) –

Returns

stations – of pyrocko.model.Station

Return type

list