Grond can be run as a command line tool (recommended) or by calling its library functions from a Python script.

The command line tool offers several subcommands. To find out what subcommands are available, run

grond --help

To get more information about a subcommand and its options, run

grond <subcommand> --help

or see section Command line interface.

The basic workflow when using Grond is as follows:

  1. Set up a project folder containing input data and Green’s functions.

  2. Set up a configuration file for Grond.

  3. Check the setup with grond check.

  4. Run the optimisation with grond go.

  5. Create result plots and report with grond report.

  6. Export results with grond export.

Details on these steps are given in the following sections.

Project folder layout

To use Grond with your own dataset, we suggest the following folder structure.

Single files and data formats listed here are explained below. The folders runs and report are generated during and after the optimisation, respectively.


Use grond init <example> <project_folder> to create an example project following this layout! See Initializing an empty project for more information.

├── config
│   ├── laquila2009_joint.gronf
│   ├── ...
│   :
├── data
│   └── events  # several events could be set up here
│       ├── laquila2009
│       │   ├── event.txt
│       │   ├── insar
│       │   │   ├── dsc_insar.npz
│       │   │   ├── dsc_insar.yml
│       │   │   :
│       │   │
│       │   ├── waveforms
│       │   │   ├── raw    # contains Mini-SEED files
│       │   │   │   ├── trace_BK-CMB--BHE_2009-04-06_00-38-31.mseed
│       │   │   │   ├── trace_BK-CMB--BHN_2009-04-06_00-38-31.mseed
│       │   │   │   :
│       │   │   └── stations.xml
│       │   │
│       │   └── gnss
│       │       └── gnss.yml
│       :
├── gf_stores  # contains Green's functions
│   ├── Abruzzo_Ameri_nearfield # static near-field GF store
│   │   └── ...
│   ├── global_2s_25km  # dynamic far-field GF store
│   │   └── ...
│   :
├── runs  # created at runtime, contains individual optimisation results
│   └── ...
└── report
    └── ...

Input data (observations)

Grond can combine different observational input data in an earthquake source optimisation.

Seismic waveform data

Required input files are:

  • raw waveform data (Mini-SEED format or other formats supported by Pyrocko)

  • instrument response information (StationXML format)

Various tools exist to download raw waveforms and instrument response information from FDSN web services (here is a basic Python script example). Grond can use continuous data (recommended) as well as event-based cut-outs. If event-based data is used, make sure the time windows are long enough: generously enlarge the window before and after the signal to be analysed, adding at least five times the longest period to be analysed to both sides. Add more if pre-event noise should be analysed for data weighting.
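The window padding rule above can be sketched in a few lines of Python. The onset time, signal duration and filter band below are illustrative values, not part of any Grond configuration:

```python
# Sketch: choosing a safe cut-out window for event-based waveform data.
# Pad the analysed signal on both sides by a multiple of the longest
# period of interest (at least 5x, per the rule of thumb above).

def cutout_window(t_onset, t_signal_end, longest_period, noise_factor=5.0):
    """Return (t_min, t_max), padded by noise_factor times the longest
    period to be analysed, on both sides of the signal."""
    pad = noise_factor * longest_period
    return t_onset - pad, t_signal_end + pad

# Example: signal from t=100 s to t=160 s, analysis band down to 0.02 Hz
# (longest period 50 s) -> pad each side by 250 s.
t_min, t_max = cutout_window(100.0, 160.0, 50.0)
```

Increase `noise_factor` if pre-event noise windows are needed for data weighting.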

InSAR data

Grond uses Kite containers for surface deformation maps. Kite provides an interactive tool for inspection and transport of static displacement maps. It can be used for data noise estimations, easy quadtree data sub-sampling and calculation of data error variance-covariance matrices for proper data weighting.

Grond requires files like kite_scene.yml and kite_scene.npz which can be generated by Kite.

GNSS data

The required input is a simple YAML file containing GNSS station positions, displacement values and measurement uncertainties. The Pyrocko manual provides more information on GNSS data handling.

Green’s function stores

A Pyrocko Green’s function (GF) store is needed for forward modelling seismograms and surface displacements. Such a GF store holds transfer functions for many possible source-receiver configurations which can be looked up quickly.
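The lookup idea can be illustrated with a toy store indexed on a regular (source depth, distance) grid. This is a conceptual sketch only; Pyrocko's actual stores are multi-component, interpolated and far more elaborate:

```python
import bisect

# Conceptual sketch of a Green's function store: pre-computed responses
# indexed on a regular (source depth, distance) grid, looked up by the
# nearest grid node. Values stand in for real transfer functions.

class ToyGFStore:
    def __init__(self, depths, distances):
        self.depths = sorted(depths)        # km
        self.distances = sorted(distances)  # km
        # fake "transfer functions": one value per (depth, distance) node
        self.table = {(z, x): hash((z, x)) % 1000
                      for z in self.depths for x in self.distances}

    @staticmethod
    def _nearest(grid, value):
        i = bisect.bisect_left(grid, value)
        candidates = grid[max(i - 1, 0):i + 1]
        return min(candidates, key=lambda g: abs(g - value))

    def lookup(self, depth, distance):
        z = self._nearest(self.depths, depth)
        x = self._nearest(self.distances, distance)
        return self.table[(z, x)]

store = ToyGFStore(depths=[5, 10, 15], distances=[100, 200, 300])
value = store.lookup(depth=11.0, distance=240.0)  # snaps to node (10, 200)
```

Pre-computing on such a grid is what makes the forward modelling fast: each source-receiver configuration reduces to a table lookup instead of a wave-propagation computation.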

You can either download them from the online repository (online GF databases) or compute them with the fomosto module of Pyrocko. Depending on the application, different setups of GF stores or methods for calculation are suitable:

GFs for global teleseismic waveform data

For the point-source analysis of large global earthquakes, a global GF store with a sampling frequency of 0.5 Hz may suffice. Such a store can be downloaded with Fomosto, using

fomosto download kinherd global_2s

GFs for regional and local seismic waveform data

Regional analyses may require region-specific Green’s functions. Given a suitable 1D-layered velocity model, GF stores can be built with the Fomosto QSEIS backend.

GFs for near-field static displacement data (InSAR, GNSS)

Near-field static displacements require dense spatial sampling but little temporal sampling. With the Fomosto PSGRN/PSCMP backend, you can build your own GF store for any given local 1D-layered velocity model.

Initializing a Grond project

Grond ships with two options to quickstart a new project folder structure (see Project folder layout), including Grond’s YAML configuration files. For real data, use grond init <example> <project-folder> (section Initializing an empty project). For synthetic testing, grond scenario <project-folder> creates a fully synthetic dataset that can be customised and forward modelled (section Initializing a scenario project from forward modelling).

Initializing an empty project

Grond can handle many different kinds of optimisation problems so there can be no generic Grond configuration. However, to quickly create an empty project we offer initial configurations for a few standard problems.

Check your options with

grond init list

and then create your configuration with one of the Example projects.

grond init <example>

The configuration can be automatically embedded in a new project folder with

grond init <example> <project-folder>
cd <project-folder>


Existing project folders are overwritten using grond init <example> <project-folder> --force

Also, only certain parts of a configuration file can be initialised, e.g. for specific targets:

grond init target_waveform

Consult grond init --help for your options. The different targets (data and misfit setups for seismic waveform, InSAR and/or GNSS data) can be combined, and source model types can be exchanged.

Initializing a scenario project from forward modelling

The subcommand grond scenario will forward model observations for a modelled earthquake and create a ready-to-go Grond project. Different observations and source problems can be added via flags; see grond scenario --help for possible combinations and options.

The scenario can contain the following synthetic observations:

  • Seismic waveforms

  • InSAR surface displacements

  • GNSS surface displacements

grond scenario --targets=waveforms,insar <project-folder>

A map of the random scenario is plotted in scenario_map.pdf.


Grond is configured in .gronf files using the YAML markup language; see section Configuration.

The Example projects section provides commented configuration files for different earthquake source problems explaining many of the options:

See the Example projects for a detailed walk-through.


Before running the optimisation, you may want to check your dataset and configuration file, and debug them if needed, with the command:

grond check <configfile> <eventname>

Now, you may start the optimization for a given event using

grond go <configfile> <eventname>

During the optimisation, results are aggregated in an output directory, referred to as <rundir> in the configuration and documentation.

├── config
│   └── ...
├── data
│   └── ...
├── gf_stores
│   └── ...
├── runs  # contains individual optimisation results
│   ├── laquila2009_joint.grun
│   │   ├── ...  # some bookkeeping YAML files
│   │   ├── optimiser.yaml
│   │   ├── models
│   │   ├── misfits
│   │   └── harvest
│   │       ├── misfits
│   │       └── models
│   :
└── report
    └── ...

Detailed information on the misfit configuration and model-space sampling is given in the section Optimisers.

Results and visualisation

Finally, you may run

grond report <rundir>

to aggregate and visualise the results in a browsable summary, placed (by default) under the directory report.

├── config
│   └── ...
├── data
│   └── ...
├── gf_stores
│   └── ...
├── runs
│   └── ...
└── report  # contains all graphical presentations of the results in 'runs'
    ├── index.html  # open in a browser to surf through all 'runs'
    ├── ...  # more bookkeeping YAML files
    └── laquila2009  # event-wise organisation of different optimisation runs
        └── laquila2009_joint  # report information of an optimisation run
            ├── ...  # some bookkeeping YAML files
            └── plots  # individual plots sorted by type
                ├── contributions  # overview of the targets' misfit contributions
                │   └── ...
                ├── sequence  # parameter value development in the optimisation
                │   └── ...
                ├── fits_waveforms  # visual comparison of data and synthetics
                │   └── ...
                └── fits_satellite  # visual comparison of data and synthetics
                    └── ...

Please find detailed information on the report and automatic plots in the section Report.

The results can be exported in various ways by running the subcommand

grond export <what> <rundir>

See the command reference of grond export for more details.


Glossary

Grond is a rather large system. The following terminology may help to understand its configuration and the underlying concepts and implementation strategies.


Event

A seismic event with a unique name among all events available to a specific configuration of Grond. An event usually has a preliminary origin location and sometimes a reference mechanism attached to it.


Source

The earthquake dislocation source; this can be, for example, a moment tensor point source or a finite fault.


Receiver

The recipient side of the source’s excitation. This can be a modelled seismometer, a GNSS station or an InSAR satellite.


Problem

In the context of a Grond setup, the “problem” groups the choice of source model and the parameter bounds to be used in the optimisation. See Problems.


Target

In a typical Grond setup, many modelling targets contribute to the global misfit. An individual modelling target could be, for example, a single-component seismogram at a given station, an InSAR scene, or an amplitude ratio at one station. The target knows how to filter, taper and weight the data. It also carries the configuration for how synthetics are compared with the observations to obtain a misfit contribution (e.g. time-domain traces, amplitude spectra or cross-correlations; L1-norm or L2-norm; etc.).
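A single waveform target's misfit contribution can be sketched as a tapered, weighted L2 difference between observed and synthetic traces. This is a minimal illustration, not Grond's actual implementation (which additionally handles filtering, time shifts and normalisation); the taper, weight and sample values are made up:

```python
import math

# Sketch: one waveform target's misfit contribution as a weighted L2
# norm of the tapered data-synthetics difference.

def cosine_taper(n):
    """Simple full-length cosine (Hann) taper."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def target_misfit(observed, synthetic, weight=1.0):
    taper = cosine_taper(len(observed))
    return weight * math.sqrt(sum(
        (t * (o - s)) ** 2
        for t, o, s in zip(taper, observed, synthetic)))

obs = [0.0, 1.0, 0.5, -0.5, 0.0]   # observed trace samples (illustrative)
syn = [0.0, 0.8, 0.6, -0.4, 0.0]   # synthetic trace samples (illustrative)
m = target_misfit(obs, syn, weight=2.0)
```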

Config file

A YAML file, by convention ending with the suffix .gronf, containing a Grond configuration. The config file can be made to work with multiple events. It can be generated using grond init <example> after consulting grond init list. See Configuration file structure and format (YAML).


Rundir

The directory, by convention ending with the suffix .grun, where Grond stores intermediate and final results during an optimisation run. The rundir is created by Grond when running the grond go subcommand.


Dataset

The dataset is a section in the config file telling Grond where to look for input data (waveforms, InSAR scenes, GNSS data) and meta-data (station coordinates, instrument responses, blacklists, picks, event catalogues, etc.). See Dataset.


Misfit

The misfit is the value of the objective function obtained for a proposed source model. The global misfit may be aggregated from weighted contributions of multiple Grond targets (see below).
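One common way to aggregate weighted contributions is a weighted Lp mean; the sketch below uses p = 2. This is an illustration of the aggregation idea only, not Grond's exact normalisation, and the contribution and weight values are made up:

```python
# Sketch: aggregating weighted target misfit contributions into one
# global misfit value with a weighted Lp mean (here p = 2).

def global_misfit(contributions, weights, exponent=2):
    num = sum(w * m ** exponent for m, w in zip(contributions, weights))
    den = sum(weights)
    return (num / den) ** (1.0 / exponent)

# Three targets with illustrative misfits; the second counts double.
gm = global_misfit([0.2, 0.4, 0.1], [1.0, 2.0, 1.0])
```

A higher exponent makes the global misfit more sensitive to the worst-fitting target; weights steer how much each observation type influences the solution.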


Analyser

Before running the optimisation, station weights and other internal parameters may need to be adapted to the observed data and the configured Grond setup. Such pre-optimisation tasks are performed by one or more of Grond’s analysers.

Objective function

The objective function yields a scalar misfit value quantifying how well a source model fits the observed data; a smaller misfit value is better than a larger one. It is often called the misfit function.


Optimiser

This refers to the optimisation strategy: how the model space is sampled to find solutions in a given Grond setup.


Bootstrap

In statistics, bootstrapping is any test or metric that relies on random sampling with replacement. Bootstrapping allows assigning measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates, and thereby enables estimation of the sampling distribution of almost any statistic. It falls in the broader class of resampling methods.
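The resampling-with-replacement idea can be shown on a simple statistic such as the sample mean. The data values below are made up; Grond applies the same principle to misfit contributions to estimate the uncertainty of source parameters:

```python
import random

# Sketch: bootstrapping the mean of a small sample. Resample the data
# with replacement many times and inspect the spread of the re-computed
# statistic.

def bootstrap_means(data, n_resamples=1000, seed=42):
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]  # with replacement
        means.append(sum(resample) / len(resample))
    return means

data = [2.1, 2.5, 1.9, 2.4, 2.2, 2.0]
means = bootstrap_means(data)
spread = max(means) - min(means)  # rough uncertainty of the mean
```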

Green’s function store

Refers to Green’s function databases to be used for the forward modelling. In Grond these stores are addressed with directory paths and an individual store_id.


Engine

Forward modelling in Grond is done through the Pyrocko GF engine, which allows fast forward modelling for arbitrary source models based on pre-calculated Green’s function stores. The engine’s configuration may contain information about where to find these stores on disk.