DataFrameExtractor

Warning: This section is somewhat technical and many users won’t need this functionality. It is also experimental, and the API may change in future versions. Proceed with caution.

The callables `picks_to_df`, `events_to_df <../datastructures/events_to_pandas.ipynb>`__, and `inventory_to_df <../datastructures/stations_to_pandas.ipynb>`__ are instances of DataFrameExtractor, which provides an extensible, customizable way to create callables that extract DataFrames from arbitrary objects.

To demonstrate, let’s create a new extractor that puts the arrival objects of the Crandall catalog into a dataframe. That table can then be joined with the picks table to do some (possibly) interesting things.

[1]:
import obspy

import obsplus

crandall = obsplus.load_dataset("crandall_test")
cat = crandall.event_client.get_events()

Start by initializing the extractor with a list of expected columns and their data types. This is optional, but it helps ensure the output dataframe has a consistent shape and dtype. The arrival documentation may be useful for understanding these fields. Rather than collecting all the data contained in the Arrival instances, only a few columns of interest will be created.

[2]:
from collections import OrderedDict

import obspy.core.event as ev

# declare datatypes (the key order also defines the required columns)
dtypes = OrderedDict(
    resource_id=str,
    pick_id=str,
    event_id=str,
    origin_id=str,
    phase=str,
    time_correction=float,
    distance=float,
    time_residual=float,
    time_weight=float,
)

# init the DataFrameExtractor
arrivals_to_df = obsplus.DataFrameExtractor(
    ev.Arrival, required_columns=list(dtypes), dtypes=dtypes
)

The next step is to define some “extractors”: callables that take an Arrival instance and return the desired data. An extractor can return:

  1. A dict of values where each key corresponds to a column name and each value is the row value of that column for the current object.

  2. Anything else, which is interpreted as the row value, and the column name is obtained from the function name.
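To see how those two return shapes map onto columns, here is a plain-pandas sketch (independent of obsplus, using dicts as hypothetical stand-ins for Arrival objects). A dict return supplies several named columns at once, while a bare value becomes a single column; obsplus appears to derive the column name “phase” from the function name `_get_phase`, which the sketch mimics by stripping the `_get_` prefix.

```python
import pandas as pd

# hypothetical stand-ins for Arrival objects (plain dicts, for illustration only)
rows = [{"phase": "P", "distance": 0.36}, {"phase": "S", "distance": 0.52}]

def _get_basic(obj):
    # dict return: each key becomes a column name
    return {"distance": obj["distance"]}

def _get_phase(obj):
    # bare return: the column name comes from the function name
    return obj["phase"]

records = []
for obj in rows:
    rec = dict(_get_basic(obj))
    # mimic obsplus: "_get_phase" yields the column "phase"
    rec[_get_phase.__name__.replace("_get_", "")] = _get_phase(obj)
    records.append(rec)

df = pd.DataFrame(records)  # columns: distance, phase
```

This is only a model of the behavior; the real DataFrameExtractor handles the bookkeeping (required columns, dtypes, extras) itself.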

[3]:
# an extractor which returns a dictionary
@arrivals_to_df.extractor
def _get_basic(arrival):
    out = dict(
        resource_id=str(arrival.resource_id),
        pick_id=str(arrival.pick_id),
        time_correction=arrival.time_correction,
        distance=arrival.distance,
        time_residual=arrival.time_residual,
        time_weight=arrival.time_weight,
    )
    return out


# an extractor which returns a single value
@arrivals_to_df.extractor
def _get_phase(arrival):
    return arrival.phase

Notice that there is, so far, no way to extract information from the parent Origin or Event objects, and the extractor doesn’t know how to find the arrivals in a Catalog object. Registering the Catalog type and injecting the event and origin ids into the arrival rows will accomplish both of these tasks.

[4]:
@arrivals_to_df.register(obspy.Catalog)
def _get_arrivals_from_catalogs(cat):
    arrivals = []  # a list of arrivals
    extras = {}  # dict of data to inject to arrival level
    for event in cat:
        for origin in event.origins:
            arrivals.extend(origin.arrivals)
            data = dict(event_id=event.resource_id, origin_id=origin.resource_id)
            # use arrival id to inject extra to each arrival row
            extras.update({id(x): data for x in origin.arrivals})
    return arrivals_to_df(arrivals, extras=extras)

The next step is to call the extractor on the catalog.

[5]:
df = arrivals_to_df(cat)
df.head()
[5]:
resource_id pick_id event_id origin_id phase time_correction distance time_residual time_weight
0 smi:local/f537bad8-80a4-4296-8e84-551b78a8c614 smi:local/21695672 smi:local/248925 smi:local/404444 P NaN 0.360 0.057 -1.0
1 smi:local/754c3d1a-ddf9-4769-8f0f-c5e1b39ed1ad smi:local/21695673 smi:local/248925 smi:local/404444 P NaN 0.392 0.322 -1.0
2 smi:local/d6fdbf30-7c1b-45a5-9530-b852ac1e2e80 smi:local/21695674 smi:local/248925 smi:local/404444 P NaN 0.520 0.086 -1.0
3 smi:local/21b3be94-a309-4343-a3d3-c0e48395ec4c smi:local/21695675 smi:local/248925 smi:local/404444 P NaN 0.619 0.203 -1.0
4 smi:local/a4fb84c6-32bb-40f1-9417-9d4b6c91f1f4 smi:local/21695676 smi:local/248925 smi:local/404444 S NaN 0.360 0.580 -1.0
[6]:
df.phase.value_counts()
[6]:
phase
pPn    238
P      224
Pb     129
Sb      87
Pg      79
S       66
Sg      53
Pn      53
Sn      22
pPb      3
Name: count, dtype: int64

If only the P phases are needed, the easiest option is to filter the dataframe. For demonstration, however, let’s modify the phase extractor so that any row whose phase is not P is skipped. This is done by raising the SkipRow exception, which is an attribute of the DataFrameExtractor.
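For comparison, the direct pandas filter (no extractor changes needed) would look like this minimal sketch, using a toy arrivals table in place of the real one:

```python
import pandas as pd

# toy arrivals table mimicking the extractor output above (illustrative data)
df = pd.DataFrame(
    {"phase": ["P", "S", "Pg", "p"], "distance": [0.36, 0.52, 0.61, 0.40]}
)

# keep only P phases (case-insensitive), the same effect as skipping rows
p_only = df[df["phase"].str.upper() == "P"]  # keeps "P" and "p"
```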

[7]:
@arrivals_to_df.extractor
def _get_phase(arrival):
    phase = arrival.phase
    if phase.upper() != "P":
        raise arrivals_to_df.SkipRow
    return phase
/home/runner/work/obsplus/obsplus/src/obsplus/structures/dfextractor.py:122: UserWarning: _get_phase is already a registered extractor, overwriting
  warnings.warn(msg)
[8]:
df = arrivals_to_df(cat)

Get a picks dataframe and perform a left join using the pick ids:

[9]:
# get picks and filter out non-P phases
picks = obsplus.picks_to_df(cat)
picks = picks[picks.phase_hint.str.upper() == "P"]
[10]:
df_merged = df.merge(picks, how="left", right_on="resource_id", left_on="pick_id")
[11]:
df_merged.head()
[11]:
resource_id_x pick_id event_id_x origin_id phase time_correction distance time_residual time_weight resource_id_y ... agency_id event_id_y network station location channel uncertainty lower_uncertainty upper_uncertainty confidence_level
0 smi:local/f537bad8-80a4-4296-8e84-551b78a8c614 smi:local/21695672 smi:local/248925 smi:local/404444 P NaN 0.360 0.057 -1.0 smi:local/21695672 ... smi:local/248925 TA P17A BHZ NaN NaN NaN NaN
1 smi:local/754c3d1a-ddf9-4769-8f0f-c5e1b39ed1ad smi:local/21695673 smi:local/248925 smi:local/404444 P NaN 0.392 0.322 -1.0 smi:local/21695673 ... smi:local/248925 TA P16A BHZ NaN NaN NaN NaN
2 smi:local/d6fdbf30-7c1b-45a5-9530-b852ac1e2e80 smi:local/21695674 smi:local/248925 smi:local/404444 P NaN 0.520 0.086 -1.0 smi:local/21695674 ... smi:local/248925 TA Q16A BHZ NaN NaN NaN NaN
3 smi:local/21b3be94-a309-4343-a3d3-c0e48395ec4c smi:local/21695675 smi:local/248925 smi:local/404444 P NaN 0.619 0.203 -1.0 smi:local/21695675 ... smi:local/248925 UU SRU BHZ NaN NaN NaN NaN
4 smi:local/344ff568-12b4-4e3d-af56-724c81d07568 smi:local/21695679 smi:local/248925 smi:local/404444 P NaN 0.763 0.031 -1.0 smi:local/21695679 ... smi:local/248925 TA P18A BHZ NaN NaN NaN NaN

5 rows × 34 columns

[12]:
df_merged.columns
[12]:
Index(['resource_id_x', 'pick_id', 'event_id_x', 'origin_id', 'phase',
       'time_correction', 'distance', 'time_residual', 'time_weight',
       'resource_id_y', 'time', 'seed_id', 'filter_id', 'method_id',
       'horizontal_slowness', 'backazimuth', 'onset', 'phase_hint', 'polarity',
       'evaluation_mode', 'event_time', 'evaluation_status', 'creation_time',
       'author', 'agency_id', 'event_id_y', 'network', 'station', 'location',
       'channel', 'uncertainty', 'lower_uncertainty', 'upper_uncertainty',
       'confidence_level'],
      dtype='object')

Finally, calculate how often the phase attribute of the arrival differs from the phase_hint of the pick, which could indicate a quality issue.

[13]:
# calculate fraction of phase_hints that match phase
(df_merged["phase"] == df_merged["phase_hint"]).sum() / len(df_merged)
[13]:
np.float64(1.0)
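A result of exactly 1.0 also implies every arrival found a matching pick. Joins like this can be sanity-checked with the `indicator` option of pandas’ merge, which flags rows that found no partner; a minimal sketch with toy tables (the ids and columns here are invented for illustration):

```python
import pandas as pd

# toy arrival and pick tables keyed the same way as above
arrivals = pd.DataFrame({"pick_id": ["a", "b", "c"], "phase": ["P", "P", "P"]})
picks = pd.DataFrame({"resource_id": ["a", "b"], "phase_hint": ["P", "P"]})

merged = arrivals.merge(
    picks, how="left", left_on="pick_id", right_on="resource_id", indicator=True
)
# rows marked "left_only" are arrivals with no matching pick
unmatched = merged[merged["_merge"] == "left_only"]  # arrival "c" here
```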