RecoConf

The RecoConf package holds the configuration of the reconstruction. It is structured as follows.

  • python/RecoConf includes all Python modules which set up reconstruction data flows.

  • options includes options files to run different reconstruction configurations in Moore.

  • test/qmtest includes the definition of nightly tests.

The dataflow configuration follows a few simple principles.

  1. Reconstructed objects “flow” through the make_* functions. Tracks are one such example. Each configuration function produces new data by adding some new information, e.g. the VELO-UT tracking adds UT cluster information to VELO tracks, thereby extending them. DataHandle objects of reconstructed objects are passed as positional arguments, while all others are passed as keyword arguments with appropriate defaults. For example, make_velout_tracks

    • takes velo_tracks as a positional argument (no default)

    • takes make_ut_clusters (the function that returns the UT clusters) as a keyword argument with a default

    • returns the output tracks of the VeloUTTracking algorithm

    Functions that start from raw data (e.g. make_velo_tracks) take no positional arguments. Steps that combine multiple objects representing particles (e.g. combiners or the TrackBestTrackCreator) take multiple positional arguments as inputs.

  2. Conversions are configured “immediately” after producers. For example, make_upstream_tracks

    • follows the first principle and takes VELO tracks as input

    • uses an algorithm to do the VELO-UT tracking (by default make_velout_tracks, but it can be changed to anything with the same output type)

    • configures the possible conversions and returns all track objects of different types (currently in a dictionary); see the sketch after this list

  3. The functions above only define a single “logical” step (e.g. extending with UT information). The global data flow is configured in functions that are as “flat” as possible, where the logical steps are pieced together. These functions can return multiple outputs, e.g. make_hlt2_tracks returns all intermediate track containers plus the “Best” track container.
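
As an illustration of the first two principles, makers following these conventions could be sketched as below. This is a sketch only: the algorithm property names, the FromPrUpstreamTracksV1Tracks converter, the RecoConf.ut_clusters import and the dictionary keys are illustrative placeholders, not the actual RecoConf interface.

from PyConf import configurable
from PyConf.Algorithms import VeloUTTracking, FromPrUpstreamTracksV1Tracks  # names illustrative
# illustrative import: wherever the UT cluster maker is defined in RecoConf
from RecoConf.ut_clusters import make_ut_clusters

@configurable
def make_velout_tracks(velo_tracks, make_ut_clusters=make_ut_clusters):
    # velo_tracks is a DataHandle, hence a positional argument without a default;
    # the UT cluster maker is a keyword argument with a default and can be
    # swapped for any function with the same output type
    return VeloUTTracking(
        InputTracksName=velo_tracks,
        UTClustersName=make_ut_clusters()).OutputTracksName

@configurable
def make_upstream_tracks(velo_tracks, make_velout_tracks=make_velout_tracks):
    pr_tracks = make_velout_tracks(velo_tracks)
    # conversions are configured right after the producer
    # (converter algorithm and dictionary keys are illustrative)
    v1_tracks = FromPrUpstreamTracksV1Tracks(
        InputTracksLocation=pr_tracks).OutputTracksLocation
    return {"Pr": pr_tracks, "v1": v1_tracks}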

Writing your own reconstruction data flow

The reconstruction usually starts from raw data, which is decoded into clusters or hits. As an example, the SciFi decoding is shown:

from PyConf import configurable
from PyConf.application import default_raw_event
from PyConf.Algorithms import FTRawBankDecoder, PrStoreSciFiHits

@configurable
def make_ft_hits(make_raw_event=default_raw_event, disabled_layers=()):
    # decode the FT raw banks into clusters (a DataHandle)
    ft_clusters = FTRawBankDecoder(
        RawEventLocations=make_raw_event(["FTCluster"])).OutputLocation
    # store the clusters as SciFi hits, optionally masking some layers
    return PrStoreSciFiHits(
        HitsLocation=ft_clusters,
        LayerMasks=tuple(disabled_layers)).Output

The function default_raw_event takes care of providing the raw event for the given input file type; making it a default argument simplifies the configuration. If, in this example, the clusters were needed by a different algorithm, consider putting their creation into a dedicated function.
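
A minimal sketch of such a split, reusing the imports above (make_ft_clusters is an illustrative name, not an existing RecoConf function):

@configurable
def make_ft_clusters(make_raw_event=default_raw_event):
    # dedicated maker, so that other consumers can share the same clusters
    return FTRawBankDecoder(
        RawEventLocations=make_raw_event(["FTCluster"])).OutputLocation

@configurable
def make_ft_hits(make_clusters=make_ft_clusters, disabled_layers=()):
    return PrStoreSciFiHits(
        HitsLocation=make_clusters(),
        LayerMasks=tuple(disabled_layers)).Output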

The next step is to use the hits and reconstruct tracks from them:

from PyConf.Algorithms import PrHybridSeeding

@configurable
def make_PrHybridSeeding_tracks(make_hits=make_ft_hits, min_pt=1000.):
    # standalone SciFi seeding over the hits provided by make_hits
    scifi_tracks = PrHybridSeeding(FTHitsLocation=make_hits(), MinPt=min_pt).OutputName
    return scifi_tracks

To build Long tracks out of Velo and SciFi tracks, we can do the following:

from PyConf.Algorithms import PrMatchNN

@configurable
def make_long_tracks(velo_tracks, scifi_tracks):
    # match VELO and SciFi (seed) tracks into Long tracks
    match_tracks = PrMatchNN(
        VeloInput=velo_tracks,
        SeedInput=scifi_tracks,
    ).MatchOutput
    return match_tracks

def make_tracks():
    # make_velo_tracks is provided by the VELO tracking configuration elsewhere in RecoConf
    velo_tracks = make_velo_tracks()
    scifi_tracks = make_PrHybridSeeding_tracks()
    long_tracks = make_long_tracks(velo_tracks, scifi_tracks)
    return {"Velo" : velo_tracks,
            "Seed" : scifi_tracks,
            "Long" : long_tracks}

The function make_tracks returns a collection of different track types: all of them are used later, and returning them together guarantees consistency.
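
Because the makers are decorated with @configurable, their keyword-argument defaults can also be overridden from the outside, e.g. from an options file, using PyConf's bind mechanism (a sketch; the threshold value is arbitrary):

with make_PrHybridSeeding_tracks.bind(min_pt=500.):
    tracks = make_tracks()

This changes the seeding pT threshold everywhere make_PrHybridSeeding_tracks is called within make_tracks, without editing the reconstruction functions themselves.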