Analysing HLT2 output

After Writing an HLT2 line, it can be useful to analyse the candidates your line produces. This means telling Moore to produce an output file, and then running over that file with a subsequent application, typically DaVinci.

See the DaVinci documentation for details of that application.

The following tutorial is mainly needed if you rely on features available in DecayTreeTuple.

Enabling HLT2 output

Moore will write output if it’s given a name and type for the output file. This is done by setting properties on the PyConf.application.ApplicationOptions object passed to Moore.run_moore:

from Moore import options, run_moore

options.output_file = 'hlt2_example.dst'
options.output_type = 'ROOT'

# Assuming the `all_lines` function is defined elsewhere
run_moore(options, all_lines)

The file hlt2_example.dst will be produced in the directory you run Moore in.
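Moore can also write raw (MDF) output, which is read back in the same way; a minimal sketch, assuming the 'MDF' output type is available in your Moore version:

options.output_file = 'hlt2_example.mdf'
options.output_type = 'MDF'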

Some of the Moore configuration parameters must also be written out, so that they can be used to configure the reading in DaVinci. This can be done by setting the output_manifest_file option:

options.output_manifest_file = "my_hlt2.tck.json"

Note

The information saved to the my_hlt2.tck.json file can only be used to analyse the DST (or MDF) file that was produced alongside it.

If you try to use a JSON file from a different run of Moore when analysing the DST output file you may encounter strange and potentially unreported errors.

Output format

Output objects created by HLT2 lines are placed in the transient event store (TES) in predefined locations. A line's candidates, whose existence defines whether or not the line fired, are placed in the location:

/Event/HLT2/<HLT2 line name>/Particles

The HLT2 line name is defined by the string passed to the initialisation of Moore.lines.Hlt2Line:

def d0_to_kk():
    dzeros = make_dzeros()
    return Hlt2Line(name="Hlt2CharmD0ToKmKp", algs=[dzeros])

In this example, the line has the name Hlt2CharmD0ToKmKp, so the candidates (the objects created by the dzeros algorithm) will be stored in the output file at:

/Event/HLT2/Hlt2CharmD0ToKmKp/Particles

Extra outputs are stored in a similar hierarchy:

/Event/HLT2/<HLT2 line name>/<extra selection name>/Particles

Given this line maker:

def d0_to_kk():
    dzeros = make_dzeros()
    # Select soft pions that vertex well with the trigger candidate
    soft_pions = make_soft_pions(dzeros)
    return Hlt2Line(
        name="Hlt2CharmD0ToKmPip",
        algs=[dzeros],
        extra_outputs=[
            ("SoftPions", soft_pions),
        ]
    )

The first element of each 2-tuple in extra_outputs defines the extra selection’s name. So, the soft pions will be available in the output file at:

/Event/HLT2/Hlt2CharmD0ToKmKp/SoftPions/Particles

Currently, the locations of other objects in the event, such as tracks and primary vertices, are not guaranteed by Moore and may change in future versions.

Reading data with MooreAnalysis

Today, the HLT2 output format is similar to that produced by the Run 2 application Tesla. If you've analysed Turbo data, this configuration will look familiar:

from Configurables import (ApplicationMgr, DecayTreeTuple, LHCbApp, createODIN)
from DecayTreeTuple import Configuration  # noqa: magic import to augment DecayTreeTuple
from DecayTreeTuple import DecayTreeTupleTruthUtils
from PhysSelPython.Selections import (
    AutomaticData,
    CombineSelection,
    SelectionSequence,
)
from GaudiConf import reading
from PyConf.application import configured_ann_svc


def get_hlt2_unpackers(manifest_file, is_simulation=False):
    """Configures algorithms for reading HLT2 output.

    This is a temporary measure until support for Run 3 HLT2 output is added to
    an LHCb application.
    """

    unpack_raw_event = reading.unpack_rawevent(
        bank_types=['ODIN', 'DstData', 'HltDecReports'])

    reading_algs = [unpack_raw_event]
    mc_unpackers = []
    if is_simulation:
        mc_unpackers = reading.mc_unpackers()

    manifest = reading.load_manifest(manifest_file)
    print(manifest)
    locations = reading.make_locations(manifest, "/Event/HLT2")

    decoder = reading.decoder()

    unpackers = reading.unpackers(locations, manifest, decoder.OutputBuffers)

    reading_algs += [decoder]
    reading_algs += mc_unpackers
    reading_algs += unpackers
    reading_algs += [createODIN()]

    return reading_algs


# The output of the HLT2 line
kk_line = AutomaticData("/Event/HLT2/Hlt2CharmD0ToKmKp/Particles")
# Extra pions
soft_pions = AutomaticData(
    "/Event/HLT2/Hlt2CharmD0ToKmKp/SoftPions/Particles")
dstars = CombineSelection(
    "CombineD0pi",
    inputs=[kk_line, soft_pions],
    DecayDescriptors=[
        "D*(2010)+ -> D0 pi+",
        "D*(2010)- -> D0 pi-",
    ],
    DaughtersCuts={
        "pi+": "PT > 250 * MeV",
    },
    CombinationCut=("in_range(0, (AM - AM1 - AM2), 170) &"
                    "ADOCACHI2CUT(15, '')"),
    MotherCut=("(VFASPF(VCHI2PDOF) < 10) &"
            "in_range(0, (M - M1 - M2), 150)"))
selseq = SelectionSequence(dstars.name() + "Sequence", TopSelection=dstars)

relations = [
    "Relations/ChargedPP2MCP",
    "Relations/NeutralPP2MCP",
]
mc_tools = [
    'MCTupleToolKinematic',
    # ...plus any other MC tuple tools you'd like to use
]

dtt_kk = DecayTreeTuple(
    "TupleD0ToKK",
    Inputs=[kk_line.outputLocation()],
    Decay="D0 -> ^K- ^K+",
)
dtt_kk.addBranches({
    "D0": "D0 -> K- K+",
    "D0_h1": "D0 -> ^K- K+",
    "D0_h2": "D0 -> K- ^K+",
})
dtt_kk.ToolList += [
    "TupleToolMCBackgroundInfo",
    "TupleToolMCTruth",
]
DecayTreeTupleTruthUtils.makeTruth(
    dtt_kk, relations, mc_tools, stream="/Event/HLT2")

dtt_kk_dst = DecayTreeTuple(
    "TupleDstToD0pi_D0ToKK",
    Inputs=[selseq.outputLocation()],
    # HLT2 did not fit this candidate so we don't have P2PV relations for the
    # D*; to create them on demand DTT must know where the PVs live
    InputPrimaryVertices="/Event/HLT2/Rec/Vertex/Primary",
    Decay="Charm -> ^(D0 -> ^K- ^K+) ^X",
)
dtt_kk_dst.addBranches({
    "Dst": "Charm -> (D0 -> K- K+) X",
    "Dst_pi": "Charm -> (D0 -> K- K+) ^X",
    "D0": "Charm -> ^(D0 -> K- K+) X",
    "D0_h1": "Charm -> (D0 -> ^K- K+) X",
    "D0_h2": "Charm -> (D0 -> K- ^K+) X",
})
dtt_kk_dst.ToolList += [
    "TupleToolMCBackgroundInfo",
    "TupleToolMCTruth",
]
DecayTreeTupleTruthUtils.makeTruth(
    dtt_kk_dst, relations, mc_tools, stream="/Event/HLT2")

from Configurables import GaudiSequencer
user_algs = [
    GaudiSequencer(
        "Blah",
        Members=[selseq.sequence(), dtt_kk, dtt_kk_dst],
        IgnoreFilterPassed=True)
]

LHCbApp().DataType = "Upgrade"
LHCbApp().Simulation = True
LHCbApp().CondDBtag = "sim-20171127-vc-md100"
LHCbApp().DDDBtag = "dddb-20171126"

# The ntuple file this job will write; the manifest ('TCK') dumped by the
# Moore job is assumed to share the same stem (my_hlt2.tck.json, as set in
# the Moore options above)
LHCbApp().TupleFile = "my_hlt2.root"
manifest_file = LHCbApp().TupleFile.replace(".root", "") + ".tck.json"
print("manifest_file", manifest_file)

ApplicationMgr().TopAlg = get_hlt2_unpackers(
    manifest_file=manifest_file, is_simulation=LHCbApp().Simulation)
ApplicationMgr().TopAlg += user_algs
ApplicationMgr().ExtSvc += [configured_ann_svc()]

These options demonstrate using the output of extra selections. If your line doesn't produce these, the corresponding options will look a lot simpler: you will only need the dtt_kk algorithm, as sketched below.
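A minimal sketch of that simpler configuration, reusing the unpacker helper, manifest and service names from the example above:

# Tuple the line candidates directly; no intermediate combination is needed
dtt_kk = DecayTreeTuple(
    "TupleD0ToKK",
    Inputs=["/Event/HLT2/Hlt2CharmD0ToKmKp/Particles"],
    Decay="D0 -> ^K- ^K+")

ApplicationMgr().TopAlg = get_hlt2_unpackers(
    manifest_file=manifest_file, is_simulation=LHCbApp().Simulation)
ApplicationMgr().TopAlg += [dtt_kk]
ApplicationMgr().ExtSvc += [configured_ann_svc()]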

Additional options are needed to define the input data, the database tags, and the simulation flag. These typically look like this, but may differ for your specific use case:

from Configurables import LHCbApp
from GaudiConf import IOExtension

LHCbApp().Simulation = True
LHCbApp().CondDBtag = "sim-20171127-vc-md100"
LHCbApp().DDDBtag = "dddb-20171126"
inputFiles = ['./hlt2_example.dst']
IOExtension().inputFiles(inputFiles, clear=True)

Note

In the past the above code would have been run in DaVinci; however, DecayTreeTuple has been removed from DaVinci as of v60r0. You can instead run your options within the MooreAnalysis runtime environment using lb-run:

lb-run MooreAnalysis/VERSION gaudirun.py OPTIONS

Or using your local stack (see Developing Moore for details):

./MooreAnalysis/run gaudirun.py OPTIONS
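For example, if the ntupling options and the input/database options above are saved as ntuple_options.py and input_options.py (hypothetical file names), the job could be run as:

./MooreAnalysis/run gaudirun.py ntuple_options.py input_options.py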

Monte Carlo association

If you ran Moore over a DST-like input, you will have Monte Carlo relations tables available in the output. They are persisted in the two locations defined by the values of Moore.persistence.truth_matching.CHARGED_PP2MC_LOC and Moore.persistence.truth_matching.NEUTRAL_PP2MC_LOC.
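If you want to check the exact paths, a quick sketch, assuming the Moore python package is importable in your environment (as it is in MooreAnalysis):

from Moore.persistence import truth_matching

# Print the relation-table locations persisted by Moore
print(truth_matching.CHARGED_PP2MC_LOC, truth_matching.NEUTRAL_PP2MC_LOC)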

To access them, you will first need these lines in your options file:

from DecayTreeTuple import DecayTreeTupleTruthUtils


relations = [
    "Relations/ChargedPP2MCP",
    "Relations/NeutralPP2MCP",
]
mc_tools = [
    'MCTupleToolKinematic',
    # ...plus any other MC tuple tools you'd like to use
]

To include tuple tools that use truth information, you need to ensure the ToolList includes the TupleToolMCBackgroundInfo and TupleToolMCTruth tools, and then call a helper method:

dtt.ToolList += [
    "TupleToolMCBackgroundInfo",
    "TupleToolMCTruth",
]
DecayTreeTupleTruthUtils.makeTruth(dtt, relations, mc_tools, stream="/Event/HLT2")

Here, dtt is the DecayTreeTuple instance you want to add the truth tools to.

Analysing HLT1 decision reports after HLT2

If you run your simulation data through HLT1 and then through HLT2 you probably want to access both sets of decision reports. The HLT1 and HLT2 decision reports are (currently) “decoded” differently (the HLT1 decision reports must use the TCKANNSvc) so there are a couple of steps to this.

First run your simulation through the HLT1 step, taking inspiration from the Hlt/Hlt1Conf/tests/qmtest/persistency.qms/allen_mdf_write.qmt unit test, which runs:

./run gaudirun.py Hlt/Moore/tests/options/default_input_and_conds_hlt1.py Hlt/Hlt1Conf/tests/options/allen_hlt1_mdf_output.py Hlt/Hlt1Conf/options/allen_hlt1_pp_default.py

(The simulation files you are using should be .(x)digi files.)

You now need to both make and assign a TCK for HLT1 to use:

./run python Hlt/Hlt1Conf/tests/options/make_allen_tck.py

This uses the get_allen_hlt1_decision_ids method from Moore to collect the int->str decoding relations for the HLT1 lines. After running this you should see a file TCKData/config.cdb.

We now assign this to the TCK ID 0x11300000:

./run python ../LHCb/Hlt/HltServices/tests/options/assign_tck.py 0x11300000

The HLT1 decision reports can now be accessed using the following:

from Configurables import ConfigCDBAccessSvc, HltDecReportsDecoder

ConfigCDBAccessSvc().File = "TCKData/config.cdb"
HltDecReportsDecoder(
    "Hlt1",
    SourceID="Hlt1",
    OutputHltDecReportsLocation="/Event/Hlt1/DecReports")

The HLT2 decision reports will still be decoded using the tck.json manifest file passed to the job, so you will need:

HltDecReportsDecoder(
    "Hlt2",
    SourceID="Hlt2",
    OutputHltDecReportsLocation="/Event/Hlt2/DecReports")

Your HLT1 decision reports will then be located at /Event/Hlt1/DecReports and your HLT2 decision reports at /Event/Hlt2/DecReports.