Line: 11 to 11
Summary:
Changed:
< < | |
> > | |
Line: 23 to 22
The L0TP forms the L0 trigger decision from the detector signals, then the PC farm processes L1 and auto-passes L2 (assuming all signals are present). The Mergers then buffer the events and write them to Castor.
Changed:
< < | 3: Raw data is reconstructed by the framework |
> > | 3: Raw data is reconstructed by the framework, then the reconstructed data is filtered to purpose and stored on EOS |
The data is reconstructed using a version or revision of the framework that depends on when the data was taken; reconstruction efficiency is important at this stage.
Deleted:
< < | 4: Reconstructed data is filtered to purpose then stored on EOS |
The Pnn filtering code (or others) is used to reduce the file sizes and to separate the events according to the analysis group that will use them.
Changed:
< < | 5: Filtered data is then processed by the user directory files |
> > | 4: Filtered data is then processed by the user directory files |
Changed:
< < | User directory pre-analysing files: GigaTrackerEvtReco, AccidentalAnalysis.cc, TwoPhotonAnalysis.cc, TrackAnalysis.cc and OneTrackEventAnalysis.cc. |
> > | User directory pre-analysing files: GigaTrackerEvtReco, TwoPhotonAnalysis.cc, TrackAnalysis.cc and OneTrackEventAnalysis.cc. |
Changed:
< < | 6: KaonEventAnalysis.cc then processes the data in stages |
> > | 5: KaonEventAnalysis.cc then processes the data in stages |
Changed:
< < | |
> > | |
Line: 46 to 42
The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of observed events by dividing it by that of one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays as normalisation samples and compare the values, we can check that we are properly accounting for all systematics, as both should give the same result. First we use the number of observed events of decay i:
Changed:
< < | Ni = fK ⋅ t ⋅ [BR(K→i)] ⋅ Ai ⋅ Ei |
> > | Ni = fK ⋅ t ⋅ [BR(K→i)] ⋅ Ai |
Changed:
< < | where fK is the frequency of kaons in the beam, t is the time period of data taking, Ai is the "acceptance" or number of decays in the detector's fiducial region, and Ei is the product of efficiencies ∏rεr, for the efficiencies r relating to the trigger, reconstruction, kaon ID, daughter ID, track matching and all other processes used in the analysis (this should cover all contributions, even the possibility of events being incorrectly tagged as the decay being measured). |
> > | where fK is the frequency of kaons in the beam, t is the time period of data taking, and Ai is the "acceptance" or number of decays in the detector's fiducial region (this should cover all contributions, even the possibility of events being incorrectly tagged as the decay being measured). |
From this we can construct an equation for the branching fraction:
Changed:
< < | BR(K+→π+νν) = BR(K+→μ+ν) ⋅ Nπ+νν/Nμ+ν ⋅ Aμ+ν/Aπ+νν ⋅ Eμ+ν/Eπ+νν, where the fK and t terms cancel, along with many of the εr efficiencies; BR(K+→μ+ν) can be taken from the PDG listings, as it has been thoroughly measured by previous experiments. |
> > | BR(K+→π+νν) = BR(K+→μ+ν) ⋅ Nπ+νν/Nμ+ν ⋅ Aμ+ν/Aπ+νν, where the fK and t terms cancel, along with many of the efficiencies included in the acceptance; BR(K+→μ+ν) can be taken from the PDG listings, as it has been thoroughly measured by previous experiments. |
Step 1: Generate a Kμ2 normalisation sample. [done]
Line: 64 to 60
Changed:
< < | |
> > | |
Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".
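A batch run like Step 3 could be driven by a submit description along these lines; this is only a sketch, and the executable name, list files and log paths are hypothetical, not the actual NA62 setup.

```
# Hypothetical HTCondor submit description; run_analysis.sh and lists/ are placeholders.
executable = run_analysis.sh
arguments  = $(filelist)
output     = logs/$(Cluster)_$(Process).out
error      = logs/$(Cluster)_$(Process).err
log        = logs/$(Cluster).log
# one job per input list of 2016 data files
queue filelist matching files lists/2016_*.list
```

Splitting the 2016 dataset into one file list per job keeps individual jobs short and makes failed jobs easy to resubmit.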
Changed:
< < | Step 4: Start looking at the efficiencies "εr". |
> > | |
Run the (updated) 2016 cuts-based analysis on the 2017 data. [future]