Analysis Timeline
Upstream backgrounds study
Contributions to beam (with percentages at the start of the decay vessel):
- pions (71%) cause the majority of dangerous upstream backgrounds, since the signal's one-pion final state makes a stray beam pion easy to mistake for a signal pion
- protons (23%) cause little trouble, since their properties satisfy neither the upstream kaon requirement nor the downstream pion requirement of the signal selection
- kaons (6%) cause issues through interactions and pileup, since the signal selection explicitly requires an upstream kaon
Halo muons from beam-particle decays can pass through upstream elements and contribute via pileup and downstream mis-identification.
Type 1: Snakes and Mambas
The largest contribution to upstream backgrounds comes from events that are in time in RICH-KTAG but out of time in GTK-KTAG. This suggests that the downstream particle does originate from the beam kaon, but that the matched GTK track is from a pileup particle. This can occur when the kaon decays within the several-metre range of the GTK, most likely between GTK2 and GTK3, as earlier decays are more likely to be swept out of acceptance by the achromat magnets due to their lower-than-beam momentum.
The final dipole magnet of the achromat allows particles to exit in a narrow vertical band; however, the final upstream elements will either absorb (collimator and quadrupole yoke) or veto (CHANTI) these events in a square frame around the beam axis. Therefore the only early-decay pions that can pass into the decay vessel are those that pass very close to the beam axis and so avoid the CHANTI (standard Snakes), or those that sweep above and below the final elements (Mambas). Standard Snakes make up the majority of type 1 events before the PNN cuts are applied, but Mambas are more dangerous, as a higher percentage of Mamba events pass the PNN cuts.
Type 2: pion interactions
The second largest contribution to upstream backgrounds comes from events that are in time in GTK-KTAG but out of time in RICH-KTAG. This suggests that the downstream particle does not originate from the beam kaon. The downstream pion must therefore originate from an interacting, non-kaon beam particle that produces a low-momentum pion (most likely a beam pion). This interaction is most likely to occur in GTK3 due to its positioning with respect to the beam.
Type 3: kaon interactions
Theoretically, if pions interact with the upstream equipment, so should the kaons. This has not yet been observed: for such events both GTK-KTAG and RICH-KTAG would be in time, so they cannot be separated from genuine kaon decays by timing alone.
Halo muons
Both kaons and pions can decay upstream into muons with lower-than-beam momentum and a more dispersed distribution about the beam axis; these muons can pass through upstream beam equipment and cause backgrounds in unexpected locations.
The upstream background region in missing mass
In the negative missing-mass region, below the muon band, the only good decays should be positronic decays or semileptonic decays with a single neutral pion. Given the branching ratios of such decays, combined with the rejection of positrons, muons and neutral pions, no such events should remain. However, a region with a clear dependence on momentum is seen in the data; this should correspond to contamination by upstream events. This region therefore offers an upstream-background-enriched sample that can be used for testing.
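For reference, the missing-mass variable here is the standard squared missing mass under the π+ hypothesis (quoted as the usual convention, not from this analysis' code):

```latex
% Squared missing mass, with P_K the beam-kaon four-momentum (GTK track)
% and P_pi the downstream-track four-momentum under the pi+ hypothesis.
% For K+ -> mu+ nu the wrong (pion) mass hypothesis makes m^2_miss
% momentum dependent, producing the "muon band" that can dip below zero.
\begin{equation}
  m^{2}_{\mathrm{miss}} = \left(P_{K} - P_{\pi}\right)^{2}
\end{equation}
```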
Run the (updated) 2016 cuts-based analysis on the 2017 data [paused for processing]
Summary:
- Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check files are the correct versions)
- Run on one file to check functionality
- Ensure blinding is maintained
- Test over at least one full run
- Run on as much data as possible
- Try to run on any failed data files to complete the selection
- Discuss unblinding
- Unblind starting with control regions
- Write up results with the intention to publish eventually
1. Update the version of the user directory files and start testing [in progress]
A recent version of Giuseppe's code was copied over (after a full backup); we need to check that it all works as intended with 2017 data, that these are the most recent versions of the codes, and whether any new codes are missing.
- Critical codes to check: status of the framework revision you are using [updated but not checked for bug reports], status of the user directory background files [done by building a new user directory and starting from scratch], fullanalysis.conf and .settingsna62 files [in progress], HTCondor revision and file specifics [not started], any pre-analyzers required [just using the GTK code for now], TwoPhotonAnalysis.cc(.hh) [Giuseppe checking this], TrackAnalysis.cc(.hh) [done; timing and spatial plots look good], OneTrackEventAnalysis.cc(.hh) [next], KaonEventAnalysis.cc(.hh) [final step].
- Starting with a test run on just TrackAnalysis (responsible for track to sub-detector matching, but no cuts) and TwoPhotonAnalysis (responsible for the pi0 variables), with just GigaTrackerEvtReco as a pre-analyzer and the usual dependency libraries. This was done with a single reprocessed 2017 golden-run file; specifically run 8252, filtered for Pnn, bursts 13-15. Files may be reprocessed again after the release of v0.11.2 (is CVMFS a better choice for the final 2017 analysis?).
- Initial results suggest timing issues in the CHOD, as expected
- After correcting CHODAnalysis.cc with the path to the newest light correction files, this issue was resolved
- Next we will add the OneTrackEventAnalysis code and test that
- Later we will add KaonEventAnalysis and any new pre-analyzers
Using Kμ2 as a normalisation sample [done]
The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of observed events by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays as normalisation samples and compare the values, we can check whether we are properly accounting for all systematics, as both should give the same result. First we use the number of observed events of decay i:
N_i = f_K ⋅ t ⋅ BR(K→i) ⋅ A_total,i
where f_K is the frequency of kaons in the beam, t is the total time period of data taking and A_total,i is the total "acceptance", i.e. the fraction of decays in the detector's fiducial region that pass all processing and cuts (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring, pileup, matter interactions, etc.).
We can define the total acceptance as the product of three contributions:
A_total,i = A_geo,i ⋅ A_cuts,i ⋅ A_cor,i
where A_geo,i is the geometric acceptance (the fraction of events that can be reconstructed by the detector equipment), A_cuts,i is the acceptance due to the selection cuts (calculated from MC) and A_cor,i is the correction to the acceptance due to elements not modelled in the MC (such as the trigger efficiency).
From here we define A_i = A_cuts,i.
From this we can construct an equation for the signal BR:
BR(K+→π+νν) = BR(K+→μ+ν) ⋅ N_π+νν / (D ⋅ N_μ+ν) ⋅ A_μ+ν / A_π+νν
where the f_K and t terms cancel, along with the geometric acceptance and many of the correction efficiencies included in the total acceptance; D is the random downscaling factor of the control trigger (400), and BR(K+→μ+ν) can be taken from the PDG listings, as it has been thoroughly measured by previous experiments.
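To make the cancellation explicit, a short derivation in the notation above (a restatement of the yield formula, with the control-trigger downscaling D applied to the recorded μ+ν sample):

```latex
% Yield formula for each decay mode; the mu+ nu sample is recorded on the
% control trigger, downscaled by D, hence the extra factor on the left:
%   N_{pi nu nu}      = f_K t . BR(K+ -> pi+ nu nu) . A^{tot}_{pi nu nu}
%   D . N_{mu nu}     = f_K t . BR(K+ -> mu+ nu)    . A^{tot}_{mu nu}
% Dividing the two lines, f_K t cancels, as do A_geo and the shared
% correction factors inside A_total, leaving the cuts acceptances A_i:
\begin{align}
  \frac{N_{\pi\nu\nu}}{D\,N_{\mu\nu}}
    &= \frac{\mathrm{BR}(K^+\!\to\pi^+\nu\bar{\nu})\,A_{\pi\nu\nu}}
            {\mathrm{BR}(K^+\!\to\mu^+\nu)\,A_{\mu\nu}} \\
  \Rightarrow\quad
  \mathrm{BR}(K^+\!\to\pi^+\nu\bar{\nu})
    &= \mathrm{BR}(K^+\!\to\mu^+\nu)\,
       \frac{N_{\pi\nu\nu}}{D\,N_{\mu\nu}}\,
       \frac{A_{\mu\nu}}{A_{\pi\nu\nu}}
\end{align}
```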
Step 1: Generate a Kμ2 normalisation sample. [done]
- Start by generating a sample of Kμ2 data with Pnn-like cuts from one burst (current file) and organise some output histograms
- Write a new analyser and tree to group useful output
- Apply Pnn-like cuts without PID
- Add a muon missing-mass cut and a timing-based MUV3 cut to select muon events
- Confirm compatibility of the Pnn cuts, comment the cuts as muon or Pnn, and generally tidy up the code to finalise the cuts (a minimal sketch of the muon selection follows this list)
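A minimal sketch of the muon selection described in the bullets above (the cut windows, units and function names are illustrative placeholders, not the values used in the analysis):

```cpp
#include <cmath>

// Hypothetical helper: select Kmu2 candidates from events that already
// passed the Pnn-like cuts (without PID). Units: GeV^2 for the missing
// mass, ns for time differences. Window widths are placeholders only.
bool IsKmu2Candidate(double mMiss2Muon,   // m^2_miss under the muon hypothesis
                     double muv3TrackDt,  // MUV3 candidate time - track time
                     bool   hasMuv3Match) // geometric MUV3 association
{
    const double kMMiss2Window = 0.01; // GeV^2, placeholder half-width
    const double kTimeWindow   = 1.5;  // ns, placeholder half-width

    // Muon missing-mass cut: for K+ -> mu+ nu the missing mass under the
    // muon hypothesis peaks at zero (the neutrino), so cut around zero.
    if (std::fabs(mMiss2Muon) > kMMiss2Window) return false;

    // Timing-based MUV3 cut: require an in-time, matched MUV3 signal.
    if (!hasMuv3Match || std::fabs(muv3TrackDt) > kTimeWindow) return false;

    return true;
}
```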
Step 2: Calculate the acceptance "A_μ+ν" using all the muon MC with HTCondor. [done]
- Select muon events at MC truth level
- Take a basic acceptance cut of 105 (possibly 115) to 165 m
- Bin events that pass all cuts by momentum, as has been done in previous studies (15-20, 20-25, 25-30, 30-35 GeV)
- Calculate the binned acceptance by dividing the events that pass all Kmu2 cuts, recorded using the binning system, by the number that passed the truth geometric acceptance (such that the bins sum to the total acceptance); a sketch of this bookkeeping follows this list
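A minimal sketch of the binned-acceptance bookkeeping described above (bin edges from the list; the struct and names are illustrative, not the analysis code):

```cpp
#include <array>

// Momentum bins from previous studies: 15-20, 20-25, 25-30, 30-35 GeV.
constexpr std::array<double, 5> kBinEdges = {15., 20., 25., 30., 35.};

struct BinnedAcceptance {
    std::array<long, 4> nPassed{};   // events passing all Kmu2 cuts, per bin
    long nGeometric = 0;             // events passing truth geometric acceptance

    int FindBin(double pGeV) const {
        for (int i = 0; i < 4; ++i)
            if (pGeV >= kBinEdges[i] && pGeV < kBinEdges[i + 1]) return i;
        return -1; // outside the analysis momentum range
    }

    // Acceptance per bin; dividing every bin by the same geometric count
    // means the four bins sum to the total acceptance.
    double Acceptance(int bin) const {
        return nGeometric > 0 ? double(nPassed[bin]) / double(nGeometric) : 0.;
    }
};
```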
Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "N_μ+ν". [done]
- Run selection on run number 6431 (large, good quality run) to start with
- Run on Giuseppe's list of all good runs of 2016
Step 4: Start looking at the efficiencies that don't cancel in the acceptance ratio, "A_cor,i".
- Pion ID efficiency: the efficiency of pion ID in Pnn data, "ε_π,data(π+νν)", can be described by: [studied by others; determined negligible with respect to the MC approximation for the precision of the Pnn analysis]
ε_π,data(π+νν) = ε_π,MC(π+νν) ⋅ ε_π,data(π+π0) / ε_π,MC(π+π0)
where "ε_π,MC(π+νν)" is the efficiency of pion ID in Pnn MC (which is used to calculate the Pnn acceptance), "ε_π,data(π+π0)" is the efficiency of pion ID in π+π0 data and "ε_π,MC(π+π0)" is the efficiency of pion ID in π+π0 MC. Therefore, the acceptance of Pnn, "A_π+νν", can be corrected to:
A_π+νν ⋅ ε_π,data(π+π0) / ε_π,MC(π+π0)
- Differences in matter interactions between muons and pions (assumed small)
- Muon ID efficiency (handled similarly to the pion ID efficiency)
- Trigger efficiencies
- Anything we missed?
- Random veto: the additional loss of events due to both pileup and matter interactions affecting the multiplicity and photon rejection cuts. This is not an issue for this normalisation, as it cancels in the acceptance ratio (unlike for the π+π0 normalisation, whose selection doesn't include these cuts)
Step 5: Calculate the single event sensitivity (SES) and compare it to the π+π0 normalisation. [done]
- SES defined as the BR assuming a single event of signal with no background contributions
- This is simplified by introducing the calculated number of kaon decays from the normalisation sample information:
N_K = N_μ+ν ⋅ D / (A_μ+ν ⋅ BR(K+→μ+ν))
- Then add the trigger efficiency correction and sum over the 4 pion momentum bins to give:
SES =
1/
NK⋅∑j[Aπ+νν(pj)⋅εtriggerπ+νν(pj)]
- This differs from the π+π0 normalisation only in the lack of a random veto correction; the result was clearly consistent with the π+π0 SES, both within the estimated 10% error on the muon sample due to MC and within the fully calculated errors on the π+π0 sample (a sketch of the SES calculation follows this list)
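A minimal sketch of the N_K and SES formulas above as code (D = 400 is taken from the text; all other inputs are caller-supplied arguments, so no analysis numbers are assumed; names are illustrative):

```cpp
#include <array>

// Single event sensitivity from the Kmu2 normalisation, following
//   N_K = N_munu * D / (A_munu * BR(K+ -> mu+ nu))
//   SES = 1 / (N_K * sum_j A_pnn(p_j) * eps_trig(p_j))
// summed over the four pion momentum bins. Illustrative structure only.
double ComputeSES(double nMuNu,                         // observed Kmu2 events
                  double aMuNu,                         // Kmu2 cuts acceptance
                  double brMuNu,                        // PDG BR(K+ -> mu+ nu)
                  const std::array<double, 4>& aPnn,    // Pnn acceptance per bin
                  const std::array<double, 4>& epsTrig) // trigger eff. per bin
{
    const double kDownscale = 400.; // control trigger downscaling D

    // Number of kaon decays inferred from the normalisation sample.
    const double nKaons = nMuNu * kDownscale / (aMuNu * brMuNu);

    // Momentum-binned effective acceptance for the signal.
    double sum = 0.;
    for (int j = 0; j < 4; ++j) sum += aPnn[j] * epsTrig[j];

    return 1. / (nKaons * sum);
}
```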
The sequence of processes involved in NA62 Pnn (and similar) data analysis
This section is written so that we can later discuss the efficiencies of the NA62 analysis and which efficiencies do not cancel between the Pnn channel and the muon normalisation.
Summary:
- The events occur in the detector
- Detector information is processed in the triggers and stored on Castor
- Raw data is reconstructed by the framework then the reconstructed data is filtered to purpose and stored on EOS
- Filtered data is then processed by the user directory files
- KaonEventAnalysis.cc then processes the data in stages
1: The events occur in the detector
Beam particles (mostly pions and protons, with 6% kaons) exit the T10 target and pass through the collimators, magnetic fields and GTK, possibly interacting or decaying (largely muon and photon decays from the pions). There is negligible background from matter emissions (E too low) and from high-energy cosmic particles (low frequency below ground, and usually the wrong kinematics).
2: Detector information is processed in the triggers and stored on Castor
The L0TP processes the L0 trigger decision from the detector signals, then the PC farm processes L1 and auto-passes L2 (assuming all signals are present). The Mergers then buffer the events and write them to Castor.
3: Raw data is reconstructed by the framework then the reconstructed data is filtered to purpose and stored on EOS
The data is reconstructed using a version or revision of the framework that depends on when the data was taken; reconstruction efficiency is important at this stage.
The Pnn filtering code (or others) is then used to reduce the file sizes and separate the events according to the analysis group that will use them.
4: Filtered data is then processed by the user directory files
User directory pre-analyzer files: GigaTrackerEvtReco, TwoPhotonAnalysis.cc, TrackAnalysis.cc and OneTrackEventAnalysis.cc.
5: KaonEventAnalysis.cc then processes the data in stages
- The main function containing the base analysis
- Start of Job, Run then Burst; Initialise histograms, trees and output
- Process each event and call the relevant analysis functions
- Run the specific analysis function on each event that passed the previous stages
- Post-processing with: PostProcess; End of Burst, Run, Job; DrawPlot (a skeleton of this stage structure follows this list)
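As a rough skeleton of the stages listed above (method names follow the list; the real NA62Analysis base-class signatures may differ, so treat this as a sketch rather than the actual KaonEventAnalysis.cc):

```cpp
// Sketch of the stage structure described above, not the framework's code.
class KaonEventAnalysisSketch {
public:
    // Start of Job, Run, then Burst: initialise histograms, trees and output.
    void StartOfJob()   { /* book histograms and output trees */ }
    void StartOfRun()   { /* per-run setup, e.g. run-dependent corrections */ }
    void StartOfBurst() { /* per-burst setup, e.g. timing offsets */ }

    // Called once per event: apply the base selection, then hand surviving
    // events to the specific analysis function.
    void Process(int iEvent) {
        if (!PassesBaseSelection(iEvent)) return;
        RunSpecificAnalysis(iEvent);
    }

    // Post-processing: End of Burst, Run, Job; then draw the output plots.
    void EndOfBurst()  { /* per-burst summaries */ }
    void EndOfRun()    { /* per-run summaries */ }
    void EndOfJob()    { /* write trees and histograms */ }
    void PostProcess() { /* anything needed between stages */ }
    void DrawPlot()    { /* final plots */ }

private:
    bool PassesBaseSelection(int /*iEvent*/) { return true; } // placeholder
    void RunSpecificAnalysis(int /*iEvent*/) {}               // placeholder
};
```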
GTK3 interaction MC work [Cancelled]
- Altering the Geant4 setup to either force the hadronic interaction probability in GTK3 to 1, or to reject non-interacting events (the more likely solution)
- Generate on the order of 100M events
- Compare with data
- Make an estimation of this background for the PNN errors
Cancelled, as generating the required statistics isn't feasible.
This has been left for the experts to look into further, as considerable work would be required to make it possible.
Initial work completed to set up the framework and user directory codes:
Build flag issue with --old-specialtrigger, causing a dependency on the framework's UserMethods.cc file.
- Everything works as expected if you comment out the OLD_SPECIALTRIGGER block as described in the analysisbuild.txt readme file and replace it manually with whichever trigger you want to use (either works or complains that you're using the wrong one at runtime). However, you then have a dependency on a framework file.
- When using the flag, it seems that the "#define OLD_SPECIALTRIGGER" line in NA62AnalysisBuilder.py causes this definition to become stuck in the pre-processor, such that it remains defined if you try to build without the flag at a later stage (the guard pattern at issue is sketched below)
- Solution 1: run a CleanAll command with NA62AnalysisBuilder.py then re-source the env.sh file
- Solution 2: CleanAll then log out of your ssh session and log back in, source then build
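For context, a sketch of the kind of conditional-compilation guard the flag toggles (the macro name comes from the notes above; the surrounding code is illustrative, not the framework's actual source):

```cpp
// If OLD_SPECIALTRIGGER is still defined from an earlier build, the old
// branch keeps being compiled even when the flag is no longer passed,
// matching the "stuck definition" behaviour described above.
#ifdef OLD_SPECIALTRIGGER
    // handle special triggers in the old (pre-revision) format
#else
    // handle special triggers in the current format
#endif
```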
Generating a Pnn sample from the Kaon code given to me by Giuseppe.
- Build fails due to class conflict in the public directory files
- Solution: fixed manually by Giuseppe in the codes, largely by replacing the conflicting "Particle" class with "MyParticle"
- Further run fails due to a special trigger issue not dependent on the build flag
- Solution: a special trigger element of the code needs changing when swapping between MC and data files
- Now working on afs and should be able to set up on any other system (copy placed in: userDir2)
A test analyser, written to understand how to generate an analyser from scratch and plot variables from the data files, using the framework as a basis.
- This analyser is now set up such that it builds and begins to run with the current setup; it is designed to record the number of spectrometer candidates, but it fails at runtime due to an issue with a special trigger that is not specifically used in the code.
- Solution: the frameworks are not yet completely backwards compatible; I need to use the --old-specialtrigger flag after "build" to get this to work.