Difference: ConnorsAnalysis-AnalysisTimeline (1 vs. 20)

Revision 20 2018-08-07 - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.

Analysis Timeline

Changed:
<
<

Run the (updated) 2016 cuts based analysis on the 2017 data [started]

>
>

Upstream backgrounds study

Contributions to beam (with percentages at the start of the decay vessel):

  1. pions (71%) cause the majority of dangerous upstream backgrounds, due to the one-pion final state of the signal
  2. protons (23%) cause few problems, since their properties match neither the upstream kaon requirement nor the downstream pion signal requirement
  3. kaons (6%) cause issues through interactions and pileup, combined with the signal requirement of an upstream kaon
Halo muons from beam-particle decays can pass through upstream elements and contribute via pileup and downstream mis-identification.

Type 1: Snakes and Mambas

The largest contribution to upstream backgrounds comes from events that are in time in RICH-KTAG but out of time in GTK-KTAG. This type of event suggests that the downstream particle does originate from the beam kaon, but the matched GTK track is from a pileup particle. This can occur when the kaon decays within the several-metre range of the GTK, most likely between GTK2 and GTK3, as earlier decays are more likely to be swept out of acceptance by the achromat magnets due to their lower-than-beam momentum.

The final dipole magnet of the achromat allows particles to exit in a narrow vertical band; however, the final upstream elements will either absorb (collimator and quadrupole yoke) or veto (CHANTI) these events in a square frame around the beam axis. Therefore the only early-decay pions that can pass into the decay vessel are those that pass very close to the beam axis to avoid the CHANTI (standard Snakes), or those that sweep above and below the final elements (Mambas). Standard Snakes are the majority of Type 1 events before PNN cuts are applied, but Mambas are more dangerous, as a higher percentage of Mamba events pass the PNN cuts.

Type 2: pion interactions

The second largest contribution to upstream backgrounds comes from events that are in time in GTK-KTAG but out of time in RICH-KTAG. This type of event suggests that the downstream particle does not originate from the beam kaon. Therefore, the downstream pion must originate from an interacting, non-kaon, beam particle that produces a low-momentum pion (most likely a beam pion). This interaction is most likely to occur in GTK3 due to its position with respect to the beam.

Type 3: kaon interactions

In principle, if pions are interacting with the upstream equipment, so should the kaons. This has not yet been observed, as both GTK-KTAG and RICH-KTAG would be in time for such events.
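The three background types above are distinguished purely by which timing coincidences are in or out of time. The steps can be sketched as follows; the 0.5 ns half-window and the event inputs are illustrative assumptions, not NA62 values:

```python
# Hypothetical sketch of the timing-based classification described above.
# The 0.5 ns half-window and the time differences are assumptions for
# illustration, not measured NA62 parameters.

IN_TIME_HALF_WINDOW_NS = 0.5  # assumed coincidence half-window

def in_time(dt_ns, half_window=IN_TIME_HALF_WINDOW_NS):
    """True if a time difference lies inside the coincidence window."""
    return abs(dt_ns) <= half_window

def classify_upstream(dt_rich_ktag_ns, dt_gtk_ktag_ns):
    """Classify one event by its RICH-KTAG and GTK-KTAG time differences."""
    rich_ok = in_time(dt_rich_ktag_ns)
    gtk_ok = in_time(dt_gtk_ktag_ns)
    if rich_ok and not gtk_ok:
        return "type1"  # snake/mamba: downstream particle from the kaon, pileup GTK track
    if gtk_ok and not rich_ok:
        return "type2"  # pion interaction: downstream pion not from the beam kaon
    if rich_ok and gtk_ok:
        return "type3"  # fully in time, as expected for kaon interactions
    return "out_of_time"
```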

Halo muons

Both kaons and pions can decay upstream into muons, which have lower-than-beam momentum, a more dispersed distribution around the beam axis, and the ability to pass through upstream beam equipment, causing backgrounds in unexpected locations.

The upstream background region in missing mass

In the negative missing-mass region, below the muon band, the only good decays should be positronic decays or semileptonic decays with a single neutral pion. Therefore, given the branching ratios of such decays, combined with the rejection of positrons, muons and neutral pions, no such events should remain. However, a region with a clear momentum dependence is seen in the data; this should correspond to contamination by upstream events. This region therefore offers an upstream-background-enriched sample that can be used for testing.

Run the (updated) 2016 cuts based analysis on the 2017 data [paused for processing]

  Summary:
  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check files are the correct versions)
Line: 19 to 48
 1. Update the version of the user directory files and start testing [in progress]

Copied over a recent version of Giuseppe's codes (after a full backup), we need to check that these all work as intended with 2017 data and check that these are the most recent versions of the codes (and check for any missing, new codes).

Changed:
<
<
  • Critical codes to check: status of the framework revision you are using [updated but not checked for bug reports], status of the user directory background files [done by building a new user directory and starting from scratch], fullanalysis.conf and .settingsna62 files [in progress], HTCondor revision and file specifics [not started], any pre-analyzers required [just using GTK code for now], !TwoPhotonAnalysis.cc(.hh) [Giuseppe checking this], TrackAnalysis.cc(.hh) [done, timing and spatial plots look good], !OneTrackEventAnalysis.cc(.hh) [next], !KaonEventAnalysis.cc(.hh) [final step].
>
>
  • Critical codes to check: status of the framework revision you are using [updated but not checked for bug reports], status of the user directory background files [done by building a new user directory and starting from scratch], fullanalysis.conf and .settingsna62 files [in progress], HTCondor revision and file specifics [not started], any pre-analyzers required [just using GTK code for now], TwoPhotonAnalysis.cc(.hh) [Giuseppe checking this], TrackAnalysis.cc(.hh) [done, timing and spatial plots look good], OneTrackEventAnalysis.cc(.hh) [next], KaonEventAnalysis.cc(.hh) [final step].
 
  • Starting with a test run on just TrackAnalysis (responsible for track to sub-detector matching, but no cuts) and TwoPhotonAnalysis (responsible for the pi0 variables), with just GigaTrackerEvtReco as a pre-analyzer and with the usual dependency libraries. This was done with a single, reprocessed, 2017, golden run file; specifically run 8252, filtered for Pnn, bursts 13-15. Files may be reprocessed again after the release of v0.11.2 (is CVMFS a better choice for final 2017 analysis?).
    • Initial results suggest timing issues in the CHOD as expected
    • After correcting CHODAnalysis.cc with the path to the newest light correction files this issue was resolved

Revision 19 2018-04-09 - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 19 to 19
 1. Update the version of the user directory files and start testing [in progress]

Copied over a recent version of Giuseppe's codes (after a full backup), we need to check that these all work as intended with 2017 data and check that these are the most recent versions of the codes (and check for any missing, new codes).

Changed:
<
<
  • Starting with a test run on just TrackAnalysis (responsible for track to sub-detector matching, but no cuts) and TwoPhotonAnalysis (responsible for the pi0 variables), with just GigaTrackerEvtReco as a pre-analyzer and with the usual dependency libraries. This was done with a single, reprocessed, 2017, golden run file; specifically run 8252, filtered for Pnn, bursts 13-15.
>
>
  • Critical codes to check: status of the framework revision you are using [updated but not checked for bug reports], status of the user directory background files [done by building a new user directory and starting from scratch], fullanalysis.conf and .settingsna62 files [in progress], HTCondor revision and file specifics [not started], any pre-analyzers required [just using GTK code for now], !TwoPhotonAnalysis.cc(.hh) [Giuseppe checking this], TrackAnalysis.cc(.hh) [done, timing and spatial plots look good], !OneTrackEventAnalysis.cc(.hh) [next], !KaonEventAnalysis.cc(.hh) [final step].
  • Starting with a test run on just TrackAnalysis (responsible for track to sub-detector matching, but no cuts) and TwoPhotonAnalysis (responsible for the pi0 variables), with just GigaTrackerEvtReco as a pre-analyzer and with the usual dependency libraries. This was done with a single, reprocessed, 2017, golden run file; specifically run 8252, filtered for Pnn, bursts 13-15. Files may be reprocessed again after the release of v0.11.2 (is CVMFS a better choice for final 2017 analysis?).
 
    • Initial results suggest timing issues in the CHOD as expected
Added:
>
>
    • After correcting CHODAnalysis.cc with the path to the newest light correction files this issue was resolved
 
  • Next we will add the OneTrackEventAnalysis code and test that
  • Later we will add KaonEventAnalysis and any new pre-analyzers

Using Kμ2 as a normalisation sample [done]

Revision 18 2018-04-04 - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 27 to 27
  The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check if we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:
Changed:
<
<
N_i = f_K ⋅ t ⋅ [BR(K→i)] ⋅ A_i^total
>
>
N_i = f_K ⋅ t ⋅ BR(K→i) ⋅ A_i^total
  where f_K is the frequency of kaons in the beam, t is the total time period of data taking and A_i^total is the total "acceptance", the fraction of decays in the detector's fiducial region that pass all processing and cuts (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring, pileup, matter interactions etc...).
Line: 41 to 41
  From this we can construct an equation for:
Changed:
<
<
BR(K+→π+νν) = BR(K+→μ+ν) ⋅ (N_π+νν / N_μ+ν) ⋅ (A_μ+ν / A_π+νν)
where the f_K and t terms cancel, along with the geometric acceptance and many of the correction efficiencies included in the total acceptance, and BR(K+→μ+ν) can be taken from the PDG listings, as it has been thoroughly measured by previous experiments.
>
>
BR(K+→π+νν) = BR(K+→μ+ν) ⋅ (N_π+νν / (D ⋅ N_μ+ν)) ⋅ (A_μ+ν / A_π+νν)
where the f_K and t terms cancel, along with the geometric acceptance and many of the correction efficiencies included in the total acceptance, D is the control-trigger random downscaling factor (400), and BR(K+→μ+ν) can be taken from the PDG listings, as it has been thoroughly measured by previous experiments.
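The extraction formula above can be sketched numerically. BR(K+→μ+ν) is the PDG value and D = 400 comes from the text; every other input below is a placeholder to be measured:

```python
# Sketch of the Kmu2-normalised BR extraction described above.
# Event counts and acceptances passed in are placeholders.

BR_KMU2 = 0.6356  # BR(K+ -> mu+ nu), PDG value
D = 400           # control-trigger random downscaling factor (from the text)

def br_pnn(n_pnn, n_mu2, acc_mu2, acc_pnn):
    """BR(K+ -> pi+ nu nu) = BR(K+ -> mu+ nu) * N_pnn/(D*N_mu2) * A_mu2/A_pnn."""
    return BR_KMU2 * n_pnn / (D * n_mu2) * acc_mu2 / acc_pnn
```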
  Step 1: Generate a Kμ2 normalisation sample. [done]
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms
Line: 71 to 71
 
  • Random veto: the additional loss of events due to pileup and matter interactions affecting the multiplicity and photon-rejection cuts; this is not an issue for this normalisation, as it cancels in the acceptance ratio (unlike the π+π0 normalisation, as it doesn't include these cuts)
Step 5: Calculate the single event sensitivity (SES) and compare it to the π+π0 normalisation. [done]
  • SES defined as the BR assuming a single event of signal with no background contributions
Added:
>
>
  • This is simplified by introducing the calculated number of kaon decays from the normalisation-sample information:
N_K = (N_μ+ν ⋅ D) / (A_μ+ν ⋅ BR(K+→μ+ν))
  • Then add the trigger-efficiency correction and sum over the 4 pion momentum bins to give:
SES = 1 / (N_K ⋅ ∑_j [A_π+νν(p_j) ⋅ ε^trigger_π+νν(p_j)])
  • This differs from the π+π0 normalisation only in the lack of a random-veto correction; the result was clearly consistent with the π+π0 SES, within an estimated 10% muon-sample error due to MC and within the fully calculated errors on the π+π0 sample
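The two steps above (N_K from the Kμ2 sample, then the binned SES sum) can be sketched as follows; only BR(K+→μ+ν) (PDG) and D = 400 are taken from the text, and all other inputs are placeholders:

```python
# Sketch of the SES calculation above: first the number of kaon decays from
# the Kmu2 normalisation, then the sum over the four pion-momentum bins.

BR_KMU2 = 0.6356  # BR(K+ -> mu+ nu), PDG value
D = 400           # control-trigger random downscaling factor (from the text)

def n_kaon_decays(n_mu2, acc_mu2):
    """N_K = N_mu2 * D / (A_mu2 * BR(K+ -> mu+ nu))."""
    return n_mu2 * D / (acc_mu2 * BR_KMU2)

def single_event_sensitivity(n_k, acc_pnn_bins, trig_eff_bins):
    """SES = 1 / (N_K * sum_j [A_pnn(p_j) * eps_trigger(p_j)])."""
    weighted = sum(a * e for a, e in zip(acc_pnn_bins, trig_eff_bins))
    return 1.0 / (n_k * weighted)
```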
 

The sequence of processes involved in NA62 Pnn (and similar) data analysis

This section is written to later discuss the efficiencies of the NA62 analysis and which efficiencies do not cancel between the Pnn channel and the muon normalisation.

Revision 17 2018-04-03 - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 27 to 27
  The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check if we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:
Changed:
<
<
N_i = f_K ⋅ t ⋅ [BR(K→i)] ⋅ A_i
>
>
N_i = f_K ⋅ t ⋅ [BR(K→i)] ⋅ A_i^total
 
Changed:
<
<
where f_K is the frequency of kaons in the beam, t is the time period of data taking and A_i is the total "acceptance" or number of decays in the detector's fiducial region (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).
>
>
where f_K is the frequency of kaons in the beam, t is the total time period of data taking and A_i^total is the total "acceptance", the fraction of decays in the detector's fiducial region that pass all processing and cuts (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring, pileup, matter interactions etc...).

We can define the total acceptance as the product of three contributions:

A_i^total = A_i^geo ⋅ A_i^cuts ⋅ A_i^cor

where A_i^geo is the geometric acceptance (the fraction of events that can be reconstructed by the detector equipment), A_i^cuts is the acceptance due to the selection cuts (calculated from MC) and A_i^cor is the correction to the acceptance due to elements not modelled in the MC (such as the trigger efficiency).

From here we define A_i = A_i^cuts

  From this we can construct an equation for:
Changed:
<
<
BR(K+→π+νν) = BR(K+→μ+ν) ⋅ (N_π+νν / N_μ+ν) ⋅ (A_μ+ν / A_π+νν)
where the f_K and t terms cancel, along with many of the efficiencies included in the acceptance, and BR(K+→μ+ν) can be taken from the PDG listings, as it has been thoroughly measured by previous experiments.
>
>
BR(K+→π+νν) = BR(K+→μ+ν) ⋅ (N_π+νν / N_μ+ν) ⋅ (A_μ+ν / A_π+νν)
where the f_K and t terms cancel, along with the geometric acceptance and many of the correction efficiencies included in the total acceptance, and BR(K+→μ+ν) can be taken from the PDG listings, as it has been thoroughly measured by previous experiments.
  Step 1: Generate a Kμ2 normalisation sample. [done]
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms
Line: 49 to 57
 Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν". [done]
  • Run selection on run number 6431 (large, good quality run) to start with
  • Run on Giuseppe's list of all good runs of 2016
Changed:
<
<
Step 4: Start looking at the efficiencies that don't cancel in the acceptance ratio "ε_r".
  • Pion ID efficiency: where the efficiency of pion ID in Pnn data "ε_π^data(π+νν)" can be described by: [being studied by others]
>
>
Step 4: Start looking at the efficiencies that don't cancel in the acceptance ratio "A_i^cor".
  • Pion ID efficiency: where the efficiency of pion ID in Pnn data "ε_π^data(π+νν)" can be described by: [studied by others, determined negligible with respect to the MC approximation for the precision of the Pnn analysis]
 ε_π^data(π+νν) = ε_π^MC(π+νν) ⋅ ε_π^data(π+π0) / ε_π^MC(π+π0)
Changed:
<
<
where "ε_π^MC(π+νν)" is the efficiency of pion ID in Pnn MC (which is used to calculate the Pnn acceptance), "ε_π^data(π+π0)" is the efficiency of pion ID in π+π0 data and "ε_π^MC(π+π0)" is the efficiency of pion ID in π+π0 MC. Therefore, the acceptance of Pnn "A_π+νν" must be corrected to:
>
>
where "ε_π^MC(π+νν)" is the efficiency of pion ID in Pnn MC (which is used to calculate the Pnn acceptance), "ε_π^data(π+π0)" is the efficiency of pion ID in π+π0 data and "ε_π^MC(π+π0)" is the efficiency of pion ID in π+π0 MC. Therefore, the acceptance of Pnn "A_π+νν" can be corrected to:
  A_π+νν ⋅ ε_π^data(π+π0) / ε_π^MC(π+π0)
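The data/MC pion-ID correction above amounts to a single multiplicative factor on the Pnn acceptance; a minimal sketch, with all numerical inputs as placeholders:

```python
# Sketch of the data/MC pion-ID correction described above.
# All arguments are placeholder values, not measured efficiencies.

def corrected_pnn_acceptance(acc_pnn_mc, eff_pipi0_data, eff_pipi0_mc):
    """A_pnn -> A_pnn * eps_data(pi+pi0) / eps_MC(pi+pi0)."""
    return acc_pnn_mc * eff_pipi0_data / eff_pipi0_mc
```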
Changed:
<
<
  • Matter interaction differences of muon and pion.
  • Muon ID efficiency [should be done from the MC run]
>
>
  • Matter interaction differences of muon and pion (assumed small).
  • Muon ID efficiency (similarly to pion ID efficiency)
 
  • Trigger efficiencies
  • Anything we missed?
Added:
>
>
  • Random veto: the additional loss of events due to pileup and matter interactions affecting the multiplicity and photon-rejection cuts; this is not an issue for this normalisation, as it cancels in the acceptance ratio (unlike the π+π0 normalisation, as it doesn't include these cuts)
Step 5: Calculate the single event sensitivity (SES) and compare it to the π+π0 normalisation. [done]
  • SES defined as the BR assuming a single event of signal with no background contributions
 

The sequence of processes involved in NA62 Pnn (and similar) data analysis

This section is written to later discuss the efficiencies of the NA62 analysis and which efficiencies do not cancel between the Pnn channel and the muon normalisation.

Revision 16 2018-04-03 - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 7 to 7
 

Run the (updated) 2016 cuts based analysis on the 2017 data [started]

Summary:

Changed:
<
<
  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check you are using the correct framework revision)
>
>
  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check files are the correct versions)
 
  1. Run on one file to check functionality
  2. Ensure blinding is maintained
  3. Test over at least one full run
Line: 29 to 29
  N_i = f_K ⋅ t ⋅ [BR(K→i)] ⋅ A_i
Changed:
<
<
where f_K is the frequency of kaons in the beam, t is the time period of data taking and A_i is the "acceptance" or number of decays in the detector's fiducial region (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).
>
>
where f_K is the frequency of kaons in the beam, t is the time period of data taking and A_i is the total "acceptance" or number of decays in the detector's fiducial region (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).
  From this we can construct an equation for:

Revision 15 2018-03-22 - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 10 to 10
 
  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check you are using the correct framework revision)
  2. Run on one file to check functionality
  3. Ensure blinding is maintained
Added:
>
>
  1. Test over at least one full run
 
  1. Run on as much data as possible
  2. Try to run on any failed data files to complete the selection
  3. Discuss unblinding
  4. Unblind starting with control regions
  5. Write up results with the intention to publish eventually
Changed:
<
<
1. Updated the version of the user directory files and started testing [in progress]
>
>
1. Update the version of the user directory files and start testing [in progress]
  Copied over a recent version of Giuseppe's codes (after a full backup), we need to check that these all work as intended with 2017 data and check that these are the most recent versions of the codes (and check for any missing, new codes).
Changed:
<
<
  • Starting with a test run on just TrackAnalysis (responsible for track to sub-detector matching, but no cuts) and TwoPhotonAnalysis (responsible for the pi0 variables), with just GigaTrackerEvtReco as a pre-analyzer and with the usual dependency libraries. This was done with a single, reprocessed, 2017, golden run file; specifically run 8252, filtered for Pnn, bursts 13-15.
>
>
  • Starting with a test run on just TrackAnalysis (responsible for track to sub-detector matching, but no cuts) and TwoPhotonAnalysis (responsible for the pi0 variables), with just GigaTrackerEvtReco as a pre-analyzer and with the usual dependency libraries. This was done with a single, reprocessed, 2017, golden run file; specifically run 8252, filtered for Pnn, bursts 13-15.
    • Initial results suggest timing issues in the CHOD as expected
  • Next we will add the OneTrackEventAnalysis code and test that
  • Later we will add KaonEventAnalysis and any new pre-analyzers
 

Using Kμ2 as a normalisation sample [done]

The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check if we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:

Line: 43 to 46
 
  • Take basic acceptance cut of 105 (possibly 115) to 165m
  • Bin events that pass all cuts by momentum as has been done in previous studies (15-20, 20-25, 25-30, 30-35 GeV)
  • Calculate the binned acceptance by dividing the number of events that pass all Kmu2 cuts, recorded using the binning system, by the number that passed the truth-level geometric acceptance (such that the bins sum to the total acceptance)
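The binning and division above can be sketched as follows; the bin edges follow the text (15-35 GeV in 5 GeV bins), while the event momenta and truth count are placeholder inputs:

```python
# Sketch of the momentum-binned acceptance described above.

BIN_EDGES_GEV = [15, 20, 25, 30, 35]  # bin edges quoted in the text

def binned_acceptance(passing_momenta_gev, n_truth_in_geo):
    """Per-bin acceptance: events passing all Kmu2 cuts in each momentum bin,
    divided by all truth-level events inside the geometric acceptance,
    so that the bins sum to the total acceptance."""
    counts = [0] * (len(BIN_EDGES_GEV) - 1)
    for p in passing_momenta_gev:
        for j in range(len(counts)):
            if BIN_EDGES_GEV[j] <= p < BIN_EDGES_GEV[j + 1]:
                counts[j] += 1
                break
    return [c / n_truth_in_geo for c in counts]
```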
Changed:
<
<
Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν". [almost finished]
  • Run selection on run number 6431 (large, good quality run) to start with [done]
  • Run on Giuseppe's list of all good runs of 2016 [running]
>
>
Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν". [done]
  • Run selection on run number 6431 (large, good quality run) to start with
  • Run on Giuseppe's list of all good runs of 2016
 Step 4: Start looking at the efficiencies that don't cancel in the acceptance ratio "ε_r".
  • Pion ID efficiency: where the efficiency of pion ID in Pnn data "ε_π^data(π+νν)" can be described by: [being studied by others]
ε_π^data(π+νν) = ε_π^MC(π+νν) ⋅ ε_π^data(π+π0) / ε_π^MC(π+π0)

Revision 14 2018-03-22 - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.

Analysis Timeline

Changed:
<
<

The sequence of processes involved in NA62 data analysis

This section is written to later discuss the efficiencies of the NA62 analysis and which efficiencies do not cancel between the Pnn channel and the muon normalisation.

>
>

Run the (updated) 2016 cuts based analysis on the 2017 data [started]

  Summary:
Changed:
<
<
  1. The events occur in the detector
  2. Detector information is processed in the triggers and stored on Castor
  3. Raw data is reconstructed by the framework then the reconstructed data is filtered to purpose and stored on EOS
  4. Filtered data is then processed by the user directory files
  5. KaonEventAnalysis.cc then processes the data in stages
1: The events occur in the detector

Beam particles (mostly pions and protons, with 6% kaons) exit the T10 target and pass through the collimators, magnetic fields and GTK, possibly interacting or decaying (largely muon and photon decays from the pions). Negligible background from matter emissions (E too low) and high-energy cosmic particles (low frequency below ground and usually wrong kinematics).

2: Detector information is processed in the triggers and stored on Castor

The L0TP processes the L0 trigger decision from the detector signals, then the PC farm processes L1 and auto-passes L2 (assuming all signals present). The Mergers then buffer the events and write to Castor.

3: Raw data is reconstructed by the framework then the reconstructed data is filtered to purpose and stored on EOS

The data is reconstructed using a version or revision of the framework that is dependent on the time the data was taken, reco efficiency is important at this stage.

The Pnn filtering code or others are used to reduce the file sizes and separate the events based on the analysis group that will use them.

4: Filtered data is then processed by the user directory files

>
>
  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check you are using the correct framework revision)
  2. Run on one file to check functionality
  3. Ensure blinding is maintained
  4. Run on as much data as possible
  5. Try to run on any failed data files to complete the selection
  6. Discuss unblinding
  7. Unblind starting with control regions
  8. Write up results with the intention to publish eventually
1. Updated the version of the user directory files and started testing [in progress]
 
Changed:
<
<
User directory pre-analysing files: GigaTrackerEvtReco, TwoPhotonAnalysis.cc, TrackAnalysis.cc and OneTrackEventAnalysis.cc.
>
>
Copied over a recent version of Giuseppe's codes (after a full backup), we need to check that these all work as intended with 2017 data and check that these are the most recent versions of the codes (and check for any missing, new codes).
  • Starting with a test run on just TrackAnalysis (responsible for track to sub-detector matching, but no cuts) and TwoPhotonAnalysis (responsible for the pi0 variables), with just GigaTrackerEvtReco as a pre-analyzer and with the usual dependency libraries. This was done with a single, reprocessed, 2017, golden run file; specifically run 8252, filtered for Pnn, bursts 13-15.
 
Changed:
<
<
5: KaonEventAnalysis.cc then processes the data in stages
  1. The main function containing the base analysis
  2. Start of Job, Run then Burst; Initialise histograms, trees and output
  3. Process each event and call the relevant analysis functions
  4. Run the specific analysis function on each event that passed the previous stages
  5. Post-processing with: PostProcess; End of Burst, Run, Job; DrawPlot

Using Kμ2 as a normalisation sample [current work]

>
>

Using Kμ2 as a normalisation sample [done]

  The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check if we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:
Line: 75 to 57
 
  • Muon ID efficiency [should be done from the MC run]
  • Trigger efficiencies
  • Anything we missed?
Changed:
<
<

Run the (updated) 2016 cuts based analysis on the 2017 data [future]

  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check you are using the correct framework revision)
  2. Run on one file to check functionality
  3. Ensure blinding is maintained
  4. Run on as much data as possible
  5. Try to run on any failed data files to complete the selection
  6. Discuss unblinding
  7. Unblind starting with control regions
  8. Write up results with the intention to publish eventually

GTK3 interaction MC work [future]

>
>

The sequence of processes involved in NA62 Pnn (and similar) data analysis

This section is written to later discuss the efficiencies of the NA62 analysis and which efficiencies do not cancel between the Pnn channel and the muon normalisation.

Summary:

  1. The events occur in the detector
  2. Detector information is processed in the triggers and stored on Castor
  3. Raw data is reconstructed by the framework then the reconstructed data is filtered to purpose and stored on EOS
  4. Filtered data is then processed by the user directory files
  5. KaonEventAnalysis.cc then processes the data in stages
1: The events occur in the detector

Beam particles (mostly pions and protons, with 6% kaons) exit the T10 target and pass through the collimators, magnetic fields and GTK, possibly interacting or decaying (largely muon and photon decays from the pions). Negligible background from matter emissions (E too low) and high-energy cosmic particles (low frequency below ground and usually wrong kinematics).

2: Detector information is processed in the triggers and stored on Castor

The L0TP processes the L0 trigger decision from the detector signals, then the PC farm processes L1 and auto-passes L2 (assuming all signals present). The Mergers then buffer the events and write to Castor.

3: Raw data is reconstructed by the framework then the reconstructed data is filtered to purpose and stored on EOS

The data is reconstructed using a version or revision of the framework that is dependent on the time the data was taken, reco efficiency is important at this stage.

The Pnn filtering code or others are used to reduce the file sizes and separate the events based on the analysis group that will use them.

4: Filtered data is then processed by the user directory files

User directory pre-analysing files: GigaTrackerEvtReco, TwoPhotonAnalysis.cc, TrackAnalysis.cc and OneTrackEventAnalysis.cc.

5: KaonEventAnalysis.cc then processes the data in stages

  1. The main function containing the base analysis
  2. Start of Job, Run then Burst; Initialise histograms, trees and output
  3. Process each event and call the relevant analysis functions
  4. Run the specific analysis function on each event that passed the previous stages
  5. Post-processing with: PostProcess; End of Burst, Run, Job; DrawPlot

GTK3 interaction MC work [Cancelled]

 
  1. Altering the Geant4 setup to change the hadronic interaction probability in GTK3 to 1 or reject non-interacting events (more likely solution)
  2. Generate on the order of 100M events
  3. Compare with data
  4. Make an estimation of this background for the PNN errors
Added:
>
>
Cancelled as generation of the statistics isn't feasible.

This has been left for the experts to look into further, as considerable work would be required to make this possible.

 

Initial work completed to set up the framework and user directory codes:

Changed:
<
<
Build flag issue with --old-specialtrigger, causing a dependency on the framework's UserMethods.cc file.
>
>
Build flag issue with --old-specialtrigger, causing a dependency on the framework's UserMethods.cc file.
 
  • Everything works as expected if you comment out the OLD_SPECIALTRIGGER block as described in the analysisbuild.txt readme file and replace it manually with whichever trigger you want to use (either works or complains that you're using the wrong one at runtime). However, you then have a dependency on a framework file.
  • When using the flag, it seems that the "#define OLD_SPECIALTRIGGER" line in NA62AnalysisBuilder.py causes this definition to become stuck in the pre-processor, such that it continues to be defined if you try to build without the flag at a later stage
    • Solution 1: run a CleanAll command with NA62AnalysisBuilder.py then re-source the env.sh file
    • Solution 2: CleanAll then log out of your ssh session and log back in, source then build
Changed:
<
<
Generating a Pnn sample from the Kaon code given to me by Giuseppe.
>
>
Generating a Pnn sample from the Kaon code given to me by Giuseppe.
 
  • Build fails due to class conflict in the public directory files
    • Solution: fixed manually by Giuseppe in the codes, largely by replacing the conflicting "Particle" class with "MyParticle"
  • Further run fails due to a special trigger issue not dependent on the build flag
    • Solution: a special trigger element of the code needs changing when swapping between MC and data files
  • Now working on afs and should be able to set up on any other system (copy placed in: userDir2)
Changed:
<
<
A test analyser, to understand how to generate an analyser from scratch and plot variables in the data files, using the framework as a basis.
>
>
A test analyser, to understand how to generate an analyser from scratch and plot variables in the data files, using the framework as a basis.
 
  • This analyser is now set up such that it builds and begins to run with the current setup, designed to record the number of spectrometer candidates, but it fails at runtime due to an issue with a special trigger which is not specifically used in the code.
    • Solution: frameworks are not yet completely backwards compatible, I need to use the --old-specialtrigger flag after "build" to get this to work

Revision 13 (2018-02-12) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 53 to 53
 Step 1: Generate a Kμ2 normalisation sample. [done]
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms
    • Write a new analyser and tree to group useful output
Changed:
<
<
    • Apply Pnn like cuts
    • Add a timing based MUV3 cut to select muon events
>
>
    • Apply Pnn like cuts without PID
    • Add a muon missing mass cut and a timing based MUV3 cut to select muon events
 
  • Confirm compatibility of Pnn cuts, comment cuts as muon or Pnn, generally tidy up the code to finalise the cuts
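The muon selection above (a missing mass cut under the muon hypothesis plus a timing-based MUV3 cut) can be sketched as follows. This is a minimal illustration, not the analysis code: the function names are hypothetical and the cut windows are invented placeholders, not the values used in the selection.

```python
# Kmu2-style selection sketch: for K+ -> mu+ nu the squared missing
# mass (kaon minus muon-hypothesis track) peaks at the neutrino mass,
# i.e. ~0 GeV^2, and the MUV3 hit should be in time with the track.
# Four-vectors are (E, px, py, pz) in GeV; cut windows are placeholders.

def m2_miss(p_kaon, p_track):
    """Squared missing mass of (kaon - track), metric (+,-,-,-)."""
    e, px, py, pz = (k - t for k, t in zip(p_kaon, p_track))
    return e * e - px * px - py * py - pz * pz

def is_kmu2_like(p_kaon, p_track, t_track_ns, t_muv3_ns,
                 m2_window=0.01, dt_window=1.5):  # GeV^2, ns (placeholders)
    in_mass = abs(m2_miss(p_kaon, p_track)) < m2_window
    in_time = abs(t_track_ns - t_muv3_ns) < dt_window
    return in_mass and in_time
```

In practice the real selection also has to handle the kaon/track matching and resolution effects; the point here is only the shape of the two cuts.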
Changed:
<
<
Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor. [started]
  • Select muon events at MC truth level [starting]
>
>
Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor. [done]
  • Select muon events at MC truth level
 
  • Take basic acceptance cut of 105 (possibly 115) to 165m
Changed:
<
<
  • Bin "passed" events by momentum as has been done in previous studies (15-20, 20-25, 25-30, 30-35 GeV)
  • Calculate binned acceptance by dividing by all events that pass Kmu2 cuts, recorded using the same binning system, by the geometric acceptance binned numbers
Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".
  • Run selection on run number 6431 (large, good quality run) to start with
  • Run on Giuseppe's list of all good runs of 2016
>
>
  • Bin events that pass all cuts by momentum as has been done in previous studies (15-20, 20-25, 25-30, 30-35 GeV)
  • Calculate the binned acceptance by dividing the number of events that pass all Kmu2 cuts, recorded using the same binning system, by the number that passed the truth geometric acceptance (such that the bins sum to the total acceptance)
Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν". [almost finished]
  • Run selection on run number 6431 (large, good quality run) to start with [done]
  • Run on Giuseppe's list of all good runs of 2016 [running]
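The binned acceptance calculation in Step 2 amounts to a per-bin ratio. A minimal sketch, using the momentum bin edges from the text but with invented placeholder counts:

```python
# Per-momentum-bin acceptance: events passing the full Kmu2 selection
# divided, bin by bin, by events passing the truth-level geometric cut.
# Bin edges match the text; all counts are hypothetical placeholders.

BIN_EDGES_GEV = [15, 20, 25, 30, 35]  # 15-20, 20-25, 25-30, 30-35 GeV

def binned_acceptance(passed, geometric):
    """A_i = N_passed_i / N_geometric_i for each momentum bin."""
    return [p / g for p, g in zip(passed, geometric)]

passed    = [120, 340, 310, 150]      # pass all Kmu2 cuts (placeholder)
geometric = [1000, 2000, 2000, 1000]  # pass truth geometric cut (placeholder)
print(binned_acceptance(passed, geometric))
```

Defining the denominator at truth level in the same bins is what makes the per-bin values combine back into the total acceptance.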
 Step 4: Start looking at the efficiencies that don't cancel in the acceptances fraction "εr".
Changed:
<
<
  • Pion ID efficiency: where the efficiency of pion ID in Pnn data "επdata(π+νν)" can be described by:

επdata(π+νν) = επMC(π+νν)⋅επdata(π+π0)/επMC(π+π0)

>
>
  • Pion ID efficiency: where the efficiency of pion ID in Pnn data "επdata(π+νν)" can be described by: [being studied by others]
επdata(π+νν) = επMC(π+νν)⋅επdata(π+π0)/επMC(π+π0)
 
Changed:
<
<

where "επMC(π+νν)" is the efficiency of pion ID in Pnn MC (which is used to calculate the Pnn acceptance), "επdata(π+π0)" is the efficiency of pion ID in π+π0 data and "επMC(π+π0)" is the efficiency of pion ID in π+π0 MC. Therefore, the acceptance of Pnn "Aπ+νν" must be corrected to:

Aπ+νν⋅επdata(π+π0)/επMC(π+π0)

>
>
where "επMC(π+νν)" is the efficiency of pion ID in Pnn MC (which is used to calculate the Pnn acceptance), "επdata(π+π0)" is the efficiency of pion ID in π+π0 data and "επMC(π+π0)" is the efficiency of pion ID in π+π0 MC. Therefore, the acceptance of Pnn "Aπ+νν" must be corrected to:
 
Added:
>
>
Aπ+νν⋅επdata(π+π0)/επMC(π+π0)
 
  • Matter interaction differences of muon and pion.
Added:
>
>
  • Muon ID efficiency [should be done from the MC run]
  • Trigger efficiencies
 
  • Anything we missed?
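The pion ID correction above reduces to a single multiplicative factor on the Pnn acceptance. A minimal sketch, with invented placeholder efficiencies (not measured values):

```python
# Correct the Pnn acceptance by the data/MC pion-ID efficiency ratio
# measured in the pi+ pi0 control channel:
#   A_corrected = A_pnn * eps_data(pi+ pi0) / eps_MC(pi+ pi0)
# All numbers below are invented placeholders, not measured efficiencies.

def corrected_acceptance(a_pnn, eps_data_pipi0, eps_mc_pipi0):
    """Pnn acceptance corrected for the data/MC pion-ID difference."""
    return a_pnn * eps_data_pipi0 / eps_mc_pipi0

print(corrected_acceptance(a_pnn=0.040, eps_data_pipi0=0.82, eps_mc_pipi0=0.85))
```

The π+π0 channel works as the control sample because the pion ID efficiency measured there is, per the relation above, the only data/MC difference that survives in the Pnn acceptance.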

Run the (updated) 2016 cuts based analysis on the 2017 data [future]

  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check you are using the correct framework revision)

Revision 12 (2018-01-29) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 64 to 64
 Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".
  • Run selection on run number 6431 (large, good quality run) to start with
  • Run on Giuseppe's list of all good runs of 2016
Changed:
<
<
Step 4: Start looking at the efficiencies that don't cancel in the acceptances fraction "εr".
  • Pion ID efficiency.
>
>
Step 4: Start looking at the efficiencies that don't cancel in the acceptances fraction "εr".
  • Pion ID efficiency: where the efficiency of pion ID in Pnn data "επdata(π+νν)" can be described by:

επdata(π+νν) = επMC(π+νν)⋅επdata(π+π0)/επMC(π+π0)

where "επMC(π+νν)" is the efficiency of pion ID in Pnn MC (which is used to calculate the Pnn acceptance), "επdata(π+π0)" is the efficiency of pion ID in π+π0 data and "επMC(π+π0)" is the efficiency of pion ID in π+π0 MC. Therefore, the acceptance of Pnn "Aπ+νν" must be corrected to:

Aπ+νν⋅επdata(π+π0)/επMC(π+π0)

 
  • Matter interaction differences of muon and pion.
  • Anything we missed?

Run the (updated) 2016 cuts based analysis on the 2017 data [future]

Revision 11 (2018-01-29) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 11 to 11
 Summary:
  1. The events occur in the detector
  2. Detector information is processed in the triggers and stored on Castor
Changed:
<
<
  1. Raw data is reconstructed by the framework
  2. Reconstructed data is filtered to purpose then stored on EOS
>
>
  1. Raw data is reconstructed by the framework then the reconstructed data is filtered to purpose and stored on EOS
 
  1. Filtered data is then processed by the user directory files
  2. KaonEventAnalysis.cc then processes the data in stages
1: The events occur in the detector
Line: 23 to 22
  The L0TP processes the L0 trigger decision from the detector signals, then the PC farm processes L1 and auto-passes L2 (assuming all signals present). The Mergers then buffer the events and write to Castor.
Changed:
<
<
3: Raw data is reconstructed by the framework
>
>
3: Raw data is reconstructed by the framework then the reconstructed data is filtered to purpose and stored on EOS
  The data is reconstructed using a version or revision of the framework that depends on when the data was taken; reco efficiency is important at this stage.
Deleted:
<
<
4: Reconstructed data is filtered to purpose then stored on EOS
 The Pnn filtering code or others are used to reduce the file sizes and separate the events based on the analysis group that will use them.
Changed:
<
<
5: Filtered data is then processed by the user directory files
>
>
4: Filtered data is then processed by the user directory files
 
Changed:
<
<
User directory pre-analysing files: GigaTrackerEvtReco, AccidentalAnalysis.cc, TwoPhotonAnalysis.cc, TrackAnalysis.cc and OneTrackEventAnalysis.cc.
>
>
User directory pre-analysing files: GigaTrackerEvtReco, TwoPhotonAnalysis.cc, TrackAnalysis.cc and OneTrackEventAnalysis.cc.
 
Changed:
<
<
6: KaonEventAnalysis.cc then processes the data in stages
>
>
5: KaonEventAnalysis.cc then processes the data in stages
 
  1. The main function containing the base analysis
Changed:
<
<
  1. Start of Job, Run then Burst
  2. Initialise histograms, trees and output
>
>
  1. Start of Job, Run then Burst; Initialise histograms, trees and output
 
  1. Process each event and call the relevant analysis functions
  2. Run the specific analysis function on each event that passed the previous stages
  3. Post-processing with: PostProcess; End of Burst, Run, Job; DrawPlot
Line: 46 to 42
  The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check whether we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:
Changed:
<
<
Ni = fK ⋅ t ⋅ [BR(K→i)] ⋅ Ai ⋅ Ei
>
>
Ni = fK ⋅ t ⋅ [BR(K→i)] ⋅ Ai
 
Changed:
<
<
where fK is the frequency of kaons in the beam, t is the time period of data taking, Ai is the "acceptance" or number of decays in the detector's fiducial region and Ei is the product of efficiencies ∏rεr for the efficiencies r relating to the trigger, reconstruction, kaon ID, daughter ID, track matching and all other processes used in the analysis (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).
>
>
where fK is the frequency of kaons in the beam, t is the time period of data taking and Ai is the "acceptance" or number of decays in the detector's fiducial region (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).
  From this we can construct an equation for:
Changed:
<
<
BR(K+→π+νν) = BR(K+→μ+ν) ⋅ Nπ+νν/Nμ+ν ⋅ Aμ+ν/Aπ+νν ⋅ Eμ+ν/Eπ+νν
where the fK and t terms cancel, along with many of the εr efficiencies, and BR(K+→μ+ν) can be taken from the PDG listings as it has been thoroughly measured by previous experiments.
>
>
BR(K+→π+νν) = BR(K+→μ+ν) ⋅ Nπ+νν/Nμ+ν ⋅ Aμ+ν/Aπ+νν
where the fK and t terms cancel, along with many of the efficiencies included in the acceptance, and BR(K+→μ+ν) can be taken from the PDG listings as it has been thoroughly measured by previous experiments.
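As a quick numerical sketch of this normalisation: the only real input below is BR(K+→μ+ν) ≈ 0.6356 from the PDG listings; all event counts and acceptances are invented placeholders, not NA62 results.

```python
# BR(K+ -> pi+ nu nu) = BR(K+ -> mu+ nu) * (N_pnn / N_mu) * (A_mu / A_pnn)
# fK and t cancel in the ratio, as do the shared efficiencies.
# Counts and acceptances here are illustrative placeholders only.

BR_KMU2 = 0.6356  # BR(K+ -> mu+ nu) from the PDG listings

def normalised_br(n_pnn, n_mu, acc_mu, acc_pnn):
    """Pnn branching ratio normalised to the Kmu2 sample."""
    return BR_KMU2 * (n_pnn / n_mu) * (acc_mu / acc_pnn)

# e.g. 10 candidate events against 1e10 normalisation events:
print(normalised_br(n_pnn=10, n_mu=1.0e10, acc_mu=0.20, acc_pnn=0.04))
```

Because only ratios enter, anything that affects both channels equally (kaon flux, run time, shared efficiencies) drops out by construction.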
  Step 1: Generate a Kμ2 normalisation sample. [done]
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms
Line: 64 to 60
 
  • Select muon events at MC truth level [starting]
  • Take basic acceptance cut of 105 (possibly 115) to 165m
  • Bin "passed" events by momentum as has been done in previous studies (15-20, 20-25, 25-30, 30-35 GeV)
Changed:
<
<
  • Calculate binned acceptance by dividing by all events, recorded using the same binning system
>
>
  • Calculate binned acceptance by dividing by all events that pass Kmu2 cuts, recorded using the same binning system, by the geometric acceptance binned numbers
 Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".
Changed:
<
<
Step 4: Start looking at the efficiencies "εr".
  • First, look at the pion ID efficiency
>
>
  • Run selection on run number 6431 (large, good quality run) to start with
  • Run on Giuseppe's list of all good runs of 2016
Step 4: Start looking at the efficiencies that don't cancel in the acceptances fraction "εr".
  • Pion ID efficiency.
  • Matter interaction differences of muon and pion.
  • Anything we missed?
 

Run the (updated) 2016 cuts based analysis on the 2017 data [future]

  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check you are using the correct framework revision)
  2. Run on one file to check functionality

Revision 10 (2018-01-29) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 83 to 83
 
  1. Generate on the order of 100M events
  2. Compare with data
  3. Make an estimation of this background for the PNN errors
Changed:
<
<

Finished:

>
>

Initial work completed to set up the framework and user directory codes:

  Build flag issue with --old-specialtrigger, causing a dependency on the framework's UserMethods.cc file.
  • Everything works as expected if you comment out the OLD_SPECIALTRIGGER block as described in the analysisbuild.txt readme file and replace it manually with whichever trigger you want to use (either works or complains that you're using the wrong one at runtime). However, you then have a dependency on a framework file.

Revision 9 (2018-01-26) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.

Analysis Timeline

Changed:
<
<

Using Kμ2 as a normalisation sample

>
>

The sequence of processes involved in NA62 data analysis

This section is written to later discuss the efficiencies of the NA62 analysis and which efficiencies do not cancel between the Pnn channel and the muon normalisation.

Summary:

  1. The events occur in the detector
  2. Detector information is processed in the triggers and stored on Castor
  3. Raw data is reconstructed by the framework
  4. Reconstructed data is filtered to purpose then stored on EOS
  5. Filtered data is then processed by the user directory files
  6. KaonEventAnalysis.cc then processes the data in stages
1: The events occur in the detector

Beam particles (mostly pions and protons, with 6% kaons) exit the T10 target and pass through the collimators, magnetic fields and GTK, possibly interacting or decaying (largely muon and photon decays from the pions). Negligible background from matter emissions (E too low) and high energy cosmic particles (low frequency below ground and usually wrong kinematics).

2: Detector information is processed in the triggers and stored on Castor

The L0TP processes the L0 trigger decision from the detector signals, then the PC farm processes L1 and auto-passes L2 (assuming all signals present). The Mergers then buffer the events and write to Castor.

3: Raw data is reconstructed by the framework

The data is reconstructed using a version or revision of the framework that depends on when the data was taken; reco efficiency is important at this stage.

4: Reconstructed data is filtered to purpose then stored on EOS

The Pnn filtering code or others are used to reduce the file sizes and separate the events based on the analysis group that will use them.

5: Filtered data is then processed by the user directory files

User directory pre-analysing files: GigaTrackerEvtReco, AccidentalAnalysis.cc, TwoPhotonAnalysis.cc, TrackAnalysis.cc and OneTrackEventAnalysis.cc.

6: KaonEventAnalysis.cc then processes the data in stages

  1. The main function containing the base analysis
  2. Start of Job, Run then Burst
  3. Initialise histograms, trees and output
  4. Process each event and call the relevant analysis functions
  5. Run the specific analysis function on each event that passed the previous stages
  6. Post-processing with: PostProcess; End of Burst, Run, Job; DrawPlot

Using Kμ2 as a normalisation sample [current work]

  The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check whether we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:
Line: 15 to 53
 From this we can construct an equation for:

BR(K+→π+νν) = BR(K+→μ+ν) ⋅ Nπ+νν/Nμ+ν ⋅ Aμ+ν/Aπ+νν ⋅ Eμ+ν/Eπ+νν
where the fK and t terms cancel, along with many of the εr efficiencies, and BR(K+→μ+ν) can be taken from the PDG listings as it has been thoroughly measured by previous experiments.

Deleted:
<
<

Currently working on:

  Step 1: Generate a Kμ2 normalisation sample. [done]
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms
Line: 23 to 60
 
    • Apply Pnn like cuts
    • Add a timing based MUV3 cut to select muon events
  • Confirm compatibility of Pnn cuts, comment cuts as muon or Pnn, generally tidy up the code to finalise the cuts
Changed:
<
<
Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor. [started]
>
>
Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor. [started]
 
  • Select muon events at MC truth level [starting]
  • Take basic acceptance cut of 105 (possibly 115) to 165m
  • Bin "passed" events by momentum as has been done in previous studies (15-20, 20-25, 25-30, 30-35 GeV)
  • Calculate binned acceptance by dividing by all events, recorded using the same binning system
Changed:
<
<
Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".
>
>
Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".
  Step 4: Start looking at the efficiencies "εr".
Changed:
<
<
  • first, look at the pion ID efficiency

Possible future work:

Parallel MC work: [no further information on this as of yet]

  • Altering the Geant4 setup to change the hadronic interaction probability in GTK3 to 1
  • Generating on the order of 100M events
  • Comparing with data
  • Aim: to make an estimation of this background for the PNN errors
>
>
  • First, look at the pion ID efficiency

Run the (updated) 2016 cuts based analysis on the 2017 data [future]

  1. Finish updating the user directory files (specifically CHODAnalysis) so that KaonEventAnalysis.cc runs correctly on 2017 data (check you are using the correct framework revision)
  2. Run on one file to check functionality
  3. Ensure blinding is maintained
  4. Run on as much data as possible
  5. Try to run on any failed data files to complete the selection
  6. Discuss unblinding
  7. Unblind starting with control regions
  8. Write up results with the intention to publish eventually

GTK3 interaction MC work [future]

  1. Altering the Geant4 setup to change the hadronic interaction probability in GTK3 to 1 or reject non-interacting events (more likely solution)
  2. Generate on the order of 100M events
  3. Compare with data
  4. Make an estimation of this background for the PNN errors
 

Finished:

Changed:
<
<
build flag issue with --old-specialtrigger, causing a dependency on the framework's UserMethods.cc file.
>
>
Build flag issue with --old-specialtrigger, causing a dependency on the framework's UserMethods.cc file.
 
  • Everything works as expected if you comment out the OLD_SPECIALTRIGGER block as described in the analysisbuild.txt readme file and replace it manually with whichever trigger you want to use (either works or complains that you're using the wrong one at runtime). However, you then have a dependency on a framework file.
  • When using the flag, it seems that the "#define OLD_SPECIALTRIGGER" line in NA62AnalysisBuilder.py causes this definition to become stuck in the pre-processor, such that it continues to be defined if you try to build without the flag at a later stage
    • Solution 1: run a CleanAll command with NA62AnalysisBuilder.py then re-source the env.sh file
Line: 56 to 100
 A test analyser, to understand how to generate an analyser from scratch and plot variables in the data files, using the framework as a basis.
  • This analyser is now set up such that it builds and begins to run with the current setup, designed to record the number of spectrometer candidates, but it fails at runtime due to an issue with a special trigger which is not specifically used in the code.
    • Solution: frameworks are not yet completely backwards compatible, I need to use the --old-specialtrigger flag after "build" to get this to work
Deleted:
<
<
*

Revision 8 (2018-01-11) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 17 to 17
BR(K+→π+νν) = BR(K+→μ+ν) ⋅ Nπ+νν/Nμ+ν ⋅ Aμ+ν/Aπ+νν ⋅ Eμ+ν/Eπ+νν
where the fK and t terms cancel, along with many of the εr efficiencies, and BR(K+→μ+ν) can be taken from the PDG listings as it has been thoroughly measured by previous experiments.

Currently working on:

Changed:
<
<
Step 1: Generate a Kμ2 normalisation sample.
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms [in progress]
    • Write a new analyser and tree to group useful output [done]
    • Apply Pnn like cuts [done]
    • Add a timing based MUV3 cut to select muon events [testing results]
  • Confirm compatibility of Pnn cuts, comment cuts as muon or Pnn, generally tidy up the code to finalise the cuts [started]
Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor.
>
>
Step 1: Generate a Kμ2 normalisation sample. [done]
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms
    • Write a new analyser and tree to group useful output
    • Apply Pnn like cuts
    • Add a timing based MUV3 cut to select muon events
  • Confirm compatibility of Pnn cuts, comment cuts as muon or Pnn, generally tidy up the code to finalise the cuts
Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor. [started]
  • Select muon events at MC truth level [starting]
  • Take basic acceptance cut of 105 (possibly 115) to 165m
  • Bin "passed" events by momentum as has been done in previous studies (15-20, 20-25, 25-30, 30-35 GeV)
  • Calculate binned acceptance by dividing by all events, recorded using the same binning system
 Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".

Step 4: Start looking at the efficiencies "εr".

Revision 7 (2017-12-14) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 21 to 21
 
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms [in progress]
    • Write a new analyser and tree to group useful output [done]
    • Apply Pnn like cuts [done]
Changed:
<
<
    • Add a timing based MUV3 cut to select muon events [done but needs testing]
  • Confirm compatibility of Pnn cuts, comment cuts as muon or Pnn, generally tidy up the code to finalise the cuts [not started yet]
>
>
    • Add a timing based MUV3 cut to select muon events [testing results]
  • Confirm compatibility of Pnn cuts, comment cuts as muon or Pnn, generally tidy up the code to finalise the cuts [started]
 Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor.

Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".

Revision 6 (2017-12-08) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 8 to 8
  The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check whether we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:
Changed:
<
<
Ni = fK ⋅ t ⋅ [BR(K→i)] ⋅ A ⋅ Ei
>
>
Ni = fK ⋅ t ⋅ [BR(K→i)] ⋅ Ai ⋅ Ei
 
Changed:
<
<
where fK is the frequency of kaons in the beam, t is the time period of data taking, A is the "acceptance" or number of decays in the detector's fiducial region and Ei is the product of efficiencies ∏rεr for the efficiencies r relating to the trigger, reconstruction, kaon ID, daughter ID, track matching and all other processes used in the analysis (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).
>
>
where fK is the frequency of kaons in the beam, t is the time period of data taking, Ai is the "acceptance" or number of decays in the detector's fiducial region and Ei is the product of efficiencies ∏rεr for the efficiencies r relating to the trigger, reconstruction, kaon ID, daughter ID, track matching and all other processes used in the analysis (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).
  From this we can construct an equation for:

Revision 5 (2017-12-07) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 18 to 18
 

Currently working on:

Step 1: Generate a Kμ2 normalisation sample.

Changed:
<
<
  • Start by generating a sample of Kμ2 data with PNN like cuts from one burst (current file) and organise some output histograms [in progress]
>
>
  • Start by generating a sample of Kμ2 data with Pnn like cuts from one burst (current file) and organise some output histograms [in progress]
 
    • Write a new analyser and tree to group useful output [done]
Changed:
<
<
    • Apply PNN like cuts [done]
    • Add a timing based MUV3 cut to select muon events [in progress]
>
>
    • Apply Pnn like cuts [done]
    • Add a timing based MUV3 cut to select muon events [done but needs testing]
  • Confirm compatibility of Pnn cuts, comment cuts as muon or Pnn, generally tidy up the code to finalise the cuts [not started yet]
 Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor.

Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".

Revision 4 (2017-12-05) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Line: 8 to 8
  The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check whether we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:
Changed:
<
<
Ni = fK x t x [BR(K→i)] x A x Ei
>
>
Ni = fK ⋅ t ⋅ [BR(K→i)] ⋅ A ⋅ Ei
  where fK is the frequency of kaons in the beam, t is the time period of data taking, A is the "acceptance" or number of decays in the detector's fiducial region and Ei is the product of efficiencies ∏rεr for the efficiencies r relating to the trigger, reconstruction, kaon ID, daughter ID, track matching and all other processes used in the analysis (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).

Revision 3 (2017-12-04) - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.

Analysis Timeline

Added:
>
>

Using Kμ2 as a normalisation sample

The overall aim of NA62 is to measure the branching fraction (using BR as the canonical shorthand from here) of the decay K+→π+νν. In order to do so, we must account for both statistical and systematic errors. Therefore, if we measure the BR and normalise the number of events we observe by dividing it by one of the primary kaon decays (μ+ν or π+π0), we can cancel many of the major systematics. If we use both primary decays for a normalisation sample and compare the values, we can check whether we are properly accounting for all systematics, as both should provide the same result. First we use the number of observed events of decay i:

Ni = fK x t x [BR(K→i)] x A x Ei

where fK is the frequency of kaons in the beam, t is the time period of data taking, A is the "acceptance" or number of decays in the detector's fiducial region and Ei is the product of efficiencies ∏rεr for the efficiencies r relating to the trigger, reconstruction, kaon ID, daughter ID, track matching and all other processes used in the analysis (this should cover all contributions, even things like the possibility of events being incorrectly tagged as the decay you are measuring).

From this we can construct an equation for:

BR(K+→π+νν) = BR(K+→μ+ν) ⋅ Nπ+νν/Nμ+ν ⋅ Aμ+ν/Aπ+νν ⋅ Eμ+ν/Eπ+νν
where the fK and t terms cancel, along with many of the εr efficiencies, and BR(K+→μ+ν) can be taken from the PDG listings as it has been thoroughly measured by previous experiments.

 

Currently working on:

Changed:
<
<
Attempting to fix the --old-specialtrigger build flag issue, to remove the dependency on the framework's UserMethods.cc file.
  • Everything works as expected if you comment out the OLD_SPECIALTRIGGER block as described in the analysisbuild.txt readme file and replace it manually with whichever trigger you want to use (either works or complains that you're using the wrong one at runtime).
Generating a Kμ2 normalisation sample to compare the ππ0 and μ+ν normalised branching ratio values, to determine if anything has been missed in the systematics.
  • Step 1: Start generating a sample of Kμ2 data with PNN like cuts from one burst (current file) and organise some output histograms [in progress]
    • Write a new tree to group useful output [done along with a new function in the code]
    • Still to get started on PNN like cuts [ongoing]
  • Step 2: Test on 100 bursts with HTCondor (Stoyan will send me the instructions readme and associated codes when required)
    • Refine cuts and output histograms etc...
  • Step 3: Run on 1k bursts (average run) with HTCondor to try and get some good results
    • e.g. the number of kaon decays for Kμ2
  • Important: The acceptance of each decay has to be studied carefully as not all elements cancel, so everything has to be considered for how it could affect the acceptance with respect to the other decay (PNN or Kμ2)
>
>
Step 1: Generate a Kμ2 normalisation sample.
  • Start by generating a sample of Kμ2 data with PNN like cuts from one burst (current file) and organise some output histograms [in progress]
    • Write a new analyser and tree to group useful output [done]
    • Apply PNN like cuts [done]
    • Add a timing based MUV3 cut to select muon events [in progress]
Step 2: Calculate the acceptance "Aμ+ν" using all the muon MC with HTCondor.

Step 3: Run on as much 2016 data as possible with HTCondor to calculate a value for "Nμ+ν".

Step 4: Start looking at the efficiencies "εr".

  • first, look at the pion ID efficiency
 

Possible future work:

Parallel MC work: [no further information on this as of yet]

Line: 26 to 37
 
  • Aim: to make an estimation of this background for the PNN errors

Finished:

Added:
>
>
build flag issue with --old-specialtrigger, causing a dependency on the framework's UserMethods.cc file.
  • Everything works as expected if you comment out the OLD_SPECIALTRIGGER block as described in the analysisbuild.txt readme file and replace it manually with whichever trigger you want to use (either works or complains that you're using the wrong one at runtime). However, you then have a dependency on a framework file.
  • When using the flag, the "#define OLD_SPECIALTRIGGER" line in NA62AnalysisBuilder.py seems to cause this definition to become stuck in the pre-processor, so that it remains defined if you later try to build without the flag
    • Solution 1: run a CleanAll command with NA62AnalysisBuilder.py, then re-source the env.sh file
    • Solution 2: run CleanAll, log out of your SSH session and back in, then source and build
 Generating a Pnn sample from the Kaon code given to me by Giuseppe.
  • Build fails due to class conflict in the public directory files
    • Solution: fixed manually by Giuseppe in the code, largely by replacing the conflicting "Particle" class with "MyParticle"

Revision 2 2017-11-22 - ConnorGraham

Line: 1 to 1
 
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.
Added:
>
>
 

Analysis Timeline

Currently working on:

Revision 1 2017-11-19 - ConnorGraham

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="ConnorsAnalysis"
Return to the main analysis page.

Analysis Timeline

Currently working on:

Attempting to fix the --old-specialtrigger build flag issue, to remove the dependency on the framework's UserMethods.cc file.

  • Everything works as expected if you comment out the OLD_SPECIALTRIGGER block as described in the analysisbuild.txt readme and manually replace it with whichever trigger you want to use (it either works, or complains at runtime that you are using the wrong one).
Generating a Kμ2 normalisation sample to compare the ππ0- and μ+ν-normalised branching ratio values, to determine whether anything has been missed in the systematics.
  • Step 1: Start generating a sample of Kμ2 data with PNN-like cuts from one burst (the current file) and organise some output histograms [in progress]
    • Write a new tree to group useful output [done, along with a new function in the code]
    • Still to get started on PNN-like cuts [ongoing]
  • Step 2: Test on 100 bursts with HTCondor (Stoyan will send me the instructions readme and associated code when required)
    • Refine the cuts, output histograms, etc.
  • Step 3: Run on 1k bursts (an average run) with HTCondor to try to get some good results
    • e.g. the number of kaon decays for Kμ2
  • Important: the acceptance of each decay has to be studied carefully, as not all elements cancel; every cut must be considered for how it could affect the acceptance relative to the other decay (PNN or Kμ2)

Possible future work:

Parallel MC work: [no further information on this as of yet]

  • Altering the Geant4 setup to change the hadronic interaction probability in GTK3 to 1
  • Generating on the order of 100M events
  • Comparing with data
  • Aim: to make an estimation of this background for the PNN errors

Finished:

Generating a Pnn sample from the Kaon code given to me by Giuseppe.

  • Build fails due to class conflict in the public directory files
    • Solution: fixed manually by Giuseppe in the codes, largely by replacing the conflicting "Particle" class with "MyParticle"
  • Further run fails due to a special trigger issue not dependent on the build flag
    • Solution: a special trigger element of the code needs changing when swapping between MC and data files
  • Now working on afs and should be able to set up on any other system (copy placed in: userDir2)
A test analyser, to understand how to generate an analyser from scratch and plot variables in the data files, using the framework as a basis.
  • This analyser now builds and begins to run with the current setup; it is designed to record the number of spectrometer candidates, but fails at runtime due to an issue with a special trigger that is not explicitly used in the code.
    • Solution: the framework is not yet completely backwards compatible; I need to use the --old-specialtrigger flag after "build" to get this to work
 