Line: 1 to 1
Added:
> >
Computentp, Neural Nets and MCLIMITS
This page has been substantially rewritten (and remains a work in progress) to focus on just the information required for a successful run of the Computentp and Neural Net package, to deliver exclusions. For information on results obtained using inputs created in v12 of athena, please refer to the archive. This page also describes how to run on GlaNtp - the version of the code set up for use in Glasgow, with no CDF dependencies. To use the previous version of the code (there are some important differences) refer to r93 and earlier.
Line: 25 to 28
Current samples in use
Changed:
< < Input data and cross-sections
> > Input data and cross-sections
These cross-sections are for the overall process, at √s = 7 TeV.
Line: 33 to 36
The tt samples were initially generated to produce the equivalent of 75fb-1 of data, based on the LO cross-sections. Taking into account the k-factor of 1.84, this means that all samples now simulate 40.8fb-1 of data. These samples have also had a generator-level filter applied - most events (especially for tt+0j) are of no interest to us, and we don't want to fill up disk space with them, so we apply filters based on the number of jets etc. The Filter Efficiency is the fraction of events that pass from the general sample into the final simulated sample.

To clarify how all the numbers hang together, consider the case of tt+0j. We have simulated 66,911 events - as said above, this corresponds to 40.8fb-1 of data. We have a Filter Efficiency of 0.06774, so the full number of events before the filter comes to 987,762 events in 40fb-1. Divide this by 40 to get the number of events in 1fb-1 (i.e. the cross-section), and you get 24,694 events per fb-1. Our starting point for our cross-section is 13.18, with a k-factor of 1.84, which gives a cross-section of 24.25 - so all the numbers compare with each other pretty favourably. This of course makes getting from the number of sensible state events to the number expected per fb-1 rather easy - simply divide by 40.8. You'll notice that the cross-section includes all the branching ratios already, so we don't need to worry about that.
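As a sanity check, the arithmetic in the paragraph above can be reproduced directly (a sketch; the numbers are copied from the text, with cross-sections in pb and luminosities in fb-1, and the variable names are illustrative):

```python
# Cross-check of the tt+0j event accounting described above.
n_sim = 66_911        # simulated events, equivalent to ~40.8 fb^-1
filter_eff = 0.06774  # generator-level filter efficiency
lumi = 40.0           # fb^-1 (the text rounds 40.8 to 40 at this step)

n_unfiltered = n_sim / filter_eff    # events before the filter: ~987,762
events_per_fb = n_unfiltered / lumi  # ~24,694 events per fb^-1

# Compare with the quoted cross-section: 13.18 pb (LO) x k-factor 1.84
xsec_nlo = 13.18 * 1.84              # ~24.25 pb
expected_per_fb = xsec_nlo * 1000    # 1 fb^-1 = 1000 pb^-1 -> ~24,250 events

print(round(n_unfiltered), round(events_per_fb), round(expected_per_fb))
```

The two per-fb-1 numbers (24,694 and ~24,250) agree to within a couple of percent, which is the "compare pretty favourably" statement above.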
Changed:
< < **IMPORTANT** The Filter Efficiency for these samples was calculated based on a no-pileup sample. The filter is generator level, and one of the things it will cut an event for is not having enough jets. However, pileup adds jets, but these are added well after the filter. The net result is that a number of events that failed the filter would have passed, had the pileup been added earlier in the process. This means the filter efficiency (and thus the cross-sections) are incorrect, by a yet-to-be-determined amount.
> > **IMPORTANT** The Filter Efficiency for these samples was calculated based on a no-pileup sample. The filter is generator level, and one of the things it will cut an event for is not having enough jets. However, pileup adds jets, but these are added well after the filter. The net result is that a number of events that failed the filter would have passed, had the pileup been added earlier in the process. This means the filter efficiency (and thus the cross-sections) are incorrect, by a yet-to-be-determined amount.
For the other samples, however, we do need to worry about branching ratios - the quoted initial cross-section includes all final states, so we need to apply branching ratios to the cross-section to reduce it down, so that it reflects the sample we've generated. We then subsequently need to reduce the cross-section further so that it reflects the number of sensible states.
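A minimal sketch of this reduction (the function name and the numbers are illustrative placeholders, not the analysis values):

```python
# Reduce an inclusive cross-section to the generated/sensible final state
# by multiplying in the relevant branching ratios.
def effective_xsec(sigma_inclusive, branching_ratios):
    sigma = sigma_inclusive
    for br in branching_ratios:
        sigma *= br
    return sigma

# An inclusive 10 pb process, keeping branching fractions of 0.3 and 0.5,
# contributes an effective 1.5 pb to the generated sample:
print(effective_xsec(10.0, [0.3, 0.5]))
```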
Line: 77 to 80

Deleted:
< <
These cross-sections and branching ratios are correct as of 8 Feb 2011. qq→ttbb (EWK) is currently not being used, thanks to a bug in the production of the MC.
Changed:
< < Number of events surviving preselection, weights and TrainWeights
> > Number of events surviving preselection, weights and TrainWeights
(See later in the TWiki for an explanation of weights and TrainWeights.) This table will be completed with all the relevant weights and TrainWeights at a later date - these values are to be compared to the output from Computentp to ensure everything is working as intended, and are calculated for the sensible cross-sections/events. (A quick check of the TrainWeight is to multiply the number of events of each background by their TrainWeight and sum them - by design, this should equal the number of entries in the ttH sample.)
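The quick check described above can be sketched as follows (the event counts and weights here are made-up placeholders, not the values from the table, and the helper name is invented):

```python
# TrainWeight consistency check: the TrainWeight-weighted sum of background
# events should equal the number of signal (ttH) entries by construction.
def trainweight_check(backgrounds, n_signal, tol=1e-6):
    """backgrounds: list of (n_events, train_weight) pairs."""
    weighted = sum(n * w for n, w in backgrounds)
    return abs(weighted - n_signal) < tol

backgrounds = [(20_000, 0.05), (4_000, 0.25)]  # hypothetical numbers
print(trainweight_check(backgrounds, n_signal=2_000))  # 1000 + 1000 = 2000
```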
Line: 115 to 116
Things to do
Changed:
< < * In the script used to make the webpage showing the results, the reference to H6AONN5MEMLP is hardwired. It should become an argument. It is the name of the method you give TMVA in the training, and so if it changes in one you should be able to change it in the other
> > *
Added:
> > In the script used to make the webpage showing the results, the reference to H6AONN5MEMLP is hardwired. It should become an argument. It is the name of the method you give TMVA in the training, and so if it changes in one you should be able to change it in the other
Overview of the process
Line: 194 to 196
Getting a copy of GlaNtp
Changed:
< <
> >
mkdir /home/ahgemmell/GlaNtp
cd /home/ahgemmell/GlaNtp
cp /home/stdenis/GlaNtpScript.sh .
Changed:
< <
> >
cp /home/stdenis/atlas/testGlaNtp/cleanpath3.sh .
source cleanpath3.sh
Changed:
< <
> >
export GLANTP_DATA=/data/cdf01/stdenis/GlaNtpData
Changed:
< <
./GlaNtpScript.sh SVN 00-00-10
* This will check out everything, and run a few simple validations - the final output should look like this (i.e. don't be worried that not everything seems to have passed validation!):
> >
./GlaNtpScript.sh SVN 00-00-10
* This will check out everything, and run a few simple validations - the final output should look like this (i.e. don't be worried that not everything seems to have passed validation!):
HwwFlatFitATLAS Validation succeeded
Done with core tests
Line: 249 to 257
Result of FlatAscii validation: OK
Result of FlatAscii_global validation: OK
Result of FlatTRntp validation: OK
Changed:
< <
> >
Variables used by the GlaNtp package
Added:
> >
The variables used by the package can be divided into two sets. The first are those variables that are constant throughout the sample - the 'global' variables (e.g. the cross-section of the sample). These can be specified in their own tree, where they will be recorded (and read by GlaNtp) once only. If desired, these variables can instead be defined within the main tree of the input file - however, they will then be recorded once per event, and read in once per event. This is obviously a bit wasteful, but for historical reasons it can be done.

To determine which of these behaviours you use, set LoadGlobalOnEachEvent in FlatPlotter and FlatReader to 1 for the globals to be read in on an event-by-event basis, or 0 for them to be read in once from the global tree (or from the first event only). For more information on this switch, refer to this.
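For illustration, the switch would be set in the FlatPlotter/FlatReader parameter files with a line of roughly this shape - this is an assumption modelled on the GeneralParameter/ListParameter syntax shown elsewhere on this page, so check the actual configuration file for the exact form:

```
GeneralParameter int 1 LoadGlobalOnEachEvent=0
```

Here 0 reads the globals once from the global tree (or first event); 1 reads them on every event.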
Line: 269 to 278

ListParameter EvInfoTree:1 1 NN_BJetWeight_Jet1:NN_BJetWeight_Jet1/NN_BJetWeight_Jet1
Added:
> >
that I need to ask Rick about...
Variables used for training the Neural Net
The list of variables on which the neural net is to train is set in the shell script, under TMVAvarset.txt (this file is created when the script runs). At present, these variables are:
Changed:
< < The b-weights for the six 'leading' jets - currently the jets are ranked according to their b-weights, but it is possible to rank them according to pT and energy. The decision about how to rank them is made in the AOD -> NTuple stage: NN_BJetWeight_Jet1 NN_BJetWeight_Jet2 NN_BJetWeight_Jet3 NN_BJetWeight_Jet4 NN_BJetWeight_Jet5 NN_BJetWeight_Jet6
> > The b-weights for the six 'leading' jets - currently the jets are ranked according to their b-weights, but it is possible to rank them according to pT and energy. The decision about how to rank them is made in the AOD -> NTuple stage: NN_BJetWeight_Jet1 NN_BJetWeight_Jet2 NN_BJetWeight_Jet3 NN_BJetWeight_Jet4 NN_BJetWeight_Jet5 NN_BJetWeight_Jet6
Changed:
< < The masses and pT of the various jet combinations (only considering the four 'top' jets - i.e. if ranked by b-weights, the jets that we expect really to be b-jets in our signal): NN_BJet12_M NN_BJet13_M NN_BJet14_M NN_BJet23_M NN_BJet24_M NN_BJet34_M NN_BJet12_Pt NN_BJet13_Pt NN_BJet14_Pt NN_BJet23_Pt NN_BJet24_Pt NN_BJet34_Pt
> > The masses and pT of the various jet combinations (only considering the four 'top' jets - i.e. if ranked by b-weights, the jets that we expect really to be b-jets in our signal): NN_BJet12_M NN_BJet13_M NN_BJet14_M NN_BJet23_M NN_BJet24_M NN_BJet34_M NN_BJet12_Pt NN_BJet13_Pt NN_BJet14_Pt NN_BJet23_Pt NN_BJet24_Pt NN_BJet34_Pt
Changed:
< < The sums of the eT of the two reconstructed tops, for each of the top three states: NN_State1_SumTopEt NN_State2_SumTopEt NN_State3_SumTopEt
> > The sums of the eT of the two reconstructed tops, for each of the top three states: NN_State1_SumTopEt NN_State2_SumTopEt NN_State3_SumTopEt
Changed:
< < And the differences between the eta and phi of the two reconstructed tops, again from the top three states: NN_State1_DiffTopEta NN_State2_DiffTopEta NN_State3_DiffTopEta NN_State1_DiffTopPhi NN_State2_DiffTopPhi NN_State3_DiffTopPhi
> > And the differences between the eta and phi of the two reconstructed tops, again from the top three states: NN_State1_DiffTopEta NN_State2_DiffTopEta NN_State3_DiffTopEta NN_State1_DiffTopPhi NN_State2_DiffTopPhi NN_State3_DiffTopPhi
Changed:
< < You also need to provide addresses to the Neural Net so that it can find the variables in the input trees. This is done inside VariableTreeToNTPATLASttHSemiLeptonic-v15.txt
> > You also need to provide addresses to the Neural Net so that it can find the variables in the input trees. This is done inside VariableTreeToNTPATLASttHSemiLeptonic-v15.txt:
ListParameter EvInfoTree:1 1 NN_BJetWeight_Jet1:NN_BJetWeight_Jet1/NN_BJetWeight_Jet1
Line: 526 to 532
This controls the events used in the training, using a bitwise comparison. If the constraint is true (i.e. the first two bits are set, and not equal to zero), then the event is used for training.

This filter is not currently used, as training of the net takes place based on the Computentp output - this output only contains sensible states (as specified in the filter in the TreeSpecATLAStth.txt file). If further filtering is required, then care must be taken to ensure that my_failEvent (or whatever you wish to base your filter on) is specified in the VariableTreeToNTP file, so that Computentp will copy it into its output.
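To illustrate how such a bitwise constraint behaves (a sketch in Python; the helper name is invented, and the mask value 65536 is the bit-16 example quoted on this page):

```python
# my_failEvent packs pass/fail flags into individual bits; a constraint like
# (my_failEvent & 65536) == 0 keeps only events with bit 16 (value 65536) clear.
def passes_constraint(my_failEvent, mask=65536):
    return (my_failEvent & mask) == 0

print(passes_constraint(0))      # no failure bits set -> kept
print(passes_constraint(65536))  # bit 16 set -> rejected
print(passes_constraint(3))      # other bits set, bit 16 clear -> kept
```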
Changed:
< < **If USEHILOSB is set to 1 then && must be appended to the cut criteria, e.g. GeneralParameter string 1 Constraint=(my_failEvent&65536)==0&&. This is because USEHILOSB adds more constraints.**
> > **If USEHILOSB is set to 1 then && must be appended to the cut criteria, e.g. GeneralParameter string 1 Constraint=(my_failEvent&65536)==0&&. This is because USEHILOSB adds more constraints.**
Running
Line: 543 to 549
Changed:
< <
> >
Line: 581 to 587
These control whether or not various parts of the code are run - the names of the flags are pretty self-explanatory about which parts of the code they control. For example, it is possible to omit the training in subsequent (templating) runs, if it has previously been done. This shortens the run time significantly.
Changed:
< < ***NOTE*** The flags DoTraining and DoTemplates had previously (until release 00-00-21) been set on the command line. They were moved from the command line when the other flags were introduced.
> > ***NOTE*** The flags DoTraining and DoTemplates had previously (until release 00-00-21) been set on the command line. They were moved from the command line when the other flags were introduced.
Where the output is stored
Line: 787 to 793
runFlatReader FlatReaderATLAStthNoNN.txt /data/atlas09/ahgemmell/NNInputFiles_v16/mergedfilesProcessed/ttH-v16.root
Changed:
< < This produces a lot of printout, so be sure to restrict the number of events as described above!
> > This produces a lot of printout, so be sure to restrict the number of events as described above!
Various other switches of interest
Line: 812 to 818
If the phrase 'H6AONN5MEMLP' is changed, then this change must also be propagated to the webpage plotter (e-mail from Rick 1 Mar 2011)
Added:
> >