Line: 1 to 1

Computentp, Neural Nets and MCLIMITS
Line: 98 to 98
NN_BJetWeight_Jet1 NN_BJetWeight_Jet2 NN_BJetWeight_Jet3 NN_BJetWeight_Jet4 NN_BJetWeight_Jet5 NN_BJetWeight_Jet6 NN_BJet12_M NN_BJet13_M NN_BJet14_M NN_BJet23_M NN_BJet24_M NN_BJet34_M NN_BJet12_Pt NN_BJet13_Pt NN_BJet14_Pt NN_BJet23_Pt NN_BJet24_Pt NN_BJet34_Pt NN_State1_SumTopEt NN_State2_SumTopEt NN_State3_SumTopEt NN_State1_DiffTopEta NN_State2_DiffTopEta NN_State3_DiffTopEta NN_State1_DiffTopPhi NN_State2_DiffTopPhi NN_State3_DiffTopPhi

Added:
> > You also need to provide addresses to the Neural Net so that it can find the variables in the input trees. This is done inside VariableTreeToNTPATLASttHSemiLeptonic-v15.txt:

ListParameter EvInfoTree:1 1 NN_BJetWeight_Jet1:NN_BJetWeight_Jet1/NN_BJetWeight_Jet1

Currently all information is in the EvInfoTree, which provides event-level information. However, future work will involve trying to establish a GlobalInfoTree, which contains information about the entire sample, such as the cross-section. This would only need to be loaded once, saving the same information from being written into the tree, and subsequently read back, once per event.
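As a rough illustration (not part of the real framework), each address entry above follows the pattern `name:branch/leaf`. A minimal Python sketch of how such an entry decomposes:

```python
# Hypothetical parser for the address format shown above, where each
# entry has the form  name:branch/leaf  , e.g.
#   NN_BJetWeight_Jet1:NN_BJetWeight_Jet1/NN_BJetWeight_Jet1

def parse_address(entry):
    """Split 'name:branch/leaf' into its three components."""
    name, location = entry.split(":")
    branch, leaf = location.split("/")
    return name, branch, leaf

entry = "NN_BJetWeight_Jet1:NN_BJetWeight_Jet1/NN_BJetWeight_Jet1"
print(parse_address(entry))
# ('NN_BJetWeight_Jet1', 'NN_BJetWeight_Jet1', 'NN_BJetWeight_Jet1')
```

Here all three components happen to share the same name, which is the common case when the NN variable name matches the branch and leaf names in the input tree.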
Variable Weights in the Neural Net

To set up a neural net for the analysis of a particular kind of data it is necessary to train it with sample data; this process adjusts the "weights" on each variable that the neural net analyses in the ntuple, in order to optimise performance. These weights can then be viewed as a scatter plot in ROOT.
Line: 278 to 281
This table will be completed with all the relevant weights and TrainWeights at a later date. These values are to be compared with the output from Computentp to ensure everything is working as intended, and are calculated for sensible cross-sections/event counts. (A quick check of the TrainWeight: multiply the number of events of each background by its TrainWeight and sum them; by design, this should equal the number of entries in the ttH sample.)
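The TrainWeight sanity check described above can be sketched in a few lines of Python. The sample names, event counts, and weights below are illustrative placeholders, not the real analysis numbers:

```python
# Sanity check for TrainWeights: the TrainWeight-weighted sum of
# background events should, by design, equal the number of ttH entries.

def check_train_weights(backgrounds, n_tth, tolerance=1e-6):
    """backgrounds: dict mapping sample name -> (n_events, train_weight)."""
    weighted_sum = sum(n * w for n, w in backgrounds.values())
    return abs(weighted_sum - n_tth) < tolerance

# Hypothetical example: two backgrounds whose weighted counts sum to 1000.
backgrounds = {
    "ttjj": (8000, 0.1),   # 800 weighted events
    "ttbb": (400, 0.5),    # 200 weighted events
}
print(check_train_weights(backgrounds, n_tth=1000))  # True
```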
Line: 335 to 339
The first parameter consists of two parts in this example: 'Combine' and 'Lumi'. The second part is the name of the uncertainty being considered. The first part, 'Combine' (and the separator between them), is optional; it tells the ANN that the uncertainties thus labelled are independent of each other, and can be added in quadrature. 'OnOff' tells the ANN whether to consider that uncertainty (1) or not (0). 'Low' and 'High' establish the relevant bounds of the uncertainty as fractions of the total (however, for the ANN these uncertainties are symmetrised, so to save time they are assumed symmetric here unless stated otherwise). Note that these are not the uncertainties on the quantity itself, but rather the effect of that uncertainty on the rate of your process. 'Process' is not actually read by the ANN; it is there to make the file more human-readable. The current errors and their bounds are below. Where no source for the error bounds is given, they were the defaults found in the files from time immemorial (where necessary it was assumed that all tt + X errors were the same, as were all ttbb (QCD) errors, since the original files only considered the ttjj, ttbb(EWK), ttbb(QCD) and ttH samples; these errors probably originate from the CSC note). If you are only considering rate uncertainties, this is where the fitting code will find the relevant numbers.
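The quadrature combination of independent 'Combine'-labelled uncertainties amounts to the usual root-sum-of-squares. A minimal sketch, with illustrative fractional values rather than the real configuration numbers:

```python
import math

# Independent fractional rate uncertainties (as flagged by 'Combine')
# are added in quadrature: total = sqrt(sum of squares).

def combine_in_quadrature(fractional_uncertainties):
    """Combine independent fractional rate uncertainties in quadrature."""
    return math.sqrt(sum(u * u for u in fractional_uncertainties))

# Hypothetical symmetrised uncertainties on one process: 3% lumi, 4% other.
total = combine_in_quadrature([0.03, 0.04])
print(round(total, 2))  # 0.05
```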
Line: 396 to 401
GeneralParameter string 1 Constraint=(my_failEvent&3)==0

Changed:
< < in genemflat_batch_Complete2.sh will exclude from training those events where the above constraint is false. So in the above example, only events with the first two bits equal to one will pass the filter.
> > in genemflat_batch_Complete2.sh controls the events used in the training, using a bitwise comparison. If the constraint is true (i.e. the first two bits are not set, and so equal zero), then the event is used for training.
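The bitwise behaviour of the training constraint can be sketched as follows; the event words are illustrative, not real events:

```python
# Sketch of the training filter Constraint=(my_failEvent&3)==0 :
# an event passes only when the first two bits of my_failEvent are
# both zero.

def passes_training_filter(my_failEvent, mask=3):
    return (my_failEvent & mask) == 0

events = [0b000, 0b001, 0b010, 0b100, 0b011]
kept = [e for e in events if passes_training_filter(e)]
print(kept)  # [0, 4] -- only words with bits 0 and 1 clear survive
```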
Changed:

< < In TreeSpecATLAStth.txt the filters are set with:
> > In TreeSpecATLAStth.txt the filters control what is used for the templating, and for Computentp:
Changed:

< < ListParameter SpecifyVariable:Higgs:cutMask 1 Type:int:Default:1

> > ListParameter SpecifyVariable:Higgs:cutMask 1 Type:int:Default:3

ListParameter SpecifyVariable:Higgs:invertWord 1 Type:int:Default:0

Changed:
< < The constraint is hardwired to be of the form where my_failEvent&cutMask==0 would fail the event, exactly like the constraint. However, it is not beyond the realms of possibility that you want events with bits of my_failEvent set to zero to pass, not fail. In genemflat this is easily done by changing ==0 into ==1; however, we cannot directly do this in TreeSpec. To get around the problem we have invertWord, which simply flips the relevant bits in my_failEvent before passing them to the test.
> > InvertWord is used to invert the relevant bits (in this case no bits are inverted) before the cut from cutMask is applied. The cutMask will exclude from templating those events where the matching bits are equal to zero AFTER the inversion. So here, with no inversion applied, those events with my_failEvent == 3 will be used for templating. **NOTE** The above example is inconsistent: the Constraint excludes events that have my_failEvent==3, while the invertWord/cutMask combination excludes events that have my_failEvent!=3. Making these two filters consistent requires some working out, and is ongoing.
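The invertWord/cutMask logic described above can be sketched as a two-step test: flip the requested bits, then fail the event if the masked bits are all zero (matching the hardwired `my_failEvent&cutMask==0` form). The event words below are illustrative:

```python
# Sketch of the templating filter: apply invertWord (XOR), then exclude
# events whose cutMask-selected bits are all zero after the inversion.

def passes_templating_filter(my_failEvent, cutMask=3, invertWord=0):
    inverted = my_failEvent ^ invertWord   # flip the requested bits
    return (inverted & cutMask) != 0       # keep if any masked bit is set

events = [0, 1, 2, 3, 4]
kept = [e for e in events if passes_templating_filter(e)]
print(kept)  # [1, 2, 3] -- words with bit 0 or bit 1 set survive

# With invertWord=3 the sense flips: a word of 0 now passes.
print(passes_templating_filter(0, cutMask=3, invertWord=3))  # True
```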
Running