Computentp, Neural Nets and MCLIMITS

This page has been substantially rewritten (and remains a work in progress) to focus on the information required for a successful run of the Computentp and Neural Net package, to deliver exclusions. For information on results obtained using inputs created in v12 of Athena, please refer to the archive. This page also describes how to run GlaNtp, the version of the code set up for use in Glasgow, with no CDF dependencies. To use the previous version of the code (there are some important differences) refer to r93 and earlier.

To set up the current version of GlaNtp on Glasgow AFS, create a symbolic link to the setup script:

ln -s /afs/phas.gla.ac.uk/user/a/atlasmgr/physics/GlaNtp/setup_glantp.sh setup_glantp.sh

then set up the environment:

source ./setup_glantp.sh -v 00-00-72 -b /afs/phas.gla.ac.uk/user/a/atlasmgr/physics/GlaNtp/ -s GlaNtp\ Packagev17 -a 17.0.5.5.2

This sets up the environment for working in a v17 release of Athena, and makes the GlaNtp commands available in the current working environment. Note that an overview of the GlaNtp framework, generated with doxygen, may be found here.
Project Aims

This project documents the use of an Artificial Neural Network (ANN) system and fitting software for the analysis of data from inclusive Higgs searches at ATLAS involving a lepton trigger and Higgs decay to b+bbar. It uses input from the Computentp software, which is designed to automate the weighting of the input files as required.

Tools

ANN - This is a kind of algorithm with a structure consisting of "neurons" organised in a sequence of layers. The most common type, which is used here, is the Multi-Layer Perceptron (MLP), which comprises three kinds of layer. The input neurons are activated by the input data, and once activated they pass data on to a further set of "hidden" neurons (which can in principle be organised into any number of layers, but most frequently one or two - in our case one); finally the processed data is forwarded to the output neurons. The key feature of a neural network is its ability to be "trained" to recognise patterns in data, allowing high-efficiency algorithms to be developed with relative ease. This training is typically done with sample data which has been generated artificially, resulting in an algorithm that is very effective at recognising certain patterns in data sets. The main shortcoming is the danger of "over-training" an ANN, meaning that it becomes overly discriminating and searches across a narrower range of patterns than is desired (one countermeasure is to add extra noise to the training data).

Computentp - Simply running the code as above will result in less than optimal Neural Net training. The training procedure requires equal numbers of events from signal and from background (in this case half of the signal events are used in training, half for testing). However, the above code will take events from the background samples in proportion to the file sizes - this results in proportions not quite in accordance with the physical ratios.
As the Neural Net weights results according to information stored in the tree (the cross-section of the process and so on), the final result is that while the outputs are weighted in a physical fashion, the Net is not trained to the same ratios, and so is not optimally trained. To solve this problem, Computentp is used to mix together all background and signal samples and assign TrainWeights to them, so that the events are weighted correctly for the Net's training.

Preparing samples for the Neural Net

Previous work went into producing samples for the Neural Net from AODs - results have previously been obtained for MC samples derived from v12 and v15 of Athena, with work directed toward upgrading this to v16. The inputs are created from AODs using the TtHHbbDPDBasedAnalysis package (currently 00-04-18 and its branches are for v15, 00-04-19 is for v16), which can be found here.

Current samples in use

Input data and cross-sections

These cross-sections are for the overall process, at √s = 7 TeV. The ttH sample cross-sections are provided for the overall process - the MC is divided into two samples, with W+ and W- independent of one another. These two samples are merged before being put through the ANN. The tt samples were initially generated to produce the equivalent of 75 fb-1 of data, based on the LO cross-sections. Taking into account the k-factor of 1.84, this means that all samples now simulate 40.8 fb-1 of data. These samples have also had a generator-level filter applied - most events (especially for tt+0j) are of no interest to us, and we don't want to fill up disk space with them, so we apply filters based on the number of jets etc. The Filter Efficiency is the fraction of events that pass from the general sample into the final simulated sample. To clarify how all the numbers hang together, consider the case of tt+0j. We have simulated 66,911 events - as said above, this corresponds to 40.8 fb-1 of data.
With a Filter Efficiency of 0.06774, the full number of events before filtering comes to 987,762 events in 40 fb-1. Divide this by 40 to get the number of events in 1 fb-1 (i.e. the cross-section), and you get 24,694 events per fb-1. Our starting LO cross-section is 13.18 pb, and with the k-factor of 1.84 this gives a cross-section of 24.25 pb - so all the numbers compare with each other pretty favourably. This also makes getting from the number of sensible-state events to the number expected per fb-1 rather easy - simply divide by 40.8. Note that the cross-section already includes all the branching ratios, so we don't need to worry about those.

**IMPORTANT** The Filter Efficiency for these samples was calculated on a no-pileup sample. The filter is applied at generator level, and one of the things it will cut an event for is having too few jets. Pileup adds jets, but these are added well after the filter. The net result is that a number of events that failed the filter would have passed, had the pileup been added earlier in the process. This means the filter efficiency (and thus the cross-sections) are incorrect, by an amount yet to be determined.

For the other samples, however, we do need to worry about branching ratios - the quoted initial cross-section includes all final states, so we need to apply branching ratios to reduce the cross-section so that it reflects the sample we've generated. We then need to reduce the cross-section further so that it reflects the number of sensible states.

These cross-sections and branching ratios are correct as of 8 Feb 2011. qq→ttbb (EWK) is currently not being used, owing to a bug in the production of the MC.

Number of events surviving preselection, weights and TrainWeights

(See later in this TWiki for an explanation of weights and TrainWeights.)
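The tt+0j arithmetic above, and the TrainWeight cross-check used later on this page, can be verified with a few lines. This is a sketch only: the helper names are ours (not part of GlaNtp or Computentp), and the toy TrainWeight numbers are hypothetical.

```python
# Sanity checks for the numbers quoted above. All tt+0j values come from the
# text; the helper functions are illustrative, not part of the package.

def unfiltered_events(n_simulated, filter_eff):
    """Undo the generator-level filter to recover the full event count."""
    return n_simulated / filter_eff

def trainweight_sum(backgrounds):
    """Sum of N_events * TrainWeight over all backgrounds.

    By construction this should equal the number of entries in the
    signal (ttH) sample."""
    return sum(n * tw for n, tw in backgrounds)

# tt+0j: 66,911 simulated events, filter efficiency 0.06774, ~40 fb^-1
n_full = unfiltered_events(66911, 0.06774)   # about 987,762 events
xsec_from_counts = n_full / 40.0             # about 24,694 events per fb^-1
xsec_from_theory = 13.18 * 1.84 * 1000       # 13.18 pb LO x k-factor, as events per fb^-1

# Toy TrainWeight check: two hypothetical backgrounds against a 10,000-event signal
toy = [(66911, 10000 / 66911 * 0.7), (50000, 10000 / 50000 * 0.3)]
```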
This table will be completed with all the relevant weights and TrainWeights at a later date - these values are to be compared with the output from Computentp to ensure everything is working as intended, and are calculated for the sensible cross-sections/events. (A quick check of the TrainWeight is to multiply the number of events of each background by its TrainWeight and sum them - by design, this should equal the number of entries in the ttH sample.)

Running the Neural Net

Things to do

* In the script used to make the webpage showing the results, the reference to H6AONN5MEMLP is hardwired. It should become an argument. It is the name of the method given to TMVA in the training, so if it changes in one place you should be able to change it in the other.

Overview of the process
Setting up

Initial Setup

In the file genemflat_batch_Complete2.sh, you should replace the e-mail address in the line:

#PBS -j oe -m e -M a.gemmell@physics.gla.ac.uk

with your own - this enables the batch system to send you an e-mail informing you of the completion (successful or otherwise) of your job.

Trainweights and Weights

Initial versions of the code had the weights for the ANN hard-coded into the runcards (General Parameter FixWeight in the FlatReader file). Later versions of the code do not need this hardcoding - weights are calculated from values found in the input files themselves, and FixWeight has been set to 1 (it can be used to multiply certain samples' cross-sections by a given number, if desired). However, the formulae to obtain the weights are still quoted below, so that you can check Computentp's work and make sure that it makes sense.

N.B. These weights are wrong for the ttjj (5212) sample. The input produced in v12 of Athena was initially produced using MC@NLO. This produces both positively (+1) and negatively (-1) weighted events (an easy way to think of the negatively weighted events is as destructive events that interfere with the positively weighted ones, with the net result of decreasing the cross-section of the process). We considered all events equally in our calculation of the weights, simply counting the total number of events in the files. This problem will disappear when we switch to v15 inputs, where the ttjj samples have been produced using Alpgen.

The first weight to be considered is TrainWeight - the scale factor we multiply each of the background events by so that the backgrounds are in physically realistic ratios in relation to one another, while enforcing the requirement of the ANN training that we have equal numbers of signal and background events - used for the training of the ANN.
The calculation for the TrainWeight is:

(Number of generated signal events / Number of generated events for that background) * (Cross-section for that background / Cross-section for all backgrounds combined)

The next weight is simply called Weight - the scale factor used to produce a physically realistic input for the ANN, now with the signal weighted as well. The formula used to find this is simply:

(Number of events expected for your desired luminosity) / (Number of events present in input dataset)

Both of these numbers are calculated by Computentp, and can be checked in the file trees/NNInputs_120.root, in the branches labelled 'TrainWeight' and 'weight'. However, it should be noted that while the TrainWeight produced by Computentp is used by the ANN in the training sequence, the final results are produced independently of Computentp - the ANN calculates the weight on its own.

FlatReader and FlatPlotter

genemflat_batch_Complete2_SL5.sh creates a file called FlatPlotterATLAStth${prefix}.txt. This file is used in the 'templating' phase of genemflat_batch_Complete2_SL5.sh and is based on the template provided in teststeerFlatPlotterATLAStthSemileptonic-v15.txt. Via the FlatPlotter file, the FlatReader files are included in the call:

runFlatPlotter \$steerPlotter

which produces a template for each of the signal and individual background samples.

User Setup

To set up the neural net,
Getting a copy of GlaNtp
HwwFlatFitATLAS Validation succeeded
Done with core tests
Result of UtilBase validation: NOT DONE: NEED
Result of Steer validation: OK
Result of StringStringSet validation: OK
Result of StringIntMap validation: OK
Result of ItemCategoryMap validation: OK
Result of FlatSystematic validation: OK
Result of LJMetValues validation: OK
Result of PhysicsProc validation: OK
Result of FlatNonTriggerableFakeScale validation: OK
Result of FlatProcessInfo validation: OK
Result of PaletteList validation: OK
Result of CutInterface validation: NOT DONE: NEED
Result of NNWeight validation: NOT DONE: NEED
Result of FlatFileMetadata validation: OK
Result of FlatFileMetadataContainer validation: OK
Result of Masks validation: NOT DONE: NEED
Result of FFMetadata validation: OK
Result of RUtil validation: NOT DONE: NEED
Result of HistHolder validation: NOT DONE: NEED
Result of GlaFlatFitCDF validation: OK
Result of GlaFlatFitBigSysTableCDF validation: OK
Result of GlaFlatFitBigSysTableNoScalingCDF validation: OK
Result of GlaFlatFitATLAS validation: OK
Result of FlatTuple validation: OK
Result of FlatReWeight validation: OK
Result of FlatReWeight_global validation: OK
Result of FlatReWeightMVA validation: OK
Result of FlatReWeightMVA_global validation: OK
Result of TreeSpecGenerator validation: OK
Result of FlatAscii validation: OK
Result of FlatAscii_global validation: OK
Result of FlatTRntp validation: OK

Variables used by the GlaNtp package

The variables used by the package can be divided into two sets. The first are those variables that are constant throughout the sample - the 'global' variables (e.g. the cross-section of the sample). These can be specified in their own tree, where they will be recorded (and read by GlaNtp) once only. If desired, these variables can instead be defined within the main tree of the input file - however, then they will be recorded once per event, and read in once per event. This is obviously a bit wasteful, but for historical reasons it can be done.
To determine which of these behaviours you use, set LoadGlobalOnEachEvent in FlatPlotter and FlatReader to 1 for the values to be read in on an event-by-event basis, or 0 for them to be read in once from the global tree (or from the first event only). For more information on this switch, refer to this.

The other variables are those that change on an event-by-event basis. These include both the variables we are going to train the Neural Net on (more information is given in the relevant section of this TWiki), and other useful variables, such as filter flags (which tell GlaNtp whether an event is sensible or not). All of these variables are listed in the file VariableTreeToNTPATLASttHSemiLeptonic-v15.txt. The file maps logical values to their branch/leaf. The tree can be the global tree or the event tree.

GeneralParameter string 1 FlatTupleVar/<variable_name>=<tree>/<variable_name_in_tree>

Also specified are the names of the leaves for the cutmask and invert word - these are global values for a file.

GeneralParameter string 1 CutMaskString=cutMask
GeneralParameter string 1 InvertWordString=invertWord

The structure of Computentp's output is specified by:

ListParameter EvInfoTree:1 1 NN_BJetWeight_Jet1:NN_BJetWeight_Jet1/NN_BJetWeight_Jet1

If you want a parameter to be found in the output, it is best to list it here.

Calculating my_integral (The Magic Formula)

1. Check the critical formula. The most important formula is the first thing to check:

weight*= GetSF()*GetXsect()*GetBrFrac()*GetFilterEff()*GetLumiForType()/GetNGenForType();

This uses my_brFrac, my_filterEff, my_xSect, my_lumiForType and my_nGenForType. You also need a scale factor: my_sf.

2. Check the values that FlatReader uses. These are documented in GlaNtp/NtpAna/test/VariableTreeToNtp.txt, the file that maps logical values to their physical branch/leaf. Anything prefaced with FlatTupleVar needs to be specified, or is useful to specify.
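As a concrete illustration, the critical formula above can be written out in a few lines. This is a sketch only: the function and argument names are ours, mirroring my_sf, my_xSect, my_brFrac, my_filterEff, my_lumiForType and my_nGenForType, and the example numbers are hypothetical.

```python
# Per-event weight following the formula quoted above:
#   weight *= GetSF()*GetXsect()*GetBrFrac()*GetFilterEff()
#             *GetLumiForType()/GetNGenForType();

def event_weight(sf, xsect, br_frac, filter_eff, lumi_for_type, n_gen_for_type):
    """Scale one event so the sample totals match the expected yield."""
    return sf * xsect * br_frac * filter_eff * lumi_for_type / n_gen_for_type

# Hypothetical example: cross-section of 24,250 events per fb^-1, unit scale
# factor, branching fraction and filter efficiency already folded in,
# 1 fb^-1 of luminosity, 66,911 generated events.
w = event_weight(1.0, 24250.0, 1.0, 1.0, 1.0, 66911)
```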
Values are divided into those that can change on each event (kept in the "ev" tree) and those that are the same for a file (kept in the "global" tree). As noted above, you can set the tree names. You really should create a global tree for the global file values now; we have procrastinated on this for a long time. There you see:

#
# Values that are required from global
#
GeneralParameter string 1 FlatTupleVar/BrFrac=globalInfo/BrFrac
GeneralParameter string 1 FlatTupleVar/FilterEff=globalInfo/FilterEff
GeneralParameter string 1 FlatTupleVar/Fraction=Fraction/Fraction
GeneralParameter string 1 FlatTupleVar/Integral=Integral/Integral
GeneralParameter string 1 FlatTupleVar/XSect=globalInfo/Xsect
# This specifies the name of the leaf for the cutmask and invert word.
# Again, these are global values for a file.
GeneralParameter string 1 CutMaskString=cutMask
GeneralParameter string 1 InvertWordString=invertWord

This confirms that Fraction and Integral are needed: my_fraction, my_integral.

Here are the values that are required from ev:

#
# Values that are required from ev
#
GeneralParameter string 1 FlatTupleVar/Channel=evInfo/Channel
GeneralParameter string 1 FlatTupleVar/DilMass=evInfo/Mll
GeneralParameter string 1 FlatTupleVar/Entry=evInfo/ientry
GeneralParameter string 1 FlatTupleVar/Event=evInfo/eventNumber
GeneralParameter string 1 FlatTupleVar/Lep1En=evInfo/lep1_E
GeneralParameter string 1 FlatTupleVar/Lep2En=evInfo/lep2_E
GeneralParameter string 1 FlatTupleVar/LumiForType=evInfo/lumiForType
GeneralParameter string 1 FlatTupleVar/MEVal=LRInfo/LRHWW
GeneralParameter string 1 FlatTupleVar/NGenForType=evInfo/nGenForType
GeneralParameter string 1 FlatTupleVar/Njets=evInfo/Njets
GeneralParameter string 1 FlatTupleVar/Rand=evInfo/Rand
GeneralParameter string 1 FlatTupleVar/Run=evInfo/runNumber
GeneralParameter string 1 FlatTupleVar/Weight=LRInfo/weight
GeneralParameter string 1 FlatTupleVar/cutWord=evInfo/cutWord
GeneralParameter string 1 FlatTupleVar/lep1_Type=evInfo/lep1_Type
GeneralParameter string 1 FlatTupleVar/lep2_Type=evInfo/lep2_Type
GeneralParameter string 1 FlatTupleVar/sf=evInfo/sf

Some can be used with the default values that FlatTuple gives:

GeneralParameter string 1 FlatTupleVar/Channel=evInfo/Channel
GeneralParameter string 1 FlatTupleVar/DilMass=evInfo/Mll
GeneralParameter string 1 FlatTupleVar/MEVal=LRInfo/LRHWW
GeneralParameter string 1 FlatTupleVar/Rand=evInfo/Rand
GeneralParameter string 1 FlatTupleVar/lep1_Type=evInfo/lep1_Type
GeneralParameter string 1 FlatTupleVar/lep2_Type=evInfo/lep2_Type

Some are OK to leave if you don't want to use them; there are switches that turn on the use of these:

GeneralParameter string 1 FlatTupleVar/Lep1En=evInfo/lep1_E
GeneralParameter string 1 FlatTupleVar/Lep2En=evInfo/lep2_E
GeneralParameter string 1 FlatTupleVar/Weight=LRInfo/weight

Some are useful for plotting:

GeneralParameter string 1 FlatTupleVar/Njets=evInfo/Njets

I think you have my_Eventtype as Channel and my_failEvent as cutWord.

Variables that must be listed in the event (not the global) tree

nGenForType, LumiForType, Eventtype

Variables used for training the Neural Net

The list of variables on which the neural net is to train is set in the shell script, under TMVAvarset.txt (this file is created when the script runs). At present, these variables are:

The b-weights for the six 'leading' jets - currently the jets are ranked according to their b-weights, but it is possible to rank them according to pT and energy. The decision about how to rank them is made in the AOD -> NTuple stage:

NN_BJetWeight_Jet1 NN_BJetWeight_Jet2 NN_BJetWeight_Jet3 NN_BJetWeight_Jet4 NN_BJetWeight_Jet5 NN_BJetWeight_Jet6

The masses and pT of the various jet combinations (only considering the four 'top' jets - i.e.
if ranked by b-weights, the jets that we expect to really be b-jets in our signal):

NN_BJet12_M NN_BJet13_M NN_BJet14_M NN_BJet23_M NN_BJet24_M NN_BJet34_M
NN_BJet12_Pt NN_BJet13_Pt NN_BJet14_Pt NN_BJet23_Pt NN_BJet24_Pt NN_BJet34_Pt

The sums of the eT of the two reconstructed tops, for each of the top three states:

NN_State1_SumTopEt NN_State2_SumTopEt NN_State3_SumTopEt

And the differences between the eta and phi of the two reconstructed tops, again from the top three states:

NN_State1_DiffTopEta NN_State2_DiffTopEta NN_State3_DiffTopEta
NN_State1_DiffTopPhi NN_State2_DiffTopPhi NN_State3_DiffTopPhi

You also need to provide addresses to the Neural Net so that it can find the variables in the input trees. This is done inside VariableTreeToNTPATLASttHSemiLeptonic-v15.txt:

ListParameter EvInfoTree:1 1 NN_BJetWeight_Jet1:NN_BJetWeight_Jet1/NN_BJetWeight_Jet1

Currently all information is in the EvInfoTree, which provides event-level information. However, future work will involve trying to establish a GlobalInfoTree, which contains information about the entire sample, such as the cross-section - this will only need to be loaded once, and saves having to write the same information into the tree repeatedly, and subsequently read it repeatedly.

Variable Weights in the Neural Net

To set up a neural net for the analysis of a particular kind of data it is necessary to train it with sample data; this process adjusts the "weights" on each variable that the neural net analyses in the ntuple, in order to optimise performance. These weights can then be viewed as a scatter plot in ROOT.

Specifying files as Signal/Background or as real data

The input datasets need to be specified in a number of peripheral files, so that the ANN can distinguish between signal and background MC files or real data files.
There is only one place where data files need to be specified differently from MC - FlatAtlastthPhysicsProc1.txt - and if you are running the fit with data and not pseudodata, this is determined through one single flag, set in genemflat - see here. Errors for each process also need to be specified - how this is done is detailed in that section. The relevant files for adding processes are atlastth_histlist_flat-v15.txt, AtlasttHRealTitles.txt, FlatAtlastthPhysicsProc1.txt and FlatSysSetAtlastth1.txt. There are also some files that are produced through the action of genemflat_batch_Complete2_SL5.sh.

At several points in these files, there are common structures for inputting data, relating to ListParameter and ColumnParameter:

ListParameter <tag> <onoff> <colon-separated-parameter-list>

<onoff> specifies whether this parameter will be taken into consideration (1) or ignored (0) - generally this should be set to 1. <tag> and <colon-separated-parameter-list> vary from process to process, and will be explained for individual cases below. There can only be one instance of a <tag> active at any one time (i.e. you can write more than one version, but only one can be taken into consideration).

ColumnParameter <tag> <sequence> <keyword=doubleValue:keyword=doubleValue...>

The expression <tag>:<sequence> must be unique, e.g.

ColumnParameter File 0 OnOff=0:SorB=0:Process=Data
ColumnParameter File 1 OnOff=1:SorB=0:Process=Fake

where <tag> is the same, but <sequence> is different. The fact that the <sequence> carries meaning is specific to the implementation. Note that all of the values passed from ColumnParameter will eventually be evaluated as Doubles - for any variables where you pass a string (as for 'Process' above), the string is not actually passed to the code - these snippets are there to make the file more easily readable by puny humans, who comprehend the meaning of strings more readily than Doubles.
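To make the <tag>/<sequence>/keyword=doubleValue structure concrete, here is a minimal sketch of how such a line could be parsed. This is our illustration, not the actual GlaNtp parser; the treatment of string values as carrying no meaning follows the description above.

```python
# Minimal parse of a ColumnParameter line. Everything after <sequence> is a
# colon-separated list of keyword=value pairs; values are treated as doubles,
# and string values (e.g. Process=Data) are human-readable comments only.

def parse_column_parameter(line):
    _, tag, sequence, params = line.split(None, 3)
    values = {}
    for pair in params.split(":"):
        key, _, val = pair.partition("=")
        try:
            values[key] = float(val)     # evaluated as a Double
        except ValueError:
            values[key] = None           # string: not passed to the code
    return (tag, int(sequence)), values  # (tag, sequence) must be unique

key, vals = parse_column_parameter("ColumnParameter File 0 OnOff=0:SorB=0:Process=Data")
```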
atlastth_histlist_flat-v15.txt

This file provides a map for the ANN, giving it the output file names (and the directory in which they are to be stored, relative to ${template_area} - set in genemflat) and the tree structure where the final result of the ANN will be stored in the output (in the example below, the output file is ${template_area}/116102-filter.root, and the result graph will be FlatPlotter/NNScoreAny_0_0_0). The number to the left of the file name indicates which process it is - this is also used in the files AtlasttHRealTitles.txt and FlatAtlastthPhysicsProc1.txt, and needs to be consistent with them - and corresponds to the variable my_Eventtype in the input files (this can also be influenced in genemflat). Thus it is possible to assign multiple files to the same process (e.g. a file for the electron stream and one for the muon stream both assigned to Data), by giving them a common number at the start of the line.

0 116102-filter.root FlatPlotter/NNScoreAny_0_0_0 0

In general the file expects each line to contain an integer, two strings and another integer, separated by spaces. If the integers are less than one, then that line is ignored. Therefore, so long as you are careful to exclude spaces from your strings, and stick to the string/integer formula, it is possible to place comments in this file:

-1 -------------------------------------------------------- x -1
-1 ttH x -1
-1 -------------------------------------------------------- x -1

AtlasttHRealTitles.txt

The list of signal/background processes can be found in AtlasttHRealTitles.txt (where the names are specified and associated with process numbers - these process/number associations need to be the same as in atlastth_histlist_flat-v15.txt and FlatAtlastthPhysicsProc1.txt). At present these are:

Process_0_0 TTjj:Semileptonic
Process_1_0 ttH:Semileptonic
Process_2_0 EWK:Semileptonic
Process_3_0 QCD:Semileptonic

FlatStackInputSteer.txt / FlatStackInputSteerLog.txt

These files create the stacked input plots.
FlatStackInputSteer.txt creates plots with a linear y-scale; FlatStackInputSteerLog.txt uses a log y-scale. Comments within the file explain the meaning of a few parameters.

genemflat_batch_Complete2_SL5.sh

genemflat creates the file TMVAsteer.txt, which sets a number of parameters for the running of the ANN - the constraints on the events, the precise Neural Net structure and so on. For establishing the input files, we are interested in only a couple of these parameters:

GeneralParameter string 1 FileString=my_Eventtype

This indicates the leaf in the input file which shows which process the event belongs to - this is the same number as we specify later in genemflat for steerComputentp.txt - it does not have to be consistent with the process numbers as defined in atlastth_histlist_flat-v15.txt, AtlasttHRealTitles.txt and FlatAtlastthPhysicsProc1.txt.

ColumnParameter File 1 OnOff=1:SorB=1:Process=tth

The number before the switches (OnOff, SorB, etc. - in this case 1) corresponds to the number given in AtlasttHRealTitles.txt. The other values are self-explanatory - they establish whether that file is to be used, whether it is signal or background (1=signal, 0=background) and the name of the process. In this instance, the Process name is just a comment for your own elucidation - it is not itself used in the code, so it does not have to correspond to the process names provided in AtlasttHRealTitles.txt (though of course it is useful for them to be similar).

The other file produced by genemflat that specifies the input files for Computentp is steerComputentp.txt:

# Specify the known metadata
ListParameter SignalProcessList 1 Alistair_tth
ListParameter Process:Alistair_tth 1 Filename:${ntuple_area}/ttH-v15.root:File:${mh}:IntLumi:1.0

This is just a list of the various input files, and we specify the integrated luminosity.
The 'File' parameter is only used for book-keeping by Computentp, and does not have to correspond to the file numbers used in the ANN steering files (or to my_Eventtype), but for sanity's sake it is probably best to keep things consistent. We make an exception for the signal - we assign it the number ${mh} - so that we can keep track of things if we have Higgs of different masses in our signals.

# Map of input file name to output file name: the ComputentpOutput will have a sed used to get the right mapping.
ListParameter InputOutputMapName:1 1 ${ntuple_area}/ttH-v15.root:${Computentpoutput}/tth_NNinput.root

The number after InputOutputMapName doesn't have to bear any relevance to any numbers that have gone before - just give each output a unique number. This is followed by the mapping of the input file names provided to the output names that Computentp will produce.

FlatAtlastthPhysicsProc1.txt

This file contains various parameters:

ColumnParameter BackgroundList 0 tt0j=0
ColumnParameter SignalList 1 ttH=1
ColumnParameter DataList 1 Data=11

Here you specify once again the numbers assigned to the processes by my_Eventtype (for tt0j it equals zero), and list them under BackgroundList, SignalList or DataList. The number after 'BackgroundList' or 'SignalList' is unique for each process (to preserve the uniqueness of <tag>:<sequence>), and it must be sequential, running from 0 to n-1 (where you have n samples) - apart from DataList entries (as shown above). It does not need to correspond to my_Eventtype; however, for completeness' sake within this file I have set it as such. The number at the end of the declaration (tt0j=0 in this case) also needs to be sequential - it instructs the net of the order in which to process the samples, so it must run from 0 to n-1 (when you have n samples). It must match up with the numbers provided in atlastth_histlist_flat-v16.txt and AtlasttHRealTitles.txt so that processes and data can be matched to the various individual files.
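The sequential-numbering constraint just described (0 to n-1, no gaps) is easy to get wrong by hand; a small checker like this sketch (our illustrative helper, not part of GlaNtp) makes the rule explicit:

```python
# Check that a set of sequence numbers runs from 0 to n-1 with no gaps, as
# required for BackgroundList/SignalList entries in FlatAtlastthPhysicsProc1.txt.

def sequences_are_valid(sequence_numbers):
    """True if the numbers are exactly 0..n-1 in some order."""
    return sorted(sequence_numbers) == list(range(len(sequence_numbers)))

# e.g. BackgroundList 0 (tt0j) and SignalList 1 (ttH): sequences 0 and 1
ok = sequences_are_valid([0, 1])    # valid
bad = sequences_are_valid([0, 2])   # gap at 1: invalid
```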
ColumnParameter PseudoDataList 0 tt0j=0

This is simply a restatement of the BackgroundList (as we're looking for exclusion, the pseudodata is background only) - the same numbers in the same places. This list specifies the processes included in the pseudoexperiments, and therefore the signal process is not included in it.

ListParameter ProcessLabels:1 1 tt0j:t#bar{t}0j

The number after ProcessLabels again doesn't correspond to my_Eventtype - I have made it the same as the number after BackgroundList/SignalList and PseudoDataList. The important feature is that it tells the ANN how to label each of the various processes in the results plots. The numbers must run from 1 to n.

ColumnParameter UCSDPalette 0 tt0j=19
ColumnParameter PrimaryColorPalette 0 tt0j=0

These two parameters specify the colours used in the plotting for each of the processes (the numbers correspond to those in the Color Wheel of TColor). The numbers after UCSDPalette and PrimaryColorPalette are the same ones as have been used previously in this file. Whether the plotting uses the colours stated in UCSDPalette or PrimaryColorPalette is determined in the file flatsteerStackNNAtlas.txt by setting the parameter:

GeneralParameter string 1 Palette=UCSDPalette

The final parameter to be set in FlatAtlastthPhysicsProc1.txt is:

ColumnParameter ProcessOrder 0 tt0j=0

Once again, the number on its own (in this case 0) is the same as the other such instances in this file. The final number (zero in this case) is the order in which this process should be plotted - i.e. in this case, the tt0j sample will be plotted first in the output, with the other samples piled on top of it. This number does not need to correspond to my_Eventtype.

FlatSysSetAtlastth1.txt

This file contains all the information on the errors that you pass to the ANN so that it can work out how the errors propagate to the final plots and answers; most of the details of this file are covered in that section.
The basic format of the file is:

ColumnParameter Combine:Lumi 0 OnOff=1:Low=-0.11:High=0.11:Channel=1:Process=TTjj

The <sequence> parameter (in this case '0') is there so that you can specify the parameters of a given error for multiple channels, without falling foul of the uniqueness requirement for <tag>:<sequence>. We have chosen it to equal my_Eventtype for that process. 'Channel' is present in case you're considering multiple channels; we're only considering one channel here (SemiLeptonic). The final parameter (Process) is not actually used - the second parameter tells the ANN which errors are which, but this isn't very easily read by you, so feel free to add it to help you keep track of the various errors! These final few parameters can be placed in any order, so long as they are separated by colons.

teststeerFlatReaderATLAStthSemileptonic-v16.txt

This file contains parameters to control the loops over events.

GeneralParameter int 1 NEvent=20000000
GeneralParameter int 0 FirstEvent=1
GeneralParameter int 0 LastEvent=10

FirstEvent and LastEvent allow you to specify a range of events to run over - this is liable only to be useful during debugging. (Note that these parameters are currently turned off.) NEvent gives the maximum number of events processed for any given sample - take care with this if you are running a particularly large sample through the code.

Setting which variables to plot and train on

You need to let GlaNtp know where the variables you are interested in are. It is also possible merely to plot some variables you're interested in without adding them to the training just yet.
You need to provide GlaNtp with the location of the variables in all cases - this is done in VariableTreeToNTPATLASttHSemiLeptonic-v16.txt and TreeSpecATLAStth-v16_event.txt (or TreeSpecATLAStth-v16_global.txt as applicable).

VariableTreeToNTPATLASttHSemiLeptonic-v16.txt

ListParameter EvInfoTree:1 1 my_NN_BJetWeight_Jet1:my_NN_BJetWeight_Jet1/my_NN_BJetWeight_Jet1

This information must be provided for every variable you're interested in in any way. It provides the variable name, and a map to that variable name from the input tree. Note that the number after EvInfoTree must be unique for each entry (EvInfoTree:2, EvInfoTree:3, etc.).

TreeSpecATLAStth-v16_event.txt

ListParameter SpecifyVariable:my_NN_BJetWeight_Jet1 1 Type:double

This is another compulsory piece of information for GlaNtp - telling it which tree the information is in (event or global) and the variable type.

teststeerFlatPlotterATLAStthSemileptonic-v16.txt

ColumnParameter SpecifyHist:my_NN_BJetWeight_Jet1 0 OnOff=1:Min=-5:Max=10:NBin=25

This is just for the plotting scripts (but if you're training on variables, you probably want them plotted as well...). The number after the SpecifyHist string (in this case 0) needs to be different for each entry. OnOff decides whether the variable is to be plotted or not, and must be specified. Min and Max specify the range of the x-axis (for energy / mass, this is in units of MeV), and unless specified default to 0 and 200 respectively. NBin specifies the number of bins in the histogram, with a default of 50.

TMVAvarset.txt (genemflat_batch_Complete2_SL5.sh)

This is for the templating - a nice and simple list of all the variables you want to train on. Simples.

FlatStackInputSteer.txt / FlatStackInputSteerLog.txt

Parameters for the templating and the making of the stacked plots. Individual parameters are commented within the file itself.

ATLAStthDiscrToLabel.txt

Called by the FlatStack files.
It allows for more instructive axis labels.

ListParameter DiscrToLabel:7 1 my_NN_BJet12_M:M^{BJet}_{12}\(MeV/c^{2}),MeV/c^{2}

The number following DiscrToLabel must match that given in VariableTreeToNTPATLASttHSemiLeptonic-v16.txt. Then comes the real variable name - the name that the code deals with. Following the colon is the x-axis label, written in LaTeX-style formatting. The backslash denotes a space in the axis label (the parameter must be one long continuous stream). The part after the comma is optional, but if used specifies the units for the y-axis (e.g. # events per MeV). These labels are written using Root's LaTeX markup.

Setting Systematic Uncertainties

The fitting code can take into account two different types of systematic uncertainty - rate and shape. The basic method to obtain both is to make input samples for your nominal sample and for the two bounds of a given error (e.g. Initial State Radiation, ISR), repeating this for all of the errors you wish to consider. The rate systematic uncertainty is simply how the number of events passing your preselection cuts etc. changes (you can consider this alone, if you like). To obtain the shape uncertainty, you should pass each of the resulting datasets through the ANN (up to and including the templating, so that you have ANN results both for the nominal sample and for each varied background). These ANN outputs can then be used to produce the rate uncertainties based on their integrals, before being normalised to the nominal cross-section so as to find the shape uncertainty - a measure of the percentage change in the bin-by-bin distribution for each error. The fitting code is passed the relevant information about errors through a number of files, but in the simplest case (when shape uncertainties are not being considered) there are only two: FlatSysSetAtlastth1.txt and SysNamesAtlastth1.txt.
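As a numerical illustration of taking the rate uncertainty from template integrals, the fractional change between a nominal and a varied template integral can be computed directly (the numbers 613 and 650 are invented for this example, not taken from a real run):

```shell
# Invented example: nominal template integral vs. (say) ISR-up template integral.
# The rate uncertainty is simply the fractional change in the integral.
nominal=613
varied=650
awk -v n="$nominal" -v v="$varied" 'BEGIN { printf "%.4f\n", (v - n) / n }'   # prints 0.0604
```

This is the kind of number that would be entered as Low/High in FlatSysSetAtlastth1.txt if you were providing rate uncertainties by hand rather than via UseShapeMean.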
The basic call to the fitting code is in genemflat_batch_Complete2_SL5.sh:

sysfile=FlatSysSetAtlastth.txt
steerfile=FlatFitSteer.txt
mkdir -p templates/fit
rm -f templates/fit/out_${mh}.log
Fit ${basehistlistname} ${template_area}/ \$sysfile \$steerfile $mh > templates/fit/out_${mh}.log

The final call is rendered in the actual job file (e.g. run114) as

Fit /home/ahgemmell/NNFitter-00-00-09-Edited/NNTraining/atlastth_histlist_flat-v15.txt templates/tth120/ $sysfile $steerfile 120 > templates/fit/out_120.log

If you want to save time (by not having to run templating for every error you wish to consider), you can instead consider only the rate uncertainties, and provide these as fractional changes to the rate, specified in FlatSysSetAtlastth1.txt. Whether or not you consider shape uncertainties is controlled by a couple of parameters in the steering file FlatFitSteer.txt (which is created by the action of genemflat_batch_Complete2_SL5.sh):

GeneralParameter bool 1 UseShape=0
GeneralParameter bool 1 UseShapeMean=0

Setting UseShape=1 means shape uncertainties will be taken into account for all the uncertainties that you provide the extra steering files and ANN scores for. UseShapeMean=1 means that the ANN results for your various uncertainties will be used to produce the rate uncertainties based on their integrals, rather than on the numbers provided in FlatSysSetAtlastth1.txt - using the relative sizes of the integrals of the ANN output as an estimator of the rate uncertainty can be useful if you don't want to be subject to statistical variations in the computation of your systematic uncertainties (if UseShapeMean=0, the systematic rate uncertainty is calculated as a fractional change on the nominal rate). Considering shape uncertainties requires more steering files, and this is detailed later.
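For a fit that takes shape uncertainties into account and also derives the rate uncertainties from the template integrals, both flags would be switched on - a sketch of the relevant FlatFitSteer.txt lines, mirroring the syntax above:

```text
GeneralParameter bool 1 UseShape=1
GeneralParameter bool 1 UseShapeMean=1
```

With UseShape=1 and UseShapeMean=0, shape variations are fitted but the rate numbers still come from FlatSysSetAtlastth1.txt.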
FlatSysSetAtlastth1.txt

ColumnParameter Combine:Lumi 0 OnOff=1:Low=-0.11:High=0.11:Channel=1:Process=TTjj

The first parameter consists of two parts in this example: 'Combine' and 'Lumi'. The second part is the name of the uncertainty being considered. The first part, 'Combine' (and the associated colon between them), is optional: it tells the ANN that the uncertainties thus labelled are independent of each other, and can be added in quadrature. 'OnOff' tells the ANN to consider that uncertainty (1) or not (0). 'Low' and 'High' establish the relevant bounds of the uncertainty as fractions of the total (however, for the ANN these uncertainties are symmetrised, so to save time they are here assumed to be symmetric unless stated elsewhere) - note that these are not the uncertainties on the quantity, but rather the effect of that uncertainty on the rate of your process. Process is not actually read by the ANN, but is there to make the whole thing more human-friendly to read. The current errors and their bounds are below. If no source for these error bounds is given, then they were the defaults found in the files from time immemorial (where as necessary I assumed that all tt + X errors were the same, as were all ttbb (QCD) errors, as in the original files the only samples considered were ttjj, ttbb(EWK), ttbb(QCD) and ttH - these errors probably originate from the CSC note). If you are only considering rate uncertainties, this is where the fitting code will find the relevant numbers.

SysNamesAtlastth1.txt

ListParameter SysInfoToSysMap:1 1 Combine:LumiTrigLepID

The number in the <tag> after SysInfoToSysMap is unique for each error (in this case it goes from one to eight). There is one entry per error considered, apart from the cases where the errors are combined in quadrature (as specified in FlatSysSetAtlastth1.txt), where they are given one entry to share between them.
The <colon-separated-parameter-list> provides a map between the names of the errors as considered by FlatSysSetAtlastth1.txt (the errors combined in quadrature are lumped together under the name 'Combine') and something more human-readable. The human-readable names are what will be written out by the fitting code (which identifies each error by number, rather than by the names in FlatSysSetAtlastth1.txt) when it is producing its logfile. There is often not much change between the two names, apart from in the case of Combined errors.

Including shape uncertainties

For the sake of argument, we shall pretend to be considering only the one error overall. It is possible to consider rate errors independently of shape uncertainties (by setting UseShapeMean=0 in FlatFitSteer.txt) - this might be useful and quicker to run if a given error produces a large rate error but the change to the shape of the ANN distribution is minimal (you can have a look at the ANN results yourself and make your own judgements). If you are not considering a given shape uncertainty but you are considering the rate uncertainty, all that needs to be done is to not produce the relevant steering files. In this example, we will already have run three ANN templating steps - run1 (the nominal run), run2 (the results of taking the lower bound of the error) and run3 (the results of taking the higher bound of the error). You can then move into a new directory (e.g. run1_2) in which you want to perform the fitting, and at the very least set UseShape=1 in FlatFitSteer.txt (also perhaps setting UseShapeMean=1).
This requires some changes to the call to the fitting code and to atlastth_histlist_flat-v15.txt, so that the combination of ${template_area} and the filenames given in atlastth_histlist_flat-v15.txt still points toward the ANN template files you wish to consider - as shown in the two lines below (the first from genemflat establishing ${template_area}, the second from atlastth_histlist_flat-v15.txt establishing the filename for the ANN template):

template_area=templates/${process}${mh}
0 116102-filter.root FlatPlotter/NNScoreAny_0_0_0 0

could become:

template_area=${MAINDIR}
0 run1/templates/tth120/116102-filter.root FlatPlotter/NNScoreAny_0_0_0 0

This ensures that atlastth_histlist_flat-v15.txt will still point toward the ANN templates from the nominal run. You must now create additional steering files to point toward the high and low error ANN templates - their names are of the format:

"ShapePos_"+errorname+"_"+HistOutput
"ShapeNeg_"+errorname+"_"+HistOutput

where HistOutput is atlastth_histlist_flat-v15.txt and errorname is the human-readable error name, as defined in SysNamesAtlastth1.txt. You also need to change the ${basehistlistname} in the call to the fitting code so that it points directly at atlastth_histlist_flat-v15.txt, with no preceding directory structure - the code bases the names of the two extra shape steering files on this argument, and will not take into account any directories in the argument. (If ${basehistlistname} were directory/file.txt, the fitting code would look for the extra steering files under the name ShapePos_ISR_directory/file.txt in the case of ISR being our error.)

Filters

It is possible for the inputs to the ANN to contain more events than those that you want to pass on for processing. We only want to train the ANN on those samples that would pass our preselection cuts - general cleaning cuts and the like.
(There was a previous version of our inputs where we also required 'sensible states' - for each candidate event we required it to reconstruct tops and Ws with vaguely realistic masses. However, this is a Neural Net analysis, so it has been decided to remove these cuts - they will in effect be reintroduced by the net itself if they would have been useful, and by not applying them ourselves we are passing more information to the net.) We therefore have filters so that Computentp and the ANN only look at events of our choosing. These filters take the place of various bitwise tests in TMVAsteer.txt (created in genemflat_batch_Complete2.sh) (not currently used, as explained below) and TreeSpecATLAStth_global.txt.

VariableTreeToNTPATLASttHSemiLeptonic-v16.txt

GeneralParameter string 1 FlatTupleVar/cutWord=my_GoodJets_N/my_GoodJets_N

This sets the variable we wish to use in our filter - it interfaces with the cutMask and invertWord as specified in TreeSpecATLAStth.txt. Note that depending on the number of jets you wish to run your analysis on (set as a command line argument during the running of the script), this is edited by genemflat.

TreeSpecATLAStth_global.txt

In TreeSpecATLAStth.txt, we establish the filters which control what is used for the templating and for Computentp:

ListParameter SpecifyVariable:Higgs:cutMask 1 Type:int:Default:3
ListParameter SpecifyVariable:Higgs:invertWord 1 Type:int:Default:0

InvertWord is used to invert the relevant bits (in this case no bits are inverted) before the cut from cutMask is applied. The cutMask tells the filter which bits we care about (we use a binary filter). So, for example, if cutMask is set to 6 (110 in binary), we are telling the filter that we wish the second and third bits of cutWord to be equal to one - we don't care about the first bit. It is possible to specify multiple options for the cutMask and invertWord in the same file, distinguished by the word after SpecifyVariable (in this case Higgs).
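The cutMask / invertWord behaviour described above can be sketched with shell arithmetic. This is an illustration only - the function name and the XOR treatment of invertWord are our assumptions about the logic, not GlaNtp code:

```shell
# Sketch of the filter logic: invert the bits named by invertWord,
# then require every bit named by cutMask to be set in cutWord.
passes_filter() {
  local cutWord=$1 cutMask=$2 invertWord=$3
  if (( ((cutWord ^ invertWord) & cutMask) == cutMask )); then
    echo "pass"
  else
    echo "fail"
  fi
}

# cutMask=6 (110 in binary): bits 2 and 3 must be set, bit 1 is ignored
passes_filter 6 6 0   # bits 2 and 3 set          -> pass
passes_filter 7 6 0   # bit 1 also set, ignored   -> pass
passes_filter 2 6 0   # bit 3 missing             -> fail
```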
Which ones are used is determined by teststeerFlatReaderATLAStthSemileptonic-v16.txt.

TMVAsteer.txt (genemflat_batch_Complete2.sh)

GeneralParameter string 1 Constraint=(my_failEvent&3)==3

This controls the events used in the training, using a bitwise comparison. If the constraint is true (i.e. the first two bits are set, and not equal to zero), then the event is used for training. This filter is not used currently, as training of the net takes place on the Computentp output - this Computentp output only contains sensible states (as specified in the TreeSpecATLAStth.txt file's filter). If further filtering is required, then care must be taken to ensure that my_failEvent (or whatever you wish to base your filter on) is specified in the VariableTreeToNTP file, so that Computentp will copy it into its output. **If USEHILOSB is set to 1 then && must be appended to the cut criteria, e.g. GeneralParameter string 1 Constraint=(my_failEvent&65536)==0&&. This is because USEHILOSB adds more constraints.**

teststeerFlatReaderATLAStthSemileptonic-v16.txt

GeneralParameter string 1 ControlRegion=Higgs

Specifies which cutMask and invertWord are to be used from TreeSpecATLAStth_global.txt. This is changed at runtime with one of the parameters.

Running

To run the script, first log into the batch system (ppepbs). The genemflat_batch_Complete2_SL5.sh (NNFitter 00-00-21 version) script can be executed with the command (the last argument is optional):

./genemflat_batch_Complete2_SL5.sh 12 400 1.04 tth 120 120 6 Higgs 00-00-45 /data/atlas07/stdenis/v16-r13/bjet2 agemmell@cern.ch srv001 ahgemmell ppepc23.physics.gla.ac.uk

These options denote:
Other switches to influence the running

genemflat_batch_Complete2_SL5.sh

At the start of the file there are a number of switches established:

# Flags to limit the scope of the run if desired
Computentps=1
DoTraining=1
ComputeTMVA=1
DoTemplates=1
DoStackedPlots=1
DoFit=1

These control whether or not various parts of the code are run - the names of the flags are pretty self-explanatory about which parts of the code they control. For example, it is possible to omit the training in subsequent (templating) runs if it has previously been done; this shortens the run time significantly. ***NOTE*** The flags DoTraining and DoTemplates had previously (until release 00-00-21) been set on the command line. They were moved from the command line when the other flags were introduced.

If you wish the fit to be run using data and not pseudodata, then the flag is set in FlatFitSteer.txt, which is created in genemflat:

GeneralParameter bool 1 PseudoData=1

If this flag is set to 1 then pseudodata is used; 0 causes data to be used.

teststeerFlatPlotterATLAStthSemileptonic-v16.txt and teststeerFlatReaderATLAStthSemileptonic-v16.txt

GeneralParameter bool 1 LoadGlobalOnEachEvent=0

Determines whether you have a separate global tree or not. If you do not, set this equal to one, and the relevant global values will be read out anew for each event from the event tree.

Important notes about running parts of the code (not a complete run - for debugging, replotting etc)
Where the output is stored
Information found in log files

Computentp120.log

After looping over all the events, you will see a table like the one below:

Process Name  File Name                                                                        File  Scale  Events  Integral  IntLumi  Alpha
ttjj          /data/atlas09/ahgemmell/NNInputFiles_v16/mergedfilesProcessed/105200-29Aug.root  0     1      613     613       1        0.907015
ttH           /data/atlas09/ahgemmell/NNInputFiles_v16/mergedfilesProcessed/ttH-v16.root       120   1      556     556       1        1

Some of the values are established through steerComputentp.txt in the line

ListParameter Process:ttH 1 Filename:/data/atlas09/ahgemmell/NNInputFiles_v16/mergedfilesProcessed/ttH-v16.root:File:120:IntLumi:1.0
Limitations
Diagnostic Run

A diagnostic run may be carried out by setting Debug=1 and NEvent=99 in teststeerFlatReaderATLAStthSemileptonic.txt. It is also advisable to cut the run time down by setting the number of training cycles to a low number (e.g. 20) in genemflat_batch_Complete2_SL5.sh - this appears as NCycles in the TMVAsteer.txt part of the file.

TMVA Training Plots

There is a macro in the latest NNFitter version.
Running analysis & making ntuples

This is the procedure used to remake the ATLAS ntuples, using code from the CERN Subversion repositories. These ntuples were then used as input for the neural net.

cd ~
mkdir tth_analysis_making_ntuples_v.13
cd tth_analysis_making_ntuples_v.13
export SVNROOT=svn+ssh://kirby@svn.cern.ch/reps/atlasoff
svn co $SVNROOT/PhysicsAnalysis/HiggsPhys/HiggsAssocTop/HiggsAssocWithTopToBBbar/tags/HiggsAssocWithTopToBBbar-00-00-00-13 <nop>PhysicsAnalysis/HiggsPhys/HiggsAssocTop/HiggsAssocWithTopToBBbar
cd <nop>PhysicsAnalysis/HiggsPhys/HiggsAssocTop/HiggsAssocWithTopToBBbar/NtupleAnalysis/
make

In a new terminal window:

cd /data/atlas07
mkdir gkirby
cd gkirby
mkdir ntuples_sensstatecutword

The script files in the <nop>NtupleAnalysis directory were then altered to output to this new directory. This is the output line for the signal (tthhbbOptions86581430018061-nn.txt):

OUTPUT /data/atlas07/gkirby/ntuples_sensstatecutword/86581430018061-nn.root

Code changes: the tthhbbClass.cxx file was edited to include a new error code; the following lines were added to allow us to exclude events that we do not wish the NN to train/test with.

if (<nop>SensibleStates.size()==0) {
  m_failEvent+=65536;
}

Then the make command was used again in this directory. The tthhbb executable was run with each of the input ("Options") text files to prepare the ntuples. Another change was also required: tthhbbClass.cxx was also edited to include a fail code to allow for events not having '4 tight b-tagged jets', since this was one of the criteria used in the cut-based preselection. The following code was added to tthhbbClass.cxx:

if (<nop>BJets.size()<4) {
  m_failEvent+=131072;
}

This was done so that the Signal to Background ratio could be increased in order for the fit to finish and provide sensible results. Requiring 4 tight b-jets removes proportionally much more of the ttjj background than of the other samples, because there are fewer b-jets.
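The two fail codes above are single bits (65536 = 2^16 and 131072 = 2^17), so their sum acts as a combined mask - a quick shell check of the arithmetic:

```shell
# 65536 = 2^16 (no sensible states), 131072 = 2^17 (fewer than 4 tight b-jets)
echo $(( 65536 + 131072 ))       # prints 196608, the combined mask

# An event that failed either cut has a non-zero overlap with the mask:
m_failEvent=65536
echo $(( m_failEvent & 196608 )) # prints 65536, i.e. non-zero -> event excluded
```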
The reason the S/B ratio was so low was that a problem was found which meant that the original ratio used in the 'fix_weight' variable in genemflat_batch_Complete2_SL5.sh (up to version -06) was in fact incorrect, and when the correct weights were calculated the S/B ratio was very low indeed. The Neural Net was then configured to exclude events where (m_failEvent & 196608)!=0, with 196608=131072+65536.

Creating plots to review the data

There is a simple shell script included in the running Neural Net code package that can produce a nice html document you can use to review a few plots of interest - plotTMVA.sh. To run it, move it into the run directory you want to review; then it's a simple one-line command:

./plotTMVA.sh 120 <run> <job>

N.B. This is currently done automatically by genemflat.

Debugging the code

Before trying debugging, you should set up the environment in your terminal (when running the code normally, this is done automatically by tr${run}.job). This can be done by sourcing setup_glantp.sh, which automates setting of the relevant paths - remember to specify the release number of GlaNtp that you have in your area:

source setup_glantp.sh 00-00-32

At any point you can check that a given steering file can be read by GlaNtp by using testSteerrv5.exe - found inside your GlaNtp package:

testSteerrv5.exe <file to be tested>

Another debugging script checks you have defined the processes correctly:

testFlatProcessInforv5.exe FlatAtlastthPhysicsProc1.txt

The output near the end of this is the important bit:

Table of what category each process is falling under
IP : PN : LB : B : D : S : P : O
0 : tt : t#bar{t} : 1 : 0 : 0 : 1 : 0

IP is something or other (need to ask Rick to remind me), PN is the process name, LB is the label of the process. B, D and S are whether or not the process is Background, Data or Signal respectively.
P is whether or not that process is included in the manufacture of pseudoexperiments, and O is the order in which that process is plotted.

To debug the code further, two things need to be done - first, all the debug switches need to be turned on, and then you need to restrict the number of events to ~10 (for a Computentp run this will still manage to generate a 2 GB log file!). All of these switches are found in teststeerFlatReaderATLAStthSemileptonic.txt (the progenitor for all FlatReader files) and steerComputentp.txt (created by genemflat). The debug switches are:

GeneralParameter bool 1 Debug=0
GeneralParameter bool 1 DebugGlobalInfo=0
GeneralParameter bool 1 DebugEvInfo=0
GeneralParameter int 1 ReportInterval=100

In steerComputentp.txt there is also one additional debug option:

GeneralParameter bool 1 DebugFlatTRntp=1

All the debug switches can be set to one (I'm not sure of the exact effect of each individual switch) - the report interval can be adapted depending on how many events are present in your input files and on how large you want your log files to be. To restrict the events you use:

#
# Loop Control
#
GeneralParameter int 1 NEvent=999999
GeneralParameter int 0 FirstEvent=1
GeneralParameter int 0 LastEvent=10

The easiest option is to set NEvent=10 - however, if desired you can run over a specified range, by switching off the NEvent switch (changing it to int 0 NEvent) and switching on the other two switches, using them to specify the events you wish to run over. You can then run a subset of a complete run by altering the flags found in genemflat:

# Flags to limit the scope of the run if desired
Computentps=1
DoTraining=0
ComputeTMVA=0
DoTemplates=0
DoStackedPlots=0
DoFit=0

However, sometimes even this cannot produce enough information, so there exist a few other options for checking your code.
The first option is

runFlatReader FlatReaderATLAStthNoNN.txt /data/atlas09/ahgemmell/NNInputFiles_v16/mergedfilesProcessed/ttH-v16.root

This produces a lot of printout, so be sure to restrict the number of events as described above! An example of part of the output is

****** FlatReader Info Start ******
Entries examined : 13307
Events:00 Seen : 13307
Events:00 Seen Any : 13307
Events:01 Passing Mask Selection for Higgs : 5049

The first number is the number of entries in the input MC passing into the FlatReader (e.g. in our preselection we require at least 6 jets). The second number is the number of entries passing the cutMask (e.g. requiring exactly 6 jets). This is shortly followed by

WtEvents:00 Seen : 6.17741
WtEvents:00 Seen Any : 6.17741
WtEvents:01 Passing Mask Selection for Higgs : 2.59829

These entries correspond to the yields - the numbers of events expected in our specified luminosity.

If you want to get more debugging from Computentp, then run it with another argument (it doesn't matter what the argument is - in the example below it's simply 1):

Computentp steerComputentp.txt 1

Some error messages and how to fix them

Double Variable: my_NN_BJet12_M not valid and hence saved : 1

Look at VariableTreeToNTPATLASttHSemiLeptonic-v16.txt - are the names of the variables really consistent?

Various other switches of interest

In FlatReader: to enable you to specify the range and number of bins in the histogram showing the distribution of the pseudoexperiment exclusions (found in drivetestFlatFitAtlastth.rootUnscaledTemplates.root):

GeneralParameter int 1 LikeliPseudoExpMin=0.
GeneralParameter int 1 LikeliPseudoExpMax=10.
GeneralParameter int 1 LikeliPseudoExpNBin=400

TMVAsteer.txt (genemflat_batch_Complete2_SL5.sh)

H6AONN5MEMLP MLP 1 !H:!V:NCycles=1000:HiddenLayers=N+1,N:RandomSeed=9876543

If the phrase 'H6AONN5MEMLP' is changed, then this change must also be propagated to the webpage plotter (e-mail from Rick, 1 Mar 2011).