Difference: HiggsBb (1 vs. 32)

Revision 31 - 2014-06-13 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 349 to 349
 The next step is to calculate the MJ scale factors and scale the plots accordingly. To do this run
 ./RunPlotMaker -config ./configs/plot.config -MJScales 
Added:
>
>
If you are making a file with full systematics you will want to run -MJScales multiple times, once for each of the systematics you want to scale. For example run

 ./RunPlotMaker -config ./configs/MyFirstSystematic.config -MJScales 

Where in the config file you replace the MCMJ, DataElMJ and DataMuMJ files with their systematic versions.
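As a minimal shell sketch of that loop (the systematic config names here are hypothetical placeholders for whichever configs you have prepared):

# run the MJ scale calculation once for the nominal config plus each systematic config
for cfg in plot.config MyFirstSystematic.config MySecondSystematic.config; do
    ./RunPlotMaker -config ./configs/$cfg -MJScales
done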

 (Note: To make a basic set of plots now do ./RunPlotMaker -config ./configs/plot.config)

to calculate the scale factors, then run

Line: 373 to 380
  This will produce a limit file.
Added:
>
>

A little about the plot.config file

The plot.config file will look something like the following


VersionMC           20140515_LimitFile_v5.3
TagMJ               True
CDI_CALIB           ttbar_all
DataType            Data12
YearName            2012
#ScaleFitRegion      MJFit
ScaleFitRegion      2BTag
#ScaleFitRegion      MJFit2BTag
ScaleFitVar         met
#ScaleFitVar         mtw
#ScaleFitVar         lep1Pt
UseEOS              True
#OutTag              2BTagScales

Blind               True

#                   MergedLepton, Electron or Muon
LepFlavour          MergedLepton
SplitCharge         False

# Directory to store input plot outputs - could make this nicer so you don't need to input your username
FitDirectory        /afs/cern.ch/work/p/pmullen/analysis/FitInputs/

#DataMETFile         
#DataMETMJFile       
DataElFile          AnalysisManager.data12_8TeV.OneLepton.Electron.Hists.root
DataElMJFile        AnalysisManager.data12_8TeV.OneLepton.Electron.Hists.MJ.SysMJShapeDo.root
DataMuFile          AnalysisManager.data12_8TeV.OneLepton.Muon.Hists.root
DataMuMJFile        AnalysisManager.data12_8TeV.OneLepton.Muon.Hists.MJ.root
MCFile              AnalysisManager.mc12_8TeV_p1328.OneLepton.Hists.root
MCMJFile            AnalysisManager.mc12_8TeV_p1328.OneLepton.Hists.MJ.root

LimitFileVersion    v5.3

If you want to look at the MJ template and its electroweak contamination you can make DataElFile and DataElMJFile be the MJ template, comment out the MCMJFile entry and make MCFile the MC MJ template.
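A rough shell sketch of that swap, using file names from the listing above (the plotMJCheck.config name is made up; editing a copy of plot.config by hand works just as well):

cp configs/plot.config configs/plotMJCheck.config
# point the data entry at the MJ template (do the same for DataElMJFile)
sed -i 's|^DataElFile .*|DataElFile          AnalysisManager.data12_8TeV.OneLepton.Electron.Hists.MJ.root|' configs/plotMJCheck.config
# comment out MCMJFile and make MCFile the MC MJ template
sed -i 's|^MCMJFile|#MCMJFile|' configs/plotMJCheck.config
sed -i 's|^MCFile .*|MCFile              AnalysisManager.mc12_8TeV_p1328.OneLepton.Hists.MJ.root|' configs/plotMJCheck.config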

 

META FILEATTACHMENT attachment="RunPlotMaker.C" attr="" comment="plotting script" date="1361536336" name="RunPlotMaker.C" path="RunPlotMaker.C" size="35987" stream="RunPlotMaker.C" tmpFilename="/usr/tmp/CGItemp31048" user="PaulMullen" version="2"

Revision 30 - 2014-03-13 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 362 to 362
source scripts/tar_area.sh SomeName_v0.0_LimitFile
source scripts/submitInputFile.sh plot.config datestamp_SomeName_v0.0_LimitFile
Changed:
<
<
This tars the files and sends them off to the batch to scale the plots using the calculated scale factors.
>
>
This tars the files and sends them off to the batch to scale the plots using the calculated scale factors, writing a lot of files to your work directory to be used for making a limit file.
 (Note: You do not need configs/plot.config, just plot.config, as the bash script that untars the file on the batch will take care of prepending configs/)
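In other words the batch wrapper does the path fix-up itself; conceptually something like this (a guess at the mechanism, not the actual script):

# inside the untar wrapper on the batch node (sketch):
config="configs/$1"      # plot.config -> configs/plot.config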

When this is done you can do

Revision 29 - 2014-02-25 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 323 to 336
 Get a file named listMCFiles.lst that lists all the Monte Carlo background input files to RunHistMaker. This can usually be found on eos. (I got it from /eos/atlas/user/g/gfacini/Hbb/Pub2014/SFrame/2012/NTuple_20140119/listMCFiles.lst)
Changed:
<
<
Then run
 source scripts/tar_area.sh SomeName_v0.0 
This creates a directory of that name in ~/work. You then do
 source scripts/setup2012.sh 
. This file can be edited to decide which systematics to include. For none just set the systematic to Nominal. This will submit all the jobs to the batch and will return the output to your eos directory named something like /eos/atlas/user/p/pmullen/Hbb/Pub2014/histMaker/2012.
>
>
Then run
 source scripts/tar_area.sh SomeName_v0.0 

This creates a directory of that name in ~/work. You then do

 source scripts/submit2012.sh 

. This file can be edited to decide which systematics to include. For none just set the systematic to Nominal. This will submit all the jobs to the batch and will return the output to your eos directory named something like /eos/atlas/user/p/pmullen/Hbb/Pub2014/histMaker/2012.

  (Note: When the directory is tarred it creates a file called scripts/run_now.txt that has Some_Name_v0.0 in it and the Submit2012.sh script looks in this text file for the name of the tarball to use for the submission)
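So the hand-off between the two scripts amounts to something like the following sketch (illustrating the mechanism described in the note, not the scripts' actual contents):

# tar_area.sh records the tarball name...
echo "SomeName_v0.0" > scripts/run_now.txt
# ...and submit2012.sh reads it back to decide what to submit
tarball=$(cat scripts/run_now.txt)
echo "submitting jobs for tarball: $tarball"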
Added:
>
>
The next step is to calculate the MJ scale factors and scale the plots accordingly. To do this run
 ./RunPlotMaker -config ./configs/plot.config -MJScales 
 
Deleted:
<
<
The next step is to calculate the MJ scale factors and scale the plots accordingly. To do this run
 ./RunPlotMaker -config ./configs/plot.config -MJScales 
 (Note: To make a basic set of plots now do ./RunPlotMaker -config ./configs/plot.config)
Changed:
<
<
to calculate the scale factors, then run
 ./RunPlotMaker -config ./configs/plot.config -CheckForFiles 
to make sure all the required files are present. (Note: This will not check that the files are of a reasonable size, so if something went wrong but the file was returned empty it will think nothing is wrong)
>
>
to calculate the scale factors, then run
 ./RunPlotMaker -config ./configs/plot.config -CheckForFiles 

to make sure all the required files are present. (Note: This will not check that the files are of a reasonable size, so if something went wrong but the file was returned empty it will think nothing is wrong)
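A quick way to catch that failure mode yourself is to flag suspiciously small ROOT files in the fit-input directory (the path is the FitDirectory from plot.config; the 10 kB threshold is an arbitrary guess):

# list histogram files under 10 kB - likely empty returns from failed batch jobs
find /afs/cern.ch/work/p/pmullen/analysis/FitInputs/ -name '*.root' -size -10k -print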

  Next we do
Added:
>
>
 
source scripts/tar_area.sh SomeName_v0.0_LimitFile
Changed:
<
<
source scripts/submitInputFile.sh plot.config datestamp_SomeName_v0.0_LimitFile
>
>
source scripts/submitInputFile.sh plot.config datestamp_SomeName_v0.0_LimitFile
  This tars the files and sends them off to the batch to scale the plots using the calculated scale factors. (Note: You do not need configs/plot.config, just plot.config, as the bash script that untars the file on the batch will take care of prepending configs/)
Line: 344 to 366
 (Note: You do not need configs/plot.config just plot.config as the bash script that untars the file on the batch will take care of prepending configs/)

When this is done you can do

Added:
>
>
 
./RunPlotMaker -config ./configs/plot.config -NominalLimitFile
Changed:
<
<
./RunPlotMaker -config ./configs/plot.config -LimitFile
>
>
./RunPlotMaker -config ./configs/plot.config -LimitFile
  This will produce a limit file.

Revision 28 - 2014-02-24 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 329 to 329
 (Note: When the directory is tarred it creates a file called scripts/run_now.txt that has Some_Name_v0.0 in it and the Submit2012.sh script looks in this text file for the name of the tarball to use for the submission)
Changed:
<
<
The next step is to calculate the MJ scale factors and scale the plots accordingly. To do this run
 ./RunPlotMaker -config ./configs/plot.config -MJScales 
to calculate the scale factors, then run
 ./RunPlotMaker -config ./configs/plot.config -CheckForFiles 
to make sure all the required files are present. (Note: This will not check that the files are of a reasonable size, so if something went wrong but the file was returned empty it will think nothing is wrong)
>
>
The next step is to calculate the MJ scale factors and scale the plots accordingly. To do this run
 ./RunPlotMaker -config ./configs/plot.config -MJScales 
(Note: To make a basic set of plots now do ./RunPlotMaker -config ./configs/plot.config)

to calculate the scale factors, then run

 ./RunPlotMaker -config ./configs/plot.config -CheckForFiles 
to make sure all the required files are present. (Note: This will not check that the files are of a reasonable size, so if something went wrong but the file was returned empty it will think nothing is wrong)
  Next we do

Revision 27 - 2014-02-21 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 22 to 22
 

Code Checkout

The latest version of the code can be checked out from the correct repository by doing:

Deleted:
<
<
 
mkdir myVH_research
cd myVH_research
Changed:
<
<
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/trunk
>
>
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/trunk
  However it is probably best to use the latest tagged version of the code as this should (but won't necessarily) be stable. This can be checked out by doing
Deleted:
<
<
 
mkdir myVH_research
cd myVH_research
Changed:
<
<
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/tags ElectroweakBosons-xx-xx-xx
>
>
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/tags ElectroweakBosons-xx-xx-xx
  Where xx-xx-xx is the tag number.
Line: 46 to 42
 (Note: for more complete instructions see the README file in the trunk subdirectory)

(1) To setup the environment:

Deleted:
<
<
 
cd myVH_research/trunk
Changed:
<
<
source scripts/setup_lxplus.sh
>
>
source scripts/setup_lxplus.sh
  This will setup the required environment variables, the Gnu C++ compiler and ROOTCORE.
Line: 58 to 52
  (Note: If the extra code fails to compile due to the MET package you need to remove the code and run the 2012 versions of all the scripts)
Deleted:
<
<
 
./scripts/get_allcode.sh
cd SFrame
source setup.sh
Changed:
<
<
cd ..
>
>
cd ..
  (3) Next compile the software by doing
Deleted:
<
<
 
Changed:
<
<
make
>
>
make
  (You can also clean up by doing make clean and make distclean)
Changed:
<
<
(4) For the purposes of analysing H->bb decays the code is stored in the AnalysisWZorHbb/ directory. We now need to compile this code before we can do the analysis.
>
>
(4) For the purposes of analysing H->bb decays the code is stored in the AnalysisWZorHbb/ directory. We now need to compile this code before we can do the analysis.
 
cd AnalysisWZorHbb
Changed:
<
<
make
>
>
make
 

Configuring and Running

Changed:
<
<
Now we can run our analysis by doing sframe_main config/WZorHbb_config_mc_nu_2011.xml for example. The .xml file is where the input files and settings are all defined (e.g. what corrections to apply to the data) and can be modified or written to suit the needs of the analysis. For example to do a H->bb analysis involving zero leptons that makes flat ntuples you need the following lines.
>
>
Now we can run our analysis by doing sframe_main config/WZorHbb_config_mc_nu_2011.xml for example. The .xml file is where the input files and settings are all defined (e.g. what corrections to apply to the data) and can be modified or written to suit the needs of the analysis. For example to do a H->bb analysis involving zero leptons that makes flat ntuples you need the following lines.
 
 <UserConfig>
                &setup;
Line: 175 to 156
 

Producing Plots

The output from running sframe is stored in .root files as common ntuples. To produce plots from these output files we need to use HistMaker by doing

Deleted:
<
<
 
cd macros
Changed:
<
<
make
>
>
make
  to make the required code. To run the code we use the command
Deleted:
<
<
 
./RunHistMaker  <path to config file> <runMJ> <path to files output by sframe> <output directory>      -- explain better?
Line: 196 to 174
 This will output a .root file containing histograms.

If RunHistMaker crashes saying it did not find the cross-section (xsec) for a given number (corresponding to a dataset) you may need to edit the function InitCrossSections2011() in the file macros/src/process.cpp. You will need to add the following code

Deleted:
<
<
 

allXSecs.push_back(xsection(dataset, cross-section, k factor, 1, "sample name"));  -- find out what 1 is.

e.g.
Changed:
<
<
allXSecs.push_back(xsection(109351, 4.0638E-05, 1, 1,"ZHnunu"));
>
>
allXSecs.push_back(xsection(109351, 4.0638E-05, 1, 1,"ZHnunu"));
  Atlas twiki pages can be used to find the cross-section for 7TeV data, 8 TeV data and for general datasets.
Line: 214 to 189
 

Stacking Plots

Stacked plots can be produced using the RunPlotMaker.C program. (This uses the plotter.cpp class.) The current implementation of the program expects input files to follow a naming convention that contains "OneLepton" in the root file name. This needs to be changed depending on what you had in your .xml file. It also has hard-coded links to a home directory that need to be changed. Choosing which systematics etc. you want to run is done with command line flags. My current version of the edited RunPlotMaker.C can be found here. It has been edited so that you can give it the path to a folder containing your files as a command line argument. It can be run as follows

Deleted:
<
<
 
Changed:
<
<
./RunPlotMaker -Data12 -llbb -NominalLimitFile -Indir $HOME/out/
>
>
./RunPlotMaker -Data12 -llbb -NominalLimitFile -Indir $HOME/out/
  It expects to find 6 files in the $HOME/out/ folder.
Deleted:
<
<
 
AnalysisManager.data12_8TeV.OneLepton.Electron.Hists.MJ.root
AnalysisManager.data12_8TeV.OneLepton.Electron.Hists.root
AnalysisManager.data12_8TeV.OneLepton.Muon.Hists.MJ.root
AnalysisManager.data12_8TeV.OneLepton.Muon.Hists.root
AnalysisManager.mc12_8TeV.OneLepton.Nominal.Hists.MJ.root
Changed:
<
<
AnalysisManager.mc12_8TeV.OneLepton.Nominal.Hists.root
>
>
AnalysisManager.mc12_8TeV.OneLepton.Nominal.Hists.root
  Note the "OneLepton" in all the file names. This is because I have yet to change the hard coded file name format in the script. The "MJ" files(Multijet) were produced from the RunHistMaker program by changing the RunMJ parameter to True. If you do not want to have to rename your files before using this program then simply edit your sframe .xml config file as follows
Added:
>
>
 
     

<InputData Lumi="1.0" NEventsMax="-1" Type="mc12_8TeV" Version="OneLepton.Nominal" Cacheable="True" SkipValid="True">
Line: 253 to 225
 

Installing and Building

To Install PROOF on Demand first we must get and unpack the code.

Deleted:
<
<
 
cd $HOME
wget http://pod.gsi.de/releases/pod/3.10/PoD-3.10-Source.tar.gz
Changed:
<
<
tar -xzvf PoD-3.10-Source.tar.gz
>
>
tar -xzvf PoD-3.10-Source.tar.gz
  To build the code do
Deleted:
<
<
 
cd $HOME/PoD-3.10-Source
mkdir build
Line: 274 to 243
source scripts/setup_pod_lxplus.sh
cd
cd PoD/3.10/
Changed:
<
<
source PoD_env.sh
>
>
source PoD_env.sh
  To change the working directory PoD uses along with many other settings just edit $HOME/.PoD/PoD.cfg
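For instance, to apply the afs workaround noted further down (moving the server work_dir to /tmp), something along these lines should work (a sketch; check the exact key layout of your PoD.cfg before trusting the sed pattern):

# point PoD's [server] work_dir away from afs
sed -i "s|^work_dir=.*|work_dir=/tmp/$USER/PoD|" $HOME/.PoD/PoD.cfg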
Line: 308 to 275
 

Using PROOF

We now have PROOF installed but before doing any analysis we must first set up the correct environment. It is advised that after installing PROOF you start from a clean shell then do

Added:
>
>
 
cd myVH_research/trunk
source scripts/setup_lxplus.sh
source scripts/setup_pod_lxplus.sh
make
cd AnalysisWZorHbb
Changed:
<
<
make
>
>
make
  (Note: It is advisable to delete your ~/.proof and /tmp/$USER directories before starting proof)
Line: 320 to 287
 (Note: It is advisable to delete your ~/.proof and /tmp/$USER directories before starting proof)

To start the proof server we can do pod-server start. You can now request worker nodes. To do this we use the pod-submit command. For example to request 20 worker nodes we would do

Deleted:
<
<
 
Changed:
<
<
pod-submit -r lsf -q 1nh -n 20
>
>
pod-submit -r lsf -q 1nh -n 20
  Where 1nh is the name of a queue we are submitting the jobs to and lsf is the name of the resource management system.

Now that we have set up our environment and requested worker nodes we can use the nodes for our analysis. To run analysis we use the same command as before; sframe_main config/proof_WZorHbb_config_mc_nu_2011.xml.

Changed:
<
<
(Note: It is advisable to use nohup sframe_main config/proof_WZorHbb_config_mc_nu_2011.xml &> output.txt & so that terminal crashes will not kill your job)
>
>
(Note: It is advisable to use nohup sframe_main config/proof_WZorHbb_config_mc_nu_2011.xml &> output.txt & so that terminal crashes will not kill your job)
  However our .xml file will look different from our non-PROOF analysis. We must change the RunMode option to be RunMode="PROOF" and the ProofServer option must also be changed. It must look like ProofServer="username@host:portnumber" where the port number, host and username are those output by the pod-server start command or by pod-info -c. For example ProofServer="pmullen@lxplus438.cern.ch:21002"
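For example (the host and port are the ones from the example above; yours will differ):

# print the connection string to paste into the ProofServer option
pod-info -c      # e.g. pmullen@lxplus438.cern.ch:21002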
Line: 333 to 298
  However our .xml file will look different from our non-PROOF analysis. We must change the RunMode option to be RunMode="PROOF" and the ProofServer option must also be changed. It must look like ProofServer="username@host:portnumber" where the port number, host and username are those output from the pod-server start command or the pod-info -c. For example ProofServer="pmullen@lxplus438.cern.ch:21002"
Deleted:
<
<
 

Adding D3PD variables to our Algorithm

Changed:
<
<
In order to use D3PD variables that are not already used by the CERN code you must first find the details of the variable. To do this it is easiest to load the root file in CINT (root -l myfile.root) then use the MakeClass function of root to produce a header file that could be used to read the tree. This can be done on the CINT command line by doing treename->MakeClass("MyClassName"). This will write out a MyClassName.C and a MyClassName.h file. The details of how to read the variable you want can then be found in these files.
>
>
In order to use D3PD variables that are not already used by the CERN code you must first find the details of the variable. To do this it is easiest to load the root file in CINT (root -l myfile.root) then use the MakeClass function of root to produce a header file that could be used to read the tree. This can be done on the CINT command line by doing treename->MakeClass("MyClassName"). This will write out a MyClassName.C and a MyClassName.h file. The details of how to read the variable you want can then be found in these files.
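Concretely, assuming the input tree is called physics, as in the InputTree blocks of the .xml examples above:

# open the file at the ROOT/CINT prompt...
root -l myfile.root
# ...then at the root [0] prompt generate the skeleton class:
#   physics->MakeClass("MyClassName")
# this writes MyClassName.C and MyClassName.h to the current directory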
  Once you have the details of the variable we can now add it to the AnalysisBase/include/EventBase.h and the AnalysisBase/include/Event.h file so that any code that inherits from this code is aware of the variable (the AnalysisWZorHbb code inherits from this, as does all the code that comes from the CERN SVN package)
Line: 348 to 312
  For the variable to be written to the TTree in the SFrame section of the code we must create an output variable. To do this first add a variable of the correct type to the algorithm's header file. Now go to the algorithm's source file and use DeclareVariable in the BeginInputData() function to declare a branch in the output tree. Make sure to also clear() the variable in the ResetNtupleVars() function. You can now fill this variable in the fillNTuple() function, probably using push_back.
Changed:
<
<
As an example let's say we want to output the jet_WIDTH variable from the previous section. We would first add std::vector<float> pauls_jet_WIDTH; in the header file. In the source file we would go to the BeginInputData() function and add DeclareVariable(pauls_jet_WIDTH, "pauls_jet_WIDTH", treename); where pauls_jet_WIDTH is the name of the branch in our output tree and treename is our output tree. Also add pauls_jet_WIDTH.clear(); in the ResetNtupleVars() function. To fill this variable we would then add pauls_jet_WIDTH.push_back(ev.jet_WIDTH->at(i)); to the fillNTuple() function.
>
>
As an example let's say we want to output the jet_WIDTH variable from the previous section. We would first add std::vector<float> pauls_jet_WIDTH; in the header file. In the source file we would go to the BeginInputData() function and add DeclareVariable(pauls_jet_WIDTH, "pauls_jet_WIDTH", treename); where pauls_jet_WIDTH is the name of the branch in our output tree and treename is our output tree. Also add pauls_jet_WIDTH.clear(); in the ResetNtupleVars() function. To fill this variable we would then add pauls_jet_WIDTH.push_back(ev.jet_WIDTH->at(i)); to the fillNTuple() function.

Using RunPlotMaker

Making an Input File

To use RunPlotMaker to make an input file do the following.

Get a file named listMCFiles.lst that lists all the Monte Carlo background input files to RunHistMaker. This can usually be found on eos. (I got it from /eos/atlas/user/g/gfacini/Hbb/Pub2014/SFrame/2012/NTuple_20140119/listMCFiles.lst)

Then run

 source scripts/tar_area.sh SomeName_v0.0 
This creates a directory of that name in ~/work. You then do
 source scripts/setup2012.sh 
. This file can be edited to decide which systematics to include. For none just set the systematic to Nominal. This will submit all the jobs to the batch and will return the output to your eos directory named something like /eos/atlas/user/p/pmullen/Hbb/Pub2014/histMaker/2012.
 
Added:
>
>
(Note: When the directory is tarred it creates a file called scripts/run_now.txt that has Some_Name_v0.0 in it and the Submit2012.sh script looks in this text file for the name of the tarball to use for the submission)
 
Added:
>
>
The next step is to calculate the MJ scale factors and scale the plots accordingly. To do this run
 ./RunPlotMaker -config ./configs/plot.config -MJScales 
to calculate the scale factors, then run
 ./RunPlotMaker -config ./configs/plot.config -CheckForFiles 
to make sure all the required files are present. (Note: This will not check that the files are of a reasonable size, so if something went wrong but the file was returned empty it will think nothing is wrong)

Next we do

source scripts/tar_area.sh SomeName_v0.0_LimitFile
source scripts/submitInputFile.sh plot.config datestamp_SomeName_v0.0_LimitFile

This tars the files and sends them off to the batch to scale the plots using the calculated scale factors. (Note: You do not need configs/plot.config, just plot.config, as the bash script that untars the file on the batch will take care of prepending configs/)

When this is done you can do

./RunPlotMaker -config ./configs/plot.config -NominalLimitFile
./RunPlotMaker -config ./configs/plot.config -LimitFile
 
Added:
>
>
This will produce a limit file.
 

Revision 26 - 2013-11-25 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 317 to 317
 make
Added:
>
>
(Note: It is advisable to delete your ~/.proof and /tmp/$USER directories before starting proof)
  To start the proof server we can do pod-server start. You can now request worker nodes. To do this we use the pod-submit command. For example to request 20 worker nodes we would do

Revision 25 - 2013-09-25 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 210 to 210
 

For a 2012 event InitCrossSection2012() pulls the information from a file called configs/2012_cross_sections.txt which can be edited to add the information for your particular dataset

Changed:
<
<
>
>

Adding Extra Plots

 

Stacking Plots

Stacked plots can be produced using the RunPlotMaker.C program. (This uses the plotter.cpp class.) The current implementation of the program expects input files to follow a naming convention that contains "OneLepton" in the root file name. This needs to be changed depending on what you had in your .xml file. It also has hard-coded links to a home directory that need to be changed. Choosing which systematics etc. you want to run is done with command line flags. My current version of the edited RunPlotMaker.C can be found here. It has been edited so that you can give it the path to a folder containing your files as a command line argument. It can be run as follows

Line: 245 to 245
 For the data files change the Type to "data12_8TeV" and the Version to "OneLepton.Muon" or "OneLepton.Electron". This does not change the analysis type, only the output file name.

To change the Analysis type you change the Tool Name="BaselineTwoLepton" to BaselineTwoLeptonElectron or BaselineTwoLeptonMuon

Changed:
<
<
>
>

Adding Extra Stacked Plots

 

Running Using PROOF on Demand(PoD)

Proof is a framework for running many root analyses in parallel. It can be run locally using PROOF-lite, which will utilise multiple cores on one machine, or it can be run on many machines using the PoD framework.

Revision 23 - 2013-03-22 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 343 to 343
  For instance for the jet_AntiKt4TopoEM_WIDTH variable we would find in MyClassName.h the line vector<float> *jet_AntiKt4TopoEM_WIDTH;. We would then go to AnalysisBase/include/EventBase.h and add this line there. We would then go to /AnalysisWZorHbb/src/WZorHbb.cxx and add ConnectVariable("jet_WIDTH"). Note that we have removed the name of the algorithm from the variable name. This is the convention used. The algorithm name is added by the ConnectVariable function. We must also add vector<float> *jet_WIDTH to AnalysisBase/include/Event.h.
Added:
>
>

Adding the Variable to The Output Tree

For the variable to be written to the TTree in the SFrame section of the code we must create an output variable. To do this first add a variable of the correct type to the algorithm's header file. Now go to the algorithm's source file and use DeclareVariable in the BeginInputData() function to declare a branch in the output tree. Make sure to also clear() the variable in the ResetNtupleVars() function. You can now fill this variable in the fillNTuple() function, probably using push_back.

As an example let's say we want to output the jet_WIDTH variable from the previous section. We would first add std::vector<float> pauls_jet_WIDTH; in the header file. In the source file we would go to the BeginInputData() function and add DeclareVariable(pauls_jet_WIDTH, "pauls_jet_WIDTH", treename); where pauls_jet_WIDTH is the name of the branch in our output tree and treename is our output tree. Also add pauls_jet_WIDTH.clear(); in the ResetNtupleVars() function. To fill this variable we would then add pauls_jet_WIDTH.push_back(ev.jet_WIDTH->at(i)); to the fillNTuple() function.

 

Revision 22 - 2013-03-21 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 332 to 332
  However our .xml file will look different from our non-PROOF analysis. We must change the RunMode option to be RunMode="PROOF" and the ProofServer option must also be changed. It must look like ProofServer="username@host:portnumber" where the port number, host and username are those output from the pod-server start command or the pod-info -c. For example ProofServer="pmullen@lxplus438.cern.ch:21002"
Added:
>
>

Adding D3PD variables to our Algorithm

In order to use D3PD variables that are not already used by the CERN code you must first find the details of the variable. To do this it is easiest to load the root file in CINT (root -l myfile.root) then use the MakeClass function of root to produce a header file that could be used to read the tree. This can be done on the CINT command line by doing treename->MakeClass("MyClassName"). This will write out a MyClassName.C and a MyClassName.h file. The details of how to read the variable you want can then be found in these files.

Once you have the details of the variable we can now add it to the AnalysisBase/include/EventBase.h and the AnalysisBase/include/Event.h file so that any code that inherits from this code is aware of the variable (the AnalysisWZorHbb code inherits from this, as does all the code that comes from the CERN SVN package)

Now that we have done this we can add our variable to our function. In the case of the AnalysisWZorHbb code we would go to /AnalysisWZorHbb/src/WZorHbb.cxx and add ConnectVariable("MyD3PDVariableName") to the BeginInputFile function.

For instance for the jet_AntiKt4TopoEM_WIDTH variable we would find in MyClassName.h the line vector<float> *jet_AntiKt4TopoEM_WIDTH;. We would then go to AnalysisBase/include/EventBase.h and add this line there. We would then go to /AnalysisWZorHbb/src/WZorHbb.cxx and add ConnectVariable("jet_WIDTH"). Note that we have removed the name of the algorithm from the variable name. This is the convention used. The algorithm name is added by the ConnectVariable function. We must also add vector<float> *jet_WIDTH to AnalysisBase/include/Event.h.

 

META FILEATTACHMENT attachment="RunPlotMaker.C" attr="" comment="plotting script" date="1361536336" name="RunPlotMaker.C" path="RunPlotMaker.C" size="35987" stream="RunPlotMaker.C" tmpFilename="/usr/tmp/CGItemp31048" user="PaulMullen" version="2"

Revision 21 - 2013-02-22 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 29 to 29
 svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/trunk
Changed:
<
<
However it is probably best to use the latest tagged version of the code as this is guaranteed to (but won't necessarily) be stable. This can be checked out by doing
>
>
However it is probably best to use the latest tagged version of the code as this should (but won't necessarily) be stable. This can be checked out by doing
 
mkdir myVH_research

Revision 20 - 2013-02-22 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 21 to 21
 

Code Checkout

Changed:
<
<
The code can be checked out from the correct repository by doing:
>
>
The latest version of the code can be checked out from the correct repository by doing:
 
mkdir myVH_research
Line: 29 to 29
 svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/trunk
Added:
>
>
However it is probably best to use the latest tagged version of the code as this is guaranteed to (but won't necessarily) be stable. This can be checked out by doing

mkdir myVH_research
cd myVH_research
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/tags ElectroweakBosons-xx-xx-xx 

Where xx-xx-xx is the tag number.

 Note: you may want to set up kerberos first. (I did not require the GSSAPITrustDNS yes line).
Changed:
<
<

Compiling And Using The Code

>
>

Compiling the Code

  (Note: for more complete instructions see the README file in the trunk subdirectory)
Line: 71 to 81
 make
Added:
>
>

Configuring and Running

  Now we can run our analysis by doing sframe_main config/WZorHbb_config_mc_nu_2011.xml for example. The .xml file is where the input files and settings are all defined(e.g. what corrections to apply to the data) and can be modified or written to suit the needs of the analysis. For example to do a H->bb analysis involving zero leptons that makes flat ntuples you need the following lines.
Line: 89 to 100
 The "BaselineZeroLepton" option is what sets the analysis type. The string is parsed to determine whether we are doing a 0, 1 or 2 lepton analysis and also if we want to do an electron, muon or nominal analysis. (See line 250ish in WZorHbb.cxx for the code that performs the selection)
Changed:
<
<
(Note: every time you open a shell to run the code you need to do steps (1), (3) and (4) again. i.e. setup the environment and make the code. It is also a good idea to run make clean before you recompile anything.)
>
>
(Note: every time you open a shell to run the code you need to do steps (1), (3) and (4) from the compilation instructions again. i.e. setup the environment and make the code. It is also a good idea to run make clean before you recompile anything.)

(Note: It is advisable to use BaselineOneLepton for both the 1 lepton and 2 lepton analysis then differentiate between the two in the config file that is passed to RunHistmaker.C)

In order to do a full analysis with the whole CERN framework this code must produce three files: one for Monte Carlo, one for electrons and one for muons. To do this three separate "cycles" must be defined in the .xml file. A cycle is defined as follows


<Cycle Name="AnalysisManager" RunMode="LOCAL" ProofServer="lite://" ProofWorkDir="/afs/cern.ch/user/p/pmullen/tag-tmp/" ProofNodes="-1" UseTreeCache="True" TreeCacheSize="30000000" TreeCacheLearnEntries="10" OutputDirectory="/afs/cern.ch/user/p/pmullen/tag-tmp/" PostFix="" TargetLumi="1.0">

    <InputData Lumi="1.0" NEventsMax="-1" Type="mc12_8TeV" Version="OneLepton.Nominal" Cacheable="True" SkipValid="True">

        &pauls_mc;
       <InputTree Name="physics" />
                 <OutputTree Name="Ntuple" />
    </InputData>

<UserConfig>


                &setup;
                <Item Name="CorrectFatJet"  Value="False"/>

                <Tool Name="BaselineOneLepton"    Class="WZorHbb">
                        &tool_base;
                                <Item Name="runNominal" Value = "True"/><!--true-->
                                <Item Name="ntupleName" Value = "Ntuple"/>

                </Tool>
    </UserConfig>
  </Cycle>

Where &pauls_mc; is a file with paths to the Monte Carlo files being input. For contrast another cycle configuration is shown below; this time for an electron analysis in data.

(Note: The first cycle must be closed with the </Cycle> tag before opening another cycle)

  <Cycle Name="AnalysisManager" RunMode="LOCAL" ProofServer="lite://" ProofWorkDir="/afs/cern.ch/user/p/pmullen/tag-tmp/" ProofNodes="-1" UseTreeCache="True" TreeCacheSize="30000000" TreeCacheLearnEntries="10" OutputDirectory="/afs/cern.ch/user/p/pmullen/tag-tmp/" PostFix="" TargetLumi="1.0">


    <InputData Lumi="1.0" NEventsMax="-1" Type="data12_8TeV" Version="OneLepton.Electron" Cacheable="True" SkipValid="True">    

        &pauls_data;
       <InputTree Name="physics" />
                 <OutputTree Name="Ntuple" />
    </InputData>
    <UserConfig>

                &setup;
                <Item Name="CorrectFatJet"  Value="False"/>

                <Tool Name="OneLepton"    Class="WZorHbb">
                        &tool_base;
                                <Item Name="runNominal" Value = "True"/>
                                <Item Name="ntupleName" Value = "Ntuple"/>
                </Tool>


    </UserConfig>

  </Cycle>


The config file and the file containing the paths to the Monte Carlo files are attached for reference.

 

Producing Plots

Line: 254 to 334
 
Changed:
<
<
META FILEATTACHMENT attachment="RunPlotMaker.C" attr="" comment="plotting script" date="1354200605" name="RunPlotMaker.C" path="RunPlotMaker.C" size="27013" stream="RunPlotMaker.C" tmpFilename="/usr/tmp/CGItemp24384" user="PaulMullen" version="1"
>
>
META FILEATTACHMENT attachment="RunPlotMaker.C" attr="" comment="plotting script" date="1361536336" name="RunPlotMaker.C" path="RunPlotMaker.C" size="35987" stream="RunPlotMaker.C" tmpFilename="/usr/tmp/CGItemp31048" user="PaulMullen" version="2"
META FILEATTACHMENT attachment="llbb_local.xml" attr="" comment="Path to montecarlo files" date="1361537411" name="llbb_local.xml" path="llbb_local.xml" size="4281" stream="llbb_local.xml" tmpFilename="/usr/tmp/CGItemp27858" user="PaulMullen" version="1"
META FILEATTACHMENT attachment="paul_test_lepton_cuts.xml" attr="" comment="SFrame configuration file" date="1361537440" name="paul_test_lepton_cuts.xml" path="paul_test_lepton_cuts.xml" size="4573" stream="paul_test_lepton_cuts.xml" tmpFilename="/usr/tmp/CGItemp27695" user="PaulMullen" version="1"

Revision 19 - 2013-01-14 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 164 to 164
  For the data files change the Type to "data12_8TeV" and the Version to "OneLepton.Muon" or "OneLepton.Electron". This does not change the analysis type, only the output file name.
Changed:
<
<
To change the Analysis type you change the = <Tool Name="BaselineTwoLepton" = to BaselineTwoLeptonElectron or BaselineTwoLeptonMuon
>
>
To change the Analysis type you change the Tool Name="BaselineTwoLepton" to BaselineTwoLeptonElectron or BaselineTwoLeptonMuon
 

Running Using PROOF on Demand(PoD)

Revision 18 - 2013-01-11 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 86 to 86
 
Added:
>
>
The "BaselineZeroLepton" option is what sets the analysis type. The string is parsed to determine whether we are doing a 0, 1 or 2 lepton analysis and also if we want to do an electron, muon or nominal analysis. (See line 250ish in WZorHbb.cxx for the code that performs the selection)
 

(Note: every time you open a shell to run the code you need to do steps (1), (3) and (4) again. i.e. setup the environment and make the code. It is also a good idea to run make clean before you recompile anything.)

Revision 17 - 2012-12-17 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 163 to 163
  For the data files change the Type to "data12_8TeV" and the Version to "OneLepton.Muon" or "OneLepton.Electron". This does not change the analysis type, only the output file name.
Added:
>
>
To change the Analysis type you change the = <Tool Name="BaselineTwoLepton" = to BaselineTwoLeptonElectron or BaselineTwoLeptonMuon
 

Running Using PROOF on Demand(PoD)

Proof is a framework for running many root analyses in parallel. It can be run locally using PROOF-lite, which will utilise multiple cores on one machine, or it can be run on many machines using the PoD framework.

Revision 16 - 2012-12-11 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 9 to 9
  The CERN twiki page for Diboson studies may also be useful.
Added:
>
>
The latest CERN ntuples may be useful for testing purposes.
 (Note: If at any stage when setting up an environment or building code you encounter a problem, before trying anything else first do make clean on anything you have done make on, then use a clean shell and retry.)

Running Mode

Revision 15 - 2012-11-29 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 159 to 159
 
Changed:
<
<
For the data files change the Type to "data12_8TeV". This does not change the analysis type, only the output file name.
>
>
For the data files change the Type to "data12_8TeV" and the Version to "OneLepton.Muon" or "OneLepton.Electron". This does not change the analysis type, only the output file name.
 

Running Using PROOF on Demand(PoD)

Revision 14 - 2012-11-29 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 130 to 130
 

Stacking Plots

Changed:
<
<
Stacked plots can be produced using the RunPlotMaker.C program. This uses the plotter.cpp class. This is problematic as the current implementation of plotter.cpp has hard-coded references to "BaselineOneLepton" when it looks for tree/branches in the root file provided. This needs to be changed depending on what you had in your .xml file. The RunPlotMaker program itself has hard-coded links to a home directory that need to be changed. Choosing which systematics etc. you want to run is done with command line flags. My current version of the edited RunPlotMaker.C can be found here.
>
>
Stacked plots can be produced using the RunPlotMaker.C program. (This uses the plotter.cpp class.) The current implementation of the program expects input files to follow a naming convention that contains "OneLepton" in the root file name. This needs to be changed depending on what you had in your .xml file. It also has hard-coded links to a home directory that need to be changed. Choosing which systematics etc. you want to run is done with command line flags. My current version of the edited RunPlotMaker.C can be found here. It has been edited so that you can give it the path to a folder containing your files as a command line argument. It can be run as follows

./RunPlotMaker -Data12 -llbb -NominalLimitFile -Indir $HOME/out/

It expects to find 6 files in the $HOME/out/ folder.

AnalysisManager.data12_8TeV.OneLepton.Electron.Hists.MJ.root
AnalysisManager.data12_8TeV.OneLepton.Electron.Hists.root
AnalysisManager.data12_8TeV.OneLepton.Muon.Hists.MJ.root
AnalysisManager.data12_8TeV.OneLepton.Muon.Hists.root
AnalysisManager.mc12_8TeV.OneLepton.Nominal.Hists.MJ.root
AnalysisManager.mc12_8TeV.OneLepton.Nominal.Hists.root

Note the "OneLepton" in all the file names. This is because I have yet to change the hard coded file name format in the script. The "MJ" files(Multijet) were produced from the RunHistMaker program by changing the RunMJ parameter to True. If you do not want to have to rename your files before using this program then simply edit your sframe .xml config file as follows

     

<InputData Lumi="1.0" NEventsMax="-1" Type="mc12_8TeV" Version="OneLepton.Nominal" Cacheable="True" SkipValid="True">

        <input data files here>
       <InputTree Name="physics" />
                 <OutputTree Name="Ntuple" />
    </InputData>

For the data files change the Type to "data12_8TeV". This does not change the analysis type, only the output file name.

 

Running Using PROOF on Demand(PoD)

Line: 217 to 246
 (Note: It is advisable to use nohup sframe_main config/proof_WZorHbb_config_mc_nu_2011.xml &> output.txt & so that terminal crashes will not kill your job)

However our .xml file will look different from our non-PROOF analysis. We must change the RunMode option to be RunMode="PROOF" and the ProofServer option must also be changed. It must look like ProofServer="username@host:portnumber" where the port number, host and username are those output from the pod-server start command or the pod-info -c. For example ProofServer="pmullen@lxplus438.cern.ch:21002"

Added:
>
>

META FILEATTACHMENT attachment="RunPlotMaker.C" attr="" comment="plotting script" date="1354200605" name="RunPlotMaker.C" path="RunPlotMaker.C" size="27013" stream="RunPlotMaker.C" tmpFilename="/usr/tmp/CGItemp24384" user="PaulMullen" version="1"

Revision 13 - 2012-11-21 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 128 to 128
  For a 2012 event InitCrossSection2012() pulls the information from a file called configs/2012_cross_sections.txt which can be edited to add the information for your particular dataset
Added:
>
>

Stacking Plots

Stacked plots can be produced using the RunPlotMaker.C program. This uses the plotter.cpp class. This is problematic as the current implementation of plotter.cpp has hard-coded references to "BaselineOneLepton" when it looks for tree/branches in the root file provided. This needs to be changed depending on what you had in your .xml file. The RunPlotMaker program itself has hard-coded links to a home directory that need to be changed. Choosing which systematics etc. you want to run is done with command line flags. My current version of the edited RunPlotMaker.C can be found here.

 

Running Using PROOF on Demand(PoD)

Proof is a framework for running many root analyses in parallel. It can be run locally using PROOF-lite, which will utilise multiple cores on one machine, or it can be run on many machines using the PoD framework.

Revision 12 - 2012-11-15 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 210 to 210
  Now that we have set up our environment and requested worker nodes we can use the nodes for our analysis. To run analysis we use the same command as before; sframe_main config/proof_WZorHbb_config_mc_nu_2011.xml.
Added:
>
>
(Note: It is advisable to use nohup sframe_main config/proof_WZorHbb_config_mc_nu_2011.xml &> output.txt & so that terminal crashes will not kill your job)
 However our .xml file will look different from our non-PROOF analysis. We must change the RunMode option to be RunMode="PROOF" and the ProofServer option must also be changed. It must look like ProofServer="username@host:portnumber" where the port number, host and username are those output from the pod-server start command or the pod-info -c. For example ProofServer="pmullen@lxplus438.cern.ch:21002"

Revision 11 - 2012-10-26 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 7 to 7
  An overview of the software framework can be found here.
Added:
>
>
The CERN twiki page for Diboson studies may also be useful.
 (Note: If at any stage when setting up an environment or building code you encounter a problem before trying anything else first do make clean on anything you have done make on then use a clean shell and retry.)

Running Mode

Line: 187 to 189
 

Using PROOF

Added:
>
>
We now have PROOF installed but before doing any analysis we must first set up the correct environment. It is advised that after installing PROOF you start from a clean shell then do
cd myVH_research/trunk
source scripts/setup_lxplus.sh
source scripts/setup_pod_lxplus.sh
make
cd AnalysisWZorHbb
make
 To start the proof server we can do pod-server start. You can now request worker nodes. To do this we use the pod-submit command. For example to request 20 worker nodes we would do
Line: 194 to 207
 

Where 1nh is the name of a queue we are submitting the jobs to and lsf is the name of the resource management system.

Added:
>
>
Now that we have set up our environment and requested worker nodes we can use the nodes for our analysis. To run analysis we use the same command as before; sframe_main config/proof_WZorHbb_config_mc_nu_2011.xml.

However our .xml file will look different from our non-PROOF analysis. We must change the RunMode option to be RunMode="PROOF" and the ProofServer option must also be changed. It must look like ProofServer="username@host:portnumber" where the port number, host and username are those output from the pod-server start command or the pod-info -c. For example ProofServer="pmullen@lxplus438.cern.ch:21002"

Revision 10 - 2012-10-25 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 31 to 31
  (Note: for more complete instructions see the README file in the trunk subdirectory)
Changed:
<
<
To setup the environment:
>
>
(1) To setup the environment:
 
cd myVH_research/trunk
Line: 40 to 40
  This will setup the required environment variables, the Gnu C++ compiler and ROOTCORE.
Changed:
<
<
To download and compile any extra software required do
>
>
(2) To download and compile any extra software required do
  (Note: If the extra code fails to compile due to the MET package you need to remove the code and run the 2012 versions of all the scripts)
Line: 52 to 52
 cd ..
Changed:
<
<
Next compile the software by doing
>
>
(3) Next compile the software by doing
 
make
Line: 60 to 60
  (You can also clean up by doing make clean and make distclean)
Changed:
<
<
For the purposes of analysing H->bb decays the code is stored in the AnalysisWZorHbb/ directory. We now need to compile this code before we can do the analysis.
>
>
(4) For the purposes of analysing H->bb decays the code is stored in the AnalysisWZorHbb/ directory. We now need to compile this code before we can do the analysis.
 
cd AnalysisWZorHbb
Line: 84 to 84
 
Changed:
<
<
(Note: every time you open a shell to run the code you need to do steps 1 and 3 again. i.e. setup the environment and make the code. It is also a good idea to run make clean before you recompile anything.)
>
>
(Note: every time you open a shell to run the code you need to do steps (1), (3) and (4) again. i.e. setup the environment and make the code. It is also a good idea to run make clean before you recompile anything.)
 

Producing Plots

Line: 100 to 100
 
./RunHistMaker  <path to config file> <runMJ> <path to files output by sframe> <output directory>      -- explain better?
Added:
>
>
e.g.

./RunHistMaker configs/zerolepton.config false $HOME/out/AnalysisManager.mc11_7TeV.ZeroLepton.Nominal.root ./

 

This will output a .root file containing histograms.

Revision 9 - 2012-10-24 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 22 to 22
 
mkdir myVH_research
cd myVH_research
Changed:
<
<
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/trunk/AnalysisWZorHbb/
>
>
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/trunk
 

Note: you may want to set up kerberos first.(I did not require the GSSAPITrustDNS yes line).

Line: 34 to 34
 To setup the environment:
Changed:
<
<
cd myVH_research/ElectroweakBosons/trunk
>
>
cd myVH_research/trunk
 source scripts/setup_lxplus.sh
Line: 68 to 68
 
Changed:
<
<
Now we can run our analysis by doing sframe_main config/WZorHbb_config_mc_nu_2011.xml for example. The .xml file is where the input files and settings are all defined(e.g. what corrections to apply to the data) and can be modified or written to suit the needs of the analysis.
>
>
Now we can run our analysis by doing sframe_main config/WZorHbb_config_mc_nu_2011.xml for example. The .xml file is where the input files and settings are all defined(e.g. what corrections to apply to the data) and can be modified or written to suit the needs of the analysis. For example to do a H->bb analysis involving zero leptons that makes flat ntuples you need the following lines.

 <UserConfig>
                &setup;
                <Tool Name="BaselineZeroLepton"    Class="WZorHbb">
                        &tool_base;
                                <Item Name="runNominal" Value = "True"/>
                                <Item Name="ntupleName" Value = "Ntuple"/>
                </Tool>
    </UserConfig>

  (Note: every time you open a shell to run the code you need to do steps 1 and 3 again. i.e. setup the environment and make the code. It is also a good idea to run make clean before you recompile anything.)
Line: 130 to 144
make
make install
cd
Changed:
<
<
source myVH_research/ElectroweakBosons/trunk/scripts/setup_pod_lxplus.sh
>
>
cd myVH_research/ElectroweakBosons/trunk
source scripts/setup_pod_lxplus.sh
cd
 cd PoD/3.10/ source PoD_env.sh
Line: 139 to 155
  (Note: For those using an afs directory you need to change your [server] work_dir parameter to something like /tmp/$USER/PoD)
Added:
>
>
Now create a file called $HOME/.PoD/user_worker_env.sh and put the line source /afs/cern.ch/sw/lcg/contrib/gcc/4.3.5/x86_64-slc5-gcc43-opt/setup.sh in it

Then create a file called $HOME/.PoD/user_xpd.cf and put the line xpd.rootd allow in it.

DEPRECATED?----------------------------------------------------------------------------------------------------------

 Now create a file called $HOME/.PoD/user_worker_env.sh and put the following settings in it
Line: 154 to 177
 echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"

Added:
>
>
END---------------------------------------------------------------------------------------------------------------------------------------

Using PROOF

To start the proof server we can do pod-server start. You can now request worker nodes. To do this we use the pod-submit command. For example to request 20 worker nodes we would do

pod-submit -r lsf -q 1nh -n 20
 
Changed:
<
<
Then create a file called $HOME/.PoD/user_xpd.cf and put the line xpd.root allow in it.
>
>
Where 1nh is the name of a queue we are submitting the jobs to and lsf is the name of the resource management system.

Revision 8 - 2012-10-24 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 137 to 137
  To change the working directory PoD uses along with many other settings just edit $HOME/.PoD/PoD.cfg
Added:
>
>
(Note: For those using an afs directory you need to change your [server] work_dir parameter to something like /tmp/$USER/PoD)
 Now create a file called $HOME/.PoD/user_worker_env.sh and put the following settings in it

Revision 7 - 2012-10-23 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 7 to 7
  An overview of the software framework can be found here.
Added:
>
>
(Note: If at any stage when setting up an environment or building code you encounter a problem before trying anything else first do make clean on anything you have done make on then use a clean shell and retry.)
 

Running Mode

The code can be run in two different modes: locally or using PROOF (Parallel ROOT Facility). PROOF allows analysis of a large number of ROOT files in parallel using multiple machines or processor cores.

Line: 104 to 106
  For a 2012 event InitCrossSection2012() pulls the information from a file called configs/2012_cross_sections.txt which can be edited to add the information for your particular dataset
Added:
>
>

Running Using PROOF on Demand(PoD)

Proof is a framework for running many root analyses in parallel. It can be run locally using PROOF-lite, which will utilise multiple cores on one machine, or it can be run on many machines using the PoD framework.

Installing and Building

To Install PROOF on Demand first we must get and unpack the code.

cd $HOME
wget http://pod.gsi.de/releases/pod/3.10/PoD-3.10-Source.tar.gz
tar -xzvf PoD-3.10-Source.tar.gz

To build the code do

cd $HOME/PoD-3.10-Source
mkdir build
cd build
cmake -C ../BuildSetup.cmake ..
make
make install
cd
source myVH_research/ElectroweakBosons/trunk/scripts/setup_pod_lxplus.sh
cd PoD/3.10/
source PoD_env.sh

To change the working directory PoD uses along with many other settings just edit $HOME/.PoD/PoD.cfg

Now create a file called $HOME/.PoD/user_worker_env.sh and put the following settings in it


#! /usr/bin/env bash
echo "Setting user environment for workers ..."
export LD_LIBRARY_PATH=/afs/cern.ch/sw/lcg/external/qt/4.4.2/x86_64-slc5-gcc43-opt/lib:\
/afs/cern.ch/sw/lcg/external/Boost/1.44.0_python2.6/x86_64-slc5-gcc43-opt//lib:\
/afs/cern.ch/sw/lcg/app/releases/ROOT/5.30.01/x86_64-slc5-gcc43-opt/root/lib:\
/afs/cern.ch/sw/lcg/contrib/gcc/4.3.5/x86_64-slc5-gcc34-opt/lib64:\
/afs/cern.ch/sw/lcg/contrib/mpfr/2.3.1/x86_64-slc5-gcc34-opt/lib:\
/afs/cern.ch/sw/lcg/contrib/gmp/4.2.2/x86_64-slc5-gcc34-opt/lib
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"

Then create a file called $HOME/.PoD/user_xpd.cf and put the line xpd.rootd allow in it.

Revision 6 - 2012-10-19 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 68 to 68
  Now we can run our analysis by doing sframe_main config/WZorHbb_config_mc_nu_2011.xml for example. The .xml file is where the input files and settings are all defined(e.g. what corrections to apply to the data) and can be modified or written to suit the needs of the analysis.
Changed:
<
<
(Note: every time you open a shell to run the code you need to do steps 1 and 3 again. i.e. setup the environment and make the code.)
>
>
(Note: every time you open a shell to run the code you need to do steps 1 and 3 again. i.e. setup the environment and make the code. It is also a good idea to run make clean before you recompile anything.)
 

Producing Plots

Revision 5 - 2012-10-18 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 87 to 87
 

This will output a .root file containing histograms.

Added:
>
>
If RunHistMaker crashes saying it did not find the cross-section (xsec) for a given number (corresponding to a dataset) you may need to edit the function InitCrossSections2011() in the file macros/src/process.cpp. You will need to add the following code


allXSecs.push_back(xsection(dataset, cross-section, k factor, 1, "sample name"));  -- find out what 1 is.

e.g.

allXSecs.push_back(xsection(109351, 4.0638E-05, 1, 1,"ZHnunu"));

Atlas twiki pages can be used to find the cross-section for 7TeV data, 8 TeV data and for general datasets.

For a 2012 event InitCrossSection2012() pulls the information from a file called configs/2012_cross_sections.txt which can be edited to add the information for your particular dataset

Revision 4 - 2012-10-15 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 40 to 40
  To download and compile any extra software required do
Added:
>
>
(Note: If the extra code fails to compile due to the MET package you need to remove the code and run the 2012 versions of all the scripts)
 
./scripts/get_allcode.sh
cd SFrame

Revision 3 - 2012-10-10 - PaulMullen

Line: 1 to 1
 

Introduction

Line: 27 to 27
 

Compiling And Using The Code

Changed:
<
<
(Note: for more complete instructions see the README file in the trunk subdirectory) To setup the environment:
>
>
(Note: for more complete instructions see the README file in the trunk subdirectory)

To setup the environment:

 
cd myVH_research/ElectroweakBosons/trunk
Line: 63 to 65
  Now we can run our analysis by doing sframe_main config/WZorHbb_config_mc_nu_2011.xml for example. The .xml file is where the input files and settings are all defined(e.g. what corrections to apply to the data) and can be modified or written to suit the needs of the analysis.
Added:
>
>
(Note: every time you open a shell to run the code you need to do steps 1 and 3 again. i.e. setup the environment and make the code.)
 

Producing Plots

Revision 2 - 2012-10-09 - PaulMullen

Line: 1 to 1
Added:
>
>
 

Introduction

Documenting the setup of the Higgsbb analysis code used at CERN.

Changed:
<
<

Code Checkout

>
>
An overview of the software framework can be found here.

Running Mode

The code can be run in two different modes: locally or using PROOF (Parallel ROOT Facility). PROOF allows analysis of a large number of ROOT files in parallel using multiple machines or processor cores.

Running Locally

Code Checkout

  The code can be checked out from the correct repository by doing:
Added:
>
>
mkdir myVH_research
cd myVH_research
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/trunk/AnalysisWZorHbb/
Added:
>
>
  Note: you may want to set up kerberos first.(I did not require the GSSAPITrustDNS yes line).
Added:
>
>

Compiling And Using The Code

(Note: for more complete instructions see the README file in the trunk subdirectory) To setup the environment:

cd myVH_research/ElectroweakBosons/trunk
source scripts/setup_lxplus.sh

This will setup the required environment variables, the Gnu C++ compiler and ROOTCORE.

To download and compile any extra software required do

./scripts/get_allcode.sh
cd SFrame
source setup.sh
cd ..

Next compile the software by doing

make

(You can also clean up by doing make clean and make distclean)

For the purposes of analysing H->bb decays the code is stored in the AnalysisWZorHbb/ directory. We now need to compile this code before we can do the analysis.

cd AnalysisWZorHbb
make

Now we can run our analysis by doing sframe_main config/WZorHbb_config_mc_nu_2011.xml for example. The .xml file is where the input files and settings are all defined(e.g. what corrections to apply to the data) and can be modified or written to suit the needs of the analysis.

Producing Plots

The output from running sframe is stored in .root files as common ntuples. To produce plots from these output files we need to use HistMaker by doing

cd macros
make

to make the required code. To run the code we use the command

./RunHistMaker  <path to config file> <runMJ> <path to files output by sframe> <output directory>      -- explain better?
 
Changed:
<
<

Using The Code

>
>
This will output a .root file containing histograms.

Revision 1 - 2012-10-04 - PaulMullen

Line: 1 to 1
Added:
>
>

Introduction

Documenting the setup of the Higgsbb analysis code used at CERN.

Code Checkout

The code can be checked out from the correct repository by doing:

mkdir myVH_research
cd myVH_research
svn co svn+ssh://svn.cern.ch/reps/atlasusr/mbellomo/ElectroweakBosons/trunk/AnalysisWZorHbb/

Note: you may want to set up kerberos first.(I did not require the GSSAPITrustDNS yes line).

Using The Code

 