Higgs analysis at ATLAS using RooStats

This page contains basic information on getting started with Higgs analysis at ATLAS using RooStats. Only the most meager of attempts is made to keep this documentation current.

What is RooStats?

RooStats is a project to create statistical tools built on top of the RooFit library, which is a data-modelling toolkit. It has been distributed with ROOT since release 5.22 (December 2008). The latest version of ROOT is recommended, as the RooStats project develops quickly.

setting up RooStats

There are three main options available for acquiring ROOT with RooStats included.

option 1: Download the latest ROOT release binaries.

The latest ROOT binaries for various operating systems are accessible here.

option 2: Build the ROOT trunk from source.

Follow the appropriate instructions here to build the ROOT trunk.

shell script: building ROOT with RooFit and RooStats


#!/bin/bash

# This script builds the latest version of ROOT in Ubuntu. Specifically, first
# the ROOT prerequisites are installed, then the most common ROOT optional
# packages are installed. Next, the latest version of ROOT in the CERN Git
# repository is checked out. Finally, ROOT is compiled. After compiling is
# complete, ROOT environment variables should be set up as necessary.

echo -e "\nstart ROOT installation\n"
read -s -n 1 -p "Press any key to continue."

# Specify the time.

# Install ROOT prerequisites.
echo "install ROOT prerequisites..."
sudo apt-get -y install git
sudo apt-get -y install dpkg-dev
sudo apt-get -y install make
sudo apt-get -y install g++
sudo apt-get -y install gcc
sudo apt-get -y install binutils
sudo apt-get -y install libx11-dev
#sudo apt-get -y install libxpm-dev
sudo apt-get -y install libgd2-xpm-dev
sudo apt-get -y install libxft-dev
sudo apt-get -y install libxext-dev

# Install optional ROOT packages.
echo "install optional ROOT packages..."
sudo apt-get -y install gfortran
#sudo apt-get -y install openssl-dev
sudo apt-get -y install libssl-dev
#sudo apt-get -y install ncurses-dev
sudo apt-get -y install libpcre3-dev
sudo apt-get -y install xlibmesa-glu-dev
sudo apt-get -y install libglew1.5-dev
sudo apt-get -y install libftgl-dev
sudo apt-get -y install libmysqlclient-dev
sudo apt-get -y install libfftw3-dev
sudo apt-get -y install cfitsio-dev
sudo apt-get -y install graphviz-dev
sudo apt-get -y install libavahi-compat-libdnssd-dev
#sudo apt-get -y install libldap-dev
sudo apt-get -y install libldap2-dev
sudo apt-get -y install python-dev
sudo apt-get -y install libxml2-dev
sudo apt-get -y install libkrb5-dev
sudo apt-get -y install libgsl0-dev
sudo apt-get -y install libqt4-dev

# Check out the latest ROOT trunk. Save the download in the ~/root directory.
# This may take some time.
echo "check out the latest ROOT trunk..."
cd ~/
git clone http://root.cern.ch/git/root.git

# Configure for the compilation. Specifically, the system architecture is
# defined and building of the libRooFit advanced fitting package is enabled.
cd ~/root
while true; do
    read -p "Specify the computer bit architecture you want to compile ROOT for (64/32): " computerArchitecture
    if [ "${computerArchitecture}" == "32" ]; then
        echo "configure ROOT compile for 32 bit computer architecture..."
        #./configure linux --enable-roofit --enable-minuit2
        ./configure linux --enable-roofit --enable-minuit2 --enable-python --with-python-incdir=/usr/include/python2.6 --with-python-libdir=/usr/lib/i386-linux-gnu
        break
    elif [ "${computerArchitecture}" == "64" ]; then
        echo "configure ROOT compile for 64 bit computer architecture..."
        #./configure linuxx8664gcc --enable-roofit --enable-minuit2
        ./configure linuxx8664gcc --enable-roofit --enable-minuit2 --enable-python --with-python-incdir=/usr/include/python2.6 --with-python-libdir=/usr/lib/x86_64-linux-gnu
        break
    else
        echo "invalid input"
    fi
done
# See other possible configurations using the following command: ./configure --help

# Specify the time.

while true; do
    read -p "Do you want to continue to compile ROOT now? (y/n): " yOrn
    yOrnLowercase="$(echo "${yOrn}" | sed 's/\(.*\)/\L\1/')"
    if [ "${yOrnLowercase}" == "y" ]; then
        break
    elif [ "${yOrnLowercase}" == "n" ]; then
        echo "exit installation script..."
        exit 0
    else
        echo "invalid input"
    fi
done

# Compile.
echo "compile ROOT..."
time make

# Move ROOT to the install directory and set up the ROOT environment variables in
# the specified shell configuration file.
installationDirectory="/usr/local"
configurationFile="${HOME}/.bashrc"
while true; do
    read -p "Do you want to continue to move ROOT to the directory ${installationDirectory} and set up the ROOT environment variables in the file ${configurationFile}? (y/n): " yOrn
    yOrnLowercase="$(echo "${yOrn}" | sed 's/\(.*\)/\L\1/')"
    if [ "${yOrnLowercase}" == "y" ]; then
        break
    elif [ "${yOrnLowercase}" == "n" ]; then
        echo "exit installation script..."
        exit 0
    else
        echo "invalid input"
    fi
done
echo "move ROOT to the directory ${installationDirectory}..."
sudo mv ~/root "${installationDirectory}"

echo "Set up ROOT environment variables in the file ${configurationFile}..."
echo -e "\n# ROOT environment variables" >> "${configurationFile}"
echo "export ROOTSYS=${installationDirectory}/root" >> "${configurationFile}"
echo "export PATH=\$PATH:\$ROOTSYS/bin" >> "${configurationFile}"
echo "export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:\$ROOTSYS/lib" >> "${configurationFile}"

# Specify the time.

echo -e "\nROOT install complete\n"

option 3: Build the RooStats branch.

The RooStats branch can be built in order to have the latest development of RooStats (that has not yet been incorporated into a ROOT version). Instructions can be found here.


general description

The RooFit library provides a toolkit for modelling the expected distribution of events in a physics analysis. Models can be used to perform unbinned maximum likelihood fits, produce plots and generate "toy Monte Carlo" samples for various studies.

The core functionality of RooFit is to enable the modelling of 'event data' distributions, in which each event is a discrete occurrence in time and has one or more measured observables associated with it. Experiments of this nature result in datasets obeying Poisson (or binomial) statistics. The natural modelling language for such distributions is probability density functions (PDFs) f(x; θ) that describe the probability density of the distribution of the observables x in terms of the function parameters θ.

In RooFit, every variable, data point, function and PDF is represented by a C++ object. So, for example, in constructing a RooFit model, the mathematical components of the model map to separate C++ objects. Objects are classified by the data or function type that they represent, not by their respective role in a particular setup. All objects are self-documenting. The name of an object is a unique identifier for the object while the title of an object is a more elaborate description of the object.

Here are a few examples of mathematical concepts that correspond to various RooFit classes:

mathematical concept RooFit class
variable RooRealVar
function RooAbsReal
PDF RooAbsPdf
space point (set of parameters) RooArgSet
list of space points RooAbsData
integral RooRealIntegral

Composite functions correspond to composite objects. Note that the RooArgList class is ordered (the order of its arguments matters), while the RooArgSet class is an unordered set of objects with unique names.

example code: defining a RooFit variable

// general form for defining a RooFit variable:
RooRealVar x(<object name>, <object title>, <value>, <minimum value>, <maximum value>);
// specific example for defining a RooFit variable x with the value 5:
RooRealVar x("x", "x observable", 5, -10, 10);


A RooPlot is essentially an empty frame that is capable of holding anything plotted against its variable.


One of the things that makes the RooFit PDFs nice and flexible (but perhaps counterintuitive) is that they have no built-in notion of which variable is the observable. For example, a Gaussian has x, the mean and sigma, while the PDF is always normalised to unity. What is one integrating over in order to get 1? For the Gaussian, it is x, by convention; the mean and sigma are parameters of the model. RooFit doesn't know that x is special; x, the mean and sigma are all on an equal footing. You tell RooFit that x is the variable to normalise over.

So, RooGaussian has no intrinsic notion of a distinction between observables and parameters. The choice of observables (for unit normalisation) is always passed explicitly, e.g. to gauss.getVal().

What is the value of this? In a nutshell, it allows one to do Bayesian stuff very easily.

Bayes' theorem holds that the probability of μ given x is related to the probability of x given μ. Normally, one might say that the mean μ of a Gaussian is 1 and that then gives a distribution for x. However, if one had a dataset for x, one might want to know the posterior for the mean. The RooFit ability to switch which variable the probability density function is normalised over is very useful for this kind of Bayesian work. Because RooFit allows this composition, the thing that acts as x in the Gaussian could actually be a function of, say, y. A Jacobian factor is picked up in going from x to y, so the naive normalisation no longer makes sense. Sometimes RooFit can figure out what the Jacobian factor should be and sometimes it resorts to numerical integration.

example code: create a Gaussian PDF using RooStats and plot it using the RooPlot class

    // Build a Gaussian PDF.
    RooRealVar x("x", "x", -10, 10);
    RooRealVar mean("mean", "mean of Gaussian", 0, -10, 10);
    RooRealVar sigma("sigma", "width of Gaussian", 3);
    RooGaussian gauss("gauss", "Gaussian PDF", x, mean, sigma);

    // Plot the PDF.
    RooPlot* xframe = x.frame();
    gauss.plotOn(xframe);
    xframe->Draw();

example code: telling a RooFit PDF what to normalise over

// not normalised (i.e., this is not a PDF):
gauss.getVal();
// Hey, RooFit! This is the thing over which you should normalise (i.e., guarantees Int[xmin, xmax] Gauss(x, m, s)dx == 1):
gauss.getVal(RooArgSet(x));
// What is the value if sigma is considered the observable? (i.e., guarantees Int[smin, smax] Gauss(x, m, s)ds == 1):
gauss.getVal(RooArgSet(sigma));


general description

A dataset is a collection of points in N-dimensional space. In RooFit, data can be stored in an unbinned or binned manner. Unbinned data is stored using the RooDataSet class while binned data is stored using the RooDataHist class. The user can define how many bins there are in a variable. For the purposes of plotting using the RooPlot class, a RooDataSet is binned into a histogram.

In general, working in RooFit with binned and unbinned data is very similar, as both the RooDataSet (for unbinned data) and RooDataHist (for binned data) classes inherit from a common base class, RooAbsData, which defines the interface for a generic abstract data sample. With few exceptions, all RooFit methods take abstract datasets as input arguments, allowing for the interchangeable use of binned and unbinned data.

RooDataSet (unbinned data)

example code: generating toy Monte Carlo, storing it as unbinned data and then plotting it

    // Create a RooDataSet and fill it with generated toy Monte Carlo data:
    RooDataSet* myData = gauss.generate(x, 100);
    // Plot the dataset.
    RooPlot* myFrame = x.frame();
    myData->plotOn(myFrame);
    myFrame->Draw();

Plotting unbinned data is similar to plotting binned data with the exception that one can display it in some preferred binning.

example code: plotting unbinned data (a RooDataSet) using a specified binning

RooPlot* myFrame = x.frame();
// Binning is in the RooFit namespace (using namespace RooFit;).
myData->plotOn(myFrame, Binning(25));

importing data from ROOT trees (how to populate RooDataSets from TTrees)

* to be completed *

RooDataHist (binned data)

importing data from ROOT TH histogram objects (take a histogram and map it to a binned data set) (how to populate RooDataHists from histograms)

In RooFit, binned data is represented by the RooDataHist class. The contents of a ROOT histogram can be imported to a RooDataHist object. In importing a ROOT histogram, the binning of the original histogram is imported as well. A RooDataHist associates the histogram with a RooFit variable object of type RooRealVar. In this way it is always known what kind of data is stored in the histogram.

In displaying the data, RooFit, by default, shows the 68% confidence interval for Poisson statistics.

example code: import a ROOT histogram into a RooDataHist (a RooFit binned dataset)

    // Access the file.
    TFile* myFile = new TFile("myFile.root");
    // Load the histogram.
    TH1* myHistogram = (TH1*) myFile->Get("myHistogram");
    // Draw the loaded histogram.
    myHistogram->Draw();

    // Declare an observable x.
    RooRealVar x("x", "x", -1, 2);

    // Create a binned dataset that imports the contents of the TH1 and associates its contents with the observable 'x'.
    RooDataHist myData("myData", "myData", RooArgList(x), myHistogram);

    // Plot the imported dataset.
    RooPlot* myFrame = x.frame();
    myData.plotOn(myFrame);
    myFrame->Draw();


fitting a model to data

Fitting a model to data can be done in many ways. The most common methods are the chi-square fit and the maximum likelihood fit. The default fitting method in ROOT is the chi-square method, while the default method in RooFit is the maximum likelihood method. The maximum likelihood method is often preferred because it is more robust for low-statistics fits and because it can also be performed on unbinned data.

fitting a PDF to unbinned data

example code: fit a Gaussian PDF to data

// Fit gauss to unbinned data.
gauss.fitTo(*myData);

The RooFit workspace

general description

The RooFit "workspace" provides the ability to store the full likelihood model, any desired priors and the minimal data necessary to reproduce the likelihood function in a ROOT file. Thus, the workspace is needed for combinations and has potential for digitally publishing results (PhyStats agreed to publish likelihood functions). The RooFit workspace can be used for this.

Consider a Gaussian PDF. One might create this Gaussian PDF and then import it into a workspace. All of the dependencies of the Gaussian would be drawn in and owned by the workspace (there are no nightmarish ownership problems). Alternatively, one might create simply the Gaussian inside the workspace using the "Workspace Factory".

example code: using the Workspace Factory to create a Gaussian PDF

// Create a Gaussian PDF using the Workspace Factory (this is essentially shorthand for creating a Gaussian).
RooWorkspace* myWorkspace = new RooWorkspace("myWorkspace");
myWorkspace->factory("Gaussian::g(x[-5, 5], mu[0], sigma[1])");

What's in the RooFit workspace?

example code: What's in the workspace?

// Open the appropriate ROOT file.
root -l myFile.root
// Import the workspace.
RooWorkspace* myWorkspace = (RooWorkspace*) _file0->Get("myWorkspace");
// Print the workspace contents.
myWorkspace->Print();
// Example printout:

// variables
// ---------
// (x,m,s)
// p.d.f.s
// -------
// RooGaussian::g[ x=x mean=m sigma=s ] = 0
// datasets
// --------
// RooDataSet::d(x)

// Import the variable saved as x.
RooRealVar* myVariable = myWorkspace->var("x");
// Import the PDF saved as g.
RooAbsPdf* myPDF = myWorkspace->pdf("g");
// Import the data saved as d.
RooAbsData* myData = myWorkspace->data("d");
// Import the ModelConfig saved as m.
ModelConfig* myModelConfig = (ModelConfig*) myWorkspace->obj("m");

visual representations of the model/PDF contents


Graphviz consists of a graph description language called the DOT language and a set of tools that can generate and/or process DOT files.

example code: examining PDFs and creating graphical representations of them

// Create variables and a PDF using those variables.
RooRealVar x("x", "x", 150, 100, 200);
RooRealVar mu("mu", "mu", 150);
RooRealVar sigma("sigma", "sigma", 5, 0, 20);
RooGaussian myGaussianPDF("myGaussianPDF", "Gaussian PDF", x, mu, sigma);

// Create a Graphviz DOT file with a representation of the object tree.
myGaussianPDF.graphVizTree("myGaussianPDFTree.dot");
// The produced DOT file can be converted to some graphical representation:
// - Convert the DOT file to a 'top-to-bottom graph' using UNIX commands:
//     dot -Tgif -o myGaussianPDF_top-to-bottom_graph.gif myGaussianPDFTree.dot
// - Convert the DOT file to a 'spring-model graph' using UNIX commands:
//     fdp -Tgif -o myGaussianPDF_spring-model_graph.gif myGaussianPDFTree.dot

// Print the PDF contents.
myGaussianPDF.Print();
// example output:
//     RooGaussian::myGaussianPDF[ x=x mean=mu sigma=sigma ] = 1
sigma.Print();
// example output:
//     RooRealVar::sigma = 5  L(0 - 20)

// Print the PDF contents in a detailed manner.
myGaussianPDF.Print("v");

// Print the PDF contents to stdout.
myGaussianPDF.printCompactTree();
// example output:
//     0x166eab0 RooGaussian::myGaussianPDF = 1 [Auto] 
//     0x15f7fe0/V- RooRealVar::x = 150
//     0x1487090/V- RooRealVar::mu = 150
//     0x1487bc0/V- RooRealVar::sigma = 5

// Print the PDF contents to a file.
myGaussianPDF.printCompactTree("", "myGaussianPDFTree.txt");
// example output file contents:
//     0x166eab0 RooGaussian::myGaussianPDF = 1 [Auto] 
//     0x15f7fe0/V- RooRealVar::x = 150
//     0x1487090/V- RooRealVar::mu = 150
//     0x1487bc0/V- RooRealVar::sigma = 5

Model Inspector

The Model Inspector is a GUI for examining the model contained in the RooFit workspace. The function and its parameters are as follows:

void ModelInspector(const char* infile = "", const char* workspaceName = "combined", const char* modelConfigName = "ModelConfig", const char* dataName = "obsData")

If the workspace(s) were made using hist2workspace, the names have a standard form (as shown above).

using the Model Inspector

// Start ROOT and load the macro.
root -l
.L ModelInspector.C++
// Run the macro on the appropriate ROOT workspace file (the file name here is illustrative).
ModelInspector("myWorkspaceFile.root")

The Model Inspector GUI should appear. The GUI consists of a number of plots, corresponding to the various channels in the model, and a few sliders, corresponding to the parameters of interest and the nuisance parameters in the model. The initial plots are based on the values of the parameters in the workspace. There is a little "Fit" button which fits the model to the data points (while also printing the standard terminal output detailing the fit). After fitting, a yellow band is shown around the best-fit model, indicating the uncertainty obtained by propagating the fit uncertainties through the model. A red line is also shown on the plots, corresponding to the fit with each parameter pushed up by 1 sigma from its nominal value.

How do I get it?

You can get the ModelInspector.C from the Statistics Forum RooStats tools here.


accessing the RooFit workspace

example code: accessing the workspace

// Open the appropriate ROOT file.
root -l BR5_MSSM_signal90_combined_datastat_model.root
// Alternatively, you could open the file in a manner such as the following:
//     myFileName = "BR5_MSSM_signal90_combined_datastat_model.root"
//     TFile *myFile = TFile::Open(myFileName);
// Import the workspace.
RooWorkspace* myWorkspace = (RooWorkspace*) _file0->Get("combined");
// Print the workspace contents.
myWorkspace->Print();
// Import the PDF.
RooAbsPdf* myPDF = myWorkspace->pdf("model_BR5_MSSM_signal90");
// Import the variable representing the observable.
RooRealVar* myObservable = myWorkspace->var("obs");
// Create a RooPlot frame using the imported variable.
RooPlot* myFrame = myObservable->frame();
// Plot the PDF on the created RooPlot frame.
myPDF->plotOn(myFrame);
// Draw the RooPlot.
myFrame->Draw();

example code: accessing both data and PDF from a workspace stored in a file

// Note that the following code is independent of the actual PDF in the file. So, for example, a full Higgs combination could work with identical code.

// Open a file and import the workspace.
TFile myFile("myResults.root");
RooWorkspace* myWorkspace = (RooWorkspace*) myFile.Get("myWorkspace");
// Plot the data and PDF.
RooPlot* xframe = myWorkspace->var("x")->frame();
myWorkspace->data("d")->plotOn(xframe);
myWorkspace->pdf("g")->plotOn(xframe);
// Construct a likelihood and profile likelihood.
RooNLLVar nll("nll", "nll", *myWorkspace->pdf("g"), *myWorkspace->data("d"));
RooProfileLL pll("pll", "pll", nll, *myWorkspace->var("m"));
RooPlot* myFrame = myWorkspace->var("m")->frame(-1, 1);
pll.plotOn(myFrame);
myFrame->Draw();

links for RooFit


general description

RooStats provides tools for high-level statistics questions in ROOT. It builds on RooFit, which provides basic building blocks for statistical questions.

The main goal is to standardise the interface for major statistical procedures so that they can work on an arbitrary RooFit model and dataset and handle any parameters of interest and nuisance parameters. Another goal is to implement most accepted techniques from frequentist, Bayesian and likelihood based approaches. A further goal is to provide utilities to perform combined measurements.

example code: create a simple model using the RooFit Workspace Factory. Specify parts of the model using ModelConfig. Create a simple dataset. Complete a confidence interval test using the ProfileLikelihoodCalculator of RooStats

    // In this script, a simple model is created using the Workspace Factory in RooFit.
    // ModelConfig is used to specify the parts of the model necessary for the statistical tools of RooStats.
    // A 95% confidence interval test is run using the ProfileLikelihoodCalculator of RooStats.
    // Define a RooFit random seed in order to produce reproducible results (the seed value is arbitrary).
    RooRandom::randomGenerator()->SetSeed(271);
    // Make a simple model using the Workspace Factory.
    // Create a new workspace.
    RooWorkspace* myWorkspace = new RooWorkspace();
    // Create the PDF G(x|mu,1) and the variables x, mu and sigma in one command using the factory syntax.
    myWorkspace->factory("Gaussian::normal(x[-10,10], mu[-1,1], sigma[1])");

    // Define parameter sets for observables and parameters of interest.
    myWorkspace->defineSet("obs", "x");
    myWorkspace->defineSet("poi", "mu");
    // Print the workspace contents.
    myWorkspace->Print() ;

    // Specify for the statistical tools the components of the defined model.
    // Create a new ModelConfig.
    ModelConfig* myModelConfig = new ModelConfig("my G(x|mu,1)");
    // Specify the workspace.
    myModelConfig->SetWorkspace(*myWorkspace);
    // Specify the PDF.
    myModelConfig->SetPdf(*myWorkspace->pdf("normal"));
    // Specify the parameters of interest.
    myModelConfig->SetParametersOfInterest(*myWorkspace->set("poi"));
    // Specify the observables.
    myModelConfig->SetObservables(*myWorkspace->set("obs"));

    // Create a toy dataset.

    // Create a toy dataset of 100 measurements of the observables (x).
    RooDataSet* myData = myWorkspace->pdf("normal")->generate(*myWorkspace->set("obs"), 100);

    // Use the ProfileLikelihoodCalculator to obtain a 95% confidence interval.

    // Specify the confidence level required.
    double confidenceLevel = 0.95;
    // Create an instance of the ProfileLikelihoodCalculator, specifying the data and the ModelConfig for it.
    ProfileLikelihoodCalculator myProfileLikelihoodCalculator(*myData, *myModelConfig);
    // Set the confidence level.
    myProfileLikelihoodCalculator.SetConfidenceLevel(confidenceLevel);
    // Obtain the resulting interval.
    LikelihoodInterval* myProfileLikelihoodInterval = myProfileLikelihoodCalculator.GetInterval();
    // Use this interval result. In this case, it makes sense to say what the lower and upper limits are.
    // Define the object variables for the purposes of the confidence interval.
    RooRealVar* x = myWorkspace->var("x");
    RooRealVar* mu = myWorkspace->var("mu");
    cout << "The profile likelihood calculator interval is [ "<<
    myProfileLikelihoodInterval->LowerLimit(*mu) << ", " <<
    myProfileLikelihoodInterval->UpperLimit(*mu) << "] " << endl;
    // Set mu equal to zero.
    mu->setVal(0);
    // Is mu in the interval?
    cout << "Is mu = 0 in the interval?" << endl;
    if (myProfileLikelihoodInterval->IsInInterval(*mu) == 1){
       cout << "Yes" << endl;
    } else{
       cout << "No" << endl;
    }

links for RooStats


The ModelConfig RooStats class encapsulates the configuration of a model to define a particular hypothesis. It is now used extensively by the calculator tools. ModelConfig always contains a reference to an external workspace that manages all of the objects that are a part of the model (PDFs and parameter sets). So, in order to use ModelConfig, the user must specify a workspace pointer before creating the various objects of the model.


general description

The HistFactory can be used to run an analysis without being a RooFit/RooStats expert. There are two main ways to interact with HistFactory: through C++ code and through XML configuration files. In the XML configuration approach, in a nutshell, ROOT files containing input histograms are set up and XML configuration files are set up for those input ROOT files. The XML configuration files specify details on the histograms, how RooFit should interpret the information in the files and how histograms are to be used in calculations. A little executable contained in ROOT called hist2workspace is used to import the histograms into a RooFit workspace.

XML approach to HistFactory


The ROOT release ships with a script called prepareHistFactory in the $ROOTSYS/bin directory. The prepareHistFactory script prepares a working area. Specifically, it creates results/, data/ and config/ directories. It then copies the HistFactorySchema.dtd and example XML files from the ROOT tutorials directory into the config/ directory. It also copies a ROOT file into the data/ directory for use with the example XML configuration files.


The HistFactorySchema.dtd file, located in $ROOTSYS/etc/, specifies the XML schema. It is typically placed in the config/ directory of a working area together with the top-level XML file and the individual channel XML files. The user should not modify this file. The HistFactorySchema.dtd is commented to specify exactly the meaning of the various options.


The hist2workspace executable is run using the top-level XML configuration file as an argument in the following manner:

hist2workspace top_level.xml

hist2workspace builds the model and saves input histograms in the output ROOT file. The measurement's configuration class is made to persist as well. This measurement class has a member function that can write XML configuration files which point to the histograms saved in the (output) ROOT file (e.g.: GaussExample->WriteToXML).

XML files

general description

A minimum of two XML files are required for configuration. The "top-level" XML configuration file defines the measurement and contains a list of channels that contribute to this measurement. The "channel" XML configuration files are used to describe each channel in detail. For each contributing channel, there is a separate XML configuration file.


The nominal and variational histograms should all have the same normalisation convention. There are a few conventions possible:

  • option 1:
    • Lumi="XXX" is in the main XML's element, where XXX is in fb^-1.
    • Histograms have units of fb/bin.
    • Some samples have NormFactors that are all relative to prediction (e.g. 1 is the nominal prediction).

  • option 2:
    • Lumi="1" is in the main XML's element.
    • Histograms are normalized to unity.
    • Each sample has a NormFactor that is the expected numbers of events in data.

  • option 3:
    • Lumi="1" is in the main XML's element.
    • Histograms have units of (numbers of events)/bin expected in data.
    • Some samples have NormFactors that are all relative to prediction (e.g. 1 is the nominal prediction).

The choice of convention is up to you. In the end, the expected number of events is a product: N = Lumi * BinContent * NormFactor(s). See the user's guide for more precise equations.

top-Level XML file

general description

This file specifies a top level 'Combination' that is composed of:

  • several 'Channels', which are described in separate XML configuration files.
  • several 'Measurements' (corresponding to a full fit of the model), each of which specifies
    • Name, a name for this measurement to be used in tables and files.
    • Lumi, the integrated luminosity associated with the measurement (in units consistent with the input histograms, e.g. inverse picobarns).
    • LumiRelErr, the relative error of the luminosity measurement.
    • the histogram bins to be used:
      • BinLow specifies the lowest bin number used for the measurement (inclusive).
      • BinHigh specifies the highest bin number used for the measurement (exclusive).
    • what is(/are) the parameter(/s) of interest that will be measured.
      • Use POI to specify this.
    • which parameter(/s) should be fixed/floating (e.g., nuisance parameters)
    • which type of constraints are desired:
      • ConstraintTerm is used.
      • default: Gaussian
      • supported: Gaussian, Gamma, LogNormal, Uniform
      • RelativeUncertainty
    • whether the tool should export the model only and skip the default fit.
      • ExportOnly: if "True", skip the fit (export only the model; don't perform the initial fit).

OutputFilePrefix is a prefix for the output ROOT file to be created.

Mode represents the type of analysis. Use "comb".

ParamSetting allows for the specification of which parameters are fixed. If a parameter is included here, it is neither a nuisance parameter nor a POI, but a fixed parameter of the model.

Val allows for the specification of the specific value.

Const: declares the parameter constant; its specific value is not set here, but where the parameter is defined.

specific instructions

There are several parts in the top-level XML file. Initially, the XML schema is specified. The output is specified next. Specifically, the prefix for any output files (ROOT files containing workspaces) is given. Then, the channel XML files are given for each measurement (e.g., signal, background, systematics information) using the Input tag. Next, the details for each channel are defined using the Measurement tag. The bins to use are specified using the Measurement tag attributes BinLow (inclusive) and BinHigh (exclusive). Within the Measurement tag, the POI tag is used to specify the parameter of interest for the measurement.

example file: $ROOTSYS/tutorials/histfactory/example.xml

<!DOCTYPE Combination  SYSTEM 'HistFactorySchema.dtd'>

<Combination OutputFilePrefix="./results/example" Mode="comb" >

    <Input>./config/example_channel.xml</Input>

    <Measurement Name="GaussExample" Lumi="1." LumiRelErr="0.1" BinLow="0" BinHigh="2" Mode="comb" >
        <POI>SigXsecOverSM</POI>
        <ParamSetting Const="True">Lumi alpha_syst1</ParamSetting>
        <!-- don't need <ConstraintTerm> default is Gaussian-->
    </Measurement>

    <Measurement Name="GammaExample" Lumi="1." LumiRelErr="0.1" BinLow="0" BinHigh="2" Mode="comb" >
        <POI>SigXsecOverSM</POI>
        <ParamSetting Const="True">Lumi alpha_syst1</ParamSetting>
        <ConstraintTerm Type="Gamma" RelativeUncertainty=".3">syst2</ConstraintTerm>
    </Measurement>

    <Measurement Name="LogNormExample" Lumi="1." LumiRelErr="0.1" BinLow="0" BinHigh="2" Mode="comb" >
        <POI>SigXsecOverSM</POI>
        <ParamSetting Const="True">Lumi alpha_syst1</ParamSetting>
        <ConstraintTerm Type="LogNormal" RelativeUncertainty=".3">syst2</ConstraintTerm>
    </Measurement>

    <Measurement Name="ConstExample" Lumi="1." LumiRelErr="0.1" BinLow="0" BinHigh="2" Mode="comb" ExportOnly="True">
        <POI>SigXsecOverSM</POI>
        <ParamSetting Const="True">Lumi alpha_syst1</ParamSetting>
    </Measurement>

</Combination>

channel XML files

general description

These files specify for each channel

  • Name, a name for the channel.
  • InputFile, the input file in which the histogram can be found. If this is not specified, it must be specified for each sample and data.
  • HistoPath, the path (within the ROOT file) at which the histogram can be found.
  • HistoName (optional), the name of the histogram to be used, unless overridden for specific samples and data.
  • observed data (if absent, the tool will use the expectation, which is useful for expected sensitivity)
  • several 'Samples' (e.g., signal, bkg1, bkg2 etc.), each of which specifies
    • Name, a name
    • whether the sample is normalized by theory (e.g., N = L * sigma) or whether the sample is data driven.
    • a nominal expectation histogram.
    • a named 'Normalization Factor' (which can be fixed or allowed to float in a fit).
    • several 'Overall Systematics' in normalization with
      • a name.
      • +/- 1 sigma variations (e.g., 1.05 and 0.95 for a 5% uncertainty).
    • several 'Histogram Systematics' in shape with
      • a name (which can be shared with the OverallSyst if correlated).
      • +/- 1 sigma variational histograms.

specific instructions

First, the channel file specifies the XML schema. Then, the channel is defined and named. The location of the data histogram for the channel is defined. For each background, a Sample is defined. It is specified whether it is normalised to luminosity (i.e., the histograms should be per inverse picobarn and will be scaled). To enable normalisation to luminosity, set the tag attribute NormalizeByTheory to "True". For external normalisation, a data-driven background measurement is fixed to the lumi of the dataset. In this case, set the tag attribute NormalizeByTheory to "False". For the normalisation factor, (e.g., "SigXsecOverSM"), the tag attribute "Name" should match the POI specified in the top level XML configuration file.

systematic uncertainties

For an overall relative rate systematic, the "OverallSys" tag is used with its appropriate tag attributes. For a shape systematic (a systematic that affects the shape of a histogram), the "HistoSys" tag is used with its appropriate tag attributes. Specifically, for the HistoSys tag attributes "HistoNameHigh" and "HistoNameLow", the respective histograms for the upper and lower shape systematic uncertainties are specified in a manner such as the following:

<HistoSys Name="myShapeSystematic_1" HistoNameHigh="myShapeSystematic_1_high" HistoNameLow="myShapeSystematic_1_low" />

Note that the order in which the various classes of systematics should be specified is overall systematics followed by shape systematics.

example file: $ROOTSYS/tutorials/histfactory/example_channel.xml

<!DOCTYPE Channel  SYSTEM 'HistFactorySchema.dtd'>

    <Channel Name="channel1" InputFile="./data/example.root" HistoName="" >
        <Data HistoName="data" HistoPath="" />
        <Sample Name="signal" HistoPath="" HistoName="signal">
            <OverallSys Name="syst1" High="1.05" Low="0.95"/>
            <NormFactor Name="SigXsecOverSM" Val="1" Low="0." High="3." Const="True" />
        </Sample>
        <Sample Name="background1" HistoPath="" NormalizeByTheory="True" HistoName="background1">
            <OverallSys Name="syst2" Low="0.95" High="1.05"/>
        </Sample>
        <Sample Name="background2" HistoPath="" NormalizeByTheory="True" HistoName="background2">
            <OverallSys Name="syst3" Low="0.95" High="1.05"/>
            <!-- <HistoSys Name="syst4" HistoPathHigh="" HistoPathLow="histForSyst4"/> -->
        </Sample>
    </Channel>


slash suffix in HistoPath attribute

A slash must be added to the end of the HistoPath string attribute of the Channel, Data and Sample tags when referencing a directory other than the root directory of a ROOT file, as follows:

<Data  HistoName="myDataHistogram" HistoPath="myDirectory/" />

colon characters in Name attributes

Colon characters sometimes cannot be used in the Name attribute of the Channel, Data and Sample tags, so it is wise to avoid colons in this context.
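As a defensive measure, colons can be stripped from candidate names before they are written into the XML. A minimal sketch (the name ee:2jet is purely hypothetical):

```shell
# Replace any colons in a prospective channel/sample name with underscores.
name="ee:2jet"
safeName=$(printf '%s' "${name}" | tr ':' '_')
echo "${safeName}"
# prints ee_2jet
```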

guidance in writing the XML configuration files



The units of the input histograms and of the luminosity specified in the top-level XML configuration file must be consistent; for example, input histograms normalised to events per inverse picobarn with the luminosity given in inverse picobarns, or input histograms normalised to events per inverse femtobarn with the luminosity given in inverse femtobarns.

naming convention

The input histogram names are of no special significance; however, it is often preferable to devise a good naming convention. One might consider the order in which information appears in the XML and aim to have histogram names appear in a similar order when listed alphabetically.

generic example
  • Phenomenon histogram: <phenomenon name>_m<mass point>
  • Phenomenon upward systematic histogram: <phenomenon name>_m<mass point>_sys_<systematic name>_up
  • Phenomenon downward systematic histogram: <phenomenon name>_m<mass point>_sys_<systematic name>_do

An "m" character precedes the mass point because it allows easy search and replace of the mass point value when automatically producing multiple XML configuration files corresponding to multiple mass points.
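For example, with this convention a single sed substitution is enough to retarget a histogram name to a new mass point:

```shell
# The "m" prefix makes the mass point an unambiguous token, so one sed
# substitution updates every histogram name at once.
echo "ttH_m110_sys_JES_up" | sed "s/m110/m125/g"
# prints ttH_m125_sys_JES_up
```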

specific example
  • ttH histogram: ttH_m110
  • ttH upward luminosity systematic histogram: ttH_m110_sys_Lumi_up
  • ttH downward luminosity systematic histogram: ttH_m110_sys_Lumi_do
  • ttH upward JES systematic histogram: ttH_m110_sys_JES_up
  • ttH downward JES systematic histogram: ttH_m110_sys_JES_do
  • WW Herwig 105987 upward luminosity systematic histogram: WW_Herwig_105987_m110_sys_Lumi_up
  • WW Herwig 105987 downward luminosity systematic histogram: WW_Herwig_105987_m110_sys_Lumi_do
  • WW Herwig 105987 upward JES systematic histogram: WW_Herwig_105987_m110_sys_JES_up
  • WW Herwig 105987 downward JES systematic histogram: WW_Herwig_105987_m110_sys_JES_do


The sample names are of no special significance; however, it is often preferable to have very short names (e.g., signal1) for the purposes of clarity when, for example, printing the contents of the workspace.

The NormalizeByTheory attribute should be "True" (as opposed to "TRUE" or "true") for all non-data-driven backgrounds. If the Data tag is removed, expected data will be used in calculations.
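A quick way to catch wrong casing is to extract every NormalizeByTheory attribute and flag any value that is not exactly "True" or "False". A sketch using a small throwaway file (the sample names are hypothetical):

```shell
# Create a small demonstration file and flag any NormalizeByTheory
# attribute whose value is not exactly "True" or "False".
cat > /tmp/channel_casing_demo.xml <<'EOF'
<Sample Name="background1" NormalizeByTheory="true" HistoName="background1"/>
<Sample Name="background2" NormalizeByTheory="True" HistoName="background2"/>
EOF
grep -o 'NormalizeByTheory="[^"]*"' /tmp/channel_casing_demo.xml | grep -Ev '="(True|False)"$'
# prints NormalizeByTheory="true"
```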

The signal(s) are specified as such through the use of the POI tag (in the top-level XML configuration file); the Name of the corresponding NormFactor in the channel XML configuration file should match it.

example file: ttH_m110_channel.xml

<!DOCTYPE Channel SYSTEM 'HistFactorySchema.dtd'>

<!-- channel name and input file -->
<Channel Name="ttH_m110" InputFile="data/ttH_histograms.root" HistoName="">
    <!-- data -->
    <Data HistoName="data" />

    <!-- signal -->
    <Sample Name="signal" HistoName="ttH_m110" NormalizeByTheory="False" >
        <!-- systematics -->
        <HistoSys Name="Lumi" HistoNameHigh="ttH_m110_sys_Lumi_up" HistoNameLow="ttH_m110_sys_Lumi_do" />
        <HistoSys Name="JES" HistoNameHigh="ttH_m110_sys_JES_up" HistoNameLow="ttH_m110_sys_JES_do" />
        <OverallSys Name="scale_ttbar" High="1.05" Low="0.95"/>
        <NormFactor Name="SigXsecOverSM" Val="1.0" Low="0.0" High="100." Const="True" />
    </Sample>

    <!-- backgrounds -->

    <!-- WW_Herwig_105987 -->
    <Sample Name="WW_Herwig_105987" NormalizeByTheory="True" HistoName="WW_Herwig_105987_m110">
    </Sample>

    <!-- Wplusjets -->
    <Sample Name="Wplusjets" NormalizeByTheory="False" HistoName="Wplusjets_m110">
        <!-- systematics -->
        <HistoSys Name="Lumi" HistoNameHigh="Wplusjets_m110_sys_Lumi_up" HistoNameLow="Wplusjets_m110_sys_Lumi_do" />
        <HistoSys Name="JES" HistoNameHigh="Wplusjets_m110_sys_JES_up" HistoNameLow="Wplusjets_m110_sys_JES_do" />
    </Sample>
</Channel>

example file: ttH_m110_top-level.xml

<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>

<!-- workspace output file prefix -->
<Combination OutputFilePrefix="workspaces/ttH_m110_workspace" Mode="comb" >

    <!-- channel XML file(s) -->
    <Input>./config/ttH_m110_channel.xml</Input>

    <!-- measurement and bin range -->
    <Measurement Name="datastat" Lumi="1" LumiRelErr="0.037" BinLow="0" BinHigh="21" Mode="comb" ExportOnly="True">
        <!-- parameter of interest -->
        <POI>SigXsecOverSM</POI>
    </Measurement>

</Combination>

create XML configuration files automatically

Once the XML configuration files for a certain mass point are written, these files can be used to automatically create further XML configuration files for all remaining mass points.

If the described naming convention is used (i.e., mass points are prefixed with an "m"), the following shell script might be used to create all required XML configuration files for all mass points.

example file: createXMLFiles.sh


# This script is designed for use with the naming conventions described in the following page:
# ATLAS/HiggsAnalysisAtATLASUsingRooStats
# The #userSet hashtag is used to indicate places in the script where the user might modify
# things.

createXMLFiles(){

    # arguments: originalMass newMass
    originalMass="${1}"
    newMass="${2}"

    # Set/specify the original XML configuration file names.
    originalChannelXMLFileName="ttH_m${originalMass}_channel.xml" #userSet
    originalTopLevelXMLFileName="ttH_m${originalMass}_top-level.xml" #userSet

    # Set/specify the new XML configuration file names.
    newChannelXMLFileName="ttH_m${newMass}_channel.xml" #userSet
    newTopLevelXMLFileName="ttH_m${newMass}_top-level.xml" #userSet

    # Duplicate the original XML configuration files while substituting the original mass
    # points with the new mass points in the resulting new files.
    # Here, the program sed is used to replace all occurrences of a specified pattern in
    # a specified file with another specified pattern.
    # The "s" corresponds to "substitute".
    # The "g" corresponds to "globally" (all instances in a line).
    sed "s/m${originalMass}/m${newMass}/g" "${originalChannelXMLFileName}" > "${newChannelXMLFileName}"
    sed "s/m${originalMass}/m${newMass}/g" "${originalTopLevelXMLFileName}" > "${newTopLevelXMLFileName}"
}

# Specify the original mass point.
originalMass=110 #userSet

# Execute the function for creation of new XML configuration files for all required mass points.
createXMLFiles ${originalMass} 115 #userSet
createXMLFiles ${originalMass} 120 #userSet
createXMLFiles ${originalMass} 125 #userSet
createXMLFiles ${originalMass} 130 #userSet
createXMLFiles ${originalMass} 140 #userSet

C++ approach to HistFactory

The C++ approach to HistFactory allows one to interact with HistFactory (creating models etc.) using C++ or Python code. Because the configuration part of HistFactory is based on a C++ class structure, the XML configuration files can be bypassed entirely. All functions used in the executable hist2workspace are available for use in C++ code. The C++ classes mirror the structure of the input XML tags, and are saved automatically in the output ROOT file along with the workspace; they can then be browsed in CINT or imported in code.

The main advantage of using the C++ approach to HistFactory is that it allows one to easily create many similar models based on an initial template. In the XML approach, one must create many similar XML configuration files and execute hist2workspace many times (though this can be done in an automated manner).
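For instance, if the top-level files follow the ttH_m<mass>_top-level.xml convention used on this page, the repeated hist2workspace executions can be scripted; the loop below only echoes the commands as a dry run:

```shell
# Dry run: print the hist2workspace command for each mass point, assuming
# the ttH_m<mass>_top-level.xml naming convention used above.
for mass in 110 115 120 125 130 140; do
    echo hist2workspace "config/ttH_m${mass}_top-level.xml"
done
```

Removing the echo runs the commands for real.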

Since ROOT version 5.34, HistFactory has had the capability to accept C++ code to build models. Previously, all HistFactory models had to be created using static XML configuration files. Essentially, HistFactory models are created in C++ by defining a node/tree structure of the statistical model. A HistFactory model is a very structured object, consisting of one or several channels, each with data and various samples. Each sample can have different types of systematic uncertainties or can have freely floating parameters. Each of these objects (or nodes of the tree) is represented by a class that can be constructed and added. This C++ approach is relatively new and code examples will be written in due course.

HistFactory class tree structure

  • Measurement
    • std::string POI
    • double Lumi
    • ... etc.
    • std::vector<Channel>
      • Channel
        • Data
          • TH1* Observed
        • StatErrorConfig
        • std::vector<Sample>
          • Sample
            • TH1* Nominal
            • std::vector<NormFactor>
            • std::vector<OverallSys>
            • std::vector<HistoSys>

HistFactory configuration in C++

example (early in project development): creation of the Measurement and a Channel and, thence, creation of the channel samples, including signal and backgrounds

// Create the Measurement and a channel.

std::string myInputFile = "./data/myData.root";
std::string myChannel1Path = "";

// Create the measurement.
Measurement myMeasurement("myMeasurement", "myMeasurement");
myMeasurement.SetOutputFilePrefix("./workspaces/myWorkspace");
myMeasurement.SetPOI("SigXsecOverSM");
myMeasurement.SetLumi(1.0);
myMeasurement.SetLumiRelErr(0.10);
myMeasurement.SetExportOnly(false);
myMeasurement.SetBinHigh(2);

// Create a channel.
Channel myChannel("myChannel1");
myChannel.SetData("myData", myInputFile);
myChannel.SetStatErrorConfig(0.05, "Poisson");

// Create the channel samples, including signal and background.

// Create the signal sample.
Sample mySignal("mySignal", "mySignal", myInputFile);
mySignal.AddOverallSys("mySystematic1", 0.95, 1.05);
mySignal.AddNormFactor("SigXsecOverSM", 1, 0, 3);

// Create the background 1 sample.
Sample myBackground1("myBackground1", "myBackground1", myInputFile);
myBackground1.ActivateStatError("myBackground1StatisticalUncertainty", myInputFile);
myBackground1.AddOverallSys("mySystematic2", 0.95, 1.05);

// Create the background 2 sample.
Sample myBackground2("myBackground2", "myBackground2", myInputFile);
myBackground2.AddOverallSys("mySystematic3", 0.95, 1.05);

// Add the samples to the channel.
myChannel.AddSample(mySignal);
myChannel.AddSample(myBackground1);
myChannel.AddSample(myBackground2);

// Add the channel to the measurement.
myMeasurement.AddChannel(myChannel);

example HistFactory model construction using C++

A HistFactory model is to be created. It shall consist of a single channel containing only signal and background. Uncertainties on the luminosity, statistical uncertainties from Monte Carlo and separate systematics on the signal and backgrounds shall be included.

#include "RooStats/HistFactory/Measurement.h"
#include "RooStats/HistFactory/Channel.h"
#include "RooStats/HistFactory/Sample.h"
#include "RooStats/HistFactory/MakeModelAndMeasurementsFast.h"

void MakeSimpleModel(){
    // This function creates a simple model with one channel.

    // Create the measurement object.
        RooStats::HistFactory::Measurement measurement_1("measurement_1", "measurement_1");
        // Configure the measurement.
            // Set the output files' prefix.
                measurement_1.SetOutputFilePrefix("./workspaces/measurement_1");
            // Set ExportOnly to false, meaning that further
            // activities (such as fitting and plotting) shall be
            // carried out beyond simple exporting to the workspace.
                measurement_1.SetExportOnly(false);
            // Set the parameter of interest.
                measurement_1.SetPOI("SigXsecOverSM");
            // Set the luminosity.
                // It is assumed that all histograms have been
                // scaled by luminosity.
                measurement_1.SetLumi(1.0);
                // Set the uncertainty.
                measurement_1.SetLumiRelErr(0.10);

        // Create a channel.
            RooStats::HistFactory::Channel channel_1("channel_1");
            // Configure the channel.
                // Set the data.
                    // The data is a histogram representing
                    // the measured distribution. It can
                    // have one or many bins.
                    // The name of the ROOT file containing
                    // the data and the name to attribute to
                    // the data are specified.
                    channel_1.SetData("data", "data/example.root");
            // Create a sample (signal).
                // Samples describe the various processes that
                // are used to model the data. In this case,
                // they consist of a signal process and two
                // background processes.
                RooStats::HistFactory::Sample signal_1("signal_1", "signal_1", "data/example.root");
                // Configure the sample.
                    // The cross section scaling parameter
                    // is added.
                        signal_1.AddNormFactor("SigXsecOverSM", 1, 0, 3);
                    // A systematic uncertainty of 5% is
                    // added.
                        signal_1.AddOverallSys("systematic_1", 0.95, 1.05);
                // Add the sample to the channel.
                    channel_1.AddSample(signal_1);
            // Create a sample (background).
                RooStats::HistFactory::Sample background_1("background_1", "background_1", "data/example.root");
                // Configure the sample.
                    // Add a statistical uncertainty.
                        background_1.ActivateStatError("background_1_statistical_uncertainty", "data/example.root");
                    // A systematic uncertainty of 5% is added.
                        background_1.AddOverallSys("systematic_uncertainty_2", 0.95, 1.05);
                // Add the sample to the channel.
                    channel_1.AddSample(background_1);
            // Create a sample (background).
                RooStats::HistFactory::Sample background_2("background_2", "background_2", "data/example.root");
                    // Add a statistical uncertainty (using the
                    // default errors stored in the histogram).
                        background_2.ActivateStatError();
                    // Add a systematic uncertainty.
                        background_2.AddOverallSys("systematic_uncertainty_3", 0.95, 1.05);
                // Add the sample to the channel.
                    channel_1.AddSample(background_2);
            // Add the channel to the measurement.
                measurement_1.AddChannel(channel_1);
        // Access the specified data and collect, copy and store the
        // histograms.
            measurement_1.CollectHistograms();
        // Print a text representation of the model.
            measurement_1.PrintTree();
        // Run the measurement (this is equivalent to an execution of
        // the program hist2workspace).
            RooStats::HistFactory::MakeModelAndMeasurementFast(measurement_1);
}

details on HistFactory usage in C++

details on the object measurement

A measurement object has several methods for configuring its options, each equivalent to its XML counterpart.

  • set the prefix for output files: void SetOutputFilePrefix(const std::string& prefix);
  • set the parameter of interest for the measurement: void SetPOI(const std::string& POI);
  • set a parameter in the model to be constant: void AddConstantParam(const std::string& param);
  • set the value of a parameter in the model: void SetParamValue(const std::string& param, double val);
  • set the low and high bins for all observables: void SetBinLow(int BinLow); void SetBinHigh(int BinHigh);
  • set the luminosity and its relative error: void SetLumi(double Lumi); void SetLumiRelErr(double LumiRelErr);
  • set whether the model should only export the workspace (rather than also saving plots and tables): void SetExportOnly(bool ExportOnly);
  • add a channel object to a model (a measurement): void AddChannel(RooStats::HistFactory::Channel chan);
  • open all specified ROOT files and copy and save all necessary histograms: void CollectHistograms();
  • save a measurement (a model) to a ROOT file (for possible future modification and use in creating new models): void writeToFile(TFile* file);

HistFactory supports parameters that are functions of other parameters. Such parameters are defined in the initial model and are converted into dynamic functions during a processing pass over the model. They can be created using additional methods of a measurement object.

add a preprocessed function by giving the function a name, a functional expression and a string with a bracketed list of dependencies (e.g. "SigXsecOverSM[0,3]")

void AddPreprocessFunction(std::string name, std::string expression, std::string dependencies);

create a class representing a preprocess function and add it to a measurement directly using the constructor PreprocessFunction and a method of a measurement object

PreprocessFunction::PreprocessFunction(std::string Name, std::string Expression, std::string Dependents);
void AddPreprocessFunction(const std::string& function);

details on object channel

Measurements contain collections of channels.

  • create a channel and give it a name: Channel::Channel(const std::string& name);
  • set the channel data histogram using the name and path of a histogram in a ROOT file: void SetData(std::string HistoName, std::string InputFile, std::string HistoPath="");
  • set the value of the single bin of a channel with only one bin (creating a 1 bin histogram): void SetData(double value_1);
  • supply a pointer to a histogram created or loaded in memory as the data: void SetData(TH1* data_1);
  • create a HistFactory data object and load it directly (useful in configuring an object and using it multiple times): void SetData(const RooStats::HistFactory::Data& data);

details on object sample

Each channel has several samples which describe the data. These samples can represent both signals and backgrounds. Samples are constructed, configured and then added to a channel. Each sample has a histogram describing its shape and a list of systematic uncertainties describing how that shape transforms based on a number of parameters.

  • create a sample object, specifying the name: Sample(std::string Name);
  • create a sample object, specifying the name, the histogram name, the histogram file and the histogram path in the file: Sample(std::string Name, std::string HistoName, std::string InputFile, std::string HistoPath="");
  • add a sample object to a channel object: void AddSample(RooStats::HistFactory::Sample sample);
  • independently set a histogram object: void SetHisto(TH1* histogram_1);
  • independently set a value: void SetValue(Double_t value_1);
  • set a sample to be "normalised by theory" (its normalisation scales with luminosity): void SetNormalizeByTheory(bool norm);

systematic uncertainties

A sample object can have many different types of systematic uncertainties defined.

void AddOverallSys(std::string Name, Double_t Low, Double_t High);
void AddNormFactor(std::string Name, Double_t Val, Double_t Low, Double_t High, bool Const=false);
void AddHistoSys(std::string Name, std::string HistoNameLow, std::string HistoFileLow,  std::string HistoPathLow, std::string HistoNameHigh, std::string HistoFileHigh, std::string HistoPathHigh);
void AddHistoFactor(std::string Name, std::string HistoNameLow, std::string HistoFileLow,  std::string HistoPathLow, std::string HistoNameHigh, std::string HistoFileHigh, std::string HistoPathHigh);
void AddShapeFactor(std::string Name);
void AddShapeSys(std::string Name, Constraint::Type ConstraintType, std::string HistoName, std::string HistoFile, std::string HistoPath="");

A sample can be included in a channel's bin-by-bin statistical uncertainty fluctuations by "activating" the sample. There are two ways to do this. The first way is to use the default errors that are stored in the histogram's uncertainty array. The second way is to supply the errors using an external histogram (in the case where the desired errors differ from those stored by the TH1 histogram). These can be achieved using the following methods:

void ActivateStatError();
void ActivateStatError(std::string HistoName, std::string InputFile, std::string HistoPath="");    


Once each channel has been created, filled with data and samples and added to the overall measurement, the analysis can begin. The first step is to generate the RooFit model from the measurement object. This model is stored in a RooFit workspace object. There are two ways to do this. The first way is used by the program hist2workspace. It builds the workspace, fits it, creates output plots and saves the workspace and plots to files.

RooWorkspace* MakeModelAndMeasurementFast(RooStats::HistFactory::Measurement& measurement);

The second way is to build the workspace in memory and return a pointer to the workspace object.

static RooWorkspace* MakeCombinedModel(Measurement& measurement);

A workspace can also be created for only a single channel of a model:

RooWorkspace* MakeSingleChannelModel(Measurement& measurement, Channel& channel);

A function can be used to fit a model.

void FitModel(RooWorkspace *, std::string data_name="obsData");

All of these methods return a pointer to the newly created workspace object. The workspace can be analysed directly, for example using RooStats scripts, or it can be saved to an output file for later analysis or publication.

links for HistFactory


ATLAS recommends the use of the profile likelihood as a test statistic.

full significance calculation example, from histogram creation to model production (using hist2workspace), to RooStats calculations

Prepare a working area.

Make the main directory.

cd ~
mkdir test
cd test

Make the directory for the XML configuration files.

mkdir config

Make the directory for the input histogram ROOT files.

mkdir data

Make the directory for the workspace.

mkdir workspaces

Generate input histograms.

Change to the directory for the input histograms.

cd data

Run some code such as the following. For fun, we will create a simulated data signal at about 126 GeV with about three times the expected number of events.

example file: make_test_histograms.cxx

#include <iostream>
#include <fstream>
#include <sstream>
#include <stdio.h>
#include <string.h>
#include <cmath>
#include "TStyle.h"
#include "TROOT.h"
#include "TPluginManager.h"
#include "TSystem.h"
#include "TFile.h"
#include "TGaxis.h"
#include "TCanvas.h"
#include "TH1.h"
#include "TF1.h"
#include "TLine.h"
#include "TSpline.h"
#include "RooAbsData.h"
#include "RooDataHist.h"
#include "RooCategory.h"
#include "RooDataSet.h"
#include "RooRealVar.h"
#include "RooAbsPdf.h"
#include "RooSimultaneous.h"
#include "RooProdPdf.h"
#include "RooNLLVar.h"
#include "RooProfileLL.h"
#include "RooFitResult.h"
#include "RooPlot.h"
#include "RooRandom.h"
#include "RooMinuit.h"
#include "TRandom3.h"
#include "RooWorkspace.h"
#include "RooStats/RooStatsUtils.h"
#include "RooStats/ModelConfig.h"
#include "RooStats/ProfileLikelihoodCalculator.h"
#include "RooStats/LikelihoodInterval.h"
#include "RooStats/LikelihoodIntervalPlot.h"
#include "TStopwatch.h"
using namespace std;
using namespace RooFit;
using namespace RooStats;

int main(){

    // Create the expected signal histogram.

    // Create the function used to describe the signal shape (a simple Gaussian shape).
    TF1 mySignalFunction("mySignalFunction", "(1/sqrt(2*pi*0.5^2))*exp(-(x-126)^2/(2*0.5^2))", 120, 130);
    // Create the histogram with 100 bins between 120 and 130 GeV.
    TH1F mySignal("mySignal", "mySignal", 100, 120, 130);
    // Fill the histogram using the signal function.
    mySignal.FillRandom("mySignalFunction", 10000000);

    // Create the background histogram.

    // Create the function used to describe the background shape (a simple polynomial of the second order).
    TF1 myBackgroundFunction("myBackgroundFunction", "-2.3*10^(-6)*x^2+0.0007*x+0.4", 120, 130);
    // Create the histogram with 100 bins between 120 and 130 GeV.
    TH1F myBackground("myBackground", "myBackground", 100, 120, 130);
    // Fill the histogram using the background function (the number of fills must fit in an Int_t).
    myBackground.FillRandom("myBackgroundFunction", 2000000000);

    // Create the (simulated) data histogram. This histogram represents what one might have as real data.

    // Create the histogram by combining the signal histogram multiplied by 3 with the background histogram.
    TH1F myData = 3 * mySignal + myBackground;
    // Set the name of the histogram.
    myData.SetName("myData");

    // Save the created histograms to a ROOT file.
    TFile myFile("test_histograms.root", "RECREATE");
    mySignal.Write();
    myBackground.Write();
    myData.Write();
    myFile.Close();

    return 0;
}

Create the workspace.

There are two approaches to creating the RooFit workspace. The first uses the hist2workspace program together with XML configuration files and input histogram files. The second builds the workspace explicitly using RooFit code. In general, the first, XML-based approach is the one to opt for; if you wish to understand in detail how models are specified, take a look at the second approach.

option 1: Create the workspace using hist2workspace and XML configuration files.

Change to the directory for the configuration XML files.

cd ~/test/config

Copy the HistFactory XML schema to the configuration directory.

cp $ROOTSYS/etc/HistFactorySchema.dtd .

Create XML configuration files in the configuration directory.

example file: test_top-level.xml

<!DOCTYPE Combination SYSTEM 'HistFactorySchema.dtd'>

<!-- workspace output file prefix -->
<Combination OutputFilePrefix="./workspaces/test_workspace">

   <!-- channel XML file(s) -->
   <Input>./config/test_channel.xml</Input>

   <!-- measurement and bin range -->
   <Measurement Name="datastat" Lumi="1" LumiRelErr="0.037" BinLow="0" BinHigh="99" Mode="comb" ExportOnly="True">
       <!-- parameter of interest -->
       <POI>SigXsecOverSM</POI>
   </Measurement>

</Combination>

example file: test_channel.xml

<!DOCTYPE Channel SYSTEM 'HistFactorySchema.dtd'>

<!-- channel name and input file -->
<Channel Name="test" InputFile="./data/test_histograms.root" HistoName="">
    <!-- data -->
    <Data HistoName="myData" />

    <!-- signal -->
    <Sample Name="signal" HistoName="mySignal" NormalizeByTheory="True" >
        <NormFactor Name="SigXsecOverSM" Val="1.0" Low="0.0" High="100." Const="True" />
    </Sample>

    <!-- background -->
    <Sample Name="background" NormalizeByTheory="True" HistoName="myBackground">
    </Sample>
</Channel>


As you can see, the model is very simple: there is the expected signal, the expected background and the actual data (in this case, simulated). The parameter of interest scales the signal, and the resulting expectation is compared to the actual data.

Run hist2workspace.

hist2workspace config/test_top-level.xml

Four files should now have been created in the workspaces directory:


The test_workspace_combined_datastat_model.root file is the ROOT file that contains the combined workspace object (it has constraints, weightings etc. properly incorporated). This is the file you are almost certainly interested in. The test_workspace_test_datastat_model.root file contains a workspace object without proper constraints, weightings etc. I don't know what the other two files are.

Use the ProfileLikelihoodCalculator to calculate the 95% confidence interval on the parameter of interest as specified in the ModelConfig.

Create a C++ program such as the following:

example file: ProfileLikeliHoodCalculator_confidence_level.cxx

#include <iostream>
#include <fstream>
#include <sstream>
#include <stdio.h>
#include <string.h>
#include <cmath>
#include "TStyle.h"
#include "TROOT.h"
#include "TPluginManager.h"
#include "TSystem.h"
#include "TFile.h"
#include "TGaxis.h"
#include "TCanvas.h"
#include "TH1.h"
#include "TF1.h"
#include "TLine.h"
#include "TSpline.h"
#include "RooAbsData.h"
#include "RooDataHist.h"
#include "RooCategory.h"
#include "RooDataSet.h"
#include "RooRealVar.h"
#include "RooAbsPdf.h"
#include "RooSimultaneous.h"
#include "RooProdPdf.h"
#include "RooNLLVar.h"
#include "RooProfileLL.h"
#include "RooFitResult.h"
#include "RooPlot.h"
#include "RooRandom.h"
#include "RooMinuit.h"
#include "TRandom3.h"
#include "RooWorkspace.h"
#include "RooStats/RooStatsUtils.h"
#include "RooStats/ModelConfig.h"
#include "RooStats/ProfileLikelihoodCalculator.h"
#include "RooStats/LikelihoodInterval.h"
#include "RooStats/LikelihoodIntervalPlot.h"
#include "TStopwatch.h"
using namespace std;
using namespace RooFit;
using namespace RooStats;

int main(){

    // Open the ROOT workspace file.
    TString myInputFileName = "workspaces/test_workspace_combined_datastat_model.root";
    cout << "open file " << myInputFileName << endl;
    TFile *_file0 = TFile::Open(myInputFileName);
    // Access the workspace.
    cout << "access workspace" << endl;
    RooWorkspace *myWorkspace = (RooWorkspace*) _file0->Get("combined");
    // Access the ModelConfig.
    cout << "access ModelConfig..." << endl;
    ModelConfig *myModelConfig = (ModelConfig*) myWorkspace->obj("ModelConfig");
    // Access the data.
    cout << "accessing data" << endl;
    RooAbsData *myData = myWorkspace->data("obsData");

    // Use the ProfileLikelihoodCalculator to calculate the 95% confidence
    // interval on the parameter of interest as specified in the ModelConfig.
    cout << "calculate profile likelihood" << endl;
    ProfileLikelihoodCalculator myProfileLikelihood(*myData, *myModelConfig);
    LikelihoodInterval* myConfidenceInterval = myProfileLikelihood.GetInterval();
    // Access the confidence interval on the parameter of interest (POI).
    RooRealVar* myPOI = (RooRealVar*) myModelConfig->GetParametersOfInterest()->first();

    // Print results.
    cout << "print results" << endl;
    // Print the confidence interval on the POI.
    cout <<
    "\n95% confidence interval on the point of interest " <<
    myPOI->GetName()<<": ["<<
    myConfidenceInterval->LowerLimit(*myPOI) << ", " <<
    myConfidenceInterval->UpperLimit(*myPOI) << "]\n" << endl;
    return 0;
}

Compile this code using a Makefile such as the following:

example file: Makefile

ProfileLikeliHoodCalculator_confidence_level : ProfileLikeliHoodCalculator_confidence_level.cxx
    g++ -g -O2 -fPIC -Wno-deprecated  -o ProfileLikeliHoodCalculator_confidence_level ProfileLikeliHoodCalculator_confidence_level.cxx `root-config --cflags --libs --ldflags` -lHistFactory -lXMLParser -lRooStats -lRooFit -lRooFitCore -lThread -lMinuit -lFoam -lHtml -lMathMore

Compile the code.

make

The end result as displayed in the terminal output is the following:

95% confidence interval on the point of interest SigXsecOverSM: [2.99653, 3.00347]

further information

links for ROOT

-- WilliamBreadenMadden - 2010-10-29

Topic revision: r51 - 2016-05-06 - WilliamBreadenMadden