Difference: RunningGangaWithPanda (14 vs. 15)

Revision 15, 2015-01-07 - GraemeStewart

-- ThomasDoherty - 2009-10-26
Deprecated

This page is now only of historical interest. For up-to-date Ganga and pathena instructions, please see the information in the ATLAS Software Tutorial.

 

Using Ganga to submit jobs to the Panda backend on lxplus

References:

  Data preparation reprocessing - using Ganga
https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#DAY_4_USING_THE_GRID
1. In a clean lxplus AFS shell, set up Ganga.
  source /afs/cern.ch/sw/ganga/install/etc/setup-atlas.sh
2. Set up the Athena release.

NOTE: To set up for any release one must be familiar with using CMT (bootstrap procedures and requirements files) - see here for more information. In this example (for reprocessing, see the reference page above for the necessary release), once the requirements file is set up and a release directory has been created in your test area, try the command below (a sketch of a typical requirements file follows it):

  source ~/cmthome/setup.sh -tag=14.5.2.6,32,AtlasProduction
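
For reference, the ~/cmthome/setup.sh sourced above is generated by CMT from a requirements file. A minimal sketch of such a file is shown here; the site settings and the test-area path are illustrative assumptions, so adapt them to your own environment (and see the CMT link above for the authoritative recipe). After writing the file, run 'cmt config' once in ~/cmthome to (re)generate setup.sh.

   # ~/cmthome/requirements - illustrative sketch only
   set   CMTSITE  CERN
   set   SITEROOT /afs/cern.ch
   macro ATLAS_DIST_AREA ${SITEROOT}/atlas/software/dist
   macro ATLAS_TEST_AREA ${HOME}/testarea        # point this at your own test area
   apply_tag setup
   use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)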
3. Set up any checked-out packages used by your code.

For this example, check out (and compile) the UserAnalysis package as in the "HelloWorld" example here or here; a sketch of the checkout is given below.
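
If the package is not already in your test area, the checkout and compilation could be done roughly as sketched here; the version tag is a placeholder assumption, so take the tag appropriate to your release from the tutorial pages:

   cd $TESTAREA/14.5.2.6
   cmt co -r UserAnalysis-00-00-00 PhysicsAnalysis/AnalysisCommon/UserAnalysis   # tag is a placeholder
   cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
   cmt config
   gmake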

  cd $TESTAREA/14.5.2.6/PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
  source setup.sh
NOTE: For demonstration purposes (to show that this setup does indeed pull in the code changes made to a checked-out package), I have appended to a comment in AnalysisSkeleton.cxx (i.e. I changed "No AOD MC truth particle container found in TDS" to "No AOD MC truth particle container found in TDS - This comment changed by Tom").

4. Go to the run directory and start Ganga (once the code is compiled). NOTE: Ganga picks up your grid certificate/key from your .globus directory on lxplus - if you have not created these files, please follow the instructions here under the preparing-the-certificate section. Also you must create a file called 'vomses' in a new directory called .glite (again note the '.') and enter this line for the ATLAS VO into that file: "atlas" "voms.cern.ch" "15001" "/C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch" "atlas" (one way to do this is sketched below).
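
Creating that file from the shell could look like the following; the vomses line is the one quoted above, and the heredoc is simply a convenient way of writing it:

   mkdir -p ~/.glite
   cat > ~/.glite/vomses <<'EOF'
"atlas" "voms.cern.ch" "15001" "/C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch" "atlas"
EOF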

   cd ../run
   ganga

5. Before you prepare/run your Ganga JO you must prepare the Athena JO it points to. For this you can use the top job options copied from your UserAnalysis package's share directory.
 
cp ../share/AnalysisSkeleton_topOptions.py .

BUT to prepare your code for running on the Grid, some changes are needed in this Athena JO - please add these lines:

   from DBReplicaSvcConf import DBReplicaSvc
   ServiceMgr += DBReplicaSvc(UseCOOLSQLite=False)
_____________________________________________________________________________
Also remember to remove (or comment out) the input data line, and if you are running a reprocessing job, change the geometry tag and the conditions DB tag to match those used in the reprocessing cycle (see details for each reprocessing campaign on this page here). For example (a fuller sketch follows):
globalflags.ConditionsTag.set_Value_and_Lock('COMCOND-REPC-002-13')
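
If the reprocessing cycle also prescribes a particular geometry, the geometry tag can be locked in the same way. The tag values below are placeholders, not taken from any specific campaign, so copy the correct ones from the reprocessing page linked above:

   from AthenaCommon.GlobalFlags import globalflags
   # Both tags must match the reprocessing campaign being analysed (values here are placeholders)
   globalflags.DetDescrVersion.set_Value_and_Lock('ATLAS-GEO-03-00-00')
   globalflags.ConditionsTag.set_Value_and_Lock('COMCOND-REPC-002-13')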
6. Execute your Ganga job script while Ganga is running (an example of what 'pandaBackend_test.py' could look like is given below - in other words, have this file in your run directory) and type:
    execfile('pandaBackend_test.py')

or simply run ganga from the command line with the name of the Ganga JO appended:
 
    ganga pandaBackend_test.py

7. You can monitor your job's progress by typing 'jobs' inside Ganga (examples below) or, if you submitted to the Panda backend, at http://panda.cern.ch:25880/server/pandamon/query.
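
Inside the Ganga prompt the job repository can be queried directly, for example (job id 81 is just the id used in the example further down; substitute your own):

   jobs                 # table of all jobs with id, status, application and backend
   jobs(81).status      # status of a single job
   jobs(81).subjobs     # per-subjob view of a split Panda job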

8. Once your job has finished, you can copy the output data using the dq2 tools.

   source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh
   dq2-get "your_dataset_name"
  Where "your_dataset_name" is given to you by Ganga once the job completes. Also once the job completes Panda in particular sends you an email like this. In the email if you click on the 'PandaMonURL' link - in my case for job id 81.Then click on any of the sub-job pandaID's for example ' 1026613035' then scroll down to the link 'Find and view log files' - you can look at the Ganga log for your subjob which is named 'athena_stdout.txt' (only look at 'athena_stderr.txt if your job does not complete). I can then find that"No AOD MC truth particle container found in TDS - This comment changed by Tom" appears in this log.

The Ganga JO 'pandaBackend_test.py' could look like this (the line numbers are for reference only and are not part of the file; a fuller illustrative sketch follows the listing):

1    j = Job()
2    j.application = Athena()
3    j.application.atlas_dbrelease = 'ddo.000001.Atlas.Ideal.DBRelease.v06060101:DBRelease-6.6.1.1.tar.gz'
4    j.application.option_file = 'AnalysisSkeleton_topOptions.py'
...
13   j.submit()

For the LCG() backend you might also need:

   j.outputdata.outputdata = ['AnalysisSkeleton.aan.root']

which should match exactly the output file name from your jobs.
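
Because only lines 1-4 and 13 of the script are shown above, here is a hedged sketch of what a complete Panda job script of this kind might contain. The input dataset name and splitter settings are placeholders, and the DQ2Dataset/DQ2OutputDataset/DQ2JobSplitter/Panda classes are assumptions about the GangaAtlas plugin of that era, so check them with help() in your Ganga session:

   # Illustrative sketch only - the middle lines are assumed, not taken from the original script
   j = Job()
   j.application = Athena()
   j.application.atlas_dbrelease = 'ddo.000001.Atlas.Ideal.DBRelease.v06060101:DBRelease-6.6.1.1.tar.gz'
   j.application.option_file = 'AnalysisSkeleton_topOptions.py'
   j.application.prepare()                           # packages the compiled user area for the build job
   j.inputdata = DQ2Dataset()
   j.inputdata.dataset = 'your.input.dataset.name/'  # placeholder - use your reprocessed dataset
   j.outputdata = DQ2OutputDataset()
   j.splitter = DQ2JobSplitter()
   j.splitter.numsubjobs = 10                        # adjust to the size of the input dataset
   j.backend = Panda()
   j.submit()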
 

Reusing the environment

If you're running the same code over multiple datasets, you can save time by telling jobs after the first one to reuse the same environment. This means that Panda doesn't have to run a new 'build' job every time, which can make things much faster. This works as long as you don't recompile anything in your environment between jobs; you can still change the job options. One way of doing this is sketched below.
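
One pattern commonly used with the Panda backend was to point later jobs at the library dataset produced by the first job's build step. The libds attribute used below is an assumption about the GangaPanda backend of that era (it mirrors pathena's --libDS option), so verify it with help(Panda) before relying on it:

   j1 = jobs(81)                                     # an earlier, completed job (id is just an example)
   j2 = j1.copy()                                    # same application, same compiled environment
   j2.application.option_file = 'OtherOptions.py'    # hypothetical different job options
   j2.inputdata.dataset = 'another.dataset.name/'    # placeholder dataset
   j2.backend.libds = j1.backend.libds               # reuse the existing build (assumed attribute)
   j2.submit()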
 