-- ThomasDoherty - 2009-10-26
Using Ganga to submit jobs to the Panda backend on lxplus
References:
Data preparation reprocessing - using Ganga
1. In a clean lxplus afs shell, setup Ganga.
source /afs/cern.ch/sw/ganga/install/etc/setup-atlas.sh
NOTE: For demonstration purposes (to show that this setup does indeed pull in the code changes made to a checked-out package), I have appended to a comment in AnalysisSkeleton.cxx (i.e. I changed "No AOD MC truth particle container found in TDS" to "No AOD MC truth particle container found in TDS - This comment changed by Tom").
4. Go to the run directory and start Ganga (once the code is compiled). NOTE: Ganga picks up your grid cert/key from your .globus directory on lxplus - if you have not created these files, please follow the instructions here.
cd ../run
ganga
5. Before you prepare/run your Ganga JO you must prepare the Athena JO it points to. For this you can use the top job options copied from your UserAnalysis package's share directory.
cp ../share/AnalysisSkeleton_topOptions.py .

BUT to prepare your code for running on the Grid there are some changes needed for this Athena JO - please add these lines:

include("RecExCommission/RecExCommissionFlags_jobOptions.py")
ATLASCosmicFlags.useLocalCOOL = True

# setup DBReplicaSvc to choose closest Oracle replica, configurables style
from AthenaCommon.AppMgr import ServiceMgr
from PoolSvc.PoolSvcConf import PoolSvc
ServiceMgr += PoolSvc(SortReplicas=True)
from DBReplicaSvc.DBReplicaSvcConf import DBReplicaSvc
ServiceMgr += DBReplicaSvc(UseCOOLSQLite=False)
Also remember to remove (or comment out) the input data line, and if you are running a reprocessing job, change the geometry tag and the conditions DB tag to match those used in the reprocessing cycle (see details for each reprocessing campaign on this page here). For example:

globalflags.ConditionsTag.set_Value_and_Lock('COMCOND-REPC-002-13')
6. Execute your Ganga job script while Ganga is running (an example of what 'pandaBackend_test.py' would look like is shown below):

execfile('pandaBackend_test.py')
or simply run ganga from the command line with the name of the Ganga JO appended:

ganga pandaBackend_test.py
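For orientation, here is a minimal sketch of the kind of Ganga JO a 'pandaBackend_test.py' might contain. This is an assumption-laden outline, not the actual file from this page: the DB release string, dataset name and subjob count are placeholders, the line numbering only approximates the example the NOTE refers to, and it must be run inside a Ganga session, where Job, Athena, DQ2Dataset, DQ2JobSplitter and Panda are predefined.

```python
# Hypothetical sketch of a Ganga job-options script (GangaAtlas GPI).
j = Job()
j.application = Athena()
j.application.atlas_dbrelease = 'DBRelease-6.6.1.1'           # override DB release (placeholder)
j.application.option_file = 'AnalysisSkeleton_topOptions.py'  # your Athena JO
j.application.athena_compile = False                          # packages already compiled locally
j.application.prepare()                                       # tar the user area to ship with the job
j.inputdata = DQ2Dataset()
j.inputdata.dataset = 'data08_cos...'                         # placeholder dataset name
j.backend = Panda()                                           # LCG, Panda or NorduGrid
j.splitter = DQ2JobSplitter()
j.splitter.numsubjobs = 20                                    # placeholder subjob count
j.submit()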
NOTE: Line 3 is an example of overriding a database release to match the one needed to read ESD/DPD. In the case of the spring cosmic reprocessing, the DB release is 6.6.1.1. If the database releases don't match, the jobs fail on the Grid (remove this line if it is not necessary). Line 4 corresponds to your Athena jobOptions. Line 5 is set to False because we have already compiled the packages locally; if you want your job to compile your checked-out code before submitting, simply change this to True. Line 6 tells Ganga to tar your user area and send it with the job. Line 10 specifies the backend to which you are sending your job. There are three options: LCG, Panda and NorduGrid. In the example above Panda was chosen because the data existed only in BNLPANDA, a site in the US cloud. Line 12 corresponds to the number of subjobs you want to split your job into.
Finally, in Line 13 you submit your job.