-- Main.ThomasDoherty - 2009-10-26

---++++ Using Ganga to submit jobs to the Panda backend on lxplus

References:
   * [[https://twiki.cern.ch/twiki/bin/view/Atlas/FullGangaAtlasTutorial][Full Ganga Atlas Tutorial]]
   * [[https://twiki.cern.ch/twiki/bin/view/Atlas/DataPreparationReprocessing#Submitting_the_job_with_GANGA][Data preparation reprocessing - using Ganga]]
   * [[https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#DAY_4_USING_THE_GRID][Regular Computing Tutorial - Day 4: Using the Grid]]

1. In a clean lxplus AFS shell, set up Ganga:
<pre><verbatim>
source /afs/cern.ch/sw/ganga/install/etc/setup-atlas.sh
</verbatim></pre>

2. Set up the Athena release. NOTE: to set up any release you must be familiar with using CMT (bootstrap procedures and requirements files) - see [[https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookSetAccount][here]] for more information. In this example (for reprocessing, see the reference page above for the necessary release), once the requirements file is set up and a release directory has been created in your test area, try:
<pre><verbatim>
source ~/cmthome/setup.sh -tag=14.5.2.6,32,AtlasProduction
</verbatim></pre>

3. Set up any checked-out packages you use in your code. For this example, check out (and compile) the UserAnalysis package as in the "HelloWorld" example [[https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookRunAthenaHelloWorld][here]] or [[https://twiki.cern.ch/twiki/bin/view/Atlas/FullGangaAtlasTutorial#4_2_Setting_up_a_Basic_Analysis][here]]:
<pre><verbatim>
cd $TESTAREA/14.5.2.6/PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
source setup.sh
</verbatim></pre>
NOTE: For demonstration purposes (to show that this setup does indeed pull in the code changes made to a checked-out package), I appended to a comment in AnalysisSkeleton.cxx, i.e. I changed "No AOD MC truth particle container found in TDS" to "No AOD MC truth particle container found in TDS - This comment changed by Tom".

4. Go to the run directory and start Ganga (once the code is compiled). NOTE: Ganga picks up your grid certificate/key from the .globus directory on lxplus - if you have not created these files, please follow the instructions [[https://ppes8.physics.gla.ac.uk/twiki/bin/view/IT/GridCertificates][here]] under the "preparing the certificate" section. You must also create a file called 'vomses' in a new directory called .glite (again, note the '.') and put this single line for the ATLAS VO into it: "atlas" "voms.cern.ch" "15001" "/C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch" "atlas"
<pre><verbatim>
cd ../run
ganga
</verbatim></pre>
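Optionally, before going any further, you can check that Ganga itself is working by running a trivial local test job from the Ganga prompt. This is only a sketch of generic Ganga usage (the =Executable= application and =Local= backend are standard Ganga classes and are not part of the ATLAS setup above):
<pre><verbatim>
# Quick sanity check at the ganga prompt: run a trivial job on the
# local machine before attempting any Grid submission.
j = Job()
j.application = Executable(exe='/bin/echo', args=['Hello from Ganga'])
j.backend = Local()
j.submit()
jobs    # the job should move from 'submitted' to 'running' to 'completed'
</verbatim></pre>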
5. Before you prepare/run your Ganga JO you must prepare the Athena JO it points to. For this you can use the top job options copied from the share directory of your UserAnalysis package:
<pre><verbatim>
cp ../share/AnalysisSkeleton_topOptions.py .
</verbatim></pre>
BUT to prepare your code for running on the Grid there are some changes needed in this Athena JO - please add these lines:
<pre><verbatim>
include("RecExCommission/RecExCommissionFlags_jobOptions.py")
ATLASCosmicFlags.useLocalCOOL = True

# setup DBReplicaSvc to choose closest Oracle replica, configurables style
from AthenaCommon.AppMgr import ServiceMgr
from PoolSvc.PoolSvcConf import PoolSvc
ServiceMgr += PoolSvc(SortReplicas=True)
from DBReplicaSvc.DBReplicaSvcConf import DBReplicaSvc
ServiceMgr += DBReplicaSvc(UseCOOLSQLite=False)
</verbatim></pre>
Also remember to remove (or comment out) the input data line, and if you are running a reprocessing job, change the geometry tag and the conditions DB tag to match those used in the reprocessing cycle (see the details for each reprocessing campaign [[https://twiki.cern.ch/twiki/bin/view/Atlas/DataPreparationReprocessing#How_to_submit_jobs_to_the_Grid][here]]). For example:
<pre><verbatim>
globalflags.ConditionsTag.set_Value_and_Lock('COMCOND-REPC-002-13')
</verbatim></pre>

6. Execute your Ganga job script while Ganga is running (an example of what 'pandaBackend_test.py' could look like is given below - in other words, have this [[http://ppewww.physics.gla.ac.uk/~tdoherty/GangaPanda/pandaBackend_test.py][file]] in your run directory) and type:
<pre><verbatim>
execfile('pandaBackend_test.py')
</verbatim></pre>
or simply run Ganga from the command line with the name of the Ganga JO appended:
<pre><verbatim>
ganga pandaBackend_test.py
</verbatim></pre>

7. You can monitor your job's progress by typing *jobs* inside Ganga or, if you submitted to the Panda backend, at http://panda.cern.ch:25880/server/pandamon/query.

8. Once your job has finished you can copy the output data using the dq2 tools:
<pre><verbatim>
source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh
dq2-get "your_dataset_name"
</verbatim></pre>
where "your_dataset_name" is given to you by Ganga once the job completes. Panda in particular also sends you an email like [[http://ppewww.physics.gla.ac.uk/~tdoherty/GangaPanda/PandaNotification][this]] when the job completes. In the email, click on the 'PandaMonURL' link (in my case for job id 81), then click on any of the sub-job pandaIDs, for example [[http://panda.cern.ch:25980/server/pandamon/query?job=1026613035][1026613035]], and scroll down to the link 'Find and view log files' - there you can look at the log for your subjob, which is named 'athena_stdout.txt' (only look at 'athena_stderr.txt' if your job does not complete). In my case "No AOD MC truth particle container found in TDS - This comment changed by Tom" appears in this log.
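The dataset name and the job status can also be read off the job object from inside Ganga. A minimal sketch, assuming the =DQ2OutputDataset= used in the Ganga JO below exposes the name through a =datasetname= attribute (inspect =jobs(x).outputdata= if yours differs):
<pre><verbatim>
# At the ganga prompt, after submission: check the overall status and
# read the output dataset name to pass to dq2-get.
j = jobs(2)                       # use the job id shown by 'jobs'
print j.status                    # 'completed' once all subjobs are done
print j.outputdata.datasetname    # assumed attribute holding the dataset name
</verbatim></pre>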
The Ganga JO 'pandaBackend_test.py' could look like this (the line numbers are only there for the notes below - they are not part of the file):
<pre><verbatim>
1  j = Job()
2  j.application = Athena()
3  j.application.atlas_dbrelease = 'ddo.000001.Atlas.Ideal.DBRelease.v06060101:DBRelease-6.6.1.1.tar.gz'
4  j.application.option_file = 'AnalysisSkeleton_topOptions.py'
5  j.application.athena_compile = False
6  j.application.prepare()
7  j.inputdata = DQ2Dataset()
8  j.inputdata.dataset = "data08_cos.00092051.physics_IDCosmic.recon.ESD.o4_r653/"
9  j.outputdata = DQ2OutputDataset()
10 j.backend = Panda()
11 j.splitter = DQ2JobSplitter()
12 j.splitter.numsubjobs = 20
13 j.submit()
</verbatim></pre>
For the LCG() backend you might also need<br>
=j.outputdata.outputdata=['AnalysisSkeleton.aan.root']= <br>
which should match exactly the output file name from your jobs. Also for the LCG() backend, a site can be specified:<br>
=j.backend.requirements.other=['other.GlueSiteUniqueID=="UKI-SCOTGRID-GLASGOW"']= <br>
To submit to the UK cloud add<br>
=j.backend.requirements.cloud='UK'= <br>
A sketch of the full LCG() variant is given after the notes below.

NOTE:
   * Line 3 is an example of overriding the database release to match the one needed to read the ESD/DPD. In the case of the spring cosmic reprocessing the DB release is 6.6.1.1. If the database releases do not match, the jobs fail on the Grid (remove this line if it is not necessary).
   * Line 4 corresponds to your Athena jobOptions.
   * Line 5 is set to False because we have already compiled the packages locally; if you want your job to compile your checked-out code before submitting, simply change this to True.
   * Line 6 tells Ganga to tar your user area and send it with the job.
   * Line 10 specifies the backend to which you are sending your job. There are three options: LCG, Panda and NorduGrid. In the example above Panda was chosen because the data existed only at BNLPANDA, a site in the US cloud.
   * Line 12 corresponds to the number of subjobs you want to split your job into.
   * Finally, in line 13 you submit your job.
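Putting the LCG() attributes above together, a sketch of what the same job could look like on the LCG() backend instead of Panda() (the site and cloud values are just the examples quoted above, and the DB-release override of line 3 is left out since it is only needed when reading reprocessed ESD/DPD):
<pre><verbatim>
j = Job()
j.application = Athena()
j.application.option_file = 'AnalysisSkeleton_topOptions.py'
j.application.athena_compile = False
j.application.prepare()
j.inputdata = DQ2Dataset()
j.inputdata.dataset = "data08_cos.00092051.physics_IDCosmic.recon.ESD.o4_r653/"
j.outputdata = DQ2OutputDataset()
j.outputdata.outputdata = ['AnalysisSkeleton.aan.root']  # must match your job's output file name
j.backend = LCG()
j.backend.requirements.cloud = 'UK'
# optionally pin a specific site:
# j.backend.requirements.other = ['other.GlueSiteUniqueID=="UKI-SCOTGRID-GLASGOW"']
j.splitter = DQ2JobSplitter()
j.splitter.numsubjobs = 20
j.submit()
</verbatim></pre>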
The Ganga output looks something like this (note the output is a dataset: =Output dataset user09.chriscollins.ganga.2.20091210=):
<pre>
run% ganga pandaBackend_test.py

*** Welcome to Ganga ***
Version: Ganga-5-4-3
Documentation and support: http://cern.ch/ganga
Type help() or help('index') for online help.

This is free software (GPL), and you are welcome to redistribute it
under certain conditions; type license() for details.

For help visit the ATLAS Distributed Analysis Help eGroup:
https://groups.cern.ch/group/hn-atlas-dist-analysis-help/

GangaAtlas                   : INFO     Found 0 tasks
Ganga.GPIDev.Lib.JobRegistry : INFO     Found 2 jobs in "jobs", completed in 0 seconds
Ganga.GPIDev.Lib.JobRegistry : INFO     Found 0 jobs in "templates", completed in 0 seconds
********************************************************************
New in 5.2.0: Change the configuration order w.r.t. Athena.prepare()
              New Panda backend schema - not backwards compatible
              For details see the release notes or the wiki tutorials
********************************************************************
GangaAtlas.Lib.Athena        : WARNING  New prepare() method has been called. The old prepare method is called now prepare_old()
GangaAtlas.Lib.Athena        : INFO     Found Working Directory /home/chrisc/atlas/GANGA-TEST-15.3.0.1/15.3.0.1
GangaAtlas.Lib.Athena        : INFO     Found ATLAS Release 15.3.0
GangaAtlas.Lib.Athena        : INFO     Found ATLAS Production Release 15.3.0.1
GangaAtlas.Lib.Athena        : INFO     Found ATLAS Project AtlasProduction
GangaAtlas.Lib.Athena        : INFO     Found ATLAS CMTCONFIG i686-slc4-gcc34-opt
GangaAtlas.Lib.Athena        : INFO     Using run directory: PhysicsAnalysis/HiggsPhys/HiggsAssocTop/TtHHbbDPDBasedAnalysis/run/
GangaAtlas.Lib.Athena        : INFO     Extracting athena run configuration ...
GangaAtlas.Lib.Athena        : INFO     Detected Athena run configuration: {'input': {'noInput': True}, 'other': {}, 'output': {'outNtuple': ['FILE1'], 'alloutputs': ['D3PD.root']}}
GangaAtlas.Lib.Athena        : INFO     Creating /tmp/chrisc/sources.f3f6d811-f7cd-42d3-8d50-47c47f58ae78.tar ...
GangaAtlas.Lib.Athena        : INFO     Option athena_compile=False. Adding InstallArea to /tmp/chrisc/sources.f3f6d811-f7cd-42d3-8d50-47c47f58ae78.tar ...
Ganga.GPIDev.Lib.Job         : INFO     submitting job 2
Ganga.GPIDev.Lib.Job         : INFO     job 2 status changed to "submitting"
GangaPanda.Lib.Panda         : INFO     Panda brokerage results: cloud UK, site ANALY_GLASGOW
GangaAtlas.Lib.ATLASDataset  : WARNING  Dataset mc08.106314.Pythia_ttH120_2l2nu4b.merge.AOD.e364_s462_r635_t53_tid065132 has 8 locations
GangaAtlas.Lib.ATLASDataset  : WARNING  Please be patient - waiting for site-index update at site UKI-SCOTGRID-GLASGOW_LOCALGROUPDISK ...
GangaAtlas.Lib.Athena        : WARNING  You are using DQ2JobSplitter.filesize or the backend used supports only a maximum dataset size of 10000 MB per subjob - job splitting has been adjusted accordingly.
GangaPanda.Lib.Athena        : INFO     Input dataset(s) ['mc08.106314.Pythia_ttH120_2l2nu4b.merge.AOD.e364_s462_r635_t53_tid065132']
GangaPanda.Lib.Athena        : INFO     Output dataset user09.chriscollins.ganga.2.20091210
GangaPanda.Lib.Athena        : INFO     Running job options: TtAnalysis-ttHSignalSMALL-GRID.py
GangaPanda.Lib.Panda         : INFO     Uploading source tarball sources.f3f6d811-f7cd-42d3-8d50-47c47f58ae78.tar.gz in /tmp/chrisc to Panda...
Ganga.GPIDev.Lib.Job         : INFO     job 2.0 status changed to "submitting"
Ganga.GPIDev.Lib.Job         : INFO     job 2.0 status changed to "submitted"
Ganga.GPIDev.Lib.Job         : INFO     job 2 status changed to "submitted"
</pre>

Helpful commands inside Ganga:<br>
=jobs= lists your jobs<br>
=jobs(1)= lists the content of job 1<br>
=help()= goes into help mode (=quit= to leave help)<br>
=j=jobs(1)= and =j.kill()= will kill job 1<br>

Your output will be in the DQ2-registered dataset. For me this was =user09.chriscollins.ganga.2.20091210=. Again, this is available from =jobs(x)=.
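A short sketch of how the same GPI commands extend to the individual subjobs created by the splitter (the =subjobs= slice, =id=, =status= and =resubmit()= are standard Ganga job attributes/methods; the job id is whatever =jobs= reports for your submission):
<pre><verbatim>
j = jobs(2)                  # the master job from the submission above
for sj in j.subjobs:         # loop over the subjobs created by DQ2JobSplitter
    print sj.id, sj.status
j.resubmit()                 # resubmit after failures (exact behaviour depends on the backend)
</verbatim></pre>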