-- ThomasDoherty - 2009-10-26
Using Ganga to submit jobs to the Panda backend on lxplus
Data preparation reprocessing - using Ganga
https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#DAY_4_USING_THE_GRID
1. In a clean lxplus AFS shell, set up Ganga:

source /afs/cern.ch/sw/ganga/install/etc/setup-atlas.sh
You might also need j.outputdata.outputdata=['AnalysisSkeleton.aan.root'], which should exactly match the output file name produced by your job.
NOTE: Line 3 is an example of overriding a database release to match the one needed to read ESD/DPD. In the case of the spring cosmic reprocessing, the DB release is 6.6.1.1. If the database releases don't match, the jobs fail on the Grid (remove this line if it is not necessary). Line 4 corresponds to your Athena jobOptions. Line 5 is set to False because we have already compiled the packages locally; if you want your job to compile your checked-out code before submitting, simply change this to True. Line 6 tells Ganga to tar your user area and send it with the job.
a site in the US cloud. Line 12 corresponds to the number of subjobs you want to split your job into. Finally, in Line 13 you submit your job.
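As a reference, here is a minimal, hedged sketch of what a Ganga Panda job script of this kind might look like. It is not the exact script described by the line numbers above: the jobOptions file name, the site and the number of subjobs are placeholders, the input dataset is simply the one from the session log below, and the attribute names (atlas_dbrelease, option_file, athena_compile, DQ2JobSplitter.numsubjobs, Panda.site) follow the usual GangaAtlas/GangaPanda schema of this Ganga generation - check help(Athena), help(Panda) and help(DQ2JobSplitter) inside Ganga if your version differs.

j = Job()
j.application = Athena()
j.application.atlas_dbrelease = '6.6.1.1'      # "Line 3": override the DB release; remove if not needed (some Ganga versions expect the full ddo...:DBRelease-6.6.1.1.tar.gz form)
j.application.option_file = ['AnalysisSkeleton_topOptions.py']   # "Line 4": your Athena jobOptions (placeholder name)
j.application.athena_compile = False           # "Line 5": False because the packages are already compiled locally; True to compile before running
j.application.prepare()                        # "Line 6": tar the user area and send it with the job
j.inputdata = DQ2Dataset()
j.inputdata.dataset = ['mc08.106314.Pythia_ttH120_2l2nu4b.merge.AOD.e364_s462_r635_t53_tid065132']   # the dataset from the session log below
j.outputdata = DQ2OutputDataset()
j.outputdata.outputdata = ['AnalysisSkeleton.aan.root']   # must match the output file written by your jobOptions
j.backend = Panda()
j.backend.site = 'ANALY_MWT2'                  # e.g. force a particular (here US-cloud) analysis site; placeholder
j.splitter = DQ2JobSplitter()
j.splitter.numsubjobs = 10                     # "Line 12": number of subjobs
j.submit()                                     # "Line 13": submit the job

Saved as, say, pandaBackend_test.py, the script is then run with ganga pandaBackend_test.py, producing output like the session log below.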
The Ganga output looks something like this (note that the output is a dataset: Output dataset user09.chriscollins.ganga.2.20091210):
run% ganga pandaBackend_test.py

*** Welcome to Ganga ***
Version: Ganga-5-4-3
Documentation and support: http://cern.ch/ganga
Type help() or help('index') for online help.

This is free software (GPL), and you are welcome to redistribute it
under certain conditions; type license() for details.

For help visit the ATLAS Distributed Analysis Help eGroup:
https://groups.cern.ch/group/hn-atlas-dist-analysis-help/

GangaAtlas : INFO Found 0 tasks
Ganga.GPIDev.Lib.JobRegistry : INFO Found 2 jobs in "jobs", completed in 0 seconds
Ganga.GPIDev.Lib.JobRegistry : INFO Found 0 jobs in "templates", completed in 0 seconds
********************************************************************
New in 5.2.0: Change the configuration order w.r.t. Athena.prepare()
New Panda backend schema - not backwards compatible
For details see the release notes or the wiki tutorials
********************************************************************
GangaAtlas.Lib.Athena : WARNING New prepare() method has been called. The old prepare method is called now prepare_old()
GangaAtlas.Lib.Athena : INFO Found Working Directory /home/chrisc/atlas/GANGA-TEST-15.3.0.1/15.3.0.1
GangaAtlas.Lib.Athena : INFO Found ATLAS Release 15.3.0
GangaAtlas.Lib.Athena : INFO Found ATLAS Production Release 15.3.0.1
GangaAtlas.Lib.Athena : INFO Found ATLAS Project AtlasProduction
GangaAtlas.Lib.Athena : INFO Found ATLAS CMTCONFIG i686-slc4-gcc34-opt
GangaAtlas.Lib.Athena : INFO Using run directory: PhysicsAnalysis/HiggsPhys/HiggsAssocTop/TtHHbbDPDBasedAnalysis/run/
GangaAtlas.Lib.Athena : INFO Extracting athena run configuration ...
GangaAtlas.Lib.Athena : INFO Detected Athena run configuration: {'input': {'noInput': True}, 'other': {}, 'output': {'outNtuple': ['FILE1'], 'alloutputs': ['D3PD.root']}}
GangaAtlas.Lib.Athena : INFO Creating /tmp/chrisc/sources.f3f6d811-f7cd-42d3-8d50-47c47f58ae78.tar ...
GangaAtlas.Lib.Athena : INFO Option athena_compile=False. Adding InstallArea to /tmp/chrisc/sources.f3f6d811-f7cd-42d3-8d50-47c47f58ae78.tar ...
Ganga.GPIDev.Lib.Job : INFO submitting job 2
Ganga.GPIDev.Lib.Job : INFO job 2 status changed to "submitting"
GangaPanda.Lib.Panda : INFO Panda brokerage results: cloud UK, site ANALY_GLASGOW
GangaAtlas.Lib.ATLASDataset : WARNING Dataset mc08.106314.Pythia_ttH120_2l2nu4b.merge.AOD.e364_s462_r635_t53_tid065132 has 8 locations
GangaAtlas.Lib.ATLASDataset : WARNING Please be patient - waiting for site-index update at site UKI-SCOTGRID-GLASGOW_LOCALGROUPDISK ...
GangaAtlas.Lib.Athena : WARNING You are using DQ2JobSplitter.filesize or the backend used supports only a maximum dataset size of 10000 MB per subjob - job splitting has been adjusted accordingly.
GangaPanda.Lib.Athena : INFO Input dataset(s) ['mc08.106314.Pythia_ttH120_2l2nu4b.merge.AOD.e364_s462_r635_t53_tid065132']
GangaPanda.Lib.Athena : INFO Output dataset user09.chriscollins.ganga.2.20091210
GangaPanda.Lib.Athena : INFO Running job options: TtAnalysis-ttHSignalSMALL-GRID.py
GangaPanda.Lib.Panda : INFO Uploading source tarball sources.f3f6d811-f7cd-42d3-8d50-47c47f58ae78.tar.gz in /tmp/chrisc to Panda...
Ganga.GPIDev.Lib.Job : INFO job 2.0 status changed to "submitting"
Ganga.GPIDev.Lib.Job : INFO job 2.0 status changed to "submitted"
Ganga.GPIDev.Lib.Job : INFO job 2 status changed to "submitted"

Helpful commands inside Ganga:
- jobs lists your jobs
- jobs(1) lists the content of job 1
- help() goes into help mode (quit to leave help)
- j=jobs(1) and j.kill() will kill job 1
Your output will be in the DQ2-registered dataset; for me this was user09.chriscollins.ganga.2.20091210. Again, this is available from jobs(x).
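From inside the same Ganga session you can also read the dataset name straight off the job object. A hedged two-line example (job 2 matches the session log above; status and outputdata.datasetname are the standard Job and DQ2OutputDataset attributes, so adjust if your Ganga version names them differently):

j = jobs(2)                       # the job from the session log above
print j.status                    # e.g. 'submitted', 'running', 'completed'
print j.outputdata.datasetname    # the DQ2-registered dataset, e.g. user09.chriscollins.ganga.2.20091210

Once the job has completed, the dataset can be fetched from the Grid with the usual DQ2 client tools (e.g. dq2-get).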