-- ThomasDoherty - 2009-10-26
Using Ganga to submit jobs to the Panda backend
3. Set up any checked-out packages you use in your code.
For this example check out (and compile) the UserAnalysis package as in the "HelloWorld" example described here:
cd $TEST/PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
source setup.sh
1 j = Job()
2 j.application = Athena()
3 j.application.atlas_dbrelease = 'ddo.000001.Atlas.Ideal.DBRelease.v06060101:DBRelease-6.6.1.1.tar.gz'
4 j.application.option_file = 'AnalysisSkeleton_topOptions.py'
5 j.application.athena_compile = False
6 j.application.prepare()
7 j.inputdata = DQ2Dataset()
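For orientation, here is a sketch of what the complete numbered Ganga script might look like. Lines 1-7 are from the tutorial above; lines 8-13 are an assumption reconstructed from the prose in this page (which refers to the backend on line 10, the subjob count on line 12, and submission on line 13), using standard GangaAtlas class names. The dataset name and the splitter/output lines are hypothetical placeholders, not values from the original tutorial:

```
 1 j = Job()
 2 j.application = Athena()
 3 j.application.atlas_dbrelease = 'ddo.000001.Atlas.Ideal.DBRelease.v06060101:DBRelease-6.6.1.1.tar.gz'
 4 j.application.option_file = 'AnalysisSkeleton_topOptions.py'
 5 j.application.athena_compile = False
 6 j.application.prepare()
 7 j.inputdata = DQ2Dataset()
 8 j.inputdata.dataset = '<your DQ2 dataset name>'   # hypothetical placeholder
 9 j.outputdata = DQ2OutputDataset()                 # hypothetical; stores output on the Grid
10 j.backend = Panda()
11 j.splitter = DQ2JobSplitter()                     # assumed splitter for the Panda backend
12 j.splitter.numsubjobs = 10
13 j.submit()
```

This is meant to be typed (or loaded as a script) inside a `ganga` session, where the Job, Athena, DQ2Dataset and Panda classes are predefined.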
NOTE: Line 3 is an example of overriding a database release to match the one needed to read ESD/DPD. In the case of the spring cosmic reprocessing, the DB release is 6.6.1.1. If the database releases do not match, the jobs fail on the Grid (remove this line if it is not necessary). Line 4 corresponds to your Athena jobOptions. You can use the top job options copied from your UserAnalysis package's share directory:
cp ../share/AnalysisSkeleton_topOptions.py .

BUT to prepare your code for running on the Grid some changes are needed in this Athena JO - please add these lines:
_________________________________________________________________________ | ||||||||
Also remember to remove (or comment out) the input data line and, if you are running a reprocessing job, change the geometry tag and the conditions DB tag to match those used in the reprocessing cycle (see the details for each reprocessing campaign here). For example:
globalflags.ConditionsTag.set_Value_and_Lock('COMCOND-REPC-002-13') | ||||||||
Back to the Ganga JO script:
Line 5 is set to False because we have already compiled the packages locally. Line 6 tells Ganga to tar your user area and send it with the job. Line 10 specifies the backend to which you are sending your job. There are three options: LCG,
Panda and NorduGrid. In the example above Panda was chosen because the data existed only in BNLPANDA, a site in the US cloud.
Line 12 corresponds to the number of subjobs you want to split your job into. Finally, in Line 13, you submit your job.
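To see what splitting a job into subjobs means in practice, here is a small plain-Python illustration of partitioning a list of input files across a fixed number of subjobs. This is only a sketch of the general idea (round-robin assignment), not Ganga's actual splitter logic, and the file names are made up:

```python
def split_files(files, numsubjobs):
    """Partition input files into up to numsubjobs chunks, round-robin,
    mimicking how a splitter fans one Grid job out into subjobs."""
    chunks = [[] for _ in range(numsubjobs)]
    for i, f in enumerate(files):
        chunks[i % numsubjobs].append(f)
    # drop empty chunks in case there are fewer files than subjobs
    return [c for c in chunks if c]

# Example: 10 hypothetical input files split across 4 subjobs
files = ['AOD.%02d.pool.root' % i for i in range(10)]
subjobs = split_files(files, 4)
print(len(subjobs))   # → 4
print(subjobs[0])     # → ['AOD.00.pool.root', 'AOD.04.pool.root', 'AOD.08.pool.root']
```

Each subjob then runs the same Athena jobOptions over its own subset of files, and the outputs are collected per subjob.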