Batch System (HTCondor)
Using HTCondor
Unlike PBS, which has a central server and multiple client machines, HTCondor features a distributed architecture. The job history reported by condor_history only covers jobs submitted via the scheduler on the local machine (rather than across the whole pool), so it is a good idea to use a single machine for job submission. Running jobs must also communicate periodically with the submission machine. For these reasons, it is recommended that you first log into hex.ppe.gla.ac.uk in order to submit your jobs.
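For example, assuming you have already written a submit description file called myjob.sub (the file name here is just a placeholder), a typical session on hex might look like this:

ssh hex.ppe.gla.ac.uk
condor_submit myjob.sub    # queue the job via the local scheduler
condor_q                   # list your jobs that are still queued or running
condor_history             # list completed jobs submitted from this machine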
Create a submit description file
Specify CPU and memory requirements
Unlike the old PBS nodes, on which jobs were free to grab whatever resources they liked (to the detriment of both themselves and other jobs on the node), the Condor compute nodes are configured to use cgroups, which restrict a job's resource usage to the resources it requested. By default, all Condor jobs are allocated a single CPU and 1 GiB of memory. You can adjust these values by adding request_cpus and request_memory statements to your job submit description file:
request_cpus = 2
request_memory = 4 GB
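For reference, here is a minimal sketch of a complete submit description file with these statements in place; the executable, argument and file names are placeholders rather than a recommended configuration:

universe       = vanilla
executable     = analysis.sh
arguments      = input.dat
output         = analysis.out
error          = analysis.err
log            = analysis.log
request_cpus   = 2
request_memory = 4 GB
queue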
Submit a job with additional requirements
You can exert more control over where a job runs by including a requirements specification in your job submit description file. This allows you to specify values for various Condor ClassAds, combined with C-style boolean operators. For example, to specify that your job should run on a Scientific Linux 6 machine:
requirements = OpSysAndVer == "SL6"
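Several ClassAd attributes can be combined with && and ||. As an illustrative sketch only (the hostname below is made up), the following asks for an SL6 machine other than one particular node:

requirements = (OpSysAndVer == "SL6") && (Machine != "node001.ppe.gla.ac.uk")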