Best Practice for ATLAS
A selection of useful best practices specifically for ATLAS. This is in addition to the general best practice guide (https://www.scotgrid.ac.uk/wiki/index.php/Best_Practice_Guide).
Preventing your job from running twice
It is advisable to disable deep re-submission. A job may fail after it has already done something with side effects: written logs to shared file systems, copied files back to the UI, or registered files in the LFC or DQ2. Depending on the job, re-submitting an identical copy can then generate inconsistencies.
One way to prevent this is to set the deep retry count in your JDL with
RetryCount = 0;
or if you are in Ganga you can set this with:
config['LCG']['RetryCount'] = 0
or from your .gangarc file.
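For the .gangarc route, the same setting goes in the [LCG] section of the file. A minimal fragment, assuming the standard Ganga configuration layout, might look like:

```
[LCG]
RetryCount = 0
```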
This is particularly important for jobs using the data management tools: if your job fails partway through and then runs twice, you can end up with duplicate or orphaned files in the file catalogue and DQ2.
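To illustrate why a re-run job causes trouble, the sketch below shows an idempotent registration guard. The `register_output` function and the dictionary-based `catalogue` are hypothetical stand-ins for a real catalogue service such as the LFC; the point is only that a retried job should check for an existing entry before registering, rather than blindly creating a duplicate.

```python
import os

def register_output(path, catalogue):
    """Register an output file in a (hypothetical) catalogue, skipping
    files that a previous attempt of the same job already registered."""
    name = os.path.basename(path)
    if name in catalogue:
        # A resubmitted attempt already registered this logical file name:
        # do nothing instead of creating a duplicate/orphaned entry.
        return False
    catalogue[name] = path
    return True

catalogue = {}
register_output("/tmp/attempt1/output.root", catalogue)   # first attempt registers
register_output("/tmp/attempt2/output.root", catalogue)   # retry is a no-op
```

Real grid jobs cannot always be made idempotent this cheaply, which is why simply disabling deep re-submission (RetryCount = 0) is the safer default.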