Lack of functionalities

Distribution Rate

A means to limit the number of jobs spawned per user and per group in each scheduler round
Use case:
Suppose a user queues a thousand jobs on a fairly under-loaded cluster, for instance just after a couple of racks have been switched back into production following a shutdown.
Each of these jobs is expected to read the same configuration file, data file, or database before doing any real work. When the scheduler spawns a large number of these jobs at once, the storage server holding this file is suddenly overloaded, degrading the whole service for all users.
So we need to limit the number of jobs spawned per user and/or project in each scheduler execution phase.
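A minimal sketch of the desired behaviour, in Python (function and parameter names such as max_per_user are hypothetical illustrations, not GE configuration):

    from collections import defaultdict

    def dispatch_round(pending_jobs, free_slots, max_per_user=50):
        """Dispatch at most max_per_user jobs per user in one scheduling round.

        pending_jobs: list of (user, job_id) tuples in priority order.
        Returns (dispatched, still_pending).
        """
        started = defaultdict(int)          # jobs started per user this round
        dispatched, still_pending = [], []
        for user, job_id in pending_jobs:
            if free_slots > 0 and started[user] < max_per_user:
                dispatched.append(job_id)
                started[user] += 1
                free_slots -= 1
            else:
                still_pending.append((user, job_id))
        return dispatched, still_pending

    # Example: 1000 jobs from one user no longer flood a single round.
    jobs = [("alice", i) for i in range(1000)] + [("bob", 2000)]
    run_now, later = dispatch_round(jobs, free_slots=500, max_per_user=50)
    # run_now holds 50 of alice's jobs plus bob's job; the other 950 wait.

Carried-over jobs simply compete again in the next round, so the shared file is hit by at most max_per_user new readers per user per round.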

Missing resources

GE does not refuse a job requesting inconsistent or incompatible queues and resources
Use case:
When a job requests a resource that cannot be satisfied, for instance 100 GB of memory, GE should refuse to queue it, and qsub should exit with an error message and a non-zero exit code.
NL-File:
If a user submits a job requesting resources such as cpu, mem, or disk that are not available in the farm, the job stays queued forever.
It would be better if GE checked the requirements before the job is allowed to be queued.
GE could then send the user a message advising that the submission is not possible because "the current configuration of the farm cannot satisfy its requirements".
For example, if a user submits the following job:
> qsub -q short -l cpu=48:00:00
it will stay queued forever, because the short queue does not allow that much CPU time (the maximum CPU time available in this queue is 6 hours).
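A sketch of the feasibility check a submission wrapper could perform, assuming the queue's hard CPU limit is exposed as the h_cpu attribute in qconf -sq output (how the cpu complex maps onto queue limits is site-specific, so treat that mapping as an assumption):

    import subprocess, sys

    def hms_to_seconds(hms):
        """Convert HH:MM:SS to seconds; 'INFINITY' means no limit."""
        if hms == "INFINITY":
            return float("inf")
        h, m, s = (int(x) for x in hms.split(":"))
        return h * 3600 + m * 60 + s

    def queue_cpu_limit(queue):
        """Read the queue's hard CPU limit (h_cpu) from qconf -sq."""
        out = subprocess.run(["qconf", "-sq", queue],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[0] == "h_cpu":
                return hms_to_seconds(fields[1])
        return float("inf")

    # Reject at submission time instead of letting the job queue forever.
    requested = hms_to_seconds("48:00:00")
    if requested > queue_cpu_limit("short"):
        sys.exit("error: queue 'short' cannot satisfy -l cpu=48:00:00")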

Job Efficiency

We need a command that lists the requested values as parameters (cpu, memory, resources, etc.) with filters (group, owner, time period, etc.).
For instance: > qlist -g atlas -q long -s rpz -d 1 cpu_usage,wallclock,cpu_request,complex_request

> 345723 400000 matlab=1

It would be very helpful to have access to cpu and wallclock for running jobs as well, in a simple format, for instance in seconds instead of DD:HH:MM:SS.

It would also help to expose additional variables such as fsize usage and the scale factor of the worker nodes.
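Something close to this can already be scripted on top of the accounting data for finished jobs; a rough sketch parsing qacct -j records (field names as printed by qacct, which reports cpu and ru_wallclock in plain seconds; running jobs would additionally need qstat and are omitted here). qlist itself is the proposal, not an existing command:

    import subprocess

    def parse_qacct():
        """Parse `qacct -j` output into one dict per finished job."""
        out = subprocess.run(["qacct", "-j"], capture_output=True,
                             text=True, check=True).stdout
        records, rec = [], {}
        for line in out.splitlines():
            if line.startswith("====="):        # record separator
                if rec:
                    records.append(rec)
                rec = {}
            elif line.strip():
                key, _, value = line.partition(" ")
                rec[key] = value.strip()
        if rec:
            records.append(rec)
        return records

    def qlist(group=None, queue=None, fields=("cpu", "ru_wallclock")):
        """Filtered per-job values, in plain seconds as qacct reports them."""
        rows = []
        for rec in parse_qacct():
            if group and rec.get("group") != group:
                continue
            if queue and rec.get("qname") != queue:
                continue
            rows.append([rec.get("jobnumber", "?")]
                        + [rec.get(f, "-") for f in fields])
        return rows

    for row in qlist(group="atlas", queue="long"):
        print(*row)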

Job limits

No means to limit jobs (queued + running) per user or project (only a global limit)
Use case:
Suppose a large cluster that can run more than 10000 jobs at the same time. It serves user groups of various sizes: some run thousands of jobs, all from the same login id (particularly true in a grid environment), while others work individually and submit only a few jobs a day. In the first case a very high limit on the number of queued and running jobs is needed; in the latter, a fairly low one. In both cases the main purpose of the limit is the same: to prevent runaway scripts that submit jobs in a loop from overloading the system.
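Until something native exists, a submit-side guard can approximate this; a sketch that counts the user's queued and running jobs with qstat -u (the LIMITS table is hypothetical site policy, not GE configuration):

    import getpass, grp, os, subprocess, sys

    # Hypothetical per-group caps on queued+running jobs (site policy).
    LIMITS = {"atlas": 10000, "default": 200}

    def job_count(user):
        """Count the user's queued and running jobs from qstat output."""
        out = subprocess.run(["qstat", "-u", user], capture_output=True,
                             text=True, check=True).stdout
        lines = out.strip().splitlines()
        # Two header lines precede the job list; an array job may appear
        # as a single line per task range, so this can under-count.
        return max(0, len(lines) - 2)

    user = getpass.getuser()
    group = grp.getgrgid(os.getgid()).gr_name
    limit = LIMITS.get(group, LIMITS["default"])
    if job_count(user) >= limit:
        sys.exit(f"error: {user} already has {limit} jobs queued or running")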

qalter

We cannot modify only ONE resource of a queued job: we have to specify all the resources again
Use case: (NL-File)
The qalter command, which allows users to update a job by adding resources, is very useful and easy to use.
However, we noticed that when the user only adds the new resource and does not restate all the resources previously requested by the job, all the resources requested before the qalter are lost.
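A workaround sketch under the current behaviour: read the job's existing hard resource list from qstat -j, merge in the new resource, and re-issue qalter with the complete list (the job id below is a placeholder):

    import subprocess

    def current_resources(job_id):
        """Read the job's existing hard resource list from qstat -j."""
        out = subprocess.run(["qstat", "-j", job_id], capture_output=True,
                             text=True, check=True).stdout
        for line in out.splitlines():
            if line.startswith("hard resource_list:"):
                pairs = line.split(":", 1)[1].strip()
                return dict(p.split("=", 1) for p in pairs.split(","))
        return {}

    def qalter_add(job_id, **new):
        """Re-issue qalter with old + new resources so nothing is lost."""
        merged = {**current_resources(job_id), **new}
        spec = ",".join(f"{k}={v}" for k, v in merged.items())
        subprocess.run(["qalter", "-l", spec, job_id], check=True)

    # Placeholder job id; adds disk without dropping existing requests.
    qalter_add("1234567", disk="5G")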

qacct, qstat

The information reported depends on the current status of the job (queued or running vs. ended), which makes it difficult to get a coherent overall view.
Use case:
We need a command that lists the requested values as parameters (cpu, memory, resources, etc.) with filters (group, owner).
For instance: > qlist -g atlas -q long -s rpz cpu_usage,cpu_request,complex_request
> 345723 400000 matlab=1

qstat

BUG FIXED NOW - qstat error message is truncated to 128 characters
Use case: (NL-File)
The error message reported by the qstat command is truncated to 128 characters.
This means there is no way for users to understand why a job failed.
Two examples:
$ qstat -j 9041445 | grep error
error reason 1: 11/18/2011 10:12:32 [41086:2546]: execvp(/var/spool/sge/ccwpge0023/job_scripts/9041445, "/var/spool/
$ qstat -j 4760939 | grep error
error reason 1: 01/17/2012 23:27:23 [0:5613]: failed to set AFS token - set_token_command "/opt/sge/util/set_token_c

Minimum running jobs

Grant a minimum number of running jobs per user
Use case:
On a farm with a huge number of users and groups, some users have a hard time getting even a few jobs running. This is especially true when a group consists of many users, or when the group has a relatively low share. The idea would be to give higher priority to the first N (10?) running and pending jobs of each user. It would have no impact on users who already have 10 running jobs, but would boost those with fewer than N jobs running. We could also imagine a priority profile (more priority for the very first jobs, less for the last of the N), as sketched below.
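A sketch of such a priority profile (the linear taper and the numbers are illustrative, not an existing GE policy):

    def priority_bonus(rank, n=10, max_bonus=1000.0):
        """Bonus for a user's rank-th job (rank 1 = first job).

        The first job gets max_bonus, the n-th gets max_bonus/n, and
        jobs beyond the n-th get nothing, leaving normal shares intact.
        """
        if rank > n:
            return 0.0
        return max_bonus * (n - rank + 1) / n

    # A user with 2 running jobs: their next pending job has rank 3.
    print(priority_bonus(3))     # 800.0 -> strong boost
    print(priority_bonus(11))    # 0.0   -> no effect past the first N jobs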
