It is clear from chapter 3 that if initial placement is to be used to maintain the process mix on a host, some form of a priori knowledge about the jobs being scheduled is required. The most useful way to provide this information is to form predictions based on historical behaviour, either with a probabilistic rule, as in Harchol-Balter and Downey [HBD95], or with learnt behaviour, as in Koch et al. [KRK94].
A prediction mechanism is not a traditional part of load distribution, and most previous work has not considered any form of a priori knowledge. Essentially, a prediction is simply extra information on which a policy may choose to act. The insight it adds is the job's potential contribution to the load on the system, and as such it can be viewed as an extension of the load metric. The logical extension of the taxonomy from section 2.2.3 is therefore to split the branch containing the load metric into separate branches for host and job metrics, as shown in figure 7.4.
Figure 7.4: Extension of the taxonomy to include prediction.
Chapter 4 included several techniques used to predict the future activity of jobs. The predictions supplied by the mechanism described in this chapter are based on the running average of resource use from past executions. The four values recorded for each program are:
The physical implementation uses a global hash table, indexed by program name, to locate the past average resource requirements for each program. The interaction between the prediction mechanism and the load distribution system is shown in figure 7.5.
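The hash table described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the metric names ("duration", "cpu") are hypothetical placeholders standing in for the four recorded values, and the incremental-mean update is one standard way to maintain a running average.

```python
# Sketch of the prediction table: a global hash table indexed by program
# name, holding running averages of resource use from past executions.
# Metric names below are illustrative placeholders, not the actual four
# values recorded by the mechanism.

class PredictionTable:
    def __init__(self):
        # program name -> (number of completed runs, {metric: running average})
        self.table = {}

    def lookup(self, program):
        """Return the past average resource requirements, or None if unseen."""
        entry = self.table.get(program)
        return entry[1] if entry else None

    def update(self, program, observed):
        """Fold one completed run's resource use into the running average."""
        count, averages = self.table.get(program, (0, {}))
        count += 1
        for metric, value in observed.items():
            old = averages.get(metric, 0.0)
            # Incremental mean: new = old + (value - old) / n
            averages[metric] = old + (value - old) / count
        self.table[program] = (count, averages)


table = PredictionTable()
table.update("cc", {"duration": 2.0, "cpu": 1.5})
table.update("cc", {"duration": 4.0, "cpu": 2.5})
print(table.lookup("cc"))  # averages over the two runs
```

The lookup corresponds to steps (c) and (d) of the interaction in figure 7.5, and the update to step (g) after the job completes.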
Figure 7.5: The prediction mechanism and its interaction with the initial placement mechanism.
Initial placement is triggered by the arrival of a job j on the source host at time t (a). The policy module is contacted (b), which performs a lookup on j (c) and receives j's past average resource requirements (d). Using this information, the job is scheduled to a destination host (e), where it starts execution at time t+T (f). After j has completed (in the finished control state), the resource requirements from this run are forwarded to update the average (g).
The time spent in the delay queue, T, is not counted towards the average duration, as it is a scheduling artifact rather than behaviour inherent to the job or host.
The duration depends on the characteristics of the job as well as the load of the host on which it was executed. Unlike the delay T, this is difficult to compensate for, so one solution is to restrict the period over which the average is collected, ensuring the load remains consistent. This is done by considering short trace periods in section 9.1.2, and in a real system by using a sliding window.
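A sliding window of this kind might be realised as below. This is a hedged sketch under assumptions not stated in the text: the window size and all names are illustrative, and the recorded duration subtracts the delay-queue time T so that the scheduling artifact does not inflate the average.

```python
from collections import deque

# Sketch of a sliding-window running average of job duration. Only the
# most recent `size` runs contribute, so the average reflects recent load
# conditions; older runs, executed under possibly different load, drop out.

class SlidingAverage:
    def __init__(self, size):
        self.runs = deque(maxlen=size)  # oldest run is discarded automatically

    def record(self, start, finish, delay_t):
        # Duration attributed to the job/host: wall time minus the time
        # spent in the delay queue (T), which is a scheduling artifact.
        self.runs.append((finish - start) - delay_t)

    def average(self):
        return sum(self.runs) / len(self.runs) if self.runs else None


w = SlidingAverage(size=3)
for start, finish, t in [(0, 10, 2), (0, 12, 2), (0, 20, 4), (0, 8, 2)]:
    w.record(start, finish, t)
print(w.average())  # mean over the three most recent runs only
```

The `maxlen` bound on the deque is what restricts the averaging period: the first run (duration 8) has already been evicted by the time the average is taken.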