Greedy Heuristics

In this section we present some heuristic algorithms for the ODScheduling problem. Since the problem has to be solved online, in the sense that the time spent solving it is part of the response time, the scheduling algorithms must be fast to compute and still deliver good results. It is therefore clear that we should not try to solve the scheduling problem optimally.

Algorithms

Algorithmic Frame

All heuristic algorithms that we have investigated use the same greedy approach: the jobs are processed one after the other, and for each job we select the processor it is assigned to. This algorithmic frame is presented in Algorithm 1.
Algorithm 1: Algorithmic frame for all heuristics.

  1:  s := empty schedule
  2:  U := J
  3:  while U ≠ ∅ do
  4:      select the next job j ∈ U
  5:      select a processor p and set s(j) := p
  6:      U := U \ {j}
  7:  end while

So for each heuristic we have to specify which job j is worked on next (line 4) and on which processor p this job j is scheduled (line 5). For the selection of a processor for a job j we use the following greedy approach in all of our algorithms: assume that there is a schedule s of the subset of jobs that are scheduled already. Then we take a processor p such that $\tilde{T}_{s,p,j} := T_{s,p} + u_{s,p,j}$ is minimal for the job j. The value $T_{s,p}$ is the time used on processor p according to s, and $u_{s,p,j}$ is the additional time needed on processor p if the job j is assigned to it, based on the previously scheduled jobs according to s:

\begin{displaymath}u_{s,p,j} \;:=\; t \;+\; \bigl\vert\{\, o\in w(j) \;\vert\; v(o)\ne p \;\wedge\; \forall j'\ \mbox{scheduled in}\ s:\ s(j')\ne p \,\vee\, o\notin w(j') \,\}\bigr\vert\end{displaymath}

If there are several processors with the same $\tilde{T}_{s,p,j}$, we take one of them at random.
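The algorithmic frame together with this processor-selection rule can be sketched as follows. This is a minimal illustration, not the authors' implementation; the data layout (w as a mapping from jobs to object sets, v as a mapping from objects to their original processors, uniform job time t, unit cost per object copy) is an assumption matching the notation above.

```python
import random

def additional_time(j, p, t, w, v, copied):
    """u_{s,p,j}: job time t plus one time unit per needed object not yet on p."""
    missing = {o for o in w[j] if v[o] != p and o not in copied[p]}
    return t + len(missing)

def greedy_schedule(jobs, t, w, v, m, next_job):
    """Algorithm 1: process jobs one by one, assigning each greedily."""
    load = [0] * m                       # T_{s,p}: time used on processor p
    copied = [set() for _ in range(m)]   # objects already copied to p
    schedule = {}
    unscheduled = set(jobs)
    while unscheduled:
        j = next_job(unscheduled)                          # line 4 of the frame
        # line 5: pick p minimizing ~T_{s,p,j} = T_{s,p} + u_{s,p,j},
        # breaking ties at random
        times = [load[p] + additional_time(j, p, t, w, v, copied)
                 for p in range(m)]
        best = min(times)
        p = random.choice([q for q in range(m) if times[q] == best])
        copied[p] |= {o for o in w[j] if v[o] != p}
        load[p] = best
        schedule[j] = p
        unscheduled.discard(j)
    return schedule, max(load)

# Example: four unit-time jobs, two processors, two objects
jobs = [0, 1, 2, 3]
w = {0: {"a"}, 1: {"a"}, 2: {"b"}, 3: {"b"}}
v = {"a": 0, "b": 1}
sched, makespan = greedy_schedule(jobs, 1, w, v, 2,
                                  lambda u: random.choice(sorted(u)))
```

The `next_job` callback is the point where the different job-selection strategies of the next subsection plug in; the lambda above realizes the Random Order strategy.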

Different Strategies

We obtain different heuristics by using different strategies for the selection of the next job. We have examined the following principles:
Random Order:
The first and simplest strategy uses a random order of the jobs.
Minimal New Time:
Take the job with the smallest value of $\min_p\tilde{T}_{s,p,j}$ as the next one. This guarantees that this good value of $\min_p\tilde{T}_{s,p,j}$ is indeed realized by the schedule.
Minimal Number:
Take the job with the minimal number of different optimal processors regarding $\tilde{T}_{s,p,j}$. This job can thus use its best processor, and since the other jobs have a greater number of good processors, some good processors are still left for them.
Minimal Time and Number:
Take the job with the smallest $\min_p\tilde{T}_{s,p,j}$. If two jobs have the same time, use the one with the smaller number of different optimal processors. This combines the two strategies mentioned before.
Minimal Additional Time:
Take the job with the smallest $u_{s,p,j}$ for the processor that attains the minimum $\min_p\tilde{T}_{s,p,j}$. This way, the largest amount of unused processor time remains after the assignment. As secondary criterion we use the value $\min_p\tilde{T}_{s,p,j}$, and as third criterion the number of optimal processors is used again.

The selection of the processor (line 5) has to be done correspondingly.

Furthermore, we have examined some other principles, e.g. taking the job with the biggest $\min_p\tilde{T}_{s,p,j}$ first. However, none of these other ideas showed an advantage over the ones listed above.
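As an illustration, the Minimal New Time strategy can be sketched as a self-contained function. The parameter names (load for the per-processor times $T_{s,p}$, copied for the objects already copied to each processor) are assumptions chosen to match the notation of this section, not the authors' code.

```python
def minimal_new_time(unscheduled, load, copied, t, w, v):
    """Pick the unscheduled job whose best achievable ~T_{s,p,j} is smallest."""
    def best_time(j):
        # min over p of  ~T_{s,p,j} = T_{s,p} + t + |objects of j missing on p|
        return min(load[p] + t
                   + sum(1 for o in w[j] if v[o] != p and o not in copied[p])
                   for p in range(len(load)))
    # sorted() makes tie-breaking deterministic
    return min(sorted(unscheduled), key=best_time)

# Example: job 1 needs no copies and processor 0 is idle, so its best
# achievable time (1) beats job 0's best achievable time (3).
load = [0, 3]
copied = [set(), set()]
w = {0: {"a", "b"}, 1: set()}
v = {"a": 1, "b": 1}
chosen = minimal_new_time({0, 1}, load, copied, 1, w, v)  # → 1
```

The other strategies differ only in the key function: Minimal Number counts the processors attaining the minimum, and the combined strategies use tuples of these criteria as the key.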

The Representation of the Problem as a Linear Program

In order to get an idea of the solution quality of a schedule, we compute a lower bound on the optimal makespan. To this end, we express the ODScheduling problem as an integer linear program. Ignoring the integrality constraints yields the LP relaxation, and the value of an optimal solution of the LP relaxation is a lower bound on the makespan of an optimal schedule. The LP representation is built from a given ODScheduling problem using the following scheme:

For each job $j\in J$ we use m 0-1 variables with the meaning $x_{j,p} = 1 \;\Leftrightarrow\;$ job j is scheduled on processor p. A 0-1 variable is a variable that may only take the values zero and one. A second set of 0-1 variables represents the copying of an object: $y_{o,p} = 1 \;\Leftrightarrow\;$ object o has to be copied to processor p. Finally, we need one variable M representing the makespan of the schedule. Altogether we use m(n+|O|) 0-1 variables and one non-integer variable.

Since we have to achieve an unambiguous job-processor assignment, we enforce that each job is assigned to exactly one processor using the constraints $\forall j\in J: \sum_p x_{j,p} = 1$. Additionally, we have to ensure that all necessary objects are copied, so the constraints $\forall j\in J, p\in P:\sum_{o\in w_p(j)} y_{o,p} \ge x_{j,p}\cdot \vert w_p(j)\vert$ are used with $w_p(j) := \{o\in w(j) \;\vert\; v(o)\ne p\}$. Finally, m constraints are needed to ensure that every processor needs at most time M: $\forall p\in P: t \sum_j x_{j,p} + \sum_{o} y_{o,p} \le M$.

Altogether we need n + mn + m constraints. Since the goal is to minimize the makespan, all variables have an objective coefficient of zero except the variable M, which has a coefficient of one, and the LP is minimized.
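The LP relaxation above can be sketched with an off-the-shelf solver, here scipy's `linprog`. This is an illustration under assumed data structures (w as a job-to-object-set mapping, v as an object-to-processor mapping), not the computation used in the paper; relaxing the 0-1 variables to the interval [0, 1] gives the lower bound.

```python
import numpy as np
from scipy.optimize import linprog

def lp_lower_bound(n, m, t, objects, w, v):
    """Lower bound on the optimal makespan via the LP relaxation.
    Variable layout: x[j,p] (n*m), then y[o,p] (|O|*m), then M (last)."""
    num_x, num_y = n * m, len(objects) * m
    nvar = num_x + num_y + 1
    oidx = {o: i for i, o in enumerate(objects)}
    xi = lambda j, p: j * m + p
    yi = lambda o, p: num_x + oidx[o] * m + p
    Mi = nvar - 1

    c = np.zeros(nvar)
    c[Mi] = 1.0                      # objective: minimize M

    # each job is assigned to exactly one processor: sum_p x[j,p] = 1
    A_eq = np.zeros((n, nvar))
    for j in range(n):
        for p in range(m):
            A_eq[j, xi(j, p)] = 1.0
    b_eq = np.ones(n)

    A_ub, b_ub = [], []
    # copy constraints: x[j,p]*|w_p(j)| - sum_{o in w_p(j)} y[o,p] <= 0
    for j in range(n):
        for p in range(m):
            wp = [o for o in w[j] if v[o] != p]
            row = np.zeros(nvar)
            row[xi(j, p)] = len(wp)
            for o in wp:
                row[yi(o, p)] = -1.0
            A_ub.append(row); b_ub.append(0.0)
    # makespan constraints: t*sum_j x[j,p] + sum_o y[o,p] - M <= 0
    for p in range(m):
        row = np.zeros(nvar)
        for j in range(n):
            row[xi(j, p)] = t
        for o in objects:
            row[yi(o, p)] = 1.0
        row[Mi] = -1.0
        A_ub.append(row); b_ub.append(0.0)

    # relaxation: 0 <= x, y <= 1, M >= 0
    bounds = [(0, 1)] * (num_x + num_y) + [(0, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return float(res.fun)

# Example: two unit-time jobs sharing one object that resides on processor 0.
# Any integral schedule needs makespan 2, but the fractional optimum is 1.2.
bound = lp_lower_bound(2, 2, 1, ["a"], {0: {"a"}, 1: {"a"}}, {"a": 0})
```

Note how the fractional assignment can split a job across processors, which is why the LP value is only a lower bound and not an achievable makespan.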


Norbert Sensen
1999-08-24