Greedy Heuristics

In this section we present some heuristic algorithms for the ODScheduling problem. Since the problem has to be solved online, in the sense that the time for solving it is part of the response time, the scheduling algorithms must be fast to compute and still deliver good results. It is therefore clear that we should not try to solve the scheduling problem optimally.

For each heuristic we have to specify which job *j* is worked next (line 4) and on which processor *p* this job is scheduled (line 5). We use the following greedy approach for the selection of a processor for a job *j* in all of our algorithms: Assume that there is a schedule *s* of a subset of jobs which are scheduled already. Then we take a processor *p* such that *T*_{s,p} + *u*_{s,p,j} is minimal for the job *j*. The value *T*_{s,p} is the time used on processor *p* according to *s*, and *u*_{s,p,j} is the additional time needed on processor *p* if the job *j* is assigned to it, based on the previously scheduled jobs according to *s*.

If there are several processors with the same value *T*_{s,p} + *u*_{s,p,j}, we take one of them at random.
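The processor-selection rule above can be sketched as follows; a minimal sketch in which the lists `T` and `u` (standing for the values *T*_{s,p} and *u*_{s,p,j}) are assumed inputs, not data structures fixed by the text.

```python
import random

def select_processor(T, u, j):
    """Pick a processor minimizing T[p] + u[p][j]; break ties at random.

    T[p]    -- time already used on processor p under the current schedule s
    u[p][j] -- additional time needed on p if job j is assigned to it
    (illustrative names, not notation from the text)
    """
    totals = [T[p] + u[p][j] for p in range(len(T))]
    best = min(totals)
    # all processors attaining the minimum are equally good
    candidates = [p for p, total in enumerate(totals) if total == best]
    return random.choice(candidates)
```

The random tie-break mirrors the rule of choosing one of several equally good processors at random.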

**Random Order:** The first and simplest strategy is to use a random order of the jobs.
**Minimal New Time:** Take the job with the smallest value of *T*_{s,p} + *u*_{s,p,j} as the next one. This ensures that this good value is indeed realized by the schedule.
**Minimal Number:** Take the job with the minimal number of different optimal processors regarding *T*_{s,p} + *u*_{s,p,j}. This job can thus use its best processor, and since the other jobs have a greater number of good processors, there are still some good processors left for them.
**Minimal Time and Number:** Take the job with the smallest *T*_{s,p} + *u*_{s,p,j}. If two jobs have the same time, use the one with the smaller number of different optimal processors. This combines the two strategies mentioned before.
**Minimal Additional Time:** Take the job with the smallest *u*_{s,p,j} concerning the processor which fulfills the minimization, so that the biggest amount of unused processor time remains after this assignment. As secondary criterion we use the value *T*_{s,p} + *u*_{s,p,j}, and as third criterion the number of optimal processors is used again. The selection of the processor has to be done correspondingly.
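Several of the job-selection strategies above reduce to choosing an unscheduled job by a key function. A sketch under the same assumed data structures as before (`T[p]`, `u[p][j]`; all names are illustrative):

```python
def best_value(T, u, j):
    # smallest achievable value T[p] + u[p][j] over all processors p
    return min(T[p] + u[p][j] for p in range(len(T)))

def num_optimal_processors(T, u, j):
    # number of processors attaining that minimum
    b = best_value(T, u, j)
    return sum(1 for p in range(len(T)) if T[p] + u[p][j] == b)

def next_job_minimal_new_time(T, u, unscheduled):
    # "Minimal New Time": smallest achievable T + u value
    return min(unscheduled, key=lambda j: best_value(T, u, j))

def next_job_minimal_number(T, u, unscheduled):
    # "Minimal Number": fewest different optimal processors
    return min(unscheduled, key=lambda j: num_optimal_processors(T, u, j))

def next_job_minimal_time_and_number(T, u, unscheduled):
    # "Minimal Time and Number": lexicographic combination of both criteria
    return min(unscheduled, key=lambda j: (best_value(T, u, j),
                                           num_optimal_processors(T, u, j)))
```

Each selected job would then be placed on a processor minimizing *T*_{s,p} + *u*_{s,p,j}, after which `T` and `u` are updated and the selection repeats.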

Furthermore, we have examined some other principles, e.g. using the job with the biggest value first. But none of these other ideas had an advantage compared to the ones listed above.

In order to get an idea of the solution quality of a schedule, we compute a lower bound on the optimal makespan. To this end, we express the ODScheduling problem as an integer linear program. Dropping the integrality constraints yields the LP relaxation, whose optimal value is a lower bound on the makespan of an optimal schedule. The LP representation is built from a given ODScheduling problem using the following scheme:

For each job *j* we use *m* 0-1 variables *x*_{j,p} with the meaning that *x*_{j,p} = 1 if and only if job *j* is scheduled to processor *p*. A 0-1 variable is a variable which has only two permitted values: zero and one. A second set of 0-1 variables *y*_{o,p} is used for the representation of a copy process of an object *o* to a processor *p*. Finally, we need one variable *M* representing the makespan of the schedule. So altogether we use *m*(*n* + |*O*|) 0-1 variables and one non-integer variable.

Since we have to achieve an unambiguous job-processor assignment, we have to enforce that each job is assigned to exactly one processor, so we use the constraints Σ_p *x*_{j,p} = 1 for each job *j*. Additionally, we have to ensure that all necessary objects are copied, so for each job *j* and each processor *p* the constraint *x*_{j,p} ≤ *y*_{o,p} is used, where *o* is the object needed by job *j*. Finally, *m* constraints are needed in order to ensure that every processor needs at most time *M*.

Altogether we need *n* + *mn* + *m* constraints. Since the goal is to minimize the makespan, all variables have a cost of zero except the variable *M*, which has a cost of one, and the LP is to be minimized.
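As a sketch of this lower-bound computation, the LP relaxation can be assembled and solved with `scipy.optimize.linprog`. The cost data `t[j][p]` (run time of job *j* on processor *p*), `c[o][p]` (copy time of object *o* to processor *p*), and `need[j]` (the object required by job *j*) are assumed inputs, since the text does not fix this notation; the constraint layout follows the scheme described above.

```python
import numpy as np
from scipy.optimize import linprog

def lp_lower_bound(t, c, need):
    """Solve the LP relaxation; returns a lower bound on the optimal makespan.

    Variable layout: the x_{j,p} (n*m of them), then the y_{o,p} (|O|*m),
    then the makespan variable M.
    """
    n, m = len(t), len(t[0])
    k = len(c)                        # number of objects |O|
    nv = n * m + k * m + 1            # total number of variables
    X = lambda j, p: j * m + p        # column index of x_{j,p}
    Y = lambda o, p: n * m + o * m + p
    M = nv - 1

    # objective: all costs zero except M, which has cost one
    cost = np.zeros(nv)
    cost[M] = 1.0

    # sum_p x_{j,p} = 1 for every job j  (n equality constraints)
    A_eq = np.zeros((n, nv))
    b_eq = np.ones(n)
    for j in range(n):
        for p in range(m):
            A_eq[j, X(j, p)] = 1.0

    # x_{j,p} <= y_{o,p} (mn rows) and processor load <= M (m rows)
    A_ub = np.zeros((n * m + m, nv))
    b_ub = np.zeros(n * m + m)
    for j in range(n):
        for p in range(m):
            A_ub[j * m + p, X(j, p)] = 1.0
            A_ub[j * m + p, Y(need[j], p)] = -1.0
    for p in range(m):
        row = n * m + p
        for j in range(n):
            A_ub[row, X(j, p)] = t[j][p]
        for o in range(k):
            A_ub[row, Y(o, p)] = c[o][p]
        A_ub[row, M] = -1.0

    # relaxation: 0 <= x, y <= 1 instead of x, y in {0, 1}; M >= 0
    bounds = [(0, 1)] * (nv - 1) + [(0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.fun
```

Because the `x` and `y` variables may take fractional values here, the returned value can be strictly smaller than the makespan of the best integral schedule, which is exactly what makes it a lower bound.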