List of talks
Scheduling and algorithmic game theory
We present some results concerning scheduling in the area of algorithmic game theory. We focus on the notions of Nash equilibrium, price of anarchy, coordination mechanism and truthfulness.
Scheduling on unrelated parallel machines to maximize the minimum load
Suppose we are given a collection of n jobs which need to be scheduled on m machines. In the unrelated machines setting, the size of a job depends arbitrarily on the machine it is assigned to (we can view the machines as people, who will process some jobs faster than others depending on their skills and the nature of the job). We consider the problem of assigning the jobs to machines so that the total load on the least loaded machine is maximized. A natural motivation for this problem is the fair allocation of jobs.
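A brute-force sketch of this max-min objective on a toy instance may help fix the definition (the sizes are illustrative and the enumeration is exponential, for exposition only — it is not the talk's algorithm):

```python
from itertools import product

def max_min_load(p):
    """Exhaustively assign jobs to machines to maximize the minimum load.

    p[i][j] is the size of job j when run on machine i (unrelated machines).
    Exponential in the number of jobs -- illustration only.
    """
    m, n = len(p), len(p[0])
    best = 0
    for assign in product(range(m), repeat=n):  # machine chosen for each job
        loads = [0] * m
        for j, i in enumerate(assign):
            loads[i] += p[i][j]
        best = max(best, min(loads))
    return best

# Toy instance: 2 machines, 3 jobs.
p = [[4, 1, 3],   # sizes on machine 0
     [2, 5, 3]]   # sizes on machine 1
print(max_min_load(p))  # 5: job 0 on machine 0, jobs 1 and 2 on machine 1? No --
                        # best is jobs 0 and 2 on machine 0, job 1 on machine 1.
```

In the restricted assignment case mentioned below, p[i][j] is either a fixed size or infinity, depending on whether machine i is allowed to run job j.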
Somewhat surprisingly, not many non-trivial results are known even for very special cases of this problem. In this talk, I will survey the known results and describe an approximation algorithm which significantly improves the approximation ratio for the so-called "restricted assignment" case of this problem.
On minimizing the number of idle time intervals: an application to power management
Power management policies aim at reducing the amount of energy consumed by battery-operated systems, while keeping the overall performance high. In this paper we focus on shut-down mechanisms that put a system into a sleep state when it is idle. A very small amount of energy is consumed in this state, but a fixed amount of energy is required when moving the system from the sleep state to the active state. The offline version of this problem consists in scheduling a set of tasks, with release dates and deadlines, on a single machine in order to minimize the number of idle time periods. We show that this problem can be solved in polynomial time by dynamic programming.
Scheduling Algorithms for Procrastinators
This paper presents scheduling algorithms for procrastinators, where the speed that a procrastinator executes a job increases as the due date approaches. We give optimal off-line scheduling policies for both linearly and non-linearly increasing speed functions. We then explain the computational/numerical issues involved in implementing this policy. We next explore the online setting, showing that there exist adversaries that force any online scheduling policy to miss due dates. This impossibility result motivates the problem of minimizing the maximum interval stretch of any job; the interval stretch of a job is the job's flow time divided by the job's due date minus release time. We show that several common scheduling strategies, including the "hit-the-highest-nail" strategy beloved by procrastinators, have arbitrarily large maximum interval stretch. Then we give the 2-delayed LIFO scheduling policy and show that it is a Θ(1) approximation algorithm for the maximum interval stretch. Finally, we show how to achieve constant bounds for a class of non-linear speed functions.
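The interval stretch measure defined above can be stated in one line (numbers are illustrative):

```python
def interval_stretch(release, due, completion):
    """Interval stretch = flow time / (due date - release time)."""
    return (completion - release) / (due - release)

# A job released at t=0, due at t=10, but finished at t=15:
print(interval_stretch(0, 10, 15))  # 1.5
```

A stretch of 1 or less means the job met its due date; the talk's 2-delayed LIFO policy keeps the worst-case stretch within a constant factor of the best possible.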
SCAN: a Heuristic for Near-Optimal Software Pipelining
Software pipelining is a well-known compiler technique to improve loop performance. In an embedded context, where applications are compiled once during the design of the system, it is worthwhile to spend more time performing deep optimizations, as the gain is amplified by mass production.
Software pipelining is usually computed using iterative heuristics such as modulo scheduling. While optimal solutions have been formulated as integer linear programs, their resolution becomes intractable as loop size grows. In this article, we investigate a new approach, the SCAN heuristic, which uses a simplified form of the optimal formulation to compute near-optimal software pipelines.
We conducted experiments on multimedia application benchmarks for the ST200 VLIW processor, which show that, for difficult cases, our approach is almost always able to compute an optimal solution while classical optimal formulations are already intractable. This results in up to 33.3% performance gain over modulo scheduling.
Quay crane allocation as a moldable task scheduling problem
Co-authors: M. Machowiak and C. Oguz
In this paper, the problem of allocating berths to incoming ships is modelled as a moldable task scheduling problem. The model treats the ships as tasks and the quay cranes assigned to them as processors. Since the duration of berthing for a ship depends on the number of quay cranes allocated to it, the use of a moldable task scheduling model is justified. In the model, the processing speed of a task is a non-linear function of the number of processors allocated to it. A suboptimal algorithm is presented, which starts from the continuous version of the problem (i.e. where tasks may require a fractional part of the resources) to obtain a feasible solution to the discrete version. The computational experiments conducted show that the suboptimal algorithm has very good average behavior.
Round robin tournaments and three index assignment
Co-authors: Andreas Drexl and Frits C. R. Spieksma
In general, scheduling a sports league is a difficult combinatorial optimization problem. We study some variants of round robin tournaments and analyze their relationship to the planar three index assignment problem. The complexity of round robin tournaments is proven by reduction from the planar three index assignment problem.
Furthermore, integer programming models are introduced. We pick up a popular idea and decompose the overall problem into two subproblems which can be solved sequentially. The latter subproblem can be represented as a planar three index assignment problem, which makes corresponding solution techniques amenable to sports league scheduling.
Power-aware scheduling for makespan and flow
The proliferation of mobile computing devices has made power management an important problem. One tool these devices have is the ability to conserve energy by operating more slowly. For example, many modern processors can reduce their clock speed, enabling them to operate at a lower voltage. Similarly, wireless devices can transmit more slowly, enabling them to use a coding scheme with lower rate and reduce their transmission power. Scheduling where the system can save energy by slowing down gives a tradeoff between energy consumption and schedule quality.
In this talk, I will summarize known results and describe offline algorithms for the bicriteria problems of minimizing energy and either makespan or total flow. The main result is a linear-time algorithm for uniprocessor makespan that computes all non-dominated solutions. An extension of that algorithm does the same for multiprocessors running equal-work jobs. The technique also gives an arbitrarily-good approximation for minimizing the total flow of equal-work jobs on a multiprocessor. Finally, I show that it is NP-hard to minimize makespan on a multiprocessor with arbitrary jobs and impossible to exactly minimize total flow on a uniprocessor with equal-work jobs.
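The abstract does not fix a power model, but a common assumption in this line of work is that running at speed s consumes power proportional to s^α for some constant α > 1, so work w run at speed s takes w/s time and costs w·s^(α-1) energy. A minimal sketch of the resulting tradeoff, under that assumed model and with illustrative numbers:

```python
def schedule_cost(jobs, speeds, alpha=3.0):
    """Energy/makespan tradeoff under the common power = speed**alpha model.

    jobs[i] is the work of job i; speeds[i] is the speed it is run at.
    Work w at speed s takes w/s time and consumes w * s**(alpha - 1) energy.
    (The power model and alpha = 3 are assumptions for illustration.)
    """
    time = sum(w / s for w, s in zip(jobs, speeds))      # uniprocessor makespan
    energy = sum(w * s ** (alpha - 1) for w, s in zip(jobs, speeds))
    return time, energy

# Same jobs, two speed choices: slower finishes later but saves energy.
print(schedule_cost([4.0, 4.0], [2.0, 2.0]))  # (4.0, 32.0)
print(schedule_cost([4.0, 4.0], [1.0, 1.0]))  # (8.0, 8.0)
```

The bicriteria algorithms in the talk enumerate exactly such non-dominated (makespan, energy) pairs.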
Runway Scheduling at Heathrow Airport
We have been working with National Air Traffic Services to investigate take-off scheduling at London Heathrow airport, one of the busiest airports in the world. Runway controllers re-order aircraft for take-off within holding areas at the ends of the runway. They currently do this manually, despite the complex physical constraints imposed by the holding point structures and temporal constraints imposed by the mandated separation rules and take-off slots. Changing the take-off order for aircraft currently reduces the total delay for aircraft taking off to around a third of that obtained by a first come first served ordering. We have been investigating the feasibility of providing a decision support system for runway controllers, taking into account more aircraft than the controllers could be expected to do, to achieve further delay savings by early prediction and rectification of future problems. The model we have built for a decision support system will be presented. We will also discuss how a hybrid meta-heuristic search is used to find good take-off schedules fast enough to be of use in a real-time system. Finally, the simulation model we use to assess the performance of the decision support system will be presented together with some of the results we have obtained.
New lower bounds for the two-dimensional bin-packing problem
Co-authors: François Clautiaux and Aziz Moukrim
The two-dimensional bin-packing problem consists in determining the minimum number of bins (large rectangles) needed to pack a given set of items (small rectangles). It has been shown by Fekete and Schepers (2001) that the Dual-Feasible Functions (DFF) proposed by Johnson (1973) can be used to obtain good lower bounds. In this talk we introduce the notion of Data-Dependent Functions and report computational results obtained with three families of functions. We also discuss two new methods for generating other functions, at a higher computational cost. The first method consists in iteratively composing the three functions previously proposed. The second uses the enumerative method of Carlier and Néron (2002) to generate discrete DFF. We provide computational experiments comparing the effectiveness of the two methods on classical benchmarks from the literature.
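To illustrate the dual-feasible-function idea in one dimension (a classical threshold DFF, not the talk's data-dependent functions): a DFF maps item sizes so that any set fitting in one bin still sums to at most 1, hence the ceiling of the summed images is a valid lower bound — and it can beat the naive area bound.

```python
import math

def dff_threshold(eps):
    """A classical dual-feasible function: items larger than 1 - eps count as
    a full bin, items smaller than eps are dropped, the rest keep their size.
    Any set of sizes summing to at most 1 keeps an image sum of at most 1."""
    def f(x):
        if x > 1 - eps:
            return 1.0
        if x < eps:
            return 0.0
        return x
    return f

def lower_bound(sizes, f):
    """ceil(sum f(size_i)) is a valid lower bound on the number of unit bins."""
    return math.ceil(sum(map(f, sizes)))

sizes = [0.6, 0.6, 0.6]                       # no two items fit together
print(lower_bound(sizes, lambda x: x))        # naive bound: ceil(1.8) = 2
print(lower_bound(sizes, dff_threshold(0.45)))  # DFF bound: 3, which is tight
```

Fekete and Schepers obtain two-dimensional bounds by applying such functions to the widths and heights independently.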
Admissible Resource Selection Strategies in Two Level Job-Scheduling for a Computational Grid
We address the parallel job scheduling problem for computational grid systems. We concentrate on two-level hierarchical scheduling: at the first level, a broker allocates computational jobs to parallel computers; at the second level, each computer schedules the parallel jobs assigned to it with its own local scheduler. Selection and allocation strategies, and the efficiency of the proposed hierarchical scheduling algorithms, are discussed based on realistic grid parameters and real workloads.
Advances in a Dag-Scheduling Theory for Internet-Based Computing
Earlier work has developed the underpinnings of a theory of scheduling computations having intertask dependencies - modeled via dags - for Internet-based computing. The goal of the schedules produced is to render tasks eligible for execution at the maximum possible rate. This goal aims:
- to utilize remote clients' computational resources well, by always having work to allocate to an available client;
- to lessen the likelihood of the "gridlock" that ensues when a computation stalls for lack of eligible tasks.
The theory crafts a schedule for a given dag G by "parsing" G (if possible) into connected bipartite building-block dags (CBBBs, for short) that one can "compose" to form G and then analyzing the scheduling dependencies among these CBBBs. We extend the theory by developing an algorithmic tool that allows one to expand the theory's repertoire of building blocks, both via new classes of CBBBs, and by allowing one to add classes of bipartite building-block dags that are not necessarily connected.
Approximation algorithms for a three-dimensional orthogonal knapsack problem
Co-authors: Rolf Harren, Klaus Jansen and Ralf Thoele
We study non-overlapping axis-parallel packings of three-dimensional boxes with profits into a dedicated bigger box where rotation is forbidden; we wish to maximize the total profit. Since this optimization problem is NP-hard, we focus on approximation algorithms. We obtain fast and simple algorithms with approximation ratios 9+ε and 8+ε as well as an algorithm with approximation ratio 7+ε that uses more sophisticated techniques. To the best of our knowledge, these are the smallest absolute approximation ratios for the problem under consideration.
There is also a connection to scheduling: we can imagine a scenario where we wish to choose the most profitable jobs to be executed in a connected mesh system. From this perspective, the first two dimensions constitute the geometry of the mesh system while the third dimension corresponds to time.
Scheduling multiple divisible loads in star systems
We analyze the complexity of scheduling multiple divisible loads on star-interconnected parallel processors. It is shown that this problem is computationally hard for identical processors. Providing polynomial time algorithms is a challenging task even for the simplest forms of this problem. Nevertheless, some polynomially solvable cases are identified.
Scheduling monotone sub-chain orders on typed task systems
A sub-chain order graph is a special case of an interval order graph, consisting of the transitive closure of a chain plus independent tasks. We consider the machine model of typed task systems, where each operation has a type and there is a given number of processors of each type. Typed task systems generalize parallel machines. We consider the feasibility of scheduling a UET typed task system with release dates, deadlines, sub-chain order precedences, and precedence delays that are monotonic. We show this problem can be solved in polynomial time by extending the algorithm of Leung-Palem-Pnueli.
QoS-based Management of Multiple Shared Resources in Dynamic Real-Time Systems
Dynamic real-time systems require adaptive resource management to accommodate varying processing needs. We address the problem of resource management with multiple shared resources for soft real-time systems consisting of tasks with discrete QoS settings that correspond to varying resource usage and varying utility. Given an amount of available resources, the problem is to provide on-line control of the tasks' QoS settings so as to optimize the overall system utility. We propose several heuristic algorithms that will be shown to be compatible with the requirements imposed by our control-theoretical resource management framework:
- By only making incremental adjustments to QoS settings as available resources change, they provide low runtime complexity, making them suitable for use in on-line resource managers.
- Differences between actual utility and optimal utility do not accumulate over time, so there is no long-term degradation in performance.
- The lower and upper bound on actual utility can be calculated dynamically based on current system conditions, and absolute bounds can be calculated statically in advance.
- It is possible to respond to the actual resource availability, allowing all resources to be used and tolerating misspecification of task resource requirements.
Scheduling with reservations
Co-authors: Grégory Mounié and Denis Trystram
In this work, we focus on scheduling a set of independent jobs on a parallel platform composed of identical processors. This general problem has been widely studied on both the theoretical side (complexity analysis, approximability of some families of scheduling algorithms) and the practical side (dedicated software used in production). It is frequent that some processors of a cluster become unavailable for a certain amount of time (this corresponds to reservations where, for instance, a specific application must start its execution at a determined moment on a given number of processors).
We propose here to investigate the scheduling problem with restricted resource availabilities (temporary disconnection of a subset of machines). Our main result is a deep analysis of this problem (complexity, lower bounds, upper bounds) for several variants of list scheduling algorithms. More precisely, we show that the problem of scheduling with arbitrary reservations cannot be approximated. This leads us to study restricted versions of the problem where the amount of reservation is limited.
Our analysis is based on an old bound of Graham for resource-constrained list scheduling, for which we propose a new, simpler proof by considering the continuous version of the problem.
Cyclic Properties of Locally Connected Graphs with Bounded Vertex Degree
Co-authors: Yury Orlovich and Chris Potts
We consider the existence of Hamiltonian cycles for locally connected graphs with bounded vertex degree. Let D(G) and d(G) denote, respectively, the maximum and minimum vertex degree of graph G. We explicitly describe all connected, locally connected graphs with D(G)<5. We show that every connected, locally connected graph with D(G)=5 and d(G)>2 is fully cycle extendable. Furthermore, we prove that the HAMILTONIAN CYCLE problem for locally connected graphs with D(G)<8 is NP-complete.
Scheduling trees of tasks for parallel multifrontal methods
Co-authors: Olivier Beaumont, Pierre-François Dutot and Denis Trystram
We consider the direct solution of large sparse systems of linear equations of the form Ax = b on distributed memory parallel computers using multifrontal Gaussian elimination. For an unsymmetric matrix, we compute its LU factorization; if the matrix is symmetric, its LDLT factorization is computed. For reasons of numerical stability, pivoting may be required.
The multifrontal method was initially developed for indefinite sparse symmetric linear systems and was then extended to unsymmetric matrices.
The factorization of the matrix is done by performing a succession of partial factorizations of small dense matrices called frontal matrices, which are associated with the nodes of a so-called assembly tree representing the dependencies between tasks. The frontal matrix is divided into two parts: the factor block, also called the fully summed block, which corresponds to the variables factorized when the elimination algorithm processes the frontal matrix; and the contribution block, which corresponds to the variables updated when processing the frontal matrix. Once the partial factorization is completed, the contribution block is passed to the parent node. When contributions from all children are available on the parent, they can be assembled (i.e. summed with the values contained in the frontal matrix of the parent). The elimination algorithm is a postorder traversal of the assembly tree (we do not process parent nodes before their children).
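The postorder constraint on the assembly tree can be sketched in a few lines (a toy traversal with made-up node names, not the actual solver code):

```python
def eliminate(tree, node, process):
    """Postorder traversal of an assembly tree: a parent's frontal matrix is
    assembled and partially factorized only after all of its children."""
    for child in tree.get(node, []):   # leaves have no entry in the dict
        eliminate(tree, child, process)
    process(node)

# Toy assembly tree, as a node -> children mapping.
tree = {"root": ["a", "b"], "a": ["a1", "a2"]}
order = []
eliminate(tree, "root", order.append)
print(order)  # ['a1', 'a2', 'a', 'b', 'root']
```

The mapping problem discussed next is precisely which processors handle each `process(node)` call, given that independent subtrees may run in parallel.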
In the parallel context, the frontal matrices may be treated either on a single processor or with a parallel factorization scheme. Thus, it is important to be able to efficiently map the nodes of the assembly tree onto the set of working processors. This can be viewed as a problem of mapping a tree of malleable tasks onto a set of computational resources. We study in our context the behaviour of two state-of-the-art techniques achieving this goal: the algorithm presented by Prasanna and Musicus for scheduling a tree of malleable tasks over a set of processors, and an application of the techniques proposed by Skutella for the time-cost tradeoff problem. This work will be done in the context of the parallel multifrontal solver MUMPS.
Finding total unimodularity in optimization problems solved by linear programs
A popular approach in combinatorial optimization is to model problems as integer linear programs. Ideally, the relaxed linear program would have only integer solutions, which happens for instance when the constraint matrix is totally unimodular. Still, sometimes it is possible to build an integer solution with the same cost from the fractional solution. Examples are two scheduling problems [1,2] and the single disk prefetching/caching problem [3].
We show that problems such as the three previously mentioned can be separated into two subproblems:
- finding an optimal feasible set of slots, and
- assigning the jobs or pages to the slots.
It is straightforward to show that the latter can be solved greedily. We are able to solve the former with a totally unimodular linear program, which provides simpler (and sometimes combinatorial) algorithms with better worst case running times.
- P. Baptiste and B. Schieber. A note on scheduling tall/small multiprocessor tasks with unit processing time to minimize maximum tardiness. J. Sched., 6(4):395-404, 2003.
- P. Brucker and S.A. Kravchenko. Scheduling jobs with equal processing times and time windows on identical parallel machines.
- S. Albers, N. Garg, and S. Leonardi. Minimizing stall time in single and parallel disk systems. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing (STOC-98), pages 454--462, New York, May 23--26 1998. ACM Press.
Football Fixture Scheduling in England
Sports scheduling is a relatively unexplored area, with some notable exceptions. In this talk I will present a sports scheduling problem which has yet to be addressed by the academic community. I will outline the problem, together with the constraints that have to be adhered to (both hard and soft). I will present initial results and the future research directions that are planned.
An auction mechanism for scheduling batches on uniform machines
In a centralized scheduling system, all decisions are taken by one authority. In contrast, when machines are responsible for their own schedules, a control mechanism is needed to distribute the jobs between the machines. Inspired by the results of Nisan and Ronen, and of Archer and Tardos, on job allocation problems, we have studied the allocation of batches in a uniform machine environment.
In a job auction, a dispatcher wants to assign jobs to machines, the objective being to minimize the makespan. But it does not know everything about the capabilities of the machines, i.e., it does not know their speeds. Therefore, the dispatcher announces all job parameters and asks for a processing speed (a bid) from each machine. The machines, in turn, send in some claimed speed. Based on the collected speeds, the dispatcher allocates the batches to the machines and computes a payment. The cost of each machine is proportional to the true time spent processing the assigned batches. Hence, a machine could make a profit by bidding a speed different from its true one. However, if the machines do not report their true speeds, the dispatcher may allocate jobs poorly, i.e., the resulting schedule may be far from optimal. Hence, the dispatcher has to use a payment scheme which incites the machines to report their true speeds.
The above scheme was described by Archer and Tardos for job auctions in a uniform machine environment. We extend their results to batch scheduling problems, where each batch consists of identical jobs and setups do not depend on the sequence of processing. Our main contribution is a polynomial time auction mechanism (allocation algorithm + payment calculation) for scheduling the batches on uniform machines to minimize makespan.
Batching Deteriorating Items
Co-authors: Fawaz S. Al-Anzi and Ali Allahverdi
We study two deterministic scheduling problems that combine batching and deterioration features. In both problems, there is a certain demand for identical good-quality items to be produced in batches. In the first problem, each batch is assigned an individual machine that requires a cost and a time to be activated. All the machines are identical, work in parallel, and always produce good-quality items. All the items are available at time zero and deteriorate while waiting for production. Deterioration results in a linear increase of the time and cost of production. In the second problem, there is a single machine that produces good-quality as well as defective items in batches. Each batch is preceded by a setup time and requires a setup cost. Defective items have to be reworked on the same machine, and they deteriorate while waiting for rework. At a time to be decided, the machine switches from production to reworking the defective items of the current batch. After rework, every defective item has the required good quality. In both problems, the objective is to find a batch partition such that a linear combination of the production cost and the production completion time is minimized. The two problems were observed at computer service providers and also in reverse logistics. In computer service providers, machines and items correspond to communication service channels and information transfer tasks, respectively. We reduce both problems to minimizing a function of one variable representing the number of batches. In an optimal solution of either problem, there are at most two different batch sizes. Linear time algorithms are proposed for both problems.
Primary-backup Link Scheduling for Wireless Networks Operating in Hostile Environments
To address reliability, security and survivability concerns in wireless networks, a model is introduced that is based on cross-monitoring. Specifically, principal path and orthogonal monitoring are used to detect and possibly correct diverse faults. Recovery is conditioned on the specific requirements associated with the fault model considered. The network model subscribes to a five-fault hybrid fault model considering transmissive and omissive versions of symmetric and asymmetric faults.
To address the redundancy requirements necessary for recovery in the presence of failing or maliciously compromised nodes, several characteristics can be exploited to reduce redundant packet transmission for cross-monitoring purposes. The broadcast paradigm allows fault detection on the principal communication path. Moreover, in the orthogonal dimension it enables detection and correction. The detection mechanisms can be used for primary-backup scheduling. Traditional backup-backup scheduling was adapted to recover from benign and omission faults. In addition, a new paradigm is introduced that is based on primary-backup overloading in conjunction with backup-backup overloading, capable of dealing with malicious faults.
Characterization of just in time sequencing via apportionment
Just-in-time sequencing is used to balance workloads throughout just-in-time supply chains intended for low-volume, high-mix families of products. It makes supply chains more stable, carrying lower inventories of final products and components while at the same time ensuring fewer shortages. A number of algorithms have been proposed in the literature to optimize just-in-time sequencing. This paper characterizes these algorithms via criteria developed in apportionment theory.
Fairness in Job Scheduling on Cplant
The primary focus of research in job scheduling has been to increase utilization and improve desired user metrics such as turnaround time and slowdown. Very little research has addressed fairness. The Cplant scheduler uses a "fair share" measure to order jobs in the queue, but how fair is this scheduler?
One possible approach to assessing fairness is as follows: for each job, count the jobs with a higher user usage-count that are serviced while the job waits. The problem with that approach is how to account for "benign" backfilling that uses slots in the schedule not usable by this job. Instead, we propose assigning a "fair-start" time to a job when it is submitted, by generating a non-backfilling, in-order schedule based on fairness priority. If a job's actual start time does not exceed its fair-start time, the job is considered to have been fairly treated; otherwise, it is considered to have been unfairly treated.
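A minimal sketch of this fair-start computation, assuming a rigid job model (fixed processor count and runtime) and jobs already sorted by fairness priority; the job names and numbers are illustrative, not Cplant data:

```python
def fair_start_times(jobs, total_procs):
    """Simulate a non-backfilling, in-priority-order schedule and record
    each job's start time ('fair-start' time).

    jobs: list of (name, procs, runtime), sorted by fairness priority.
    """
    free_at = [0.0] * total_procs       # time at which each processor frees up
    starts = {}
    for name, procs, runtime in jobs:
        free_at.sort()
        start = free_at[procs - 1]      # wait until `procs` processors are free
        for i in range(procs):          # occupy the earliest-free processors
            free_at[i] = start + runtime
        starts[name] = start
    return starts

jobs = [("A", 3, 10.0), ("B", 2, 5.0), ("C", 4, 1.0)]
print(fair_start_times(jobs, 4))  # {'A': 0.0, 'B': 10.0, 'C': 15.0}
```

A job whose actual start time exceeds its simulated fair-start time would then be flagged as unfairly treated.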
The original Cplant scheduler implemented no-guarantee backfilling, under which none of the submitted jobs gets a scheduler reservation. Therefore, there is no blocking of jobs that can backfill onto open processors.
To prevent starvation, the Cplant scheduler drains the machine when a job has been waiting for more than twenty-four hours. While Cplant is being drained, many processors can go idle while jobs that would fit on the machine are waiting. To increase processor utilization and decrease job waiting times, aggressive/easy backfilling was added to the Cplant scheduler drain. With aggressive backfilling, a job that fits on Cplant is scheduled during the drain as long as its requested time is no greater than the maximum remaining time of the jobs running on the machine. Therefore, one job gets a reservation, and backfilling can be blocked by that job.
When Cplant became oversubscribed, the no-guarantee backfilling and starvation drain made Cplant "unfair". We studied the consequences for "fairness" of various alternatives to the above scheduler.
Scheduling with deployment
Grid computing provides computing resources to users with high requirements. The hardware is set up relatively easily; the software stack, however, is still a work in progress. Grid5000 is a platform dedicated to computer science, whose goal is to provide a large-scale testbed for researchers. This work describes a new challenge for scheduling experiments on such platforms: minimizing the number of deployments of software stacks. We propose to use well-known multiprocessor list scheduling, and we present the performance evaluation of our bicriteria algorithm.
Cyclic Scheduling with Latencies
We consider in this talk a set of generic tasks constrained by a set of uniform precedence constraints, a natural generalization of the basic cyclic scheduling problem: the two parameters of any uniform constraint (namely the value and the height) between two tasks may be negative, which allows us to tackle a large class of practical applications.
Firstly, we will present an application of this model to the design of embedded systems. Then, we will discuss possible extensions of the well-known results on the basic scheduling problem to this model when the number of processors is not bounded. Lastly, we will present some theoretical and experimental results for the case where all the tasks are executed by the same processor.
Minimizing Total Weighted Completion Time on Two Dedicated Processors with Preemptive Multiprocessor Tasks
We consider scheduling preemptive multiprocessor tasks on two dedicated processors to minimize total weighted completion time. We first show that this problem is NP-hard in the strong sense. We then present properties of the optimal schedule. Based on these properties, we propose a heuristic algorithm and analyze its average performance by a computational experiment. The results suggest that the heuristic algorithm is both efficient and effective with a relative error less than 2%.
Communication-Aware Processor Allocation for Supercomputers
We give processor-allocation algorithms for parallel jobs on grid architectures, where the objective is to select processors from a set of available processors to minimize the average number of communication hops. We present a natural approximation algorithm and show that it is a 7/4-approximation for 2D grids. In d dimensions, the approximation guarantee is 2-(1/2d), which is tight. We will describe experimental results comparing this algorithm to other strategies on a trace of a real job stream. We will describe the impact of this research at Sandia National Laboratories.
Speed Scaling for Weighted Flow
Co-authors: Nikhil Bansal and Clifford Stein
Intel's SpeedStep and AMD's PowerNOW technologies allow the Windows XP operating system to dynamically change the speed of the processor to prolong battery life. In this setting, the operating system must have not only a job selection policy to determine which job to run, but also a speed scaling policy to determine the speed at which to run it. These policies must be online, since the operating system in general has no knowledge of jobs that may arrive in the future. We give an O(1)-competitive algorithm for weighted flow time. This result is a big step forward in understanding speed scaling in problems with flow objectives.
Multiprocessing scheduling under uncertainty
We study a multiprocessor scheduling problem in which there is uncertainty concerning the successful execution of a job on a given processor. In this problem, we are given n unit-time jobs and m machines, with precedence constraints among the jobs. For every job j and machine i, we are also given pij, which is the probability that job j when scheduled on a machine i will be successfully completed, independent of all other job execution events. Multiple machines can be assigned to the same job at the same time. The goal of the problem is to find a schedule such that the expected completion time of all the jobs is minimized.
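A small illustration of why assigning multiple machines to the same job helps, for a single unit-time job (a sketch under the stated independence assumption, with each assigned machine attempting the job every time step until one succeeds):

```python
def expected_steps(success_probs):
    """Expected number of time steps until a unit job succeeds when a set of
    machines attempts it in parallel each step, with independent per-step
    success probabilities.

    Per-step success probability: q = 1 - prod(1 - p_i); the number of steps
    is geometric, so the expectation is 1/q.
    """
    fail = 1.0
    for p in success_probs:
        fail *= (1.0 - p)
    return 1.0 / (1.0 - fail)

print(expected_steps([0.5]))       # 2.0 steps on a single coin-flip machine
print(expected_steps([0.5, 0.5]))  # about 1.33 steps: replication helps
```

The hard part of the talk's problem is balancing such replication against precedence constraints across many jobs, which is what the oblivious schedules below address.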
The above scheduling problem is motivated by grid computing applications in which a large collection of unreliable machines may be handling a large collection of complex tasks with dependencies among them. The problem also models project management scenarios in which there may be uncertainty in the successful execution of the project tasks.
In this talk, we will present a polynomial-time poly-logarithmic approximation algorithm for the problem when the precedence constraints form a forest. Interestingly, we achieve this approximation ratio using oblivious schedules, which do not attempt to adapt to the outcomes of uncertain job executions.
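One small consequence of the independence assumption, under the simplification that an assigned set of machines retries a job every unit-time step until it succeeds: the completion time of a single job is geometric, so replication shortens its expectation.

```python
def expected_steps(success_probs):
    """Expected number of unit-time steps until a job succeeds when every
    listed machine attempts it each step, independently.  A step fails
    only if all machines fail, so the step count is geometric with
    success rate q = 1 - prod(1 - p_i)."""
    fail_all = 1.0
    for p in success_probs:
        fail_all *= 1.0 - p
    q = 1.0 - fail_all
    return float("inf") if q == 0.0 else 1.0 / q

# One machine with p = 0.5 needs 2 steps on average; adding a second
# identical machine drops this to 4/3 steps.
```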
Processor-oblivious parallel algorithms and scheduling: illustration on parallel prefix
The modified randomised work-stealing proposed by Bender & Rabin schedules a parallel program that performs W1 operations with a critical path Winf in time W1/(p.πave) + O((p.Winf)/πave) on p processors with average speed πave. Thus, it is well suited to scheduling a fine-grain parallel algorithm that performs a nearly optimal number W1 of operations on processors with changing speeds. However, numerous parallel algorithms require increasing the number W1 of operations with respect to the number Ws performed by an optimal sequential algorithm.
In this talk, we present a generic adaptive algorithm based on the coupling of an optimal sequential algorithm and a fine-grain parallel one with non-optimal work W1. The coupling is performed on-line, directed by a work-stealing schedule; it adapts the number of operations to the number and speeds of the available processors. The algorithm can therefore be considered processor-oblivious.
This generic adaptive scheme is illustrated on processor-oblivious prefix computation, where W1 is strictly lower bounded by 2n - Winf; this implies a lower bound of 2n/(p+1) for the parallel time on p identical processors. Extending this lower bound to processors with changing speeds, we prove that the processor-oblivious prefix computation achieves a nearly optimal parallel time 2n/((p+1).πave) + O(p.log n/πave). Experiments on an 8-processor SMP machine confirm this result: the processor-oblivious prefix compares favorably to an optimal parallel prefix algorithm for p identical processors.
Prisoner's Dilemma in Heterogeneous Grids
The recent paradigm shift in distributed computation from centralized clusters to organizationally distributed grids introduces a number of new challenges in resource management and scheduling. As a grid is composed of resources owned by many independent organizations, their multiple goals must be fulfilled. Consequently, multi-criteria optimization and game theory must be used to study such systems.
In this work we propose a model in which each organization owns a specialized piece of equipment (a dedicated processor), yet also has jobs that must be computed on processors belonging to other organizations. We consider the problem of off-line ordering of sets of jobs (originating from different organizations) on each processor. Firstly, we show that only n job ordering strategies (out of n!) for each organization are not dominated. Then, in a system composed of two organizations, we show that the ordering problem can be reduced to the well-known Prisoner's Dilemma game. Finally, we consider a heterogeneous case in which one organization has significantly more jobs to compute on the other processor. We propose a notion of a fair solution, which is Pareto-optimal and equitably dominates all other outcomes. We show that this solution is the equivalent of mutual cooperation in the Prisoner's Dilemma.
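The reduction to the Prisoner's Dilemma can be illustrated on a toy instance (hypothetical job lengths, not the paper's construction): each organization owns one processor, holds one job on its own processor and one on the other's, and each owner either runs its own job first ('defect') or the guest job first ('coop').

```python
def org_cost(my_order, other_order, local=3, remote=1):
    """Total completion time of one organization's two jobs in a toy
    two-organization instance (illustrative payoffs, not the paper's).
    Each org has a job of length `local` on its own processor and a job
    of length `remote` on the other org's processor."""
    # My local job runs on my processor; its completion depends on my order.
    c_local = local if my_order == "defect" else remote + local
    # My remote job runs on the other processor; depends on their order.
    c_remote = local + remote if other_order == "defect" else remote
    return c_local + c_remote
```

With local > remote the cost matrix has exactly the dilemma structure: 4 < 5 < 7 < 8 for the default lengths, so defection dominates even though mutual cooperation is better for both.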
Reliability versus performance for embedded real-time applications
Co-authors: Alain Girault and Denis Trystram
As a consequence of the huge development of ubiquitous and mobile computing, embedded systems are more and more popular. Embedded applications are often subject to real-time constraints, dependability issues and low energy consumption. These constraints make scheduling for embedded systems a complex problem.
In this work, we focus on generating high-performance schedules with good reliability. Reliability is a key factor of dependability. Usually, reliability is optimized by replicating adequate tasks of the algorithm on some well-chosen processors, the choice being made with regard to their reliability rate per time unit. We consider a reliability model corresponding to transient faults, in which only processors can fail; communication links are assumed reliable.
Multi-objective optimization is a field which has recently attracted a lot of attention from researchers and engineers. The commonly used approaches consist in fixing one criterion and optimizing the other. We propose instead to optimize both criteria by providing a set of best compromise solutions. More precisely, we propose a heuristic method that decomposes the problem into two successive phases, namely a spatial allocation with good reliability and a scheduling phase minimizing the execution time. As the reliability problem is the easier of the two, the first step is implemented using a randomized algorithm; the second is handled by a local search algorithm.
Connectivity measures for internet topologies
Co-authors: Thomas Erlebach, Linda Moonen and Danica Vukadinovic
The topology of the Internet has initially been modelled as an undirected graph, where vertices correspond to so-called Autonomous Systems (ASs), and edges correspond to physical links between pairs of ASs. However, in order to capture the impact of routing policies, it has recently become apparent that one needs to classify the edges according to the existing economic relationships (customer-provider, peer-to-peer or siblings) between the ASs. This leads to a directed graph model in which traffic can be sent only along so-called valley-free paths. Four different algorithms have been proposed in the literature for inferring AS relationships using publicly available data from routing tables. We investigate the differences in the graph models produced by these algorithms, focussing on connectivity measures. To this end, we compute the maximum number of vertex-disjoint valley-free paths between ASs as well as the size of a minimum cut separating a pair of ASs. Although these problems are solvable in polynomial time for ordinary graphs, they are NP-hard in our setting. We formulate the two problems as integer programs, and we propose a number of exact algorithms for solving them. For the problem of finding the maximum number of vertex-disjoint paths, we discuss two algorithms: the first is a branch-and-price algorithm based on the IP formulation, and the second is a non-LP-based branch-and-bound algorithm. For the problem of finding minimum cuts we use a branch-and-cut algorithm, based on the IP formulation of this problem. Using these algorithms, we obtain exact solutions for both problems in reasonable time. It turns out that there is a large gap in terms of the connectivity measures between the undirected and directed models. This finding supports our conclusion that economic relationships need to be taken into account when modelling the topology of the Internet.
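As a concrete illustration of the routing restriction (not of the IP-based algorithms themselves), the following sketch checks whether a path is valley-free given its edge types; sibling edges are ignored for simplicity.

```python
def is_valley_free(edge_types):
    """edge_types: list of 'up' (customer->provider), 'peer', or 'down'
    (provider->customer) edges along a path.  A path is valley-free if it
    is a (possibly empty) run of 'up' edges, then at most one 'peer'
    edge, then a (possibly empty) run of 'down' edges."""
    phase = 0  # 0 = climbing, 1 = just crossed a peer edge, 2 = descending
    for e in edge_types:
        if e == "up":
            if phase != 0:
                return False
        elif e == "peer":
            if phase != 0:
                return False
            phase = 1
        elif e == "down":
            phase = 2
        else:
            raise ValueError("unknown edge type: %r" % e)
    return True
```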
Proportional Share Scheduling
No abstract yet
Late Work Minimization in a small Flexible Manufacturing System
The research concerns a small flexible manufacturing system located at the Poznan University of Technology, consisting of two CNC machines, a measurement center and a single robot. The system is modeled as an extended job shop environment with open shop sections within particular jobs. A branch and bound method is proposed for minimizing the late work (i.e. the late parts of activities executed after their given due dates) within a single shift of production. Based on the results of computational experiments, conclusions are formulated on the efficiency of the B&B algorithm and on the behavior of the FMS under consideration.
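For reference, the late work criterion itself is easy to state; the helper below (an illustrative definition, not the B&B method) computes the total late work of a given schedule.

```python
def total_late_work(jobs):
    """jobs: list of (processing_time, completion_time, due_date).
    The late work of a job is the part of it executed after its due
    date, min(p, max(0, C - d)); a fully late job contributes at most p.
    Returns the sum over all jobs."""
    return sum(min(p, max(0, c - d)) for p, c, d in jobs)
```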
Scheduling Pairs of Jobs on Two Machines
Co-author: Irina N. Lushchakova
The problem of scheduling jobs on two consecutive machines to minimize the makespan is considered, provided that on each machine the jobs are grouped in pairs, i.e., batches containing exactly two jobs, not necessarily the same on each machine. The problem is related to scheduling models with batch availability and has applications to human resource management. A linear-time algorithm is presented.
Mixed criterion scheduling
We consider a scheduling problem that models a simplified variant of providing different levels of quality of service to different network clients. In our scheduling problem, there are two distinct sets of preemptable jobs: hard-deadline jobs which represent real-time packets in a network and flow jobs which represent other packets in the network. As the names imply, the jobs in these two sets differ by the criteria associated with them. For this problem, the flow jobs are scheduled to minimize the sum of their flow times, and the hard-deadline jobs are scheduled so that no job misses its deadline.
We show that our mixed criterion scheduling problem, which we have named the DeadFlow problem, is NP-complete. We consider two simplified variants of the DeadFlow problem. We show that the first variant, in which there is only one hard-deadline job, is NP-complete. We present a polynomial time algorithm that solves the second variant, where the jobs are of unit size.
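For intuition on the unit-size variant, here is a greedy sketch under simplifying assumptions (single machine, integer times; not the paper's algorithm): place the hard-deadline jobs as late as possible by a reverse-EDF sweep, then fill the free slots with flow jobs in release order.

```python
def deadflow_unit(hard, flow, horizon):
    """hard: list of (release, deadline) unit jobs that must meet their
    deadlines; flow: list of release times of unit jobs whose total flow
    time we want small.  Returns (total_flow_time, slots), or None if
    the hard jobs cannot all meet their deadlines."""
    slot = {}                         # time slot -> job placed there
    placed = [False] * len(hard)
    for t in range(horizon - 1, -1, -1):
        # hard jobs that may occupy slot [t, t+1): released, deadline not passed
        cand = [i for i, (r, d) in enumerate(hard)
                if not placed[i] and r <= t < d]
        if cand:
            j = max(cand, key=lambda k: hard[k][0])  # latest release first
            placed[j] = True
            slot[t] = ("hard", j)
    if not all(placed):
        return None
    total_flow = 0
    queue = sorted(flow)
    for t in range(horizon):
        if t not in slot and queue and queue[0] <= t:
            r = queue.pop(0)
            slot[t] = ("flow", r)
            total_flow += (t + 1) - r
    return total_flow, slot
```

Deferring hard jobs frees the earliest slots, which is exactly what small flow times want.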
Scheduling applications with uncertainties on new computing platforms
Co-authors: Jonathan Pecero-Sanchez and Denis Trystram
The recent development of new parallel and distributed platforms based on the interconnection of a large number of standard components has profoundly changed the landscape of the parallel processing field. The most important point for a more effective use of such systems is the management and optimization of the resources, particularly scheduling. Scheduling consists in allocating the tasks of a parallel program to the processors of the platform and determining at what time they will start their execution. These new systems are characterized by many new features that should be taken into account when optimizing performance. More than ever, the handled data are subject to uncertainties and/or disturbances, and it is often impossible to predict the parameters of the scheduling problem precisely.
In this talk, we propose to investigate several ways of dealing with uncertainties in scheduling algorithms. We first survey the different existing approaches and see how they can be interpreted and used in the context of cluster and grid computing. Then, we focus on partially on-line approaches, which start from an initial solution computed with estimated data and correct it on-line depending on the actual data. We describe the principle of stabilization and analyze a scheduling algorithm that is intrinsically stable (i.e. able to absorb the bad effects of disturbances). Finally, it is compared experimentally to pure on-line algorithms.
Minimizing the stretch when scheduling flows of divisible requests
We consider the problem of scheduling comparisons of motifs against biological databanks. This problem lies in the divisible load framework with negligible communication costs. Thus far, very few results have been proposed in this model. We first explain the relationship between this model and the preemptive uni-processor one. After selecting a few relevant metrics (max-stretch and sum-stretch), we survey, improve, and complete the existing results for the uni-processor model. We then derive algorithms for the multi-processor case and extensively study their performance in realistic scenarios. Our study clearly suggests an efficient heuristic for each of the two metrics, although a combined optimization is in theory not possible in the general case.
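Both metrics are simple to compute from a finished schedule; the helper below fixes the definitions (a job's stretch is its flow time divided by its size, i.e. its slowdown), on hypothetical job data.

```python
def stretch_metrics(jobs):
    """jobs: list of (release, completion, size).  A job's stretch is its
    flow time (completion - release) divided by its size: the slowdown
    it suffers relative to running alone on an empty machine.
    Returns (max_stretch, sum_stretch)."""
    stretches = [(c - r) / w for r, c, w in jobs]
    return max(stretches), sum(stretches)
```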
Data Redistribution between Distant Clusters using both Local and Wide Area Networks
In recent years, the advent of high-performance networks has enabled the emergence of large-scale parallel computing. However, communication times between distant sites are far from negligible and can be a problem for high-performance applications, in particular applications requiring frequent data transfers such as code coupling applications.
We study here the message scheduling problem between distant clusters when it is possible to use, on each cluster, a local network to carry out the redistribution. The use of such a local network helps in balancing the amount of data to be transferred among the participating nodes.
We propose an algorithm that applies to two different cases.
- A steady-state redistribution where the communication pattern repeats. In this case, our algorithm is asymptotically optimal.
- When the redistribution pattern is scheduled only once, this algorithm is a 3-approximation algorithm.
Multicriteria scheduling on the Grid
Firstly, two models of a Grid resource management problem are discussed: (i) with neither time characteristics nor resource reservations available, and (ii) with time characteristics obtained from prediction techniques and with resource reservation mechanisms available.
Then a scenario of Grid scheduling with predictions and resource reservations is presented with a general model of a multicriteria choice problem.
Finally, some implementation issues, as well as first practical results, are described.
A Comparison of Heuristics for Mean Flow Time Open Shop Scheduling
The problem of scheduling n jobs on m machines in an open shop environment so as to minimize mean flow time is considered. Since this problem is strongly NP-hard, different constructive and iterative heuristic algorithms are developed and discussed. Computational results are presented for problems with up to 50 jobs and 50 machines.
An NP-complete problem
Co-authors: C. Bentz, M.C. Costa, C. Picouleau and B. Ries
The basic image reconstruction problem in discrete tomography consists in assigning some color for every pixel of a rectangular array in such a way that requirements on the number of occurrences of each color in the rows and columns are satisfied.
This can be modelled as a vertex coloring problem in a graph with additional constraints.
In a similar way, chromatic scheduling considers applications of coloring models to various types of scheduling problems; here also additional requirements have to be introduced in order to capture the essential constraints of the scheduling problem.
Here we shall be concerned with graph coloring models where, in addition to the usual requirements occurring in coloring problems, we will have a collection P of chains Pi of the associated graph and it will be required that in each Pi color j occurs exactly hij times, where the hij are given integers.
We will examine special cases of graphs like trees, cacti and bipartite graphs. Polynomially solvable cases will be identified and general complexity results will be given. We will also relax some assumptions on colorings and study cases where the colorings need not be proper. The case of edge coloring (often used in basic timetabling models) will also be investigated. Open questions will finally be raised in relation with applications.
Keywords: chromatic scheduling, discrete tomography, image reconstruction, graph coloring, complexity
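The chain requirement is easy to state operationally; the checker below (an illustrative verifier, not a solver for the hard problems above) tests a given coloring against the h_ij counts.

```python
from collections import Counter

def chains_ok(coloring, chains, h):
    """coloring: dict vertex -> color; chains: list of vertex lists P_i;
    h: list of dicts, where h[i][j] is the exact number of times color j
    must occur on chain P_i (colors not listed must not occur there)."""
    for chain, required in zip(chains, h):
        if Counter(coloring[v] for v in chain) != Counter(required):
            return False
    return True
```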
Online scheduling with hard deadlines
In this talk, we deal with the following online scheduling problem. We are given m identical machines. All jobs have identical processing times. Each job is associated with a release time and a deadline, neither of which is known until the job appears. As soon as a job becomes available, we must immediately decide whether to accept or reject it. If a job is accepted, it must be completed by its deadline. The goal is to maximize the total number of accepted jobs. We study various strategies for designing an on-line algorithm. Our main result is a nontrivial on-line algorithm with competitive ratio 3/2 for the two-machine case, which is best possible. Deterministic lower bounds and a greedy algorithm for the general case are also given.
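To make the accept/reject mechanics concrete, here is a simple single-machine greedy sketch (a simplification of the m-machine model, and not the 3/2-competitive algorithm): accept an arriving job iff the accepted set still passes a deadline-order feasibility sweep.

```python
def greedy_accept(jobs, p=1):
    """jobs: list of (release, deadline) pairs in arrival order, each job
    needing p time units on a single machine.  The sweep below is a
    simple conservative feasibility test (it may reject some schedulable
    jobs): run the set non-preemptively in deadline order and check that
    no deadline is missed.  Returns the number of accepted jobs."""
    accepted = []

    def feasible(job_set):
        t = 0
        for r, d in sorted(job_set, key=lambda j: (j[1], j[0])):
            t = max(t, r) + p
            if t > d:
                return False
        return True

    for job in jobs:
        if feasible(accepted + [job]):
            accepted.append(job)
    return len(accepted)
```

The conservativeness of the sweep is one reason achieving the optimal competitive ratio requires a more careful strategy.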