Adaptive Algorithm for Minimizing Cloud Task Length with Prediction Errors
Compared to traditional distributed computing such as grid systems, it is non-trivial to optimize a cloud task's execution performance because of additional constraints such as the user's payment budget and divisible resource demands. In this paper, we analyze in depth our proposed optimal algorithm for minimizing task execution length under divisible resources and a payment budget: 1) We derive an upper bound on cloud task length that takes into account both workload prediction errors and host-load prediction errors. With such state-of-the-art bounds, the worst-case task execution performance becomes predictable, which in turn can improve the quality of service. 2) We design a dynamic version of the algorithm that adapts to load dynamics as task execution progresses, further improving resource utilization. 3) We build a cloud prototype on a real cluster environment with 56 virtual machines and evaluate our algorithm under different levels of resource contention. Cloud users in our cloud system are able to compose various tasks based on off-the-shelf web services. Experiments show that task execution lengths under our algorithm are always close to their theoretical optimal values, even in a competitive situation with limited available resources. We also observe a high level of fairness in the resource allocation among all tasks.
Index Terms—Algorithm, cloud computing, divisible-resource allocation, convex optimization, upper bound analysis
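The abstract does not state the precise optimization model; as an illustrative sketch only, a budget-constrained divisible-resource allocation of this kind can be posed as a convex program. Here the symbols $w_i$ (predicted demand on resource type $i$), $r_i$ (allocated fraction), $p_i$ (unit price), $c_i$ (capacity), and $B$ (payment budget) are assumed names, not the paper's notation:

$$
\min_{r_1,\dots,r_n}\; T(r_1,\dots,r_n)=\max_{1\le i\le n}\frac{w_i}{r_i}
\quad\text{s.t.}\quad \sum_{i=1}^{n} p_i\, r_i \le B,\qquad 0 < r_i \le c_i .
$$

Since each term $w_i/r_i$ is convex for $r_i>0$ and the budget and capacity constraints are linear, the feasible region and objective form a convex problem, which is consistent with the convex-optimization framing listed in the index terms.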