Rounding using Random Walks-An Experimental Study

Soumen Basu, Sandeep Sen

semanticscholar (2015)

Abstract
We have carried out a rigorous experimental analysis of iterative randomized rounding algorithms for the packing integer problem. We explore techniques based on multidimensional Brownian motion in R^n. Let x′ be a fractional feasible solution that maximizes a linear objective function subject to the constraints Ax ≤ 1, A ∈ {0,1}^(m×n). The independent randomized rounding method proposed by Raghavan and Thompson [6] rounds each variable x_i to 1 with probability x′_i. This matches the expected value of the rounded objective function with the fractional optimum, and no constraint is violated by more than O(log n / log log n). Our research aims to find techniques that produce a better bound, and our experimental studies confirm that the error bound can be improved.

The first technique closely resembles the ‘Edge-Walk’ method proposed by Lovett and Meka [3]. We start from a fractional feasible solution and perform a constrained multidimensional random walk that conforms to the constraints. Once the random walk hits a constraint A_i (or comes δ-close to it), it is confined to the hyperplane C_i that bounds A_i. The walk progresses along C_i until it hits another constraint A_j, after which it is restricted to the intersection C_i ∩ C_j. We proceed in this manner until the dimension becomes 0, i.e., the random walk is confined to a point. At this stage we relax the constraints by an amount ∆ and repeat the procedure.

In the second technique we iteratively transform x′ to x* using a random walk. This method sparsifies the constraint matrix, reducing it to a new matrix A* in which each constraint has at most log n non-zero coefficients. At this point we exploit the reduced dependencies among the constraints by applying Moser and Tardos’ constructive form of the Lovász Local Lemma.
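The independent rounding baseline above can be sketched as follows. This is a minimal illustration, not the paper's code; the toy matrix, the fractional point, and the function name are our own choices:

```python
import numpy as np

def independent_rounding(A, x_frac, rng=None):
    """Raghavan-Thompson independent rounding for packing constraints Ax <= 1.

    Rounds each fractional x'_i to 1 with probability x'_i, independently.
    Returns the rounded 0/1 vector and the worst constraint violation,
    max_i (A x)_i - 1 (negative means all constraints are satisfied).
    """
    rng = np.random.default_rng(rng)
    x_round = (rng.random(len(x_frac)) < x_frac).astype(float)
    violation = float((A @ x_round - 1.0).max())
    return x_round, violation

# Toy instance (illustrative): 4 packing constraints over 6 variables,
# with the fractional feasible point x' = (1/3, ..., 1/3).
A = np.array([[1, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1]])
x_frac = np.full(6, 1.0 / 3.0)
x_round, violation = independent_rounding(A, x_frac, rng=0)
```

By linearity of expectation, E[c · x_round] = c · x′ for any objective c, which is why the expected rounded objective matches the fractional optimum; the O(log n / log log n) violation bound comes from a Chernoff-bound argument over the m constraints.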
For m constraints in n variables, with exactly k variables in each inequality, the constraints are satisfied within O( (log((mk log n)/n) + log log(mn)) / log( log((mk log n)/n) + log log(mn) ) ) with high probability. For log((mk log n)/n) = o(log n) this is better than the O(log n / log log n) error produced by Raghavan and Thompson’s method. In particular, for m = O(n) and k = polylog(n), this method incurs only O(log log n / log log log n) error.
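The freeze-and-project walk of the first technique can be sketched roughly as follows. This is a simplified single-pass illustration under our own assumptions (Gaussian steps, no ∆-relaxation and restart, hypothetical step and δ parameters), not the authors' implementation:

```python
import numpy as np

def constrained_walk(A, x0, step=0.01, delta=1e-3, max_iters=20000, rng=None):
    """Simplified sketch of the constrained random walk.

    Takes small random steps from x0; whenever the walk comes delta-close
    to a constraint a_i . x = 1, that constraint is 'frozen' and all later
    steps are projected onto the intersection of the frozen hyperplanes.
    The walk stops once the free subspace has dimension 0 (a single point).
    """
    rng = np.random.default_rng(rng)
    n = len(x0)
    x = x0.astype(float).copy()
    active = []                # indices of frozen constraints
    basis = np.zeros((0, n))   # orthonormal basis of the frozen normals
    for _ in range(max_iters):
        if basis.shape[0] >= n:        # dimension 0: walk pinned to a point
            break
        g = rng.standard_normal(n)
        g -= basis.T @ (basis @ g)     # project step onto the free subspace
        norm = np.linalg.norm(g)
        if norm < 1e-12:
            continue
        x += step * g / norm
        for i in np.flatnonzero(1.0 - A @ x < delta):   # delta-close rows
            if i not in active:
                active.append(int(i))
                v = A[i] - basis.T @ (basis @ A[i])     # Gram-Schmidt step
                if np.linalg.norm(v) > 1e-12:
                    basis = np.vstack([basis, v / np.linalg.norm(v)])
    return x, active

# Toy packing instance (illustrative): 3 constraints over 4 variables.
A = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 0, 1]], dtype=float)
x0 = np.full(4, 0.3)
x_final, frozen = constrained_walk(A, x0, rng=0)
```

Because a frozen constraint's slack is held fixed by the projection, no constraint ends up violated by more than roughly one step length in this sketch; the full method then relaxes the constraints by ∆ and repeats until the walk reaches an integral point.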