s action on $\\boldsymbol{B}$ changes the interaction energy between $\\boldsymbol{B}$ and $\\boldsymbol{C}$ in an equal and opposite way to the effect of $\\boldsymbol{C}$'s action on $\\boldsymbol{B}$ on the interaction energy between $\\boldsymbol{A}$ and $\\boldsymbol{B}$. This result is a curiously detailed accounting of energy flow, as contrasted to standard pointwise conservation laws in fluid dynamics. This result holds for all triples of points $\\boldsymbol{A},\\boldsymbol{B},\\boldsymbol{C}$ in two dimensions, and in three dimensions for all points $\\boldsymbol{A},\\boldsymbol{C}$, and all closed vorticity streamlines $\\boldsymbol{B}$. We show this result in three dimensions as a consequence of an interaction energy flow around $\\boldsymbol{B}$ that is a function only of the triple $(\\boldsymbol{A},\\boldsymbol{b}\\in \\boldsymbol{B},\\boldsymbol{C})$, a result which may be of independent interest. ","authors":[{"id":"53f42f74dabfaee2a1c96788","name":"Valiant Paul"}],"flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"5ddf9ac53a55ac735a8f1084","num_citation":0,"order":0,"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fstorage\u002Fpdf\u002Farxiv\u002F19\u002F1911\u002F1911.12289.pdf","title":"New relations for energy flow in terms of vorticity","urls":["https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.12289"],"versions":[{"id":"5ddf9ac53a55ac735a8f1084","sid":"1911.12289","src":"arxiv","year":2019}],"year":2019},{"abstract":"We show that a class of statistical properties of distributions, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated given a sublinear-sized sample. Specifically, given a sample consisting of independent draws from any distribution over at most k distinct elements, these properties can be estimated accurately using a sample of size O(k log k). 
For these estimation tasks, this performance is optimal, to constant factors. Complementing these theoretical results, we also demonstrate that our estimators perform exceptionally well, in practice, for a variety of estimation tasks, on a variety of natural distributions, for a wide range of parameters. The key step in our approach is to first use the sample to characterize the “unseen” portion of the distribution—effectively reconstructing this portion of the distribution as accurately as if one had a logarithmic factor larger sample. This goes beyond such tools as the Good-Turing frequency estimation scheme, which estimates the total probability mass of the unobserved portion of the distribution: We seek to estimate the shape of the unobserved portion of the distribution. This work can be seen as introducing a robust, general, and theoretically principled framework that, for many practical applications, essentially amplifies the sample size by a logarithmic factor; we expect that it may be fruitfully used as a component within larger machine learning and statistical analysis systems.","authors":[{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"},{"id":"53f4639bdabfaedd74e5ea77","name":"Gregory Valiant"}],"doi":"","id":"5c8c5d964895d9cbc6e1a387","lang":"en","num_citation":116,"order":0,"pages":{"end":"37:41","start":"37:1"},"pdf":"\u002F\u002Fstatic.aminer.org\u002Fpdf\u002F20160902\u002Fweb-conf\u002FNIPS\u002FNIPS-2013-2846.pdf","title":"Estimating the Unseen: Improved Estimators for Entropy and other Properties.","urls":["http:\u002F\u002Fdoi.acm.org\u002F10.1145\u002F3125643"],"venue":{"info":{"name":"J. 
ACM"},"issue":"6","volume":"64"},"versions":[{"id":"599c7f09601a182cd28e6376","sid":"conf\u002Fnips\u002FValiantV13","src":"dblp","vsid":"conf\u002Fnips","year":2013},{"id":"56d8138fdabfae2eee624f7c","sid":"2124055802","src":"mag","year":2013},{"id":"58774e5f0a3ac5b5de65c14b","sid":"58774e5f0a3ac5b5de65c14b","src":"pdf","year":2013},{"id":"5a4aef6f17c44a2190f78423","sid":"journals\u002Fjacm\u002FValiantV17","src":"dblp","vsid":"journals\u002Fjacm","year":2017},{"id":"5a79ad320a3ac53e09baa401","sid":"5a79ad320a3ac53e09baa401","src":"pdf","year":2013},{"id":"5c7571b2f56def979879afc6","sid":"2762763087","src":"mag","vsid":"118992489","year":2017}],"year":2017},{"abstract":"We consider the problem of verifying the identity of a distribution: Given the description of a distribution over a discrete support p = (p1, p2,…, pn) how many samples (independent draws) must one obtain from an unknown distribution, q, to distinguish, with high probability, the case that p = q from the case that the total variation distance (L1 distance) | |p--q| | 1≥ ε? We resolve this question, up to constant factors, on an instance by instance basis: there exist universal constants c, c' and a function f(p, ε) on distributions and error parameters, such that our tester distinguishes the case that p = q from | |p--q| | 1≥ ε using f(p, ε) samples with success probability 2\u002F3, but no tester can distinguish p = q | |p--q| | 1≥ ε c ⋅ ε when given c' f(p, ε) samples. The function f(p, ε) is upper-bounded by a multiple of the | |p| |2\u002F3ε2, but is more complicated, and is significantly smaller in some cases when p has many small domain elements, or a single large one. 
This result significantly generalizes and tightens previous results: since distributions of support at most n have L_{2\u002F3} norm bounded by √n, this result immediately shows that for such distributions, O(√n\u002Fε^2) samples suffice, tightening the previous bound of O(√n · polylog(n)\u002Fε^4) for this class of distributions, and matching the (tight) known results for the case that p is the uniform distribution over support n. The analysis of our very simple testing algorithm involves several hairy inequalities. To facilitate this analysis, we give a complete characterization of a general class of inequalities -- generalizing Cauchy-Schwarz, Hölder's inequality, and the monotonicity of L_p norms. Specifically, we characterize the set of sequences (a_i) = a_1,…, a_r, (b_i) = b_1,…, b_r, (c_i) = c_1,…, c_r, for which it holds that for all finite sequences of positive numbers (x_j) = x_1,… and (y_j) = y_1,…, ∏_{i=1}^{r} (Σ_j x_j^{a_i} y_j^{b_i})^{c_i} ≥ 1. For example, the standard Cauchy-Schwarz inequality corresponds to the sequences a = (1, 0, 1\u002F2), b = (0, 1, 1\u002F2), c = (1\u002F2, 1\u002F2, -1). Our characterization is of a non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought through trial and error, by hand. 
We do not believe such a characterization has appeared in the literature, and hope its computational nature will be useful to others, and facilitate analyses like the one here.","abstract_zh":"","authors":[{"id":"53f4639bdabfaedd74e5ea77","name":"Gregory Valiant"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"10.1109\u002FFOCS.2014.14","id":"557c6e4008b02739a5ca6f43","lang":"en","num_citation":170,"order":1,"pages":{"end":"60","start":"51"},"pdf":"","title":"An Automatic Inequality Prover and Instance Optimal Identity Testing","urls":["http:\u002F\u002Fieeexplore.ieee.org\u002Fxpl\u002FarticleDetails.jsp?tp=&arnumber=6978989","https:\u002F\u002Fdoi.org\u002F10.1137\u002F151002526","http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"Foundations of Computer Science","name_zh":""},"issue":"1","volume":"46"},"versions":[{"id":"557c6e4008b02739a5ca6f43","sid":"6978989","src":"ieee","year":2014},{"id":"558b7921612c6b62e5e8960d","sid":"2707449","src":"acm","year":2014},{"id":"599c7e72601a182cd28a1288","sid":"conf\u002Ffocs\u002FValiantV14","src":"dblp","vsid":"conf\u002Ffocs","year":2014},{"id":"56d81459dabfae2eee6761c0","sid":"2105433123","src":"mag","year":2015},{"id":"599c7a95601a182cd26c7d3d","sid":"journals\u002Fsiamcomp\u002FValiantV17","src":"dblp","vsid":"journals\u002Fsiamcomp","year":2017},{"id":"5c756cb1f56def97984b48e1","sid":"2592411267","src":"mag","vsid":"153560523","year":2017},{"id":"5faa029f65ca82c27fa65f3a","sid":"WOS:000366324300006","src":"wos","vsid":"2014 55TH ANNUAL IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS 2014)","year":2014}],"year":2017},{"abstract":"A review of analyses based upon anti-parallel vortex structures suggests that structurally stable dipoles with eroding circulation may offer a path to the study of vorticity growth in solutions of Euler's equations in R^3. 
We examine here the possible formation of such a structure in axisymmetric flow without swirl, leading to maximal growth of vorticity as t^{4\u002F3}. Our study suggests that the optimizing flow giving the t^{4\u002F3} growth mimics an exact solution of Euler's equations representing an eroding toroidal vortex dipole which locally conserves kinetic energy. The dipole cross-section is a perturbation of the classical Sadovskii dipole having piecewise constant vorticity, which breaks the symmetry of closed streamlines. The structure of this perturbed Sadovskii dipole is analysed asymptotically at large times, and its predicted properties are verified numerically. We also show numerically that if mirror symmetry of the dipole is not imposed but axial symmetry maintained, an instability leads to breakup into smaller vortical structures.","authors":[{"id":"53f4a296dabfaedce563083d","name":"Stephen Childress"},{"id":"53f43a9bdabfaee4dc7aaa2b","name":"Andrew D. Gilbert"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"10.1017\u002Fjfm.2016.573","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"5fc6a722e8bf8c1045e22634","num_citation":0,"order":2,"pages":{"end":"30.0","start":"1.0"},"pdf":"https:\u002F\u002Fcz5waila03cyo0tux1owpyofgoryrooa.oss-cn-beijing.aliyuncs.com\u002F96\u002F8C\u002FC5\u002F968CC52784BCE7DAAC4619C2C9608DF4.pdf","title":"Eroding dipoles and vorticity growth for Euler flows in R^3: axisymmetric flow without swirl","urls":["http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"JOURNAL OF FLUID MECHANICS"},"volume":"805"},"versions":[{"id":"5fc6a722e8bf8c1045e22634","sid":"WOS:000384332000005","src":"wos","vsid":"JOURNAL OF FLUID MECHANICS","year":2016}],"year":2016},{"abstract":"Star-convexity is a significant relaxation of the notion of convexity that allows for functions that do not have (sub)gradients at most points, and may even be discontinuous everywhere except at the global optimum. 
We introduce a polynomial-time algorithm for optimizing the class of star-convex functions, under no Lipschitz or other smoothness assumptions whatsoever, and no restrictions except exponential boundedness on a region about the origin, and Lebesgue measurability. The algorithm's performance is polynomial in the requested number of digits of accuracy and the dimension of the search domain. This contrasts with the previous best known algorithm of Nesterov and Polyak, which has exponential dependence on the number of digits of accuracy, but only n^ω dependence on the dimension n (where ω is the matrix multiplication exponent), and which further requires Lipschitz second differentiability of the function [1]. Despite a long history of successful gradient-based optimization algorithms, star-convex optimization is a uniquely challenging regime because 1) gradients and\u002For subgradients often do not exist; and 2) even in cases when gradients exist, there are star-convex functions for which gradients provably provide no information about the location of the global optimum. We thus bypass the usual approach of relying on gradient oracles and introduce a new randomized cutting plane algorithm that relies only on function evaluations. Our algorithm essentially looks for structure at all scales, since, unlike with convex functions, star-convex functions do not necessarily display simpler behavior on smaller length scales. Thus, while our cutting plane algorithm refines a feasible region of exponentially decreasing volume by iteratively removing \"cuts\", unlike for the standard convex case, the structure to efficiently discover such cuts may not be found within the feasible region: our novel star-convex cutting plane approach discovers cuts by sampling the function exponentially far outside the feasible region. 
We emphasize that the class of star-convex functions we consider is as unrestricted as possible: the class of Lebesgue measurable star-convex functions has theoretical appeal, introducing to the domain of polynomial-time algorithms a huge class with many interesting pathologies. We view our results as a step forward in understanding the scope of optimization techniques beyond the garden of convex optimization and local gradient-based methods.","authors":[{"name":"Jasper C. H. Lee"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"10.1109\u002FFOCS.2016.71","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"58d83045d649053542fe8458","lang":"en","num_citation":23,"order":1,"pages":{"end":"614","start":"603.0"},"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fstorage\u002Fpdf\u002Farxiv\u002F15\u002F1511\u002F1511.04466.pdf","title":"Optimizing Star-Convex Functions.","urls":["http:\u002F\u002Fdx.doi.org\u002F10.1109\u002FFOCS.2016.71","http:\u002F\u002Fdoi.ieeecomputersociety.org\u002F10.1109\u002FFOCS.2016.71","db\u002Fconf\u002Ffocs\u002Ffocs2016.html#LeeV16","https:\u002F\u002Fdoi.org\u002F10.1109\u002FFOCS.2016.71","https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.04466","http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"2016 IEEE 57TH ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS)"},"issue":"","volume":""},"versions":[{"id":"599c7e72601a182cd28a1844","sid":"conf\u002Ffocs\u002FLeeV16","src":"dblp","vsid":"conf\u002Ffocs","year":2016},{"id":"5c610957da56297340b71bce","sid":"1511.04466","src":"arxiv","year":2016},{"id":"5faa03f665ca82c27fa6611c","sid":"WOS:000391198500064","src":"wos","vsid":"2016 IEEE 57TH ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS)","year":2016}],"year":2016},{"abstract":"As new proposals aim to sequence ever larger collection of humans, it is critical to have a quantitative framework to evaluate the statistical power of these projects. 
We developed a new algorithm, UnseenEst, and applied it to the exomes of 60,706 individuals to estimate the frequency distribution of all protein-coding variants, including rare variants that have not been observed yet in the current cohorts. Our results quantified the number of new variants that we expect to identify as sequencing cohorts reach hundreds of thousands of individuals. With 500K individuals, we find that we expect to capture 7.5% of all possible loss-of-function variants and 12% of all possible missense variants. We also estimate that 2,900 genes have loss-of-function frequency of \u003C0.00001 in healthy humans, consistent with very strong intolerance to gene inactivation.","authors":[{"id":"53f45575dabfaeb22f4fd5fa","name":"Zou James"},{"id":"53f4639bdabfaedd74e5ea77","name":"Valiant Gregory"},{"id":"53f42f74dabfaee2a1c96788","name":"Valiant Paul"},{"id":"56322ddf45ce1e5968d08527","name":"Karczewski Konrad"},{"id":"53f42e02dabfaee1c0a3bc2f","name":"Chan Siu On"},{"id":"53f44253dabfaedf435bef28","name":"Samocha Kaitlin"},{"id":"5633afbc45cedb339ab52c1e","name":"Lek Monkol"},{"id":"53f43795dabfaeee229b5acd","name":"Sunyaev Shamil"},{"id":"53f48bdadabfaee4dc8b2b40","name":"Daly Mark"},{"id":"53f42c60dabfaedf4350778d","name":"MacArthur Daniel G"}],"doi":"10.1038\u002Fncomms13293","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"58174fca0cf2c0e4918017ce","lang":"en","num_citation":34,"order":2,"pages":{"end":"","start":"13293"},"pdf":"https:\u002F\u002Fcz5waila03cyo0tux1owpyofgoryrooa.oss-cn-beijing.aliyuncs.com\u002F4B\u002FCC\u002F34\u002F4BCC34EA6DFFAA772922668E38146A12.pdf","title":"Quantifying unobserved protein-coding variants in human populations provides a roadmap for large-scale sequencing 
projects","urls":["http:\u002F\u002Fdx.doi.org\u002F10.1038\u002Fncomms13293","http:\u002F\u002Fwww.nature.com\u002Fncomms\u002F2016\u002F161028\u002Fncomms13293\u002Ffull\u002Fncomms13293.html","http:\u002F\u002Fdx.doi.org\u002F10.1038\u002Fncomms13293","http:\u002F\u002Fwww.ncbi.nlm.nih.gov\u002Fpubmed\u002F27796292?report=xml&format=text","http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"NATURE COMMUNICATIONS"},"issue":"","volume":"7"},"versions":[{"id":"58174fca0cf2c0e4918017ce","sid":"doi:10.1038\u002Fncomms13293","src":"nature","year":2016},{"id":"581a4c6a0cf245bd0c5d13a0","sid":"27796292","src":"pubmed","vsid":"101528555","year":2016},{"id":"5fd561188cdecd4daa1664ec","sid":"WOS:000386456100001","src":"wos","vsid":"NATURE COMMUNICATIONS","year":2016}],"year":2016},{"abstract":"We consider the following basic learning task: given independent draws from an unknown distribution over a discrete support, output an approximation of the distribution that is as accurate as possible in L1 distance (equivalently, total variation distance, or \\\"statistical distance\\\"). Perhaps surprisingly, it is often possible to \\\"de-noise\\\" the empirical distribution of the samples to return an approximation of the true distribution that is significantly more accurate than the empirical distribution, without relying on any prior assumptions on the distribution. We present an instance optimal learning algorithm which optimally performs this de-noising for every distribution for which such a de-noising is possible. More formally, given n independent draws from a distribution p, our algorithm returns a labelled vector whose expected distance from p is equal to the minimum possible expected error that could be obtained by any algorithm, even one that is given the true unlabeled vector of probabilities of distribution p and simply needs to assign labels---up to an additive subconstant term that is independent of p and goes to zero as n gets large. 
This somewhat surprising result has several conceptual implications, including the fact that, for any large sample from a distribution over discrete support, prior knowledge of the rates of decay of the tails of the distribution (e.g. power-law type assumptions) is not significantly helpful for the task of learning the distribution. As a consequence of our techniques, we also show that given a set of n samples from an arbitrary distribution, one can accurately estimate the expected number of distinct elements that will be observed in a sample of any size up to n log n. This sort of extrapolation is practically relevant, particularly to domains such as genomics where it is important to understand how much more might be discovered given larger sample sizes, and we are optimistic that our approach is practically viable.","authors":[{"id":"53f4639bdabfaedd74e5ea77","name":"Gregory Valiant"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"10.1145\u002F2897518.2897641","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"57d063cfac44367354291a6e","lang":"en","num_citation":53,"order":1,"pages":{"end":"155","start":"142"},"pdf":"https:\u002F\u002Fcz5waila03cyo0tux1owpyofgoryrooa.oss-cn-beijing.aliyuncs.com\u002F02\u002FC4\u002F1F\u002F02C41FF9DE439B906291C8111ECAF18F.pdf","title":"Instance optimal learning of discrete 
distributions.","urls":["http:\u002F\u002Fdoi.acm.org\u002F10.1145\u002F2897518.2897641","http:\u002F\u002Fdx.doi.org\u002F10.1145\u002F2897518.2897641","http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2897641&preflayout=flat","http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"STOC"},"issue":"","volume":""},"versions":[{"id":"599c7c45601a182cd279b6b2","sid":"conf\u002Fstoc\u002FValiantV16","src":"dblp","vsid":"conf\u002Fstoc","year":2016},{"id":"59a298c154ef14563dd8a802","sid":"2897641","src":"acm","year":2016},{"id":"5ce2be7aced107d4c60817cb","sid":"2419099043","src":"mag","vsid":"1190910084","year":2016},{"id":"5fa9ddc465ca82c27fa62114","sid":"WOS:000455404200012","src":"wos","vsid":"STOC'16: PROCEEDINGS OF THE 48TH ANNUAL ACM SIGACT SYMPOSIUM ON THEORY OF COMPUTING","year":2016}],"year":2016},{"abstract":"We introduce the notion of a database system that is information theoretically secure in between accesses—a database system with the properties that 1) users can efficiently access their data, and 2) while a user is not accessing their data, the user’s information is information theoretically secure to malicious agents, provided that certain requirements on the maintenance of the database are realized. We stress that the security guarantee is information theoretic and everlasting: it relies neither on unproved hardness assumptions, nor on the assumption that the adversary is computationally or storage bounded. We propose a realization of such a database system and prove that a user’s stored information, in between times when it is being legitimately accessed, is information theoretically secure both to adversaries who interact with the database in the prescribed manner, as well as to adversaries who have installed a virus that has access to the entire database and communicates with the adversary. 
The central idea behind our design of an information theoretically secure database system is the construction of a “re-randomizing database” that periodically changes the internal representation of the information that is being stored. To ensure security, these remappings of the representation of the data must be made sufficiently often in comparison to the amount of information that is being communicated from the database between remappings and the amount of local memory in the database that a virus may preserve during the remappings. While this changing representation provably foils the ability of an adversary to glean information, it can be accomplished in a manner transparent to the legitimate users, preserving how database users access their data. The core of the proof of the security guarantee is the following communication\u002Fdata tradeoff for the problem of learning sparse parities from uniformly random n-bit examples. Fix a set S ⊂ {1, . . . , n} of size k: given access to examples x_1, . . . , x_t, where each x_i ∈ {0, 1}^n is chosen uniformly at random, conditioned on the XOR of the components of x_i indexed by set S equalling 0, any algorithm that learns the set S with probability at least p and extracts at most r bits of information from each example must see at least p · (n\u002Fr)^{k\u002F2} · c_k examples, for c_k ≥ (1\u002F4) · √((2e)^k\u002Fk^{k+3}). 
The r bits of information extracted from each example can be an arbitrary (adaptively chosen) function of the entire example, and need not be simply a subset of the bits of the example.","authors":[{"id":"53f4639bdabfaedd74e5ea77","name":"Gregory Valiant"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"57a4e91aac44365e35c97de0","lang":"en","num_citation":9,"order":1,"pages":{"end":"","start":""},"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fstorage\u002Fpdf\u002Farxiv\u002F16\u002F1605\u002F1605.02646.pdf","title":"Information Theoretically Secure Databases.","urls":["http:\u002F\u002Farxiv.org\u002Fabs\u002F1605.02646","http:\u002F\u002Feccc.hpi-web.de\u002Freport\u002F2016\u002F078","https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.02646"],"venue":{"info":{"name":"Electronic Colloquium on Computational Complexity (ECCC)"},"issue":"","volume":"abs\u002F1605.02646"},"versions":[{"id":"5e8d92929fced0a24b6187fb","sid":"journals\u002Fcorr\u002FValiantV16","src":"dblp","vsid":"journals\u002Fcorr","year":2016},{"id":"599c7a49601a182cd26a53dc","sid":"journals\u002Feccc\u002FValiantV16","src":"dblp","vsid":"journals\u002Feccc","year":2016},{"id":"5ce2b415ced107d4c6e84230","sid":"2355911333","src":"mag","vsid":"11189987","year":2016},{"id":"5d9edba947c8f7664602317d","sid":"2963144647","src":"mag","vsid":"11189987","year":2016},{"id":"5c61098dda56297340b802b9","sid":"1605.02646","src":"arxiv","year":2016}],"year":2016},{"authors":[{"name":"AD Gilbert"},{"name":"S Childress"},{"id":"53f42f74dabfaee2a1c96788","name":"P Valiant"}],"doi":"10.1017\u002FJFM.2016.573","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"5ff68cd7d4150a363cd3a5ce","num_citation":2,"order":2,"pages":{"end":"30","start":"1"},"title":"Eroding dipoles and vorticity growth for Euler flows in R 3 : Axisymmetric flow without 
swirl","urls":["https:\u002F\u002Fore.exeter.ac.uk\u002Frepository\u002Fhandle\u002F10871\u002F23353"],"venue":{"info":{"name":"Journal of Fluid Mechanics"},"volume":"805"},"versions":[{"id":"5ff68cd7d4150a363cd3a5ce","sid":"3103684308","src":"mag","vsid":"152000018","year":2016}],"year":2016},{"abstract":" We consider the following basic learning task: given independent draws from an unknown distribution over a discrete support, output an approximation of the distribution that is as accurate as possible in $\\ell_1$ distance (i.e. total variation or statistical distance). Perhaps surprisingly, it is often possible to \"de-noise\" the empirical distribution of the samples to return an approximation of the true distribution that is significantly more accurate than the empirical distribution, without relying on any prior assumptions on the distribution. We present an instance optimal learning algorithm which optimally performs this de-noising for every distribution for which such a de-noising is possible. More formally, given $n$ independent draws from a distribution $p$, our algorithm returns a labelled vector whose expected distance from $p$ is equal to the minimum possible expected error that could be obtained by any algorithm that knows the true unlabeled vector of probabilities of distribution $p$ and simply needs to assign labels, up to an additive subconstant term that is independent of $p$ and goes to zero as $n$ gets large. One conceptual implication of this result is that for large samples, Bayesian assumptions on the \"shape\" or bounds on the tail probabilities of a distribution over discrete support are not helpful for the task of learning the distribution. As a consequence of our techniques, we also show that given a set of $n$ samples from an arbitrary distribution, one can accurately estimate the expected number of distinct elements that will be observed in a sample of any size up to $n \\log n$. 
This sort of extrapolation is practically relevant, particularly to domains such as genomics where it is important to understand how much more might be discovered given larger sample sizes, and we are optimistic that our approach is practically viable. ","abstract_zh":"","authors":[{"id":"53f4639bdabfaedd74e5ea77","name":"Gregory Valiant"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"5550417845ce0a409eb3b919","lang":"en","num_citation":20,"order":1,"pages":{"end":"","start":""},"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fstorage\u002Fpdf\u002Farxiv\u002F15\u002F1504\u002F1504.05321.pdf","title":"Instance Optimal Learning.","urls":["http:\u002F\u002Farxiv.org\u002Fabs\u002F1504.05321","https:\u002F\u002Farxiv.org\u002Fabs\u002F1504.05321"],"venue":{"info":{"name":"CoRR","name_zh":""},"issue":"","volume":"abs\u002F1504.05321"},"versions":[{"id":"5e8d92969fced0a24b61c995","sid":"journals\u002Fcorr\u002FValiantV15","src":"dblp","vsid":"journals\u002Fcorr","year":2015},{"id":"56d84b04dabfae2eeed5e521","sid":"1698563072","src":"mag","year":2015},{"id":"5c610914da56297340b61602","sid":"1504.05321","src":"arxiv","year":2015}],"year":2015},{"abstract":"","abstract_zh":"","authors":[{"name":"Jasper C. H. 
Lee"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"573696046e3b12023e517ee2","lang":"en","num_citation":0,"order":1,"pages":{"end":"","start":""},"title":"Beyond Convex Optimization: Star-Convex Functions.","urls":["http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.04466"],"venue":{"info":{"name":"CoRR"},"issue":"","volume":"abs\u002F1511.04466"},"versions":[{"id":"5e8d92a59fced0a24b62f309","sid":"journals\u002Fcorr\u002FLeeV15a","src":"dblp","vsid":"journals\u002Fcorr","year":2015}],"year":2015},{"abstract":"We formulate a notion of evolvability for functions with domain and range that are real-valued vectors, a compelling way of expressing many natural biological processes. We show that linear and fixed-degree polynomial functions are evolvable in the following dually-robust sense: There is a single evolution algorithm that, for all convex loss functions, converges for all distributions. It is possible that such dually-robust results can be achieved by simpler and more-natural evolution algorithms. Towards this end, we introduce a simple and natural algorithm that we call wide-scale random noise and prove a corresponding result for the L2 metric. 
We conjecture that the algorithm works for a more general class of metrics.","abstract_zh":"","authors":[{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"10.1145\u002F2633598","flags":[{"flag":"affirm_author","person_id":"53f42f74dabfaee2a1c96788"}],"id":"5550427345ce0a409eb42631","lang":"en","num_citation":12,"order":0,"pages":{"end":"","start":""},"pdf":"https:\u002F\u002Fcz5waila03cyo0tux1owpyofgoryrooa.oss-cn-beijing.aliyuncs.com\u002F08\u002F37\u002F32\u002F0837325265A33D781FDDB7C5BABAE021.pdf","title":"Evolvability of Real Functions","urls":["http:\u002F\u002Fdx.doi.org\u002F10.1145\u002F2633598","http:\u002F\u002Fdoi.acm.org\u002F10.1145\u002F2633598","http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2663945.2633598&COLL=DL&DL=acm&preflayout=flat","http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2663945.2633598&coll=DL&dl=GUIDE&CFID=520962530&CFTOKEN=71363919&preflayout=flat"],"venue":{"info":{"name":"TOCT"},"issue":"3","volume":"6"},"versions":[{"id":"558af198612c41e6b9d3e20b","sid":"2633598","src":"acm","year":2014},{"id":"599c7a39601a182cd269d9bf","sid":"journals\u002Ftoct\u002FValiant14","src":"dblp","vsid":"journals\u002Ftoct","year":2014},{"id":"56d83c95dabfae2eee6899a0","sid":"2160493469","src":"mag","year":2014}],"year":2014},{"abstract":"We study the question of closeness testing for two discrete distributions. More precisely, given samples from two distributions p and q over an n-element set, we wish to distinguish whether p = q versus p is at least ε-far from q, in either ℓ1 or ℓ2 distance. Batu et al [BFR+00, BFR+13] gave the first sub-linear time algorithms for these problems, which matched the lower bounds of [Val11] up to a logarithmic factor in n, and a polynomial factor of ε. 
In this work, we present simple testers for both the ℓ1 and ℓ2 settings, with sample complexity that is information-theoretically optimal, to constant factors, both in the dependence on n and the dependence on ε. For the ℓ1 testing problem we establish that the sample complexity is Θ(max{n^{2\u002F3}\u002Fε^{4\u002F3}, n^{1\u002F2}\u002Fε^2}).","abstract_zh":"","authors":[{"id":"53f42e02dabfaee1c0a3bc2f","name":"Siu-on Chan"},{"id":"544844badabfae87b7df9ed3","name":"Ilias Diakonikolas"},{"id":"53f4639bdabfaedd74e5ea77","name":"Gregory Valiant"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"10.1137\u002F1.9781611973402.88","id":"53e9b82fb7602d97043eb872","lang":"en","num_citation":151,"order":3,"pages":{"end":"1203","start":"1193"},"pdf":"https:\u002F\u002Fstatic.aminer.cn\u002Fstorage\u002Fpdf\u002Farxiv\u002F13\u002F1308\u002F1308.3946.pdf","title":"Optimal algorithms for testing closeness of discrete distributions","urls":["http:\u002F\u002Farxiv.org\u002Fabs\u002F1308.3946","http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2634074.2634162&coll=DL&dl=GUIDE&CFID=520962530&CFTOKEN=71363919&preflayout=flat","http:\u002F\u002Fdx.doi.org\u002F10.1137\u002F1.9781611973402.88","https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F2634074.2634162","https:\u002F\u002Farxiv.org\u002Fabs\u002F1308.3946"],"venue":{"info":{"name":"SODA"},"issue":"","volume":"abs\u002F1308.3946"},"versions":[{"id":"558aeb26612c41e6b9d3d6b8","sid":"2634162","src":"acm","year":2014},{"id":"599c7e7f601a182cd28a74e6","sid":"conf\u002Fsoda\u002FChanDVV14","src":"dblp","vsid":"conf\u002Fsoda","year":2014},{"id":"5e8d92869fced0a24b60b854","sid":"journals\u002Fcorr\u002FChanDVV13","src":"dblp","vsid":"journals\u002Fcorr","year":2013},{"id":"56d8cddfdabfae2eee8b79b0","sid":"2138256420","src":"mag","year":2013},{"id":"5e8ec6779fced0a24b6e966d","sid":"10.5555\u002F2634074.2634162","src":"acm","vsid":"soda","year":2014},{"id":"5c610852da56297340b35290","sid":"1308.3946","src":"ar
xiv","year":2013}],"year":2014},{"abstract":"We give highly efficient algorithms, and almost matching lower bounds, for a range of basic statistical problems that involve testing and estimating the L1 (total variation) distance between two k-modal distributions p and q over the discrete domain {1,..., n}. More precisely, we consider the following four problems: given sample access to an unknown k-modal distribution p, Testing identity to a known or unknown distribution: 1. Determine whether p = q (for an explicitly given k-modal distribution q) versus p is ε-far from q; 2. Determine whether p = q (where q is available via sample access) versus p is ε-far from q; Estimating L1 distance (\\\"tolerant testing\\\") against a known or unknown distribution: 3. Approximate dTV(p, q) to within additive ε where q is an explicitly given k-modal distribution q; 4. Approximate dTV(p, q) to within additive ε where q is available via sample access. For each of these four problems we give sub-logarithmic sample algorithms, and show that our algorithms have optimal sample complexity up to additive poly(k) and multiplicative polylog log n + polylogk factors. Our algorithms significantly improve the previous results of [BKR04], which were for testing identity of distributions (items (1) and (2) above) in the special cases k = 0 (monotone distributions) and k = 1 (unimodal distributions) and required O((log n)3) samples. As our main conceptual contribution, we introduce a new reduction-based approach for distribution-testing problems that lets us obtain all the above results in a unified way. 
Roughly speaking, this approach enables us to transform various distribution testing problems for k-modal distributions over {1,..., n} to the corresponding distribution testing problems for unrestricted distributions over a much smaller domain {1,..., ℓ} where ℓ = O(k log n).","abstract_zh":"","authors":[{"id":"53f46831dabfaee43ecfd993","name":"Constantinos Daskalakis"},{"id":"544844badabfae87b7df9ed3","name":"Ilias Diakonikolas"},{"id":"562d446d45cedb3398da7692","name":"Rocco A. Servedio"},{"id":"53f4639bdabfaedd74e5ea77","name":"Gregory Valiant"},{"id":"53f42f74dabfaee2a1c96788","name":"Paul Valiant"}],"doi":"10.5555\u002F2627817.2627948","id":"53e9b577b7602d97040b478a","lang":"en","num_citation":79,"order":4,"pages":{"end":"1852","start":"1833"},"title":"Testing k-modal distributions: optimal algorithms via reductions","urls":["http:\u002F\u002Fdx.doi.org\u002F10.1137\u002F1.9781611973105.131","http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2627817.2627948&coll=DL&dl=GUIDE&CFID=685960569&CFTOKEN=72569009&preflayout=flat","https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F2627817.2627948","http:\u002F\u002Fwww.webofknowledge.com\u002F"],"venue":{"info":{"name":"SODA"},"issue":"","volume":""},"versions":[{"id":"558c08200cf20e727d0f57b2","sid":"2627948","src":"acm","year":2013},{"id":"599c7e7f601a182cd28a7046","sid":"conf\u002Fsoda\u002FDaskalakisDSVV13","src":"dblp","vsid":"conf\u002Fsoda","year":2013},{"id":"56d8772adabfae2eee20dd4f","sid":"2112274965","src":"mag","year":2011},{"id":"5e8ec8b39fced0a24b6f86de","sid":"10.5555\u002F2627817.2627948","src":"acm","vsid":"soda","year":2013},{"id":"619b696f1c45e57ce9e2807b","sid":"WOS:000461897900131","src":"wos","vsid":"PROCEEDINGS OF THE TWENTY-FOURTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS (SODA 
2013)","year":2013}],"year":2013}],"profilePubsTotal":52,"profilePatentsPage":1,"profilePatents":[],"profilePatentsTotal":0,"profilePatentsEnd":true,"profileProjectsPage":0,"profileProjects":null,"profileProjectsTotal":null,"newInfo":null,"checkDelPubs":[]}};
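The closeness tester of Chan, Diakonikolas, Valiant, and Valiant is built around a simple count-based statistic: under Poissonization with mean sample size m, each term (X_a − Y_a)² − X_a − Y_a has expectation m²(p_a − q_a)², so the sum over the domain estimates m²·||p − q||₂². A minimal Python sketch of that statistic (the function name and interface are illustrative; the paper's actual thresholding and ℓ1-to-ℓ2 reduction are omitted):

```python
from collections import Counter

def closeness_statistic(xs, ys):
    """Estimator related to ||p - q||_2^2 (illustrative sketch).

    xs, ys: samples drawn from p and q respectively.  Under
    Poissonization, E[(X_a - Y_a)^2 - X_a - Y_a] = m^2 (p_a - q_a)^2,
    so the sum concentrates near 0 when p = q and grows when they are far.
    """
    cx, cy = Counter(xs), Counter(ys)
    return sum((cx[a] - cy[a]) ** 2 - cx[a] - cy[a]
               for a in set(cx) | set(cy))
```

A tester would compare this statistic against a threshold calibrated to the sample size and the target distance ε; choosing that threshold is where the paper's analysis does its work.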
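The k-modal paper's reduction maps testing problems over {1, ..., n} down to a domain of size ℓ = O(k log n) by grouping points into intervals and running an unrestricted-distribution tester on the induced coarse distribution. A toy sketch of the binning step (the cut points here are hypothetical inputs; the paper's contribution is how to choose them so the reduction preserves distances):

```python
import bisect

def bin_samples(samples, cuts):
    """Collapse samples over a large ordered domain to interval indices.

    cuts: sorted cut points; a sample s lands in interval
    bisect_right(cuts, s), giving a reduced domain of len(cuts) + 1
    symbols.  Any tester for unrestricted distributions can then be run
    on the binned samples instead of the raw ones.
    """
    return [bisect.bisect_right(cuts, s) for s in samples]
```

For example, with cuts [10, 100] the domain {1, ..., n} collapses to three symbols, regardless of n; the real construction uses Θ(k log n) intervals tuned to the k-modal structure.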