Deep Computation in Statistical Physics - Abstracts


Deep Computation in Statistical Physics Workshop
organized by Cris Moore (SFI), Jon Machta (University of Massachusetts, Amherst & SFI), and Stephan Mertens (Magdeburg & SFI)
Santa Fe Institute
August 8-10, 2013


Eli Ben-Naim, Los Alamos National Laboratory
“Nontrivial Exponents in Record Statistics”
Abstract. I will discuss statistics of records for an ensemble of independent and identically distributed variables. I will show that first-passage probabilities associated with persistent configurations of extreme values decay algebraically with the number of variables. The decay exponents are nontrivial and can be obtained using analytic methods, as eigenvalues of integral or differential equations. I will describe in detail three problems involving superior records, inferior records, and incremental records.
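
As a point of reference for the kind of quantity involved, the sketch below (Python; not from the talk, and not the superior, inferior, or incremental records analyzed there) estimates the mean number of records in a sequence of i.i.d. variables, which equals the harmonic number H_n ~ ln n. The sample size and the uniform distribution are arbitrary choices.

    import random

    def count_records(n):
        """Count upper records in a sequence of n i.i.d. uniform variables."""
        best = float("-inf")
        records = 0
        for _ in range(n):
            x = random.random()
            if x > best:
                best = x
                records += 1
        return records

    n, trials = 1000, 20000
    mean_records = sum(count_records(n) for _ in range(trials)) / trials
    # For i.i.d. variables the expected number of records is H_n = 1 + 1/2 + ... + 1/n ~ ln n.
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    print(f"simulated mean records: {mean_records:.3f}, harmonic number H_n: {harmonic:.3f}")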


Stefan Boettcher, Emory University
“Renormalization Group for Quantum Walks with and Without Coins”
Abstract. I will demonstrate how RG can be applied to gain insight into the asymptotic behavior of QW. The fundamental goal of RG in statistical physics is the classification of dynamical behaviors into universality classes based on general principles, such as symmetries, invariances, and conservation laws, independent of the microscopic details. It is not obvious whether universality is a useful concept for QW, whether it can provide the same amount of insight and control there, or even how to implement RG in this setting. Here, I present our first steps in that direction.
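
For background, here is a minimal simulation of the standard discrete-time coined (Hadamard) quantum walk on the line, a generic textbook construction rather than the renormalization scheme of the talk. It illustrates the ballistic spreading whose asymptotics an RG treatment would aim to classify; the number of steps is an arbitrary choice.

    import numpy as np

    def hadamard_walk(steps):
        """Discrete-time coined quantum walk on the line with a Hadamard coin."""
        size = 2 * steps + 1                         # positions -steps ... +steps
        psi = np.zeros((size, 2), dtype=complex)     # amplitude[position, coin]
        psi[steps, 0] = 1.0                          # start at the origin with coin state |0>
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        for _ in range(steps):
            psi = psi @ H.T                          # apply the coin at every site
            shifted = np.zeros_like(psi)
            shifted[1:, 0] = psi[:-1, 0]             # coin |0> component moves right
            shifted[:-1, 1] = psi[1:, 1]             # coin |1> component moves left
            psi = shifted
        return (np.abs(psi) ** 2).sum(axis=1)        # probability at each position

    prob = hadamard_walk(100)
    x = np.arange(-100, 101)
    # The spread grows linearly with the number of steps (ballistic), unlike a classical walk.
    print("spread (std dev):", np.sqrt((prob * x**2).sum() - (prob * x).sum() ** 2))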


Brian Hayes, American Scientist
"First Links in the Markov Chain"
Abstract. A century ago the Russian mathematician A. A. Markov was busy counting the vowels and consonants in Alexander Pushkin’s poem Eugene Onegin. The result of this peculiar pastime was a tool now in use everywhere in computational science and simulation: the Markov chain. How did it come about that Markov’s important technique had its roots in the lexical analysis of poetry? The answer to this question involves a tale of politics, institutional rivalry, personal feuds and even a religious cult---but also a deep inquiry into principles of probability. This nontechnical talk will explore the history of Markov’s ideas, with a brief look at their later evolution, including the invention of Markov chain Monte Carlo about 60 years ago quite near Santa Fe.


Thomas Hayes, University of New Mexico
Better Coupling via Alternative Distance Metrics, in the Hard Disk Model
(Joint work with Cris Moore)
Abstract. We consider a straightforward coupling argument proving a rigorous lower bound on the critical density threshold for the hard disk model. When the density parameter is small enough, one can prove that, by coupling two copies of a Glauber dynamics for this process (that relocates a single randomly chosen disk to a new random location), the Hamming distance between them can be reduced. This implies fast mixing for the Glauber dynamics, as well as Gibbs uniqueness for the hard disk model.

One unappealing aspect of the above approach is that the Hamming distance is not robust: even a very small random perturbation to either of the configurations would increase the Hamming distance to its maximum possible value. We show that, by choosing a more robust metric, combined with a slightly more sophisticated coupling, an improvement can be made over the more straightforward approach.
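
For concreteness, here is a minimal sketch of the single-disk relocation update described above, assuming a square box with periodic boundaries (the box, radius, and parameter names are illustrative): pick a random disk, propose a uniformly random new position, and accept it only if no overlap is created.

    import random

    def relocate_move(disks, radius, box):
        """One Glauber-style update for hard disks in a periodic square box:
        relocate a randomly chosen disk to a uniformly random position,
        accepting the move only if it creates no overlap."""
        i = random.randrange(len(disks))
        new = (random.uniform(0, box), random.uniform(0, box))
        for j, (x, y) in enumerate(disks):
            if j == i:
                continue
            dx = min(abs(new[0] - x), box - abs(new[0] - x))   # periodic distance
            dy = min(abs(new[1] - y), box - abs(new[1] - y))
            if dx * dx + dy * dy < (2 * radius) ** 2:
                return False                                   # overlap: reject
        disks[i] = new
        return True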


Koji Hukushima, University of Tokyo
“An Irreversible Monte Carlo Method for Some Spin Systems”

Helmut G. Katzgraber, Texas A&M University
"Static and Dynamic Properties of Spin Glasses as seen through the Parallel Tempering Telescope"
(Work done in collaboration with B. Yucesoy and J. Machta)
Abstract. Despite ongoing research spanning several decades, the low-temperature structure of short-range spin glasses remains controversial. In recent years, however, the advent of fast and cost-effective computers, as well as the use of efficient algorithms such as Exchange (Parallel Tempering) Monte Carlo, has enabled researchers to probe these paradigmatic systems deep into the spin-glass phase. In this talk results of a large-scale numerical study of the three-dimensional Ising spin glass are presented. In particular, we show a striking correlation between the efficiency of the Exchange Monte Carlo method and the roughness of the free-energy landscape. Furthermore, by performing a careful statistical analysis of several thousand independent disorder realizations and using an observable that detects peaks in the overlap distribution, we show that the Edwards-Anderson spin glass does not appear to be mean-field like at low temperatures.
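
As an illustration of the Exchange (Parallel Tempering) Monte Carlo move underlying the study, here is a minimal replica-exchange swap step, assuming each replica is summarized by its current energy; the single-temperature updates between swaps are omitted.

    import math, random

    def tempering_swap(energies, betas):
        """Attempt one replica-exchange (parallel tempering) swap between a
        randomly chosen pair of neighbouring temperatures.  energies[k] is the
        current energy of the replica at inverse temperature betas[k]."""
        k = random.randrange(len(betas) - 1)
        # Metropolis criterion for exchanging the two configurations.
        delta = (betas[k + 1] - betas[k]) * (energies[k + 1] - energies[k])
        if delta >= 0 or random.random() < math.exp(delta):
            # Swap the configurations (represented here only by their energies).
            energies[k], energies[k + 1] = energies[k + 1], energies[k]
            return True
        return False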


Werner Krauth, Ecole Normale Superieure
"Rejection-free, Irreversible, and Infinitesimal (!) Monte Carlo Algorithms"
(Collaboration with M. Michel and S. C. Kapfer)
Abstract. I show how the lifting principle and a new pairwise decomposition of the Metropolis method allow one to design a class of powerful rejection-free Markov-chain Monte Carlo algorithms that break detailed balance yet satisfy global balance. These algorithms generalize the recent hard-sphere event-chain Monte Carlo method. As an application, I demonstrate considerable speed-up of the event-chain algorithm for soft spheres and for Lennard-Jones systems, using a decomposition into the positive and negative parts of the potential. I sketch extensions of the algorithm to the simulation of quantum systems and to general problems in computer science, such as the sampling problem in a polytope.
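
For intuition about the event-chain idea, here is a minimal sketch for hard rods on a ring, a one-dimensional stand-in for the hard-sphere case (not the soft-potential decomposition of the talk): a chosen rod moves forward until it either exhausts the displacement budget or collides with the next rod, which then inherits the remaining displacement, so no proposal is ever rejected.

    import random

    def event_chain_sweep(x, length, box, ell):
        """One event-chain move for hard rods of size `length` on a ring of
        circumference `box`, starting from a valid (non-overlapping) configuration.
        A random rod moves to the right until it exhausts the total displacement
        budget `ell` or hits the next rod, which then carries the remaining
        displacement (the lifting step)."""
        x.sort()
        n = len(x)
        i = random.randrange(n)
        remaining = ell
        while remaining > 0:
            nxt = (i + 1) % n
            gap = (x[nxt] - x[i] - length) % box     # free space before the collision
            step = min(remaining, gap)
            x[i] = (x[i] + step) % box
            remaining -= step
            if remaining > 0:
                i = nxt                              # lifting: the hit rod moves next
        return x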


Alan Middleton, Syracuse University
"Spin Glasses on the Plane and Torus: How Far Can You Go With Polynomial Time Algorithms?"
Abstract. Finding spin glass ground states and partition functions is generally a hard problem. Given spin glasses on a plane or torus, however, polynomial time algorithms can be used to compute standard statistical quantities and to exactly sample states. I will give an overview of results for the 2D spin glass. One 2D spin glass is a potential proxy for the low temperature phase of the 3D spin glass. For some quantities, very large lattices are needed to see the thermodynamic limit. Future possibilities include the application of these algorithms to study dynamics at very long time scales.


Stephan Mertens, Otto-von-Guericke University and the Santa Fe Institute
"A Sublinear Algorithm for Percolation"
(This is joint work with Cris Moore)
Abstract. Percolating systems can be simulated in linear time and space, using, for example, the union-find algorithm of Newman and Ziff. We explore a connection between percolation and the minimum spanning tree to devise an algorithm that runs in sublinear time and space. The algorithm can be used to measure the percolation threshold in lattice and continuum percolation to high precision, especially in high dimensions.
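
For reference, the linear-time baseline mentioned above can be sketched as a Newman-Ziff style sweep that adds bonds in random order and merges clusters with union-find. The lattice, boundary conditions, and the observable recorded are illustrative choices.

    import random

    def bond_percolation_sweep(L, seed=0):
        """Newman-Ziff style sweep for bond percolation on an L x L square lattice
        with periodic boundaries: add bonds one at a time in random order, merge
        clusters with union-find, and record the largest cluster after each step."""
        random.seed(seed)
        n = L * L
        parent = list(range(n))
        size = [1] * n

        def find(a):                       # find with path halving
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        bonds = []
        for x in range(L):
            for y in range(L):
                s = x * L + y
                bonds.append((s, ((x + 1) % L) * L + y))   # bond to the right neighbour
                bonds.append((s, x * L + (y + 1) % L))     # bond to the upper neighbour
        random.shuffle(bonds)

        largest, history = 1, []
        for a, b in bonds:
            ra, rb = find(a), find(b)
            if ra != rb:                   # union by size
                if size[ra] < size[rb]:
                    ra, rb = rb, ra
                parent[rb] = ra
                size[ra] += size[rb]
                largest = max(largest, size[ra])
            history.append(largest)        # largest cluster vs. number of occupied bonds
        return history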


Amanda Streib and Noah Streib, NIST
“A Stratified Sampling Approach to the Ising Model”
(Joint work with Isabel Beichl and Francis Sullivan)
Abstract. Approximation algorithms designed to estimate the partition function of the Ising model have been the subject of intense study for many years. Jerrum and Sinclair found the only known fully polynomial randomized approximation scheme (FPRAS) for this problem, yet the running time of their Markov chain is too large to be practical. In this talk, we introduce a new approximation method, which combines graph theory and heuristic sampling and aims to provide a practical method for estimating the partition function and other thermodynamic quantities.


Francis Sullivan, IDA/Center for Computing Sciences
"The Hard, Easy, Hard Phase Transition: The Case of A Particular Instance"
Abstract.

    You can't always get what you want
    But if you try sometimes...

Much published work on the Easy, Hard, Easy transition has focused on the case of an ensemble of problem instances. If one considers a random instance of, say, 3-SAT, the expected computing time to a solution for a given algorithm depends on the parameters or constraints specified for the problem.

A slightly different point of view can be taken. When actually solving an instance of some NP-hard problem by an optimization and/or search method, one often observes a 'local' instance of a phase transition. Initially, a partial solution is generated quickly and easily. Then things get hard and some kind of backtracking is required. This is sometimes followed by another easy phase in which the solution is rapidly completed. In some cases, the easy phase is followed by one or more additional hard phases. An individual faced with a problem to solve might want to know with certainty whether the problem actually is hard, and whether the easy part can be identified and done quickly.

We analyze this phenomenon for the case of Sudoku, where it is possible to give a polynomial-time algorithm that determines whether the problem is hard, solves the easy part (which might be the whole problem), and identifies the hard part. The algorithm makes some novel and unexpected use of graph theory.
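
The abstract does not spell out the algorithm, so purely as an illustration of what "solving the easy part" can mean, here is a generic constraint-propagation pass that repeatedly fills Sudoku cells having a single remaining candidate; it is not the authors' graph-theoretic method.

    def propagate(grid):
        """Fill every Sudoku cell that has exactly one remaining candidate,
        repeating until no forced move is left.  Whatever remains unsolved is,
        loosely speaking, the 'hard part'.  grid is a 9x9 list of lists with 0
        marking an empty cell."""
        changed = True
        while changed:
            changed = False
            for r in range(9):
                for c in range(9):
                    if grid[r][c]:
                        continue
                    used = set(grid[r]) | {grid[i][c] for i in range(9)}
                    br, bc = 3 * (r // 3), 3 * (c // 3)
                    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
                    candidates = set(range(1, 10)) - used
                    if len(candidates) == 1:        # forced move: only one value fits
                        grid[r][c] = candidates.pop()
                        changed = True
        return grid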


Pan Zhang, University of New Mexico
"Message Passing and Pooling Strategies for Non-Adaptive Linear Group Testing"
Abstract. We study non-adaptive pooling strategies for the detection of rare faulty items. Given a sparse binary N-dimensional signal x, how should one construct a sparse binary M × N pooling matrix F so that the signal can be reconstructed from the smallest possible number M of measurements y = Fx? We show that a very small number of measurements suffices for a random, spatially coupled design of the pools F. Our design might find application in genetic screening or compressed genotyping. We show that our results are robust to uncertainty in the matrix F when some of its elements are in error.
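
To fix notation, the following sketch sets up the measurement model only, not the spatially coupled design or the message-passing reconstruction: a sparse binary signal x, a random sparse binary pooling matrix F, and the measurements y = Fx. All dimensions and the number of pools per item are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters: N items, M pools, a few faulty items,
    # and each item assigned to a small number of pools.
    N, M, faulty, pools_per_item = 200, 40, 5, 3

    x = np.zeros(N, dtype=int)
    x[rng.choice(N, size=faulty, replace=False)] = 1          # rare faulty items

    F = np.zeros((M, N), dtype=int)
    for j in range(N):                                        # each item joins a few pools
        F[rng.choice(M, size=pools_per_item, replace=False), j] = 1

    y = F @ x                                                 # linear (counting) measurements
    print("number of pools touched by a faulty item:", int((y > 0).sum()))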


Robert Ziff, University of Michigan
“Algorithms for Percolation”
Abstract. Here we will discuss various algorithms used in studying percolation, including identifying clusters, finding thresholds, and finding crossing probabilities. The use of hull walks for crossing and for studying “pinch points” will be demonstrated. Cluster merging using the Newman-Ziff algorithm, and its use to find thresholds via crossing and wrapping probabilities, will be explained. This algorithm has also proved useful in the study of explosive percolation. Another powerful algorithm is rare-event splitting, useful for estimating very low-probability events such as the probability of having multiple crossing clusters on a long rectangular system.