
Deep Computation in Statistical Physics - Abstracts


Deep Computation in Statistical Physics Workshop
organized by Cris Moore (SFI), Jon Machta (University of Massachusetts, Amherst & SFI), and Stephan Mertens (Magdeburg & SFI)
Santa Fe Institute
August 8-10, 2013


Eli Ben-Naim, Los Alamos National Laboratory
Nontrivial Exponents in Record Statistics
Abstract. I will discuss the statistics of records for an ensemble of independent and identically distributed variables. I will show that first-passage probabilities associated with persistent configurations of extreme values decay algebraically with the number of variables. The decay exponents are nontrivial and can be obtained using analytic methods, as eigenvalues of integral or differential equations. I will describe in detail three problems involving superior records, inferior records, and incremental records.


Stefan Boettcher, Emory University
Renormalization Group for Quantum Walks With and Without Coins
Abstract. I will demonstrate how the renormalization group (RG) can be applied to gain insights into the asymptotic behavior of quantum walks (QW). The fundamental goal of RG in statistical physics is the classification of dynamical behaviors into universality classes based on general principles, such as symmetries, invariances, and conservation laws, independent of microscopic details. It is not obvious whether universality is a useful concept for QW, whether it can provide the same amount of insight and control, or even how to implement RG in this setting. Here, I present our first steps in that direction.


Misha Chertkov, Los Alamos National Laboratory
"Getting a Grip on the Grid: Physics and Computations in Electrical Power Systems"
Abstract. Today’s electric power grids, the largest engineered systems ever built, already demonstrate complex nonlinear dynamics where, e.g., localized collective effects of thousands of small consumer appliances may produce serious malfunctions of sections of the grid. These collective dynamics are not well understood and are expected to become more complex in tomorrow’s grids as consumer appliances become more intelligent and autonomous. Tomorrow’s grids will have to integrate the intermittent power from wind and solar farms whose fluctuating outputs create far more complex perturbations. Guarding against the worst of those perturbations will require taking protective measures based on ideas from probability and statistical physics.

In this talk, aimed at applied mathematicians, physicists, and network scientists, we briefly review the history of electrical grids and then introduce a few of the physical, optimization, and control principles and phenomena in today's grids and those expected to play a major role in tomorrow's grids.

We illustrate the new science of the grid with two examples: (a) an efficient and highly scalable Chance-Constrained Optimal Power Flow algorithm providing risk-aware control of the transmission system under the uncertainty associated with fluctuating renewables (wind farms); and (b) ODE and PDE modeling of the power distribution system, in particular explaining the effects of many induction motors and distributed photovoltaic generators on grid stability.
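
The chance-constrained algorithm itself is beyond a short sketch, but the linear "DC" power-flow approximation that underlies such transmission-system computations fits in a few lines. Below is a minimal sketch with hypothetical network data (a 3-bus example, not from the talk): net injections determine bus angles through the susceptance matrix, and angles determine line flows.

    import numpy as np

    # Toy 3-bus DC power flow. All network data here are hypothetical.
    # Lines: (from bus, to bus, susceptance b)
    lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 5.0)]
    n = 3
    P = np.array([1.0, -0.4, -0.6])   # net injections (generation - load), sums to 0

    # Build the bus susceptance matrix B (a weighted graph Laplacian)
    B = np.zeros((n, n))
    for i, j, b in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b

    # Fix bus 0 as the slack/reference (theta_0 = 0) and solve B theta = P
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

    # Line flow f_ij = b_ij * (theta_i - theta_j)
    for i, j, b in lines:
        print(f"flow {i}->{j}: {b * (theta[i] - theta[j]):+.3f}")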


Varsha Dani, University of New Mexico
"Power of Choice"
(Joint work with Josep Diaz, Tom Hayes and Cris Moore)
Abstract. Consider a semi-random Boolean formula with n variables and m clauses, in which each clause is chosen from two or more uniformly random clauses. The choice is made online, i.e. without knowledge of what choices will be available in the future. What impact, if any, can such choices have on the phase transition from satisfiable to unsatisfiable formulas at a critical clause density alpha = m/n? I will speak about the power of choice to alter the behavior of this and other random processes.
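
A minimal sketch of the online choice model follows. The selection rule here, keeping the candidate clause with the most positive literals, is a hypothetical example rule chosen only for illustration, not the rule analyzed in the talk.

    import random

    def random_clause(n, k=3):
        """A uniformly random k-clause: k distinct variables, random signs.
        A clause is a list of (variable, is_positive) pairs."""
        vars_ = random.sample(range(n), k)
        return [(v, random.choice([True, False])) for v in vars_]

    def build_formula(n, m, choices=2):
        """Online power of choice: at each step we see `choices` random
        clauses and must keep exactly one, without knowing future offers.
        The rule below (maximize positive literals) is a hypothetical
        example; it biases toward formulas satisfied by the all-true
        assignment."""
        formula = []
        for _ in range(m):
            candidates = [random_clause(n) for _ in range(choices)]
            best = max(candidates, key=lambda c: sum(pos for _, pos in c))
            formula.append(best)
        return formula

    f = build_formula(n=100, m=420)   # clause density alpha = m/n = 4.2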


Brian Hayes, American Scientist
"First Links in the Markov Chain"
Abstract. A century ago the Russian mathematician A. A. Markov was busy counting the vowels and consonants in Alexander Pushkin’s poem Eugene Onegin. The result of this peculiar pastime was a tool now in use everywhere in computational science and simulation: the Markov chain. How did it come about that Markov’s important technique had its roots in the lexical analysis of poetry? The answer to this question involves a tale of politics, institutional rivalry, personal feuds, and even a religious cult, but also a deep inquiry into principles of probability. This nontechnical talk will explore the history of Markov’s ideas, with a brief look at their later evolution, including the invention of Markov chain Monte Carlo about 60 years ago quite near Santa Fe.


Thomas Hayes, University of New Mexico
"Better Coupling via Alternative Distance Metrics, in the Hard Disk Model"
(Joint work with Cris Moore)
Abstract. We consider a straightforward coupling argument that proves a rigorous lower bound on the critical density threshold for the hard disk model. When the density parameter is small enough, one can prove that coupling two copies of a Glauber dynamics for this process (which relocates a single randomly chosen disk to a new random location) reduces the expected Hamming distance between them. This implies fast mixing for the Glauber dynamics, as well as Gibbs uniqueness for the hard disk model.

One unappealing aspect of the above approach is that the Hamming distance is not robust: even a very small random perturbation to either of the configurations would increase the Hamming distance to its maximum possible value. We show that, by choosing a more robust metric, combined with a slightly more sophisticated coupling, an improvement can be made over the more straightforward approach.
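
The single-disk Glauber move described above is easy to sketch (parameters hypothetical; wall effects ignored for brevity):

    import random, math

    def overlaps(pos, disks, sigma, skip=None):
        """True if a disk of diameter sigma at pos overlaps any other disk."""
        for k, (x, y) in enumerate(disks):
            if k == skip:
                continue
            if math.hypot(pos[0] - x, pos[1] - y) < sigma:
                return True
        return False

    def glauber_step(disks, sigma, L):
        """Relocate one randomly chosen disk to a uniform random location
        in the box [0, L]^2, rejecting the move if it would create an
        overlap (the hard-core constraint)."""
        i = random.randrange(len(disks))
        new = (random.uniform(0, L), random.uniform(0, L))
        if not overlaps(new, disks, sigma, skip=i):
            disks[i] = new

To couple two copies, one runs both chains with the same chosen disk index and the same proposed location; at low density, matched disks tend to stay matched, and the expected Hamming distance contracts.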


Koji Hukushima, University of Tokyo
“An Irreversible Monte Carlo Method for Some Spin Systems”
Abstract. An irreversible Markov chain Monte Carlo (MCMC) method based on a skew detailed balance condition is discussed. It is found analytically in the Ising chain that a specific type of transition probability changes the value of the dynamical exponent significantly. By applying the method to ferromagnetic Ising models in two and three dimensions, the relaxation dynamics of the order parameter and the dynamical exponent are studied numerically, in comparison with the conventional reversible MCMC method obeying detailed balance. We also examine how the efficiency of the exchange Monte Carlo method is affected by the combined use of the irreversible MCMC method.
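
The talk's construction is specific to spin systems; as a generic illustration of how a chain obeying skew detailed balance works, here is the textbook lifting of a Metropolis walk on a ring (a sketch, not the talk's algorithm; log_pi, the log of the target weight, is caller-supplied). The chain is irreversible yet still converges to the target distribution.

    import math, random

    def lifted_ring_sampler(log_pi, n, steps, seed=0):
        """Irreversible (lifted) Metropolis sampler for a distribution pi
        on Z_n. The state is (site i, direction eps). Move i -> i + eps
        with the usual Metropolis probability; on rejection, reverse eps
        instead of staying put. Detailed balance is broken, but pi is
        still the stationary marginal (skew detailed balance)."""
        rng = random.Random(seed)
        i, eps = 0, 1
        counts = [0] * n
        for _ in range(steps):
            j = (i + eps) % n
            if math.log(rng.random()) < log_pi(j) - log_pi(i):
                i = j                # accepted: keep moving the same way
            else:
                eps = -eps           # rejected: flip the lifting variable
            counts[i] += 1
        return [c / steps for c in counts]

    # Example: pi(i) proportional to exp(-(i - 5)**2 / 4) on a 16-site ring
    freq = lifted_ring_sampler(lambda i: -(i - 5) ** 2 / 4.0, 16, 200000)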


Helmut G. Katzgraber, Texas A&M University
"Static and Dynamic Properties of Spin Glasses as seen through the Parallel Tempering Telescope"
(Work done in collaboration with B. Yucesoy and J. Machta)
Abstract. Despite ongoing research spanning several decades, the low-temperature structure of short-range spin glasses remains controversial. In recent years, however, the advent of fast and cost-effective computers, as well as the use of efficient algorithms such as Exchange (Parallel Tempering) Monte Carlo, has enabled researchers to probe these paradigmatic systems deep into the spin-glass phase. In this talk results of a large-scale numerical study of the three-dimensional Ising spin glass are presented. In particular, we show a striking correlation between the efficiency of the Exchange Monte Carlo method and the roughness of the free-energy landscape. Furthermore, by performing a careful statistical analysis of several thousand independent disorder realizations and using an observable that detects peaks in the overlap distribution, we show that the Edwards-Anderson spin glass does not appear to be mean-field like at low temperatures.
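
For reference, the exchange move at the heart of parallel tempering is compact in Metropolis form (a generic sketch, not the study's production code; betas is the sorted inverse-temperature ladder):

    import math, random

    def attempt_swap(betas, energies, configs, k, rng=random):
        """Parallel-tempering exchange move between replicas k and k+1.
        Accept with probability min(1, exp(dbeta * dE)); on success the
        configurations (and their energies) trade temperatures."""
        dbeta = betas[k + 1] - betas[k]
        dE = energies[k + 1] - energies[k]
        if math.log(rng.random()) < dbeta * dE:
            configs[k], configs[k + 1] = configs[k + 1], configs[k]
            energies[k], energies[k + 1] = energies[k + 1], energies[k]
            return True   # the acceptance rate is a landscape diagnostic
        return False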


Werner Krauth, Ecole Normale Superieure
"Rejection-free, Irreversible, and Infinitesimal (!) Monte Carlo Algorithms"
(Collaboration with M. Michel and S. C. Kapfer)
Abstract. I show how the lifting principle and a new pairwise decomposition of the Metropolis method allow one to design a class of powerful rejection-free Markov-chain Monte Carlo algorithms that break detailed balance yet satisfy global balance. These algorithms generalize the recent hard-sphere event-chain Monte Carlo method. As an application, I demonstrate considerable speed-up of the event-chain algorithm for soft spheres and for Lennard-Jones systems, using a decomposition of the potential into its positive and negative parts. I sketch extensions of the algorithm to the simulation of quantum systems and to general problems in computer science, such as the problem of sampling a polytope.
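
The talk covers general potentials; the original event-chain move is easiest to state for hard particles in one dimension. A minimal sketch (all parameters hypothetical): a chosen rod moves rightward until it collides, the remaining displacement budget passes to the rod it hit, and no move is ever rejected.

    import random

    def event_chain_step(x, sigma, L, ell, rng=random):
        """One event-chain move for hard rods of length sigma on a ring of
        circumference L. Positions x are in increasing cyclic order. A
        total displacement budget ell is spent moving rods to the right;
        whenever a rod hits its neighbor, the remaining budget passes to
        that neighbor. No proposal is ever rejected."""
        n = len(x)
        i = rng.randrange(n)
        remaining = ell
        while remaining > 0:
            nxt = (i + 1) % n
            gap = (x[nxt] - x[i]) % L - sigma    # free space ahead of rod i
            step = min(max(gap, 0.0), remaining)
            x[i] = (x[i] + step) % L
            remaining -= step
            i = nxt                 # collision: the motion lifts to the next rod
        return x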


Jon Machta, University of Massachusetts, Amherst and SFI
"Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering"
Abstract. Parallel tempering and population annealing are both effective methods for sampling the equilibrium distribution of systems with rough free energy landscapes. Both methods overcome the exponential slowing associated with high free energy barriers. Parallel tempering, also known as replica exchange Monte Carlo, has found wide application in physics and chemistry, whereas population annealing is less well known. While they share some similarities, population annealing and parallel tempering are in distinct classes of algorithms: population annealing is an example of a sequential Monte Carlo method and is closely related to simulated annealing, whereas parallel tempering is a multicanonical Markov chain Monte Carlo method. In this talk we compare and contrast these algorithms and suggest that population annealing may have useful applications.
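
A minimal population-annealing loop, with the energy function, MCMC sweep, and initializer supplied by the caller (a generic sketch, not the talk's implementation; configurations are assumed to be lists, e.g. spin lists):

    import math, random

    def population_annealing(energy, mcmc_sweep, init_config, R, betas, rng=random):
        """Population annealing: carry a population of R replicas from high
        to low temperature (betas increasing). At each temperature step,
        resample replicas with weights exp(-dbeta * E), the sequential
        Monte Carlo reweighting, then apply an MCMC sweep at the new
        temperature to re-equilibrate."""
        pop = [init_config(rng) for _ in range(R)]
        for b_old, b_new in zip(betas, betas[1:]):
            w = [math.exp(-(b_new - b_old) * energy(c)) for c in pop]
            idx = rng.choices(range(R), weights=w, k=R)   # multinomial resampling
            pop = [list(pop[i]) for i in idx]             # copy: parents may repeat
            for c in pop:
                mcmc_sweep(c, b_new, rng)                 # e.g. Metropolis sweeps
        return pop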


Alan Middleton, Syracuse University
"Spin Glasses on the Plane and Torus: How Far Can You Go With Polynomial Time Algorithms?"
Abstract. Finding spin glass ground states and partition functions is generally a hard problem. Given spin glasses on a plane or torus, however, polynomial time algorithms can be used to compute standard statistical quantities and to exactly sample states. I will give an overview of results for the 2D spin glass. One 2D spin glass is a potential proxy for the low temperature phase of the 3D spin glass. For some quantities, very large lattices are needed to see the thermodynamic limit. Future possibilities include the application of these algorithms to study dynamics at very long time scales.


Stephan Mertens, Otto-von-Guericke University and the Santa Fe Institute
"A Sublinear Algorithm for Percolation"
(This is joint work with Cris Moore)
Abstract. Percolating systems can be simulated in linear time and space, using for example the union-find algorithm of Newman and Ziff. We explore a connection between percolation and the minimum spanning tree to devise an algorithm that runs in sublinear time and space. The algorithm can be used to measure the percolation threshold in lattice and continuum percolation to high precision, especially in high dimensions.
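
The linear-time baseline is easy to sketch. Here is a minimal union-find sweep for bond percolation on an L x L periodic square lattice, tracking only the largest cluster (in the spirit of Newman-Ziff, though their algorithm records much more per step):

    import random

    def newman_ziff_sweep(L, rng=random):
        """Add bonds one at a time in random order, merging clusters with
        union-find, and record the largest cluster size after each
        addition."""
        n = L * L
        parent = list(range(n))
        size = [1] * n

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]   # path halving
                a = parent[a]
            return a

        bonds = [(i, (i + 1) % L + L * (i // L)) for i in range(n)]   # right
        bonds += [(i, (i + L) % n) for i in range(n)]                 # down
        rng.shuffle(bonds)

        largest, history = 1, []
        for a, b in bonds:
            ra, rb = find(a), find(b)
            if ra != rb:
                if size[ra] < size[rb]:
                    ra, rb = rb, ra
                parent[rb] = ra                 # union by size
                size[ra] += size[rb]
                largest = max(largest, size[ra])
            history.append(largest)
        return history                          # largest cluster vs. bonds added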


Cris Moore, Santa Fe Institute
"What's the Cavity Method, and What Can It Tell Us About Social Networks?"
(This is joint work with Florent Krzakala, Lenka Zdeborova, Pan Zhang, and Aurelien Decelle)
Abstract. A key problem in the study of networks is community detection, or equivalently labeling vertices in a way that helps explain the presence or absence of edges between them. I will discuss how using Belief Propagation, a.k.a. the Cavity Method, on this problem reveals an interesting kind of phase transition, where communities become impossible to detect (even in theory) unless we know a certain fraction of correct labels to start with.
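
A stripped-down version of the belief propagation update for the symmetric two-group block model conveys the idea. This sketch keeps only the messages along edges and drops the external-field term that enforces balanced group sizes, so it is an approximation to the full algorithm discussed in the talk; c_in and c_out denote the mean numbers of within-group and between-group neighbors.

    import random
    from collections import defaultdict

    def bp_two_groups(edges, c_in, c_out, iters=100, rng=random):
        """Simplified BP for the symmetric 2-group stochastic block model.
        msg[(i, j)] = probability that i is in group 0, ignoring j."""
        nbrs = defaultdict(list)
        for a, b in edges:
            nbrs[a].append(b); nbrs[b].append(a)
        msg = {(i, j): rng.random() for i in nbrs for j in nbrs[i]}
        for _ in range(iters):
            for (i, j) in list(msg):
                p0 = p1 = 1.0
                for k in nbrs[i]:
                    if k == j:
                        continue
                    m = msg[(k, i)]
                    p0 *= m * c_in + (1 - m) * c_out   # k's evidence if i in group 0
                    p1 *= m * c_out + (1 - m) * c_in   # ... if i in group 1
                msg[(i, j)] = p0 / (p0 + p1)
        belief = {}
        for i in nbrs:
            p0 = p1 = 1.0
            for k in nbrs[i]:
                m = msg[(k, i)]
                p0 *= m * c_in + (1 - m) * c_out
                p1 *= m * c_out + (1 - m) * c_in
            belief[i] = p0 / (p0 + p1)
        return belief   # below the transition, beliefs stay uninformative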


Mark Newman, University of Michigan
"Dynamics on Networks and Message Passing Algorithms"
Abstract. There has been a lot of interest in recent years in networked systems like social networks, the web, and the Internet, but most of it has focused on their structural properties, for which we have a well developed body of theory. Our ultimate interest in networks, however, is largely in understanding their dynamics, for which things are significantly more difficult. In this talk I'll discuss one promising approach for predicting the outcome of network dynamical processes using message passing algorithms, focusing particularly on contact processes, epidemiology, and percolation. I will show how one can develop a dynamic kind of message passing in which the messages take the form of temporal probability functions. The method allows one to calculate the outcome of some relatively complicated dynamical processes that were previously accessible only by simulation.
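
For ordinary static bond percolation the message-passing equations are especially compact. The sketch below implements that special case (not the temporal version described in the talk): u[(i, j)] is the probability that node j fails to reach the giant cluster through its neighbor i, and p is the bond occupation probability.

    from collections import defaultdict

    def percolation_mp(edges, p, iters=200):
        """Message passing for bond percolation on a (locally tree-like)
        network. Returns the expected giant-cluster fraction S."""
        nbrs = defaultdict(list)
        for a, b in edges:
            nbrs[a].append(b); nbrs[b].append(a)
        u = {(i, j): 0.5 for i in nbrs for j in nbrs[i]}
        for _ in range(iters):
            for (i, j) in list(u):
                prod = 1.0
                for k in nbrs[i]:
                    if k != j:
                        prod *= u[(k, i)]
                u[(i, j)] = 1 - p + p * prod
        S = 0.0
        for i in nbrs:
            prod = 1.0
            for k in nbrs[i]:
                prod *= u[(k, i)]
            S += 1 - prod          # probability node i is in the giant cluster
        return S / len(nbrs)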


Amanda Streib and Noah Streib, NIST
“A Stratified Sampling Approach to the Ising Model”
(Joint work with Isabel Beichl and Francis Sullivan)
Abstract. Approximation algorithms designed to estimate the partition function of the Ising model have been the subject of intense study for many years. Jerrum and Sinclair found the only known FPRAS (fully polynomial randomized approximation scheme) for this problem, yet the running time of their Markov chain is too large to be practical. In this talk, we introduce a new approximation method, which combines graph theory and heuristic sampling and aims at a practical method for estimating the partition function and other thermodynamic quantities.


Francis Sullivan, IDA/Center for Computing Sciences
"The Hard, Easy, Hard Phase Transition: The Case of A Particular Instance"
Abstract.

    You can't always get what you want
    But if you try sometimes...

Much published work on the Easy, Hard, Easy transition has focused on ensembles of problem instances. For a random instance of, say, 3-SAT, the expected computing time to a solution for a given algorithm depends on the parameters specified for the problem, such as the ratio of clauses to variables.

A slightly different point of view can be considered. When actually solving an instance of some NP-hard problem by some optimization and/or search method, one often observes a 'local' phase transition. Initially, a partial solution is generated quickly and easily. Then things get hard and some kind of backtracking is required. This is sometimes followed by another easy phase where the solution is rapidly completed. In some cases, the easy phase is followed by one or more additional hard phases. An individual faced with a problem to solve might want to know with certainty whether the problem actually is hard, and whether the easy part can be identified and done quickly.

We analyze this phenomenon for the case of Sudoku, where it is possible to give a polynomial-time algorithm that determines if the problem is hard, solves the easy part (which might be the whole problem), and identifies the hard part. The algorithm makes some novel and unexpected use of graph theory.
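
The graph-theoretic algorithm itself is the subject of the talk; purely to illustrate what an easy phase looks like, here is the simplest Sudoku propagation rule ("naked singles"): fill any cell whose row, column, and box leave exactly one candidate, and repeat until stuck. Whatever remains unfilled is, in this naive sense, the hard part.

    def propagate(grid):
        """Repeatedly fill Sudoku cells that have exactly one legal value.
        grid is a 9x9 list of lists with 0 for empty. Returns when no cell
        can be filled this way; remaining empties mark the hard part."""
        progress = True
        while progress:
            progress = False
            for r in range(9):
                for c in range(9):
                    if grid[r][c]:
                        continue
                    used = set(grid[r]) | {grid[i][c] for i in range(9)}
                    br, bc = 3 * (r // 3), 3 * (c // 3)
                    used |= {grid[i][j] for i in range(br, br + 3)
                                        for j in range(bc, bc + 3)}
                    cand = [v for v in range(1, 10) if v not in used]
                    if len(cand) == 1:
                        grid[r][c] = cand[0]
                        progress = True
        return grid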


Marija Vucelja, New York University
"Sampling with Non-Reversible Markov Chains"
Abstract. Markov chain Monte Carlo (MCMC) algorithms are ubiquitous across different fields of physics. They are an invaluable numerical tool for studies of complex many-body problems, critical phenomena, etc. MCMC algorithms are used to create realizations of the desired physical system in its steady state. Most implementations of MCMC use Markov chains that obey detailed balance, even though this is not a necessary requirement for converging to a steady state. I plan to give an overview of several examples that utilize non-reversible Markov chains, where violating detailed balance has improved the convergence rate. Finally, I will pose some open questions and discuss attempts to use non-reversible Markov chains to accelerate the relaxation of the 2D Ising model and of spin glasses to their equilibria.


Pan Zhang, Santa Fe Institute
"Message Passing and Pooling Strategies for Non-Adaptive Linear Group Testing"
Abstract. We study non-adaptive pooling strategies for the detection of rare faulty items. Given a sparse binary N-dimensional signal x, how does one construct a sparse binary M x N pooling matrix F such that the signal can be reconstructed from the smallest possible number M of measurements y = Fx? We show that a very low number of measurements suffices for a random spatially coupled design of the pooling matrix F. Our design might find application in genetic screening or compressed genotyping. We show that our results are robust with respect to uncertainty in the matrix F when some of its elements are in error.
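
The spatially coupled design and its message-passing reconstruction are the substance of the talk; as a simple baseline, here is a plain random pooling design with the elementary COMP decoder, which uses only the fact that every item appearing in a pool that measures zero must be clean.

    import random

    def random_pooling_matrix(M, N, k=5, rng=random):
        """A sparse binary pooling design: each of the N items joins k of
        the M pools at random (a plain random design, not the spatially
        coupled construction from the talk)."""
        F = [[0] * N for _ in range(M)]
        for item in range(N):
            for pool in rng.sample(range(M), k):
                F[pool][item] = 1
        return F

    def measure(F, x):
        """Linear measurements y = F x (x is the 0/1 fault indicator)."""
        return [sum(f * xi for f, xi in zip(row, x)) for row in F]

    def comp_decode(F, y):
        """Baseline decoder: any item in a pool with y = 0 is clean; all
        other items stay suspects. Exact recovery needs a better decoder
        (e.g. message passing) or more measurements."""
        N = len(F[0])
        suspect = [1] * N
        for row, yi in zip(F, y):
            if yi == 0:
                for item, f in enumerate(row):
                    if f:
                        suspect[item] = 0
        return suspect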


Robert Ziff, University of Michigan
“Algorithms for Percolation”
Abstract. Here we will discuss various algorithms used in studying percolation, including identifying clusters, finding thresholds, and computing crossing probabilities. The use of hull walks for crossing and for studying “pinch points” will be demonstrated. Cluster merging using the Newman-Ziff algorithm, and its use to find thresholds via crossing and wrapping probabilities, will be explained. This algorithm has also proved useful in the study of explosive percolation. Another powerful algorithm is rare-event splitting, useful for estimating very low probability events such as the probability of having multiple crossing clusters on a long rectangular system.