CSSS 2009 Santa Fe-Information theory tutorial
Notes from Rosemary
Notes from watson
I left some comments/follow-up on the blog. Let's do any further follow up discussions on this page. watson
Notes from Steve
Download my slides here: MaxEntSteveTutorial.pdf.
Roderick Dewar's slides from the Canberra Summer School are available from the links at the summer school program page.
A couple of Dewar's papers on arXiv (they may also have been published; I haven't checked):
- R.C. Dewar, Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states, arXiv
- R.C. Dewar & A. Porte, Statistical mechanics unifies different ecological patterns, arXiv
Discussions
Steven Lade: Well, nobody responded to my request, so perhaps I'll do it myself :) I learnt a little at another summer school, so maybe I'm the best qualified; I was just hoping I wasn't. :)
Maximum entropy comes out of information theory (which I'm no expert in, so if there's someone knowledgeable to help me there, that would be great). It states simply that the most likely state of a system is the one that maximises its (information-theoretic) entropy, subject to whatever constraints you know about the system (see the worked sketch below). The entropy of statistical mechanics comes naturally from it: indeed, stat mech can be viewed as a subset of information theory from this perspective. But you can apply it to any sort of system, not just physics. It's the closest thing to a 'theory of complex systems' that I've seen!
Tentative time: 7PM, Monday June 15. Add your name if interested.
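A minimal worked sketch of the Jaynes/Lagrange-multiplier derivation mentioned above, assuming discrete states i with probabilities p_i and a single known average \langle E \rangle = \sum_i p_i E_i: maximise the Shannon entropy

    H = -\sum_i p_i \ln p_i

subject to \sum_i p_i = 1 and \sum_i p_i E_i = \langle E \rangle. Setting the derivative of H - \lambda_0 \left( \sum_i p_i - 1 \right) - \beta \left( \sum_i p_i E_i - \langle E \rangle \right) with respect to each p_i to zero gives

    p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},

i.e. the Boltzmann distribution of statistical mechanics. The machinery is indeed just Lagrange-multiplier optimisation; Jaynes's point is the interpretation: this is the least-biased distribution consistent with the stated constraints.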
- I am interested and will happily wear the skeptic hat: essentially if the entropy is sensitive to your parametrization, it loses much of its meaning. I think this is why some people (such as Amari) like to do "coordinate-free statistics". -Gustavo Lacerda
- I'm no expert, but I do have information theory basics under my belt... and more importantly, I happen to have some introductory info. theo. latex slides for a talk I recently gave. So I can offer help with the background, and then sit back & listen to the higher level, post-Shannon applications :) -Rosemary Braun
- On edit: I just noticed Lucas & Roozbeh requested an info theo tutorial -- shall we just roll it all together? -Rosemary Braun
- Lucas Lacasa Great, I'm in. Steve, with the MaxEnt stuff do you mean Jaynes? I've seen Jaynes's formalism two or three times but never saw how it differs from a mere Lagrange-multiplier optimization, so it would be great to hear about that. Rosemary, I would also be glad to learn some information theory stuff. I know the ultra-basics (Shannon entropy, mutual information, and not much more...).
Angela Onslow: I'd like to come along to this
Roozbeh Daneshvar: FYI, we have an ABM tutorial on Monday, June 15th, starting at 04:30 PM. I assume it will be finished by 07:00 PM.
Kate Behrman: I also do not claim to be an expert, but have gone over a lot of the theory that is commonly used in biology. I would like to know what physicists think. Count me in!
Steven Lade: Rosemary, would be great if you could start with information theory. Lucas, yes I do mean Jaynes. Lagrange multiplier optimisation is really all that happens, so maybe it is all the same -- I confess I don't know much stat mech either! This session is going to be very much everyone contributing I think. And sounds like you may know more than me, Kate.
Allison Shaw: I'd like to come -- I've had the basics of information theory, but could definitely use a refresher and I'd love to learn the more advanced stuff.
watson: Sounds great. I know nothing about entropy maximization (other than the basic statement of the 2nd law of thermodynamics). But I would be glad to help out Rosemary with explaining some info theory stuff... Rosemary, perhaps we can chat before Monday. I'm thinking: entropy, conditional entropy, mutual information and possibly some topics beyond.
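A minimal Python sketch of the quantities watson lists (entropy, conditional entropy, mutual information), computed for a small made-up joint distribution; the numbers are placeholders for illustration, not anything from the tutorial:

  import numpy as np

  # Hypothetical joint distribution p(x, y) over two binary variables X and Y.
  p_xy = np.array([[0.4, 0.1],
                   [0.1, 0.4]])

  def entropy(p):
      """Shannon entropy in bits of a probability array (zero entries contribute 0)."""
      p = p[p > 0]
      return -np.sum(p * np.log2(p))

  H_XY = entropy(p_xy)                 # joint entropy H(X,Y)
  H_X = entropy(p_xy.sum(axis=1))      # marginal entropy H(X)
  H_Y = entropy(p_xy.sum(axis=0))      # marginal entropy H(Y)
  H_Y_given_X = H_XY - H_X             # conditional entropy H(Y|X)
  I_XY = H_X + H_Y - H_XY              # mutual information I(X;Y)

  print(H_X, H_Y, H_Y_given_X, I_XY)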
Elliot Martin: There are some differences between maximum entropy in thermodynamics, and what Jaynes does. I would be happy to help out if you want, but I just know a little bit about this stuff.
Massimo Mastrangeli: I am definitely not an expert, anyway I had several bits of information theory throughout time, so I'd like to wrap it up somehow. I'll join you.
Lara Danilova-Burdess Count me in
Matt McMahon: I'm in.
Erin Taylor: Thanks Steven! I'll be there.
Liliana Salvador: Hey! This is great! I would love to join. I studied some information theory some time ago and its relationship with Kolmogorov Complexity (KC). It was a long time ago, but this is a good excuse to revise it. Any interest in hearing about this? Maybe I could give some basic introduction about KC and its relationship with information theory and then we could discuss the advantages and disadvantages of each method to measure information. What do you think?
Lucas Lacasa I'm interested in Kolmogorov, Chaitin and all these crazy people. So great!
Steven Lade OK, so to summarise, we have (correct me if I've missed something): Rosemary and Watson on introduction to information theory, Liliana on Kolmogorov Complexity, me on MaxEnt. (Plus a couple of people like Elliot and Kate pitching in where relevant.) To these presenters: in planning what you intend to discuss please keep in mind we have a lot to cover in this tutorial at the end of a long day and I'm sure there'll be lots of questions!
Steven Lade Final update: We've trimmed the contributor list to Rosemary, Watson and me. Liliana and Gustavo will schedule another time.