Learning & the aging brain
From Santa Fe Institute Events Wiki
CSSS Santa Fe 2007
Concept
Tutorial meeting: Sunday 4pm in the lecture room. Bring a laptop if you've got one - download & install JavaNNS.
Next group meeting: Sunday, June 24, 5pm in the lecture room
We can mimic the effect of aging on the human brain by deliberately corrupting neural network models of human learning (e.g. random deletion of nodes/synapses).
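For example, a minimal MATLAB sketch of such a lesioning step (just an illustration - the Lesion helper is hypothetical, and it assumes the model is stored as an N x N weight matrix W):

 function W = Lesion(W, fracSynapses, fracNodes)
 % Randomly corrupt a network stored as an N x N weight matrix W.
 % fracSynapses: fraction of existing connections to delete (set to 0)
 % fracNodes:    fraction of units to delete (zero their rows & columns)
 N = size(W,1);
 % Delete synapses: pick a random subset of the nonzero weights
 idx = find(W ~= 0);
 W(idx(rand(size(idx)) < fracSynapses)) = 0;
 % Delete nodes: wipe all incoming & outgoing weights of the chosen units
 dead = rand(N,1) < fracNodes;
 W(dead,:) = 0;
 W(:,dead) = 0;

e.g. W = Lesion(W, 0.1, 0) removes roughly 10% of the synapses while leaving every unit in place; W = Lesion(W, 0, 0.1) kills roughly 10% of the units outright.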
Possible directions include: exploring compensatory mechanisms for neuronal loss (related to self-healing networks?), modeling specific age-related diseases - e.g. Alzheimer's, Parkinson's (chaos & tremors?).
Please feel free to add questions, theories, suggestions.
Who's interested
- Kristen Fortney
- Gregor Obernosterer
- Amitabh Trehan
- Vikas Shah
- Biljana Petreska
- Amelie Veron
- Saleha Habibullah
- Yossi Yovel
- jd
- Natasha Qaisar
- Mike Wojnowicz
- Juergen Pahle
Questions to answer
What sorts of age defects should be incorporated into the network?
What type of neural net should be used as a model? (backprop/attractor/etc)
Tools
Media:PatternGen.doc
Directions: Pattern generator for JavaNNS. Change the file extension to .m & put it in your MATLAB work directory. From the MATLAB command window, type PatternGen(a,b,c), where a = # input units, b = # output units, c = # patterns you want to create. A .pat file will appear in your work directory.
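If you want a rough idea of what the generator does before downloading the file, here is a minimal sketch (not the actual PatternGen code; it assumes random binary patterns, the standard SNNS/JavaNNS pattern-file header, and a hard-coded patterns.pat output name):

 function PatternGen(a, b, c)
 % Write c random binary patterns with a inputs and b outputs
 % to an SNNS/JavaNNS pattern file in the current directory.
 fid = fopen('patterns.pat', 'w');
 fprintf(fid, 'SNNS pattern definition file V3.2\n');
 fprintf(fid, 'generated at %s\n\n', datestr(now));
 fprintf(fid, 'No. of patterns : %d\n', c);
 fprintf(fid, 'No. of input units : %d\n', a);
 fprintf(fid, 'No. of output units : %d\n\n', b);
 for p = 1:c
     fprintf(fid, '# Input pattern %d:\n', p);
     fprintf(fid, '%d ', round(rand(1,a)));  fprintf(fid, '\n');
     fprintf(fid, '# Output pattern %d:\n', p);
     fprintf(fid, '%d ', round(rand(1,b)));  fprintf(fid, '\n');
 end
 fclose(fid);

For example, PatternGen(3,2,10) would produce a patterns.pat file containing 10 random 3-input/2-output patterns.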
Media:MGen.doc
Directions: Creates a weight matrix for the net generator to use. Change the file extension to .m & put it in your MATLAB work directory. Open the m-file & alter the top 2 lines of code to reflect the kind of network you want to create (e.g. if you want a regularly-connected, 3-layer 3-6-2 network, set Neurons = [3,6,2] and flips = 0). Run this program right before running the net generator.
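Roughly what such a script does (again, not the actual MGen code; this sketch assumes a 0/1 connectivity matrix and that each "flip" moves one existing connection to a random target):

 % MGen-style sketch: build a layered connectivity matrix, then
 % randomly rewire some connections ("flips") to create shortcuts.
 Neurons = [3 6 2];                 % units per layer, e.g. a 3-6-2 net
 flips   = 0;                       % number of random rewirings
 N     = sum(Neurons);
 last  = cumsum(Neurons);           % last unit index of each layer
 first = last - Neurons + 1;        % first unit index of each layer
 W = zeros(N);                      % W(i,j) = 1 if unit i projects to unit j
 for l = 1:length(Neurons)-1        % fully connect each layer to the next
     W(first(l):last(l), first(l+1):last(l+1)) = 1;
 end
 for f = 1:flips                    % move one existing connection to a random target
     [src, dst] = find(W);
     k = ceil(rand * length(src));
     W(src(k), dst(k)) = 0;
     W(src(k), ceil(rand * N)) = 1; % may create shortcuts across layers (or self-loops)
 end

Setting flips > 0 turns some of the regular layer-to-layer connections into random long-range ones, in the spirit of Watts & Strogatz rewiring.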
Directions
1. How Do Small-World Properties Affect Aging in the Brain?
We generally seek to compare small-world neural networks to standard versions, and we might begin by asking the very general question of whether small-worldized neural networks show desirable properties. For example, do they have stronger attack tolerance? (Although this may have already been shown in the Barabasi paper.)
However, to the extent that we are interested specifically in learning and aging, we might wonder how small-world properties influence specific developmental patterns in neural computation. Below I illustrate some specific developmental patterns with respect to concepts.
As people age, their concepts undergo “progressive disintegration.” Conceptual knowledge disintegrates from the bottom up: specific conceptual knowledge is degraded, whereas general conceptual knowledge is preserved. The pathological extreme of this is called “semantic dementia.” For an elderly person with semantic dementia, an “ostrich” is just a “bird,” and a “rose” is just a “plant.” The counterintuitive consequence is that the person knows that robins have skin, but doesn’t know that robins sing.
(Conversely, as people grow, their concepts undergo “progressive differentiation.” Conceptual knowledge is built from the top-down: general conceptual knowledge is acquired faster than specific conceptual knowledge. For example, infants know that planes are different than birds before knowing that dogs are different than fish.)
We might ask, then, how this pattern of progressive disintegration (and perhaps differentiation) is affected by small-world networks. How do small-world networks and standard networks compare in accounting for these developmental trends? One benefit of the comparison is that many kinds of results would count as a win. If small-world networks do a better job of mimicking human data, then we have very nuanced evidence for the growing thesis that the mind is organized as a small world (for lexicon evidence, see Cancho and Sole, 2001; for fMRI evidence, see Eguiluz et al., 2005). If small-world networks are more robust to damage than standard models, then we have evidence that small-world organizational properties can help prevent deterioration with age - and perhaps that inducing them (physiologically? through experiential training of long-distance cross-modular connections?) can help to preserve or restore conceptual integrity. (Mike)
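One rough, purely structural way to get started on the attack-tolerance comparison (a sketch, assuming an N x N connectivity matrix W such as the one built by the MGen-style code in the Tools section; the learning-based version would do the same lesioning on trained nets in JavaNNS):

 % Structural attack-tolerance sketch: how does the mean shortest path
 % length change when a fraction of the units is deleted at random?
 N = size(W,1);
 dead = rand(N,1) < 0.2;            % delete ~20% of the units
 W(dead,:) = 0;  W(:,dead) = 0;
 D = double(W ~= 0);                % Floyd-Warshall all-pairs shortest paths
 D(D == 0) = Inf;
 D(1:N+1:end) = 0;
 for k = 1:N
     D = min(D, repmat(D(:,k),1,N) + repmat(D(k,:),N,1));
 end
 reachable = isfinite(D) & D > 0;   % ignore self-paths and unreachable pairs
 fprintf('Mean shortest path after lesioning: %.2f\n', mean(D(reachable)));

Running this once with flips = 0 and once with a rewired net (same deletion fraction) gives a first, purely structural comparison between the regular and small-worldized topologies.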
Background reading
Modeling brain disease
Integrative neurocomputational perspectives on cognitive aging, neuromodulation, and representation. - Li and Sikstrom http://www.lucs.lu.se/People/Sverker.Sikstrom/NBR-Li-Sikstrom.pdf
Neuroengineering models of brain disease. - Finkel
http://www.mssm.edu/cnic/pdfs/FinkelNeuroengineering.pdf
Patterns of functional damage in neural network models of associative memory
http://www.cs.tau.ac.il/~ruppin/spat.pdf
Small worlds & the brain
Scale-Free Brain Functional Networks (Physical Review Letters, PRL 94, 018102, 2005)
Comments: Evidence that brain functionally behaves as a small-world network with scale-invariant properties
I will hand this out at the 6/17 meeting. (Mike)
Faster learning in small-world neural networks - Kroger, arXiv 2005
http://arxiv.org/abs/physics/0402076
Comments: The only small-worlds + backprop paper. Read this!!
Error and attack tolerance of complex networks - Albert, Jeong & Barabasi, 2000
(Nature, 406, 378-382)
Comments: This finding treads dangerously close to ours! (Mike)
The meaning of mammalian adult neurogenesis and the function of newly added neurons: the "small-world" network - Manev, Medical Hypotheses 2005
Comments: Kind of half-baked, but good for references & overview
Collective dynamics of 'small-world' networks - Duncan Watts & Steven Strogatz
http://www.nature.com/nature/journal/v393/n6684/abs/393440a0.html
Please contact me (Gregor) for a print version in case you don't have access to Nature
What is physiologic complexity and how does it change with aging and disease? - Goldberger, Peng, Lipsitz http://reylab.bidmc.harvard.edu/heartsongs/neurobiology-of-aging-2002-v23-23.pdf
Exploratory committees
General note: all should look at best neural network approach to their problem
- Demyelination: Biljana & Yossi
- Process to model these systems, time-delay in neural networks
- Biology of MS
- Normal aging: Kristen & Vikas & Amitabh
- Biological underpinning, general patterns of damage
- Parkinson's disease: jd & Kristen
- Alzheimer's disease: Gregor & Natasha & Vikas
- Boolean networks and self-healing: Amelie & Amitabh (connected with the Healing strategies for networks project)
- Social implications of aging: Saleha & Amelie
Tutorials
- General neural networks: Biljana
- Attractor neural networks: Kristen & Vikas
- Boolean networks: Amelie & Amitabh
- Biological basis of the diseases (once chosen)