Learning & the aging brain
==Concept==
'''Tutorial meeting:''' Sunday 4pm in the lecture room. Bring a laptop if you've got one - download & install JavaNNS. <br>
'''Next group meeting:''' Sunday, June 24, 5pm in the lecture room <br>
We can mimic the effect of aging on the human brain by deliberately corrupting neural network models of human learning (e.g. random deletion of nodes/synapses).
Possible directions include exploring compensatory mechanisms for neuronal loss (related to self-healing networks?) and modeling specific age-related diseases, e.g. Alzheimer's or Parkinson's (chaos & tremors?).
Please feel free to add questions, theories, suggestions.
==Who's interested==
*Kristen Fortney
*Gregor Obernosterer
*Amitabh Trehan
*Vikas Shah
*Biljana Petreska
*Amelie Veron
*Saleha Habibullah
*Yossi Yovel
*jd
*Natasha Qaisar
*Mike Wojnowicz
*Juergen Pahle
==Questions to answer==
What sorts of age defects should be incorporated into the network?
What type of neural net should be used as a model? (backprop/attractor/etc.)
==Tools==
* [[Media:PatternGen.doc]]
Directions: Pattern generator for JavaNNS. Change the file extension to .m & put it in your Matlab work directory. From the Matlab command window, type PatternGen(a,b,c), where a = # input units, b = # output units, and c = # patterns you want to create. A .pat file will appear in your work directory.
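For reference, here is a minimal sketch of what a generator like this might look like. This is not the posted PatternGen.m; the SNNS V3.2 pattern-file header and the choice of random binary patterns are our assumptions about what JavaNNS expects.
<pre>
function PatternGenSketch(a, b, c)
% Sketch of a JavaNNS/SNNS pattern generator: writes c random binary
% patterns with a input values and b output values to sketch.pat
% (assumed SNNS V3.2 file layout).
fid = fopen('sketch.pat', 'w');
fprintf(fid, 'SNNS pattern definition file V3.2\n');
fprintf(fid, 'generated at %s\n\n', datestr(now));
fprintf(fid, 'No. of patterns : %d\n', c);
fprintf(fid, 'No. of input units : %d\n', a);
fprintf(fid, 'No. of output units : %d\n\n', b);
for i = 1:c
    fprintf(fid, '# pattern %d\n', i);
    fprintf(fid, '%d ', round(rand(1, a)));  fprintf(fid, '\n');  % input line
    fprintf(fid, '%d ', round(rand(1, b)));  fprintf(fid, '\n');  % output line
end
fclose(fid);
</pre>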
* [[Media:MGen.doc]]
Directions: Creates the connection and weight matrices for the net generator to use. Change the file extension to .m & put it in your Matlab work directory. Open the m-file & alter the top two lines of code to reflect the kind of network you want to create (e.g. if you want a regularly-connected, 3-layer 3-6-2 network, set Neurons = [3,6,2] and flips = 0). Run this program right before running the net generator.
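A minimal sketch of how such a generator could work, assuming "flips" means randomly rewired connections (the real MGen.m may differ):
<pre>
% Assumed MGen-style script: builds connection matrix A and weight
% matrix B for the feedforward architecture given in Neurons, then
% rewires 'flips' randomly chosen edges to create shortcuts.
Neurons = [3, 6, 2];    % units per layer (a 3-6-2 network)
flips   = 0;            % number of random rewirings (0 = regular net)

N = sum(Neurons);
last  = cumsum(Neurons);            % index of the last unit in each layer
first = last - Neurons + 1;         % index of the first unit in each layer
A = zeros(N);
for L = 1:numel(Neurons) - 1        % fully connect layer L to layer L+1
    A(first(L):last(L), first(L+1):last(L+1)) = 1;
end
for f = 1:flips                     % move a random edge somewhere else
    [i, j] = find(A);
    e = ceil(rand * numel(i));
    A(i(e), j(e)) = 0;
    A(ceil(rand * N), ceil(rand * N)) = 1;
end
B = randn(N) .* A;                  % random weights on the existing edges
</pre>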
* [[Media:NetGen.doc]]
Directions: Makes a JavaNNS network based on a matrix. Change the file extension to .m & put it in your Matlab work directory. From the Matlab command window, type NetGen(A,B,Neurons,NetName), where A is the connections matrix to use (use A if you used MGen.m), B is the weights matrix to use (use B if you used MGen.m), Neurons is the vector containing the list of units per layer (use Neurons if you used MGen.m), and NetName is a string to be used as the net name for output files. The output files are a .net network file and two CSV files containing the connections matrix and the weights matrix, respectively.
* [[Media:NetToMatrix.doc]]
Directions: Takes a .net network file from JavaNNS and builds a connection matrix and a weight matrix of the network. Change the file extension to .m & put it in your Matlab work directory. From the Matlab command window, type NetToMatrix(Ne,IL1,NetName), where Ne is the total number of neurons in the network, IL1 is the number of neurons in the first (input) layer, and NetName is a string giving the base name of the network to open (without the .net part). The function creates two matrix variables in the main Matlab workspace: C is the matrix of connections and W is the matrix of weights.
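For example, to pull a network saved by JavaNNS back into Matlab (the file name mynet is hypothetical):
<pre>
% An 11-unit network with 3 input units, saved by JavaNNS as mynet.net:
NetToMatrix(11, 3, 'mynet');   % leaves C (connections) & W (weights) in the workspace
</pre>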
* [[Media:Damage.doc]]
Directions: Randomly damages a given connection matrix. Change the file extension to .m & put it in your Matlab work directory. From the Matlab command window, type Damage(A,p), where A is a connection matrix and p is the probability of damage. Run the net generator after running this program to create a damaged net.
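The documented behaviour fits in a few lines; a sketch under the assumption that each existing connection is deleted independently with probability p:
<pre>
function A = DamageSketch(A, p)
% Assumed implementation of the documented behaviour: delete each
% existing connection independently with probability p (random
% synapse loss, as in the aging-brain idea above).
A(A ~= 0 & rand(size(A)) < p) = 0;
</pre>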
* [[Media:Dijk.doc]]
Directions: Computes the shortest path between two nodes in a graph. Change the file extension to .m & put it in your Matlab work directory. From the Matlab command window, type Dijk(A,s,t), where A is a symmetric connection matrix (run MGen and set A = A + A'), s is the starting node, and t is the ending node. You can also input vectors for s & t; e.g. typing Dijk(A,1:N,1:N) will produce a matrix of all shortest paths in the network.
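Putting the tools together, a possible session for the damage experiments (this assumes Damage returns the damaged matrix and that Dijk marks unreachable pairs as Inf; the 10% damage level is arbitrary):
<pre>
MGen;                                % creates A (connections), B (weights), Neurons
NetGen(A, B, Neurons, 'healthy');    % export the healthy net for JavaNNS
A2 = Damage(A, 0.1);                 % knock out 10% of the connections
NetGen(A2, B, Neurons, 'damaged');   % export the damaged net for JavaNNS
N = sum(Neurons);
D = Dijk(A2 + A2', 1:N, 1:N);        % all-pairs shortest path lengths
L = mean(D(isfinite(D) & D > 0))     % characteristic path length of the damaged net
</pre>
Comparing L for the healthy and damaged versions of the same net is one quick way to quantify the structural effect of damage.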
==Directions==
* Do Small-World Networks Model Semantic Dementia?
As adults age, their concepts undergo "progressive disintegration." Conceptual knowledge disintegrates from the bottom up. In the extreme case of semantic dementia, an "ostrich" becomes a "bird," and a "rose" becomes a "plant." Specific conceptual knowledge degrades, while general conceptual knowledge persists: the person knows that robins can breathe, but doesn't know that robins can sing.

Conversely, as infants grow, their concepts undergo "progressive differentiation." Conceptual knowledge is built from the top down. An infant distinguishes plants from animals before distinguishing dogs from cats: general conceptual knowledge is learned before specific conceptual knowledge. Together with the aging evidence, this suggests that in semantic cognition, general knowledge is more stable than specific knowledge.

Can small-world networks model this developmental trajectory within semantic cognition, as standard backprop nets can? If small-world networks do a better job of mimicking humans, then we have an impressively nuanced demonstration supporting the "small-world mind" thesis (for lexicon evidence, see Ferrer i Cancho and Solé, 2001; for fMRI evidence, see Eguíluz et al., 2005). If small-world networks are more robust to damage than standard feedforward networks, our findings would suggest that small-world properties protect against conceptual deterioration with age. One possible application: inducing small-world properties (perhaps simply through creative experiences forging long-distance cross-modular connections) may help to prevent damage or restore conceptual integrity. (Mike)
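A self-contained sketch of the robustness comparison proposed here: build a regular ring lattice, rewire a fraction of its edges Watts-Strogatz style to get a small-world net, damage both at random, and compare characteristic path lengths. All parameter values below are illustrative choices, not a settled design.
<pre>
N = 100; k = 4;          % nodes and ring-lattice degree
beta = 0.1;              % Watts-Strogatz rewiring probability
p_damage = 0.2;          % fraction of connections destroyed

ring = zeros(N);                       % regular ring lattice
for d = 1:k/2
    ring = ring + circshift(eye(N), d) + circshift(eye(N), -d);
end
ring = double(ring > 0);

sw = ring;                             % rewire each edge with prob. beta
[i, j] = find(triu(sw));
for e = 1:numel(i)
    if rand < beta
        sw(i(e), j(e)) = 0;  sw(j(e), i(e)) = 0;
        t = ceil(rand * N);
        sw(i(e), t) = 1;  sw(t, i(e)) = 1;
    end
end

nets = {ring, sw};  names = {'regular', 'small-world'};
for n = 1:2
    A = nets{n};
    A(rand(N) < p_damage) = 0;  A = A & A';   % symmetric random damage
    total = 0;  pairs = 0;                    % mean path length via BFS
    for s = 1:N
        dist = inf(1, N);  dist(s) = 0;  q = s;
        while ~isempty(q)
            v = q(1);  q(1) = [];
            nb = find(A(v, :) & isinf(dist));
            dist(nb) = dist(v) + 1;
            q = [q, nb];
        end
        ok = isfinite(dist) & dist > 0;
        total = total + sum(dist(ok));  pairs = pairs + sum(ok);
    end
    fprintf('%s net after damage: mean path length %.2f\n', names{n}, total / pairs);
end
</pre>
If the rewired net keeps shorter paths at the same damage level, that is at least consistent with the robustness story above.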
==Background reading==

===Modeling brain disease===
Integrative neurocomputational perspectives on cognitive aging, neuromodulation, and representation - Li & Sikström <br>
http://www.lucs.lu.se/People/Sverker.Sikstrom/NBR-Li-Sikstrom.pdf
Neuroengineering models of brain disease - Finkel <br>
http://www.mssm.edu/cnic/pdfs/FinkelNeuroengineering.pdf
Patterns of functional damage in neural network models of associative memory - Ruppin & Reggia <br>
http://www.cs.tau.ac.il/~ruppin/spat.pdf
===Small worlds & the brain===
Scale-free brain functional networks - Eguíluz et al. (Physical Review Letters, PRL 94, 018102, 2005) <br>
''Comments: Evidence that the brain functionally behaves as a small-world network with scale-invariant properties'' <br>
''I will hand this out at the 6/17 meeting. (Mike)''
Faster learning in small-world neural networks - Kröger (arXiv, 2005) <br>
http://arxiv.org/abs/physics/0402076 <br>
''Comments: The only small-worlds + backprop paper we have found. Read this!!''
Error and attack tolerance of complex networks - Albert, Jeong & Barabási, 2000 <br>
(Nature, 406, 378-382) <br>
''Comments: Networks with power-law distributed connectivities are extremely robust to random damage. (Mike)''
The meaning of mammalian adult neurogenesis and the function of newly added neurons: the "small-world" network - Manev (Medical Hypotheses, 2005) <br>
''Comments: Kind of half-baked, but good for references & overview''
Collective dynamics of 'small-world' networks - Duncan Watts & Steven Strogatz (Nature, 393, 440-442, 1998) <br>
http://www.nature.com/nature/journal/v393/n6684/abs/393440a0.html <br>
''Please contact me (Gregor) for a print version in case you don't have access to Nature''
==Possibly related==
What is physiologic complexity and how does it change with aging and disease? - Goldberger, Peng & Lipsitz <br>
http://reylab.bidmc.harvard.edu/heartsongs/neurobiology-of-aging-2002-v23-23.pdf
==Exploratory committees==
General note: every committee should look into the best neural network approach to its problem.
* Demyelination: Biljana & Yossi
** Processes to model these systems; time delays in neural networks
** Biology of MS
* Normal aging: Kristen & Vikas & Amitabh
** Biological underpinnings; general patterns of damage
* Parkinson's disease: jd & Kristen
* Alzheimer's disease: Gregor & Natasha & Vikas
* Boolean networks and self-healing: Amelie & Amitabh (connected with the [[Healing strategies for networks]] project)
* Social implications of aging: Saleha & Amelie
* Semantic Dementia: Mike
==Tutorials==
* General neural networks: Biljana
* Attractor neural networks: Kristen & Vikas
* Boolean networks: Amelie & Amitabh
* Biological basis of the diseases (once chosen)