Collective Decision Making: From Neurons to Societies - SOS
From Santa Fe Institute Events Wiki
Self Organized Science
For the afternoon sessions we will allow everyone to self-organize around key topics related to the theme of collective decision making. Anyone who wants to is welcome to identify an issue or opportunity related to the theme that they care about exploring with others. Along with announcing and posting the idea to the group, the person must take responsibility for convening the group and ensuring that the group's discussions get reported on the wiki. Please note that everyone should feel free to vote with their feet: if you are not learning or contributing in the group you are in, feel free to move to another group at any time. (The idea of using self-organization for setting agendas was inspired by work in complex systems and has been encapsulated by the somewhat confusing term "Open Space Technology.")
Report from Turing Test for Superorganisms
- DeFroment, J. Miller, Morewedge, and Oppenheimer
- We began by debating whether one can design a Turing test for a group or superorganism. After some debate, we realized that it might be better to try to design tests of intelligence along a gradient rather than use humans as a benchmark. This would allow one to test a few different interesting questions:
1. Is this group as "smart" as a human?
2. Is group A "smarter" than group B?
3. Is this group "smarter" than an individual member of the group? Does it learn more quickly, demonstrate less noise, etc.?
4. Can the group compare alternatives along more than one dimension (e.g., amount of reward/probability of reward)?
5. How robust is the superorganism? That is, can it survive in novel environments?
6. Can the group deceive another group?
7. Does the group exhibit strategic behavior? How many levels of reasoning can the colony go to?
8. If the majority of the group holds a false belief (e.g., is deceived by an experimenter), can a minority of group members correct this false belief?
9. Can the group engage in generalization of learning?
- We also explored the idea of testing whether a superorganism can learn in the same way that an individual organism does.
1. Are there analogs of learning experiments on, say, conditioning pigeons, that can be applied to ant colonies? For example, can you condition an ant colony to novel stimuli (such as novel chemical signals) to behave in different ways?
2. Are there analogies between types of ant tasks and different kinds of physiological systems (patrollers as sense organs, foragers as hands, ??? as short-term and long-term memory)? Is there transactive memory in a colony (i.e., is memory distributed among individual members)?
3. Is there a collective reward structure? In other words, if one member is rewarded for another member's behavior, does the non-rewarded member continue to perform that behavior?
4. Does the colony use some kind of physical memory--do interior properties of a colony reflect the outside food sources (e.g., location of midden piles)?
5. Is there cultural learning? If you remove the members of the group that learned a specific behavior first-hand (e.g., an association between a reward and a novel pheromone), does the rest of the group still retain the association?
- Colony "hard" problems. If colonies work by having individuals with simple sets of interacting programs, it would be insightful to find circumstances under which those programs break down.
Report from Reliability of Collective Decision Making
- Participants: Iain Couzin, Nigel Franks, Michael Mauboussin, Peter Miller, Kevin Zollman
This group was interested in discussing how reliable collective decision making groups are, and if possible what features appeared to support that reliability. In the discussion several interesting points were noted.
- Collective decision making groups (both non-human animal and human) are often very effective in "regular environments". However, this reliability doesn't always extend to rare or manipulated environments: ants can be made to run around in circles, fish can be made to swim directly toward a predator, markets can over-adjust, etc.
- In most situations there seems to be a trade-off between speed and reliability. Obviously this is an optimization problem where the right solution depends on features of the environment. There was some interest in comparing how well these systems do against more computationally complex optimal algorithms.
- Are they well tuned to a small set of environments, or are they robust to a relatively wide range of environmental circumstances?
- Variance can often have an important effect on that reliability. For example, individual ants look for different nesting sites, and different bees may give different assessments of the same site. This variance might be the result of individuals having high variance in their own assessments, or of largely heterogeneous populations whose individuals each have low variance.
- There was significant interest in determining if the variance was the result of variance within individuals or heterogeneous groups.
- In either case, understanding the mechanisms which produce and sustain this variation is an interesting topic of study.
- Given features of the environment, it would be interesting to know how "well tuned" this variation is. In some situations the variance appears to change on the basis of environmental features. So groups (or individuals) tune their search algorithm to the environment.
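The within-individual vs. between-individual question can be made precise with the law of total variance. A minimal sketch, assuming a hypothetical data layout in which each individual makes the same number of repeated assessments of the same site (all numbers here are made up):

```python
def variance_components(assessments):
    """Split assessment variance into a within-individual part (noisy
    individuals) and a between-individual part (heterogeneous population).
    Assumes each individual contributes equally many repeat assessments."""
    means = {i: sum(v) / len(v) for i, v in assessments.items()}
    grand = sum(means.values()) / len(means)
    # Average, over individuals, of each individual's own variance:
    within = sum(sum((x - means[i]) ** 2 for x in v) / len(v)
                 for i, v in assessments.items()) / len(assessments)
    # Variance of the individuals' mean assessments around the grand mean:
    between = sum((m - grand) ** 2 for m in means.values()) / len(means)
    return within, between   # total variance = within + between

# Two hypothetical ants, two repeat assessments each:
w, b = variance_components({"ant A": [1, 3], "ant B": [5, 7]})
```

A noisy-individuals explanation shows up as a large `within` component; a heterogeneous-population explanation shows up as a large `between` component.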
Report from Rule Sets in Human Collective Decision Making
- Tom Seeley, Frank Bryan, Kevin Passino, Ed Wilson, Nina Federoff
The New England Town Meeting (NETM) is a form of collective decision-making that provides a useful counterpoint to many of the other forms of collective decision-making being discussed at this workshop (brains, social insect colonies, flocks and herds, …).
Basic features of a NETM as a decision-making unit:
1. It makes consensus (aggregate) decisions: it produces ordinances that are binding on all the members of the town.
2. Its members often have conflicts of interest.
3. It uses an aggregation rule with the following components:
• Voting is egalitarian: one (equal-weighted) vote per person
• Decision threshold: majority rule
4. Its members operate with global information; each has access to all of the information within the group.
Decision-making protocol for a given problem, i.e. an article listed on the “Warning”:
1. Exploration of the option space: the first option is the article stated on the Warning (e.g. “Spend $150K for a fire truck”). An individual can then propose an amendment to the article (e.g. “Spend only $100K for a fire truck”) which, if seconded, becomes a second option. An amendment of the amendment can also be proposed (e.g. “Spend $125K for a fire truck”), and if it is seconded, then this becomes a third option for consideration. Only one amendment of an amendment can be made. This limits the size of the cognitive task faced by the individuals: at most three options need to be considered at any one time.
2. Deliberation about the options: the Moderator asks for comments and allows one individual to comment at a time. The person making the comment does so by standing, identifying him/herself, addressing the Moderator with his/her thoughts, then sitting down. An individual cannot make a second comment until all others who wish to provide a first comment have done so. Also, each individual can make at most two comments. This procedure helps strike a balance between speed and accuracy: a wide range of viewpoints can be expressed, but the discussion is not open-ended. Throughout the deliberations, each individual is presumably evaluating the options and deciding which option(s) he/she will support and which one(s) he/she will reject.
3. Consensus building: once all comments have been made, the Moderator calls for a vote regarding the Article, the Amendment of the Article, or the Amendment of the Amendment of the Article. The last one to be proposed is the first to be voted upon. First a voice vote is called for (all those in favor say yea… all those opposed say nay). If the outcome is not clear, then Division of the House (stand up and be counted) is called for or, if 7 individuals ask for a secret ballot, then a vote by secret ballot will be made.
If the first option to be voted upon gets a majority of “yes” votes, then it becomes the town’s choice, but if it gets a majority of “no” votes, then it is rejected and the next option (if any) will be voted on. Ultimately, the town accepts one of the options, or it rejects them all. Thus, for example, the town may decide to spend $125K, or $100K, or $150K, or nothing at all, for a fire truck.
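The sequential voting rule just described (last proposed, first voted; stop at the first majority "yes") can be sketched as follows. This is an illustrative sketch, not an official specification, and `approves` is a hypothetical stand-in for the assembly's majority vote on one option:

```python
def netm_vote(options, approves):
    """Sequential NETM vote. `options` is in proposal order:
    [article, amendment, amendment-of-amendment]. The most recently
    proposed option is voted on first; the first to win a majority
    is adopted. Returns None if the town rejects them all."""
    for option in reversed(options):  # last proposed, first voted
        if approves(option):          # majority of "yes" votes?
            return option
    return None

# Example: the town prefers the $100K amendment.
options = ["Spend $150K", "Spend $100K", "Spend $125K"]
print(netm_vote(options, lambda o: o == "Spend $100K"))  # prints "Spend $100K"
```

Note how the rule guarantees termination after at most three votes, matching the at-most-three-options bound on the cognitive task described above.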
Some thoughts about this form of collective decision-making:
1. The exploration of the option space may be incomplete in some cases, since only three options can be considered, and these may not include the best one.
2. Deliberations about the options are, in principle, open so that all viewpoints can be presented and “kicked around”, but because of fear of public speaking, probably not all viewpoints are presented on each issue.
3. Given the protocols for presenting information within the group (giving comments), and the rather small group size (on average, about 150), it seems that each individual can have a global overview of the information that is shared.
4. The voting scheme has an interesting design for time efficiency in the aggregation of opinions: first try a voice vote, then go (if necessary) to the stand-up-and-count procedure, and only use the time-consuming secret ballot as a last resort (or if requested by at least 7 individuals). I wonder, though, if this does not lower the accuracy of the aggregation of opinions, for only the secret ballot ensures that individuals will vote independently.
5. Although the participants in a NETM often have strong conflicts of interest on certain issues, there appear to be various things that limit hostility and so promote a rational discussion. As far as I can tell, these include (1) facing the Moderator, not an opponent, when making a comment; (2) having to stand up and identify yourself before making a public comment; (3) choosing a Moderator who is trusted as fair; (4) operating in a fairly small and stable community, hence one in which individuals have repeated interactions and reputations, i.e. the conditions that favor the maintenance of cooperative behavior (by means of direct and indirect reciprocity).
Report from Group Size and Collective Decision Making
Deborah Gordon, Ana Sendova-Franks, Jeremiah Cohen, Nick Britton, Leo Sugrue
Analogies between effects of size on colony behavior and effects of size in neural systems?
Larger neural systems are more flexible. Why? More switches. What’s a switch? We don’t really know. More layers. What are the layers? Well, instead of a simple circuit, in which a stimulus activates one neuron which then leads to some action - there are more neurons in between. So is the flexibility due to more complicated structure in a larger brain, or just to more steps? We don’t really know.
In ant colonies there seems to be a threshold below which the group is too small to function. The smaller the group, the more important individual variation may be. (Is individual variation important in neurons? We don’t know, because it’s hard to identify the activity of particular neurons within a system.)
Discussed current experiments with ants by Ana Sendova-Franks, and past ones by Deborah Gordon, in which ants are removed. These show how decreasing the numbers performing a certain task (e.g. foragers) changes the behavior of the rest of the colony. In Gordon’s earlier experiments, removal of small numbers of ants performing one task shifted the numbers performing other tasks. Presumably removals lead to shifts in rate of interaction across task groups which leads individuals to switch task or modify activity level.
Leo told us about a neural network model by Carlos Brody which describes the interactions among large numbers of neurons, and shows how a change in the network leads to a change in behavior. The change is in weighting? Maybe this is an example of a detailed model for an effect of group size.
Report from Individual Bias and Collective Optimality
Can decisions that make individuals less rational be among the most efficient ways for groups to solve problems?
How can individual-level bias lead to group-level optimality? There must be cognitive limitations for this to happen. Fortunately (unfortunately?) there are lots of cognitive limitations.
The diversity prediction theorem – collective (squared) error equals average individual error minus the diversity of the predictions. Thus the collective is always at least as accurate as the average individual. Galton’s wisdom-of-crowds ox-weighing contest is evidence. The distribution needn’t be normal for this to be true.
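The identity can be checked directly. A minimal sketch (the guesses and true value here are made-up numbers, not Galton's data):

```python
def diversity_prediction(estimates, truth):
    """Page's diversity prediction theorem:
    (collective error) = (average individual error) - (prediction diversity),
    where the collective estimate is the simple mean of the individual ones."""
    n = len(estimates)
    c = sum(estimates) / n                                   # crowd estimate
    collective_error = (c - truth) ** 2
    avg_individual_error = sum((s - truth) ** 2 for s in estimates) / n
    diversity = sum((s - c) ** 2 for s in estimates) / n
    return collective_error, avg_individual_error, diversity

# Hypothetical guesses of an ox's weight, true weight 1050:
ce, aie, div = diversity_prediction([1000, 1200, 1100, 900], truth=1050)
assert abs(ce - (aie - div)) < 1e-9   # the identity holds exactly
```

Since diversity is non-negative, the crowd's squared error can never exceed the average individual's, which is the precise sense in which the collective is "smarter than the average individual".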
Are there biases that individuals could show that would help the collective? Biases can increase diversity by widening the distribution of possible answers, which helps reduce error according to the diversity prediction theorem. E.g. anchoring by individuals could be adaptive at the group level because, with so many different anchors, it increases diversity.
Honeybees – zero error in individuals leads to poor collective decision making because of the stochastic components in search. There is an optimal level of error among individuals that helps the group become more effective. If the evidence is noisy, then you need more diversity of observation to overcome that noise.
If individual biases exist, must collectives be suboptimal? No; only if the bias is all in the same direction. E.g. the stock market: because we all buy at different times, we all have different anchors, which can help with collective decision making. A similar issue arises in a common argument made by sociologists of science: scientists engage in a variety of cognitive biases, therefore science must lose its claim to truth because the practitioners are suboptimal. This isn’t true – diversity of bias can actually improve the overall scientific collective.
Is there evidence for bias/context-dependent decision making in groups? Independence of observation/judgment is critical; an authority figure introducing a bias creates non-independence. Experiments on physics pedagogy: do students who interact learn physics better? Yes, even though interaction breaks independence; but some evidence from Kevin’s lab suggests the domain matters a lot.
Google is tracking who is talking to whom and social networks. Also cooperative gaming, e.g. World of Warcraft. Could you collaborate with Blizzard to find out whether human behaviors can mimic bees in a large collective environment? Could we look at the town meeting data? Could we run a town meeting on the internet? It would be hard because you’d lose face-to-face cues. Sometimes the belief structure is stronger than any of the data – see Madoff or Jonestown. Often the collective interactions and the belief structure don’t converge on truth.
If diversity is good, are conformity biases always bad? When could conformity be good for the collective? Maybe in emergencies, when there’s herding or flock behavior to get away from a predator. When all the evidence has already been gathered, conformity is like taking the average. But what about group polarization?
Iain’s work on how it’s good to have uninformed members: while they were uninformed, they did share the trait of cohesiveness, and that helps. The structure of the problem helps. Is it true that a group containing a couple of people who know nothing about the topic can make more accurate decisions than a group of all informed people? Could we find evidence for this?
The Recombinant DNA Advisory Committee was full of all constituencies, some knowledgeable and some not. The group couldn’t make a decision until there was internal learning, which forced the evidence to be presented. You need some people who are ignorant so that they can be persuaded by the best evidence; otherwise it’s just the people who happen to have had an initial bias.
In many contexts, ignorance can’t help. It probably depends on the type of question; there’s an optimal amount of ignorance per task. Do the types of questions that we as a society face tend to be those that ignorance is good for? Does the percentage of ignorance vary based on the prevalence of those tasks? Uninformed folks are likely to be most helpful where even the most informed don’t know enough to solve the equation. What are the consequences of changing the percentage of bees that actually go and search?
In a town meeting, you don’t need too many informed people for most issues. You DO need some experts, but most people are uninformed on any given issue. As the stakes go up, people are not going to change their minds, so you need some uninformed people who can be persuaded so that a quorum can be reached. The people who are willing to change their minds are often the most important for reaching quorum – the ones who have the most information are the least likely to change their minds and the most likely to be suffering from confirmation bias.
What uninformed people do is allow a decision to happen: if everybody were informed, the group wouldn’t be plastic.
Collective decision making could have evolved for the purpose of eliminating these biases. Multilevel selection is going on, it could be that societies can solve the errors that individuals make. Co-evolution of individual bias and social bias creates a mess.
If you’re in a crowd of overconfident people you may do better than if you are a single overconfident individual. So can co-evolution (at the societal selection level) support individual biases? Can society exploit or mitigate the effects of individual biases? Increasing returns based on the number of people.
What fraction of “diseased” individuals leads to failure of the group?
- There can be feedback -- in social groups, some individuals gauge how maladaptive they can be and get away with it; some members of a group will compensate for others who are slacking off.
- Some individuals in a group are more important than others – those with high degree (many network connections).
- Applications: how many sick days can workers take before the factory becomes inefficient; how many lanes can be closed on a highway before traffic builds up; how much additional excitability in a neural network leads to epilepsy; ants will care for a “drunk” but not a “very-drunk” conspecific (from experiments done in Victorian times)
What is the function of noise in individual assessment?
- The law of large numbers – error can be averaged out
- Noise can help locate a global optimum in an environment with many sub-optimal choices
- Naïve individuals can reinforce a signal.
- Heterogeneity in preferences can be beneficial if there are limited supplies of the “best” option
- Maximizers compare their preferences to other options, while satisficers compare their preferences to a threshold. There are pros and cons of both at the individual and group levels: since there is diminishing marginal utility and increasing marginal cost, it is optimal for a group if pain is all given to one individual while pleasure is spread equally among individuals. Does this happen in nature? Examples from ant ecology. The more variability in the environment, the more maximizers you want.
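The law-of-large-numbers point above can be quantified with a Condorcet-style calculation: if members err independently and each is right with probability p > 1/2, majority accuracy rises quickly with group size. A sketch (the binomial framing of independent, equally competent voters is our simplifying assumption):

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a simple majority of n independent voters,
    each correct with probability p, reaches the correct answer
    (n odd, so ties cannot occur)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# A single 60%-accurate voter vs. groups of 11 and 101 of them:
print(majority_accuracy(1, 0.6),
      majority_accuracy(11, 0.6),
      majority_accuracy(101, 0.6))
```

The same calculation also shows the flip side discussed in this session: if the shared bias pushes p below 1/2, aggregation amplifies the error rather than averaging it out.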
Report from Modeling Collective Decision Making
It would be useful to have a list of the canonical models of collective behavior, for instance for teaching – what are they?
Collective motion: the Vicsek model is one of the simplest models of collective motion. Individuals have positions in space and show a tendency to align; it is a coupled-oscillator model, an elaboration of the “XY model” in which individuals move rather than sitting on a lattice or some other fixed topology. Such systems typically show phase transitions in the amount of alignment (order) as you vary individual-level parameters. Spin alignment in a magnet is directly analogous to locust alignment in a hopper band.
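A minimal sketch of the Vicsek update rule in plain Python (parameter names and values are ours, and for brevity the neighbour search ignores periodic-boundary distance wrapping):

```python
import math
import random

def vicsek_step(pos, theta, r=1.0, eta=0.2, v=0.03, L=5.0):
    """One Vicsek update: each agent adopts the mean heading of all
    neighbours within radius r (itself included), plus uniform noise
    in [-eta/2, eta/2], then moves distance v in an L x L periodic box."""
    new_theta = []
    for (xi, yi) in pos:
        sx = sy = 0.0
        for (xj, yj), tj in zip(pos, theta):
            if (xj - xi) ** 2 + (yj - yi) ** 2 <= r * r:
                sx += math.cos(tj)
                sy += math.sin(tj)
        new_theta.append(math.atan2(sy, sx) + random.uniform(-eta / 2, eta / 2))
    new_pos = [((x + v * math.cos(t)) % L, (y + v * math.sin(t)) % L)
               for (x, y), t in zip(pos, new_theta)]
    return new_pos, new_theta

def polar_order(theta):
    """Alignment order parameter: ~0 for disorder, 1 for full alignment."""
    return math.hypot(sum(map(math.cos, theta)),
                      sum(map(math.sin, theta))) / len(theta)
```

Iterating `vicsek_step` at low `eta` drives `polar_order` toward 1 (the ordered phase); at high noise it stays near 0 – the order-disorder phase transition mentioned above.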
Typically people model only part of a larger system; for instance, to understand a magnet people just simulate a tiny portion of it. Changes to certain variables (for instance changing diet) may have no effect on the behavior of individuals in isolation, yet make dramatic differences to the behavior of groups.
At which level are decisions being made? A group may make a collective decision to move to one nest site or the other, without any single ant being aware of this decision. Rather, the individuals make many decisions on shorter timescales that collectively lead to the emergent, colony-level decision. Another example: ants’ behavior may be affected by the size of their colony, but this does not mean that they “know” the size of the colony. In fact, the extent to which individuals may have information about characteristics of the group as a whole, for instance in fish schools, is an open question.
Changes in individual behavior, in response to local cues, can adjust properties of the collective. For instance, army ants regulate the temperature of their bivouacs – formed entirely from their own bodies – by changing the way they link to one another. The model you use depends on what phenomenon you are trying to model: a spin model, for instance, would be a poor choice for modeling ant foraging.
In biology we often use models that don’t take space into account explicitly. However, when you want to model the collective motion of a fish school it may be crucial to incorporate space explicitly. It is often a good strategy to start with the simplest, non-spatial model you can come up with, and then incorporate space at a later stage.
What about choice models - for instance voting models?
The goal of all models is to reduce dimensionality. What are the key building blocks you can use to explain the behavior of interest? You start by casting around thinking what elements may be crucial – for instance alignment in the collective motion model. Positive and negative feedback are often important.
To understand specific systems the details may become important. For instance many phenomena can be described within the same framework of positive and negative feedback, but the implementation may be very different.