Judgment Under a Diversity of Timescales
From Santa Fe Institute Events Wiki
September 13, 2018
Co-hosted by Principal Financial Group
Overview
This ACtioN Topical meeting has three goals:
1. To bring insights from complexity science to bear on the recognition, identification, and evaluation of new ideas, projects, investments, and research programs.
2. To discuss the subsequent measurement of their adoption and influence.
3. To consider the complicated trade-offs and strategies which arise when considering both the long and the short term.
General Introduction
Questions of judgment and of impact pervade nearly every domain of activity.
How good is this idea? Should I continue along this line of investigation? Does this program have merit? Is this idea original and important? Did this person make the difference, or was it their team? Is this portfolio properly optimized? How do I design investment strategies that account for different time horizons? Did this data really improve our understanding? These are all questions that call for significant and, ideally, verifiable judgments.
At the same time, we are interested in the impact of these judgments: Did the project achieve its objectives? What was the measurable impact? Did the ideas deliver on their promises? Did this program open up a new market or a new opportunity?
We are all trained to assess contributions within increasingly narrow and specialized domains, while scientific, technological, and economic advances cross and distort those boundaries, and we are frequently at a loss when it comes to judging such endeavors. Furthermore, the age of (big) data has brought about an over-reliance on quantitative and semi-quantitative metrics, which can engender an illusion of objectivity and are, by everybody's admission, problematic even when useful. Making these twin challenges that much harder is the fundamental difference between judgment and impact over the short term and the long term. In the short term, judgment is easier than impact: we rely on reputation, recommendations, and credentials. In the long term, impact often overtakes judgment: there we rely on metrics, testimonials, and achievements.
Judgment can be thought of in terms of three fundamental dimensions: (1) the weight or value of ideas, (2) the distillation and connection of ideas, and (3) the predictions generated by the ideas. By contrast, impact has three complementary dimensions: (1) the frequency of adoption by a market (e.g., profit or market share), (2) the emergence or generation of correlated and complementary ideas (products or markets that depend on the target product), and (3) the long-term innovations produced by the idea (secondary, largely unanticipated technologies and markets). Each of these six dimensions can be addressed through several different complexity tools and frameworks.
Complexity, Judgment, & Impact
Over the last decade, several new approaches and insights from complexity science have been generated that bear directly on the fundamental dimensions of judgment and impact. These come from fields as diverse as network theory, the dynamics of infectious ideas, cultural evolution and phylogenetic inference, the theory of scaling, first-passage processes, collective computation, and even the visualization challenges of complex data sets.
Relevant methods include:

* First mover effects through preferential attachment dynamics in networks (e.g., see Newman, 2009; a minimal sketch appears after this list)
* Evolutionary graph theory and Moran processes in structured populations (e.g., see Nowak et al., 2005)
* Citation distributions as limiting distributions of simple stochastic processes (e.g., see Redner, 2004)
* Research strategies and knowledge clusters in large-scale scientific collaboration datasets (e.g., see Foster et al., 2013)
* Random walk evaluation of knowledge impacts (e.g., see Bergstrom, 2007)
* Phylogenetic analysis of cultural transmission (e.g., see Mace & Holden, 2005)
* The infection-like transmission of ideas through populations (e.g., see Mace & Holden, 2005)
* Scaling of measures of productivity with system size (e.g., see West, 2017)
* The visualization of complex data sets as an aid to decision-making (e.g., see Borner, 2010)
* Merging the dynamics of social networks with the evolution of knowledge systems (e.g., Laubichler and Renn, 2015, 2017)

Each of these methods can be applied over the short and the long term, with very different results. It will be an explicit objective of the meeting to weigh the advantages and disadvantages of each in relation to case studies and clearly stated problems.
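As an illustration of the first item above, the following is a minimal sketch of preferential attachment dynamics; it is not drawn from any of the cited works, and the function name, parameters, and network sizes are illustrative assumptions. New nodes attach to existing nodes with probability proportional to current degree, so early arrivals tend to accumulate a disproportionate share of links, one simple mechanism behind first-mover effects.

```python
import random

def preferential_attachment(n_nodes=1000, links_per_new_node=3, seed=42):
    """Grow a network in which each new node links to existing nodes with
    probability proportional to their current degree (a Barabasi-Albert-style
    sketch; all parameters here are illustrative assumptions)."""
    rng = random.Random(seed)
    # Start from a small fully connected core so every node has nonzero degree.
    core = links_per_new_node + 1
    degree = {i: core - 1 for i in range(core)}
    # 'stubs' holds one entry per link endpoint; sampling uniformly from it is
    # equivalent to sampling nodes in proportion to their degree.
    stubs = [node for node, d in degree.items() for _ in range(d)]
    for new_node in range(core, n_nodes):
        targets = set()
        while len(targets) < links_per_new_node:
            targets.add(rng.choice(stubs))
        degree[new_node] = 0
        for target in targets:
            degree[target] += 1
            degree[new_node] += 1
            stubs.extend([target, new_node])
    return degree

if __name__ == "__main__":
    degree = preferential_attachment()
    top = sorted(degree, key=degree.get, reverse=True)[:5]
    print("Highest-degree nodes (typically early arrivals):", top)
    print("Their degrees:", [degree[n] for n in top])
```

Run for a short time, the early advantage in such a model is modest; run for a long time, it compounds, which is one concrete instance of how judgments of promise and measurements of impact can diverge across timescales.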