
Complex Systems Summer School 2019-Projects & Working Groups

From Santa Fe Institute Events Wiki

Revision as of 03:45, 19 June 2019 by EmCoco (talk | contribs)


Project and working group ideas go here.


Two ideas from Cat

The first two ideas relate to datasets that I can make available. I am dedicated to publishing results from both, and co-authorship is welcome if you are interested.

This first idea is a Natural Language Processing project with spatial aspects. I have gathered all 482 city and 58 county general plans for California, available as both PDFs and extracted text. These are 400+ page documents that communities put together to set the course for developing housing, transportation systems, green space, conservation, etc. This dataset is exciting because no state has a database of city/county plans, and these plans govern land use. California offers an interesting case because there are mountains, beaches, rural areas, agricultural areas, desert landscapes and the coast; each landscape and population requires unique planning.

We could use the dataset to answer a variety of questions. We could ask some simple questions with sentiment analysis (who wrote the happiest plans? Are rural areas the most disparaging in their plans, or are urban areas?). We could train a model on state recommendations for plans and see which plans fit (my hypothesis is that plans closest to Sacramento, the state capital, fit best). The takeaway would be that providing 'best practices' for planning is difficult because places and communities differ so much in resources and objectives (e.g. most rural areas do not want population growth, while many urban areas measure success by population growth). We could also take a topical approach: How much housing is each city/county planning to build in housing-stressed California? How do plans talk about fire prevention management (e.g. in the context of housing? transportation? forest management?)? How are communities planning for GHG reduction (with a focus mainly on air quality? mainly on transportation? what about energy systems?)
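One way to test the "distance from Sacramento" hypothesis is to vectorize the state recommendations and each plan with TF-IDF and rank plans by cosine similarity to the recommendations. A minimal sketch with scikit-learn, using made-up snippets in place of the real documents:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for the state guidelines and two city plans.
state_guidelines = "housing density transit greenhouse gas reduction open space"
plans = {
    "City A": "our plan targets housing density near transit and greenhouse gas reduction",
    "City B": "ranchland grazing water rights rodeo grounds",
}

vectorizer = TfidfVectorizer()
docs = [state_guidelines] + list(plans.values())
tfidf = vectorizer.fit_transform(docs)

# Cosine similarity of each plan to the guidelines document.
sims = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
for name, s in zip(plans, sims):
    print(f"{name}: {s:.2f}")
```

The same pipeline scales to the full corpus; with 400+ page plans, one would chunk by element (housing, circulation, conservation, etc.) before vectorizing.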


The second project relates to my dissertation and builds into the science of cities, using spatial regression. I hypothesize that cities are like coral reef ecosystems, where structural complexity begets more habitat niches and more species diversity, leading to greater total ecosystem resilience (e.g. faster recovery from disease or disaster). Cities might be the same way: more structural complexity (longer urban perimeters in the case of my dataset, though we could use 3D city models as well) would lead to greater land-use diversity and more job diversity, which would help protect against economic downturn. None of the data is normally distributed, so the spatial regression is challenging.

Added by Jessica: One way we could evaluate the complexity and information is a method called ascendency. It is basically the same information index that Joshua Garland showed us, and it informs us about the diversity of the networks. Interestingly, years ago when I plotted this information against productivity/biomass/energy, I got some Lorenz patterns. If we could find a way to model a perturbation in the system, that would make for some interesting predictive analysis.
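Before fitting any spatial regression, it helps to quantify how spatially autocorrelated the variables are; Moran's I is the standard diagnostic. A from-scratch sketch on a toy grid (in practice a library such as PySAL would supply this, along with significance tests):

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I for a 1-D array of observations and an n x n spatial weight matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()
    return len(x) / w.sum() * num / (z ** 2).sum()

def rook_weights(rows, cols):
    """Binary rook-adjacency weights for a rows x cols grid, flattened row-major."""
    n = rows * cols
    w = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if r + 1 < rows:
                j = (r + 1) * cols + c
                w[i, j] = w[j, i] = 1
            if c + 1 < cols:
                j = r * cols + (c + 1)
                w[i, j] = w[j, i] = 1
    return w

w = rook_weights(4, 4)
gradient = np.repeat(np.arange(4.0), 4)               # smooth north-south trend
checker = np.indices((4, 4)).sum(axis=0).ravel() % 2  # alternating pattern
print(morans_i(gradient, w))  # positive: neighbours are similar
print(morans_i(checker, w))   # negative: neighbours are dissimilar
```

Strong positive autocorrelation (as one would expect for perimeter length or land-use diversity) is exactly what makes ordinary regression residuals misleading here.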

Interested Participants

  • Jessica Brumley
  • Dee Romo

Dangerous idea about reviewing

Dan and I came up with this really dangerous idea to break academia over lunch: Reviewer #2 is an AI. We could use existing publications (e.g. PLOS ONE) to train a model. Any paper uploaded for review would be reviewed by AI Reviewer #2. The review would take minutes, and would likely result in rejection or acceptance with modification. The AI could tell you where your paper fits in the broader scholarship on the topic. Does your paper bring together unique disciplines/ideas or test new hypotheses? How many papers have already been published on this topic, and how do your findings compare with regard to sample size, methodology, and spatial and temporal context? In essence, have you found an anomaly, or is there more evidence to support a general theory? Where publicly available data exists, the AI could repeat analyses to verify findings. The AI could easily tell you where you have missed citing important works, or have been biased in citing the later work of a man over the foundational work of a woman or person of color (e.g. everyone cites Robert Putnam for social capital and not Jane Jacobs). Such a reviewer could also provide sentiment analyses by discipline (e.g. economics still loves Garrett Hardin's Tragedy of the Commons over Elinor Ostrom's work on the commons, but all other disciplines are ready to kill Hardin's work).

The second phase would use predictive modeling: Reviewer #2 would write papers and predict new theories. This work would start with literature reviews (as any good PhD student would) and then move into analyzing public datasets to answer new questions. We could check in after 10 years of human publication time had elapsed (e.g. about 5-10 papers), or 50 years, and see where science went. We could toggle the inputs (more hard sciences or more social sciences) to see how this changed the output and trajectory of science.
The real world application could mean that we could do science with very little funding- and we would all be out of a job.


Emergence of cooperative strategies by means of game warping, using network science

(From Shruti)

What if players can transform a noncooperative game into a cooperative positive-sum game? This is possible in certain digital economic systems (such as those on a blockchain) because all contracts are strictly enforceable. These types of "game-warping" transformations are interesting because, given any economic model with pre-defined rules, the agents are able to develop unforeseeable cooperation strategies, form coalitions, and expand the scope of potential actions over time. Effectively, players are collectively able to overturn the system dynamics. The economy evolves because the economic rules effectively change with time (anyone play Baba Is You?). "Game warping" is defined as using transparent, triggerable, unstoppable punishments to move game-theoretic equilibria. We can extend this to multiple players and model the system as a graph/network to explore what different cooperation strategies emerge. I trust that studying these systems at a macro level, using simulations or networks, will bring the greatest degree of insight and set this research apart. David Wolpert's (SFI) work on "game mining" is also relevant. <ref>https://www.santafe.edu/news-center/news/wolpert-aaec-game-mining</ref>
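The core move, using a credible automatic punishment to shift an equilibrium, can be sketched in a two-player prisoner's dilemma with made-up payoffs (a toy illustration, not Wolpert's game-mining formalism):

```python
import numpy as np

# Row player's payoffs; the game is symmetric. Actions: 0 = cooperate, 1 = defect.
base = np.array([[3, 0],
                 [5, 1]])

def pure_nash(payoff):
    """Pure-strategy Nash equilibria of a symmetric 2x2 game."""
    eqs = []
    for a in (0, 1):
        for b in (0, 1):
            row_best = payoff[a, b] >= payoff[1 - a, b]
            col_best = payoff[b, a] >= payoff[1 - b, a]
            if row_best and col_best:
                eqs.append((a, b))
    return eqs

print(pure_nash(base))  # [(1, 1)]: mutual defection

# "Warp" the game: an enforceable contract burns 3 units from any defector.
fine = np.array([[0, 0],
                 [3, 3]])
warped = base - fine
print(pure_nash(warped))  # [(0, 0)]: mutual cooperation
```

The network extension would put one such game on every edge of a graph and let agents choose which punishments to commit to.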

Citations:

Research questions

  • What gaming strategies emerge when NxN intelligent agents interact in a system that allows game warping? How does the system evolve over time?
  • Agents are able to make Bayesian decisions whose vectors adapt as more historical information becomes available
  • How are sequential games played on a NxN level? Consider contingency trail using Level-K solution concept
  • Simulate an artificial economy with adaptive agents

Interested participants

  • Shruti
  • Aabir
  • Mikaela

Slack

Join #gamewarping channel.

Mathematical formalization of cryptoeconomics

(From Shruti)

Create the Maxwell's equations of cryptoeconomics. Cryptoeconomics is a very new field, akin to economics but with special properties that traditional economic systems cannot achieve. It is fertile ground that requires a foundation (pun intended): developing fundamentals and formalization. A non-exhaustive list of potential approaches: network science, adaptive agent-based simulations, systems modeling (bond graphs), hyperparametric optimization, control systems. I've been thinking about developing this foundation for a few months now, and I will express all these ideas in my SFI talk on Jun 21 - open invite to join the collective nerd-out! This topic was also hotly discussed as the next step in moving the space of cryptonetworks forward, in today's CollectiveCrypto workshop @ SFI by Geoffrey West, Jessica Flack, David Krakauer, David Wolpert et al. We arrived at the conclusion that this type of research is highly valuable.

This is likely a mini-project, with an intent to publish a paper.

Interested participants

  • Shruti
  • Mikaela

How might we quantify non-monetary value exchanges (like gift giving)?

(From Shruti)

The current financial system doesn't incentivize corporations/individuals to take environmentally and socially positive actions (for the most part). There is no mechanism that accounts for non-monetary value exchanges in today's shareholder model of corporate governance. These exchanges range anywhere from simple gift-giving to a government agreeing not to dump its waste in Vietnam, India, etc. Through this project, we want to explore generalizable means of accounting for currently unquantified (yet important) value exchanges. We're currently toying with the idea of a human reputation market; we are aware it sounds dystopian and are open to ideas.

Interested participants

  • Shruti
  • Pavel
  • Earnest

Slack

Join #moralmoniezzz

Simulating evolution of bacterial cells’ decision to divide

(From Kunaal)

How do cells decide when is the right time to divide? From a purely efficiency-based perspective, cells can obtain nutrients at a rate proportional to their surface area, but the nutrient requirement for growth is proportional to the volume of the cell. Thus, there should be a cell size that is optimal for division.

The problem with this reasoning is that cells would then tend to divide at the same size on average, irrespective of their initial size. But we know that in most bacterial species, cells that start out small (large) tend to divide at a size smaller (larger) than the average size at division.

This indicates there is a different reason behind cells’ decision to divide. It is an optimal path chosen by evolution, and I intend to simulate cells susceptible to mutations under different conditions to understand how this division mechanism arises through evolution and why it is optimal.
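The surface-area/volume argument above can be made concrete with a toy model: for a spherical cell, uptake scales as r² and demand as r³, so the per-volume growth rate is 3a/r − b and there is a break-even radius beyond which the cell cannot grow. A minimal sketch with hypothetical rates (the simulation of mutating lineages would build on top of this):

```python
import numpy as np

# Toy efficiency model: nutrient uptake scales with surface area,
# consumption scales with volume. Rates a and b are hypothetical.
a = 1.0   # uptake per unit surface area
b = 0.5   # consumption per unit volume

def relative_growth_rate(r):
    """(dV/dt)/V for a sphere: uptake a*4*pi*r^2 minus demand b*(4/3)*pi*r^3, per volume."""
    area = 4 * np.pi * r ** 2
    volume = (4 / 3) * np.pi * r ** 3
    return (a * area - b * volume) / volume  # simplifies to 3a/r - b

# Break-even radius where net growth stalls: r* = 3a/b.
r_star = 3 * a / b
print(r_star)  # 6.0
```

Note this model predicts a single stall size for all cells, which is precisely the prediction contradicted by the birth-size correlation described above.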

Join #cell-division-sim on Slack if you are interested.


Modelling the spatial diffusion of human languages

The diversification of human languages is a bit like speciation in biology: using comparative and cladistic methods, linguists can group languages into language families and further subgroup them into "phylogenetic" trees or networks. At the same time, we know where these languages are spoken today. The question, then: putting these two sources of data together, can we model the diffusion of languages over physical space and work backwards from the present day to infer the most likely homelands of the corresponding protolanguages? Can the predictions of such a model be made to align with what we otherwise know about human migrations in the past? And most importantly (I think), from a complex systems perspective: what facets of the processes of linguistic diffusion and diversification are universal (i.e. not due to accidental historical events)? We could start with a simple random-walk model and take it from there. Slack channel is #language-diffusion.
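The simplest possible forward model and the matching backward inference can be sketched in a few lines: daughter languages drift from a homeland as independent 2-D random walks, and the homeland is estimated as the centroid of the attested locations. This deliberately ignores tree structure, geography, and anisotropy; all parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each daughter language drifts from the (unknown) homeland as a 2-D random walk.
n_languages, n_steps = 500, 100
steps = rng.normal(size=(n_languages, n_steps, 2))
positions = steps.sum(axis=1)  # present-day locations; true homeland is the origin

# Naive homeland inference: centroid of the attested locations.
homeland_estimate = positions.mean(axis=0)
print(homeland_estimate)  # should be close to (0, 0)
```

A next step would be to diffuse along the known family tree (splits spawn new walkers) and replace the centroid with a proper likelihood over candidate homelands.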

First meeting: Friday 1pm, lecture room

Data

Papers to read

  • Let's add them here

Interested participants

  • Henri
  • Dee Romo
  • Kenzie Givens
  • Ritu
  • Harun
  • Xin Ran
  • Let's add ourselves here

Future plans

This is (or can be, if we want) a somewhat ambitious project. I'd be happy to continue working towards a publication after CSSS.

Butterflies in Water: Optimal Perturbations for Mixing in Treatment Processes

This idea came from Liz Bradley’s last lecture and her showing us the 2D hurricanes in a box experiment and adding the “butterflies”.

Water treatment processes often need perturbations to mix the water, especially if you need to oxidize and precipitate out a contaminant (iron is a common example). Ultimately you want to do this in the most energy-efficient way. The goal when building these systems is to expose the water to the surface and mix in oxygen (from the atmosphere) for as long as possible. There are various ways to do this: make large-surface-area ponds; make a "stream-like" pond so the water flows for longer; add small dams for the water to go around; some people have tried adding poles/sticks to the water; etc. It is not yet understood which method is most successful, or what the optimal level of perturbation for mixing might be. Could agent-based modeling help? Do the mixing and oxidation processes exhibit chaotic behavior?

This is a project that I am seriously thinking about engineering a laboratory model to test as well.

Interested Participants

Jessica Brumley


Computational Social Science in Decision-Making: an Opioid Epidemic Case-Study

Project write-up from Slack: As a part of my (Kyle Furlong) work, I’ve been developing a tool/application that uses computational social science/agent-based modeling to help decision-makers make better data-driven decisions. I’m using the opioid epidemic as a “case study” for this tool. Using NetLogo and R (RShiny), the tool allows the user to explore the multiple social science theories that describe addiction and perform what-if analyses to determine which public policies/programs might be most effective in reducing negative outcomes (overdoses, deaths, etc).

I’ve got an early prototype UI/code (pictured below) running and have built in some basic theories of addiction that I’ve pulled from the literature, but I’d love to collaborate with anyone who is interested in the topic (addiction, drug use, public health), the methods (NetLogo/ABMs, social networks), and/or the approach. Open to informal coffee/not coffee drinking groups to crowd-source on a conceptual level or more technical groups working to improve my admittedly unrefined addiction models.

Communication Channels

Slack Channel: #compsocialsci-opioids

Meeting Schedule & Notes

TBA

Interested Participants

Shamelessly pulled from the whiteboard after the project brainstorming session on 6/13/2019:

  • John Malloy
  • Winnie Poel
  • Robert Coulter
  • Dakota Murray
  • Xin Ran
  • Dee Romo
  • Pablo Franco
  • David Gier
  • Pam Mantri

Science Policy & Communication

How is information transferred from scientists to policymakers to constituents? How much information is lost in translation from scientific papers to news articles and tweets? This group will explore the (potential) information loss along each transition, along with other policy-based issues that will emerge from the interaction between scientists and policymakers.

Direct questions to John Malloy (Slack preferred)

Communication Channels

Slack channel: science-policy

Interested Participants (taken from Slack)

  • Andrew Gillreath-Brown
  • Chris Boyce-Jacino
  • Dakota Murray
  • David Gier
  • Jackie Brown
  • Mackenzie Johnson
  • Elissa Cohen
  • Jessica Brumley
  • Marjorie
  • Mikaela Akrenius
  • Aabir
  • Kyle Furlong
  • Patrick Steinmann
  • Ritu

Modeling and predicting food insecurity using a resilience lens

or Can complex systems help feed the hungry?

Slack channel: food-security

Over 800 million people are hungry today, and vulnerable to drought, floods and crop-disease driven by climate change. I’m interested in modeling the incidence of hunger as a dynamic, stochastic system using a resilience lens. Would like to see if we can predict the incidence of hunger in response to shocks using a neural net. Got some data to play with and open to exploring different models and predictive algorithms. If we get some promising results, we can showcase them to policymakers at USAID and the World Bank who are very interested in this space.

Participants

  • Erwin Knippenberg
  • Travis Moore
  • Ludvig Holmér
  • Andrew Gillreath-Brown
  • Alexander Bakus
  • Pam Mantri
  • Dan Krofcheck
  • Fabian Dablander

Modeling Minecraft's Crafting Web

Map the web of natural resource use in Minecraft and its hierarchy of dependencies, including the potentially circular dependencies (i.e. you need spider silk to make a bow, which you can then use to kill spiders). We can then infer which resources are most used, their trophic level, and what tools are required to produce them.
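The crafting web can be held as a directed graph, after which "most used" falls out of standard reachability measures. A minimal sketch with networkx; the recipe fragment below is simplified and partly hypothetical, not a faithful extract of Minecraft's recipe data:

```python
import networkx as nx

# Edge u -> v means "u is used to obtain v".
G = nx.DiGraph([
    ("wood", "planks"), ("planks", "stick"), ("planks", "crafting_table"),
    ("stick", "bow"), ("string", "bow"),
    ("bow", "string"),            # circular: a bow helps you hunt spiders for string
    ("stick", "pickaxe"), ("stone", "pickaxe"),
    ("pickaxe", "iron_ore"), ("iron_ore", "iron_ingot"),
])

# How many items does each basic resource ultimately enable?
for node in ("wood", "string", "stone"):
    print(node, len(nx.descendants(G, node)))
```

Reachability handles the cycles gracefully; a trophic-level analysis would need one of the generalizations designed for non-DAG food webs.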

Participants

  • Kate Wootton
  • Alexander Bakus
  • Chris Quarles
  • Patrick Steinmann
  • Erwin Knippenberg


Looking for resilient patterns in Conway's Game of Life

Resilience to environmental challenges is one of the hallmarks of life. The goal of this project is to search for patterns in Conway's Game of Life that can cope with external perturbations and self-organize back into their original forms. Conway's Game of Life[1] is a cellular automaton that has attracted a lot of attention due to the life-like forms that it generates. Cellular automata are computational models composed of a grid of cells, each of which can be in one of two (or more) states. At every generation, each cell can change according to the states of its neighbours. Interestingly, Conway's Game of Life is Turing-complete, meaning that it can compute any computable function, including the Game of Life itself [2]. For this reason, one should expect to find a wide range of interesting patterns, including those that can detect external perturbations and repair themselves. By finding them, we would be providing a compelling example of one of life's key traits as an emergent behaviour in a simple computational environment.
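The search loop is cheap to prototype: one update function, plus a perturb-and-iterate harness. A minimal numpy sketch (toroidal boundary is an assumption of this sketch):

```python
import numpy as np

def step(grid):
    """One Game of Life update on a 0/1 array, with wraparound edges."""
    neighbours = sum(
        np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A blinker, the simplest oscillator: it returns to its original form every 2 steps.
grid = np.zeros((9, 9), dtype=int)
grid[4, 3:6] = 1
assert np.array_equal(step(step(grid)), grid)

# Flip one neighbouring cell and iterate to see whether the pattern
# self-repairs, settles into something else, or dies out.
perturbed = grid.copy()
perturbed[3, 3] ^= 1
for _ in range(10):
    perturbed = step(perturbed)
print(perturbed.sum())  # live-cell count after the perturbation
```

A resilience search would sweep random initial patterns, apply single-cell flips, and score how often (and how fast) the original pattern reappears.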

[1] https://www.youtube.com/watch?v=ouipbDkwHWA

[2] https://imgur.com/T1h2VVS

Participants

  • Alexander Schaefer
  • Dan Krofcheck
  • Kazuya Horibe
  • Arta Cika
  • Elissa Cohen
  • Luther Seet
  • Patrick Steinmann
  • Germán Kruszewski
  • Wenqian Yin

Analyzing Collaboration Throughout CSSS History

How has the nature of collaboration at CSSS changed over time? Using project and participant data from the last 20 years of the program, we plan to explore how topics and group structures have changed over time. Have groups become more interdisciplinary? Is there a pattern to the types of projects that individuals from particular fields tend to work on?

Participants

  • Dakota
  • Emily
  • Fabian
  • Jackie
  • Kyle

Multi-scale inequalities and cities

Increases in inequality and urbanization are two of the challenges facing global sustainable development. However, inequalities in the urban context are conventionally understood by analyzing one city at a time, which limits a multi-scalar understanding. This project proposes to investigate whether there are general patterns in how inequalities manifest across spatial scales and regional contexts and examine the relationships between urban networks and inequalities.

Participants

  • Bhartendu Pandey
  • Christopher Quarles
  • Alec Kirkley
  • Luther Seet

Lingua Technica: The impact of technology on language

Technology and language are related—words like "delete", "reboot", and "reset" only became prominent in our language with the introduction of computing. In other cases, language adopts metaphors of technology, as in phrases like "I'm dying", "I'm losing you", and "They act like a robot". In this project we will analyze the uptake of such terms in English language text over the past decades. We hope to assess the extent and speed to which technical metaphors are adopted in a variety of mediums. We will begin with words relating to computing and extend to other technologies such as cars, medicine, and more.

Participants

  • Dakota Murray


Artificial fossilization of animal interaction networks

There has been a rapid increase in the number of papers applying network analysis to ancient communities, inferred from the fossil record. However, many of these studies don't account for the fact that the fossil record is incomplete. For example, most soft-bodied organisms don't preserve well. We hope to ground-truth investigations of past processes by analyzing how information loss affects the structure of modern interaction networks (co-occurrence, food webs, etc) and the inferences we make from them.

Participants

  • Emily Coco
  • Jack Shaw
  • Andrew Gillreath-Brown
  • Anshuman Swain
  • Kate Wootton
  • Dries Daems

The Time Traveler's Tree: What Did Sci-Fi Writers want?

Throughout the 20th century, science fiction writers were busy imagining possible futures, using advanced scientific and technological concepts as a vehicle for their thoughts about the present and the future of the human race. When did we start talking about flying cars, when did we foreshadow the invention of waterbeds (Heinlein did it!) and where do the branches of the fictional tree loop into the branches of the real technological tree of the 20th and 21st century? We explore this by creating a dataset of fundamental scientific and technological ideas appearing in sci-fi classics of our time, primarily novels that have won the Hugo or Nebula award.

Participants

  • Harun Siljak

Big Brother's Agents: Modelling Sci-Fi Communities

How to start a rebellion in the total surveillance society of Orwell's 1984? Is it a case for an agent-based model, or maybe a network, or a cellular automaton? Could an emergent strategy bring down the Death Star? What made the Battle of Winterfell so wrong? This project investigates the great narratives of fiction and fantasy through complex systems modelling.

Participants

  • Harun Siljak
  • Andrew Gillreath-Brown

CSSS Social Network Study

Investigating the structural and dynamical properties of the social network formed by participants in the CSSS, incorporating node-level metadata.

Participants

  • Alec Kirkley
  • Shihui Feng
  • Dr. Kenneth Hunter Wapman III, MD
  • Kate Wootton

Self organizing city

Exploring emergence and how a city can evolve and be shaped by social interactions. Planned cities and organically developed cities alike have a network of public spaces. This project looks at using agent-based modelling and adaptive networks to study both the formation and resilience of public space networks in cities.

How do public spaces and active functions of the city influence the flow of people?

Slack Channel: #selforganizing-city

Participants

  • Luther Seet
  • German Kruszewski
  • Chris Boyce-Jacino
  • Kazuya Horibe
  • Jackie Brown
  • Bhartendu Pandey
  • Ludvig Holmér
  • Travis Moore
  • Please add on

Too Much Information and Segregation

Every entity has a limited capacity to process information. So, when there is too much information, entities need to exclude information that does not benefit them. What happens when there are increases in the amount of information available, such as when technology allows a place-based society to transition to a more connected one? Individuals will have more options, and will also need to be more selective about the information they receive. Does this lead to increased segregation and/or specialization in a social system and/or biological system? We are approaching these questions using a network model, where nodes update their filters based on a utility function.

Participants

  • Christopher Quarles
  • Wenqian Yin
  • Jordi Piñero
  • Xin Ran

Scrutinizing Early Warning Signals for Depression

Historically, depression has been understood within a 'common cause' framework in which the associations between symptoms such as worry, sadness, and lack of sleep are due to an underlying latent variable. This is an extremely successful approach in medicine, where symptoms usually are due to some underlying biological disease; in psychology, it has been less successful. An emerging 'network perspective' instead abandons the assumption of an underlying common cause and views depression as arising out of symptoms that directly influence each other, that is, as a complex system. There has been some work on 'early warning signals' that indicate 'tipping points', i.e., transitions to an alternative stable state. This project aims to extend and critically evaluate how these approaches have been applied to predict the onset and termination of depression.
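Early warning signals rest on critical slowing down: near a tipping point, the system recovers ever more slowly from perturbations, which in noisy data shows up as rising autocorrelation and variance. The mechanism itself can be shown deterministically with a hypothetical linear system x_{t+1} = a·x_t, where a → 1 plays the role of approaching the tipping point:

```python
def recovery_time(a, x0=1.0, eps=0.01):
    """Steps for x_{t+1} = a * x_t to relax back below eps after a perturbation x0."""
    x, t = x0, 0
    while abs(x) > eps:
        x *= a
        t += 1
    return t

# As the system approaches the tipping point a -> 1, recovery slows down:
for a in (0.5, 0.9, 0.99):
    print(a, recovery_time(a))
```

In empirical symptom time series, one would instead track rolling lag-1 autocorrelation and variance, which is where the critique of these methods usually starts.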

Participants

  • Fabian
  • Toni
  • Andrea
  • Arta

Network Control

Once one discovers how the structure of a network affects its function, a possible next consideration is controllability of the network. For control to be possible, one must have a reliable map of the interactions occurring between nodes, a formulation of the dynamical equations governing the behavior of each node, and the capability to effect change in the behavior and state of some collection of the nodes. We will examine the roles that network topology and dynamical equations play in the context of network controllability. Our particular interest is in how noise in the network topology or dynamics influences controllability. A few examples of systems relevant to this line of inquiry include regulatory processes in cells or economies and the operation of power grids.
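For linear node dynamics, controllability reduces to the classic Kalman rank condition, which makes the topology dependence easy to probe numerically. A minimal sketch on a hypothetical 3-node directed chain:

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank condition: rank [B, AB, ..., A^(n-1) B] == n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Directed chain 1 -> 2 -> 3 with linear dynamics x' = A x + B u.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])

# Driving the head of the chain reaches every node...
B_head = np.array([[1.], [0.], [0.]])
print(is_controllable(A, B_head))   # True

# ...but driving only the tail controls nothing upstream.
B_tail = np.array([[0.], [0.], [1.]])
print(is_controllable(A, B_tail))   # False
```

Noise studies would perturb entries of A (topology) or B (driver placement) and track how often the rank condition survives.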

Participants

  • Billy Braasch
  • Alec Kirkley
  • Brennan Klein
  • Harun Šiljak

Cultural Fractals

Looking at different cladistics-like datasets on human culture, and discussing whatever kinds of complexity themes arise from them. Examples: fractals, chaos, punctuated equilibria.

Participants

  • Marjorie
  • Dries
  • Andrew
  • Kenzie

A chaos metaphor for network topology

We are studying the relationship between generative mechanisms and emergent topology of network ensembles. The aim is to probe for a new conceptual landscape for understanding complexity in network topologies.

Participants

  • Andrea
  • Anton
  • Keith
  • Shruti (?)
  • Travis
  • Xin

Does network structure affect incorporation of novel data?

What happens when one introduces a novel piece of information to a semi-structured system? Is the information purged, incorporated, or does it persist neutrally? There is a neat study of how networks seemingly become more redundant over time; specifically, the data describes protein interaction networks across ~1800 species. The idea is to re-create this data and play around with it, ideally to check how the structure of the network may influence whether or not a new protein can be incorporated.

Why proteins? Mutations are the source of evolutionary novelty. Without evolutionary novelty, an organism will struggle to adapt to a changing environment. At the same time, mutations are perceived to be mostly deleterious, causing negative fitness effects. Effectively, we are facing an evolutionary balancing act between maintaining current functionality (selection against deleterious mutations) and incorporating novelty (mutations, diversity) in order to stay adapted in a constantly changing environment. So, how can a system incorporate novel information without breaking its current functionality? You can think of proteins as a network of friends, and friends who introduce you to new friends. The practical nature of proteins is that they have limited personality, so they are easier to model. At least so we imagine.

Participants

  • April
  • Brennan
  • Keith
  • Ludvig

Computational Synesthesia: A Multi-modal Approach to Automated Text Analysis

What data comprises the meaning of a word? Extant approaches to automated text analysis assume that the meaning of words can be inferred by examining co-occurrence relations (the bag-of-words approach) and formal semantic relationships among words (the linguistics approach). However, a large body of work illustrates that meaning is multimodal; i.e. words gain their meaning through their use in embodied multimodal contexts as labels that refer to “sensory images” – a property of meaning long known to artists, as in the use of visual imagery to inspire linguistically-mediated interpretations in the domain of painting, or in the use of words to evoke mental representations of sensory images in the domain of poetry. Here we propose a method for automated textual analysis that clusters words based not on text-to-text relations but on text-to-image relations, i.e. on the structure of the sensory images associated with words. Specifically, we develop an automated approach that uses large-scale search data from Google to retrieve a set of images associated with search terms, and we provide information-theoretic measures for clustering search terms based on similarities in the color profile (the ‘palette’) of the retrieved images. We explore how this clustering method can reveal novel dimensions of meaning previously unavailable through purely text-based methods of automated text analysis, for example by finding unexpected similarities among search terms based on their color profile, or by providing novel measures of how abstract a word is based on the diversity of images associated with its use as a search term.
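One concrete candidate for the information-theoretic similarity measure is the Jensen-Shannon divergence between color histograms of the retrieved image sets. A sketch with hypothetical coarse hue histograms standing in for real retrieval results:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two histograms (base 2, bounded by 1 bit)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Made-up 5-bin hue histograms for images retrieved for three search terms.
ocean  = np.array([1,  2, 40, 30,  2])   # blue-heavy palette
sky    = np.array([2,  3, 35, 35,  3])
desert = np.array([30, 35, 3,  2,  8])   # warm palette

print(js_divergence(ocean, sky))     # small: similar palettes
print(js_divergence(ocean, desert))  # large: dissimilar palettes
```

Because the square root of JS divergence is a metric, the resulting pairwise distances feed directly into standard clustering algorithms.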

Participants

  • Douglas
  • Ethan
  • Mark
  • Aabir
  • Bhargav

Expected utility, information, and psychophysics

Slack channel: #information-utility

Project 1

Description

Applying Valence-Weighted Distance (VWD), a novel probability weighting function developed by Mikaela, to a dynamic asset pricing model to explain emergent aggregate behavior.

Probability weighting functions are applied in the context of non-expected utility theories to explain deviations of individual decision makers from the predictions of expected utility theory. A plethora of research has found that people tend to overweight small probabilities and underweight larger probabilities; however, the psychological bases of this tendency remain unclear.

Grounded in psychophysics, VWD builds on information-theoretic principles and aims to provide a psychological explanation for the shape and parameter fits of existing probability weighting functions (Tversky & Kahneman, Prelec, LinLog). In addition, VWD introduces novel (empirically testable) predictions, is sensitive to choice context, and has fewer free parameters than existing probability weighting functions.
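For reference, the characteristic overweighting/underweighting pattern that VWD aims to explain is easy to reproduce with one of the existing functions it is benchmarked against, e.g. Prelec's one-parameter form w(p) = exp(−(−ln p)^α):

```python
import numpy as np

def prelec(p, alpha=0.5):
    """Prelec (1998) one-parameter probability weighting function."""
    p = np.asarray(p, dtype=float)
    return np.exp(-(-np.log(p)) ** alpha)

probs = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(np.round(prelec(probs), 3))
```

With alpha < 1 the function overweights small probabilities and underweights large ones, crossing the identity at p = 1/e; alpha = 0.5 here is an illustrative choice, not a fitted value.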

Participants

  • Mikaela

Project 2

Analogously, it is well established within economics and psychology that the utility of money (or any other commodity) is concave -- i.e. that decision makers perceive the difference between e.g. $1 and $2 as greater than the difference between $101 and $102. The concavity of utility has been previously explained e.g. with outcome sampling and comparison in the choice environment (Stewart, 2009), the dissociation between large monetary outcomes and outcomes experienced in everyday life, and the notion of the human mind/brain as an efficient transduction system (e.g. Arkes, 1991; Summerfield & Tsetsos, 2015). As far as Mikaela knows, the latter notion has not yet been formally explored outside an experimental context, i.e. through simulating distributions of economic transactions in the actual environment in which decision makers tested in psychological or economic studies make their purchases.

Provisional idea

  • Get distributions of economic transactions in different societies
  • Simulate a decision maker that encodes monetary value optimally given the prevalence of transactions in that society
  • Compare shapes of resulting utility functions to existing empirical data (observed utility functions)
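One way to formalize step 2 under the efficient-transduction notion is to let subjective value equal an amount's quantile in the environment's transaction distribution (a Robson-style efficient-coding argument). A sketch with a hypothetical lognormal transaction environment; the distribution and its parameters are assumptions, not data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical heavy-tailed distribution of everyday transaction amounts.
transactions = rng.lognormal(mean=3.0, sigma=1.0, size=100_000)

# Candidate utility: the empirical CDF of the transaction environment,
# i.e. each amount is encoded by its quantile.
amounts = np.linspace(1, 500, 50)
utility = np.searchsorted(np.sort(transactions), amounts) / transactions.size
print(utility[:5], utility[-5:])
```

Because most transactions are small, the quantile map rises steeply at low amounts and flattens at high ones, reproducing concavity; step 3 would compare this shape against observed utility functions.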

Participants

  • Mikaela

Perceptions of aesthetic and informational content in expert and novice judgments

Slack channel: #aesthetic-information

Provisional idea

  • Method: Ratings of bifurcation and state space diagrams varied in their aesthetic and informational properties
  • Goal: Assess differences in expert and novice perception
  • Subject pool: SFI CSSS 2019 (pilot)
  • Hypothesis and applications of results: ---

Participants

  • Mikaela

Code Name: Leaf Hunters

Previous work has used quantitative measurements of leaves, such as persistent homology and fractal dimensionality, together with machine learning (e.g. CNNs), to predict the phylogenetic origin of leaves (a classification task: does the leaf belong to the species or not?). Although interesting, we find this work isn't useful for building theoretical models of plants within their environments. It is an interesting question whether environmental complexity (however that is measured!) might be related to leaf complexity (however that is measured!). This analysis could be useful for developing new understandings of how and why complexity emerges in plant evolution.

Participants

  • Levi Fussell
  • Anshuman Swain
  • Emily Coco