Guess what. The climate's changing quickly and we're to blame for it. Well, one good turn deserves another. We're influencing the environment's behavior. It will influence ours, as it always has. But how? Some researchers are battling over the effect of climate on the frequency of warfare and civil unrest. The research group I'm working with - led by Sara Curran and Matt Dunbar at the Center for Studies in Demography and Ecology - just wants to know how the climate will affect human migration patterns, especially between rural and urban places. And I'm betting that climate effects on migration will be, at least in the beginning, and at least if we don't let things get too out of hand, more important than the effects on warfare. And if you've read Peter Turchin's work on the factors leading to civil unrest and civil war, immigration is one of them, anyway.
Farmers' livelihoods depend on the weather. For example, the farmers in Nang Rong, Thailand, where we've focused our research, depend on the annual monsoon seasons to water their crops, especially rice. If those monsoons become more intense, or more irregular, or if the climate starts alternating between extremely dry and extremely wet conditions, crops might not do as well, and some farmers will look for other ways to get money. In fact, we've shown that's what's happened for decades since Thailand started its rapid urbanization and development. At least...we think that's what we've shown.
We're interested in "slow onset" climate change, as opposed to extreme events, like hurricanes and stuff. Slow onset change is like sea level rise, and changing rainfall patterns. These are trends that last for decades, but that you won't see on CNN's dramatic news breaks. A good way to predict future behavior in response to slow onset change is to look at migration responses to past climatic conditions. Do more or fewer people migrate during years with more or less rainfall? Okay. Now that we know that, let's look at what's likely to happen to rainfall in the future and forecast migration patterns. It's like that.
Believe it or not, climate is a difficult thing to measure, mainly because there are so many ways you can do it. You can look at global climatic processes, like the El Niño Southern Oscillation (ENSO). But this doesn't tell you much about local processes. For local processes, you can look directly at temperature. But temperature is erratic at small time scales. You could look at rainfall, but in places like Nang Rong, you'll only have one rain station, yet widely varying village microclimates.
Another way to measure the local environment is to look at satellite imagery, and that's the approach we've taken. One thing the satellites give you is the amount of light at different wavelengths that gets reflected back into space. Two important wavelengths are the near-infrared and red spectral bands. When plants are healthy, growing, and dense, more infrared light gets reflected compared to red. When plants are unhealthy and sparse, less infrared light gets reflected compared to red. A useful measure of near-infrared relative to red reflectance is the Normalized Difference Vegetation Index (NDVI).
The trouble with satellite data is that there is a lot of it. We have 24 years of NDVI, with two images every month. What's more, we've got that much data for 49 8x8 km parcels of Nang Rong land. That's 28,224 data points. How do we compress all that data into a simple, intuitive measure that predicts individual migration? Should we compare the annual average NDVI for a given year with a long-term average? What about a five-year average NDVI comparison? Or does our measure need to take into account the pattern of NDVI within a year, such as how quickly plants "green up", and how long they stay healthy? And how do we reliably measure "green up" anyway? For that matter, shouldn't we use satellite imagery to check what is actually on the landscape to make sure we're tying NDVI to the growth cycles of plants that actually matter to farmers?
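For concreteness, NDVI itself is easy to compute; the hard part is choosing the summary. Here's a minimal Python sketch (all reflectance numbers made up) of NDVI and one candidate summary, the annual anomaly relative to a long-term mean:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel or parcel:
    (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def annual_anomaly(yearly_means, year):
    """Deviation of one year's mean NDVI from the long-term mean.

    yearly_means: dict mapping year -> mean NDVI for a parcel.
    This is only one of many candidate summaries; five-year baselines
    and within-year green-up timing are the alternatives discussed above.
    """
    baseline = sum(yearly_means.values()) / len(yearly_means)
    return yearly_means[year] - baseline

# Hypothetical parcel: healthy vegetation reflects more NIR than red.
print(ndvi(nir=0.5, red=0.1))  # dense, healthy vegetation -> NDVI near 0.67

yearly = {2000: 0.4, 2001: 0.6}
print(annual_anomaly(yearly, 2001))  # 2001 was 0.1 above the baseline
```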
So in addition to the global vs. local dimension of environmental measures, there's this simple vs. complex dimension. Global measures, such as ENSO, are simpler than local measures based on NDVI because there's no spatial dimension; ENSO is a global process! Yet within ENSO-based or NDVI-based measures, you can make simpler or more complex decisions about how you measure the process. Simpler variables are cheaper to produce because they take less time, and we all know that time = $. Complex variables are more expensive, but might tell us more about how the climate affects decisions in a local context. If you're trying to predict migration in several regions at once, local variables are more expensive because there is more data to crunch, no matter how simple the measure is. By comparison, there is only one ENSO dataset for every region.
If we'd like to quickly and cheaply measure the effect of climate on migration, we should prefer simpler, global measures. So the questions that our research group is turning to are:
- How complex do our environmental measures need to be to predict migration behavior well?
- And how much do we gain in predictive power for more costly measurement methods?
Stay tuned for answers.
Today, while looking up independent scientist Ethan Perlstein (recently profiled in Science magazine), I came across something called Microryza, a crowdfunding engine that lets you "follow & fund" science. As everyone else on the Internet is saying, it's kind of like the Kickstarter of science funding. What I like about Microryza is that the focus is the science. Unlike Kickstarter or Rockethub or whatever, you aren't required to provide tangible rewards to your backers. The science is the reward, and the scientist gets to focus on producing and communicating it through Microryza's beautiful online interface.
And here's my favorite part of their FAQ:
Do I have to be a student or professor at a university?
No, we love to host projects from people outside of research institutions.
So here's my postdoctoral scientrepreneurship plan so far. Here are the things I could potentially focus on and seek funding for.
1. Sound Cheks, my political fact checking research institute project.
2. My research on hawkish cooperation.
3. My collaborative research with Sara Curran, Matt Dunbar, and Jacqueline Meijer-Irons on the effect of climate on migration.
4. Developing a personal finance education program in the Commonwealth of Dominica.
5. Continuing to publish my work on inferring social dominance structure in collaboration with Zack Almquist (soon to be at U of Minnesota).
Obviously, I can't do all of this at the same time. The way I see it, item 5 is a given. After my dissertation, we'll have two more papers we could still publish, and the ball will have been put in Zack's court. Item 4 is something I could incubate over a few years and then develop over a summer and (hopefully) make self-sustaining through local Dominicans once I set it up. As for items 1, 2, and 3, I'm going to apply for funding for all of them simultaneously, see what I get, and allocate my time accordingly. Here's the funding plan.
- Upstart campaign (Sound Cheks ostensibly, but really it will help me do all of this if I have patron investors to help me free up time from working for somebody else).
- Microryza campaign (Sound Cheks measurement models and soundness checking personnel).
- Microryza campaign (hawkish cooperation).
- Microryza campaign (migration and climate research).
- National Science Foundation Interdisciplinary Behavioral and Social Science Research Postdoctoral Research Fellowship (migration and climate research).
- Harry Guggenheim Foundation Research grant (hawkish cooperation).
- Rockethub (Sound Cheks UX, webdev, hardware, and alpha testing).
Proceedings B just published an interesting article by theoretical biologists McNally and Jackson called "Cooperation creates selection for tactical deception". The authors analyzed a simple mathematical model and reviewed comparative data on cooperation and deception across the order Primates to argue...well...exactly what their title says. Irrelevant side note: the Jackson author's first name is Andrew. That is, he shares a name with one of the most badass, cantankerous, and dare I say murderous of American Presidents.
Anyway, the mathematical analysis reveals that, if cooperation evolutionarily prevails over "honest" cheating (where cheaters don't try to hide their cheating), then a new strategy can invade that tactically deceives cooperators (by hiding or misrepresenting their behavior). But this can only happen if cooperators aren't good at recognizing rare cheaters. In that case, you'd expect a mixed population of cooperators and deceptive cheaters. The equilibrium ratio of cooperators to deceptive cheaters depends on how difficult it is to deceive relative to how good cooperators are at recognizing both cheaters and deception. The more difficult deception is and the easier recognizing it is, the greater the ratio of cooperators to deceptive cheaters.
What's really interesting about the mathematical result is that if cooperators are terrible at catching cheaters, then "honest" cheaters can invade the mixed population of cooperators and deceivers because they don't pay the cost of deception that deceivers do, but they still reap all the benefits of cheating. In that case, cooperation prevails. Hurray. But if cooperators are good at catching cheaters, it pays to deceive ... at least for rare deceptive cheaters. I'm pretty sure that these mathematical results make sense and you might guess at them without doing any calculus. That's not a mark against the models. Instead, it's helpful when a mathematical model with explicit, formal assumptions confirms our intuition, which derives from implicit assumptions and informal logic.
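If you want to play with this style of argument yourself, here's a generic replicator-dynamics sketch in Python. To be clear, the payoff matrix below is a plain Prisoner's Dilemma, not McNally and Jackson's model; it just shows how you'd check numerically whether a rare strategy can invade a resident population:

```python
def replicator_step(freqs, payoff, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i * (f_i - mean f),
    where fitness f_i = sum_j payoff[i][j] * x_j."""
    fitness = [sum(payoff[i][j] * freqs[j] for j in range(len(freqs)))
               for i in range(len(freqs))]
    mean_fit = sum(x * f for x, f in zip(freqs, fitness))
    new = [x + dt * x * (f - mean_fit) for x, f in zip(freqs, fitness)]
    total = sum(new)
    return [x / total for x in new]  # renormalize against numerical drift

def can_invade(payoff, resident, invader, eps=1e-3, steps=5000):
    """Start the invader at tiny frequency eps against the resident
    strategy and see whether it grows over time."""
    freqs = [0.0] * len(payoff)
    freqs[resident] = 1.0 - eps
    freqs[invader] = eps
    for _ in range(steps):
        freqs = replicator_step(freqs, payoff)
    return freqs[invader] > eps

# Hypothetical 2-strategy payoff matrix (Prisoner's Dilemma) -- NOT the
# authors' cooperator/deceiver/honest-cheater model, just an illustration:
pd_payoff = [[3, 0],   # cooperator vs (cooperator, defector)
             [5, 1]]   # defector vs (cooperator, defector)
print(can_invade(pd_payoff, resident=0, invader=1))  # defection invades: True
```

The invasion analyses in the paper work the same way, just with more strategies and payoffs that include the cost of deception and the probability of being caught.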
The authors argue that their mathematical model implies a positive correlation between the number of cooperative strategies and the number of deceptive strategies in a species. Actually, their model implies that, under some very specific circumstances, we'd expect a positive correlation between the frequency of cooperators and the frequency of deceivers. That said, it's not too much of a logical leap.
To examine this prediction, the authors did a comparative analysis of species in the order Primates (to which we belong). The data compiled the presence or absence of different types of cooperative and deceptive behaviors. They used a method called independent contrasts to examine the relationship between cooperativeness and deception, controlling for the phylogenetic relationships among species and for the research effort into a particular species (because more research yields more observations of different types of behaviors). Here are scatterplots of the independent contrasts with best-fitting lines through the points.
The left plot includes only primates in the wild (because behavior in the wild is more relevant than behavior in captivity). The right plot includes both free-ranging and captive individuals. In both cases, the positive correlation between cooperativeness and deception rate is statistically significant, if a bit weak in the case of the full data set.
What's fascinating about the empirical results is that there is no statistically significant relationship between neocortex size (a measure of cognitive capacity) and deception rate when controlling for cooperativeness. This goes against the grain of the Machiavellian intelligence hypothesis, which argues that there should be a positive correlation between deception rate and neocortex size.
But is the non-significance of neocortex size simply due to a collinearity problem? A collinearity problem happens when you fit a regression in which two of the predictor variables are highly correlated. The effect of collinearity is that it inflates the confidence intervals of your regression coefficients (which measure the relationship between the outcome variable and the predictors). Wider confidence intervals mean larger p-values and lower statistical significance. The number of cooperative behaviors in a species and its neocortex size might be correlated. Indeed, R.I.M. Dunbar's classic study found that group size is correlated with neocortex size in primates, and group size is a problematic but still useful proxy for social complexity.
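For what it's worth, the standard collinearity diagnostic is the variance inflation factor (VIF). Here's a sketch with made-up data (the variable names are mine, not the authors') showing how a nearly collinear predictor gets a huge VIF:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor for column j of predictor matrix X:
    1 / (1 - R^2) from regressing X[:, j] on the other columns."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    others = np.column_stack([np.ones(len(y)), others])  # add intercept
    beta, *_ = np.linalg.lstsq(others, y, rcond=None)
    resid = y - others @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

# Made-up example: 'neocortex' strongly tracks 'coop', so its VIF blows up.
# Rules of thumb flag VIFs above roughly 5-10.
rng = np.random.default_rng(0)
coop = rng.normal(size=100)
neocortex = coop + rng.normal(scale=0.1, size=100)  # nearly collinear
research_effort = rng.normal(size=100)              # independent
X = np.column_stack([coop, neocortex, research_effort])
print(vif(X, 1))  # large, because neocortex is nearly redundant with coop
print(vif(X, 2))  # near 1, because research_effort is independent
```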
And this is why journals need to allow more room for the methods section: because we should never penalize scientists for doing collinearity diagnostics.
Last week, I gave people advice on how to collect data from multiple informants about social dominance relations within households. I'm still working with that data, so let's keep talking about it. Today, I present a very preliminary finding that's entirely tangential to my dissertation, but potentially interesting. Basically, it looks like people are more willing to say which of their friends is physically stronger or more irascible than who is more likely to win a serious disagreement...at least in one rural village in the Commonwealth of Dominica.
Let's review the data collected. I went to 92 households. I tried to ask every single household member 13 or older some questions about the relationship between every possible pair of fellow household members who are also 13 or older. The three questions that interest us today are (translated into Dominican English or Dominican French Creole):
1. "If these two people got into a serious disagreement, which of them would more likely get what they want?" (more likely to win a row)
2. "Which of these two people can lift a heavier load?" (physically stronger)
3. "Which of these two people has a more fiery temper?" (more irascible)
For none of these three questions did I force respondents to make a choice. That is, a respondent could say that the two members of the household pair are equally likely to win a serious disagreement, equally strong, or equally irascible.
One of the challenges I faced with these sorts of questions is that it may be taboo to acknowledge that one person is more [insert adjective] than someone else. Indeed, I allowed people to avoid making a decision on who is more likely to win a serious disagreement because my research assistants told me I might be considered rude if I didn't. By comparison, it isn't rude to say that one person is physically stronger than another. Moreover, I've heard people describe others jokingly as one who "ke fache pli vit" (literally, "will get mad faster"). That said, you might think it would be considered more rude to describe someone as temperamental than as strong, relative to someone else.
A crummy, half-assed way to explore these questions is just to calculate for each of the three questions the proportion of reports on household pairs that are considered roughly equivalent, or "half and half" as Dominicans would put it.
So here are the results of this half-assed study, which completely ignores sampling error, bias, and missing data problems (so take it with a teaspoon of salt):
1. About 11% of reports considered the household pair to be equally likely to win a disagreement.
2. Compare that to about 3% of reports that considered a household pair to be equally strong.
3. Compare also to about 6% of reports that considered a household pair to be equally irascible.
These figures appear to agree with my experience-based assumptions about the sorts of comparisons that rural Dominicans are more willing to make.
I also asked people how certain they felt about their choice for who would more likely win a disagreement. Many respondents might consider this a second chance to adhere to the taboo about comparing people's ability to win a disagreement (if such a taboo exists). Specifically, I asked respondents if they were "not at all", "a little bit", or "almost completely" certain in their choice. If we take "not at all" answers as equivalent to "half and half" answers, then the proportion of "half and half" reports bumps up to about 19%. I don't have a similar question for the "stronger" or "more irascible" questions, but I wish I did!
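For the record, the tally itself is trivial. Here's a sketch in Python with a hypothetical coding of the reports (not my actual codebook), including the trick of folding "not at all" certain answers into the "half and half" pile:

```python
def half_and_half_rate(reports, treat_uncertain_as_equal=False):
    """Proportion of informant reports that call a pair 'half and half'.

    Each report is a dict with keys 'answer' ('a', 'b', or 'equal') and
    'certainty' ('not at all', 'a little bit', 'almost completely').
    The coding scheme here is hypothetical, just for illustration.
    """
    def is_equal(r):
        if r['answer'] == 'equal':
            return True
        return treat_uncertain_as_equal and r['certainty'] == 'not at all'
    return sum(is_equal(r) for r in reports) / len(reports)

# Tiny made-up batch of reports:
reports = [
    {'answer': 'a', 'certainty': 'almost completely'},
    {'answer': 'equal', 'certainty': 'a little bit'},
    {'answer': 'b', 'certainty': 'not at all'},
    {'answer': 'a', 'certainty': 'a little bit'},
]
print(half_and_half_rate(reports))                                 # 0.25
print(half_and_half_rate(reports, treat_uncertain_as_equal=True))  # 0.5
```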
And don't worry. The statistical methods I'll employ in my dissertation are a lot more sophisticated than this. I've got a paper in the works with my collaborator Zack Almquist in which we will estimate not only the rate at which informants consider household pairs to be equal, but also the rate at which informants incorrectly label a pair to be equal, and the rate at which different informants disagree (that word again!) on a comparison between two household members.
Last year, I lived in Dominica, where I collected data on the social dominance structure within rural households. For each pair of household members over age 12, I wanted to ask each household member over age 12 who was more likely to win a serious disagreement, and how certain the informant was in their response. I also asked informants who in a household pair they thought was physically stronger, and who they thought had a more fiery temper. This is some pretty complicated data, and I've learned the hard way how to collect it to minimize data entry errors. So if you're a field anthropologist or something similar and you're collecting data on unordered pairs of individuals from multiple informants...listen close.
1. This stuff is complicated. Respect that!
Suppose you visit h households. Each household has n(h) members, plus up to five other "home people", the colloquial term for villagers who share meals and chores with this household on a daily basis, but don't necessarily sleep in it. In total, there are m(h) home people, including the household members (thus m(h) - n(h) extra-household home people). For each household, there are p(h) = m(h)(m(h) - 1)/2 unordered pairs of home people. If you ask each household member about each of the unordered pairs, you will end up with n(h)p(h) informant reports for household h. Simple, right?
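If you want to feel how fast this grows, here's the arithmetic as a couple of Python one-liners:

```python
def n_pairs(m):
    """Unordered pairs among m home people: m*(m-1)/2."""
    return m * (m - 1) // 2

def n_reports(n_members, m_home):
    """Informant reports for one household: each of the n household
    members (13+) is asked about every unordered pair of home people."""
    return n_members * n_pairs(m_home)

# A household with 4 members plus 2 extra home people (6 home people total):
print(n_pairs(6))       # 15 unordered pairs
print(n_reports(4, 6))  # 60 informant reports -- it adds up fast
```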
No. You need at least four linked tables to do this right. First, you need a table for individual villagers that tells you the household they live in, if known (note that you will get the names of extra-household home people who do not necessarily live in the set of households you visit!). Second, you need a table to store the affiliations of home people to households (because a villager could be a home person to multiple households in the village!). Third, you need a table of the unordered pairs of home people. Finally, you need a table that stores each informant's reports on each of the unordered pairs of home people affiliated with the informant's household. If you do any of this incorrectly, you are fucked. Thankfully, I designed my database correctly.
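To make the four-table idea concrete, here's a minimal sqlite sketch. The table and column names are my paraphrase for this post, not my actual Access schema:

```python
import sqlite3

# Minimal sketch of the four linked tables. Names are illustrative.
schema = """
CREATE TABLE villager (
    villager_id  INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    household_id INTEGER                 -- NULL if residence unknown
);
CREATE TABLE home_person_affiliation (   -- a villager can be a home person
    villager_id  INTEGER REFERENCES villager(villager_id),
    household_id INTEGER,                -- ...to multiple households
    PRIMARY KEY (villager_id, household_id)
);
CREATE TABLE dyad (                      -- unordered pair: enforce a < b
    dyad_id    INTEGER PRIMARY KEY,
    villager_a INTEGER REFERENCES villager(villager_id),
    villager_b INTEGER REFERENCES villager(villager_id),
    CHECK (villager_a < villager_b)
);
CREATE TABLE informant_report (
    informant_id INTEGER REFERENCES villager(villager_id),
    dyad_id      INTEGER REFERENCES dyad(dyad_id),
    question     TEXT,       -- e.g. 'win_disagreement', 'stronger'
    answer       INTEGER     -- villager_a, villager_b, or NULL for 'equal'
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
print([row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")])
```

The `CHECK (villager_a < villager_b)` constraint is the important trick: it makes each unordered pair representable exactly one way, so duplicates can't sneak in.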
2. Enter the data directly into a computer.
You have four linked tables. The number of unordered pairs increases with the square of the number of household members. You don't have time or cognitive capacity to fiddle around with long lists of home people dyads, villagers, and the like. Don't try it. Create a graphical user interface that links to your database (MS Access is a good way to do this; just make a bunch of forms). You will make fewer data entry errors because you won't be copying things a bunch of times, and you will be able to query information quickly. Thankfully, I did this right, too. But then I went wrong.
3. Create your dyad records and your records for informant reports on dyads before collecting the data on those dyads!
After I got to the field, I had to do a major overhaul on my survey questionnaire, which meant I had to completely redo my graphical user interface forms. I was in the field, lonely, and missing my family. So I was not in my right mind. I also was running short on time. So I decided I would just enter dyads by hand as I collected the data rather than create a SQL query that would automatically create the dyads after I entered new individuals. Granted, that is actually a difficult thing to code into an MS Access form (which is what I was using). Still, by entering the dyads by hand, I increased the probability of errors. Turns out that I have 72 missing dyads (sounds like a whole lot, but there are thousands of dyads in my dataset). That's bad because none of those dyads were represented in any of my informant reports on social dominance networks for the households that include the missing dyads. That means I will have to impute that data. While I know some pretty cool data imputation methods, this of course won't totally solve the problem.
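For anyone facing the same choice: the query I should have written is just a self-join. Here's a sqlite sketch (Access SQL is similar in spirit) that generates every within-household unordered pair automatically:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE home_person (villager_id INTEGER, household_id INTEGER)")
conn.executemany("INSERT INTO home_person VALUES (?, ?)",
                 [(1, 10), (2, 10), (3, 10), (4, 20), (5, 20)])

# Self-joining with a.villager_id < b.villager_id generates each unordered
# pair within a household exactly once -- no hand entry, no missed dyads.
dyads = conn.execute("""
    SELECT a.household_id, a.villager_id, b.villager_id
    FROM home_person a
    JOIN home_person b
      ON a.household_id = b.household_id
     AND a.villager_id < b.villager_id
    ORDER BY a.household_id, a.villager_id, b.villager_id
""").fetchall()
print(dyads)  # [(10, 1, 2), (10, 1, 3), (10, 2, 3), (20, 4, 5)]
```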
Similarly, I decided I would just enter new informant report records by hand, manually entering in the individual identifiers of the individuals in the dyad the informant was responding to. I also manually entered the identifier into the field storing which dyad member was more likely to win an argument instead of having the field force me to choose one of the dyad members. Again, this would take more time to code, but so does reconciling errors. I haven't counted the errors in my informant report data, but I know there are a few. Many of them are likely reconcilable because the mistakes will be obvious. But not all.
I'll relate these and other tips at my upcoming presentation at the American Anthropological Association meetings (in November). I'm presenting in a session, led by Stanford's Jamie Jones, on Bayesian inference of social dominance networks from multiple informant reports.
My latest adventure as Teaching Assistant for Dr. Kathy O'Connor's Reproductive Endocrinology Lab class was a discussion about reproductive ecology, part of which I led. (Reproductive ecology is the study of the factors that influence the modulation of reproductive function, in particular the environmentally specific availability of energy and nutrients.) Class started with Kathy's review of the aims and history of reproductive ecology (from an anthropologist's perspective). She segued into a discussion of some key controversies in the study of female reproductive ecology, and how this ties in with a new interest among reproductive ecologists in the male reproductive axis.
One of the most fascinating questions here is how responsive the female reproductive axis is to environmental cues (such as energy and nutrient intake), and why. A related question is whether the male reproductive axis is more or less robust to environmental change than the female axis and, again, why. To what extent and in what ways is the sex-specific robustness of the reproductive axis adaptive? Our students, who range from undergraduates in Anthropology to a Communications graduate student, all gave convincing, controversial (in a good way), and (most importantly) testable answers to these questions. It was a great primer on how to think from the perspective of a reproductive ecologist, which will help the students understand why anthropologists are teaching a class on laboratory methods in endocrinology.
So why do anthropologists do endocrinology? The first day of class, Kathy discussed the contrast between the anthropological and biomedical perspectives. Anthropology is holistic, evolutionary, and cross-cultural. The biomedical perspective is more in tune with basic research on physiology, but it determines what is normal based on studies of W.E.I.R.D. (Western, Educated, Industrialized, Rich, and Democratic) populations. But what is normal? Specifically, what are normal estrogen levels? Testosterone levels? A physician might give you a range of female sex hormone values within which a woman can conceive. But an anthropological endocrinologist would show you a graph of Bangladeshi women's sex hormone profiles that are well below that range, and yet they're conceiving and giving birth!
So what is the solution? Kathy teaches that we need to quantify and explain variation in endocrinological function across and within individuals and populations. The sticky part is understanding when to focus on one level of variation and why. It's that challenge I focused on in the section of the discussion that I led.
Near the end of class, we discussed an article by Bribiescas on the evolutionary tradeoffs that the human male reproductive axis faces between survival and reproduction. I showed several of the charts and graphs that Bribiescas used, and asked the students to identify the levels of variation he was focusing on or masking in each plot, and why. I asked them what the consequences of masking certain levels of variation might be for the conclusions that Bribiescas was making. We also discussed how he was employing a cross-cultural perspective to outline key features of male reproductive ecology.
Throughout, I hinted to the students at two "mystery" levels of variation that all of Bribiescas's plots masked and which are extremely important in practical endocrinology (and also to the highfalutin hypotheses that we test using hormone assays). To my surprise, one of the students solved one of the mysteries by noting that some of Bribiescas's graphs showed urinary hormone profiles, others salivary profiles, and still others serum profiles. I seriously jumped for joy that she figured it out. I know, I'm a nerd. Anyway, different matrices (urine, saliva, serum) can tell us different things about reproductive function. Sometimes, people forget that.
The students haven't yet figured out what the other important source of variation is. And I'm not going to give it away! Stay tuned for the answer once they get it. They're pretty smart, so I don't doubt they will. I'll probably jump for joy again, nerd-like, when they do.
Next quarter I'll be the teaching assistant for my department's reproductive endocrinology course. In that course, we use biochemistry to measure the amount of hormones in urine, saliva, and other body fluids. In the process, some interesting hypotheses are tested...and lots of sweat is spilled over whether or not you're performing lab procedures properly.
Being the teaching assistant for this course is a daunting task if you're a graduate student who happens to be an endocrinologist. I'm not that. So this will be a big challenge for me. Thankfully, I like challenges and often overcome them.
I've taken this course before, but I need to review the lab procedures and concepts to get myself back up to speed. Today was the first day of my lab work crash review. I have to say. Picking up a pipette again after a few years is kind of like riding a bike. Except you only use your thumb, and it's a lot more precise.
Tomorrow, I prepare my samples (of urine and saliva...ew....but still, cool), standards (help you figure out the limits of detection for your assay, among other things), and controls (help you figure out if your assay sucks or not by remeasuring a sample that has a known hormone concentration). The day or two after that, we'll run the assays. We'll be measuring cortisol in matched urine and saliva specimens, and progesterone in urine. It is only three plates, which is far fewer than I ran back when I did my class project.
Many thanks to the folks at the lab for their assistance in my retraining. I'll maintain this log of my lab course teaching assistance experience.
The U.S. Census Bureau just Tweeted their latest data visualization, which depicts the components of metropolitan area population change for 2010-2011. How did they decide to visualize this data? By listing a bunch of metropolitan area names followed by a series of four boxes, concatenated by plus, minus, or equal signs. The first box in the series represents natural increase (births minus deaths). The second represents net international migration (immigrants minus emigrants). The third box represents net domestic migration (in-migrants minus out-migrants). The fourth box is the sum of those three components.
Where to begin? First, population change has one dimension. So why depict it with area? That just increases the lie factor. Second, why not just put a table of numbers added and subtracted from one another instead of small multiples of different-sized squares? I can differentiate the sizes of numbers about as well as I can differentiate the sizes of squares that aren't that different in area.
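In case it's not obvious what I mean by "just a table", here's a sketch in Python with made-up numbers (the real Census figures would slot right in):

```python
def change_table(rows):
    """Render components of population change as a plain text table.

    rows: (metro, natural_increase, net_international, net_domestic).
    The metro names and numbers here are made up for illustration.
    """
    header = f"{'Metro area':<14}{'Natural':>9}{'Intl':>7}{'Domestic':>10}{'Total':>8}"
    lines = [header]
    for metro, nat, intl, dom in rows:
        lines.append(f"{metro:<14}{nat:>9}{intl:>7}{dom:>10}{nat + intl + dom:>8}")
    return "\n".join(lines)

print(change_table([
    ("Springfield", 1200, 300, -500),
    ("Shelbyville", 800, 150, 250),
]))
```

Aligned digits let the eye compare magnitudes directly, which is exactly what different-sized squares make hard.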
"#dataviz" is pretty and nifty and hot and all, but I'd rather stick with the visual display of quantitative information. Or sometimes, just a goddamn table.
Over the last year, My n of 3 chronicled the trials and tribulations of my Fulbright Scholarship in the Commonwealth of Dominica, along with some of the side projects I was working on during my spare time there. My Fulbright Scholarship is over, and I'm back in the United States. So what now?
One theme will continue: my attempt to relate social science to my personal life, while recognizing the pitfalls of doing so. More practically, this blog will, over the next two years, be a dissertation blog. Whereas before I chronicled the trials and tribulations of collecting my dissertation data, now I will chronicle the challenges I face in "cleaning" it, analyzing it, reporting those analyses, and trying to publish. Basically, it will be a public notebook on my dissertation. I'll also continue to post commentary on recent scholarly articles.
One more thing. During Winter Quarter at University of Washington, I'll be the Teaching Assistant for Dr. Kathleen O'Connor's Reproductive Endocrinology Lab course, which I took a few years ago and very much enjoyed. Some of you might not know this but...I'm not a reproductive endocrinologist or an endocrinologist of any kind. So this teaching assistant position will be a challenge, but one that I look forward to meeting. Since my preparation for the position will probably include practicing some of the lab work, which involves the measurement of hormones in urine and saliva, I'm betting there will be more than a few chances to literally report on my n of 1 or 2 or 3, if you catch my drift. Because, hey, you have to get your practice urine and saliva from somewhere!
Shakti Lamba and Ruth Mace have a new article in Proc B about the evolution of fairness and its cultural diversity among humans. The question they address is the extent to which conformity within cultures explains the cross-cultural diversity of fairness norms. The authors believe that adaptive challenges within shared environments better explain fairness norm diversity.
I'll let their abstract tell you what they did.
Conceptions of fairness vary across the world. Identifying the drivers of this variation is key to understanding the selection pressures and mechanisms that lead to the evolution of fairness in humans. Individuals' varying fairness preferences are widely assumed to represent cultural norms. However, this assumption has not previously been tested. Fairness norms are defined as culturally transmitted equilibria at which bargainers have coordinated expectations from each other. Hence, if fairness norms exist at the level of the ethno-linguistic group, we should observe two patterns. First, cultural conformism should maintain behavioural homogeneity within an ethno-linguistic group. Second, bargainers' expectations should be coordinated such that proposals and responses to proposals should covary. Here we show that neither of these patterns is observed across 21 populations of the same ethno-linguistic group, the Pahari Korwa of central India. Our findings suggest that what constitutes a fair division of resources can vary on smaller scales than that of the ethno-linguistic group. Individuals' local environments may play a central role in determining conceptions of fairness.
The abstract lays out the two patterns the authors believe are necessary conditions for cultural conformity to play an important role. Yet neither of these conditions is actually necessary for conformity to have been important in shaping the cultural diversity of fairness norms. Moreover, the authors do not provide much justification for why they are necessary conditions.
Cultural conformism need not maintain behavioral homogeneity within ethno-linguistic groups. If cultural transmission is rapid enough, cross-ethno-linguistic group transmission occurs often enough, cultural transmission networks are geographically constrained enough, or some combination of these three factors, there could be substantial heterogeneity within ethno-linguistic groups.
Bargainers' expectations need not coordinate such that proposals and responses to proposals covary. Different social norms can affect different behaviors, and proposals and responses to proposals are two different behaviors. The number of ways in which proposer and respondent behavior could be uncoordinated while cultural conformity (and, moreover, shared environment) still matters is probably staggeringly large, given the complex ways in which culture and cognition interact on evolutionary and behavioral timescales.