There’s been a surge of research seeing if we can change people’s beliefs by telling them the truth about inequality (as we’ve blogged about several times before). Understanding what’s going on here is tricky, and I was intrigued by a new paper by Jonathan Mijs that adds a further challenge: that the effects of information will differ across countries. In this post I want to explain Mijs’ study, sympathetically critique it, and then reflect on a few wider issues that it raises.
What this study found
Jonathan Mijs is a sociologist at Rotterdam/Harvard/LSE, and does lots of really interesting work in this space (I’ve previously blogged about one of his papers here). In a recently-published piece in Social Problems, Mijs – working with Christopher Hoy – adds to the small number of studies that look at the effects of giving people information about inequality in different countries, in this case Australia, Mexico and Indonesia.
The study was a survey experiment, which means that a random half of respondents saw some information about inequality. This information explained how much of the country’s wealth was held by the wealthiest 20% of people (both in words and in a pie chart). To try to strengthen the impact, they also included in all countries a couple of statements giving schematic facts about social (im)mobility. This is what it looks like in Australia:
They then asked people, “in your opinion, which of the following is the most important reason why people in [country] are [rich/poor]?” People had to choose one of the following reasons: talent, effort (both meritocratic reasons), luck, family, network (for why rich) / disability (for why poor), or ‘other’ (which in Mexico and Indonesia, was mostly about corruption).
Put simply, they found that this information changed people’s attitudes in all countries. People who saw this information were more likely to say that people were rich because of their family compared to people who didn’t see this information. However, there are also notable differences between countries:
- The extent of this effect varied (they were 14 percentage points more likely to say that people were rich because of family in Australia, but only 4pp in Indonesia and 7pp in Mexico);
- This effect was also found on people’s views on why people were poor (rather than rich) in Australia and Mexico, but not Indonesia;
- In Indonesia, information not only made people more likely to say that family explained why people were rich, but it also made them less likely to choose a meritocratic reason for being rich. In contrast, in Mexico and Australia, the information made no difference to people’s views about meritocracy (instead, people were less likely to give other non-meritocratic reasons: luck, connections, or other (corruption)).
I think that this basic point is really important: information will have different effects in different settings. However, the differences don’t go in the ways that Mijs & Hoy expected – they anticipated that effects would be smallest in Mexico (where inequality is higher and Mexicans know it), larger in Australia, and largest in Indonesia (where belief in meritocracy is strong and information is limited). But this isn’t exactly what happened.
How to think about the ‘effect of information about inequality’
However, I think there’s a broader issue here about what we mean by ‘the effect of information about inequality’. Partly this is going to depend hugely on whether the information is trusted or not. In the UK qualitative research on this, ‘in cases where the evidence appeared to contradict their original views, participants typically dismissed the evidence as “government propaganda” or “newspaper talk”’ (Knight 2015 cited in my discussion of these issues here). This in turn will depend on who is providing the information, how they provide it, and how this compares to other sorts of information.
One of the ways that comparative data is really valuable is that it makes these assumptions visible – precisely because these things are likely to vary cross-nationally. But ideally we use this as a springboard to a more sophisticated way of thinking about the effects of information about inequality. ‘Information’ isn’t a very useful term because it conceals so much variety – whether it’s trusted or not; whether it conflicts with prior beliefs or not; how central these beliefs are to wider attitudes about inequality.
And this is what is most powerful about this paper: it reminds me of all these things that are too easily ignored when people focus narrowly on specific bits of information in specific countries.
[Some methodological comments…]
I really like Mijs & Hoy’s paper, but there were a few things that I wished were different. I’ve put these here because otherwise the main post would be too long, and they’re not interesting to the casual reader. But Jonathan, Christopher, if you ever read this – your papers would be even better if you fixed these issues in your future work! As I say in the main text, there are a few ways in which I wish the paper were better – much as I do genuinely like it.
Firstly, there are bits of the stats that could be more rigorous. For example, it’s really important in comparative work to do a statistical test of the differences between countries, rather than just checking whether effects were significant in each country separately. (There’s a great simulation study that shows this that I’ve misplaced). And they don’t really test one of their hypotheses (H2) properly, I think because they don’t have enough respondents to find anything with confidence. But they should be more transparent about this.
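To make the point concrete, here’s a minimal sketch of the kind of test I mean: comparing two countries’ treatment effects directly, rather than eyeballing per-country significance. The effect sizes echo the paper’s 14pp (Australia) vs 4pp (Indonesia) figures, but the standard errors are purely illustrative numbers I’ve made up, not values from the paper.

```python
import math

def diff_of_effects_z(effect_a: float, se_a: float,
                      effect_b: float, se_b: float) -> float:
    """z-statistic for whether two independently estimated
    treatment effects differ from each other."""
    diff = effect_a - effect_b
    # SEs of independent estimates combine in quadrature
    se_diff = math.sqrt(se_a**2 + se_b**2)
    return diff / se_diff

# Hypothetical: Australia effect 14pp, Indonesia effect 4pp,
# each with an (assumed) standard error of 3pp
z = diff_of_effects_z(0.14, 0.03, 0.04, 0.03)
print(round(z, 2))  # -> 2.36: the 10pp gap itself is distinguishable from zero
```

With these made-up standard errors the cross-country gap happens to be significant, but the point is that you have to run this test: two effects can easily be ‘significant’ and ‘non-significant’ respectively while their difference is nowhere near significant.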
Secondly, it would be nice if the data and code were publicly available, and if there were a bit more transparency about the study in general. They preregistered the analyses, but these are split across three (!) preregistration protocols (1/2/3), and they never explain how these protocols relate to the paper (I found it hard to follow).
I also couldn’t see a copy of the exact wording of the whole questionnaire anywhere. This may sound minor, but there’s a real risk in survey experiments of nudging people towards giving the conclusion that you want. For example, a lovely paper by Lucy Barnes & colleagues looks at the effects of a Government ‘taxpayer receipt’ in the UK that explained (badly) what tax was spent on. However, if you look at the detail of their questions, they said to people:
“After you receive your statement, we will send you a follow up survey in which we will ask you several factual questions about the information in your tax statement. If you answer these factual questions correctly, you will be entered in a lottery to win a brand new iPad.”
I think this is not just encouraging people to read the information treatment – it’s encouraging people to instrumentally learn the numbers it contains, as you’re going to test them on it and then potentially reward them if they’re correct. I don’t think this is what Mijs & Hoy were doing, but without more transparency it’s really difficult to judge.
Finally, it’s also worth stressing that these were online surveys. You can get a pretty representative online survey in Australia, but you can’t in Indonesia and Mexico – the samples here are much younger and more educated than the wider population. And given that different people will react differently to this sort of information, this can bias our comparisons across countries. But given a limited budget, it was still worth doing this study – and it would be great if someone could give them some money to do a more expensive, face-to-face comparative survey in future.