This is the second edition of the CTED Crash Course, a regular feature that shares essential background information to help explain what we do here at the intersection of innovative technology and economic development. We hope to shed some light on the basics of solar energy, mobile technology and Internet accessibility, which are at the very core of CTED’s research. We also hope to provide some economic and developmental context for these areas of research and to examine their importance in solving some of the issues facing the developing world.
In this Crash Course, we will explain a popular technique predominantly used in the field of medicine but increasingly fashionable in the social sciences – so much so, in fact, that it is sometimes referred to as the “gold standard,” i.e. the method of choice for rigorous research. The advantage of a well-designed Randomized Controlled Trial (RCT) is that cause and effect can truly be separated – the goal of RCTs is to eliminate bias in treatment assignment. Typically, researchers implement a certain program (“the treatment”) in a few areas and compare the outcomes for participants (“the treatment group”) and non-participants (“the control group”).
In a Randomized Controlled Trial, every subject – be it a patient in the medical field, a village in development economics or a voter in political science – is assigned to receive a specific treatment intervention by a chance mechanism. Randomized evaluation can answer questions like: Was the program effective? How effective was it? Were there unintended side effects? Who benefited (most)? Who was harmed? Why did it work or not work? What lessons were learned, and could they be applied to other contexts? Was the program cost-effective?
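The chance mechanism at the heart of an RCT is simple to express in code. The sketch below is a minimal illustration with made-up subjects and simulated outcome data (not drawn from any actual study): it randomly splits subjects into treatment and control groups, then estimates the program’s effect as the difference in mean outcomes between the two groups.

```python
import random
import statistics

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical subjects: these could be patients, villages, or voters
subjects = [f"subject_{i:03d}" for i in range(100)]

# The chance mechanism: shuffle the subjects, then split evenly
random.shuffle(subjects)
treatment_group = set(subjects[:50])
control_group = set(subjects[50:])

# Simulated outcomes: here we pretend the treatment shifts the
# average outcome up by 1.0, with random noise on top
outcomes = {s: random.gauss(1.0 if s in treatment_group else 0.0, 1.0)
            for s in subjects}

# Because assignment was random, the difference in group means
# estimates the causal effect of the treatment
effect = (statistics.mean(outcomes[s] for s in treatment_group)
          - statistics.mean(outcomes[s] for s in control_group))
print(f"Estimated treatment effect: {effect:.2f}")
```

Because membership in each group is decided purely by chance, any systematic difference in outcomes can be attributed to the treatment rather than to pre-existing differences between the groups.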
The first Randomized Controlled Trials in the social sciences were conducted in the 1960s as experimental tests of a negative income tax. These evaluations were useful, but their large scale made them expensive, and because each was designed to measure the impact of a single policy with many components, it was difficult to learn in a cumulative way over time. Later on, RCTs were increasingly used by non-governmental organizations to test the effect of their (economic development) programs on people’s lives.
RCTs can be divided into different categories based on their study design. A common design in development economics is the Cluster Randomized Controlled Trial, which is used to avoid spillover effects or “contamination” among treated subjects. In a Cluster Randomized Controlled Trial, groups of subjects, rather than individual subjects, are randomized – useful when treating individuals separately is difficult. CTED, for instance, will soon conduct an RCT measuring the effect Contextual Information Portals – one of our technologies – have on secondary schools in Ghana. If we tried to separate students from the same school into those that receive the treatment (i.e., access to this technology) and those that do not, we could not be certain that there would be no spillovers into the control group (i.e., students sharing the information, etc.). An individual-level design would also be impractical here, as it would be difficult to prevent control-group students from using a computer lab installed in their own school. Therefore, implementing the technology for clusters of students (i.e., entire schools) makes more sense than giving it to randomly selected individual subjects.
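To make the contrast with individual-level assignment concrete, here is a minimal sketch of cluster randomization – with made-up school and student names, not the actual Ghana study design. Whole schools are randomized, so every student in a given school shares that school’s assignment.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Hypothetical clusters: ten schools, each with thirty students
schools = {f"school_{i}": [f"school_{i}_student_{j}" for j in range(30)]
           for i in range(10)}

# Randomize at the cluster level: shuffle the schools, treat half
school_ids = list(schools)
random.shuffle(school_ids)
treated_schools = set(school_ids[:5])

# Every student inherits the assignment of their school, so there is
# no treatment/control mixing within a school (limiting spillovers)
assignment = {student: ("treatment" if school in treated_schools
                        else "control")
              for school, students in schools.items()
              for student in students}
```

The trade-off is statistical: because students within a school tend to resemble one another, a clustered design generally needs more subjects overall than an individual-level design to detect an effect of the same size.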
A group famous for using RCTs in their projects is J-PAL (the Abdul Latif Jameel Poverty Action Lab). For instance, they administered randomized field trials to determine whether using a camera in classrooms helped fight teacher absenteeism, a pervasive problem throughout the developing world. In a different study using RCTs, they found that offering families in resource-poor settings small, non-financial incentives (such as one kilogram of lentils) to encourage immunization of their children, in addition to reliable services and education, was more effective than providing services and education alone. A child in a treatment village (where non-financial incentives were distributed) was more than six times more likely to be fully immunized than a child from the control group.
Another example of how Randomized Controlled Trials have been used in development is Harvard economist Benjamin Olken’s study on reducing corruption in Indonesia. The experiment involved 608 Indonesian villages in which roads were about to be built. The villages were randomly divided into six groups receiving different combinations of interventions: some were told they would be audited and some were not, some received invitations to accountability meetings, and some received those invitations along with anonymous comment forms. Olken found that increasing the probability of a government audit from 4% to 100% reduced missing expenditures (i.e., corruption) by eight percentage points, whereas increasing grassroots participation in monitoring had very little impact on reducing corruption.
The RCT research design is not without its critics. Some argue that the findings of a single experiment cannot be generalized to an entire population; see, for instance, Sanson-Fisher et al. in their 2007 study “Limitations of the Randomized Controlled Trial in Evaluating Population-Based Health Interventions.”
However, Glennerster and Kremer argue that thanks to the many RCT studies that have been conducted, certain patterns now emerge that are consistent across contexts and cultures. An example comes from the field of behavioral economics. In a model of rational choice, having more options and more time should not make a person worse off. However, a procrastinator may find a deadline a productivity-enhancing commitment device. In one study cited by these same authors, students in the US and farmers in Kenya both chose to restrict their options. With greater understanding – which can often come from evidence collected by RCTs, according to Glennerster and Kremer – we might harness these behaviors to improve people’s lives.