This post will give you a first overview of the project. Specifically, I will provide some information about the baseline survey (including sampling technicalities) and the technologies we will use for the mobile phone follow-up.
We are charting new territory here, so we expect to learn a lot from both our mistakes and your comments. (The only project I know of that is to some extent comparable to ours in approach and scope is Brian Dillon’s work in rural Tanzania.)
The aim of this project is to build a mobile-phone-based panel in Dar es Salaam, allowing for high-frequency, low-cost gathering of up-to-date survey data. To get a good sense of our sample respondents and to understand and track individual changes, we first conduct a representative face-to-face baseline survey. The baseline questionnaire focuses on the quality of public services such as water supply, waste collection, hospitals and education, but also includes items on household characteristics, wealth, access to information, security, citizen participation, elections, and travel time to work (the complete questionnaire in English will be available soon). We draw a stratified sample at different administrative levels (see box below), interviewing a total of 550 citizens.
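To make the sampling idea concrete, here is a minimal sketch of a proportional stratified draw. The administrative units and household counts below are purely illustrative and are not our actual sampling frame, which follows the stratification described in the box.

```python
import random

# Illustrative sketch only: strata names and household counts are
# invented, not the real Dar es Salaam sampling frame.
def draw_sample(frame, total=550, seed=42):
    """Allocate interviews across strata in proportion to stratum size,
    then draw households at random within each stratum."""
    rng = random.Random(seed)
    n_total = sum(len(households) for households in frame.values())
    sample = {}
    for stratum, households in frame.items():
        n = round(total * len(households) / n_total)
        sample[stratum] = rng.sample(households, n)
    return sample

frame = {
    "Kinondoni": [f"hh_{i}" for i in range(300)],
    "Ilala": [f"hh_{i}" for i in range(200)],
    "Temeke": [f"hh_{i}" for i in range(250)],
}
sample = draw_sample(frame, total=550)
```

The proportional allocation keeps each stratum represented according to its size; the real design may of course allocate differently across levels.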
At the end of the face-to-face interview (which typically lasts 1.5–2 hours), the respondent receives 3,000 Tsh (about $2) in mobile phone credit and is asked to participate in the weekly follow-up survey, for which he/she will receive a credit top-up of between 300 and 500 Tsh ($0.20–$0.30) per wave.
In principle, four different technologies will be used to gather the weekly panel data. I will probably elaborate a bit more on the different technologies and their implications soon, but for now, these are the options we will be using:
- Interactive Voice Response (IVR)
The respondent receives a text message each Friday morning, asking him/her to “beep” a number (i.e. call and hang up) before Sunday evening. The beep triggers the IVR system to call back, and a pre-recorded voice menu guides the respondent through the questionnaire.
- Wireless Application Protocol (WAP)
Respondents whose phones have WAP capabilities, and who know how to use them, will be asked to download a survey application developed by DataVision to gather survey data. Just like with IVR, the respondent receives a text message on Friday morning asking him/her to fill in the questionnaire before Sunday evening.
- Unstructured Supplementary Service Data (USSD)
A simple form of communication between the mobile phone user and the provider’s server, supported by all GSM phones. It is widely used in Tanzania and other African countries to transfer money and pay bills (see http://en.wikipedia.org/wiki/M-Pesa), offering an alternative to bank transfers, as many people don’t have bank accounts. (As far as I know, in the US and most European countries it is mainly used to request information such as prepaid credit, by typing *somecode#.) Again, the respondent receives a text message on Friday morning to start the questionnaire. Unfortunately, due to technical difficulties, it is not yet clear at what point during the survey we will be able to use USSD.
- Call Center
Possibly the most convenient (for the respondent) and accurate mode of data collection – and also the most expensive one. During the baseline interview, each respondent gets to pick a day and time (during the weekend) to receive the weekly call from the call center agent.
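To give a flavour of how a session-based channel like USSD works from the server side, here is a toy sketch: the handset sends the user’s keyed reply, and the server returns the next prompt until the questionnaire ends. The question texts and the flow are invented for illustration and are not our actual questionnaire.

```python
# Toy USSD-style survey flow; questions are made up for illustration.
QUESTIONS = [
    "Q1. Did your household have piped water this week? 1=Yes 2=No",
    "Q2. Was your garbage collected this week? 1=Yes 2=No",
]

def ussd_step(session, reply):
    """Advance the session by one step: record the reply (if any) and
    return the next prompt, or None when the questionnaire is done."""
    if reply is not None:
        session["answers"].append(reply)
    i = len(session["answers"])
    return QUESTIONS[i] if i < len(QUESTIONS) else None

session = {"answers": []}
prompt = ussd_step(session, None)   # opening the session shows Q1
prompt = ussd_step(session, "1")    # answering Q1 shows Q2
prompt = ussd_step(session, "2")    # answering Q2 ends the session
```

In a real deployment the provider’s USSD gateway would manage the session and timeouts; this sketch only shows the branching logic.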
The reason for using this rather broad range of technologies is that we want to look into the implications of the mode of data collection for attrition (panel mortality) and data quality. Furthermore, the process of getting all these channels up and running will teach us a lot about the obstacles and advantages that come with each technology (much more so than if we focused only on, say, IVR). To be able to estimate these effects, we assign respondents to the different groups, as described by the graph below.
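A minimal sketch of what such a mode assignment could look like in code. The actual group sizes and assignment scheme are the ones shown in the graph; the balanced round-robin below is only an assumption made for illustration.

```python
import random

# The four modes described above; equal group sizes are an assumption.
MODES = ["IVR", "WAP", "USSD", "Call Center"]

def assign_modes(respondent_ids, seed=1):
    """Shuffle respondents, then deal them out round-robin so that
    group sizes differ by at most one."""
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)
    return {rid: MODES[i % len(MODES)] for i, rid in enumerate(ids)}

groups = assign_modes(range(550))
```

Shuffling before dealing makes the assignment random while keeping the groups balanced, which helps when comparing attrition and data quality across modes.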
Similarly, we are interested in the effects of the size of the incentive, i.e. the phone credit the respondent receives for completing the weekly mobile phone surveys. We therefore vary this amount so that the households of each mtaa (street) receive either 300, 400 or 500 Tsh, assigned randomly per mtaa. All respondents within the same mtaa receive the same amount, so as to avoid upsetting respondents, who obviously should not be aware that others are getting more or less. While we had some difficulty convincing the team of enumerators of the necessity of doing this (they pointed out how unfair it was – and indeed it is), we are quite sure that it will give us very useful information on whether and how incentives matter.
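For concreteness, a minimal sketch of the mtaa-level randomization. The mtaa names are made up, and cycling through a shuffled list of mtaas is just one reasonable way to keep the three amounts balanced; it is not necessarily our actual assignment code.

```python
import random

# Cluster-level randomization: every household in a mtaa gets the same
# top-up amount. Mtaa names below are invented for illustration.
def assign_incentives(mtaas, amounts=(300, 400, 500), seed=7):
    """Shuffle the mtaas, then cycle through the amounts so each
    amount is assigned to roughly the same number of mtaas."""
    rng = random.Random(seed)
    mtaas = list(mtaas)
    rng.shuffle(mtaas)
    return {m: amounts[i % len(amounts)] for i, m in enumerate(mtaas)}

incentives = assign_incentives(["Mwembeni", "Mtoni", "Keko"])
```

Randomizing at the mtaa level rather than the household level is what keeps neighbours from receiving different amounts, at the cost of some statistical power.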
As you have probably noticed, the graph on the right always tells you where in the process we are. At the time of writing, our interviewers have started conducting baseline interviews in the district of Temeke, which means that we should soon be getting our first data on panel respondent characteristics.
This was it for now. More will follow soon.