# Interpreting Hunger Rates (and their “Changes”)

Categories: Yellow Pad

*Mr. Albert is Senior Research Fellow at the Philippine Institute for Development Studies. This piece was published in the August 22, 2011 edition of BusinessWorld, pages S1/4 to S1/5.*

In March 2011, the Social Weather Stations (SWS) reported that 20.5 percent of households experienced hunger, which was “up” from the November 2010 rate of 18.1 percent. At the time, President Aquino had a knee-jerk reaction to the results, suggesting possible flaws in the design of the surveys conducted by SWS.

In its June 2011 round, the SWS reported a decrease in the hunger rate to 15.1 percent. In his 7 July 2011 column in the *Philippine Daily Inquirer* (PDI), Rigoberto Tiglao, who used to react to these figures when he was Press Secretary of former President Arroyo, questioned the validity of the SWS hunger statistics on three grounds. First, he argued that hunger estimates should not fluctuate considerably over a span of three months, pointing out that “comparable figures” from surveys conducted by the US Department of Agriculture (USDA) fluctuate only within a narrow band of one percentage point (except for 2007–2008). Second, he questioned the sampling design of SWS. Third, he criticized the SWS question “In the last three months, did it happen even once that your family experienced hunger and did not have anything to eat?” as likely to be misunderstood, since respondents may not have heard the second part of the question.

As expected, Mahar Mangahas, SWS President, came out with a rejoinder in his PDI column, suggesting that Tiglao was incorrect in his analysis.

This note discusses a survey’s margin of error, an issue often mentioned in the press releases of organizations such as the SWS, but not fully understood.

Sample surveys are sometimes criticized as unrepresentative since only a part of the whole population is asked. Such reasoning is largely espoused by those who fail to realize that a doctor does not extract all of a patient’s blood to find out whether the patient is sick. Samples, by their very nature, are not complete, but when done correctly, they are reliable and sufficient for understanding a particular phenomenon.

Survey measurements are not fool-proof. Estimates from sample surveys are prone to two types of survey error: sampling error and non-sampling error. Non-sampling errors are biases resulting from question wording, interviewer biases, respondent biases (including non-responses), and processing errors. Sampling error pertains to fluctuations in estimates that arise from the use of a sample (rather than the whole population). That is, a random sample of 1,000 respondents would yield a different estimate from another random sample of 1,000 respondents (and this should be the case, since the respondents are randomly selected!).

When a probability-based survey estimates the proportion of households that experienced hunger, or households’ average per capita income, we expect the survey estimate not to hit the mark exactly, but to be within an error margin of the true value of the parameter being estimated. Margins of error give a sense of the extent of sampling error. For instance, if the true hunger rate were 50 percent, the survey estimate would not necessarily hit the 50 percent mark; it may be lower or higher, but not by much. Depending on the sample size, the estimate can come very close. The bigger the sample, the more likely the survey estimate is close to the true rate. But bigger (or oversized) samples also mean higher costs, and one has to weigh the value added for every increase in sample size.

There are certainly no guarantees. For a random sample of size 1,200, we would expect that in 19 out of 20 surveys using the same protocols for random selection, the estimate would be within 3 percentage points (ppts) of the true figure. But in one of the 20 surveys, we may be off by quite a lot.
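The “19 out of 20” expectation can be checked with a quick simulation. The sketch below is my own illustration, not part of the original column: it assumes the worst-case true rate of 50 percent (discussed further on), repeatedly draws random samples of 1,200, and counts how often the sample estimate lands within 3 ppts of the truth.

```python
import random

# Illustrative simulation (assumption: true rate at the worst case, 50 percent).
random.seed(42)
TRUE_RATE = 0.5
N = 1_200        # sample size used in the article's example
SURVEYS = 2_000  # number of repeated hypothetical surveys

within = 0
for _ in range(SURVEYS):
    # Each respondent reports hunger with probability TRUE_RATE.
    hits = sum(random.random() < TRUE_RATE for _ in range(N))
    estimate = hits / N
    if abs(estimate - TRUE_RATE) <= 0.03:  # within 3 ppts of the truth
        within += 1

share = within / SURVEYS
print(f"{share:.0%} of simulated surveys fell within 3 ppts of the true rate")
```

The printed share comes out near 95 percent, i.e., roughly 19 surveys out of 20, matching the claim above.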

For percentages, survey margins of error (at 95 percent confidence) can be readily computed as about twice the square root of p(1 − p)/n, where p represents the population proportion and n the sample size. The margin of error is at its worst when p is one-half (a little calculus, or a mere graph of this quadratic function, shows this), so even if p is unknown, the margin of error would be at worst about 3 ppts for a sample of size 1,200, while for a much bigger sample of 50,000 (which the USDA uses), the margin of error would be much lower (0.4 ppt).
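The figures quoted above follow directly from the rule of thumb. A minimal sketch (my own illustration, using the article’s approximation of twice the square root of p(1 − p)/n at the worst case p = 0.5) reproduces the margins for the sample sizes mentioned, including the roughly 300 households per SWS area discussed next:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5) -> float:
    """Approximate 95 percent margin of error for a proportion,
    per the article's rule of thumb: about 2 * sqrt(p * (1 - p) / n).
    p = 0.5 is the worst case, usable when p is unknown."""
    return 2 * sqrt(p * (1 - p) / n)

# Sample sizes mentioned in the article.
for n in (1_200, 300, 50_000):
    print(f"n = {n:>6}: margin of error = {100 * margin_of_error(n):.1f} ppts")
# n =   1200: margin of error = 2.9 ppts
# n =    300: margin of error = 5.8 ppts
# n =  50000: margin of error = 0.4 ppts
```

Rounding these gives the article’s figures: about 3 ppts for a national sample of 1,200, about 6 ppts for an area sample of 300, and 0.4 ppt for the USDA’s 50,000.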

Since Metro Manila, Balance Luzon, Visayas, and Mindanao each involve about 300 sample households in the SWS surveys, the area percentages have survey error margins of about 6 ppts. Note that these calculations assume that both the SWS and USDA surveys are simple random samples. In practice, household surveys are often conducted through multi-stage designs, i.e., sampling villages, then sampling households within the sampled villages. The consequence of using multi-stage designs is that the calculated margins of error may be understated; that is, the true margin of error may be somewhat larger.

Statistics are more meaningful when they are compared, but comparisons breed controversy when error margins are not understood. When comparing two survey estimates, since each has its own error margin, we need to double the error margin to be confident that a real change has occurred. For instance, the national rates each have error margins of three ppts, so we really need a difference of 6 ppts across time to be confident about a change.

For instance, when comparing the hunger rate in March 2011 (20.5 percent) to the November 2010 rate (18.1 percent), the difference in the estimates (2.4 ppts) was well within the error margin! Thus, there is no evidence of a real change. For a real increase, the March 2011 figure should have gone up to at least 24.1 percent. Similarly, when comparing the June 2011 rate (15.1 percent) to the March 2011 figure, there may not be a real change even though there is a seeming difference of 5.4 ppts. That is, there is no evidence (yet) of a drop in hunger rates.
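The arithmetic behind these comparisons can be sketched as follows. The doubling rule and the hunger rates are from the column; the helper function is my own illustration:

```python
NATIONAL_MARGIN = 3.0  # ppts, for a national sample of about 1,200

def real_change(old_rate: float, new_rate: float,
                margin: float = NATIONAL_MARGIN) -> bool:
    """Apply the article's conservative rule: a change is credible only
    if the two estimates differ by more than double the margin of error."""
    return abs(new_rate - old_rate) > 2 * margin

# SWS hunger rates cited in the article (in percent).
print(real_change(18.1, 20.5))  # Nov 2010 -> Mar 2011: 2.4 ppts -> False
print(real_change(20.5, 15.1))  # Mar 2011 -> Jun 2011: 5.4 ppts -> False
```

Neither quarter-to-quarter difference clears the 6-ppt threshold, which is why the note concludes there is no evidence of real change in either direction.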

Drilling down to the SWS areas, which have error margins of 6 ppts, comparisons across time would need a difference of 12 ppts to convince us that change is happening. Tiglao was actually right, without being aware of the statistical meaning: hunger rates should not change that much in a single quarter. He merely failed to see the technical nuances of error margins.

This note finds that hunger rates have not changed from November 2010 to June 2011. The government’s conditional cash transfer (CCT) is not directly meant to be a hunger mitigation program. Even those being helped would not necessarily change their hunger or poverty status by receiving PhP 1,400 per month. The poverty line is about PhP 1,000 per person per month, so for a family of ten, the CCT income transfer would assist them but not bring them out of poverty. The CCT assists the poor so that they are encouraged to send their children to school; its returns are long-term rather than short-term.
