Before they complete a GWI survey, all panelists will have undergone quality checks conducted by their panel. However, GlobalWebIndex then runs stringent testing both during and after the fieldwork to ensure a high-quality and robust sample. These checks include:
CHECKING COMPLETION TIME
By running high volumes of automatic “test” respondents through the survey, we know approximately how long it should take the “average” person to complete (information which is supplemented by the knowledge we have gained from running similar surveys in previous quarters). Based on this, we have a minimum completion time – if anyone finishes more quickly than this, they are removed.
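As a rough illustration only (the function names, data shapes, and the 480-second minimum are all assumptions, not GWI's published thresholds), a speed check of this kind could be sketched as:

```python
# Hypothetical sketch of a minimum-completion-time check.
# The minimum would in practice be derived from automated "test" runs
# and from completion times observed in previous waves.

def flag_speeders(durations_sec, min_time_sec):
    """Return the indices of respondents who finished faster than the minimum."""
    return [i for i, t in enumerate(durations_sec) if t < min_time_sec]

# Usage with an illustrative 480-second minimum:
flagged = flag_speeders([350, 900, 475, 1200], min_time_sec=480)
# respondents 0 and 2 would be removed
```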
DETECTING PATTERNED ANSWERS
If a respondent starts answering questions in a repeating or otherwise suspicious pattern, we will review their answers across the survey and, if they appear to be inauthentic, we will remove them. Examples of this might include answering a set of agreement questions (where 1=strongly disagree and 5=strongly agree) in the following way: 1, 2, 3, 4, 5, 1, 2, 3, 4, 5.
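One simple way to catch the cyclic pattern in the example above is to test whether the answer sequence is an exact repetition of a short cycle. This is a minimal sketch under assumed names and a maximum cycle length of 5; it is not GWI's actual detection logic:

```python
# Hypothetical sketch: detect an exactly repeating answer cycle,
# e.g. 1,2,3,4,5,1,2,3,4,5 (cycle of period 5).

def has_repeating_cycle(answers, max_period=5):
    """True if the whole sequence is an exact repetition of a cycle
    no longer than max_period, observed at least twice."""
    n = len(answers)
    for p in range(1, max_period + 1):
        if n >= 2 * p and all(answers[i] == answers[i % p] for i in range(n)):
            return True
    return False
```

A real system would also need fuzzier checks (near-repeats, zig-zags), but the exact-cycle case covers the pattern quoted in the text.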
DETECTING MULTIPLE “NONE OF THE ABOVE” ANSWERS
In line with standard research practice, most behavioral questions (e.g. “Have you done the following?”) will include a “none of the above” option at the end. We will monitor how frequently a respondent selects this option and remove them if their rate of “none of the above” responses is excessively high.
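A frequency check like this could be sketched as follows. The 80% threshold and the response encoding are illustrative assumptions; GWI does not publish the actual cut-off:

```python
# Hypothetical sketch: flag respondents whose share of "none of the above"
# answers across behavioral questions is excessively high.

NONE_OF_THE_ABOVE = "none_of_the_above"

def nota_rate(responses):
    """Fraction of behavioral questions answered 'none of the above'."""
    return sum(1 for r in responses if r == NONE_OF_THE_ABOVE) / len(responses)

def flag_excessive_nota(responses, threshold=0.8):
    # threshold is an illustrative assumption, not GWI's real value
    return nota_rate(responses) >= threshold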
Within our survey, we have a number of “grid” or “list” style questions which invite people to enter a level of agreement across a number of different categories. For example, we might ask you a series of statements and ask you to say how much you agree with them – using a 5-point scale from “strongly disagree” all the way to “strongly agree”. If someone answers in a uniform fashion throughout a list or grid, they will be flagged as a potential “straight-liner” – someone who might not be answering accurately. If someone does this on just one question in isolation, we will review their answer to see if this response pattern could be plausible or logical (for example, it could be the case that someone genuinely has not used any services/platforms within a particular type of list). However, if this behavior is detected in two or more questions, they are automatically removed from the sample.
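The two-stage rule above (review a single straight-lined grid, automatically remove on two or more) maps naturally onto a small helper. This is a sketch under assumed names and data shapes, not GWI's implementation:

```python
# Hypothetical sketch of straight-liner detection across grid questions.
# Each grid is a list of ratings, one per statement in that grid.

def is_straight_lined(grid_answers):
    """A grid is straight-lined if every statement received the same rating."""
    return len(set(grid_answers)) == 1

def should_auto_remove(all_grids):
    """Per the rule in the text: one uniform grid triggers manual review,
    two or more trigger automatic removal."""
    return sum(is_straight_lined(g) for g in all_grids) >= 2

# Usage: two uniform grids out of three -> automatic removal
remove = should_auto_remove([[3, 3, 3], [1, 5, 2], [4, 4, 4]])
```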
SETTING “LOGIC TRAPS”
Our survey contains a number of “logic traps” where poor-quality respondents could contradict themselves. An example of this might be a respondent who reports a child’s age that is too high to be compatible with their own. Respondents who fail the logic traps are removed.
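The child-age example can be made concrete with a plausibility bound. The 14-year minimum parental age at birth is an illustrative assumption for the sketch, not a figure from the text:

```python
# Hypothetical sketch of one logic trap: a child's reported age should be
# plausible given the respondent's own reported age.

MIN_PARENT_AGE_AT_BIRTH = 14  # illustrative assumption

def fails_child_age_trap(respondent_age, child_age):
    """True if the respondent would have been implausibly young (or not yet
    born) when the child was born."""
    return respondent_age - child_age < MIN_PARENT_AGE_AT_BIRTH
```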
When undertaking our analysis, respondents who are identified as being potentially suspicious on two or more of the criteria outlined above are automatically removed without any further consideration.
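Combining the individual checks into the final removal decision is then a simple count. The flag names here are made up for illustration:

```python
# Hypothetical sketch: automatic removal when a respondent is flagged as
# suspicious on two or more of the quality criteria.

def should_remove(flags):
    """flags maps a check name (e.g. 'speed', 'straight_line') to True/False."""
    return sum(flags.values()) >= 2

# Usage: flagged on speed and straight-lining -> removed
removed = should_remove({"speed": True, "straight_line": True, "nota": False})
```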
Typically, we remove between 5% and 15% of respondents in the data-cleaning process. During the fieldwork, we over-recruit in each market to ensure that we can still meet our quarterly sample-size commitments once any poor-quality responses have been identified and removed.
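Given an expected removal rate, the required amount of over-recruitment follows from simple arithmetic. This sketch assumes the stated 5-15% range; the function name is invented for illustration:

```python
import math

# Hypothetical sketch: how many completes to recruit so that the target
# sample size survives data cleaning at an expected removal rate.

def recruits_needed(target_n, expected_removal_rate):
    """Recruits needed so target_n respondents remain after removals."""
    return math.ceil(target_n / (1 - expected_removal_rate))

# Usage: a target of 1,000 completes at the top of the 5-15% removal range
needed = recruits_needed(1000, 0.15)
```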