By Arundati Dandapani
The American Association for Public Opinion Research (AAPOR) data quality task force has published useful findings and guidelines on using panels to address accuracy issues in non-probability online and convenience sampling data. Complementing those are ESOMAR’s 37 questions to ask your online sample provider when evaluating its adherence to industry standards and best practices. Guidance from oversight organizations and privacy frameworks such as OECD, FIPS, NIST, and APEC, studied alongside compliance with data privacy legislation, ensures consistent reinforcement of data quality across the insights value chain.
The World Association for Public Opinion Research (WAPOR) presented Jon Krosnick and Gary Langer discussing reliability and validity in convenience sample surveys, continuing the heated debate on accuracy in an evaluation of 2016 US election polls, which were dominated by online river sampling (47%). The rise of non-probability internet survey methods has drawn criticism for yielding greater average errors than opt-in web-based panels. As timelines shrink, costs rise, and accuracy needs compete, transparency about the trade-offs made is key to building the case for improving data quality. While experimentation across methods is useful, reporting honestly on the right metrics and treating accuracy claims with caution are critical to earning the trust of stakeholders, citizens, and clients.
From an academic and practitioner standpoint, high data quality represents high accuracy. Is your data representative of the truth? Were your sampling strategy, sampling method, and sampling frame fit for purpose in addressing the business questions? Was the instrument short and engaging? Following data protection checklists, conducting risk impact assessments where necessary, and adhering to privacy-by-design principles (user-centricity and individual rights, data minimization, purpose limitation, clear retention practices, strong safeguards) secures the trust of those you surveyed and of your clients.
Respondent experience is critical to accurate research results, especially post-COVID, when the race for sample is more uphill than ever. Studies report that online surveys see response rates 11% lower than other modes (Daikeler, 2021), even as digital methods have eclipsed offline research methods, creating new challenges in measuring offline populations. Expenses for incentives have surged with recent inflation. Bots and professional respondents have proliferated, a growing source of concern. Complaints about “27-page screeners” and endless demographic questions still do the rounds, setting off new alarms and driving respondents to avoid research studies altogether.
How can we make participating in online surveys attractive? To dig deeper into industry perspectives in 2021, I envisioned and led two public opinion research industry conferences to answer critical questions: how to boost trust, data quality, and accuracy across a range of polling methodologies, and what the data quality stakes are for “brands” (private sector participants) and “governments” (public sector participants). The resulting discussions revealed an increasing acceptance of the validity of evolving methods, coupled with the need for transparent reporting and disclosure. There was a clear and consistent need to quickly understand and communicate the scope and parameters of each business question, and to recognize the measurement errors and unconscious biases that could impede greater understanding of public or consumer opinion across methodologies, in an age where access to both information and misinformation is equal!
Research leaders can prevent poor participant experiences by establishing simple and clear lines of business process alignment, accountability, and adaptability. You do this by sharing knowledge, gaining executive sponsorship, engaging a nexus of inter-departmental privacy champions early on, and strategizing with business colleagues. You create malleable frameworks that strengthen safeguards while minimizing “business disruption”; such frameworks enable greater ownership, accountability, and enforcement of respondent-centric principles: meeting end-users or research participants where they are, throughout the data lifecycle and across your research programs.
