
2024 National Public Opinion Reference Survey (NPORS) Methodology

Summary

SSRS conducted the National Public Opinion Reference Survey (NPORS) for Pew Research Center using address-based sampling and a multimode protocol. The survey was fielded from February 1, 2024, to June 10, 2024. Sampled households were first mailed an invitation to complete an online survey. A paper survey was subsequently mailed to nonrespondents. The mailings also invited recipients to call a toll-free number to take the survey over the phone with a live interviewer. In total, 2,535 respondents completed the online survey, 2,764 completed the paper survey, and 327 completed the telephone survey (total N=5,626). The survey was administered in English and Spanish. The American Association for Public Opinion Research (AAPOR) Response Rate 1 was 32%.

Sample definition

The sample was drawn from the US Postal Service's computerized delivery sequence file and was provided by Marketing Systems Group (MSG). Active residential addresses (including "drop points") in all US states (including Alaska and Hawaii) and the District of Columbia had a nonzero probability of selection. The sample was a national, stratified random sample with differential selection probabilities across mutually exclusive strata. SSRS designed the sample plan shown in the table below.
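
As a hedged illustration only (not the actual NPORS sample plan), the sketch below draws a stratified address sample with a different selection rate per stratum; the strata names, frame sizes, and rates are invented:

```python
import random

# Invented frame: address IDs grouped into two strata. The real NPORS
# strata and rates appear in the sample plan table referenced above.
frame = {
    "stratum_a": list(range(10_000)),
    "stratum_b": list(range(10_000, 16_000)),
}
rates = {"stratum_a": 0.005, "stratum_b": 0.010}  # oversample stratum_b

random.seed(42)
sample = []
for stratum, addresses in frame.items():
    n_draw = round(len(addresses) * rates[stratum])
    for addr in random.sample(addresses, n_draw):
        # The selection probability is kept with each record; it becomes
        # the denominator of the base weight during weighting.
        sample.append({"address_id": addr, "stratum": stratum,
                       "p_selection": rates[stratum]})

print(len(sample), "addresses drawn")  # 50 + 60 = 110
```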

Correspondence protocol

SSRS sent initial mailings in a 9-by-12-inch window envelope by first-class mail to the 18,834 sampled households. These packets included two $1 bills (visible from outside the envelope) and a letter asking a household member to complete the survey. The letter provided a URL for the online survey; a toll-free number; a password to enter on the online survey landing page or to read to the telephone interviewer for those who chose to call; and a frequently asked questions section printed on the back. If there were two or more adults in the household, the letter asked the adult with the next birthday to complete the survey. Households that did not respond were subsequently sent a reminder postcard and then a reminder letter by first-class mail.

After the web phase of data collection ended, SSRS sent nonresponding households with a deliverable address a 9-by-12-inch Priority Mail window envelope. This envelope contained a letter with a frequently asked questions section printed on the back, a visible $5 bill, a paper version of the survey, and a postage-paid return envelope. The paper survey was an 11-by-17-inch page folded booklet-style. The within-household selection instructions were identical to those used in the earlier web invitation. A second envelope containing another copy of the paper questionnaire was subsequently sent to the same households by first-class mail.

The initial mailing was sent in two separate releases: a soft release and a full release. The soft release comprised 5% of the sample and was mailed a few days before the full release, which consisted of the remaining sample.

Developing and testing the questionnaire

The Pew Research Center developed the questionnaire in consultation with SSRS. The online survey was tested on both desktop and mobile devices. The test data was analyzed to ensure that the logic and randomizations worked as intended before the survey was launched.

Weighting

The survey was weighted to support reliable inference from the sample to the target population of US adults. The weight was created in a multistep process that includes a base-weight adjustment for differential selection probabilities and a raking calibration that aligns the survey with population benchmarks. The process begins with the base weight, which accounted for the address's probability of selection from the US Postal Service's computerized delivery sequence file and the number of adults living in the household, and incorporated a mode adjustment for cases that responded in the offline modes.
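
A minimal sketch of the base-weight step described above, under the stated assumptions; the function and field names are hypothetical, and the offline-mode adjustment is omitted:

```python
# Hedged sketch of the base weight: the inverse of the address's
# selection probability, multiplied by the number of adults, because
# one adult answers per household.
def base_weight(p_selection: float, n_adults: int) -> float:
    return n_adults / p_selection

# Example: an address sampled at a rate of 1 in 1,000 from a
# two-adult household gets a base weight of 2,000.
print(base_weight(1 / 1000, 2))  # 2000.0
```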

The base weights are then calibrated to population benchmarks using raking, or iterative proportional fitting. The raking dimensions and the sources for the population parameter estimates are reported in the table below. All raking targets are based on the noninstitutionalized US adult population (ages 18 and older). The weights are trimmed at the 1st and 99th percentiles to reduce the loss of precision stemming from variance in the weights.
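
For illustration, here is a small sketch of raking (iterative proportional fitting) followed by the percentile trimming described above; the variable codes and target shares are invented, not the actual NPORS raking dimensions:

```python
import numpy as np

def rake(base_weights, rake_vars, targets, max_iter=100, tol=1e-8):
    """Iterative proportional fitting ("raking"): cycle through the
    raking variables, scaling the weights so the weighted share of
    each category matches its population target."""
    w = np.asarray(base_weights, dtype=float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for codes, shares in zip(rake_vars, targets):
            # Weighted total currently in each category (assumes every
            # category is observed in the sample).
            totals = np.bincount(codes, weights=w, minlength=len(shares))
            factors = shares * w.sum() / totals
            new_w = w * factors[codes]
            max_change = max(max_change, float(np.abs(new_w - w).max()))
            w = new_w
        if max_change < tol:
            break
    return w

# Toy example with two raking variables; the targets are invented.
rng = np.random.default_rng(0)
sex = rng.integers(0, 2, 500)    # 0/1 codes
educ = rng.integers(0, 3, 500)   # three education levels
w = rake(np.ones(500), [sex, educ],
         [np.array([0.49, 0.51]), np.array([0.40, 0.35, 0.25])])

# Trim at the 1st and 99th percentiles, as described above.
w = np.clip(w, np.percentile(w, 1), np.percentile(w, 99))
```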

Design effect and margin of error

Weighting and survey design features that depart from simple random sampling tend to increase the variance of survey estimates. This increase, known as the design effect, or "deff," should be incorporated into the margin of error, standard errors, and tests of statistical significance. The overall design effect for a survey is commonly approximated as one plus the squared coefficient of variation of the weights.
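
In symbols, this is Kish's approximation. With weights \(w_i\) for the \(n\) respondents:

\[
\mathrm{deff} \;\approx\; 1 + \mathrm{cv}^2(w) \;=\; 1 + \frac{\operatorname{var}(w)}{\bar{w}^{2}} \;=\; \frac{n\sum_{i=1}^{n} w_i^{2}}{\bigl(\sum_{i=1}^{n} w_i\bigr)^{2}}
\]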

For this survey, the margin of error (half the width of the 95% confidence interval) incorporating the design effect is plus or minus 1.8 percentage points for full-sample estimates at 50%. Estimates based on subgroups have larger margins of error. It is important to remember that random sampling error is only one possible source of error in a survey estimate. Other sources, such as question wording and reporting inaccuracy, can introduce additional error. A summary of the weights and their associated design effects is reported in the table below.
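
For reference, the standard formula behind such a margin is shown below. The design effect itself is not stated in this section, so the final step back-calculates an approximate value from the reported ±1.8 points and N=5,626:

\[
\mathrm{MOE} \;=\; z_{0.975}\,\sqrt{\mathrm{deff}}\,\sqrt{\frac{p(1-p)}{n}} \;=\; 1.96\,\sqrt{\mathrm{deff}}\,\sqrt{\frac{0.5\times 0.5}{5626}} \;\approx\; 1.31\,\sqrt{\mathrm{deff}}\ \text{points,}
\]

so the reported ±1.8 points implies a design effect of roughly \((1.8/1.31)^2 \approx 1.9\).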

The following table shows the unweighted sample sizes and the error attributable to sampling that would be expected at the 95% confidence level for various groups in the survey.

Sample sizes and sampling errors for other subgroups are available upon request. In addition to sampling error, question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of opinion polls.

A note on the sample of Asian adults

This survey includes a total sample size of 231 Asian adults. The sample primarily includes English-speaking Asian adults and therefore may not be representative of the full population of Asian adults. Despite this limitation, it is important to report the views of Asian adults on the topics in this study. As always, responses from Asian adults are incorporated into the general population figures throughout this report.

Dispositions

The table below shows the dispositions of all sampled households for the survey.