Measurement Issues Associated with Using Survey Data Matched with Administrative Data from the Social Security Administration
Social Security Bulletin, Vol. 69, No. 2, 2009
Researchers using survey data matched with administrative data benefit from the rich demographic and economic detail available from survey data combined with detailed programmatic data from administrative records. The research benefits of using these matched data are too numerous to mention. But there are drawbacks as well, and those drawbacks have received less systematic attention from researchers. We focus on survey data matched with administrative data from the Social Security Administration and address the strengths and weaknesses of each in four specific areas: (1) program participation and benefits, (2) disability and health information, (3) earnings, and (4) deferred compensation. We discuss the implications of these strengths and weaknesses for decisions that researchers must make regarding the appropriate data source and definition for the concepts in question. From this discussion, some general conclusions are drawn about measurement issues associated with using matched survey and administrative data for research, policy evaluation, and statistics.
The authors are with the Division of Policy Evaluation, Office of Research, Evaluation, and Statistics, Office of Retirement and Disability Policy, Social Security Administration.
Acknowledgments: The authors are grateful to Susan Grad, Carolyn Puckett, and Kalman Rupp for helpful comments and suggestions. A previous version of this article was presented at the 2008 Joint Statistical Meetings of the American Statistical Association, Government Statistics Section, Denver, CO.
The findings and conclusions presented in the Bulletin are those of the authors and do not necessarily represent the views of the Social Security Administration.
| Abbreviation | Full name |
| --- | --- |
| CPS | Current Population Survey |
| DER | Detailed Earnings Record |
| HRS | Health and Retirement Study |
| IRS | Internal Revenue Service |
| MBR | Master Beneficiary Record |
| NHANES | National Health and Nutrition Examination Survey |
| NHIS | National Health Interview Survey |
| NSCF | National Survey of SSI Children and Families |
| OASDI | Old-Age, Survivors, and Disability Insurance |
| PHUS | Payment History Update System |
| SIPP | Survey of Income and Program Participation |
| SSA | Social Security Administration |
| SSI | Supplemental Security Income |
Researchers using survey data matched with administrative data benefit from the best of both worlds—the rich demographic and economic detail available from survey data combined with detailed programmatic data from administrative records. Indeed, researchers at the Social Security Administration (SSA) have been using matched survey and administrative data for years, addressing topics spanning policy evaluation, economic research, program statistics, and microsimulation modeling.
The original use of matched survey and administrative data was to assess the accuracy of the survey data and use that information to adjust for error in statistics produced from survey data. SSA and the Census Bureau have a history of matching Census surveys with Social Security administrative data and limited tax return information from the Internal Revenue Service (IRS). The earliest matches with the decennial censuses and periodically with the March Current Population Survey (CPS) from 1964 through 1972 were limited in scope and sample size because of computing constraints. The earliest matched file still being used is the 1973 CPS/SSA/IRS Exact Match Study, which greatly expanded the sample being matched to SSA and IRS data compared with previous matched data sets (Aziz, Kilss, and Scheuren 1978; Kilss and Scheuren 1978). This file provides researchers with rich survey data matched with longitudinal earnings histories that were not available elsewhere, and thus greatly expanded the potential scope of research on many topics in labor economics and public policy. Since the 1973 match, these data also have been used as inputs to Social Security's simulation models (Scheuren and Herriot 1975).
In response to limitations in the CPS with respect to analyzing government transfer programs, which required detailed data on income sources, program participation, and assets, the Income Survey Development Program was initiated in the mid-1970s (Ycas and Lininger 1981; Vaughan, Whiteman, and Lininger 1984). This program effectively served as the pilot study for the Census Bureau's Survey of Income and Program Participation (SIPP), for which the initial design called for matched administrative data on benefits and earnings from SSA (Lininger 1981). Pioneering work by Vaughan (1979) and others on errors in survey reports of program participation and type of beneficiary, some of which used the SIPP matched to SSA administrative data (Vaughan 1989), paved the way for a wide variety of uses of matched survey and administrative data by researchers at SSA.
Currently, researchers are using the SIPP1 (1984, 1990–1993, 1996, 2001, and 2004 panels) and the CPS2 (most years from the 1990s through the 2000s) matched to SSA administrative data and limited IRS earnings data. The matched data are accessed on a restricted basis subject to the terms of interagency agreements between the Census Bureau and SSA and of IRS laws and regulations. The use of matched administrative data as a tool to assess survey data is still a primary function, but other Census and IRS-approved uses of matched data have evolved. Other surveys that have been matched to SSA administrative data include the University of Michigan's Health and Retirement Study (HRS),3 SSA's National Survey of SSI Children and Families (NSCF),4 and the National Center for Health Statistics' National Health Interview Survey (NHIS)5 and National Health and Nutrition Examination Survey (NHANES).6 SSA's data are incomplete with respect to demographics and nonprogram oriented measures of income and wealth. The survey data on these elements supplement the administrative data, enabling the agency to produce a wide variety of research and statistical products about the Old-Age, Survivors, and Disability Insurance (OASDI, or Social Security) and Supplemental Security Income (SSI) programs. These products include detailed and complex microsimulation models that are used to assess the distributional implications of potential OASDI and SSI policy changes, basic economic research on OASDI and SSI beneficiaries, and statistics about beneficiaries and recipients of both programs.
The research benefits of using these matched data are too numerous to mention. But there are drawbacks as well, and those drawbacks have received less systematic attention from researchers. For example, in cases where disability diagnoses are available in both the survey and administrative data, which source is more accurate? In cases where program participation and benefit amounts are available in both the survey and administrative data, which source is correct? By and large, the answer to such questions is, "It depends." It depends on the research questions to be addressed. It depends on the data sources in question. It depends on the analytical techniques to be used. To complicate matters further, different administrative data sources can lead to different values for the same concept.
In this article, we do not attempt to provide definitive answers as to which sources are preferred in which situations. Rather, we attempt to draw together the available evidence on a number of important areas in which researchers using matched survey and administrative data must decide on the appropriate data source and definition for the concept in question. Specifically, in the next four sections of the article we examine and discuss the available evidence in the following areas.
- OASDI and SSI participation and benefits
- Disability diagnosis, health, and functional limitations
- Earnings
- Deferred compensation
Some concluding observations are then offered on these measurement issues and the importance of matched survey and administrative data for research, policy evaluation, and program statistics. Finally, we highlight several areas for future research.
OASDI and SSI Participation and Benefits
The most basic area of comparison between survey and administrative data is program participation and benefit amounts. Several SSA researchers have addressed this issue using data from the SIPP and CPS matched with SSA administrative data on the receipt and amount of OASDI benefits and SSI payments. Survey data may differ from administrative records for three main reasons: (1) survey error, (2) administrative record error, or (3) error in matching survey and administrative records (Huynh, Rupp, and Sears 2002). Although SSA records on program participation and benefit amounts are generally regarded as more reliable than survey reports, this is not always the case. Before the availability of the Payment History Update System (PHUS), the administrative records for OASDI came only from the Master Beneficiary Record (MBR), which reflected program eligibility rather than the actual benefit amount paid in a given month.7 Since 2003, however, the match has included PHUS data with actual payment amounts from 1984 to the present, which are thought to be more consistent with the benefit amounts that survey respondents would report.8 The Supplemental Security Record, which provides data on SSI applicants and recipients, has always captured data on both program eligibility and actual payment amounts.
Huynh, Rupp, and Sears (2002) assessed discrepancies in reports of benefit receipt and benefit amounts between SSA's administrative records (Master Beneficiary Record and Supplemental Security Record) and the 1993 and 1996 panels of the SIPP.9 They found considerable confusion among survey respondents as to whether an OASDI benefit or SSI payment was received. Table 1 shows that for the sample months analyzed by those authors, a nontrivial proportion of survey respondents receiving SSI (either alone or concurrently with OASDI) reported receiving OASDI only; misreporting in the opposite direction, OASDI receipt reported as SSI, occurred much less frequently. The authors offered a number of explanations for this pattern.
- Both OASDI and SSI benefits are administered by SSA.
- The OASDI program has greater visibility.
- Stigma may be attached to the receipt of SSI payments.
- The receipt of SSI for a few months often precedes the receipt of Disability Insurance (DI) for working-age individuals with disabilities.
Table 1. SIPP-reported receipt of OASDI and SSI (both, neither, OASDI only, or SSI only), by administrative record receipt status and observation period; the table's data values are not reproduced here.
|SOURCE: Huynh, Rupp, and Sears (2002, Table 2). Data are tabulated from the 1993 and 1996 panels of the SIPP matched to SSA's Master Beneficiary Record and Supplemental Security Record.|
Huynh, Rupp, and Sears (2002) also found that the accuracy of SSI reports improved between their observation points within the 1993 and 1996 SIPP panels. In addition, they evaluated the discrepancies between reported OASDI and SSI benefits and administrative amounts. The authors confirmed that after wave 1 of the 1993 SIPP, respondents were reporting their OASDI benefits net of the Medicare Part B premium, consistent with the revised questionnaire wording. They noted that using these reported benefit amounts without adjusting for the Part B premium could substantially bias estimates of total income and poverty status. They also concluded that self-reported SSI payments in the SIPP reflect the sum of federal and federally administered state SSI payments, which are provided to recipients in a single payment (check or direct deposit). In addition, the authors found that reporting errors for OASDI and SSI differed dramatically by imputation status and that errors may be systematically related to sample attrition and interview status. Finally, Huynh, Rupp, and Sears (2002) found evidence of selectivity with respect to the survey respondents who could not be matched to administrative records.
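The Part B premium adjustment described above amounts to a simple correction: add the premium back to benefits that were reported net of it. The sketch below is illustrative only; the premium value and respondent records are assumptions, not actual data (the real premium varies by year).

```python
# Sketch: adding the Medicare Part B premium back to survey-reported
# OASDI benefits that respondents reported net of the premium.
# The premium value and respondent records are illustrative, not actual data.

PART_B_PREMIUM = 36.60  # hypothetical monthly premium for illustration

def gross_oasdi_benefit(reported_net_benefit, enrolled_in_part_b,
                        premium=PART_B_PREMIUM):
    """Approximate the gross monthly benefit from a net-of-premium report."""
    if enrolled_in_part_b:
        return reported_net_benefit + premium
    return reported_net_benefit

# Illustrative respondents: (reported net benefit, Part B enrollment)
respondents = [(663.40, True), (500.00, False)]
grossed = [gross_oasdi_benefit(b, e) for b, e in respondents]
```

Skipping this correction would understate total income for every Part B enrollee in the sample, which is why the authors flag it as a source of bias in poverty estimates.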
Koenig (2003) followed a framework similar to that of Huynh, Rupp, and Sears (2002) by assessing the accuracy of self-reported OASDI and SSI data in the 1996 SIPP and the March 1997 Annual Demographic Supplement to the CPS. She compared the accuracy of reported OASDI and SSI receipt and benefit amounts in the two surveys relative to matched SSA administrative records and assessed the effect on poverty estimates when administrative benefit information is used with the survey data. Koenig (2003) found that although both surveys reflected aggregate benefits well, the SIPP overestimated the percentages of individuals who received OASDI and SSI, and the CPS underestimated them. The SIPP was better able than the CPS to identify both OASDI beneficiaries (99 percent versus 95 percent) and SSI recipients (93 percent versus 69 percent). For the sample of respondents receiving OASDI and/or SSI in both the survey and administrative records, the SIPP-reported benefit amount was within $100 of the benefit amount in the administrative records twice as often as the CPS-reported benefit amount for OASDI (47 percent versus 24 percent), but slightly less frequently than the CPS-reported benefit amount for SSI (47 percent versus 55 percent). The impact on total income and poverty estimates of using administrative data in place of self-reported survey data was largest for the group with imputed records (Table 2). The overall poverty estimates were slightly lower in both surveys when administrative data were used in place of self-reported survey data; respondents in the CPS were more likely to exhibit a change in poverty status because of the use of administrative data.
| Poverty status change | Imputed benefits | No imputed benefits | Imputed benefits | No imputed benefits |
| --- | --- | --- | --- | --- |
| Poverty status does not change | 89.9 | 95.8 | 95.7 | 98.1 |
| Change from in poverty to not in poverty | 5.7 | 2.2 | 2.5 | 1.1 |
| Change from not in poverty to in poverty | 4.4 | 2.0 | 1.8 | 0.8 |
|SOURCE: Koenig (2003, Table 9). Data are tabulated from the 1996 SIPP and March 1997 CPS matched to the SSA's Master Beneficiary Record and Supplemental Security Record.|
Nicholas and Wiseman (2009) developed a detailed method for replacing self-reported survey data from the March 2003 Annual Social and Economic Supplement to the CPS with administrative data on SSI payments, OASDI benefits, and earnings. The authors also implemented a propensity scoring system to reweight CPS families in the matched CPS/SSA sample to reflect the U.S. population as a whole. Using a "high" and a "low" version of their matching and data replacement system, the authors then examined the implications of using the matched administrative data for measuring poverty among the general population and among SSI recipients. Their findings for absolute poverty were quite dramatic, especially among SSI recipients, as illustrated in Table 3. Based on public-use CPS data, 44.3 percent of all SSI recipients were in poverty in 2002. Depending on the exact definitions used, the poverty rate was reduced from 44.3 percent to between 38.0 percent and 40.9 percent when SSA administrative data on benefits and earnings were used in place of CPS self-reported data. The effects were the strongest for elderly SSI recipients, whose "official" poverty rate derived from public-use CPS data fell from 48.0 percent to between 38.6 percent and 40.6 percent based on CPS/SSA matched data. The effects were much more modest for the U.S. population in general, which confirms the authors' finding that SSI participation and benefits were substantially underreported in the CPS relative to SSA administrative data.
| Population and age group | Public-use CPS data | Matched plus unmatched, "lower" adjustment | Matched plus unmatched, "higher" adjustment | Matched only, "lower" adjustment | Matched only, "higher" adjustment |
| --- | --- | --- | --- | --- | --- |
| U.S. population aged 65 or older | 10.4 | 9.1 | 8.9 | 8.4 | 8.1 |
| SSI recipients aged 65 or older | 48.0 | 40.6 | 39.4 | 39.9 | 38.6 |
|SOURCE: Derived by authors from Nicholas and Wiseman (2009, Table 7). Data are from the 2003 CPS Annual Social and Economic Supplement and matched SSA administrative records.|
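The replacement exercises in these studies share a common core: substitute administrative benefit amounts for survey reports among matched records, recompute total income, and reassess poverty status. A minimal sketch follows; the poverty threshold and respondent records are invented for illustration, not drawn from any of the cited data.

```python
# Sketch: replace survey-reported benefits with administrative amounts
# for matched respondents, then recompute poverty status.
# The threshold and all records are illustrative only.

POVERTY_THRESHOLD = 9000  # hypothetical annual threshold, one-person unit

def in_poverty(other_income, benefit):
    return (other_income + benefit) < POVERTY_THRESHOLD

# (other income, survey-reported benefit, administrative benefit
#  or None when the respondent could not be matched)
records = [
    (2000, 6000, 7500),  # benefit underreported: exits poverty on replacement
    (1000, 5000, None),  # unmatched: survey report retained
    (4000, 6000, 5800),  # benefit slightly overreported: status unchanged
]

before = [in_poverty(o, s) for o, s, a in records]
after = [in_poverty(o, a if a is not None else s) for o, s, a in records]
```

Keeping the survey report for unmatched respondents mirrors the practice in these studies; the reweighting step used by Nicholas and Wiseman (2009) to correct for match selectivity is omitted here.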
Huynh, Rupp, and Sears (2002) and Koenig (2003), among others, questioned the extent to which selectivity in the ability to match administrative records to SIPP and CPS survey records resulted in a match bias. Attrition bias in the SIPP was another prominent concern. To address these issues, SSA awarded a contract to Mathematica Policy Research, Inc. to determine the extent to which attrition and match selectivity influence estimates of income receipt and amounts. After calibrating their sample from the 2001 SIPP to Census demographic controls, Czajka, Mabli, and Cody (2008) found little evidence of bias in estimates of a wide range of characteristics. They also found that although the proportion of SIPP respondents who could be matched with administrative records dropped substantially between the 1996 and 2001 panels of the SIPP, bias in the matched sample did not appear to have increased. Their more limited evaluation of match bias in the CPS focused on retired workers, with results similar to those for the SIPP. Personal, family, and household demographics among the matched sample mirrored the full CPS sample, although matched cases had slightly more income and were slightly less reliant on Social Security benefits.
Fisher (2005, 2008) examined the impact of survey choice and the use of administrative data in place of survey data on estimates of the importance of Social Security relative to total income for the elderly. In particular, she examined the proportion of the elderly receiving all of their income from Social Security. Using the 1996 SIPP and the March 1997 CPS, Fisher (2005) estimated that in 1996, 19.4 percent of the elderly in the CPS and 9.4 percent of the elderly in the SIPP received all of their income from Social Security. The author found that among those receiving all income from Social Security benefits, either in reported or administrative data, the SIPP had a lower rate of beneficiary misclassification than the CPS, as shown in Table 4. In particular, respondents in the CPS were more likely to omit SSI and were also five times as likely to report having no income at all, despite being OASDI (Social Security) beneficiaries. The substitution of administrative data for self-reported survey data had a negligible effect on the estimates, however, because what is essentially being measured is the receipt of income from sources other than Social Security.
| Measure | SIPP: N | SIPP: Percent | CPS: N | CPS: Percent |
| --- | --- | --- | --- | --- |
| Persons showing all income from OASDI benefits | 902 | 100 | 2,169 | 100.0 |
| No beneficiary misclassification | 827 | 91.7 | 1,813 | 83.6 |
| 100 percent reliance on self-report, but not on administrative records | 52 | 5.8 | 196 | 9.0 |
| Self-report omitted SSI income | 29 | 3.2 | 138 | 6.4 |
| Not an OASDI beneficiary | 38 | 4.2 | 106 | 4.9 |
| Both self-report omitted SSI income and not an OASDI beneficiary | 15 | 1.7 | 48 | 2.2 |
| 100 percent reliance on administrative records, but not on self-report | 23 | 2.5 | 160 | 7.4 |
| Self-report included SSI income not in administrative records | 15 | 1.7 | 41 | 1.9 |
| OASDI beneficiary in administrative records, but not in self-report | 11 | 1.2 | 128 | 5.9 |
|SOURCE: Fisher (2005, Table 5). Data are tabulated from the 1996 SIPP and March 1997 CPS matched to the SSA's Payment History Update System and Supplemental Security Record.|
Fisher (2008) found that the large differences between the CPS and SIPP in estimates of the elderly receiving all of their income from Social Security in 1996 are most likely the result of underreporting of asset income in the CPS, although most sources of income are significantly more likely to be reported in the SIPP than in the CPS. To determine the extent to which these sources of income, particularly asset income and pensions, are underreported in the CPS, SSA, the Census Bureau, and the IRS entered into an agreement to match a limited set of variables from individual income tax returns (Form 1040) and informational returns (Form 1099-R) to the March 2007 CPS. Research using these data will begin soon.
These articles and others in this same line of research suggest that self-reported data in the CPS slightly underreport OASDI receipt and significantly underreport SSI receipt. Self-reported data in the SIPP slightly overreport receipt of OASDI; however, the picture is more complicated for receipt of SSI depending on the year of analysis and whether the data are analyzed from a monthly or annual perspective. Estimates from both surveys indicate some confusion among respondents between the two sources of income. Misreporting of income is unlikely to be limited to the OASDI and SSI programs; other sources of income should be assessed in a similar fashion. Confusion between OASDI benefits and SSI payments, which are administered by SSA, is probably not unique; reported data on other programs that are also administered by the same agency, such as Medicare and Medicaid, may also benefit from examining administrative data. Additional research in these areas should lead to improvements in survey measurement of program participation and benefits, which in turn should lead to more accurate estimates of total income, poverty status, and well-being.
Disability Diagnosis, Health, and Functional Limitations
Although similar labels often are applied to the disability and health information available from surveys and administrative data sources, the concepts being measured may be fundamentally different. The SIPP, HRS, NSCF, NHIS, and NHANES contain detailed data on disabling conditions, health status, and functional impairments. These data reflect the subjective perceptions of the respondent (or a proxy) about his or her health and disability status at the time the survey was administered.10 The data reported by the respondent typically are recoded in various ways by the survey administrator before being released to researchers. Social Security administrative records contain data on primary and secondary impairments for disability beneficiaries, which reflect the medical conditions considered in the medical decision about disability or blindness (initial application or continuing disability review). Those administrative records do not contain data on the general health status of disability beneficiaries, their functional limitations, or the severity of their disabling condition(s). For denied disability applicants, SSA's administrative records systems generally do not contain impairment codes. Moreover, SSA disability data document the condition that supports the medical decision regarding eligibility for disability benefits, which is not necessarily the same as the condition that is most disabling from the individual's perspective.
Given this limited background information, consider the data in Table 5 on the disabling conditions of children receiving SSI, which are derived from the NSCF and SSA administrative records and are reproduced from Rupp and others (2005/2006). The distribution of disability types (left side of table) differs greatly between NSCF data reported by the respondent and SSA administrative data. Nearly 44 percent of NSCF respondents report a physical disability, compared with 25.4 percent in SSA administrative data. Only 8 percent of NSCF respondents report mental retardation, compared with 32.5 percent in SSA administrative data. However, if individuals identified by SSA administrative data as being mentally retarded are removed from the sample, the distribution of disabilities in the NSCF more closely matches the distribution of disabilities in SSA administrative data (right side of table). This supports the hypothesis that some respondents are reluctant to report that their child is mentally retarded or do not consider mental retardation to be a health condition.
| Type of disability | NSCF a (all children receiving SSI) | SSA records (all children receiving SSI) | NSCF a (children not identified as mentally retarded in SSA records) | SSA records (children not identified as mentally retarded in SSA records) |
| --- | --- | --- | --- | --- |
| Mental retardation | 7.9 | 32.5 | 3.9 | . . . |
| None reported | 0.4 | . . . | 0.3 | . . . |
|SOURCE: Rupp and others (2006, Table 3 and note 15) and unpublished tabulations of NSCF data and SSA administrative data.|
|NOTES: NSCF interviews were conducted from July 2001 through June 2002.|
|. . . = not applicable.|
|a. Up to three health problems or conditions were coded in the NSCF. Because sample members can have more than one health problem or condition, the disability categories and subcategories are not mutually exclusive. Therefore, the percentages do not add to 100.|
We conclude that the choice to use self-reported survey data on disabilities and health conditions or administrative disability data should depend on the specific application of the data. For studies that seek to understand the relationship between individual behavior and disabilities, self-reported survey data on disabilities may be more appropriate, whereas administrative disability data may be the better choice for programmatic studies or tabulations of disability beneficiaries. Both survey and administrative measures of disability and health are very complex. Survey data reflect the respondent's perception of his or her disability status and also may be influenced by proxy respondents, coding choices by survey administrators, social norms, and the quality of training provided to survey interviewers. Administrative data tend to be driven by programmatic requirements and complexities. Self-reported disability measures have been criticized in the literature as subjective, inconsistent, and endogenous (Sickles and Taubman 1997; Bound and Waidmann 1992; Kreider 1999). However, it is important to note that survey respondents may have much more detailed information about their own health and functional status than other more objective sources based on limited information. In addition, research has shown that self-reported disability measures at the time of the survey interview are highly correlated with long-term measures of mortality and disability program participation, even after controlling for a variety of demographic and economic characteristics (Rupp and Davies 2004).
Earnings
The earliest benefit of matching administrative earnings records with survey data was to expand the scope and quality of research in labor economics and public policy. Earnings records derived from IRS W-2 forms also are used to evaluate the accuracy of survey data, particularly in the SIPP. Bridges, Del Bene, and Leonesio (2003) used the Detailed Earnings Record (DER), an extract of SSA's Master Earnings File, matched to the 1992 and 1993 panels of the SIPP to study the accuracy of calendar year 1993 wage and self-employment income in the SIPP. Gottschalk and Huynh (2005) used the DER matched to the 1996 SIPP to determine the effect of measurement error on the mean and dispersion of the distributions of earnings for people of different ages and on the correlation in earnings across years. Individual earnings reported in the SIPP may differ from those in the DER for reasons other than error. Respondents may report on a maximum of two jobs in the survey, whereas the administrative records cover all jobs. In addition, administrative records exclude pretax health care premiums paid by the employee and employee contributions to 401(k) plans, which may be accurately reported in the survey as part of prededuction earnings.11
Gottschalk and Huynh (2005) found that the DER showed consistently higher employment rates than the SIPP. Respondents with missing SIPP data on earnings tended to have lower earnings in the DER than respondents with observed earnings in both data sets. Similarly, respondents with positive SIPP earnings and no DER earnings had lower earnings than respondents with observed earnings in both data sets, possibly reflecting informal work arrangements. Bridges, Del Bene, and Leonesio (2003) obtained qualitatively similar results from their 1993 SIPP/DER earnings comparisons. Gottschalk and Huynh (2005) found that the number of individuals with positive SIPP earnings and no DER earnings was smaller than the number with positive DER earnings and no SIPP earnings; Bridges, Del Bene, and Leonesio (2003), however, found the opposite pattern. Gottschalk and Huynh (2005) also found that lifetime earnings patterns were similar in the two data sources. Men aged 25–59 had higher earnings in the DER than in the SIPP, but there were no systematic differences in earnings between the two data sources for older men or for women. Finally, correlations between SIPP nonimputed earnings and DER earnings were approximately 0.75 for men and women aged 25–59 and 65 or older. Bridges, Del Bene, and Leonesio (2003) found substantial measurement error in SIPP wage and salary data, with mean SIPP wages understated by 7.5 percent relative to DER wages. The absolute relative error in wage and salary income was 18 percent overall, but 28 percent for those with imputed earnings.
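Discrepancy measures of the kind reported in these studies can be sketched as follows, treating the administrative (DER) value as the benchmark. The earnings pairs below are invented for illustration and do not come from either data set.

```python
# Sketch: summary discrepancy measures for survey vs. administrative
# earnings, with the administrative value as the benchmark.
# The earnings pairs are invented for illustration.

sipp = [30000.0, 18000.0, 52000.0]  # survey-reported annual earnings
der = [32000.0, 20000.0, 50000.0]   # administrative annual earnings

def mean_relative_error(survey, admin):
    # Negative values indicate the survey understates administrative earnings.
    errors = [(s - a) / a for s, a in zip(survey, admin)]
    return sum(errors) / len(errors)

def mean_absolute_relative_error(survey, admin):
    # Magnitude of discrepancy regardless of direction.
    errors = [abs(s - a) / a for s, a in zip(survey, admin)]
    return sum(errors) / len(errors)
```

The signed measure captures net under- or overstatement (such as the 7.5 percent understatement of mean SIPP wages), while the absolute measure captures total discrepancy (such as the 18 percent absolute relative error); the two can diverge sharply when errors offset each other.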
Measurement error for wage and salary income is an important and complex area for future research. Survey data on earnings are reported for different time periods (weekly, monthly, annual), different concepts (gross or net of income taxes), and different sources (primary job, secondary job, wage and salary income, self-employment income). Likewise, administrative earnings records may record different concepts depending on the programmatic purpose for which they are collected. Comparisons of survey data on earnings and matched administrative data on earnings may lead to improvements in survey imputations of missing earnings data, more accurate analyses of individual well-being, and improved policy estimates of the distributional effects of OASDI (Social Security) and SSI reform proposals.
Deferred Compensation
Many researchers have documented the dramatic shift in the employer-provided pension environment from defined benefit (DB) pensions to defined contribution (DC) pensions (Munnell and Sunden 2004; Costo 2006; Buessing and Soto 2006; Poterba and others 2006; Dushi and Iams 2007). Traditional DB pensions are funded by the employer and provide retirement benefits based on a formula that usually considers final salary, years of service, and age. All employees typically are included in the plan. Upon retirement, monthly benefits are generally paid in the form of a life annuity. Defined contribution plans (for example, 401(k) and 403(b) plans), on the other hand, place more risks and responsibilities on employees, and enrollment often is not automatic. After enrolling, employees must make decisions about contribution amounts and investment allocations. Employee contributions to DC pension plans are treated as deferred compensation, meaning that contributions are made on a pretax basis. Taxes are usually paid when funds are withdrawn. Upon retirement, employees face many options for withdrawing their DC account balances, including lump-sum withdrawals, the purchase of whole- or partial-life annuities, and rollover of funds into a tax-preferred individual retirement account from which withdrawals may be made.
The HRS has become a premier source of data for studying changes in the pension environment, pension plan participation by employees, and pension income of retirees, among other important topics related to retirement and older Americans. Importantly, on a restricted basis, researchers can access HRS data matched to SSA administrative data on benefits and earnings. The earnings records are derived from IRS W-2 records submitted by employers on behalf of their employees. These records provide data on annual tax-deferred contributions by employees to DC pension accounts. Dushi and Honig (2008) compared the deferred compensation data from IRS W-2 tax records with the self-reported pension type and pension contributions of HRS respondents to determine the accuracy of the self-reports and to assess employee understanding of the mechanics of DB and DC pension plans.
Table 6 provides some estimates from Dushi and Honig (2008) on the accuracy of self-reported DB and DC pension plan participation among HRS respondents born in the period from 1931 through 1941 (aged 51–61 in 1992). Thirty percent of individuals who reported having a DB-only pension plan had positive contributions to a DC pension plan on their W-2 record, which suggests that these individuals misreported their pension plan type in the HRS. Thirty-nine percent of individuals who reported having a DC-only pension plan had zero contributions to a DC pension plan on their W-2 record. This may reflect misreporting of DB pension plans as DC pension plans, or it may reflect actual lack of contributions to the DC plan during the year in question. Finally, 6 percent of individuals who reported that they were not included in a pension plan had positive contributions to a DC pension plan on their W-2 record, again suggesting a nontrivial amount of misreporting of pension plan type in the HRS. This is clearly an important area for future research.
Table 6. Amount of contribution to DC pension from W-2 record, by self-reported pension type in the HRS (in percent)

| Self-reported pension type in the HRS | Zero contributions | Positive contributions | Total | N |
| --- | --- | --- | --- | --- |
| Both DB and DC | 44 | 56 | 100 | 85 |
| Not included in a pension plan | 94 | 6 | 100 | 1,333 |

SOURCE: Dushi and Honig (2008, Table 3).

NOTE: Percentages are weighted. Sample counts (N) are unweighted. Forty-two HRS observations with a missing pension plan type were excluded from the table.
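The kind of cross-tabulation behind these estimates can be sketched in a few lines. The records, field names, and counts below are purely illustrative, not the actual HRS or W-2 layout; a real analysis would use the restricted HRS matched file.

```python
from collections import Counter

# Hypothetical matched records: self-reported pension type from the survey
# paired with the DC contribution amount from the W-2 record.
# Field names and values are illustrative only.
matched = [
    {"self_report": "DB only", "w2_dc_contrib": 0.0},
    {"self_report": "DC only", "w2_dc_contrib": 1200.0},
    {"self_report": "DC only", "w2_dc_contrib": 0.0},
    {"self_report": "Not included", "w2_dc_contrib": 500.0},
    {"self_report": "Not included", "w2_dc_contrib": 0.0},
]

# Cross-tabulate: for each self-reported pension type, count records with
# zero versus positive DC contributions on the W-2 record.
tab = Counter(
    (r["self_report"], "positive" if r["w2_dc_contrib"] > 0 else "zero")
    for r in matched
)

for (ptype, contrib), n in sorted(tab.items()):
    print(f"{ptype:15s} {contrib:10s} {n}")
```

A weighted version would sum survey weights instead of counting records, since the published percentages are weighted.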
The ability to use survey data matched with administrative data is tremendously beneficial for a wide variety of research applications, from policy evaluation to economic research and program statistics to microsimulation modeling. A fundamental use of matched survey and administrative data by researchers at SSA has been to assess the accuracy of the survey data and to adjust for error in research and statistics produced from survey data. The primary surveys used in these types of analyses are the SIPP, CPS, and HRS, which may be accessed only on a restricted basis, subject to the terms and conditions specified by their parent entities and the agencies with authority over the matched administrative data files. This article reports on some important findings from these surveys with respect to survey measurement in the areas of OASDI (Social Security) and SSI participation and benefit amounts, disability diagnosis, earnings, and deferred compensation. The general findings regarding OASDI and SSI participation and benefit amounts appear to be quite robust across data sources and in terms of their implications for analyses of beneficiary well-being and poverty status. Research on measuring disability diagnosis, earnings, and deferred compensation using matched survey and administrative data is in its infancy. We summarize the key findings as follows.
- Self-reported data in the CPS slightly underreport OASDI receipt and significantly underreport SSI receipt. Self-reported data in the SIPP slightly overreport receipt of OASDI; however, the picture is more complicated for receipt of SSI depending on the year of analysis and whether the data are analyzed from a monthly or annual perspective. Estimates from both surveys indicate some confusion among respondents between the two sources of income. When administrative data are used in place of self-reported survey data, estimated poverty rates fall, especially among SSI recipients.
- For disability research, both survey and administrative data have appreciable strengths depending on the specific application of the data. Survey data are more likely to better reflect the perspective of the individual and often contain measures of functional limitations and severity that are not available from administrative records. The disability information in matched administrative records may better reflect the concepts of interest for more programmatically oriented studies.
- There appears to be substantial misreporting of pension type based on comparisons between self-reported pension type and administrative data on annual contributions to DC pension accounts. Matched administrative data from IRS W-2 records and other sources hold great promise for improving the measurement of pension plan participation and contribution amounts.
One area that is ripe for future research is the extent to which self-reported earnings in the SIPP, CPS, and HRS agree with earnings captured in SSA's administrative records systems. This is an important measurement issue, especially for the working-age population. It is also a complex measurement issue. Survey data on earnings are captured in many forms (weekly, monthly, annual—gross or net of income taxes) and for different sources (primary job, secondary job, wage and salary income, self-employment income). In SSA's administrative records systems, earnings may be recorded differently depending on whether they are counted when earned or when received, or whether they are actual or countable, estimated or verified, monthly or annual. A systematic comparison of survey-based earnings measures and matched administrative data on earnings may lead to improvements in survey imputations of missing earnings data and more accurate analyses of individual well-being and the distributional implications of OASDI and SSI policies.
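Two summary measures that such a comparison might produce, an exact-match rate and a mean survey-minus-administrative error, parallel those used in the SIPP benefit studies discussed above. A minimal sketch with invented earnings pairs:

```python
# Illustrative comparison of survey-reported annual earnings with matched
# administrative earnings for the same individuals. All values are made up.
pairs = [
    (30000.0, 30000.0),  # (survey report, administrative record)
    (42000.0, 41500.0),
    (18000.0, 20000.0),
    (55000.0, 55000.0),
]

n = len(pairs)
# Share of respondents whose survey report equals the administrative amount.
exact_match_rate = sum(1 for s, a in pairs if s == a) / n
# Mean difference; positive values indicate survey overreporting on average.
mean_error = sum(s - a for s, a in pairs) / n

print(f"Exact-match rate: {exact_match_rate:.0%}")
print(f"Mean error (survey minus administrative): {mean_error:+.2f}")
```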
Finally, although they were not addressed in this article, some studies on mortality also have used SSA administrative records matched to survey data. Age-specific death rates typically are constructed by combining vital statistics on the number of deaths (numerator) with Census data on the size of the at-risk population (denominator). Administrative records provide these data from a single source (Lauderdale and Kestenbaum 2002), but do not necessarily contain the socioeconomic variables needed to compute subgroup-specific death rates that may be of interest to researchers. Survey data matched with administrative data provide a broader picture of the population; however, very few surveys were conducted long enough ago and have a sufficiently high match rate to administrative data to support detailed analyses.
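The death-rate computation described above is simple arithmetic: deaths in an age group (numerator) divided by the at-risk population in that group (denominator), conventionally scaled per 100,000. A sketch with hypothetical counts:

```python
# Hypothetical counts by age group; a real analysis would draw the numerator
# from vital statistics or administrative death records and the denominator
# from Census or administrative population data.
deaths = {"65-74": 1800, "75-84": 4200, "85+": 9100}
population = {"65-74": 120000, "75-84": 70000, "85+": 35000}

# Age-specific death rates per 100,000 at-risk persons.
rates = {age: deaths[age] / population[age] * 100_000 for age in deaths}

for age, rate in rates.items():
    print(f"{age:6s} {rate:8.1f} per 100,000")
```

Subgroup-specific rates (say, by education or nativity) follow the same arithmetic but require the socioeconomic variables that, as noted, administrative records alone often lack.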
1 See the SIPP home page for additional details (www.census.gov/sipp/).
2 See the CPS home page for additional details (www.census.gov/CPS/).
3 See the HRS home page for additional details (hrsonline.isr.umich.edu/).
4 See the NSCF home page for additional details (www.socialsecurity.gov/disabilityresearch/nscf.htm). See also Davies and Rupp (2005/2006) and Rupp and others (2005/2006).
5 See the NHIS home page for additional details (www.cdc.gov/nchs/nhis.htm).
6 See the NHANES home page for additional details (www.cdc.gov/nchs/nhanes.htm).
7 Sizeable differences between the MBR and PHUS would arise predominantly for Social Security Disability Insurance (DI) beneficiaries who went through the appeals process. Upon the award of the DI benefit, the MBR would be updated to reflect benefits paid retroactively to the date of entitlement, whereas the PHUS would show one large lump-sum payment for the month of award and zero payments before award.
8 Sears and Rupp (2003) compared results based on the PHUS with the MBR-based results of Huynh, Rupp, and Sears (2002) and found the differences to be negligible. The percentage of March 1996 respondents who reported the exact amount of the administrative OASDI benefit improved to 51 percent with the PHUS, compared with 46 percent in the earlier study using the MBR, but there was no corresponding improvement in the estimated mean error between the survey and administrative benefit amounts. This suggests that large lump-sum payments to DI awardees occurred relatively rarely among SIPP respondents. However, Huynh, Rupp, and Sears (2002) did not disaggregate by age or type of OASDI benefit, so we can only speculate without further research.
9 Olson (2002) analyzed the consistency between Social Security benefit amounts for May 1990 in the SIPP and the MBR.
10 Beginning in 2006, the HRS also collects detailed data on physical performance measures, biomarkers, and psychological topics through enhanced face-to-face interviews with selected respondents. These data are not addressed in this article.
11 Abowd and Stinson (2004) developed a procedure that allows for potential measurement error in both data sources.
Abowd, John, and Martha Stinson. 2004. Estimating measurement error in SIPP annual job earnings: A comparison of Census survey and SSA administrative data. Mimeo (July).
Aziz, Faye, Beth Kilss, and Frederick Scheuren. 1978. 1973 Current Population Survey: Administrative record exact match file codebook, part I—code counts and item definitions. Studies from Interagency Data Linkages, Report No. 8. Washington, DC: Department of Health, Education, and Welfare, Publication No. (SSA) 79-11750.
Bound, John, and Timothy Waidmann. 1992. Disability transfers, self-reported health and the labor force attachment of older men: Evidence from the historical record. Quarterly Journal of Economics 107(4): 1393–1420.
Bridges, Benjamin, Linda Del Bene, and Michael V. Leonesio. 2003. Evaluating the accuracy of 1993 SIPP earnings through the use of matched Social Security Administrative data. 2002 Proceedings of the American Statistical Association, Survey Research Methods Section. Alexandria, VA: American Statistical Association, 306–311.
Buessing, Marric, and Mauricio Soto. 2006. The state of private pensions: Current 5500 data. Issue Brief No. 42. Chestnut Hill, MA: Center for Retirement Research at Boston College (February).
Costo, Stephanie L. 2006. Trends in retirement plan coverage over the last decade. Monthly Labor Review 129(2): 59–64.
Czajka, John L., James Mabli, and Scott Cody. 2008. Sample loss and survey bias in estimates of Social Security beneficiaries: A tale of two surveys. Washington, DC: Mathematica Policy Research, Inc.
Davies, Paul S., and Kalman Rupp. 2005/2006. An overview of the National Survey of SSI Children and Families and related products. Social Security Bulletin 66(2): 7–20.
Dushi, Irena, and Marjorie Honig. 2008. How much do respondents in the Health and Retirement Study know about their tax-deferred contribution plans? A cross-cohort comparison. Working Paper No. 2008-201. Ann Arbor, MI: Retirement Research Center at the University of Michigan.
Dushi, Irena, and Howard Iams. 2007. Cohort differences in wealth and pension participation of near-retirees. Paper presented at the Population Association of America Annual Meeting, New York, NY (March 29–31).
Fisher, T. Lynn. 2005. Measurement of reliance on Social Security benefits. Paper presented at the Federal Committee on Statistical Methodology Research Conference, Washington, DC (November 15).
———. 2008. The impact of survey choice on measuring the relative importance of Social Security benefits to the elderly. Social Security Bulletin 67(2): 55–64.
Gottschalk, Peter, and Minh Huynh. 2005. Validation study of earnings data in the SIPP—Do older workers have larger measurement error? Working Paper No. 2005-07. Chestnut Hill, MA: Center for Retirement Research at Boston College.
Huynh, Minh, Kalman Rupp, and James Sears. 2002. The assessment of Survey of Income and Program Participation benefit data using longitudinal administrative records. Survey of Income and Program Participation Report No. 238. Washington, DC: Census Bureau.
Kilss, Beth, and Frederick J. Scheuren. 1978. The 1973 CPS-IRS-SSA exact match study. Social Security Bulletin 41(10): 14–22.
Koenig, Melissa. 2003. An assessment of the Current Population Survey and the Survey of Income and Program Participation using Social Security Administrative data. Paper presented at the Federal Committee on Statistical Methodology Research Conference, Washington, DC (November 18).
Kreider, Brent. 1999. Latent work disability and reporting bias. Journal of Human Resources 34(4): 734–769.
Lauderdale, Diane S., and Bert Kestenbaum. 2002. Mortality rates of elderly Asian American populations based on Medicare and Social Security data. Demography 39(2): 529–540.
Lininger, Charles A. 1981. The goals and objectives of the Survey of Income and Program Participation. 1980 Proceedings of the American Statistical Association, Survey Research Methods Section. Alexandria, VA: American Statistical Association, 480–485.
Munnell, Alicia H., and Annika Sunden. 2004. Coming up short: The challenge of 401(k) plans. Washington, DC: Brookings Institution Press.
Nicholas, Joyce, and Michael Wiseman. 2009. Elderly poverty and Supplemental Security Income. Social Security Bulletin 69(1): 45–73.
Olson, Janice A. 2002. Social Security benefit reporting in the Survey of Income and Program Participation and in Social Security administrative records. ORES Working Paper Series No. 96. Washington, DC: Social Security Administration.
Poterba, James M., Steven F. Venti, Joshua Rauh, and David A. Wise. 2006. Defined contribution plans, defined benefit plans, and the accumulation of retirement wealth. NBER Working Paper No. 12597. Cambridge, MA: National Bureau of Economic Research.
Rupp, Kalman, and Paul S. Davies. 2004. A long-term view of health status, disabilities, mortality, and participation in the DI and SSI disability programs. In Research in Labor Economics: Accounting for Worker Well-Being, Vol. 23, Solomon Polachek, ed., 119–183. Amsterdam, Netherlands: Elsevier, JAI Press.
Rupp, Kalman, Paul S. Davies, Chad Newcomb, Howard Iams, Carrie Becker, Shanti Mulpuru, Stephen Ressler, Kathleen Romig, and Baylor Miller. 2005/2006. A profile of children with disabilities receiving SSI: Highlights from the National Survey of SSI Children and Families. Social Security Bulletin 66(2): 21–48.
Scheuren, Frederick, and Roger Herriot. 1975. Chapter 1: General introduction and background, exact match research using the March 1973 Current Population Survey—initial states. Studies from Interagency Data Linkages, Report No. 4. Washington, DC: Department of Health, Education, and Welfare, Publication No. (SSA) 76-11750.
Sears, James, and Kalman Rupp. 2003. Exploring Social Security payment history matched with the Survey of Income and Program Participation. Paper presented at the Federal Committee on Statistical Methodology Research Conference, Washington, DC (November 18).
Sickles, Robin C., and Paul Taubman. 1997. Mortality and morbidity among adults and the elderly. In Handbook of Population and Family Economics, Vol. 1A, Mark R. Rosenzweig and Oded Stark, eds., 559–643. Amsterdam, Netherlands: Elsevier.
Vaughan, Denton R. 1979. Errors in reporting Supplemental Security Income recipiency in a pilot household survey. 1978 Proceedings of the American Statistical Association, Survey Research Methods Section. Alexandria, VA: American Statistical Association, 288–293.
———. 1989. Development and evaluation of a survey-based type of benefit classification for the Social Security program. Social Security Bulletin 52(1): 12–26.
Vaughan, Denton R., T. Cameron Whiteman, and Charles A. Lininger. 1984. The quality of income and program data in the 1979 ISDP research panel: Some preliminary findings. Review of Public Data Use, Vol. 12, 107–131.
Ycas, Martynas A., and Charles A. Lininger. 1981. The income survey development program: A review. 1980 Proceedings of the American Statistical Association, Survey Research Methods Section. Alexandria, VA: American Statistical Association, 486–490.