LaCour may have been lying for years

'People are going to stop trusting experiments,' says one expert

The investigation into a former University of California Los Angeles (UCLA) political science graduate student accused of falsifying data that led to a now retracted paper in Science is ongoing, but it is becoming increasingly likely that it involved a "years-long con," Tom Bartlett reports for The Chronicle of Higher Education.

Initial paper

Michael LaCour may have fabricated data for a paper, co-written with Columbia University Professor Donald Green, that showed short conversations about gay rights with gay canvassers resulted in a significant shift in California voters' opinions on the issue.


The paper, published in December 2014, gained major media attention and won LaCour an award. It also helped him earn an offer from Princeton University to begin as an assistant professor in the fall. Whether or not that offer stands is unclear.

A few academics dug into the paper right away and emerged "unimpressed," writes Jesse Singal in New York Magazine's "Science of Us."

Suspicions arise

Stanford University's David Broockman, Yale University's Peter Aronow, and UC Berkeley's Joshua Kalla released their own paper that, among other things, argued LaCour's data mirrors that of the 2012 Cooperative Campaign Analysis Project (CCAP).

New York University's Patrick Perry told Singal that LaCour's findings are "transparently bogus." LaCour's survey and the CCAP used the same scale to measure responses. The only "meaningful difference" between the two data sets is that LaCour's includes several median points, which are commonly used as filler for missing data, says Perry.

When the fillers are added to missing values in the CCAP set, "the data are identical," Singal writes. 

In May, LaCour's dissertation adviser asked him to produce the raw data. In a 23-page paper posted to his website the next day, he explained that he had destroyed the entire data set in accordance with UCLA policy. Previously, he had told Green he accidentally deleted it.

"Anonymizing survey data is a common practice; wiping out all traces of the data collected for a study is not," writes Bartlett.

LaCour then asked two people—a former UCLA graduate student and the institution's chair of communication studies—for their passwords to Qualtrics, the online survey-design tool he said he used to collect the data. They provided the information, unaware of allegations against him.

According to unidentified UCLA sources, LaCour then created survey panels on each account and posted screenshots of them online. However, he omitted the section that would show whether or not the surveys had actually been sent.

More cover-ups revealed

LaCour has admitted to lying about receiving $793,000 in research grants.

Others have discovered that he forged an email, created a fake PDF confirmation of study preregistration, invented a research firm representative, and claimed to have won a teaching award that does not exist.

Still, he stands by his paper, although he has limited his public comments on the advice of his lawyer.

Who is to blame?

Much of the discussion around the scandal centers on the fact that no one caught the discrepancies until after the paper had been published. Green told NPR he holds "quite a bit" of the blame because he "should have been the one to have noticed this."

"Looking back on it, I wasn't suspicious enough," he said.

Alex Coppock, a Columbia political science graduate student involved with the paper, says it is difficult to spot data sets that have been forged in their entirety. The paper was vetted more thoroughly than most, yet the fraud still slipped through because researchers are trained to look for a fudged number, not an entirely fabricated data set.

Additionally, LaCour had relationships with several senior researchers, and it is possible each thought another was more closely monitoring and guiding the study.

What it means for the field

Because the paper received accolades and media exposure that few academic papers do, the scandal may affect the entire field of experimental political science if the accusations are indeed true, says Coppock.

Researchers may have lost more than knowledge about how people's minds actually change when speaking with canvassers, he says.

"We lost that knowledge. We can get that back by doing another experiment. The thing I'm worried about is that people are going to stop trusting experiments," he says (Bartlett, The Chronicle of Higher Education, 6/2; Singal, "Science of Us," New York Magazine, 6/1).

