Social Research Glossary

 


 

Citation reference: Harvey, L., 2012-17, Social Research Glossary, Quality Research International, http://www.qualityresearchinternational.com/socialresearch/

This is a dynamic glossary and the author would welcome any e-mail suggestions for additions or amendments. Page updated 2 January 2017, © Lee Harvey 2012–2017.

 


_________________________________________________________________

Fraud


core definition

Fraud is criminal deception or the use of false representations to gain unjust advantage.


explanatory context

Fraud sometimes occurs in scientific research and has occasionally been uncovered. It takes three broad forms.

 

First is the complete fabrication of research findings, which may or may not follow some initial genuine attempt at the research. Such fraud may be committed to make an academic or political point or to enhance a career. (Cyril Burt's psychological research is a case of this kind of fraud.) Another example is described by Becker and Becker (2011):

For still more questionable scholarship consider the case of an Australian higher education student-satisfaction guru who asserted that his research showed what encourages university students to learn effectively based on a bivariate comparison of student reported outcomes and teaching techniques. The author provided a scatter plot that he claimed showed a positive relationship between a y-axis index for his "deep approach" (aimed at student understanding versus "surface learning") and an x-axis index of "good teaching" (including feedback of assessed work, clear goals, etc.).[FN 8]

When I contacted the author to get a copy of his data and his coauthored "Paper presented at the Annual Conference of the Australian Association for Research in Education, Brisbane (December 1997)," which was listed as the source for his regression of the deep approach index on the good teaching index, he replied that the conference paper had never been written and that due to a lack of research assistance, it would take some time to retrieve the data and referred me to his coauthor. [FN 9]

Aside from the murky issue of citing a paper which this author subsequently admitted does not exist, and his not providing the data on which his published 1998 paper is allegedly based, in Becker (2004) I demonstrated a potential problem in bivariate comparisons aggregated at the university level. [FN10] Subsequent to our correspondence, the author became embroiled in a controversy concerning the suspension of a research director who publicly criticized the Higher Education Academy's National Student Survey as a "hopelessly inadequate improvement tool." (Gill 2008)

Footnotes:

8. See Ramsden (1998, pp. 352-355).
9. McCullough and Vinod (2003) showed that the replication policy of the American Economic Review was ineffective. Then AER editor Ben Bernanke (2004) adopted the mandatory data and code archive recommended by McCullough and Vinod. This policy, however, would not have guarded against someone citing a non-existent paper as the source of empirical findings.
10. The author claimed to be working with data aggregated at the university level for student self-reported use of a "deep learning approach" and instructors' "good teaching practices." Inherent in working with such aggregated data is "Simpson's paradox," where disaggregate results contradict aggregate results. Because the author could not provide his reported data, to see this phenomenon consider the individual regressions for the following three hypothetical universities, where each shows a negative relationship for y (deep approach) and x (good teaching), with respective slope coefficients of -0.4516, -0.0297, and -0.4664. However, the fourth regression on the university means, which is what the author allegedly used, shows a positive relationship, with a slope coefficient of 0.1848. [The hypothetical data are included in the original but omitted here] ... Unlike this attempt to draw inferences from end-of-program student evaluations that suffer from problems of aggregation, sample selection, endogeneity, and heteroscedasticity, Weinberg, Fleisher and Hashimoto (2009) use appropriate model specifications and estimation techniques to address these problems. They show that student evaluations are positively related to grades but unrelated to learning once the effect of grades is removed. Any weak relationship between learning and student evaluations arises because students are likely unaware of or do not recognize how much they have actually learned at the time the evaluations are administered.
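Because Becker's own hypothetical data are omitted above, the sketch below uses invented figures simply to illustrate the aggregation problem described in footnote 10 (Simpson's paradox): within each of three made-up universities the regression of the "deep approach" index on the "good teaching" index has a negative slope, yet a regression fitted to the three university means has a positive slope. The numbers and the helper function are illustrative assumptions only and will not reproduce the coefficients quoted in the footnote.

# Minimal sketch of Simpson's paradox under aggregation (invented data).
import numpy as np

def slope(x, y):
    # Ordinary least-squares slope of y on x.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Three hypothetical universities: within each, "deep approach" (y) falls as
# "good teaching" (x) rises, but universities with higher average x also have
# higher average y.
universities = {
    "A": ([1, 2, 3, 4], [5.0, 4.6, 4.1, 3.7]),
    "B": ([4, 5, 6, 7], [6.0, 5.5, 5.1, 4.6]),
    "C": ([7, 8, 9, 10], [7.0, 6.6, 6.1, 5.7]),
}

for name, (x, y) in universities.items():
    print(f"University {name}: within-university slope = {slope(x, y):+.3f}")  # all negative

# Regressing on the university means (the aggregated comparison the author
# allegedly used) reverses the sign.
mean_x = [np.mean(x) for x, _ in universities.values()]
mean_y = [np.mean(y) for _, y in universities.values()]
print(f"Slope fitted to the university means = {slope(mean_x, mean_y):+.3f}")  # positive

Run as written, the three within-university slopes come out negative while the slope on the means is positive, which is the reversal the footnote warns about when only aggregated data are compared.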

Second, fraud occurs when only some of the results of the research are reported, selected in such a way as to deliberately subvert the outcome of the research.

 

Third, fraud occurs when the research, undertaken in good faith and reported honestly, misleads by suggesting that the work was done in more depth than it actually was. (Margaret Mead's anthropological work, for example, has been criticised on these grounds.)

 

Fraud raises particularly severe problems of ethics in scientific research.


analytical review

Jha (2012) in The Guardian reported as follows:

The proportion of scientific research that is retracted due to fraud has increased tenfold since 1975, according to the most comprehensive analysis yet of how research papers go wrong.

The study, published on Monday in the Proceedings of the National Academy of Sciences (PNAS), found that more than two-thirds of the biomedical and life sciences papers that have been retracted from the scientific record are due to misconduct by researchers, rather than error.

The results add weight to recent concerns that scientific misconduct is on the rise and that fraud increasingly affects fields that underpin many areas of public concern, such as medicine and healthcare.

The authors said their findings could only be a conservative estimate of the true scale of scientific misconduct.

"The better the counterfeit, the less likely you are to find it – whatever we show, it is an underestimate," said Arturo Casadevall, professor of microbiology, immunology and medicine at the Albert Einstein College of Medicine in New York and an author on the study.

Casadevall and his colleagues examined 2,047 papers on the PubMed database that had been retracted from the biomedical literature through to May 2012.

The authors consulted secondary sources such as the US Office of Research Integrity and Retraction Watch blog, which highlights cases of scientific misconduct, to work out the reasons for each of the retractions.

Their results found that 67.4% of retractions were attributable to scientific misconduct and only 21.3% were down to error. The misconduct percentage was composed of fraud or suspected fraud (43.3%), duplicated publications (14.2%) and plagiarism (9.8%).

In addition, the long-term trend for misconduct was on the up: in 1976 there were only three retractions for misconduct out of 309,800 papers (0.00097%) whereas there were 83 retractions for misconduct out of 867,700 papers at a recent peak in 2007 (0.0096%).

Among recent and well-publicised cases of scientific fraud is that of the South Korean stem cell scientist Hwang Woo-suk, who was dismissed from his post at Seoul University in 2006 after fabricating research on stem cells.....
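As a quick check of the figures quoted above, the rates and the roughly tenfold increase follow directly from the counts and paper totals as reported by Jha; the short snippet below simply reproduces that arithmetic and adds nothing from the PNAS study itself.

# Reproducing the retraction-rate arithmetic quoted above
# (counts and totals as reported in Jha, 2012).
rate_1976 = 3 / 309_800 * 100    # misconduct retractions per 100 papers, 1976
rate_2007 = 83 / 867_700 * 100   # misconduct retractions per 100 papers, 2007

print(f"1976 rate: {rate_1976:.5f}%")                        # about 0.00097%
print(f"2007 rate: {rate_2007:.4f}%")                        # about 0.0096%
print(f"Increase: about {rate_2007 / rate_1976:.0f}-fold")   # roughly tenfold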


associated issues

 


related areas

See also

Researching the Real World Section 10


Sources

Becker, W. E. and Becker, S. R., 2011, 'Potpourri: Reflections from husband/wife academic editors', American Economist, 56(2), 74–84.

Jha, A., 2012, 'Tenfold increase in scientific research papers retracted for fraud', The Guardian, 1 October 2012, available at http://www.guardian.co.uk/science/2012/oct/01/tenfold-increase-science-paper-retracted-fraud, accessed 18 January 2013.


copyright Lee Harvey 2012–2017


