6. How common is misconduct?
Systematic review (screened 3207 papers)
Meta-analysis (18 studies)
• surveys of fabrication or falsification
• NOT plagiarism
2% admitted misconduct themselves
(95% CI 0.9-4.5)
14% aware of misconduct by others
(95% CI 9.9-19.7)
Fanelli PLoS One 2009;4(5):e5738
7. How often is misconduct detected?
PubMed retractions 0.02%
US Office of Research Integrity (ORI): 0.01-0.001%
(1 in 10,000 / 100,000 scientists)
Image manipulation in J Cell Biology: 1% (8/800)
FDA audits: 2% of investigators guilty
of serious scientific misconduct
8. Because major ethical
problems are (quite) rare
Editors don’t see many cases during their
term of office
Publishers looking after many journals can
provide ‘corporate memory’
AND
Editors are largely untrained
10. What should journals & publishers do?
Educate
Raise awareness
Have clear policies
?Screen
?Discipline
11. Tools for detecting misconduct
Anti-plagiarism software (eg eTBLAST,
CrossCheck, Turnitin)
Screening images (Photoshop)
Chemical structure checks
Data review (digit preference)
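Digit-preference review can be sketched in a few lines: in many honestly measured datasets the final recorded digit is roughly uniform over 0-9, whereas fabricated numbers often favour particular digits (0s and 5s). A minimal, illustrative chi-square check (function names are mine, not from any screening tool; 16.92 is the chi-square critical value for df=9 at p=0.05):

```python
from collections import Counter

def digit_preference(values, crit=16.92):
    """Chi-square test for uniformity of terminal digits.

    Returns (statistic, flagged): flagged is True when the terminal
    digits deviate from uniform more than chance would suggest,
    marking the data as worth a closer manual look.
    """
    last = [abs(int(v)) % 10 for v in values]
    n = len(last)
    counts = Counter(last)
    expected = n / 10
    chi2 = sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))
    return chi2, chi2 > crit

# Fabricated-looking sample: every value ends in 0 or 5
suspect = [120, 135, 140, 150, 165, 170, 185, 190, 200, 215] * 5
print(digit_preference(suspect))
```

A flag is only a prompt for human review, not proof of misconduct: rounded or binned data legitimately prefer certain digits.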
12. CrossCheck
Based on iParadigms software
Compares text against publishers’ database
Database run by CrossRef (DOI system)
Database currently contains 59,000 titles
Shows % concordance + source
Can exclude “quotes” and references
?False positives / ‘noise’ level
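The “% concordance” idea can be illustrated with word shingles: break both texts into overlapping word n-grams and report what fraction of the submission’s n-grams also occur in the source. This is a toy sketch of the concept only; the actual CrossCheck/iParadigms matching algorithm is proprietary and far more sophisticated (which is also why its false-positive ‘noise’ level is an open question):

```python
def shingles(text, n=5):
    """Word n-grams ('shingles') of a text, lower-cased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def concordance(submission, source, n=5):
    """Rough percentage of the submission's shingles found in the source."""
    sub = shingles(submission, n)
    if not sub:
        return 0.0
    return 100.0 * len(sub & shingles(source, n)) / len(sub)

src = "the quick brown fox jumps over the lazy dog again and again"
copy = "he wrote the quick brown fox jumps over the lazy dog again and again"
print(concordance(copy, src))
```

Even this toy version shows why quotes and reference lists need excluding: correctly quoted text matches just as strongly as plagiarised text.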
13. Image screening
Pioneered by J Cell Biology
Used in some life sciences journals
Important for research where
the image = the findings
• genetics / cell biology / radiography
Found 1% unacceptable manipulation
Manual check using Photoshop
Requires editor time / expertise
Rossner & Yamada, JCB 2004;166:11-15
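One automatable piece of this screening is flagging copy-pasted regions: honest scans carry sensor noise, so two genuinely different bands almost never agree pixel-for-pixel. A minimal sketch on a toy grayscale image (my own illustration; JCB’s actual screening is a manual Photoshop workflow, not this heuristic):

```python
def region(img, top, left, h, w):
    """Extract an h x w patch from a grayscale image (list of pixel rows)."""
    return [row[left:left + w] for row in img[top:top + h]]

def near_identical(a, b, tol=2, frac=0.99):
    """Flag two patches as a possible copy-paste.

    If at least frac of the pixel pairs differ by at most tol grey
    levels, the match is too clean for independent scans and the
    pair deserves a manual look.
    """
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    close = sum(abs(x - y) <= tol for x, y in zip(flat_a, flat_b))
    return close / len(flat_a) >= frac

# Toy 6 x 12 grayscale "blot" with one lane pasted over another
blot = [[(r * 7 + c * 13) % 256 for c in range(12)] for r in range(6)]
lane1 = region(blot, 1, 0, 4, 4)
for dr in range(4):
    for dc in range(4):
        blot[1 + dr][6 + dc] = lane1[dr][dc]  # simulate the manipulation

print(near_identical(lane1, region(blot, 1, 6, 4, 4)))  # pasted copy
print(near_identical(lane1, region(blot, 1, 2, 4, 4)))  # genuinely different lane
```

As with text matching, a hit is only a prompt for expert review: legitimate re-use of a loading control, for example, can produce the same signal.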
16. Chemical structure checks
Examined structure-factor files
Identified >70 bogus organic structures
Authors had taken a genuine structure and switched
metals (eg Fe / Cu) or chemical groups (CH2 / NH /
OH)
Editors’ note: “it is a concern and a disappointment
that these [chemically implausible or impossible
structures] passed into the literature”
>70 articles retracted
Acta Crystallographica 2010;E66:e1-2
17. Where to screen? (frequency vs severity matrix)
• high frequency / high severity: yes
• high frequency / low severity: ?
• low frequency / low severity: no
• low frequency / high severity: ?
Gross manipulation of blots. (A) Example of a band deleted from the original data (lane 3). (B) Example of a band added to the original data (lane 3).
This shows a timeline leading up to the current position:
• 1980s – concern about publication bias started to come from people compiling systematic reviews (eg the Cochrane Collaboration)
• 1986 – the first major paper calling for trial registration was by John Simes in the Journal of Clinical Oncology
• 1990 – a more influential paper was published in JAMA by Iain Chalmers (one of the founders of the Cochrane Collaboration)
• 1990 – in the same year, Kay Dickersin published an important paper about risk factors for publication bias
• 1997 – the study from Tramer et al provided clear evidence that covert duplicate publication (in this case about Glaxo Wellcome’s anti-emetic ondansetron) could bias the results of meta-analyses
• late 1997 – FDAMA (the FDA Modernization Act) came into force and clinicaltrials.gov was set up to register trials (these will be covered in more detail in later slides)
• 1999 – Glaxo Wellcome was one of the first drug companies to establish its own trial register; it was retrospective (ie it included studies only after a product was licensed) and didn’t survive the GSK merger
• The UK industry association (ABPI) created a register, but it was largely ignored
In late 2004, the editors of several major journals announced that trial registration would be compulsory, and that trials had to be registered by September 15th 2005. This graph clearly shows the effect of that deadline. It’s interesting to note that it’s not only commercial but also academic studies being registered. (The graph shows the number of NEW registrations per week at clinicaltrials.gov.)