
Notebook: peer-review advice

(2018-02-26 13:21:21)
Notebook: http://210.75.240.149/blog-847277-740998.html

Blog post

Reflections of an editor and editorial board member of English-language journals: the Ten Commandments, "charcoal in the snow" vs. "flowers on brocade", and the rejected manuscript behind a $2.9 billion product***

7,647 reads, 2013-11-12 03:43 | Personal category: English writing (group host) | System category: Paper exchange | Keywords: English

Prologue

Bao Haifei's line, "The literature is forged from the heart's blood of those who came before; the literature is a martial-arts secret manual!" (Reference #1), struck a chord with me. I would add one point: a good piece of the literature is a work of art, created jointly by the author and the editors (editor plus reviewers).


Purpose

I serve as an editor or editorial board member for several international English-language journals, and have served especially long on the board of a journal more than 140 years old (founded in 1869), where I have come to know a number of senior scholars among the editors and board members. The professional scientific spirit of these senior scholar-reviewers has deeply impressed me: broad and deep learning; a truth-seeking attitude toward scholarship; clear, sharp, penetrating insight; attention to detail; and a sincere, considerate, kind, and generous scientific style in pursuit of excellence. Many scholars have had similar experiences, so the purpose of this essay is to toss out a brick to attract jade: to draw out better insights from abler colleagues.


Experience

The fate of your manuscript often hangs in the hands of one to three reviewers. If you are lucky, you draw conscientious reviewers and receive constructive criticism that helps you revise. If you are unlucky, you receive a bruising rejection letter from a reviewer: frost on top of snow. Some reviewers even reject your manuscript to gain an advantage and rush their own work into print first.

My own manuscripts have drawn reviewer comments such as "it reads as if written by a high-school student" and "I have tried it; it is impossible! I do not believe this research and these experiments. Reject."

I did not know what they meant; the comments were generalities detached from the paper's details. I doubted these reviewers' judgment: did they know how to define a high-school student's writing level? Perhaps some high-school students write far better than these reviewers do.

I was angry. What "I have tried it"? What "impossible"? He had not read the details, and every detail in the paper was backed by evidence. He was stereotyping, arguing from personal experience: he had "tried it", yet disclosed none of his own references, so there was no telling what he was actually talking about. He offered no objective, paper-specific comments, only subjective shouting! Faced with such a review, I had no idea how to revise the manuscript.

Fortunately, no single reviewer decides a manuscript's life or death. There exist scientific criteria for judgment.

First, the originality and potential of the research topic: is the experiment truly repeatable? Can different laboratories rigorously reproduce and verify it?

Later, we published a series of papers that have been cited more than three thousand times by different laboratories and are regarded as among the founding papers of the system model in that field.

Second, can the research results really be applied?

Years later, a company developed my related research for practical use in production, generating economic value: a deal worth 2.9 billion US dollars.

Later, the reviewer identified himself: an old Nobel laureate. He even joked, "Not only does he write like a high-school student; he looks like one too."

It left me not knowing whether to laugh or cry. I have never made fun of the hard work behind a manuscript that way.

When I was an apprentice at MIT and Harvard, I was grateful that my research mentors gave me the chance to review other people's manuscripts. I spent a great deal of time reading carefully, line by line and word by word, reading the related literature and digesting it thoroughly before writing my review comments. I was proud to see my mentors adopt my opinions.

To this day I still review the way I did as an apprentice: reading respectfully, finding the truth in the fine details, and trying to write my review comments in constructive, specific, considerate language.

Bill Gates said that life is not fair; people are born with unequal resources. For example, if English is not your native language, English writing is an obstacle. Even if you are a native English speaker, you may not be good at writing. Even if you are good at English writing, you may be unable to craft a logical scientific story that captures the reader's attention. (I once saw a scientific article edited by an old Pulitzer Prize winner that made no sense at all.)

Many of my colleagues do not want to serve as editors or board members because it takes so much of your time (Ref. #2, #3). Sometimes I can hardly bear to imagine a colleague reading a poorly written version of my own manuscript, drafted under time pressure.

Sometimes, while reviewing, you can feel that the author wrote with heart and soul. It touches your heart and searches the soul of your scientific spirit. It spurs you on.

Great authors:

"What really knocks me out is a book that, when you're all done reading it, you wish the author that wrote it was a terrific friend of yours and you could call him up on the phone whenever you felt like it. That doesn't happen much, though."(Jerome David Salinger “TheCatcher in the Rye” 1951 novel)

A good scientific manuscript should have clear logic. If the logic is there, readers can follow it through the sequence of your data sets (figures and tables) as they read your text. When I pick up a manuscript, I first scan the figures and tables: truth lives in the fine details. The details must stand up to scrutiny, and the synthesis must rise above them (watch for overstatement, a small head wearing an oversized hat).


The Ten Commandments

Over the years I have received many guidelines on peer review. Editor-in-chief David Drubin's Ten Commandments (after the Ten Commandments God gave Moses on Mount Sinai) are complete and balanced. I post them here to remind myself, as follows:

The First Commandment: review without bias, objectively; let the data speak.

The Second Commandment: review manuscripts promptly (finish your review within the requested time).

The Third Commandment: know your role. (As a reviewer-critic, you are an expert advisor to the monitoring editor. Your job is to evaluate rigor and originality, and the clarity of the science and the writing. Based on the comments of two or three reviewers, the monitoring editor then decides whether the manuscript should be accepted, returned to the authors for revision, or rejected.)

The Fourth Commandment: acknowledge the authors' effort; identify the strengths of the work as well as the manuscript's problems. (A review should begin with a positive statement, try not to hurt the authors' feelings, and respectfully acknowledge what the authors attempted to accomplish.)

The Fifth Commandment: offer constructive comments (how to communicate the results more clearly).

The Sixth Commandment: be judicious when suggesting additional work. (Do not propose additional studies or experiments that are unnecessary for supporting the study's main conclusions; alternatively, suggest acceptance provided the authors cite sufficient prior literature to support their conclusions.)

The Seventh Commandment: leave judgment of the manuscript's impact to posterity. (It is rarely possible to predict a manuscript's future; reviewers should focus on the questions "Is it new?" and "Is it true?")

The Eighth Commandment: try to foster a positive feedback loop of reviewing in your field. (Remember that good is repaid with good and evil with evil. Someone whose own manuscript has received an unfair review is more likely to treat others the same way. So if you want your papers reviewed fairly, follow the golden rule: do not do to others what you would not have done to you.)

The Ninth Commandment: remember that it is not your paper. (When reviewing a manuscript, your job is to make it more rigorous, complete, and clearly presented. On whether the manuscript meets the journal's quality standards, and on how the material is presented and interpreted, the authors should have the final say. It is their paper, not yours.)

The Tenth Commandment: strive to set a good example.

(Reviewing manuscripts with your students and postdocs offers a good teaching opportunity. Be aware, however, that young students and postdoctoral scientists can be a little too eager to prove themselves by finding a manuscript's faults rather than its strengths. Remember that if one of your students reviews a manuscript, it is up to you to make sure the comments accurately reflect your opinion, because it is you who submits the review.)


Self-encouragement

"We do indeed live hard lives: we must bear all kinds of external pressure and, even more, face the confusion within our own hearts. If, in the midst of that bitter struggle, someone casts an understanding glance your way, you feel a warmth of life; even a momentary glance is enough to stir me deeply." (after Jerome David Salinger, "The Catcher in the Rye", 1951 novel)

"Any jackass can kick down a barn, but it takes a good carpenter to build one." (US Congressman and Speaker of the House Sam Rayburn)

Likewise, any fool can throw a manuscript into the trash, but it takes good scholars, authors and reviewers alike, working wholeheartedly to build a good paper.

Good is repaid with good, and evil with evil. Someone whose manuscript has received an unfair review is more likely to treat others the same way. "Do not judge, so that you won't be judged. For with the judgment you use, you will be judged, and with the measure you use, it will be measured to you." (Matthew 7:1-2)

"从个人来讲,一个人内在的力量最为强大,只由心而发的热爱,才能激发自己的想象力和创造力。如果内心没有愿望,那么无论外界的刺激有多大,都很难取得成就,不要迎合社会,要摒弃功名利禄,遵从自己内心的想法" (崔琦--美籍华人物理学家,1998诺贝尔物理学奖获得者)学术人的传统美德讲骨气,讲傲气,讲面子,讲名声,总而言之,都讲虚的。大凡虚的东西,上去了就下不来,下来了那都是没办法的办法。一定要尊重自己内心的选择,凡事没有对错和好坏,皆因立场和心态之不同 (文双) (Ref. #8)

Bill Gates said that life is not fair and that people are born with unequal resources (Ref. #4). Understand the truth that "an inch has its strengths, a foot its shortcomings". We should encourage students to find and develop their own strengths, and hold up a sky for the talent in our organizations: "Innovation arises from the bottom up, from self-organized teams of talented individuals" (a Harvard study found that innovation is a grassroots effort rather than the result of authoritative experts leading from the front; source: http://blog.sciencenet.cn/blog-847277-722667.html).

Understanding different authors' individual perspectives (Ref. #5, Ref. #6, #7, #8) may help authors and reviewers build a good rapport with each other (Ref. #6). For a poorly written manuscript, you know to send charcoal in the snow. For a beautifully written manuscript, you know to add flowers to the brocade.


*** Note

With this essay I extend heartfelt thanks and respect to ScienceNet's volunteers (authors, readers, and editors)! Its purpose is to toss out a brick to attract jade and draw out better insights. I have been a ScienceNet member for only a few months, and I realize I cannot cover all the articles on this topic. If you find any relevant article, I hope you will add it to my reference list; I will revise my own blog post to incorporate your ideas and reference material.

"As the saying goes, where deeds depend on human effort, the sweetness and bitterness are known to oneself; where words voice the heart, they resonate with all; where thoughts come from the depths, they delight in meeting a kindred spirit. We look forward to your sharing the insight and worldly wisdom, the gains and losses, the honor and disgrace of this journey. Let us work together to build a rational, constructive platform of exchange for the scientific community and construct a new scientific life and society." (Li Xiao, executive deputy editor-in-chief of ScienceNet)

ScienceNet is broad and deep, full of crouching tigers and hidden dragons. My aim in joining is to learn from others. I have learned a great deal from reading other people's blog posts, and that has prompted me to contribute my own. Since ScienceNet is a volunteer platform, each of us should try to contribute our own blog posts, helping others and ultimately helping ourselves.


 

Reproducibility: The risks of the replication drive

The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists, says Mina Bissell.



 

************************************************************************

Photo description: the Salinger passage quoted above (Jerome David Salinger, "The Catcher in the Rye", 1951).



 

Scholars care greatly about the sources of your insights and your information. You must tell your reader where the raw material came from! Your facts are only as good as the place where you got them, and your conclusions only as good as the facts they're based on.

Reference #1: Bao Haifei, "Why keep staring at the literature?" 16,870 reads, 2013-10-20 20:15 | Personal category: Random thoughts | System category: Commentary.

http://blog.sciencenet.cn/blog-278905-734681.html

Ref. #2: Meng Jin, "Private words on reviewing for SCI journals" (editor's pick). 7,688 reads, 2013-10-26 06:33. http://blog.sciencenet.cn/blog-4699-736163.html

Ref. #3: Zeng Yongchun, "Review comments received one autumn". 12,497 reads, 2013-10-18 22:21. http://blog.sciencenet.cn/blog-531950-734202.html

Ref. #4: A Harvard study: innovation is a grassroots effort, not the result of authoritative experts leading from the front. http://blog.sciencenet.cn/blog-847277-722667.html

Ref. #5: Wu Yishan's three elements, Xu Xiao's three elements, and Qingyi Yutou. http://blog.sciencenet.cn/blog-847277-728976.html

Ref. #6: Zhao Bin, "Those easily overlooked but important things in writing scientific papers" (editor's pick). http://blog.sciencenet.cn/blog-502444-736825.html

Zhao Bin (He puts it thoroughly, more aptly than I could, so I quote him here! I believe quotation is the greatest tribute to an author!):

"In revising a paper, one should treat review comments with neither arrogance nor servility. First, we should respect the reviewers. SCI reviewing is almost always unpaid; that someone settles down to read your paper, which may not even be very good, carefully, itself deserves our respect.

It is no exaggeration to say that before your paper is published, and even after, few people will ever read and analyze it as carefully as the reviewers. Against that background we should first be grateful, and earnestly consider and digest every comment they raise, even the almost harsh ones. That is the premise; that is what I mean by "no arrogance". Most people can manage this much, since the reviewers do, after all, hold partial power over the life and death of your paper.

Yet for many students, showing "no servility" toward review comments seems harder. Because the editor's letters always take the reviewers' side, students dare not cross any reviewer's comment; even misunderstandings or mistaken suggestions get accommodated by every possible means. Clearly, that attitude is also wrong.

SCI journals use peer review: the submitter and the reviewers are peers, in most cases of comparable standing. The Chinese habit of translating it as "expert review" is misleading! If authors have focused their work on a relatively new area and drilled into it for three years, they should themselves be experts in it, knowing at least a bit more about their own field than others do; these authors are the true experts here. That is another premise.

It follows that, when revising and replying to reviewers, one should adopt the tone of peers, of experts discussing a problem, not the meek tone of a subordinate answering a superior; confidence matters! To discuss problems calmly with the reviewers while remaining grateful: that is the attitude we should take toward review comments." (Ref. #6) (also Ref. #7)

 

Ref. #7: Li Dongfeng, "The reviewer's sense of responsibility".

79 reads, 2013-11-2 07:05 | System category: Commentary | Keywords: reviewing, responsibility

Manuscript review is a key link in academic quality control. Reviewers safeguard the quality of the journal and bear a heavy responsibility toward the authors' work.

A manuscript should be taken seriously. Whether you accept or reject it, you should offer pertinent comments or suggestions; that is the minimum respect owed to the authors' labor.

Comparing journals at home and abroad, the gap here is obvious. When you submit abroad, the reviewers' comments are specific and clear, often running to several pages, giving detailed opinions on everything from the science and the rigor to the wording. Some domestic reviewers, by contrast, judge by their familiarity with the content, the authors' fame, or their own likes and dislikes. This breeds journals that favor certain sources of manuscripts and even turn into "private" journals. Rejections often come with two hasty, vague sentences, as if the reviewer cannot be bothered to argue with you. To some degree this reflects reviewers' neglect of the manuscript and disdain for the submitter. Some big names, pleading busyness, drag reviews out for ages and then do them perfunctorily. Such reviewers are not qualified!

Reject with caution and with concrete reasons. Point out the paper's problems so the authors can improve. A good review wins the authors' sincere admiration; a bad one makes them look down on you and even doubt the journal's credibility.

Of course, submitters must also be conscientious and not send off a manuscript that has not been carefully thought through. That is irresponsible toward oneself and toward the journal.
http://blog.sciencenet.cn/blog-729911-738210.html

 

Ref. #8: Wen Shuangchun, "The hidden art of submitting to top journals: the hotter, the easier to get in" (editor's pick).

7,296 reads, 2013-8-10 17:55 | Personal category: Serious matters | System category: Blog news | Keywords: weather, papers

Authors who have submitted papers know from experience that whether a manuscript gets in depends not only on its intrinsic level and quality; an element of external luck cannot be denied. For example, which editor handles a manuscript, and above all which reviewer(s) it ends up with, largely seals the paper's fate. Besides, weddings, house-movings, openings, and completion ceremonies all pick auspicious days; after researchers toil to produce results and write them up, they hope to publish in a top journal, so is there no auspicious day for submitting to top journals? (After a wish-fulfilling trip to Mount Heng a few days ago, old Wen believes in "auspicious days" more than ever!)

Auspicious days may be too random, so ask a longer-range question: most human productive activity is clearly periodic or fluctuating; farming has its seasons, and commerce has slack and peak seasons. A researcher submitting to a journal is in effect marketing a product; does this product's market also have slack and peak seasons?

In theory, the "market" for researchers' papers should be independent of the "marketing" time, with no seasons. On one hand, journals, especially top ones, are open 365 days a year, buying every day, cheating neither young nor old; on the other, researchers' work and life are the dullest and most monotonous, knowing no season, no bitter winter or blazing summer: research daily, write papers monthly. But as the saying goes, better to come at the right time than to come early; submission, as a form of door-to-door selling, should also be a matter of timing.

Researchers abroad have studied submission and acceptance rates at some top journals in psychology, physics, chemistry and other fields, and found that submission and acceptance do indeed show patterns, namely the so-called seasonal bias. The discoverers went on to offer researchers submission tips, such as:

Write when hot - submit when not.

Write when you can and submit when you are ready.

You should write when you like, but submit in July.

Most such studies show that researchers' submissions resemble Chinese village market days (in Hunan mostly called ganxu): the busiest markets tend to fall in blazing summer (submissions peak during the summer months). Studies of two physics journals, Europhysics Letters and Physical Review Letters, show that submissions to both peak in July. (There is considerable seasonal variation in submissions within each year, with July being typically the month of most submissions.)

Of course, what matters most is getting accepted. A study of Europhysics Letters last year showed that the month with the journal's highest acceptance rate, like the month of highest submissions, is July! Recently, encouraged by American Physical Society editor-in-chief Gene Sprouse, Manolis Antonoyiannakis, a senior assistant editor at Physical Review Letters, studied the seasonal bias of the journal's acceptance rates. A statistical analysis of 190,106 submissions over the 273 months from January 1990 to September 2012 showed that, of the twelve months of the year, August has the highest monthly average acceptance rate, followed by July. In short, the hottest months see not only the most submissions but also the highest acceptance rates!

Seen this way, researchers' conscious or unconscious crowding of submissions into the hot summer is no coincidence but hidden art. Top international journals are occupied mainly by foreigners; it now appears this is mainly because foreigners grasped the submission art before us, or grasped more of it. Old Wen, too, only came to this insight after paying respects to the Bodhisattva on Mount Heng; better late than never, though muddled for years. I urgently hope our country will soon fund a dedicated team to study the hidden art of submission!

However, after studying these mysteries carefully, old Wen found that although some top journals' acceptance rates are indeed somewhat seasonal, like crops in the field, the seasonal bias is tiny; as the Physical Review Letters editor put it, "No statistically significant variations were found in the monthly acceptance rates." In other words, statistically speaking, submission timing has almost no effect on acceptance rates.
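To make this concrete, here is a minimal sketch, in Python and with invented toy records rather than the actual PRL dataset, of the kind of check described above: tally acceptance rates by submission month, then test whether month and outcome are statistically independent.

    # A seasonal-bias check on toy submission records (hypothetical data).
    from collections import defaultdict
    from scipy.stats import chi2_contingency

    # Each record: (submission month 1-12, accepted?). Toy numbers only.
    records = [(7, True), (7, False), (8, True), (1, False), (8, True),
               (12, False), (7, True), (3, False), (8, False), (1, True)]

    accepted = defaultdict(int)
    total = defaultdict(int)
    for month, ok in records:
        total[month] += 1
        accepted[month] += ok

    for month in sorted(total):
        print(f"month {month:2d}: acceptance rate {accepted[month] / total[month]:.2f}")

    # Chi-square test of independence between month and outcome: a large
    # p-value means the monthly variation is consistent with chance,
    # i.e. no statistically significant seasonal bias.
    table = [[accepted[m] for m in sorted(total)],
             [total[m] - accepted[m] for m in sorted(total)]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

On the full 190,106-record PRL dataset, the reported conclusion was exactly this kind of null result: the monthly acceptance rates show no statistically significant variation.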

Either way, in this scorching summer, the hottest in decades, letting the comrades still fighting on the research battlefield know about this hidden art has real practical significance for soothing the heart, calming the mood, and boosting morale. Is there truly a hidden art to acceptance at top journals? At this moment, better to believe there is than that there isn't!

References

Manolis Antonoyiannakis, Acceptance rates in Physical Review Letters: No seasonal bias, arXiv:1308.1552 [physics.soc-ph], and references therein.

Figure: monthly average acceptance rates at PRL, 1990-2012 (highest in August).


http://bbs.sciencenet.cn/blog-412323-715891.html

 

 

 


 




Comments

[6] 2014-2-5 06:32
More came as below:
http://www.nature.com/news/reproducibility-the-risks-of-the-replication-drive-1.14184

Noam Harel • 2013-12-27 04:53 AM
The tone of this commentary and its title both seem to lobby against the growing communal push for replicating basic scientific results. However, Dr. Bissell's own examples pretty clearly support the exact reasons why the 'Replication Drive' has been growing in the first place - to be trusted, scientific results need to be reproducible; when there are discrepancies, scientists should work together to determine why. This is good for science. The bottom line? Everything has a cost. The societal cost of devoting resources toward reproducing scientific results is far outweighed by the benefits. Conversely, the societal cost of publishing papers based on the one or two times an experiment "worked" (ie got the desired result), while ignoring the 10 or more times the experiment "didn't work" has for too long been a dirty, open secret among scientists. It's great to see this issue subjected to a more public debate at long last.
Elizabeth Iorns • 2013-12-21 09:36 PM
The response of Andrew Gelman, Director of the Applied Statistics Center at Columbia University, is available here: http://andrewgelman.com/2013/12/17/replication-backlash/
John J. Pippin • 2013-12-08 07:22 PM
So replication threatens the cache of irreproducible experiments? For those removed from labs, the more important issue is the robustness and reliability of basic science experiments, not the ability to get a desired result at one point but not ever again. Basic science research is not (to the rest of us) about whether you can publish and thereby keep the grants coming. It's about getting to the truth, and hopefully about translating that truth to something useful rather than arcane. Dr. Bissell's opinion piece beggars the ability of laboratory science to stand on merit, and asks for permission either to be wrong or at least irrelevant. That's not science.
Etienne Burdet • 2013-11-28 01:50 PM
I tried to replicate the LHC experiments and failed. This is a proof that the Higgs boson is not science. Can I have my Nobel, please?
Yishai Shimoni • 2013-11-25 05:54 PM
I think there are a few points that were ignored thus far in the comments: 1. In principle it should not be obligatory for an experiment to be reproducible. It is useful to report on surprising results that cannot be explained. However, in such a case the reasons should be clarified, and until then no real conclusion can be drawn from the results. The results may be stochastic, they may depend on a specific overlooked detail, or they may be an artifact. Until the scientific community understands the conditions under which a result is reproducible or to what extent it is reproducible, it is not useful and should be regarded as a possible artifact. 2. Requiring anyone who wants to reproduce a result to contact the authors is not practical, especially if it is an important result. What if there are a few hundred labs who want to use the result? what if the researcher who performed the experiment changed position? changed fields? or even passed away? 3. The suggestion that anyone who wants to reproduce the results should contact the authors may very easily lead to knowledge-hoarding. It is a natural tendency to want to become a world authority on the specific technique, especially after spending years to arrive at the result. Unfortunately, this may mean holding back on some specific essential detail, and only by working with the author and adding them as co-authors is it possible to publish anything on it.
Replication Political Science • 2013-11-24 11:44 AM
I do agree with many of the points about being careful and considerate when replicating published work. But let's talk about the newcomers a bit more. First of all, I'm not sure I like the word "newcomer". Although this might be a misinterpretation, it sounds as if those trying to replicate work are the "juniors" who are not quite sure of what they are doing, while the "seniors" worked for years on a topic and deserve special protection against reputational damage. It goes without saying that anyone trying to replicate work should try to cooperate with the original authors. I agree. However, I would like to point out that original authors don't always show the willingness or capacity to invest time into helping someone else reproduce the results. As Bissell says herself, experiments can take years, and once the paper is published and someone decides to try to replicate it, the authors might already be working on a new, time-intensive topic. My students in the replication workshop were sometimes frustrated when original authors were not available to help with the replication. So I'd say, let's not just postulate that "newcomers" should try to cooperate, but that original authors should make time to help as well to produce the best possible outcome when validating work. It is in the interest of original authors to clearly report the experimental conditions, so that others are not thrown off track due to tiny differences. This goes for replication based on re-analysing data as well as for experiments. The responsibility of paying attention to details lies not only with those trying to replicate work. From my experience and that of my students trying to replicate work in the social sciences, papers do not always list all steps that led to the findings. My students are often surprised at how authors came up with a variable, recoded it, which model specifications they used etc. Sometimes STATA or R code is incomplete. Therefore, while those replicating work should try to be aware of such details, original authors need to work on this as well. Original research might take years, but it really should not take years to replicate it, just because not all information was given. Bissell states that replicators should bear the costs of visiting research labs and cooperating with the original authors ("Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible."). I'm not quite sure of this. As Bissell points out herself at the beginning of her article, it is often students who replicate work. Will their lab pay for expensive research trips? Will they get travel grants for a replication? And for the social sciences, which often work without a lab structure, who will pay for the replication costs? I feel the statement that a replicator must bear all costs, even though original authors profit from cooperation as well, can be off-putting for many students engaging in replication.
Irakli Loladze • 2013-11-24 04:18 PM
Why would some 'senior' resist replication initiative? The federal investment in R&D is over $140 billion annually. Almost a half of it goes to NIH, NASA, NSF, DOE, and USDA. A huge chunk of it is given away on the basis of grant proposals. For every grant that a scientist wins, about a half of it goes to her university as an overhead cost. So deans and provosts salivate over every potential grant and promote those scientists that win more grants, not those that pursue truth. The reproducibility of research is the last thing on their minds, if it is on their minds at all. The system successfully turns scientists from being truth seekers to becoming experts in securing external funding. The two paths do not have to be mutually exclusive but often and increasingly they are conflicting. As the grantsmanship sucks in more and more skills and time, the less time is devoted to genuine science. The disease is systemic affecting both empirical and theoretical research. The system discourages multi-year painstaking analysis of biological systems to distill the kernels of truth out of the enormous complexity. Instead, it encourages hiding sloppy science in complexity and rushing to make flashy publications. The splashy publications in turn lead to more grants. Big grants engender even bigger grants. Rich labs are getting richer. This loop is self-reinforcing. Insisting on reproducibility is anathema to the pathological loop.
Kenneth Pimple • 2013-11-22 07:50 PM
I am glad to have read Dr. Bissell's piece as well as the responses. I am a humanist who studies research integrity and the responsible conduct of research, and issues of replication clearly fall in this domain. I have two questions, both of which may be hopelessly naive; if so, please forgive me. 1. I don't see how Dr. Bissell's second example is related to replication. At core the issue seems to be that the paper under review challenged accepted understanding; this being the case, the reviewers asked for additional proof. I should think this would be good practice - if one claims something outside the mainstream and the evidence is not airtight, one should expect to face skepticism and to be asked to make an adequate case. 2. I wonder how often replication, in a strict sense, is actually necessary. Is it always the case that the very specific steps and very specific outcomes must be replicated identically? I should think that in some instances the mechanism or phenomenon or model or underlying process (I don't know the best term) would be adequately suggestive, even if not definitive, to merit additional efforts along the same line. I would like to understand these things better, but I suppose my second question is trying to make a point: It isn't replication that matters; discovery and reliable knowledge matter. Replication is a good way (perhaps the best) to verify discovery, but surely there are often multiple ways to arrive at identical knowledge.
Irakli Loladze • 2013-11-22 10:38 AM
"A result that is not able to be independently reproduced ... using ... standard laboratory procedures (blinding, controls, validated reagents etc) is not a result. It is simply a 'scientific allegation'." Colin really nailed it here. Who would oppose the Reproducibility Initiative? Those that are bound to lose the most - the labs that mastered the grantsmanship, but failed to make their results reproducible. The current system does not penalize for publishing sexy but non-reproducible findings. In fact, such publications only boost the chances of getting another grant. It is about time to end this vicious cycle that benefits a few but hurts science at large.
Prashanth Nuggehalli Srinivas • 2013-11-22 09:34 AM
Very interesting. I see this from the perspective of a public health researcher. The problem of reproducibility understandably acquires more complexity with addition of human behaviours (and organisational/societal behaviours). The impossibilities of "controlling" the experimental conditions imposes many more restrictions on researchers and the results are quite evident; these sciences have more "explanatory hypotheses" for the change observed rather than the "mechanisms" in the way laboratory sciences see things. I am sure other systems like biological systems also constantly deal with such problems, where the micro-environmental conditions could have system-wide effects. I would have thought that this kind of laboratory research would be the one most amenable to such replication drives....perhaps not? Higher coordination between people at these laboratories certainly appears to be a good way of dealing with this problem.
[3] 2013-11-22 09:13
Perhaps we shouldn't emphasize reproducibility that much:

Reproducibility: The risks of the replication drive
Mina Bissell
20 November 2013
The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists, says Mina Bissell.

Every once in a while, one of my postdocs or students asks, in a grave voice, to speak to me privately. With terror in their eyes, they tell me that they have been unable to replicate one of my laboratory's previous experiments, no matter how hard they try. Replication is always a concern when dealing with systems as complex as the three-dimensional cell cultures routinely used in my lab. But with time and careful consideration of experimental conditions, they, and others, have always managed to replicate our previous data.


Articles in both the scientific and popular press [1-3] have addressed how frequently biologists are unable to repeat each other's experiments, even when using the same materials and methods. But I am concerned about the latest drive by some in biology to have results replicated by an independent, self-appointed entity that will charge for the service. The US National Institutes of Health is considering making validation routine for certain types of experiments, including the basic science that leads to clinical trials [4]. But who will evaluate the evaluators? The Reproducibility Initiative, for example, launched by the journal PLoS ONE with three other companies, asks scientists to submit their papers for replication by third parties, for a fee, with the results appearing in PLoS ONE. Nature has targeted [5] reproducibility by giving more space to methods sections and encouraging more transparency from authors, and has composed a checklist of necessary technical and statistical information. This should be applauded.

So why am I concerned? Isn't reproducibility the bedrock of the scientific process? Yes, up to a point. But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. In the past ten years, every paper published on which I have been senior author has taken between four and six years to complete, and at times much longer. People in my lab often need months, if not a year, to replicate some of the experiments we have done on the roles of the microenvironment and extracellular matrix in cancer, and that includes consulting with other lab members, as well as the original authors.

People trying to repeat others' research often do not have the time, funding or resources to gain the same expertise with the experimental protocol as the original authors, who were perhaps operating under a multi-year federal grant and aiming for a high-profile publication. If a researcher spends six months, say, trying to replicate such work and reports that it is irreproducible, that can deter other scientists from pursuing a promising line of research, jeopardize the original scientists' chances of obtaining funding to continue it themselves, and potentially damage their reputations.

Fair wind
Twenty years ago, a reproducibility movement would have been of less concern. Biologists were using relatively simple tools and materials, such as pre-made media and embryonic fibroblasts from chickens and mice. The techniques available were inexpensive and easy to learn, thus most experiments would have been fairly easy to double-check. But today, biologists use large data sets, engineered animals and complex culture models, especially for human cells, for which engineering new species is not an option.

Many scientists use epithelial cell lines that are exquisitely sensitive. The slightest shift in their microenvironment can alter the results, something a newcomer might not spot. It is common for even a seasoned scientist to struggle with cell lines and culture conditions, and unknowingly introduce changes that will make it seem that a study cannot be reproduced. Cells in culture are often immortal because they rapidly acquire epigenetic and genetic changes. As such cells divide, any alteration in the media or microenvironment, even if minuscule, can trigger further changes that skew results. Here are three examples from my own experience.


Figure: cells of the same human breast cell line from different sources respond differently to the same assay. (Jamie Inman/Bissell Lab)
My collaborator, Ole Petersen, a breast-cancer researcher at the University of Copenhagen, and I have spent much of our scientific careers learning how to maintain the functional differentiation of human and mouse mammary epithelial cells in culture. We have succeeded in cultivating human breast cell lines for more than 20 years, and when we use them in the three-dimensional assays that we developed [6,7], we do not observe functional drift. But our colleagues at biotech company Genentech in South San Francisco, California, brought to our attention that they could not reproduce the architecture of our cell colonies, and the same cells seemed to have drifted functionally. The collaborators had worked with us in my lab and knew the assays intimately. When we exchanged cells and gels, we saw that the problem was in the cells, procured from an external cell bank, and not the assays.

Another example arose when we submitted what we believe to be an exciting paper for publication on the role of glucose uptake in cancer progression. The reviewers objected to many of our conclusions and results because the published literature strongly predicted the prominence of other molecules and pathways in metabolic signalling. We then had to do many extra experiments to convince them that changes in media glucose levels, or whether the cells were in different contexts (shapes) when media were kept constant, drastically changed the nature of the metabolites produced and the pathways used [8].

A third example comes from a non-malignant human breast cell line that is now used by many for three-dimensional experiments. A collaborator noticed that her group could not reproduce its own data convincingly when using cells from a cell bank. She had obtained the original cells from another investigator. And they had been cultured under conditions in which they had drifted. Rather than despairing, the group analysed the reasons behind the differences and identified crucial changes in cell-cycle regulation in the drifted cells. This finding led to an exciting, new interpretation of the data that were subsequently published [9].

Repeat after me
The right thing to do as a replicator of someone else's findings is to consult the original authors thoughtfully. If e-mails and phone calls don't solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible.

When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers [3]. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.

It is true that, in some cases, no matter how meticulous one is, some papers do not hold up. But if the steps above are taken and the research still cannot be reproduced, then these non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them. But sooner or later, the paper should be withdrawn from the literature by its authors.

One last point: all journals should set aside a small space to publish short, peer-reviewed reports from groups that get together to collaboratively solve reproducibility problems, describing their trials and tribulations in detail. I suggest that we call this ISPA: the Initiative to Solve Problems Amicably.

Nature 503, 333-334 (21 November 2013) doi:10.1038/503333a
References

1. Naik, G. 'Scientists' Elusive Goal: Reproducing Study Results' The Wall Street Journal (2 December 2011); available at http://go.nature.com/aqopc3.
2. Nature Med. 18, 1443 (2012).
3. Begley, C. G. & Ellis, L. M. Nature 483, 531-533 (2012).
4. Wadman, M. Nature 500, 14-16 (2013).
5. Nature 496, 398 (2013).
6. Barcellos-Hoff, M. H., Aggeler, J., Ram, T. G. & Bissell, M. J. Development 105, 223-235 (1989).
7. Petersen, O. W., Rønnov-Jessen, L., Howlett, A. R. & Bissell, M. J. Proc. Natl Acad. Sci. USA 89, 9064-9068 (1992).
8. Onodera, Y., Nam, J.-M. & Bissell, M. J. J. Clin. Invest. (in the press).
9. Ordinario, E. et al. PLoS ONE 7, e51786 (2012).
Author information: Mina Bissell is Distinguished Scientist in the Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA.
Comments
Colin Begley • 2013-11-21 06:39 PM
Thanks Mina. I appreciate your comments, but do not share your views. First to clarify, in the study in which we reported the Amgen experience, on many occasions we did go back to the original laboratories and asked them to reproduce their own experiments. They were unable to do so in their own labs, with their own reagents, when experiments were performed blinded. This shocked me. I did not expect that to be the case. Second, the purpose of my research over the last decade has been to bring new treatments to patients. In that context 'miniscule' changes that can alter an experimental result are very troubling. A result that is not sufficiently robust that it can be independently reproduced will not provide the basis for an effective therapy in an outbred human population. A result that is not able to be independently reproduced, that cannot be translated to another lab using what most would regard as standard laboratory procedures (blinding, controls, validated reagents etc) is not a result. It is simply a 'scientific allegation'. C. Glenn Begley
Gaia Shamis • 2013-11-21 05:15 PM
Here's a great post about how we can try to fix the irreproducibility of scientific papers. We should all strive to "publish every important detail of your method and every control, either in the main text or in that wonderful Internet-age invention, the Supplementary Materials." http://www.myscizzle.com/blog/scientific-papers-contain-irreproducible-results-can/
A nonymous • 2013-11-21 08:49 AM
I would be a rich man if I had received a penny for every time I heard the expression "in our hands" at a scientific lecture during my (brief) scientific career in biochemistry (back in the 1990's). I have the impression that Mrs Bissell argues that we should not care too much about making sure that published results can be reproduced because "that could be bad for the business." It does not answer the basic question: how interesting is a result that can be obtained only by a particular researcher in a particular lab ? I disagree completely that "the push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists." I believe that the opposite is true. I quit scientific research while doing my first post-doc, in great part because, after one year, I could not reproduce most of the (published!) results of the previous scientist who had worked on the project before me in the same lab (and who had then gone elsewhere). These results were the whole basis of my project. I have no doubt that if I had tried, say, 10 or 20 more times, then I would have obtained the desired result at least once. But how good would that kind of science have been ? If your experiments cannot be reproduced, no matter how meticulous you were, then they're useless to the scientific community because nothing can be built on non-reproducible results. Except a career, for the person who obtained them, of course. Scientists should be encouraged to report and publish when they fail to replicate other's experiments. That will make science (but maybe not scientific careers) progress much faster.
Nitin Gandhi • 2013-11-21 03:14 AM
There are reports that not more than 5% of papers failed when people tried to replicate them. The very fact that we have to take the issue of replication so seriously and spend lots of time (and money) on it, in these hard times, itself speaks out loudly that things are very wrong in biological research. We have two options: one is, as the author (indirectly) indicates, to sweep the dirt under the carpet; the second is to go for the head-on collision and face reality. I personally believe that taking the second option will eventually be inevitable, so why not do it NOW?
Anita Bandrowski • 2013-11-21 01:34 AM
Thank you William, that is a rather amicable description of the Reproducibility Initiative and I salute you for spearheading this. Robustness of effect is a very important issue when trying to take science to the clinic or even an undergraduate lab. The article mentions a point about large data sets that I would like to follow up on. The author states that "But today, biologists use large data sets, engineered animals and complex culture models...". The fact that a data set is large should not preclude someone from reproducing it. Yes, there is going to be a different set of expertise required to know what the numbers mean, but this should not significantly change the way that data are interpreted. In a paper we published last year (Cachat et al., 2012), we looked at a single data set deposited in the GEO database. The authors' data were included in their supplementary materials and brought into a database called the drug-related gene database (DRG) along with their interpretation as to which genes were significantly changed in expression. An independent group from the University of British Columbia, with a tool called Gemma, took the same data set and ran it through their pipeline along with thousands of other data sets. After alignment steps and several difficulties described in detail in the paper, we found the following: "From the original sets of comparisons, we selected a set of 1370 results in DRG that were stated to be differentially expressed as a function of chronic or acute cocaine. Of these 617 were confirmed by the analysis done in Gemma. Thus, only half of the original assertions were confirmed by the reanalysis." Note, there is no biological difference between these two data sets and statistically speaking we would expect ~5% misalignment, not 50%. I really can't see that any scientist can argue that not being able to reproduce a finding, especially once you have just a pile of numbers, is a good way to do science. We have started the Resource Identification Initiative to help track data sets, analysis pipelines and software tools to make methods more transparent, and I salute Nature, and many other journals that are starting to ask for more from authors. If anyone here would like to join the efforts please visit our page on the Force11 website where the group is coordinating efforts with publishers to put in place a consistent set of standards across all major publishers.
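For readers outside bioinformatics, the core of the comparison described above reduces to a set intersection: what fraction of the original pipeline's differential-expression calls does the independent reanalysis confirm? Here is a minimal sketch; the gene names and list sizes are invented placeholders, not the actual DRG or Gemma data.

    # Confirmation rate between two analysis pipelines (hypothetical gene lists).
    original_calls = {"Fosb", "Arc", "Bdnf", "Homer1", "Egr1", "Npas4"}  # pipeline A
    reanalysis_calls = {"Fosb", "Arc", "Egr1"}                           # pipeline B

    # Fraction of the original differential-expression calls confirmed on reanalysis.
    confirmed = original_calls & reanalysis_calls
    rate = len(confirmed) / len(original_calls)
    print(f"confirmed {len(confirmed)}/{len(original_calls)} ({rate:.0%})")

    # In the case cited above, 617 of 1370 calls (~45%) were confirmed,
    # far below the ~95% agreement expected if the two pipelines differed
    # only by a 5% error rate.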
William Gunn • 2013-11-20 10:21 PM
Thanks for this thoughtful post, Mina. Nature and PLOS, as well as the Reproducibility Initiative, of which I'm co-director, are all worthy efforts. Let me share some preliminary information about the selection process we went through. We searched both Scopus and Web of Science for papers matching a range of cancer biology terms. For each of 2012, 2011, and 2010, we then ranked those lists by the number of citations and picked the top 16 or 17 from each year. As you might expect, many of the results were reviews, so we excluded those, as well as clinical trials. We also excluded papers which simply reported the sequencing of a new species. Our selection criteria also specified exclusion of papers using novel techniques requiring specialized skills or training, such as you refer to in your post. However, we didn't encounter very many of those among the most highly cited papers from the past three years. If I recall, there was only one where the Science Exchange network didn't have a professional lab which could perform the technique. So it may well be true that some papers are hard to replicate because the assays are novel, but this is not the majority of even high-impact papers. Two other points: 1) Each experiment is being done by an independent professional lab which specializes in that technique, so if it doesn't replicate in their hands, in consultation with the primary authors, then it's not likely any other lab will be able to get it to work either. The full protocols for carrying out the experiments will be shared with the primary authors before the work is started, allowing them to suggest any modifications or improvements. The amended protocols, as well as the generated data, will be openly available on the Open Science Framework, so any interested party can see the protocol and data for themselves. At a minimum, this process will add value to the existing publication by clarifying steps that may have been unclear in the original paper. 2) It would be good if the replications could be uncovered by other labs working in the same area, but that's not what happens in practice. In fact, in a 2011 paper in Nature Reviews Drug Discovery http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html , Prinz et al found that whether or not Bayer could validate a target in-house had nothing to do with how many preclinical papers were published on the topic, the impact factor of the journals those studies were in, or the number of independent groups working on it. In the Bayer studies, most of the ones that did replicate, however, showed robustness to minor variations, whereas even 1:1 replications showed inconsistencies with ones that didn't. As far as Amgen, they often did contact the original labs, and found irreproducibilities with the same researcher, same reagents, in the same lab. We will be working closely with the authors of the papers we're replicating as the work is being conducted and feedback so far has been positive, you might almost say amicable. In the end, this is the effort of two scientists to make science work better for everyone. The worst that could happen is that we learn a lot about what level of reproducibility to expect and how to reliably build on a published finding. At best, funders will start tacking a few percent on to grants for replication and publishers will start asking for it. That can only be good for science as a whole.
Cell Press • 2013-11-20 09:54 PM
I couldn't agree more. See my blog at: http://news.cell.com/cellreports/cell-reports/in-defense-of-science