
Notebook: peer-review advice

(2018-02-26 13:21:21)
Notebook: http://210.75.240.149/blog-847277-740998.html


Reflections of an editor and editorial board member of English-language journals: ten commandments, "charcoal in the snow" versus "flowers on brocade," and the rejected manuscript behind a 2.9-billion-dollar product***

7,647 reads | 2013-11-12 03:43 | Personal category: English writing (group host) | System category: Paper discussion | Keywords: English


Prologue

Bao Haifei's line, "The literature is forged from the heart's blood of those who came before us; the literature is a secret manual of martial arts!" (Reference #1), struck a chord with me. I would add one point: a good piece of the literature is a work of art, created by the authors and the editors (editors plus reviewers) working together.


Purpose

I serve as an editor or editorial board member for several international English-language journals, with especially long service on the board of a journal more than 140 years old (founded in 1869), and I know a number of senior scholars among its editors and board members. The professionalism of these senior scholar-reviewers has left a deep mark on me: broad and deep learning; a truth-seeking scholarly attitude; clear, sharp, penetrating insight; a mindset attentive to detail; and a scientific bearing that is sincere, considerate, kind, generous, and devoted to excellence. Many scholars have had similar experiences, so the purpose of this article is to toss out a brick to attract jade and draw out better insights from wiser people.


Experience

The fate of your manuscript often hangs in the hands of one to three reviewers. If you are lucky and draw a conscientious reviewer, you get constructive criticism that helps you revise. If you are not, you get a bruising rejection letter: frost on top of snow. Some reviewers pan and reject your manuscript to gain an advantage, so that they can rush their own work into print first.

My own manuscripts have drawn reviewer comments such as "reads as if written by a high-school student," and "I have tried this. Impossible! I do not believe this research or these experiments. Reject."

I had no idea what they meant; the remarks were generalities detached from the details of the paper. I doubted these reviewers: do they even know how to define a high-school student's writing level? Perhaps some high-school students write far better than these reviewers do.

I was angry. What do you mean, "I tried" and "impossible"? He had not read the details; every detail in the paper was supported by evidence. He was stereotyping, arguing from personal experience: he had tried, but he disclosed none of his reference articles, so nobody knew what he was actually talking about. He offered no objective, paper-specific points, only subjective shrieking! Faced with review comments like that, I had no idea how to revise the manuscript.

Fortunately, no single reviewer decides a manuscript's life or death. There are scientific criteria of judgment.

First, the originality and potential of the research topic: are the experiments truly repeatable? Can different laboratories reproduce and verify them rigorously?

Later we published a series of articles, cited more than three thousand times by different laboratories and regarded as among the founding papers of the model system in that field.

Second, can the research results really be applied?

Years later, a company developed my related research into practical use in production, generating real economic value: a 2.9-billion-US-dollar deal.

Later the reviewer owned up: he was an old Nobel laureate. He even joked, "Not only does he write like a high-school student; he looks like one too."

I did not know whether to laugh or cry. I have never made fun of the hard work behind a manuscript like that.

When I was an apprentice at MIT and Harvard, I was grateful to my first research mentor for giving me the chance to review other people's manuscripts. I spent a great deal of time reading carefully, line by line and word by word, reading the related literature and digesting it thoroughly, and only then writing my review. I was proud to see my mentor adopt my comments.

To this day I still review the way I did as an apprentice: reading respectfully, looking for the truth in the fine details, and trying to write my comments in constructive, specific, considerate language.

Bill Gates said that life is not fair: people are born with unequal resources. For example, if English is not your mother tongue, writing in English is an obstacle. Even if you are a native English speaker, you may not be good at writing. Even if you are good at English writing, you may not be able to craft a logical scientific story that holds the reader's attention. (I once saw a scientific article edited by an old Pulitzer Prize winner that made no sense at all.)

Many of my colleagues decline to serve as editors or board members because it takes so much of your time (Ref. #2, #3). Sometimes I can hardly bear to imagine a colleague reading one of my own badly written manuscript drafts produced under time pressure.

Sometimes, reviewing, you can feel the author writing with soul and heart. It touches your own heart and forces a search of your scientific soul. It spurs you on.

Great authors:

"What really knocks me out is a book that, when you're all done reading it, you wish the author that wrote it was a terrific friend of yours and you could call him up on the phone whenever you felt like it. That doesn't happen much, though."(Jerome David Salinger “TheCatcher in the Rye” 1951 novel)

A good scientific manuscript should have clear logic. If the logic is there, the reader can follow the sequence of your data sets (figures and tables) while reading your text. When I pick up a manuscript, I first scan the figures and tables: truth shows in the fine details. The details must withstand scrutiny, and the synthesis must have real height above them (no exaggeration, no big hat on a small head).


The Ten Commandments

Over the years I have collected many guidelines on peer review. The "Ten Commandments" of Editor-in-Chief David Drubin (as God gave Moses ten commandments on Mount Sinai) are complete and balanced. I post them here to remind myself, as follows:

First Commandment: review without bias and objectively; let the data speak.

Second Commandment: review manuscripts promptly (finish your review within the time requested).

Third Commandment: know your role (as a reviewer-critic, you are an expert consultant to the monitoring editor. Your job is to assess rigor and originality, and the clarity of the science and the writing. Based on the comments of two or three reviewers, the monitoring editor then decides whether the manuscript should be accepted, revised by the authors, or rejected.)

Fourth Commandment: acknowledge the authors' effort; identify the strengths of the work as well as the manuscript's problems (begin the review with a positive statement, try not to hurt the authors' feelings, and respectfully acknowledge what the authors have tried to accomplish).

Fifth Commandment: offer constructive comments (how to communicate the results more clearly).

Sixth Commandment: be judicious when suggesting additional work (do not propose extra studies and experiments that are unnecessary for supporting the study's main conclusions; alternatively, recommend acceptance if enough prior literature is provided to justify the authors' conclusions).

Seventh Commandment: leave it to posterity to judge a manuscript's impact (a manuscript's future can rarely be predicted; the reviewer should focus on "Is it new? Is it true?").

Eighth Commandment: work to create a positive peer-review feedback loop in your field (remember that good begets good and evil begets evil: someone whose manuscript has received an unfair review is more likely to treat others the same way. So if you want your own papers reviewed fairly, follow the Golden Rule: do not do to others what you would not want done to yourself.)

Ninth Commandment: remember, it is not your paper (in reviewing a manuscript, your job is to make it more rigorous, complete, and clearly presented. On whether the manuscript meets the journal's quality standards, and on how the material is presented and interpreted, the authors should have the final say. It is their paper, not yours.)

Tenth Commandment: strive to set a good example

(Reviewing with your students and postdocs provides a good teaching opportunity. Be aware, though, that young students and postdoctoral scientists can be a little too eager to prove their ability by finding a manuscript's faults rather than its strengths. Remember: if one of your students reviews a manuscript, it is up to you to make sure the comments accurately reflect your opinion, because it is you who submits the review.)


Self-encouragement

"We do live hard lives, bearing all kinds of external pressure while facing the confusion inside ourselves. In the bitter struggle, if someone casts you an understanding glance, you feel a warmth toward life; even a brief glance is enough to lift my spirits." (Jerome David Salinger, The Catcher in the Rye, 1951 novel)

"Any jackass can kick down a barn, but it takes a good carpenter to build one." (Sam Rayburn, US Congressman and Speaker of the House)

In the same way, any fool can throw a manuscript into the trash, but it takes good scholars, authors and reviewers together, to build a good paper wholeheartedly.

Good begets good and evil begets evil: someone whose manuscript has received an unfair review is more likely to treat others the same way. "Do not judge; so that you won't be judged. For with the judgment you use, you will be judged, and with the measure you use, it will be measured to you." (Matthew 7:1-2)

"從個人來講,一個人內在的力量最為強大,隻由心而發的熱愛,才能激發自己的想象力和創造力。如果內心沒有願望,那麽無論外界的刺激有多大,都很難取得成就,不要迎合社會,要摒棄功名利祿,遵從自己內心的想法" (崔琦--美籍華人物理學家,1998諾貝爾物理學獎獲得者)學術人的傳統美德講骨氣,講傲氣,講麵子,講名聲,總而言之,都講虛的。大凡虛的東西,上去了就下不來,下來了那都是沒辦法的辦法。一定要尊重自己內心的選擇,凡事沒有對錯和好壞,皆因立場和心態之不同 (文雙) (Ref. #8)

Bill Gates said that life is not fair and that people are born with unequal resources (Ref. #4). Understand that an inch has its strengths and a foot has its shortcomings: everyone excels somewhere. Students should be encouraged to find their own strengths and play to them. Hold up a patch of sky for the geniuses in your organization: "Innovation arises bottom-up, from self-organizing teams of talented individuals" (Harvard research finds that innovation is a grassroots effort, not the result of authoritative experts leading from the front; cited from: http://blog.sciencenet.cn/blog-847277-722667.html).

Understanding the individual perspectives of different authors (Ref. #5, Ref. #6, #7, #8) may help authors and reviewers build a good rapport with each other (Ref. #6). With a badly written manuscript, you know to deliver charcoal in the snow: help when it is most needed. With a beautifully written manuscript, you know you are adding flowers to brocade.


***Endnote

With this article I offer heartfelt thanks and respect to the volunteers of ScienceNet (authors, readers, editors)! Its purpose is to toss out a brick to attract jade and draw out better insights from wiser people. I have been a member of ScienceNet for only a few months, and I realize I cannot cover every article on this topic. If you find any relevant article, I hope you will add it to my reference list; I will then revise my own blog post to incorporate your ideas and references.

"Where things are done by human effort, the sweetness and bitterness are known to oneself; where words voice the heart, they resonate with everyone; where thoughts come from the depths, one rejoices to meet a kindred spirit. We look forward to your sharing the insight and worldly wisdom of this journey, its gains and losses, honors and slights. Let us work together to build a rational, constructive platform for the scientific community and construct a new scientific life and society." (Li Xiao, Executive Deputy Editor-in-Chief of ScienceNet)

ScienceNet is broad and deep, full of crouching tigers and hidden dragons. My aim in joining is to learn from others. I have learned a great deal from reading other people's blog posts, and that prompted me to contribute my own. Since ScienceNet is a volunteer platform, each of us should try to contribute posts of our own: helping others ultimately helps ourselves.


 

Reproducibility: The risks of the replication drive

The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists, says Mina Bissell.



 

************************************************************************

Photo description: (the Salinger passage quoted above, The Catcher in the Rye, 1951)



 

Scholars care greatly about the sources of your insights and your information. You must tell your reader where the raw material came from! Your facts are only as good as the place where you got them, and your conclusions only as good as the facts they're based on.

Reference #1: Bao Haifei, "Why keep staring at the literature?" 16,870 reads, 2013-10-20 20:15 | Personal category: Random thoughts | System category: Commentary. Cited from: http://blog.sciencenet.cn/blog-278905-734681.html

Ref. #2: Meng Jin, "Candid words on reviewing for SCI journals" (featured). 7,688 reads, 2013-10-26 06:33. Cited from: http://blog.sciencenet.cn/blog-4699-736163.html

Ref. #3: Zeng Yongchun, "Review comments received one autumn." 12,497 reads, 2013-10-18 22:21. Cited from: http://blog.sciencenet.cn/blog-531950-734202.html

Ref. #4: Harvard research finds that innovation is a grassroots effort, not the result of authoritative experts leading from the front. Cited from: http://blog.sciencenet.cn/blog-847277-722667.html

Ref. #5: Wu Yishan's "three elements," Xu Xiao's "three elements," and "qingyi yutou." Cited from: http://blog.sciencenet.cn/blog-847277-728976.html

Ref. #6: Zhao Bin, "Easily overlooked but important matters in scientific writing" (featured). Cited from: http://blog.sciencenet.cn/blog-502444-736825.html

Zhao Bin (He puts it thoroughly, and better than I could, so I quote him here; I consider quotation the greatest respect one can pay an author!)

"In revising a paper, treat the reviewers' comments with neither arrogance nor servility. First, we should respect the reviewers. SCI reviewing is almost always unpaid; that someone settles down to read, seriously, a paper of yours that may not even be very good is in itself worthy of our respect.

It is no exaggeration to say that before your article is published, and even after, few people will ever read and analyze it as carefully as the reviewers do. Against that background we should, first of all, be grateful, and think through and absorb every comment they raise, even the near-harsh ones. That is the premise, and that is what I mean by no arrogance. Most people can manage this much; after all, the reviewers do hold partial power over your article's life and death.

For many students, however, showing no servility toward review comments seems harder. Because the editor's letters always take the reviewers' side, they dare not cross any reviewer comment; even misunderstandings and mistaken suggestions get accommodated by every possible means. Obviously that attitude is wrong too.

SCI journals use peer review; that is, the submitter and the reviewers are peers, in most cases of comparable standing. The common Chinese rendering 'expert review' is misleading! If authors have concentrated on a relatively new area and dug into it for three years, they should themselves be experts in it, understanding their own niche at least somewhat better than others; the authors are the real experts here. That too is a premise.

It follows that in revising and replying to the reviewers, the tone should be that of peers, of experts discussing a problem, not of a subordinate meekly answering the boss. Confidence matters! On a foundation of gratitude, discuss the problems with the reviewers calmly and evenly: that is the attitude we should take toward review comments." (Ref. #6) (also Ref. #7)

 

Ref. #7: Li Dongfeng, "A reviewer's sense of responsibility."

79 reads, 2013-11-2 07:05 | System category: Commentary | Keywords: reviewing, responsibility

Manuscript review is a key link in academic quality control. Reviewers guard the journal's quality and carry an important responsibility toward the authors' work.

A manuscript deserves to be taken seriously. Whether you accept or reject, you should offer pertinent comments or suggestions; that is minimal respect for the authors' labor.

Compare journals at home and abroad and the gap is obvious. Submit abroad and the reviewers' comments are specific and clear, often running to several pages, with detailed opinions on everything from the science and its rigor to the wording. Some domestic reviewers, by contrast, judge by how familiar they are with the content, by the authors' fame, or by personal likes and dislikes. This has given some journals a bias toward favored sources of manuscripts, even turning them into "private" journals. Rejections often come as two hasty sentences, vague and sweeping, as if the reviewer could not be bothered to argue with you. To a degree this reflects the reviewer's disregard for the manuscript and disdain for the submitter. Some big names plead busyness, drag things out, and then review perfunctorily. Reviewers like that are not qualified!

Reject with caution, and with concrete reasons. Point out the article's problems so the authors can improve. A good review wins the authors over completely; a bad review makes the authors look down on you and even doubt the journal's credibility.

Of course, submitters must be serious too, and not casually send out a manuscript that has not been thought through. That is irresponsible both to oneself and to the journal.
Cited from: http://blog.sciencenet.cn/blog-729911-738210.html

 

Ref. #8: Wen Shuangchun, "The hidden trick of submitting to top journals: the hotter, the easier to get in" (featured)

7,296 reads, 2013-8-10 17:55 | Personal category: Serious business | System category: Blog news | Keywords: weather, papers

Every author who has submitted a paper knows the feeling: whether a manuscript gets in depends not only on its intrinsic level and quality; an element of external luck cannot be denied. For example, which editor handles a manuscript, and above all which reviewer(s) it finally lands with, largely seals its fate. Besides, weddings, housewarmings, grand openings, and completion ceremonies all pick auspicious days. After researchers toil to produce results and write them up, they naturally hope to publish in a top journal, so is there really no auspicious day for submitting to one? (Old Wen is a particular believer in "auspicious days" since his wish-fulfilling trip to Mount Heng a few days ago!)

Auspicious days may be too random, so ask the question over a longer horizon: most human productive activity shows clear periodicity or fluctuation. Farming is seasonal; commerce has slow and busy seasons. Submitting to a journal amounts to marketing one's own product; does this product's market also have slow and busy seasons?

In theory, the "market" for research papers should have nothing to do with when they are "marketed"; there should be no off-season. On the one hand, journals, top journals especially, are open 365 days a year, buying every day, cheating no one, young or old. On the other, researchers lead the dullest, most monotonous work lives: regardless of season, through bitter winter and scorching summer, they do research every day and write papers every month. But as the saying goes, better to arrive at the right moment than to arrive early; submission is a kind of door-to-door selling, so it too should have its question of arriving early versus arriving at the right moment.

Researchers abroad have studied submissions and acceptance rates at some top journals in psychology, physics, chemistry, and other disciplines, and found that submission and acceptance do have their "moment": there is a so-called seasonal bias. The discoverers went on to offer researchers submission tips, such as:

Write when hot - submit when not.

Write when you can and submit when you are ready.

You should write when you like, but submit in July.

Most studies of this kind show that researchers' submissions resemble market days in the Chinese countryside (in Hunan mostly called gan xu), with the busiest markets in high summer (submissions peak during the summer months). Studies of two physics journals, Europhysics Letters and Physical Review Letters, show that submissions to both peak in July. (There is considerable seasonal variation in submissions within each year, with July being typically the month of most submissions.)

Of course, what matters most is getting accepted. A study of Europhysics Letters last year showed that the month with that journal's highest acceptance rate is the same as its peak submission month: July! Recently, encouraged by American Physical Society Editor-in-Chief Gene Sprouse, Manolis Antonoyiannakis, a senior assistant editor at Physical Review Letters, examined seasonal bias in that journal's acceptance rates. A statistical analysis of 190,106 manuscripts submitted over the 273 months from January 1990 to September 2012 showed that, of the twelve months of the year, August has the highest monthly average acceptance rate, followed by July. In short, the hottest months see not only the most submissions but also the highest acceptance rates!

Seen this way, researchers crowding their submissions into high summer, consciously or not, is no coincidence; there is a hidden trick to it. Top international journals are occupied mainly by foreigners, and it now appears this is mainly because they grasped the submission tricks before we did, or grasped more of them. Old Wen himself only awoke to this trick after paying respects to the bodhisattva at Mount Heng; better late than never, though he muddled along for years. I earnestly hope our country will soon fund a team dedicated to studying submission tricks!

Still, after studying these tricks carefully, Old Wen found that although acceptance rates at some top journals are indeed somewhat seasonal, like fruit and vegetables in the field, the seasonal bias is tiny. As the Physical Review Letters editor put it, "No statistically significant variations were found in the monthly acceptance rates." In other words, statistically speaking, submission timing has almost no effect on the acceptance rate. (A minimal sketch of this kind of monthly-rate check follows this excerpt.)

In any case, in this once-in-decades sweltering summer, letting comrades still battling on the research front in on this trick has real practical significance for soothing souls, calming moods, and rallying morale. Do top journals have hidden acceptance tricks or not? At a time like this, better to believe they do than to believe they don't!

References

Manolis Antonoyiannakis, Acceptance rates in Physical Review Letters: No seasonal bias, arXiv:1308.1552 [physics.soc-ph], and references therein.

[Figure: monthly average acceptance rate of papers in PRL, 1990-2012; highest in August]


Cited from: http://bbs.sciencenet.cn/blog-412323-715891.html
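The monthly-rate check quoted above is straightforward to reproduce in outline. The following is a minimal illustrative sketch, not PRL's actual analysis: it fabricates placeholder submission records (month, accepted) with an assumed uniform 25% baseline acceptance rate, tabulates the monthly average acceptance rates, and applies a simple chi-square test of homogeneity across the twelve months, the kind of "no statistically significant variation" check the editor describes.

# Minimal sketch in Python; fabricated placeholder data, not PRL records.
import random
from collections import defaultdict

random.seed(0)
# Each record: (submission month 1-12, accepted?). The 25% base rate is assumed.
records = [(random.randint(1, 12), random.random() < 0.25) for _ in range(10_000)]

submitted = defaultdict(int)
accepted = defaultdict(int)
for month, ok in records:
    submitted[month] += 1
    accepted[month] += ok  # True counts as 1

for m in range(1, 13):
    rate = accepted[m] / submitted[m]
    print(f"month {m:2d}: {submitted[m]:4d} submitted, acceptance rate {rate:.3f}")

# Chi-square test of homogeneity: compare each month's accepted count with the
# count expected under the pooled rate p; 11 degrees of freedom across 12 months.
p = sum(accepted.values()) / sum(submitted.values())
chi2 = sum((accepted[m] - submitted[m] * p) ** 2 / (submitted[m] * p * (1 - p))
           for m in range(1, 13))
print(f"chi-square = {chi2:.2f} on 11 df (5% critical value is about 19.68)")

With uniformly generated placeholder data the statistic should normally fall well below the critical value, matching the quoted conclusion that monthly variation of this size is not statistically significant; a genuine seasonal bias worth reporting would push it far above.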

 

 

 


 






Comments (8)

[8]???  2014-10-26 07:14
Peer review: Opinions - inspirational
[6]???  2014-2-5 06:32
More came as below:
http://www.nature.com/news/reproducibility-the-risks-of-the-replication-drive-1.14184

Noam Harel • 2013-12-27 04:53 AM
The tone of this commentary and its title both seem to lobby against the growing communal push for replicating basic scientific results. However, Dr. Bissell's own examples pretty clearly support the exact reasons why the 'Replication Drive' has been growing in the first place - to be trusted, scientific results need to be reproducible; when there are discrepancies, scientists should work together to determine why. This is good for science. The bottom line? Everything has a cost. The societal cost of devoting resources toward reproducing scientific results is far outweighed by the benefits. Conversely, the societal cost of publishing papers based on the one or two times an experiment "worked" (ie got the desired result), while ignoring the 10 or more times the experiment "didn't work" has for too long been a dirty, open secret among scientists. It's great to see this issue subjected to a more public debate at long last.
Elizabeth Iorns • 2013-12-21 09:36 PM
Andrew Gelman, Director of the Applied Statistics Center at Columbia University's response to this is available here: http://andrewgelman.com/2013/12/17/replication-backlash/
John J. Pippin • 2013-12-08 07:22 PM
So replication threatens the cachet of irreproducible experiments? For those removed from labs, the more important issue is the robustness and reliability of basic science experiments, not the ability to get a desired result at one point but not ever again. Basic science research is not (to the rest of us) about whether you can publish and thereby keep the grants coming. It's about getting to the truth, and hopefully about translating that truth to something useful rather than arcane. Dr. Bissell's opinion piece beggars the ability of laboratory science to stand on merit, and asks for permission either to be wrong or at least irrelevant. That's not science.
Etienne Burdet • 2013-11-28 01:50 PM
I tried to replicate the LHC experiments and failed. This is a proof that the Higgs boson is not science. Can I have my Nobel please ?
Yishai Shimoni • 2013-11-25 05:54 PM
I think there are a few points that were ignored thus far in the comments: 1. In principle it should not be obligatory for an experiment to be reproducible. It is useful to report on surprising results that cannot be explained. However, in such a case the reasons should be clarified, and until then no real conclusion can be drawn from the results. The results may be stochastic, they may depend on a specific overlooked detail, or they may be an artifact. Until the scientific community understands the conditions under which a result is reproducible or to what extent it is reproducible, it is not useful and should be regarded as a possible artifact. 2. Requiring anyone who wants to reproduce a result to contact the authors is not practical, especially if it is an important result. What if there are a few hundred labs who want to use the result? what if the researcher who performed the experiment changed position? changed fields? or even passed away? 3. The suggestion that anyone who wants to reproduce the results should contact the authors may very easily lead to knowledge-hoarding. It is a natural tendency to want to become a world authority on the specific technique, especially after spending years to arrive at the result. Unfortunately, this may mean holding back on some specific essential detail, and only by working with the author and adding them as co-authors is it possible to publish anything on it.
Replication Political Science • 2013-11-24 11:44 AM
I do agree with many of the points about being careful and considerate when replicating published work. But let's talk about the newcomers a bit more. First of all, I'm not sure I like the word "newcomer". Although this might be a misinterpretation, it sounds as if those trying to replicate work are the "juniors" who are not quite sure of what they are doing, while the "seniors" worked for years on a topic and deserve special protection against reputational damage. It goes without saying that anyone trying to replicate work should try to cooperate with the original authors. I agree. However, I would like to point out that original authors don't always show the willingness or capacity to invest time into helping someone else reproduce the results. As Bissell says herself, experiments can take years - and once the paper is published and someone decides to try to replicate it, the authors might already work on a new, time-intensive topic. My students in the replication workshop were sometimes frustrated when original authors were not available to help with the replication. So I'd say, let's not just postulate that "newcomers" should try to cooperate, but that original authors should make time to help as well to produce the best possible outcome when validating work. It is in the interest of original authors to clearly report the experimental conditions, so that others are not thrown off track due to tiny differences. This goes for replication based on re-analysing data as well as for experiments. The responsibility of paying attention to details lies not only with those trying to replicate work. From my experience and that of my students trying to replicate work in the social sciences, papers do not always list all steps that led to the findings. My students are often surprised at how authors came up with a variable, recoded it, which model specifications they used etc. Sometimes STATA or R code is incomplete. Therefore, while those replicating work should try to be aware of such details, original authors need to work on this as well. Original research might take years, but it really should not take years to replicate it, just because not all information was given. Bissell states that replicators should bear the costs of visiting research labs and cooperating with the original authors ("Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible."). I'm not quite sure of this. As Bissell points out herself at the beginning of her article, it is often students who replicate work. Will their lab pay for expensive research trips? Will they get travel grants for a replication? And for the social sciences that often work without a lab structure, who will pay for the replication costs? I feel the statement that a replicator must bear all costs - even though original authors profit from cooperation as well - can be off-putting for many students engaging in replication.
Irakli Loladze • 2013-11-24 04:18 PM
Why would some 'senior' resist replication initiative? The federal investment in R&D is over $140 billion annually. Almost a half of it goes to NIH, NASA, NSF, DOE, and USDA. A huge chunk of it is given away on the basis of grant proposals. For every grant that a scientist wins, about a half of it goes to her university as an overhead cost. So deans and provosts salivate over every potential grant and promote those scientists that win more grants, not those that pursue truth. The reproducibility of research is the last thing on their minds, if it is on their minds at all. The system successfully turns scientists from being truth seekers to becoming experts in securing external funding. The two paths do not have to be mutually exclusive but often and increasingly they are conflicting. As the grantsmanship sucks in more and more skills and time, the less time is devoted to genuine science. The disease is systemic affecting both empirical and theoretical research. The system discourages multi-year painstaking analysis of biological systems to distill the kernels of truth out of the enormous complexity. Instead, it encourages hiding sloppy science in complexity and rushing to make flashy publications. The splashy publications in turn lead to more grants. Big grants engender even bigger grants. Rich labs are getting richer. This loop is self-reinforcing. Insisting on reproducibility is anathema to the pathological loop.
Kenneth Pimple • 2013-11-22 07:50 PM
I am glad to have read Dr. Bissell's piece as well as the responses. I am a humanist who studies research integrity and the responsible conduct of research, and issues of replication clearly fall in this domain. I have two questions, both of which may be hopelessly naive; if so, please forgive me. 1. I don't see how Dr. Bissell's second example is related to replication. At core the issue seems to be that the paper under review challenged accepted understanding; this being the case, the reviewers asked for additional proof. I should think this would be good practice - if one claims something outside the mainstream and the evidence is not airtight, one should expect to face skepticism and to be asked to make an adequate case. 2. I wonder how often replication, in a strict sense, is actually necessary. Is it always the case that the very specific steps and very specific outcomes must be replicated identically? I should think that in some instances the mechanism or phenomenon or model or underlying process (I don't know the best term) would be adequately suggestive, even if not definitive, to merit additional efforts along the same line. I would like to understand these things better, but I suppose my second question is trying to make a point: It isn't replication that matters; discovery and reliable knowledge matter. Replication is a good way (perhaps the best) to verify discovery, but surely there are often multiple ways to arrive at identical knowledge.
Irakli Loladze • 2013-11-22 10:38 AM
"A result that is not able to be independently reproduced ... using ... standard laboratory procedures (blinding, controls, validated reagents etc) is not a result. It is simply a 'scientific allegation'." Colin really nailed it here. Who would oppose the Reproducibility Initiative? Those that are bound to lose the most - the labs that mastered the grantsmanship, but failed to make their results reproducible. The current system does not penalize for publishing sexy but non-reproducible findings. In fact, such publications only boost the chances of getting another grant. It is about time to end this vicious cycle that benefits a few but hurts science at large.
Prashanth Nuggehalli Srinivas • 2013-11-22 09:34 AM
Very interesting. I see this from the perspective of a public health researcher. The problem of reproducibility understandably acquires more complexity with addition of human behaviours (and organisational/societal behaviours). The impossibilities of "controlling" the experimental conditions imposes many more restrictions on researchers and the results are quite evident; these sciences have more "explanatory hypotheses" for the change observed rather than the "mechanisms" in the way laboratory sciences see things. I am sure other systems like biological systems also constantly deal with such problems, where the micro-environmental conditions could have system-wide effects. I would have thought that this kind of laboratory research would be the one most amenable to such replication drives....perhaps not? Higher coordination between people at these laboratories certainly appears to be a good way of dealing with this problem.
[3]???  2013-11-22 09:13
Perhaps we shouldn't put quite so much emphasis on reproducibility:

Reproducibility: The risks of the replication drive
Mina Bissell
20 November 2013

The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists, says Mina Bissell.

Every once in a while, one of my postdocs or students asks, in a grave voice, to speak to me privately. With terror in their eyes, they tell me that they have been unable to replicate one of my laboratory's previous experiments, no matter how hard they try. Replication is always a concern when dealing with systems as complex as the three-dimensional cell cultures routinely used in my lab. But with time and careful consideration of experimental conditions, they, and others, have always managed to replicate our previous data.


Articles in both the scientific and popular press [1-3] have addressed how frequently biologists are unable to repeat each other's experiments, even when using the same materials and methods. But I am concerned about the latest drive by some in biology to have results replicated by an independent, self-appointed entity that will charge for the service. The US National Institutes of Health is considering making validation routine for certain types of experiments, including the basic science that leads to clinical trials [4]. But who will evaluate the evaluators? The Reproducibility Initiative, for example, launched by the journal PLoS ONE with three other companies, asks scientists to submit their papers for replication by third parties, for a fee, with the results appearing in PLoS ONE. Nature has targeted [5] reproducibility by giving more space to methods sections and encouraging more transparency from authors, and has composed a checklist of necessary technical and statistical information. This should be applauded.

So why am I concerned? Isn't reproducibility the bedrock of the scientific process? Yes, up to a point. But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. In the past ten years, every paper published on which I have been senior author has taken between four and six years to complete, and at times much longer. People in my lab often need months, if not a year, to replicate some of the experiments we have done on the roles of the microenvironment and extracellular matrix in cancer, and that includes consulting with other lab members, as well as the original authors.

People trying to repeat others' research often do not have the time, funding or resources to gain the same expertise with the experimental protocol as the original authors, who were perhaps operating under a multi-year federal grant and aiming for a high-profile publication. If a researcher spends six months, say, trying to replicate such work and reports that it is irreproducible, that can deter other scientists from pursuing a promising line of research, jeopardize the original scientists' chances of obtaining funding to continue it themselves, and potentially damage their reputations.

Fair wind
Twenty years ago, a reproducibility movement would have been of less concern. Biologists were using relatively simple tools and materials, such as pre-made media and embryonic fibroblasts from chickens and mice. The techniques available were inexpensive and easy to learn, thus most experiments would have been fairly easy to double-check. But today, biologists use large data sets, engineered animals and complex culture models, especially for human cells, for which engineering new species is not an option.

Many scientists use epithelial cell lines that are exquisitely sensitive. The slightest shift in their microenvironment can alter the results, something a newcomer might not spot. It is common for even a seasoned scientist to struggle with cell lines and culture conditions, and unknowingly introduce changes that will make it seem that a study cannot be reproduced. Cells in culture are often immortal because they rapidly acquire epigenetic and genetic changes. As such cells divide, any alteration in the media or microenvironment, even if minuscule, can trigger further changes that skew results. Here are three examples from my own experience.


Expand
Cells of the same human breast cell line from different sources respond differently to the same assay.
JAMIE INMAN/BISSELL LAB
My collaborator, Ole Petersen, a breast-cancer researcher at the University of Copenhagen, and I have spent much of our scientific careers learning how to maintain the functional differentiation of human and mouse mammary epithelial cells in culture. We have succeeded in cultivating human breast cell lines for more than 20 years, and when we use them in the three-dimensional assays that we developed [6, 7], we do not observe functional drift. But our colleagues at biotech company Genentech in South San Francisco, California, brought to our attention that they could not reproduce the architecture of our cell colonies, and the same cells seemed to have drifted functionally. The collaborators had worked with us in my lab and knew the assays intimately. When we exchanged cells and gels, we saw that the problem was in the cells, procured from an external cell bank, and not the assays.

Another example arose when we submitted what we believe to be an exciting paper for publication on the role of glucose uptake in cancer progression. The reviewers objected to many of our conclusions and results because the published literature strongly predicted the prominence of other molecules and pathways in metabolic signalling. We then had to do many extra experiments to convince them that changes in media glucose levels, or whether the cells were in different contexts (shapes) when media were kept constant, drastically changed the nature of the metabolites produced and the pathways used [8].

A third example comes from a non-malignant human breast cell line that is now used by many for three-dimensional experiments. A collaborator noticed that her group could not reproduce its own data convincingly when using cells from a cell bank. She had obtained the original cells from another investigator. And they had been cultured under conditions in which they had drifted. Rather than despairing, the group analysed the reasons behind the differences and identified crucial changes in cell-cycle regulation in the drifted cells. This finding led to an exciting, new interpretation of the data that were subsequently published [9].

Repeat after me
The right thing to do as a replicator of someone else's findings is to consult the original authors thoughtfully. If e-mails and phone calls don't solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible.

When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers [3]. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.

It is true that, in some cases, no matter how meticulous one is, some papers do not hold up. But if the steps above are taken and the research still cannot be reproduced, then these non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them. But sooner or later, the paper should be withdrawn from the literature by its authors.

One last point: all journals should set aside a small space to publish short, peer-reviewed reports from groups that get together to collaboratively solve reproducibility problems, describing their trials and tribulations in detail. I suggest that we call this ISPA: the Initiative to Solve Problems Amicably.

Nature 503, 333-334 (21 November 2013) doi:10.1038/503333a
References

1. Naik, G. 'Scientists' Elusive Goal: Reproducing Study Results' The Wall Street Journal (2 December 2011); available at http://go.nature.com/aqopc3
2. Nature Med. 18, 1443 (2012).
3. Begley, C. G. & Ellis, L. M. Nature 483, 531-533 (2012).
4. Wadman, M. Nature 500, 14-16 (2013).
5. Nature 496, 398 (2013).
6. Barcellos-Hoff, M. H., Aggeler, J., Ram, T. G. & Bissell, M. J. Development 105, 223-235 (1989).
7. Petersen, O. W., Rønnov-Jessen, L., Howlett, A. R. & Bissell, M. J. Proc. Natl Acad. Sci. USA 89, 9064-9068 (1992).
8. Onodera, Y., Nam, J.-M. & Bissell, M. J. J. Clin. Invest. (in the press).
9. Ordinario, E. et al. PLoS ONE 7, e51786 (2012).
Author information: Mina Bissell is Distinguished Scientist in the Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA. Correspondence to: Mina Bissell.

Comments (7)
colin begley • 2013-11-21 06:39 PM
Thanks Mina. I appreciate your comments, but do not share your views. First to clarify, in the study in which we reported the Amgen experience, on many occasions we did go back to the original laboratories and asked them to reproduce their own experiments. They were unable to do so in their own labs, with their own reagents, when experiments were performed blinded. This shocked me. I did not expect that to be the case. Second, the purpose of my research over the last decade has been to bring new treatments to patients. In that context 'miniscule' changes that can alter an experimental result are very troubling. A result that is not sufficiently robust that it can be independently reproduced will not provide the basis for an effective therapy in an outbred human population. A result that is not able to be independently reproduced, that cannot be translated to another lab using what most would regard as standard laboratory procedures (blinding, controls, validated reagents etc) is not a result. It is simply a 'scientific allegation'. C. Glenn Begley
Gaia Shamis • 2013-11-21 05:15 PM
Here's a great post about how we can try to fix the irreproducibility of scientific papers. We should all strive to "publish every important detail of your method and every control, either in the main text or in that wonderful Internet-age invention, the Supplementary Materials." http://www.myscizzle.com/blog/scientific-papers-contain-irreproducible-results-can/
A nonymous • 2013-11-21 08:49 AM
I would be a rich man if I had received a penny for every time I heard the expression "in our hands" at a scientific lecture during my (brief) scientific career in biochemistry (back in the 1990's). I have the impression that Mrs Bissell argues that we should not care too much about making sure that published results can be reproduced because "that could be bad for the business." It does not answer the basic question: how interesting is a result that can be obtained only by a particular researcher in a particular lab ? I disagree completely that "the push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists." I believe that the opposite is true. I quit scientific research while doing my first post-doc, in great part because, after one year, I could not reproduce most of the (published!) results of the previous scientist who had worked on the project before me in the same lab (and who had then gone elsewhere). These results were the whole basis of my project. I have no doubt that if I had tried, say, 10 or 20 more times, then I would have obtained the desired result at least once. But how good would that kind of science have been ? If your experiments cannot be reproduced, no matter how meticulous you were, then they're useless to the scientific community because nothing can be built on non-reproducible results. Except a career, for the person who obtained them, of course. Scientists should be encouraged to report and publish when they fail to replicate others' experiments. That will make science (but maybe not scientific careers) progress much faster.
Nitin Gandhi • 2013-11-21 03:14 AM
There are reports that not more than 5% of papers failed when people tried to replicate them. The very fact that we have to take the issue of replication so seriously and spend lots of time (and money) over this - at these hard times - itself speaks out loudly that things are very wrong in biological research. We have two options: one is, as the author (indirectly) indicates, to sweep the dirt under the carpet; the second option is to go for the head-on collision and face the reality. I personally believe that taking the second option will eventually be inevitable, so why not do it NOW?
Anita Bandrowski • 2013-11-21 01:34 AM
Thank you William, that is a rather amicable description of the Reproducibility Initiative and I salute you for spearheading this. Robustness of effect is a very important issue when trying to take science to the clinic or even an undergraduate lab. The article mentions a point about large data sets that I would like to follow up on. The author states that "But today, biologists use large data sets, engineered animals and complex culture models...". The fact that a data set is large should not preclude someone from reproducing it. Yes, there is going to be a different set of expertise required to know what the numbers mean, but this should not significantly change the way that data are interpreted. In a paper we published last year (Cachat et al, 2012), we looked at a single data set deposited in the GEO database. The authors' data was included in their supplementary materials and brought into a database called the drug related gene database (DRG) along with their interpretation as to which genes were significantly changed in expression. An independent group from the University of British Columbia, with a tool called Gemma, took the same data set and ran it through their pipeline along with thousands of other data sets. After alignment steps and several difficulties described in detail in the paper, we found the following: "From the original sets of comparisons, we selected a set of 1370 results in DRG that were stated to be differentially expressed as a function of chronic or acute cocaine. Of these 617 were confirmed by the analysis done in Gemma. Thus, only half of the original assertions were confirmed by the reanalysis." Note, there is no biological difference between these two data sets and statistically speaking we would expect ~5% misalignment not 50%. I really can't see that any scientist can argue that not being able to reproduce a finding, especially once you have just a pile of numbers, is a good way to do science. We have started the Resource Identification Initiative to help track data sets, analysis pipelines and software tools to make methods more transparent and I salute Nature, and many other journals that are starting to ask for more from authors. If anyone here would like to join the efforts please visit our page on the Force11 website where the group is coordinating efforts with publishers to put in place a consistent set of standards across all major publishers.
William Gunn • 2013-11-20 10:21 PM
Thanks for this thoughtful post, Mina. Nature and PLOS, as well as the Reproducibility Initiative, of which I'm co-director, are all worthy efforts. Let me share some preliminary information about the selection process we went through. We searched both Scopus and Web of Science for papers matching a range of cancer biology terms. For each of 2012, 2011, and 2010, we then ranked those lists by the number of citations and picked the top 16 or 17 from each year. As you might expect, many of the results were reviews, so we excluded those, as well as clinical trials. We also excluded papers which simply reported the sequencing of a new species. Our selection criteria also specified exclusion of papers using novel techniques requiring specialized skills or training, such as you refer to in your post. However, we didn't encounter very many of those among the most highly cited papers from the past three years. If I recall, there was only one where the Science Exchange network didn't have a professional lab which could perform the technique. So it may well be true that some papers are hard to replicate because the assays are novel, but this is not the majority of even high-impact papers. Two other points: 1) Each experiment is being done by an independent professional lab which specializes in that technique, so if it doesn't replicate in their hands, in consultation with the primary authors, then it's not likely any other lab will be able to get it to work either. The full protocols for carrying out the experiments will be shared with the primary authors before the work is started, allowing them to suggest any modifications or improvements. The amended protocols, as well as the generated data, will be openly available on the Open Science Framework, so any interested party can see the protocol and data for themselves. At a minimum, this process will add value to the existing publication by clarifying steps that may have been unclear in the original paper. 2) It would be good if the replications could be uncovered by other labs working in the same area, but that's not what happens in practice. In fact, in a 2011 paper in Nature Reviews Drug Discovery http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html , Prinz et al found that whether or not Bayer could validate a target in-house had nothing to do with how many preclinical papers were published on the topic, the impact factor of the journals those studies were in, or the number of independent groups working on it. In the Bayer studies, most of the ones that did replicate, however, showed robustness to minor variations, whereas even 1:1 replications showed inconsistencies with ones that didn't. As far as Amgen, they often did contact the original labs, and found irreproducibilities with the same researcher, same reagents, in the same lab. We will be working closely with the authors of the papers we're replicating as the work is being conducted and feedback so far has been positive, you might almost say amicable. In the end, this is the effort of two scientists to make science work better for everyone. The worst that could happen is that we learn a lot about what level of reproducibility to expect and how to reliably build on a published finding. At best, funders will start tacking a few percent on to grants for replication and publishers will start asking for it. That can only be good for science as a whole.
Cell Press • 2013-11-20 09:54 PM
I couldn't agree more. See my blog at: http://news.cell.com/cellreports/cell-reports/in-defense-of-science