"What really knocks me out is a book that, when you're all done reading it, you wish the author that wrote it was a terrific friend of yours and you could call him up on the phone whenever you felt like it. That doesn't happen much, though."(Jerome David Salinger “TheCatcher in the Rye” 1951 novel)
"We truly live hard lives: we must bear all manner of external pressure, and, even more, face the confusion within our own hearts. In the midst of bitter struggle, if someone casts an understanding glance your way, you feel a warmth of life; even a brief glimpse is enough to stir me deeply." — Jerome David Salinger, The Catcher in the Rye (1951)
"Good is repaid with good, and evil with evil." Someone whose own manuscript has received unfair reviews is more likely to treat others the same way. "Do not judge, so that you won't be judged. For with the judgment you use, you will be judged, and with the measure you use, it will be measured to you." (Matthew 7:1–2)
Photo description: the same passage from The Catcher in the Rye quoted above.
Scholars care greatly about the sources of your insights and your information. You must tell your reader where the raw material came from! Your facts are only as good as the place where you got them, and your conclusions only as good as the facts they're based on.
Still, after studying these "hidden patterns" carefully, Laowen found that although the acceptance rates of some top journals do show a certain seasonality, much like crops in a field, the seasonal variation is very small. As the editors of Physical Review Letters put it, "No statistically significant variations were found in the monthly acceptance rates." In other words, statistically speaking, the timing of submission has almost no effect on a paper's chance of acceptance.
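If a journal publishes monthly submission and acceptance counts, a claim like this is easy to check with a chi-square test of homogeneity. Here is a minimal sketch in Python; the monthly figures are invented purely for illustration, and scipy's chi2_contingency does the work.

```python
# Minimal sketch: do monthly acceptance rates vary more than chance allows?
# The counts below are invented for illustration, not real journal data.
from scipy.stats import chi2_contingency

submissions = [410, 395, 430, 420, 405, 380, 360, 355, 440, 455, 425, 390]
acceptances = [102, 95, 110, 104, 99, 93, 88, 85, 108, 115, 106, 96]
rejections = [s - a for s, a in zip(submissions, acceptances)]

# 2 x 12 contingency table: accepted vs. rejected counts, one column per month.
chi2, p, dof, _ = chi2_contingency([acceptances, rejections])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A large p-value (p > 0.05) means the month-to-month variation in acceptance
# rate is consistent with chance -- no statistically significant seasonality,
# which is what the Physical Review Letters editors reported.
```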
Reader comments on the article (reproduced further below) from nature.com:

Noam Harel • 2013-12-27 04:53 AM
The tone of this commentary and its title both seem to lobby against the growing communal push for replicating basic scientific results. However, Dr. Bissell's own examples pretty clearly support the exact reasons why the 'Replication Drive' has been growing in the first place: to be trusted, scientific results need to be reproducible; when there are discrepancies, scientists should work together to determine why. This is good for science. The bottom line? Everything has a cost. The societal cost of devoting resources toward reproducing scientific results is far outweighed by the benefits. Conversely, the societal cost of publishing papers based on the one or two times an experiment "worked" (i.e. got the desired result), while ignoring the ten or more times it "didn't work", has for too long been a dirty, open secret among scientists. It's great to see this issue subjected to a more public debate at long last.

Elizabeth Iorns • 2013-12-21 09:36 PM
A response from Andrew Gelman, Director of the Applied Statistics Center at Columbia University, is available here: http://andrewgelman.com/2013/12/17/replication-backlash/

John J. Pippin • 2013-12-08 07:22 PM
So replication threatens the cachet of irreproducible experiments? For those removed from labs, the more important issue is the robustness and reliability of basic science experiments, not the ability to get a desired result at one point but not ever again. Basic science research is not (to the rest of us) about whether you can publish and thereby keep the grants coming. It's about getting to the truth, and hopefully about translating that truth to something useful rather than arcane. Dr. Bissell's opinion piece beggars the ability of laboratory science to stand on merit, and asks for permission either to be wrong or at least irrelevant. That's not science.

Etienne Burdet • 2013-11-28 01:50 PM
I tried to replicate the LHC experiments and failed. This is proof that the Higgs boson is not science. Can I have my Nobel, please?

Yishai Shimoni • 2013-11-25 05:54 PM
I think there are a few points that were ignored thus far in the comments:
1. In principle it should not be obligatory for an experiment to be reproducible. It is useful to report surprising results that cannot be explained. However, in such a case the reasons should be clarified, and until then no real conclusion can be drawn from the results. The results may be stochastic, they may depend on a specific overlooked detail, or they may be an artifact. Until the scientific community understands the conditions under which a result is reproducible, or to what extent it is reproducible, it is not useful and should be regarded as a possible artifact.
2. Requiring anyone who wants to reproduce a result to contact the authors is not practical, especially if it is an important result. What if a few hundred labs want to use the result? What if the researcher who performed the experiment changed position, changed fields, or even passed away?
3. The suggestion that anyone who wants to reproduce the results should contact the authors may very easily lead to knowledge-hoarding.
It is a natural tendency to want to become a world authority on a specific technique, especially after spending years arriving at the result. Unfortunately, this may mean holding back some specific essential detail, so that only by working with the author, and adding them as a co-author, is it possible to publish anything on it.

Replication Political Science • 2013-11-24 11:44 AM
I do agree with many of the points about being careful and considerate when replicating published work. But let's talk about the newcomers a bit more. First of all, I'm not sure I like the word "newcomer". Although this might be a misinterpretation, it sounds as if those trying to replicate work are the "juniors" who are not quite sure of what they are doing, while the "seniors" worked for years on a topic and deserve special protection against reputational damage. It goes without saying that anyone trying to replicate work should try to cooperate with the original authors. I agree. However, I would like to point out that original authors don't always show the willingness or capacity to invest time in helping someone else reproduce the results. As Bissell says herself, experiments can take years, and once the paper is published and someone decides to try to replicate it, the authors might already be working on a new, time-intensive topic. My students in the replication workshop were sometimes frustrated when original authors were not available to help with the replication. So I'd say, let's not just postulate that "newcomers" should try to cooperate, but that original authors should make time to help as well, to produce the best possible outcome when validating work. It is in the interest of original authors to clearly report the experimental conditions, so that others are not thrown off track by tiny differences. This goes for replication based on re-analysing data as well as for experiments. The responsibility for paying attention to details lies not only with those trying to replicate work. From my experience, and that of my students trying to replicate work in the social sciences, papers do not always list all the steps that led to the findings. My students are often surprised at how authors came up with a variable, how they recoded it, which model specifications they used, and so on. Sometimes the STATA or R code is incomplete. Therefore, while those replicating work should try to be aware of such details, original authors need to work on this as well. Original research might take years, but replicating it really should not take years just because not all the information was given. Bissell states that replicators should bear the costs of visiting research labs and cooperating with the original authors ("Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible."). I'm not so sure. As Bissell points out at the beginning of her article, it is often students who replicate work. Will their lab pay for expensive research trips? Will they get travel grants for a replication? And in the social sciences, which often work without a lab structure, who will pay the replication costs? I feel that the statement that a replicator must bear all costs, even though original authors profit from the cooperation as well, can be off-putting for many students engaging in replication.
Irakli Loladze • 2013-11-24 04:18 PM
Why would some 'seniors' resist the replication initiative? The federal investment in R&D is over $140 billion annually. Almost half of it goes to NIH, NASA, NSF, DOE, and USDA. A huge chunk of it is given away on the basis of grant proposals. For every grant that a scientist wins, about half goes to her university as overhead. So deans and provosts salivate over every potential grant and promote those scientists who win more grants, not those who pursue truth. The reproducibility of research is the last thing on their minds, if it is on their minds at all. The system successfully turns scientists from truth seekers into experts in securing external funding. The two paths do not have to be mutually exclusive, but often, and increasingly, they conflict. As grantsmanship sucks in more and more skill and time, less time is devoted to genuine science. The disease is systemic, affecting both empirical and theoretical research. The system discourages multi-year, painstaking analysis of biological systems to distill kernels of truth out of their enormous complexity. Instead, it encourages hiding sloppy science in complexity and rushing out flashy publications. The splashy publications in turn lead to more grants. Big grants engender even bigger grants. Rich labs get richer. The loop is self-reinforcing. Insisting on reproducibility is anathema to this pathological loop.

Kenneth Pimple • 2013-11-22 07:50 PM
I am glad to have read Dr. Bissell's piece as well as the responses. I am a humanist who studies research integrity and the responsible conduct of research, and issues of replication clearly fall within this domain. I have two questions, both of which may be hopelessly naive; if so, please forgive me.
1. I don't see how Dr. Bissell's second example is related to replication. At core, the issue seems to be that the paper under review challenged accepted understanding; this being the case, the reviewers asked for additional proof. I should think this would be good practice: if one claims something outside the mainstream and the evidence is not airtight, one should expect to face skepticism and be asked to make an adequate case.
2. I wonder how often replication, in a strict sense, is actually necessary. Is it always the case that the very specific steps and very specific outcomes must be replicated identically? I should think that in some instances the mechanism or phenomenon or model or underlying process (I don't know the best term) would be adequately suggestive, even if not definitive, to merit additional efforts along the same line.
I would like to understand these things better, but I suppose my second question is trying to make a point: it isn't replication that matters; discovery and reliable knowledge matter. Replication is a good way (perhaps the best) to verify discovery, but surely there are often multiple ways to arrive at identical knowledge.

Irakli Loladze • 2013-11-22 10:38 AM
"A result that is not able to be independently reproduced ... using ... standard laboratory procedures (blinding, controls, validated reagents etc.) is not a result. It is simply a 'scientific allegation'." Colin really nailed it here.
Who would oppose the Reproducibility Initiative? Those who stand to lose the most: the labs that mastered grantsmanship but failed to make their results reproducible. The current system does not penalize publishing sexy but non-reproducible findings. In fact, such publications only boost the chances of getting another grant. It is about time to end this vicious cycle, which benefits a few but hurts science at large.

Prashanth Nuggehalli Srinivas • 2013-11-22 09:34 AM
Very interesting. I see this from the perspective of a public health researcher. The problem of reproducibility understandably acquires more complexity with the addition of human behaviours (and organisational/societal behaviours). The impossibility of "controlling" the experimental conditions imposes many more restrictions on researchers, and the results are quite evident: these sciences offer more "explanatory hypotheses" for the change observed rather than "mechanisms" in the way laboratory sciences see things. I am sure other systems, such as biological systems, also constantly deal with such problems, where micro-environmental conditions can have system-wide effects. I would have thought that this kind of laboratory research would be the one most amenable to such replication drives... perhaps not? Closer coordination between people at these laboratories certainly appears to be a good way of dealing with this problem.
Perhaps we shouldn't place quite so much emphasis on reproducibility. The article itself:

Reproducibility: The risks of the replication drive
Mina Bissell | 20 November 2013
The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists, says Mina Bissell.
Illustration: PAUL BLOW

Every once in a while, one of my postdocs or students asks, in a grave voice, to speak to me privately. With terror in their eyes, they tell me that they have been unable to replicate one of my laboratory's previous experiments, no matter how hard they try. Replication is always a concern when dealing with systems as complex as the three-dimensional cell cultures routinely used in my lab. But with time and careful consideration of experimental conditions, they, and others, have always managed to replicate our previous data.
Articles in both the scientific and popular press [1–3] have addressed how frequently biologists are unable to repeat each other's experiments, even when using the same materials and methods. But I am concerned about the latest drive by some in biology to have results replicated by an independent, self-appointed entity that will charge for the service. The US National Institutes of Health is considering making validation routine for certain types of experiments, including the basic science that leads to clinical trials [4]. But who will evaluate the evaluators? The Reproducibility Initiative, for example, launched by the journal PLoS ONE with three other companies, asks scientists to submit their papers for replication by third parties, for a fee, with the results appearing in PLoS ONE. Nature has targeted [5] reproducibility by giving more space to methods sections and encouraging more transparency from authors, and has composed a checklist of necessary technical and statistical information. This should be applauded.
So why am I concerned? Isn't reproducibility the bedrock of the scientific process? Yes, up to a point. But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. In the past ten years, every paper published on which I have been senior author has taken between four and six years to complete, and at times much longer. People in my lab often need months, if not a year, to replicate some of the experiments we have done on the roles of the microenvironment and extracellular matrix in cancer, and that includes consulting with other lab members, as well as the original authors.
People trying to repeat others' research often do not have the time, funding or resources to gain the same expertise with the experimental protocol as the original authors, who were perhaps operating under a multi-year federal grant and aiming for a high-profile publication. If a researcher spends six months, say, trying to replicate such work and reports that it is irreproducible, that can deter other scientists from pursuing a promising line of research, jeopardize the original scientists' chances of obtaining funding to continue it themselves, and potentially damage their reputations.
Fair wind
Twenty years ago, a reproducibility movement would have been of less concern. Biologists were using relatively simple tools and materials, such as pre-made media and embryonic fibroblasts from chickens and mice. The techniques available were inexpensive and easy to learn, thus most experiments would have been fairly easy to double-check. But today, biologists use large data sets, engineered animals and complex culture models, especially for human cells, for which engineering new species is not an option.
Many scientists use epithelial cell lines that are exquisitely sensitive. The slightest shift in their microenvironment can alter the results, something a newcomer might not spot. It is common for even a seasoned scientist to struggle with cell lines and culture conditions, and unknowingly introduce changes that will make it seem that a study cannot be reproduced. Cells in culture are often immortal because they rapidly acquire epigenetic and genetic changes. As such cells divide, any alteration in the media or microenvironment, even if minuscule, can trigger further changes that skew results. Here are three examples from my own experience.
Figure: Cells of the same human breast cell line from different sources respond differently to the same assay. (JAMIE INMAN/BISSELL LAB)

My collaborator, Ole Petersen, a breast-cancer researcher at the University of Copenhagen, and I have spent much of our scientific careers learning how to maintain the functional differentiation of human and mouse mammary epithelial cells in culture. We have succeeded in cultivating human breast cell lines for more than 20 years, and when we use them in the three-dimensional assays that we developed [6,7], we do not observe functional drift. But our colleagues at biotech company Genentech in South San Francisco, California, brought to our attention that they could not reproduce the architecture of our cell colonies, and the same cells seemed to have drifted functionally. The collaborators had worked with us in my lab and knew the assays intimately. When we exchanged cells and gels, we saw that the problem was in the cells, procured from an external cell bank, and not the assays.
Another example arose when we submitted what we believe to be an exciting paper for publication on the role of glucose uptake in cancer progression. The reviewers objected to many of our conclusions and results because the published literature strongly predicted the prominence of other molecules and pathways in metabolic signalling. We then had to do many extra experiments to convince them that changes in media glucose levels, or whether the cells were in different contexts (shapes) when media were kept constant, drastically changed the nature of the metabolites produced and the pathways used [8].
A third example comes from a non-malignant human breast cell line that is now used by many for three-dimensional experiments. A collaborator noticed that her group could not reproduce its own data convincingly when using cells from a cell bank. She had obtained the original cells from another investigator, and they had been cultured under conditions in which they had drifted. Rather than despairing, the group analysed the reasons behind the differences and identified crucial changes in cell-cycle regulation in the drifted cells. This finding led to an exciting new interpretation of the data, which was subsequently published [9].
Repeat after me
The right thing to do as a replicator of someone else's findings is to consult the original authors thoughtfully. If e-mails and phone calls don't solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. Of course replicators must pay for all this, but it is a small price in relation to the time one will save, or the suffering one might otherwise cause by declaring a finding irreproducible.
When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. They could confirm only 11% of the papers [3]. I think that if more biotech companies had the patience to send someone to the original labs, perhaps the percentage of reproducibility would be much higher.
It is true that, in some cases, no matter how meticulous one is, some papers do not hold up. But if the steps above are taken and the research still cannot be reproduced, then these non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them. But sooner or later, the paper should be withdrawn from the literature by its authors.
One last point: all journals should set aside a small space to publish short, peer-reviewed reports from groups that get together to collaboratively solve reproducibility problems, describing their trials and tribulations in detail. I suggest that we call this ISPA: the Initiative to Solve Problems Amicably.
Nature 503, 333–334 (21 November 2013) doi:10.1038/503333a

References
1. Naik, G. 'Scientists' Elusive Goal: Reproducing Study Results' The Wall Street Journal (2 December 2011); available at http://go.nature.com/aqopc3
2. Nature Med. 18, 1443 (2012).
3. Begley, C. G. & Ellis, L. M. Nature 483, 531–533 (2012).
4. Wadman, M. Nature 500, 14–16 (2013).
5. Nature 496, 398 (2013).
6. Barcellos-Hoff, M. H., Aggeler, J., Ram, T. G. & Bissell, M. J. Development 105, 223–235 (1989).
7. Petersen, O. W., Rønnov-Jessen, L., Howlett, A. R. & Bissell, M. J. Proc. Natl Acad. Sci. USA 89, 9064–9068 (1992).
8. Onodera, Y., Nam, J.-M. & Bissell, M. J. J. Clin. Invest. (in the press).
9. Ordinario, E. et al. PLoS ONE 7, e51786 (2012).
Author information
Affiliations: Mina Bissell is Distinguished Scientist in the Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA. Correspondence to: Mina Bissell.
Comments (7)

Colin Begley • 2013-11-21 06:39 PM
Thanks, Mina. I appreciate your comments, but do not share your views. First, to clarify: in the study in which we reported the Amgen experience, on many occasions we did go back to the original laboratories and asked them to reproduce their own experiments. They were unable to do so in their own labs, with their own reagents, when the experiments were performed blinded. This shocked me. I did not expect that to be the case. Second, the purpose of my research over the last decade has been to bring new treatments to patients. In that context, 'minuscule' changes that can alter an experimental result are very troubling. A result that is not sufficiently robust to be independently reproduced will not provide the basis for an effective therapy in an outbred human population. A result that is not able to be independently reproduced, that cannot be translated to another lab using what most would regard as standard laboratory procedures (blinding, controls, validated reagents etc.), is not a result. It is simply a 'scientific allegation'. — C. Glenn Begley

Gaia Shamis • 2013-11-21 05:15 PM
Here's a great post about how we can try to fix the irreproducibility of scientific papers. We should all strive to "publish every important detail of your method and every control, either in the main text or in that wonderful Internet-age invention, the Supplementary Materials." http://www.myscizzle.com/blog/scientific-papers-contain-irreproducible-results-can/

A nonymous • 2013-11-21 08:49 AM
I would be a rich man if I had received a penny for every time I heard the expression "in our hands" at a scientific lecture during my (brief) scientific career in biochemistry (back in the 1990s). I have the impression that Mrs Bissell argues that we should not care too much about making sure that published results can be reproduced because "that could be bad for the business". It does not answer the basic question: how interesting is a result that can be obtained only by a particular researcher in a particular lab? I disagree completely that "the push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists." I believe that the opposite is true. I quit scientific research while doing my first post-doc, in great part because, after one year, I could not reproduce most of the (published!) results of the previous scientist who had worked on the project before me in the same lab (and who had then gone elsewhere). Those results were the whole basis of my project. I have no doubt that if I had tried, say, 10 or 20 more times, I would have obtained the desired result at least once. But how good would that kind of science have been? If your experiments cannot be reproduced, no matter how meticulous you were, then they're useless to the scientific community, because nothing can be built on non-reproducible results. Except a career, for the person who obtained them, of course. Scientists should be encouraged to report and publish when they fail to replicate others' experiments. That will make science (but maybe not scientific careers) progress much faster.
Nitin Gandhi • 2013-11-21 03:14 AM
There are reports that no more than 5% of papers failed when replication was attempted. The very fact that we have to take the issue of replication so seriously, and spend lots of time (and money) on it in these hard times, itself speaks loudly that things are very wrong in biological research. We have two options: one is, as the author (indirectly) indicates, to sweep the dirt under the carpet; the second is to go for the head-on collision and face reality. I personally believe that taking the second option will eventually be inevitable, so why not do it NOW?

Anita Bandrowski • 2013-11-21 01:34 AM
Thank you, William, that is a rather amicable description of the Reproducibility Initiative, and I salute you for spearheading this. Robustness of effect is a very important issue when trying to take science to the clinic, or even to an undergraduate lab. The article makes a point about large data sets that I would like to follow up on. The author states: "But today, biologists use large data sets, engineered animals and complex culture models...". The fact that a data set is large should not preclude someone from reproducing it. Yes, a different set of expertise is required to know what the numbers mean, but this should not significantly change the way the data are interpreted. In a paper we published last year (Cachat et al, 2012), we looked at a single data set deposited in the GEO database. The authors' data were included in their supplementary materials and brought into a database called the drug-related gene database (DRG), along with their interpretation as to which genes were significantly changed in expression. An independent group from the University of British Columbia, with a tool called Gemma, took the same data set and ran it through their pipeline along with thousands of other data sets. After alignment steps and several difficulties described in detail in the paper, we found the following: "From the original sets of comparisons, we selected a set of 1370 results in DRG that were stated to be differentially expressed as a function of chronic or acute cocaine. Of these 617 were confirmed by the analysis done in Gemma. Thus, only half of the original assertions were confirmed by the reanalysis." Note: there is no biological difference between these two data sets, and statistically speaking we would expect ~5% misalignment, not 50%. I really can't see how any scientist can argue that not being able to reproduce a finding, especially when all you have is a pile of numbers, is a good way to do science. We have started the Resource Identification Initiative to help track data sets, analysis pipelines and software tools to make methods more transparent, and I salute Nature and the many other journals that are starting to ask for more from authors. If anyone here would like to join the efforts, please visit our page on the Force11 website, where the group is coordinating efforts with publishers to put in place a consistent set of standards across all major publishers.
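To see how stark that 617-out-of-1370 figure is, here is a back-of-the-envelope check in Python. The error model, disagreements occurring independently at the nominal 5% rate, is my assumption for illustration; the comment above supplies only the counts.

```python
# Back-of-the-envelope check of the DRG/Gemma numbers quoted above.
# Assumption (mine, for illustration): if the two pipelines disagreed only
# through ~5% random misalignment, confirmations would follow a
# Binomial(n=1370, p=0.95) distribution.
n_results = 1370      # DRG assertions checked
n_confirmed = 617     # confirmed by the Gemma reanalysis
p_agree = 0.95        # assumed agreement rate under ~5% misalignment

mean = n_results * p_agree                          # ~1302 expected
sd = (n_results * p_agree * (1 - p_agree)) ** 0.5   # ~8.1
z = (n_confirmed - mean) / sd

print(f"observed agreement: {n_confirmed / n_results:.1%}")   # ~45.0%
print(f"expected: {mean:.0f} +/- {sd:.1f} confirmations")
print(f"z = {z:.0f}")   # roughly -85 standard deviations below expectation
# A shortfall of ~85 sigma cannot be chance; the 50% disagreement must come
# from the analysis pipelines themselves, which is Bandrowski's point.
```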
William Gunn • 2013-11-20 10:21 PM
Thanks for this thoughtful post, Mina. Nature and PLOS, as well as the Reproducibility Initiative, of which I'm co-director, are all worthy efforts. Let me share some preliminary information about the selection process we went through. We searched both Scopus and Web of Science for papers matching a range of cancer-biology terms. For each of 2012, 2011 and 2010, we then ranked those lists by number of citations and picked the top 16 or 17 from each year (a rough sketch of this filtering appears at the end of this post). As you might expect, many of the results were reviews, so we excluded those, as well as clinical trials. We also excluded papers that simply reported the sequencing of a new species. Our selection criteria also specified exclusion of papers using novel techniques requiring specialized skills or training, such as you refer to in your post. However, we didn't encounter very many of those among the most highly cited papers from the past three years. If I recall, there was only one where the Science Exchange network didn't have a professional lab that could perform the technique. So it may well be true that some papers are hard to replicate because the assays are novel, but this is not the case for the majority of even high-impact papers. Two other points: 1) Each experiment is being done by an independent professional lab that specializes in that technique, so if it doesn't replicate in their hands, in consultation with the primary authors, then it's not likely any other lab will be able to get it to work either. The full protocols for carrying out the experiments will be shared with the primary authors before the work is started, allowing them to suggest any modifications or improvements. The amended protocols, as well as the generated data, will be openly available on the Open Science Framework, so any interested party can see the protocol and data for themselves. At a minimum, this process will add value to the existing publication by clarifying steps that may have been unclear in the original paper. 2) It would be good if the replications could be uncovered by other labs working in the same area, but that's not what happens in practice. In fact, in a 2011 paper in Nature Reviews Drug Discovery (http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html), Prinz et al. found that whether or not Bayer could validate a target in-house had nothing to do with how many preclinical papers were published on the topic, the impact factor of the journals those studies were in, or the number of independent groups working on it. In the Bayer studies, most of the ones that did replicate showed robustness to minor variations, whereas even 1:1 replications showed inconsistencies with the ones that didn't. As for Amgen, they often did contact the original labs, and found irreproducibility with the same researcher, same reagents, in the same lab. We will be working closely with the authors of the papers we're replicating as the work is being conducted, and feedback so far has been positive; you might almost say amicable. In the end, this is the effort of two scientists to make science work better for everyone. The worst that could happen is that we learn a lot about what level of reproducibility to expect and how to reliably build on a published finding. At best, funders will start tacking a few percent onto grants for replication, and publishers will start asking for it. That can only be good for science as a whole.

Cell Press • 2013-11-20 09:54 PM
I couldn't agree more. See my blog at: http://news.cell.com/cellreports/cell-reports/in-defense-of-science
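Finally, a rough sketch of the citation-based selection protocol Gunn describes above. The record fields and sample entries are invented for demonstration; a real pipeline would pull bibliographic records from Scopus or Web of Science rather than a hard-coded list.

```python
# Sketch of the selection protocol described by William Gunn: per year, rank
# cancer-biology papers by citation count and take the top 16-17, excluding
# reviews, clinical trials and papers that simply report a new sequencing.
# Field names and the sample records are illustrative assumptions.
from typing import Dict, List

EXCLUDED_TYPES = {"review", "clinical trial", "sequencing report"}
PER_YEAR = 17  # "the top 16 or 17 from each year"

def select_candidates(papers: List[Dict], years=(2010, 2011, 2012)) -> List[Dict]:
    selected = []
    for year in years:
        eligible = [p for p in papers
                    if p["year"] == year and p["type"] not in EXCLUDED_TYPES]
        # Most-cited first, then keep the per-year quota.
        eligible.sort(key=lambda p: p["citations"], reverse=True)
        selected.extend(eligible[:PER_YEAR])
    return selected

# Tiny invented sample: only the 2011 research article survives the filters.
papers = [
    {"title": "Example primary study A", "year": 2011,
     "type": "research article", "citations": 840},
    {"title": "Example review B", "year": 2011,
     "type": "review", "citations": 1200},
    {"title": "Example trial C", "year": 2010,
     "type": "clinical trial", "citations": 650},
]
for p in select_candidates(papers):
    print(p["year"], p["citations"], p["title"])
```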