Making more scientific research true

Author: Mirte Golverdingen

Speaker: John P.A. Ioannidis

Department: Meta-Research Innovation Center at Stanford (METRICS), Department of Medicine, Stanford Prevention Research Center, Department of Health Research and Policy, Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, California, United States of America

Subject: Making more scientific research true             

Location: Erasmus University College

Date: 9 November 2015      

In the week of 9 November the 102nd ‘Dies Natalis’ of the Erasmus University Rotterdam was celebrated. On the occasion of this anniversary two people received an honorary doctorate, one of them being Professor John P.A. Ioannidis. He was awarded the doctorate for the global orientation of his research and his broad and critical view on research in general. For this occasion Professor Ioannidis gave an interactive lecture about two of his research articles.

In his research he tries to answer the question: how do we communicate research, and how can we do it the right way? He is not the only one working on this question; many different groups are investigating it. They look at different topics that can be improved, such as methods, reporting, reproducibility, evaluation and incentives. Each topic has its own difficulties, which shows how hard it is to make science better.

When scientists publish their articles, 97% of them claim to have found something significant. This results in an abundance of information, and we cannot even be sure that it is really significant. Although the percentage of significant results, as claimed by the authors, is slowly decreasing, it is still very high. Do we need to change the scientific system so that we can be sure that published work is significant?

Articles can be labeled as unreliable when a study does a lot of work in an underpowered setting. This means that there are too few results to trust, because the study has low statistical power. Moreover, many articles have reproducibility issues: it is impossible to reproduce the same result. And when authors reanalyze their own articles, they are expected to come up with a different result, otherwise it is not considered worthwhile. In this way, re-analysis is used to improve on the original publication.

How are we going to change this system? Ioannidis put forward a new rewarding system. You are rewarded more for doing peer review, for successfully translating publications, and for contributing to education and training. You lose reward for merely obtaining funding, see also table 1. Ioannidis thinks that this new rewarding system contributes to making more published research true.

However, are these changes to the system feasible? It is hard to say, but the major stakeholders all have different views on what is usable and on what kind of papers are needed. Some stakeholders would profit from this new system, while others would prefer to stay with the old one. Yet the majority of research effort is still wasted, and we know that changes to the system help scientific research pursue its goals more successfully.

 

Table 1: An illustration of different exchange rates for various currencies and wealth items in research. (Source: Ioannidis, J. P. A. (2014). How to Make More Published Research True. PLoS Medicine, 11(10), e1001747. http://doi.org/10.1371/journal.pmed.1001747)


It is very important to stay focused on making scientific research better. More and more papers are published, and we need to know whether all this information is significant. To achieve this we can try out different rewarding systems. However, it remains very hard to change the way scientific research works.

CRISPR-based interference: from Cas9 to Cpf1

BioNanoscience seminar, 12.11.15

Title: CRISPR-based interference in prokaryotes: from exploration to exploitation

Speaker: John van der Oost (WUR)

Author: Edgar Schönfeld

CRISPR-Cas9 is a new genome-editing tool that has revolutionized biology. It is better than previously used methods in terms of cost, time and precision. The tool is based on a bacterial defence mechanism against viruses that was discovered in the 1980s. Back then, researchers found clusters of identical repeating DNA sequences in bacterial genomes, separated by variable regions. Much later, this pattern was named “clustered regularly interspaced short palindromic repeats”, or CRISPR. Although it was regarded as a curious structure, nobody saw the significance of these repeats until it was discovered that some of the variable regions were homologous to viral genes. Soon scientists realized that they were part of an adaptive immune system in bacterial cells, which works in the following way. When viral DNA enters the cell, Cas proteins (CRISPR-associated proteins) fragment the invading sequence and select a certain region, which is inserted as a spacer in the CRISPR locus. The variable regions (called spacers) in the CRISPR locus thus constitute a library of previously invading DNA sequences. The CRISPR locus is preceded by a leader sequence and an operon of genes encoding the Cas proteins, which have individual functions in the viral defence mechanism. After the CRISPR locus is transcribed into pre-crRNA, it is cut into individual spacer crRNAs (CRISPR RNAs) by a Cas protein; the repeating RNA regions mark the borders between the different spacer crRNA sequences. Each such crRNA associates with other specific Cas proteins. Finally, these CRISPR ribonucleoprotein complexes (crRNPs) scan the DNA of invading viruses. The RNA component acts as a guide that recognizes, through homology, the invading DNA sequence it descended from. Eventually, the Cas protein degrades the targeted sequence.

Figure: The CRISPR defence mechanism. From: “Revenge of the phages: defeating bacterial defences”, Julie E. Samson, Alfonso H. Magadán, Mourad Sabri & Sylvain Moineau, Nature Reviews Microbiology 11, 675-687 (2013)

There are two classes of CRISPR mechanisms. Class 1 systems contain an operon encoding the different Cas proteins that form a complex called “Cascade”, whereas Class 2 systems contain only Cas9. In such a system Cas9 handles the trimming of the pre-crRNA and also associates with the resulting guide RNAs. In both processes, Cas9 requires the interaction of tracrRNA, which is transcribed together with the Cas genes and the CRISPR locus. In contrast to a crRNP complex involving Cas3 and Cascade, the crRNA connects to only a single unit (Cas9) in a Class 2 system. Cas9 induces a double-strand break in the targeted DNA, which is repaired by means of the error-prone non-homologous end joining. Researchers employ this CRISPR-Cas9 system to knock out genes. In order to insert a new gene, a template DNA sequence must be provided as well, which is then incorporated via homology-directed repair.
Interestingly, there is another Class 2 effector protein that offers several advantages compared to Cas9, and John van der Oost was involved in its recent discovery. This protein is called Cpf1. Its most distinctive feature is its independence from tracrRNA. This simplifies the design of a CRISPR genome-editing system, because less RNA needs to be synthesized. In addition, Cpf1 creates sticky ends when cutting DNA, as opposed to the blunt ends resulting from Cas9-mediated cleavage. That facilitates the insertion of new genetic sequences, and also allows one to choose the orientation of insertion.
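The difference between the two cut geometries can be sketched in a toy model. This is only an illustration: the sequences, cut positions and the 5-nucleotide overhang are assumptions for the example, not data from the talk.

```python
# Toy model of a DNA duplex: 'top' is one strand, 'bottom' is the
# complementary strand written aligned to 'top', so both share one index.

def blunt_cut(top, bottom, pos):
    """Cas9-style cut: both strands are cleaved at the same position,
    leaving flush (blunt) ends."""
    return (top[:pos], bottom[:pos]), (top[pos:], bottom[pos:])

def staggered_cut(top, bottom, pos, overhang=5):
    """Cpf1-style cut: the two strands are cleaved at offset positions,
    leaving short single-stranded overhangs ("sticky ends")."""
    return (top[:pos], bottom[:pos + overhang]), (top[pos:], bottom[pos + overhang:])

top    = "AAAATTTTGGGGCCCC"
bottom = "TTTTAAAACCCCGGGG"

left, right = blunt_cut(top, bottom, 8)            # both fragments end flush
sticky_left, sticky_right = staggered_cut(top, bottom, 6)
# The 5-nt overhang on 'sticky_left' can only base-pair with the matching
# overhang on 'sticky_right' (or on an insert), which is what fixes the
# orientation in which a new sequence is ligated.
```

A blunt-ended insert can ligate in either direction; the complementary overhangs of a staggered cut only anneal one way, which is why sticky ends allow choosing the insertion orientation.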
The CRISPR-Cas9 system has sparked a patent fight between two American research institutions, which might come to an end if the CRISPR-Cpf1 system proves to be superior to CRISPR-Cas9.

CRISPR-based interference in prokaryotes: from exploration to exploitation

Speaker:         Prof. dr. John van der Oost

Department:    Bionanoscience

Subject:          CRISPR-based interference in prokaryotes

Location:         Delft

Date:               12-11-2015

 Author: Carolien Bastiaanssen

Prof. John van der Oost from the Laboratory of Microbiology of Wageningen University gave a BN seminar about CRISPR, a defence system found in roughly 85% of archaea and 40% of bacteria. The seminar covered the discovery and mechanism of CRISPR, the different classes of the system and their characteristics, and its applications.

Bacteria and archaea have developed various ways of dealing with viral threats: inhibition of adsorption, inhibition of DNA injection, degradation of DNA and abortive infection. In the 1980s a Japanese research group discovered a peculiar structure in the DNA of bacteria. At that time they did not know its function, but it turned out to be yet another defence mechanism of archaea and bacteria. The name of this mechanism, CRISPR, is an acronym that stands for Clustered Regularly Interspaced Short Palindromic Repeats. Between these repeats there are sequences, called spacers, which are homologous to viral DNA. Situated next to the CRISPR loci are the CRISPR associated (Cas) genes. The whole pathway is often referred to as CRISPR-Cas and it provides a form of adaptive and heritable immunity.

The process can be divided into three stages. The first stage is the acquisition of spacers. If a virus injects DNA into a bacterium, this DNA is recognized as foreign and cut into pieces. Part of it is incorporated into the bacterium’s own genome as a spacer. In the next stage the Cas genes are transcribed and pre-crRNA (CRISPR RNA) is produced, which is processed into mature crRNA. The final stage is the actual target interference, where the Cas proteins and the crRNA locate foreign DNA and disable it. A very important aspect is that the system has to discriminate between non-self and self DNA. Because the foreign sequences are incorporated into the DNA of the bacterium itself, looking at the sequence alone is not enough. CRISPR therefore relies on a protospacer adjacent motif (PAM): a short sequence that is present in the viral DNA but not in the spacer. A second control point is the seed sequence; if there are any mismatches in this sequence, the crRNA will not bind.
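The two control points, PAM adjacency and an exact seed match, can be sketched as a toy scan in Python. The “NGG” PAM (Cas9-style, on the 3′ side), the 8-nt seed length and the mismatch tolerance are illustrative assumptions, not exact parameters from the talk:

```python
def iupac_match(base, code):
    """Match a DNA base against an IUPAC code letter (only N is used here)."""
    return code == "N" or base == code

def finds_target(dna, spacer, pam="NGG", seed_len=8, max_other_mismatches=2):
    """Scan 'dna' for a protospacer matching 'spacer'. The PAM must sit
    immediately next to the protospacer (3' of it in this Cas9-style toy;
    Type I and Cpf1 systems use a 5' PAM), the PAM-proximal seed must match
    the crRNA exactly, and only a few mismatches are tolerated elsewhere.
    Returns the match position, or -1 if no target is recognized."""
    k = len(spacer)
    for i in range(len(dna) - k - len(pam) + 1):
        protospacer = dna[i:i + k]
        pam_site = dna[i + k:i + k + len(pam)]
        if not all(iupac_match(b, c) for b, c in zip(pam_site, pam)):
            continue  # no PAM: this is how the spacer in the host's own
                      # CRISPR locus escapes being attacked ("self" DNA)
        mismatches = [j for j in range(k) if protospacer[j] != spacer[j]]
        seed_mismatches = [j for j in mismatches if j >= k - seed_len]
        if not seed_mismatches and len(mismatches) <= max_other_mismatches:
            return i
    return -1

spacer = "ACGTACGTACGTACGTACGT"
print(finds_target("TTTT" + spacer + "TGGAAAA", spacer))             # 4: PAM + perfect match
print(finds_target("TTTT" + spacer[:19] + "C" + "TGGAAAA", spacer))  # -1: seed mismatch
```

The second call fails even though 19 of 20 bases match, because the single mismatch falls in the seed region, mirroring how one seed mismatch prevents crRNA binding.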

Figure: Overview of the CRISPR-Cas system. Source: Van der Oost, J. et al., Unravelling the structural and mechanistic basis of CRISPR-Cas systems, Nature Reviews Microbiology (2014)

There is a huge diversity of CRISPR-Cas systems, primarily in the Cas genes and proteins. There are three main types, each with several subtypes. Type I and Type III are very similar and are also referred to as Class 1. Type II and the recently discovered Type V are referred to as Class 2. In Class 1 the different Cas genes produce a multisubunit complex, named CRISPR-associated complex for antiviral defence (Cascade). Together with Cas6, Cascade is responsible for pre-crRNA processing. When a target is near, Cascade undergoes a conformational change, probably to recruit Cas3. Together they target the DNA. The spacer acquisition is done by Cas1 and Cas2. In Class 2 there is no Cascade but only a single protein; for Type II this is Cas9. The unique features of Type II are the presence of tracrRNA, which stands for trans-activating CRISPR RNA, and the fact that the pre-crRNA is processed by the host nuclease RNase III. The most recently discovered Type V (with Cpf1) also has a single subunit, like Cas9. However, it has no tracrRNA, and here the PAM is at the 5’ side instead of the 3’ side. It uses a non-Cas RNase, and the double-strand DNA break that it makes is staggered, unlike the cuts made by the other types. All types target DNA, except for Type III, which targets RNA.

Apart from protecting bacteria from viral attacks, CRISPR can be used for various other purposes. Prof. Van der Oost and his co-workers engineered spacers from the lambda phage and introduced them into E. coli, thereby making E. coli immune to the lambda phage. Another application of CRISPR could be an interesting alternative to antibiotics: using CRISPR to engineer benign viruses that target harmful bacteria. And last but not least, CRISPR has great potential as a genome-editing tool.

How to improve the scientific process.

Speaker: John P.A. Ioannidis
Department: –
Subject: How to improve the scientific process.
Location: Erasmus University College
Date: 09-11-2015

Author: Kristian Blom

Today, 09-11-2015, is a rather special day for the Erasmus University, which celebrates its 102nd dies natalis. For this special event, Prof. John P.A. Ioannidis, Chair in Disease Prevention and Professor of Medicine at Stanford University, gave a masterclass about the efficiency of published research. Later that day he received an honorary doctorate from the university.

Image 1: PPV as a function of the pre-study odds for various numbers of conducted studies, n. Panels correspond to power of 0.20, 0.50, and 0.80. (Source: Why most published research findings are false, John P.A. Ioannidis)


Prof. Ioannidis started the masterclass by mentioning that in the period from 1996 to 2011 more than 15,153,000 different scientists published their research in scientific journals. But of all these publications, only a few thousand were major discoveries. Another thing stated in one of his articles[1] is that it can be shown that most claimed research findings are false. So what we see is that the efficiency of research is rather low. One way to define this efficiency is the ratio between correctness (quality) and the time it took to do the research (quantity).

If we want to increase the efficiency of published research, we need to know which measures lead to the maximum efficiency.
For instance, if we made reanalysis mandatory, what would happen to the efficiency? One could say that the efficiency would increase, because reanalyzing your data leads to fewer errors in published research. On the other hand, more time would be spent on the same research, which decreases the efficiency. So by increasing the quality of research, we automatically decrease the quantity.
If we look back at the number of scientists who published their research between 1996 and 2011, we can say that a decrease in quantity isn’t that bad, since the equilibrium is currently shifted more to the quantitative side than to the qualitative side. So in general, making reanalysis mandatory should increase the efficiency of published research.

But when a reanalysis is done, how do you know whether the new or the old claimed research finding is true? This question is very difficult to answer. Sometimes a claimed research finding is accepted as true, but decades later people find out that it was actually false.
For example: about two weeks ago Prof. Ronald Hanson published an article in Nature in which he showed that measurements on quantum-entangled electrons in two distant diamonds are correlated more strongly than any local, classical theory allows. This finding showed that Einstein was wrong in his objection to quantum entanglement. Although it was already widely accepted that quantum entanglement exists, it took 80 years of research before researchers were able to prove it conclusively.

One thing that Prof. Ioannidis showed is that the PPV (the post-study probability that a claimed research finding is true) increases with the pre-study odds, i.e. with how plausible the probed relationship already is before the study is done (see image 1). The power in the figure is the statistical power: the probability that a study detects an effect that is really there, which also affects the PPV. So no matter how important or famous a piece of research is, it is always important as a researcher to build on enough prior evidence before you start with the real work.
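These quantities come together in a simple formula from the article “Why most published research findings are false”: with pre-study odds R, statistical power 1 − β and significance level α, a single unbiased study gives PPV = (1 − β)R / (R − βR + α). A minimal sketch, with illustrative example numbers:

```python
def ppv(pre_study_odds, power, alpha=0.05):
    """Positive predictive value: the post-study probability that a
    statistically significant finding is actually true, for a single
    unbiased study. PPV = (1 - beta) * R / (R - beta*R + alpha), where
    R is the pre-study odds and 1 - beta is the statistical power;
    the denominator simplifies to power * R + alpha."""
    r = pre_study_odds
    return power * r / (power * r + alpha)

# A well-powered test of a plausible hypothesis is usually right...
print(round(ppv(1.0, 0.80), 2))   # 0.94
# ...an underpowered test of a long-shot hypothesis usually is not.
print(round(ppv(0.1, 0.20), 2))   # 0.29
```

Both raising the pre-study odds and raising the power push the PPV up; with low odds and low power, even a “significant” result is more likely false than true.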

‘Where will science be in 10 years?’ was one of the last questions of the masterclass. Prof. Ioannidis answered: ‘There will be a great increase in researchers, and hopefully many great discoveries will be achieved.’ Of course this is something we all hope for, but alterations in the scientific process are necessary to get there: not only to increase efficiency, as discussed here, but also to keep science as successful as it is today.

[1]Article: Why most published research findings are false.
Author: John P.A. Ioannidis

The Origin of Cellular Life

26-06-2015

Department of Bionanoscience

Today there was a very special guest speaker at TU Delft: Nobel Prize winner Jack Szostak came to talk about his research on ‘the Origin of Cellular Life’. He started with one of the core questions he and his team are trying to answer:

“Is it Easy or Hard to make life?”

Or slightly differently formulated:

“Is life really that complicated or does it just seem that way because it has been evolving for more than 3.5 billion years?”

When you think about it, it is a very hard question to answer. We have only known about cells, life’s basic building blocks, for roughly 300 years.

Today it is generally accepted that the genetic information of organisms is written in the DNA code. This code reproduces itself (and thereby makes mistakes) for its offspring, and it is transcribed into a slightly different code, RNA. This RNA is then translated into proteins, which are the real functional elements. But who said that this Central Dogma, as we know it today, is 3.5 billion years old? Szostak explained to us the theory that in early life, DNA and proteins were not yet vital parts of organisms. Maybe these organisms’ genomes consisted of RNA alone. RNA can be seen as an intermediate between DNA and protein, having both the chemical structure to encode information like DNA, and the ability to fold into a functional molecule just like proteins do.

However, there are still some problems with RNA as the molecular motor behind life. RNA is much less stable than DNA and is single-stranded. Also, non-enzymatic RNA replication, which in theory could be possible, is too slow and inaccurate. A third, but not the last, problem concerns base pairing: besides the Watson-Crick A:U pair, RNA readily forms G:U wobble pairs, which makes its structure much more sensitive to wobble formation and less stable.

While (nano)biologists, chemists, physicists and the rest of the scientific world will struggle with the questions mentioned above for quite some time, it is fascinating to see how far we already are. It is in my opinion one of the most intriguing fields of research within our curriculum, and having it explained by a Nobel Prize winner made this lecture extra special.

 

Kasper Spoelstra

wkspoelstra@hotmail.nl