Light harvesting and photo-protection in photosynthesis

Speaker: Lijin Tian
Department: Biophysics of Photosynthesis, VU Amsterdam
Subject: Light harvesting and photo-protection in photosynthesis
Location: TU Delft
Date: 24-05-2017

Author: Kristian Blom

On the 24th of May I visited a talk given by Dr. Lijin Tian, a postdoc candidate for the Nynke Dekker lab who is currently working in the lab of Prof. dr. Roberta Croce in the Department of Biophysics of Photosynthesis at the VU. The main research goal of Prof. dr. Croce is to understand the light reactions of photosynthesis at the molecular level, with particular emphasis on light absorption, excitation energy transfer and photo-protection.

Dr. Lijin Tian started the talk by introducing photosynthesis and the main actors involved in this process. Photosynthesis can be divided into two separate processes: light-dependent and light-independent. In the light-dependent process, solar energy is harvested by chlorophylls and thereafter converted into chemical energy. In the light-independent reactions, called the Calvin cycle, carbohydrate molecules are assembled from carbon dioxide using the chemical energy harvested during the light-dependent process. The light-dependent process is carried out by two multiprotein complexes, called Photosystem I and II, which catalyze the harvesting of light and the conversion of light energy into chemical energy. Both complexes can in turn be divided into two parts: an antenna system that harvests the light energy and is responsible for energy transfer to the reaction center, and a core complex in which charge separation and electron transfer take place.

For the harvesting of light energy there is a critical value for the energy influx into the reaction centers. Above this value, the photosynthetic system can suffer irreversible damage. To circumvent this problem, photo-protective mechanisms in the antenna system can reduce the energy influx by dissipating excess excitation energy as heat, in a process known as nonphotochemical quenching (NPQ). In their latest research paper, Prof. dr. Roberta Croce and her lab show that LHCII, the main light-harvesting complex of algae, can only switch to a quenched conformation in response to a pH change when LHCSR1 (light-harvesting complex stress related 1) is present in low concentration.

Figure 1: Fluorescence traces of LHCII-only and LHCII+LHCSR1 cells. (A) LHCII-only, (B) LHCII+LHCSR1 cells with (red)/without (black) nigericin. The signal was collected at 680 nm. Nigericin (100 μM) addition and pH changes are indicated by arrows.
Image from: E Dinc, L Tian, LM Roy, R Roth, U Goodenough, R Croce (2016) “LHCSR1 induces a fast and reversible pH-dependent fluorescence quenching in LHCII in Chlamydomonas reinhardtii cells”

Regarding the talk itself, I didn’t find it that interesting. It was hard for me to follow the story, since the speaker’s English wasn’t very good. Also, and this is not the first time I have noticed this, the slides were overcrowded with images and data. It was quite surprising to me that there isn’t any research lab in BN that focuses on photosynthetic systems, since it is such a fundamental field in cell biology. At the end of the talk one of the PIs of BN asked a lot of questions about the status of the research field of photosynthetic systems. From my point of view it almost looked like he/she was planning to start a new lab specializing in the biophysics of photosynthesis. Perhaps within a few years we will have a new lab in the BN department.


EMT controlled phenotype switching drives malignant progression

Speaker: Geert Berx
Department: Molecular and Genetic Oncology Lab, Ghent University, Belgium
Location: Erasmus MC Rotterdam
Date: May 24, 2017
Author: Teun Huijben

Geert Berx is introduced by one of our teachers, Riccardo Fodde, as one of the pioneers in the field of the epithelial-mesenchymal transition (EMT). After this introduction he gives us an overview of EMT.

As the name implies, EMT is the transition of an epithelial cell into a mesenchymal cell. Epithelial cells are polar and very tightly connected to their surrounding cells, thereby forming a clear boundary between the underlying tissue and the outside world. To become a mesenchymal cell (the mesenchyme is the connective tissue lying underneath the epithelium), the cell has to lose its polarity and its connections to other cells. This is done by losing the cell-cell interaction protein E-cadherin (epithelial cadherin). E-cadherin is down-regulated as a result of the binding of transcription factors to specific E-boxes near the promoter.

The most important transcription factor doing this, discovered by Geert Berx himself, is ZEB2. When ZEB2 is highly expressed, E-cadherin is down-regulated, resulting in fewer cell-cell interactions and enabling EMT. Control experiments in different mouse models and human cell lines showed that knocking out ZEB2 resulted in more E-cadherin and no EMT, supporting this theory.

EMT is widely studied because of its importance in cancer. When malignant epithelial cells undergo EMT, they can travel through the mesenchyme to the blood and from there to new places to form metastases. For a long time, people thought of EMT in a very binary way: a cell is either epithelial or mesenchymal. However, Geert and his colleagues proved that there are multiple transitional states between epithelial and mesenchymal cells, identifying at least eight different metastable intermediates. The distinct states differ in the levels of, among other things, E-cadherin, EpCAM and ZEB2. In both normal tissue and tumors a wide variety of these states is found, indicating that the EMT system is far more complex than previously thought.

Further research into the importance of ZEB2 in EMT and tumor formation resulted in many new insights. ZEB2 also appeared to be important in the maintenance of stem cells, spontaneous tumor formation and the p53 pathway. However, while studying the importance of ZEB2 in human melanoma cell lines they found something interesting. When ZEB2 was knocked out, the tissue no longer differentiated, and high levels of its counterpart ZEB1 were measured, indicating that ZEB2 is important in differentiation and proliferation. Further studies showed that ZEB1 is important in stem cell maintenance and tumor invasion. This resulted in a clear model in which either ZEB1 or ZEB2 is present, supported by experimental data.

However, when ZEB2 was over-expressed, they found more metastases, which contradicted the current model. Further investigation led to the finding that TNF (tumor necrosis factor) down-regulates the ZEB2 protein, resulting in higher ZEB1 levels and thereby creating more metastases. All of this knowledge together resulted in an oscillating model of ZEB1 and ZEB2 levels during tumor progression (see Figure 1).

Figure 1. The levels of ZEB1 and ZEB2 oscillate during the progression of cancer. In the primary tumor ZEB2 is highly expressed, resulting in high proliferation. During the transient state, ZEB2 is down-regulated, paving the way for ZEB1 to be active and facilitate invasion. In the metastases ZEB2 is again present to stimulate proliferation and tumor outgrowth.

All in all, I found the talk by Geert Berx very interesting. It also made very clear how many players are important in the progression of cancer and how difficult it is to do research on it.

The Pathways Traveled: Structural Studies of Virus Assembly

Speaker: Dr. Elizabeth Wright
Department: Pediatrics, Emory University
Subject: The Pathways Traveled: Structural Studies of Virus Assembly
Location: A1.100 TU Delft
Date: 19-05-2017

Author: Kristian Blom

On the 19th of May I visited a BN colloquium given by Elizabeth Wright, principal investigator at Emory University. The Wright lab is interested in using cryo-electron microscopy (cryo-EM) and molecular biology approaches to explore the three-dimensional structures of viruses and cells. The goal is to use this information to aid in the development of novel antimicrobials, therapeutics, and vaccines.

Dr. Wright started by mentioning the benefits of and methods in cryo-EM. One of the benefits is that samples stay in their ‘native’ state, because all the molecules within the sample are frozen and thus do not move over time. The other benefit, which I think is even better than the first, is that the specimen preparation artifacts of conventional EM are eliminated. While writing this I realize that the cause of this benefit wasn’t mentioned during the talk, but I think it has to do with the cooling of the sample.

Within the realm of cryo-EM, there are different methods one can use to analyze a sample. The most extensively used methods are single particle analysis, electron crystallography, helical reconstruction and tomography. The latter method images the sample in sections; from these sections it is possible to build a 3D image by stacking the individual 2D images. During the talk Dr. Wright showed us an example of a 3D image constructed by cryo-EM tomography.

After a short review of the different methods we moved on to the recent advances in cryo-EM. These advances can be separated into three areas: sample preparation, data collection and data processing. Especially the data collection part saw a big improvement in 2008, when Direct Electron introduced the large-format Direct Detection Device (DDD®). In traditional transmission electron microscopy (cryo-TEM) the cameras use a so-called scintillator: a material that produces a flash of light when a particle passes through it. For cryo-TEM this particle is an electron, which causes a photon to be emitted by the scintillator towards the CCD sensor. In contrast, the DDD directly detects the image-forming electrons in the microscope without the use of a scintillator. This direct electron sensing results in better resolution, signal-to-noise ratio and sensitivity. The data processing improvements mainly come from faster computing, better algorithms and auto-segmentation.
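To get a feeling for why skipping the scintillator helps, here is a minimal Monte Carlo sketch. The dose and photon-yield numbers are my own illustrative assumptions, not real camera specifications; the point is only that the indirect chain adds an extra stochastic electron-to-photon conversion stage, which degrades the per-pixel signal-to-noise ratio:

```python
import numpy as np

# Hypothetical per-pixel numbers; real cameras differ.
rng = np.random.default_rng(42)
n_pixels = 100_000
mean_electrons = 10.0            # incident electrons per pixel (assumed)
photons_per_electron = 5.0       # scintillator photon yield (assumed)

# Shot noise: the number of incident electrons is Poisson-distributed
electrons = rng.poisson(mean_electrons, n_pixels)

# Direct detection (DDD): count the electrons themselves
direct = electrons.astype(float)

# Indirect detection (scintillator + CCD): each electron produces a Poisson
# number of detected photons, adding conversion noise on top of shot noise
indirect = rng.poisson(photons_per_electron * electrons) / photons_per_electron

def snr(signal):
    """Per-pixel signal-to-noise ratio: mean over standard deviation."""
    return signal.mean() / signal.std()

print(f"direct SNR:   {snr(direct):.2f}")
print(f"indirect SNR: {snr(indirect):.2f}")
```

In this toy model the direct detector sits at the shot-noise limit (SNR ≈ √10), while the extra conversion stage pushes the indirect SNR below it.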

Figure 1: The difference between traditional transmission electron data collection and DDD data collection.

The second part of the talk was devoted to the current studies of the Wright lab. Besides the research, there is also a big interest in the technological development of cryo-EM. One of the recent innovations is the correlation of fluorescence microscopy with electron microscopy. This allows one to improve the resolution and to identify certain parts of the sample by staining them with a specific fluorophore.

To be honest, I didn’t find the talk that interesting. The slides that Dr. Wright used during her talk were overcrowded with text and sometimes lacked important information. Therefore it was quite hard for me to keep my focus. Every time I was halfway through reading one slide, Dr. Wright had already moved on to the next one. Also, I couldn’t find any structure in her presentation, which is even more annoying for me because I always need some structure to understand the complete picture of a talk. What I did like about her presentation is that she clearly knows a lot about her field of research. All the questions she got from the audience were answered very well.

The Pathways Traveled, Structural Studies of Mononegavirales Virus Assembly

Speaker: Elizabeth R Wright
Department: Department of Pediatrics, Emory University
Subject: The Pathways Traveled, Structural Studies of Mononegavirales Virus Assembly
Location: Delft University of Technology
Date: 19 May 2017
Author: Romano van Genderen

Professor Wright started by giving us an overview of different size scales and the techniques used when studying life at each scale. One thing she pointed out is that electron microscopy, her field of interest, is improving on two fronts: it can now image ever smaller structures, but it can also cover ever bigger scales.

Afterwards, she discussed the advantages of cryo-EM. She told us that since it does not use any staining methods, it can measure a specimen in its native state, unlike techniques such as crystallography. Also, there are few to no artifacts visible from the preparation of the specimen.

Next, she showed us the recent advances in the field of cryo-EM. We can now not only prepare better samples thanks to techniques like active substrates, but also gather better data thanks to phase plates, and process the data more efficiently, not only because of the greater processing power of contemporary computers, but also because of advances in auto-segmentation algorithms. But according to her, the most important improvement was the invention of direct electron detectors. These direct detectors are far more useful than the previously used CCD cameras, which turn electrons into photons that are then counted; this conversion leads to loss of signal. The new cameras therefore give better signals, and can also be used to record videos in real time with frame rates up to fifty frames per second.

Her lab is currently using these new tools to their best abilities, leading to research on correlative microscopy, overlaying EM and fluorescence images to better locate compounds in the cell. They also study enveloped viruses, the topic of the rest of her talk.

She started by explaining the structure of such a virus, a completely new topic to me. To be specific, she studies paramyxoviruses and pneumoviruses. These viruses have glycoproteins on their surface, a lipid envelope, matrix proteins and a nucleocapsid protein that surrounds and protects their RNA genome. These viruses are very common; measles, for example, belongs to this class.


Figure 1: The structure of a paramyxovirus

This class has commonly been regarded as hard to purify. Firstly, people are unsure whether or not the virus still looks like its native configuration after purification. Also, a lot of artifacts and damage are introduced by current purification methods. Finally, you only get very small numbers of virus particles back.

This is why Professor Wright wanted to improve this method. She used an approach common in protein purification: nickel-NTA, with histidine tags binding to it. In her case, however, she incorporated the nickel-NTA into the cryo-EM grid and added His-tags to the surface glycoproteins. This makes the grid attract the viruses, which leads to far better yields and also removes a lot of artifacts from the sample.

Next, she wanted to study the interior of these viruses during virus assembly and release. There were two common hypotheses on the location of the matrix protein during these processes. One says that the matrix protein covers the inside of the viral envelope; the other says that it forms a layer around the nucleocapsid protein for even more protection. Using cryo-EM, she was able to directly visualize the matrix protein and show that it does not cover the nucleocapsid protein.

Also, she saw a mesh of fusion proteins, proteins that play a role in binding to the host. These proteins form a two-layered lattice, but there is something odd about this lattice: it seems to have a hole in it. This hole appears to be the same size as the protein on the host’s surface, suggesting that it is a binding pocket.

The final part of her talk made the topic a bit more practical. She talked about how her techniques were used to discover more about the virus known as RSV. This virus causes severe respiratory infections in newborns, has been linked to asthma, and there is currently no vaccine against it. Her research found that this virus is filamentous when secreted, and that its structure can be resolved in great detail using cryo-EM. One peculiar fact she found was that the RSV-F fusion protein forms a hexamer-of-trimers in its pre-fusion form. This knowledge can be used to develop a vaccine for this virus.

I did really enjoy the first half of the talk, where the techniques and their advantages were discussed. I did notice a lack of disadvantages, a fact that I find very suspicious to say the least…

On the other hand, the second half was not that interesting, because the main points drowned in all the details about the virus and its shape. Also, the images were hard to understand, even when she told us what we were supposed to see. But perhaps that was the fault of the lighting or the screen in the room.

New chemical therapeutics of genetic disease by manipulating the transcriptome

Speaker: Masatoshi Hagiwara
Department: Developmental biology
Subject: New chemical therapeutics of genetic disease by manipulating the transcriptome 
Location: Erasmus MC Rotterdam
Date: 3 April 2017
Author: Carolien Bastiaanssen

Professor Hagiwara leads a research group at Kyoto University Graduate School of Medicine in Japan. His long-standing dream is to cure genetic diseases with the compounds he and his colleagues develop. Nowadays, with the availability of CRISPR-Cas9 as a genome editing technology, it has become relatively easy to manipulate DNA. Before this discovery, however, it was easier to manipulate RNA than DNA, using small and cheap molecules. The research of Professor Hagiwara therefore focuses on compounds that influence the splicing of RNA and can be used in splicing therapies for genetic diseases.

In order to study the effect of different compounds on the splicing of pre-mRNA, Professor Hagiwara and his colleagues developed a way to visualize alternative splicing. As a model organism the nematode C. elegans was used, with egl-15 as the model gene. This gene encodes a fibroblast growth factor receptor, and alternative splicing gives rise to two different isoforms containing one of the mutually exclusive exons 5B and 5A. The first isoform, EGL-15(5B), is essential for viability and the second isoform, EGL-15(5A), is involved in the migration of sex myoblasts. Depending on which exon is present, cells express either GFP or RFP (Figure 1). After this first success Professor Hagiwara and his colleagues also developed reporters for alternative splicing that depends on the developmental stage. Furthermore, they succeeded in expressing their reporter system in mice and in mammalian cells.

Figure 1: Alternative splicing reporter in C. elegans. A) The construct for the alternative-splicing reporter. B) Transgenic C. elegans expressing the aforementioned reporter. From left to right: RFP, GFP, the first two merged, and a differential interference contrast (DIC) image. C) Close-up of the vulva showing that the vulval muscles express E5A-RFP and not E5B-GFP. Adapted from: Kuroyanagi, H. et al. Transgenic alternative-splicing reporters reveal tissue-specific expression profiles and regulation mechanisms in vivo. Nat Meth 3, 909–915 (2006).

Once they were able to visualize alternative splicing, Professor Hagiwara and his colleagues tried to find compounds that could correct aberrant splicing, in order to treat patients with, for example, familial dysautonomia (FD). This is a hereditary disease caused by mutations in the IκB kinase complex-associated protein (IKAP) gene. In these patients exon 20 is skipped, especially in neurons, and this results in a truncated protein product. FD patients could benefit from a treatment that stimulates exon 20 inclusion. Using a reporter similar to the one shown above, Professor Hagiwara and colleagues tested all kinds of compounds in their chemical libraries. One of these compounds increased exon 20 inclusion in cells of FD patients. They named the compound rectifier of aberrant splicing, or in short RECTAS. The compound was shown to be effective in cells from FD patients; future studies on FD mouse models are now required to move RECTAS towards clinical trials.

Figure 2: RECTAS is a small molecule that rescues aberrant splicing in FD cells. A) Structure of RECTAS. B) Cells that lack exon 20 (thus the FD phenotype) express RFP, and wildtype cells that do have exon 20 express GFP. DMSO is a negative control and kinetin is a positive control. RECTAS rescues aberrant splicing in FD cells, and it does so to a larger extent than kinetin. C) Quantification of GFP/RFP ratios after treatment with RECTAS or kinetin. Lower concentrations of RECTAS were required to obtain the same effect that was achieved with higher concentrations of kinetin. Source: Yoshida, M. et al. Rectifier of aberrant mRNA splicing recovers tRNA modification in familial dysautonomia. Proc. Natl. Acad. Sci. 112, 2764–2769 (2015).

Another example of a disease whose patients can benefit from splicing therapy is Duchenne muscular dystrophy (DMD). This fatal disease is caused by a mutation in the dystrophin gene that results in a lack of the dystrophin protein. The mutation introduces a frameshift, thereby creating a premature stop codon, so no dystrophin protein is produced. A milder phenotype of DMD is Becker muscular dystrophy (BMD). Patients with this phenotype also have a mutation in the dystrophin gene, but this mutation does not cause a shift of the reading frame. Instead, the mutation promotes skipping of an exon. Although part of the protein is missing, it is still partially functional; therefore BMD patients show less severe symptoms than DMD patients. Thus, by treating DMD patients with a compound that causes the mutated exon to be skipped, the symptoms can be drastically reduced. Professor Hagiwara and colleagues found such a compound. Its name is TG003, and it stimulates skipping of the mutated exon while leaving the wildtype exon unaffected. More importantly, TG003 did not affect the splicing patterns of the other exons in the dystrophin gene.

All in all, Professor Hagiwara showed that splicing therapies with small molecules can be used to treat FD and Duchenne muscular dystrophy patients. These results are promising, not only for these groups of patients: in the future, splicing therapies with small molecules can potentially be used for other genetic diseases as well. Professor Hagiwara tried to explain everything clearly, yet due to his heavy accent I had to pay very close attention to follow the talk. However, his passion for his work was obvious, and the promising results are of interest for multiple groups at the Erasmus MC who might want to use splicing therapy for the patients they try to help.




Information processing in neural and gene regulatory networks

Speaker: Gašper Tkačik
Department: IST Austria
Subject: Information processing in neural and gene regulatory networks
Location: A1.100 TU Delft
Date: 22-03-2017

Author: Kristian Blom

On the 22nd of March I visited a seminar given by Gašper Tkačik, a theoretical physicist who is interested in using statistical physics and information theory to explain phenomena related to the cell. The most fundamental principle underlying all the research that Dr. Tkačik conducts is that information processing networks have evolved or adapted to maximize the information transmitted from their inputs to their outputs, given the biophysical noise and resource constraints.

Dr. Tkačik showed us multiple examples of his research during his talk. For now I’d like to focus on the one I found most interesting, which is about reading the positional code in early development. It is commonly known that a morphogen gradient in early development generates different cell types in distinct spatial orders. This is called the French flag model. Despite decades of biological study, a quantitative answer to how much positional information there is in an expression pattern remained missing. Therefore Dr. Tkačik decided to look at the French flag model from an information theory point of view and asked the following question: how much information is there in spatial patterns of gene expression? Using the gap genes in the Drosophila embryo he measured the amount of information in bits. I will now briefly discuss how one can measure the information contained in the gap genes.

Figure 1: Normalized dorsal profiles of fluorescence intensity, which we identify as Hb expression level g, from 24 embryos selected in a 38- to 48-min time interval after the beginning of nuclear cycle 14. Considering all points with g = 0.1, 0.5, or 0.9 (Left) yields conditional distributions with probability densities P(x|g) (Right). Note that these distributions are much more sharply concentrated than the uniform distribution P(x) shown in light gray. Image adapted from: Dubuis, J.O.; Tkačik, G.; Wieschaus, E.F.; Gregor, T.; Bialek, W. PNAS, 2013, 110 (41), pp 16301-16308.

We start by looking at the early stages of Drosophila development. At this stage most cells are similar in morphology, so we do not have any information about the position of a cell when we neglect gene expression. Mathematically, we can say that the position of the cell is drawn from a distribution of possibilities P(x). If we now take into account the gene expression levels, our uncertainty in position is reduced. Looking specifically at the expression levels of the hunchback (Hb) gap gene (Figure 1), one can see that a certain expression level g is not a unique indicator for the position of the cell along the anterior/posterior axis. Instead, there is a range of positions that have the value g. Let P(x|g) be the conditional probability that a cell with expression level g is located at position x.

We define the entropies of our two probability distributions as:

S[P(x)] = − ∫ dx P(x) log₂ P(x),    S[P(x|g)] = − ∫ dx P(x|g) log₂ P(x|g)

The information gain due to an observation of the Hb expression level in one cell is now given by

I = S[P(x)] − ⟨S[P(x|g)]⟩_g

where ⟨…⟩_g denotes an average over the expression levels g, weighted by their distribution P(g).

From this point I will leave the mathematical expressions as they are, but I challenge you to get a firm understanding of why the final expression represents the information gain. After a small adaptation of the final formula, Dr. Tkačik used that result to make a ‘direct’ measurement of the amount of information carried in the gap genes. Using this method he found that individual genes carry almost two bits of information. Extending this result, he also found that the four gap genes carry enough information to define a cell’s location within an error bar of ~1% along the anterior/posterior axis of the embryo. How cool is that!
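To make the recipe concrete, here is a minimal numerical sketch of this information measure. The sigmoidal profile and the noise level are my own illustrative assumptions, not the measured Drosophila data; the point is only how I = S[P(x)] − ⟨S[P(x|g)]⟩ (equivalently, the mutual information between x and g) is computed from discretized distributions:

```python
import numpy as np

# Toy positional-information calculation: cells sit at positions x with a
# uniform prior P(x); each cell reads out a noisy expression level g.
n_x, n_g = 100, 60              # position bins, expression-level bins
x = np.linspace(0, 1, n_x)

# Assumed mean profile: a sharp sigmoidal boundary (a caricature of Hb)
g_mean = 1.0 / (1.0 + np.exp((x - 0.5) / 0.05))
sigma = 0.1                      # assumed Gaussian readout noise

# P(g|x): discretized Gaussian around g_mean(x) for each position
g = np.linspace(-0.5, 1.5, n_g)
p_g_given_x = np.exp(-(g[None, :] - g_mean[:, None]) ** 2 / (2 * sigma ** 2))
p_g_given_x /= p_g_given_x.sum(axis=1, keepdims=True)

p_x = np.full(n_x, 1.0 / n_x)        # uniform prior over positions
p_xg = p_x[:, None] * p_g_given_x    # joint distribution P(x, g)
p_g = p_xg.sum(axis=0)               # marginal P(g)

def entropy_bits(p):
    """Shannon entropy in bits; zero-probability bins contribute nothing."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# I(x;g) = S[P(x)] + S[P(g)] - S[P(x,g)], equal to S[P(x)] - <S[P(x|g)]>_g
info = entropy_bits(p_x) + entropy_bits(p_g) - entropy_bits(p_xg.ravel())
print(f"prior entropy S[P(x)] = {entropy_bits(p_x):.2f} bits")
print(f"positional information I(x;g) = {info:.2f} bits")
```

With these assumed numbers the toy gene carries roughly two bits, the same order of magnitude the talk reported for individual gap genes.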

Although the talk went a bit fast, the content was really good. During the talk I was reminded of the lectures we had during evolutionary & developmental biology (evodevo), since it was in this course that I got familiar with the gap genes in Drosophila development. I therefore decided to inform one of the evodevo teachers about the content of this talk, because it might be of good use to them in the future. Although it sounds a bit cliché, afterwards I was again (it happens on a regular basis) astonished by the fact that nanobiology is a really strong field of science. What Dr. Tkačik did fits very well into our program, because he used mathematics, especially information theory, to understand why the gap genes function the way they do. For me it was really a wake-up call to keep asking myself: why? If you keep asking this again and again, I think at some point you will find yourself in the fields of mathematics and physics, where the answer will be waiting to be found.

The Von Willebrand factor/ADAMTS13 axis in ischemic stroke

Speaker: Simon de Meyer
Department: Department of Thrombosis, KU Leuven, Belgium
Location: Erasmus MC Rotterdam
Date: May 15, 2017
Author: Teun Huijben

Simon de Meyer studied Industrial Engineering and Bio-engineering and obtained his PhD in Leuven, focusing on von Willebrand disease. Later he did post-doctoral research for three years at Harvard University. Afterwards he returned to Leuven to take up an associate professorship at the Faculty of Biomedical Science in the Department of Thrombosis.

Simon starts his talk by explaining the basic principles of a stroke. During a stroke a small clot of blood (thrombus) gets trapped in a small blood vessel, thereby obstructing the blood flow. This obstruction can lead to tissue damage, which is especially dangerous when it happens in the brain or heart muscle. A typical stroke consists of two phases. The first phase is called the acute phase, during which the thrombus is blocking the blood flow. The second phase is the diffusive phase, in which the thrombus has been resolved but the surrounding tissue still suffers from the period of reduced oxygen supply.

The problem with strokes is that they are unpredictable and there are almost no effective therapies. The best-known therapy is the drug tPA (tissue plasminogen activator), which helps to rapidly resolve the thrombus. The disadvantages of tPA are that it can cause dangerous bleedings elsewhere in the body, that patients can become tPA resistant, and that it can be neurotoxic. Because it has to be given within 4.5 hours after the stroke, only 10% of the patients receive the drug in time, and only 50% of them react positively to it. All of this together makes tPA not the best option to treat strokes.

Another therapy is thrombectomy, the removal of the thrombus with a mechanical device. Besides being a good alternative to tPA, it also gives us the advantage of collecting fresh human thrombi to study their composition. And this is exactly the part Simon is most interested in: which cells and components is the thrombus made of, and can this explain the different behaviour of thrombi during tPA treatment?

They study the thrombi by making sections, staining them with different dyes and examining them under the microscope. Besides the fact that all thrombi look very different, the percentages of fibrin, red blood cells, neutrophils and platelets also differ enormously between thrombi. They knew that von Willebrand factor (VWF) is very important during wound healing and the formation of thrombi, because VWF binds both collagen and platelets, providing a scaffold to form a blood clot. Using this knowledge they started looking at the VWF in the thrombi and found that this also differed a lot between thrombi. They furthermore found that the concentration of VWF in a thrombus negatively correlates with its percentage of red blood cells, strongly suggesting that VWF is important in the formation and composition of the thrombus.

It was known that the protein ADAMTS13 cuts VWF. So the hypothesis of Simon and his colleagues was that ADAMTS13 promotes thrombus resolution in a stroke. To test this they used a mouse model in which thrombi were introduced by opening the skull and adding a chemical that activates the blood clotting pathways. When the thrombi had formed and the mouse visibly suffered from a stroke, ADAMTS13 was added. As expected, the thrombus was resolved, the blood flow increased and the damaging effects were reduced. Even when ADAMTS13 was added more than an hour after the stroke, it still helped to reduce the injuries.

However, when the thrombus is resolved and the blood flow is restored, the problems are not over. It is known that after a stroke the surrounding tissue and vessels keep getting damaged for a while. Simon and his group showed that VWF knock-out mice experience less damage after a stroke, suggesting that VWF stimulates tissue damage. Here ADAMTS13 treatment has the same effect as during the acute phase: it reduces the injury.

In the future, Simon will try to further quantify the effects of tPA, ADAMTS13 and nuclease (a promising drug he mentioned in the last part of the talk) hoping to find the perfect cocktail to treat patients suffering from strokes.

All in all, I really enjoyed Simon’s talk; he explained all the experiments leading to his conclusions patiently and very clearly. Although this is not directly my field of interest, it was enjoyable and educational.

Short term plasticity and E/I balance combine to control Purkinje cell discharge in the cerebellum

Speaker: Philippe Isope
Subject: Short term plasticity and E/I balance combine to control Purkinje cell discharge in the cerebellum
Location: Erasmus MC
Date: 03-04-2017
Author: Renée van der Winden

Philippe Isope came to talk to us about his work on the cerebellum. He started by giving us a very brief overview of what the cerebellum does, namely motor coordination. He mentioned two mechanisms that were important for the rest of his talk: the fact that the cerebellum can predict the sensory input caused by a voluntary movement, and the fact that it adapts its feedback systems through plasticity. One of the questions Isope was concerned with was: ‘Do different tasks of the cerebellum rely on the same processing mechanism?’
He then went on to talk about Purkinje cells, which provide the sole output of the cerebellar cortex. The firing of these cells can precede movement, which is linked to the predictive function of the cerebellum. The next topic was the different modules in the cerebellum. It turns out the cerebellum is physiologically heterogeneous and is divided into different modules. The parallel processing this makes possible ensures precision in the cerebellum. Moreover, the communication between the different modules did not seem to be important. This led to a working hypothesis, which said that the individual modules can be coordinated by parallel fibers. However, this raises the question of how they can be precise if the information is spread between them.


Figure 1: A brief overview of the different ways a Purkinje cell is regulated

In order to test this, Isope and his group first identified the modules they wanted to work with. After that, they mapped granule cell inputs onto the Purkinje cells. They found that activity can quite easily tune these maps, so the maps are apparently not genetically determined. This shows the cerebellum is capable of plasticity. The conclusion was that the E/I (excitation/inhibition) balance is spatially organized and that this leads to precision. In the end, these two findings were put together to show that short term plasticity and the E/I balance work together to control the discharge of the Purkinje cells.
I thought this talk was quite difficult to follow, in part because of the speaker's very thick accent, which made it less enjoyable to listen to. I am still curious about neuroscience, so the topic in itself interested me, but I am sorry to say that I just did not understand quite enough of it to find it truly interesting.

Information processing in neural and gene regulatory networks

Speaker: Gašper Tkačik
Subject: Information processing in neural and gene regulatory networks
Location: TU Delft
Date: 22-03-2017
Author: Renée van der Winden

Gašper Tkačik came to talk to us about his research on information processing in biological networks. His main goal is to predict what biological networks do from first principles and to quantify their function in this sense. He first gave us a brief introduction into Shannon’s information theory, which quantifies and optimizes information transmission. He also posed the question: ‘How can we recover the input at the end of a process?’. To illustrate his points, Tkačik explained two examples to us.
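As a toy illustration of the central quantity in Shannon's framework (my own example, not one from the talk), the snippet below computes the mutual information between a binary input and a noisy output for a hypothetical channel that flips the signal 10% of the time. The mutual information tells you how many bits about the input survive the noise, which is exactly the kind of number Tkačik uses to quantify what a biological network transmits.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, given a joint probability table p(x, y)."""
    px = [sum(row) for row in joint]                 # marginal of the input
    py = [sum(col) for col in zip(*joint)]           # marginal of the output
    mi = 0.0
    for x, row in enumerate(joint):
        for y, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Hypothetical binary symmetric channel: input 0/1 equally likely,
# the output flips with probability eps = 0.1.
eps = 0.1
joint = [[0.5 * (1 - eps), 0.5 * eps],
         [0.5 * eps, 0.5 * (1 - eps)]]
print(round(mutual_information(joint), 3))  # 0.531 bits out of the 1 bit sent
```

With no noise (eps = 0) the same calculation gives exactly 1 bit; the noise has destroyed roughly half the information, which is the sort of input-recovery question the talk posed.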

The first example was about the retina and how it encodes information. By measuring the information flow into and out of the retina, it was predicted what modification the neurons make to the incoming light, namely center-surround filtering. After this prediction was made, it was confirmed by measurements, so in this case they succeeded in predicting the function of a network from first principles. Continuing with the retina, a different experiment examined the pattern of neurons firing while a movie was shown. Through measurements, the scientists found a probability distribution for these patterns. Looking at this distribution, they found that the neural output is actually not decorrelated, as was previously thought; in fact, each pair of neurons is weakly correlated. Moreover, they succeeded in decoding which movie had been shown by looking at the output information provided by the retina.


Figure 1: A shortened overview of how the movie was decoded from the retinal code

The second example was about how morphogen gradients convey information in early development. The question that was posed was: ‘How much information is stored in the pattern?’. It turned out that the answer is approximately 2 bits per gene, while four genes together store 4.3 bits of information. By finding these numbers, Tkačik formalized the established concept of positional information.
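To make those bit counts concrete, here is a small sketch with made-up numbers (only the 2-bit and 4.3-bit figures come from the talk): if one gene's expression level can be read out at four distinguishable levels, each marking a quarter of the embryo, that readout carries log2(4) = 2 bits of positional information.

```python
import math

def bits(probabilities):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# One gene, four distinguishable expression levels, each covering a
# quarter of the positions: log2(4) = 2 bits of positional information.
print(bits([0.25] * 4))  # 2.0

# Four such genes read independently would give 4 * 2 = 8 bits, but the
# talk's joint figure of 4.3 bits shows the genes are redundant; 4.3 bits
# still distinguish about 2**4.3 ≈ 20 positions along the embryo.
print(round(2 ** 4.3))  # 20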

The idea of quantifying what happens in biological networks is very interesting to me. I am interested in how organisms work, but I also really like the certainty that mathematics and physics give you. This is a way to combine the two. The talk was relatively easy to understand, which is always nice. It was also the first seminar in which I recognized concepts that I have learned during my own courses.

Real-time observation of translation of single mRNA molecules in live cells.

Speaker: Marvin Tanenbaum
Department: Department of Bionanoscience
Subject: Real-time observation of translation of single mRNA molecules in live cells.
Location: Delft University of Technology

Date: 24 March 2017
Author: Romano van Genderen

The research presented by professor Tanenbaum was about the kinetics of translation. He started by explaining that many genes, about 20%, oscillate in expression during the cell cycle, setting off processes of division and differentiation. For this to happen, very strong regulation of these genes is necessary. This happens on many levels: not only at the level of transcription, but also at that of translation. Translational regulation has been shown to be the most important process in determining how much of a given protein is produced. This regulation happens through miRNAs and RNA-binding proteins.

In order to study this regulation more carefully, one needs to see RNA translation in action. A relatively old method for visualizing RNA was developed by the Singer lab: a series of hairpins is built into the RNA. These hairpins are binding sites for a protein called MCP, which carries a GFP tag to allow for visualization.

Showing the translation product sounds like something that should not be too hard: just let the ribosome translate the RNA for GFP and it should be visible. But a problem with this method is that the free-floating GFP creates a lot of background noise. Another problem is that GFP needs some time to mature, which takes longer than the translation itself. So instead they used a free-floating GFP-antibody fusion that binds the protein while it is being synthesized. This SunTag system they developed is very bright and allows for very good visualization.


Fig 1. An overview of the SunTag system. You can see it being used to look at proteins that are being synthesized (Tanenbaum et al., A Protein-Tagging System for Signal Amplification in Gene Expression and Fluorescence Imaging, Cell 159)

So now the two aforementioned approaches can be combined. First, fuse a series of small peptides to your protein of interest and let the green antibodies bind these peptides. At the same time, attach mCherry to PP7, a coat protein analogous to MCP, and let it bind its own RNA hairpins, which sit in the non-coding part of the transcript. Wherever you see yellow, active translation is going on. When PP7 is injected into the cytoplasm, it can be used to follow a piece of mRNA from its export into the cytoplasm until its eventual degradation.

Using this technique a few experiments were done.

Firstly, they were able to count the number of ribosomes on a single piece of mRNA, showing that the average number was around 20.
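A back-of-the-envelope version of that counting argument, with entirely made-up intensities (the only number from the talk is the answer of about 20): the brightness of a translation spot divided by the brightness of one fully labelled protein estimates how many ribosomes sit on the mRNA.

```python
# Hypothetical intensities in arbitrary units, not data from the talk.
spot_intensity = 20.0            # total brightness of one translation spot
single_protein_intensity = 1.0   # brightness of one fully labelled protein

# The ratio estimates the ribosome load per mRNA. Real analyses also
# correct for nascent chains that carry only part of the SunTag epitopes,
# which this toy version ignores.
ribosomes = spot_intensity / single_protein_intensity
print(ribosomes)  # 20.0
```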

Next, they measured the translation speed by adding a drug that inhibits translation initiation: if translation were very slow, the ribosome signal would dim slowly after adding the drug, whereas if translation were very fast, the signal would be lost almost immediately.
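The logic of that run-off experiment can be turned into a one-line estimate. All numbers below are hypothetical placeholders (the talk did not give these values): after initiation is blocked, the last ribosome that already started must still traverse the whole open reading frame, so the time until the spot signal vanishes, divided into the reporter length, gives the elongation rate.

```python
# Hypothetical run-off numbers, not data from the talk.
orf_length_codons = 1200   # placeholder length of the reporter ORF
runoff_time_s = 400.0      # placeholder time until the spot signal is gone

# The last-initiated ribosome covers the full ORF in the run-off time,
# so length / time estimates the elongation rate.
rate = orf_length_codons / runoff_time_s
print(rate)  # 3.0 codons per second per ribosome
```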

Other research was done on the regulation of translation. They found that specific RNA-cutting proteins only work when a ribosome collides with them, “bumping them off”. The cutting step of these proteins happens automatically, but the collision with a ribosome is needed to release the protein and therefore the cut strand.

I really enjoyed the technical overview of the versatile SunTag procedure and its applications. I expect even more findings to come from this method, especially if it can be expanded to work on DNA as well, which would be a good improvement. But I continue to doubt whether understanding the kinetics of translation has any practical applications.