Steven Chu (2014) - You can see a lot by observing: Optical Microscopy 2.0

So let me just begin and explain, first of all, the subtitle to Optical Microscopy 2.0. It refers to the greatest American philosopher of the 20th century. In case you are wondering who that is, that's Yogi Berra; he played catcher for the New York Yankees. On the right hand side you see him in a philosophical conversation with an umpire. He said a lot of wise things. He said, for example, "If you come to a fork in the road, take it." I advise that to all the students. He also said, "You can observe a lot by just watching."

So why is a physicist talking in a session on physiology or medicine? Just to remind you: physicists have made some contributions to medicine. Actually, the first Nobel Prize awarded to a physicist went to Roentgen in 1901 for his discovery of X-rays, and certainly X-ray imaging has had a profound influence on medicine. But it's deeper than that. If you look at all the modern techniques we use today in medicine and biology, in addition to X-ray imaging there's electron microscopy, there are the nuclear magnetic resonance techniques for structural biology but also magnetic resonance imaging, there's computed tomography with X-rays, there's positron emission tomography. And in addition to those imaging technologies, I should also point out that the labelling of crucial elements, particularly proteins, was a big deal: poly- and monoclonal antibody labelling, radioactive nucleotides, EM immuno-staining, fluorescent labelling and green fluorescent protein labelling - all of those things were a big deal. And then more recently: single molecule manipulation and imaging, and sub-wavelength optical resolution. What's the colour code here? The colour code is simple: all the things in colour have been awarded Nobel Prizes; the things in white have yet to be awarded Nobel Prizes.

So let me talk a little bit about one part of optical microscopy which has really changed the way we do biological research. It started with the first manipulation of cells - Paramecium, E. coli - by Art Ashkin at Bell Laboratories. Steve Block picked this up very quickly: he pinned E. coli down on a microscope slide, grabbed it with an optical tweezer and twirled it around to study the motors. And I started to manipulate single molecules of DNA. So that's really the ABC of biotrapping. When I got to Stanford in the late '80s, I wanted to see if I could directly manipulate molecules, in particular DNA. This is a picture of a single molecule of DNA stained with fluorescent dyes. A laser was introduced into the optical microscope and connected to a motorised mirror-joystick, and you could wiggle around a single molecule of DNA. Needless to say, this fascinated the graduate students who were doing it. They would disappear into the lab and do this for hours on end, just like video games, and finally I had to say: this is all fun, but let's do some experiments.

Let me also say I helped Jim Spudich and Bob Simmons. Bob Simmons was visiting Stanford for a year and he approached me and said, can you help us study the actin-myosin system with Jim Spudich? I said sure. Together we set up a system for holding on to a single actin filament with optical tweezers, lifting it off the substrate with polystyrene spheres, and the goal was to actually measure the force generated when myosin hydrolysed ATP. We got close, but we couldn't get to the ultimate limit, namely: could we see the power stroke of a single myosin molecule?
Because my lab was on the second floor of Varian, Jim, Bob Simmons and the graduate student Jeff Finer moved the experiment down into the basement of Beckman, and with that, finally, we were able to resolve the pull of a single myosin molecule on actin. So that was a very nice achievement.

Let me also talk about some other revolutions that have been occurring in microscopy. Think of optical microscopy and the diffraction limit: it's given by the wavelength of light divided by two times the index of refraction times sin θ - that is, λ/(2 n sin θ) - where sin θ tells you how tightly focused the rays are; for a typical optical microscope that could be 250 to 300 nm. But you can ask a different question. You don't ask what the blur circle - the blob of a single fluorescent molecule - gives you as the best resolution, but instead: where is the centre of that blur circle? In principle, if the signal to noise were 10 to 1, then instead of 300 nm you would get 30 nm; if it were 100 to 1, you'd get 3 nm. So you can build up an image from independently localised molecules: first that one, then this one, then this one. Normally they would all smear together and you wouldn't have good optical resolution, but localised one at a time they don't. Eric Betzig and Harald Hess, and independently Xiaowei Zhuang - in fact there were two other groups in the same period, 2006 - came up with this way of enhancing the optical resolution, and it turned out to have a lot of impact. Another thing I'll mention very briefly: stimulated emission depletion microscopy, invented by Stefan Hell, was also a big deal.

So let me give you an example of an application. This was done by the last postdoc in my lab, actually while I was Secretary of Energy, and published recently in Science, and it has to do with studying biofilms. If you look at this picture: normally you think of bacteria as free-floating bacteria, the so-called planktonic form. But most of the bacteria you find in nature are not individual cells; they land on surfaces and form communities of bacteria - the same bacteria or different bacteria - in what is called a biofilm. So this is a cartoon of a biofilm growing on a surface. In these biofilms most of the mass is not bacteria but proteins and polysaccharides excreted by the bacteria. Here you see an edge-on view of a biofilm growing in our lab. This is what you might see in an optical microscope with a resolution of about 300 or 400 nm, and this is what you see in a biofilm using super-resolution techniques. So let's expand that dotted region, and here it is: a cholera bacterium secreting some of the proteins that form the biofilm, and you see that the resolution is better. The good thing is that these are pictures - slow-motion movies, if you will - taken of live, growing biofilms. One can get very high resolution because the biofilm structure makes things immobile, and so you can get resolutions on the order of 10 to 15 nm. That's one example, and we are continuing to do this. I should say that once you are able to study biofilms with 10 to 15 nm resolution, a whole world opens up. You begin to see how these bacteria communicate with each other through exocytosis of vesicles; you can see the actual structures of the biofilms, the protein and polysaccharide matrix structures - things of that nature. And so that first paper began to probe these things.
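As a back-of-the-envelope illustration of the localisation argument earlier in this passage (the diffraction-limited spot and finding its centre), here is a minimal Python sketch. The spot-width-divided-by-signal-to-noise scaling and the chosen numbers are illustrative assumptions that reproduce the round figures quoted in the talk, not measured values.

```python
# Illustrative sketch of the localisation argument (assumed, round numbers).

def diffraction_limit(wavelength_nm, n, sin_theta):
    """Abbe limit: wavelength / (2 * n * sin(theta))."""
    return wavelength_nm / (2.0 * n * sin_theta)

def centre_precision(spot_width_nm, snr):
    """Rough scaling from the talk: the centre of the blur circle can be
    located to roughly (spot width) / (signal-to-noise ratio)."""
    return spot_width_nm / snr

spot = diffraction_limit(wavelength_nm=550, n=1.5, sin_theta=0.61)   # ~300 nm
print(f"diffraction-limited spot ~ {spot:.0f} nm")
for snr in (10, 100):
    print(f"signal to noise {snr}:1 -> centre located to ~ {centre_precision(spot, snr):.0f} nm")
```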
Let me also briefly mention super-resolution imaging of a signalling pathway where, if molecules in the pathway are mutated, things go bad and form cancer. This work was done by my second-to-last postdoc, Xiaolin Nan, who is now at Oregon Health & Science University, in collaboration with Joe Gray, who also moved there from Lawrence Berkeley National Lab, and Frank McCormick at UCSF. So this is a cartoon description of what happens - for those of you who don't know about this, this is not data. MAPK signalling is activated through binding of a growth factor to the extracellular domain of the tyrosine kinase receptor. Signalling molecules Grb2 and Sos are next recruited to the internal docking site, resulting in Ras activation at the membrane. The efficiency and duration of signal transmission is regulated by the scaffolding protein kinase suppressor of Ras, KSR. Ras triggers a phosphorylation cascade involving the Raf, MEK and ERK proteins, leading to ERK activation and translocation to the nucleus. Once in the nucleus, ERK activates several transcription factors that mediate gene expression. Target genes thus act... Okay, so let me just review that. There's a receptor molecule on the surface of the cell, the TKR. When a ligand lands on it and activates it, the cellular side of it becomes phosphorylated. That then activates a molecule called Ras, which reaches in and is able to grab another molecule called Raf, then MEK, then ERK. In this chain there's some amplification of the signal, and all these molecules, except Ras, become phosphorylated. If Ras or Raf become mutated, you don't need that outside signal in order to have cell replication. And mutations in Ras or Raf account for a large share of cancers: Ras mutations, for example, are associated with more than 90% of pancreatic cancers and two thirds of multiple myeloma. So some of the very bad actors are associated with Ras mutation.

In any case, it was suspected, using immunoelectron microscopy, that perhaps these mutated forms of Ras form clusters, and this is the data you see on the left hand side. This is an image of an antibody attached to a gold particle that targets the mutant form of Ras. By doing some analysis of this image, the authors began to suspect that perhaps clusters of Ras were forming, up to 5 to 8 Ras molecules, and that such a cluster actually triggers the cell signalling that leads to cell proliferation and uncontrolled cell division. So that was where we started. What we did with this mutant Ras gene is add an initiation factor that we can dial up and down with a tetracycline - not tetracycline itself, but a tetracycline derivative, doxycycline (Dox), that can diffuse into the cell. When this Dox lands on the promoter site, you can actually dial up the expression of the mutant protein. And this is what we see with this mutant, the so-called KRas protein, where the mutation is at the 12th residue: the glycine is substituted by an aspartic acid (G12D). So here is a cell, a cancerous cell. If I zoom in on this little white square - the scale bar is 200 nm - you see these little red dots; those are individual molecules of the mutant Ras. So what did we do? We simply began to look statistically at how many of these mutant Ras molecules there are and whether they are forming clusters. At a very low dosage of Dox, 1 ng/mL, what we find...
This is a statistical test that reveals clustering, but in fact you don't need that old traditional statistical test, the so-called Ripley's test; you can actually count singles, pairs and triples. With 1 ng/mL of Dox, what we find is that virtually all of them are isolated single molecules; a very small fraction are doubles or triples. When we increase the concentration of Dox so that there's roughly a 7 times higher concentration of individual molecules, we find that predominantly there are still singles - that's the one in green - but if we look at the number of doubles, we find that roughly 18% of the molecules are now in doublets. The beauty of this super-resolution is that you just see the dumbbells, although the signal to noise, as you note, isn't very good. And then we did this for various doses.

So here is what we found. This is a gel where, going across, we increased the concentration of the tetracycline derivative. This is the wild-type Ras as we increased the mutant form: as the mutant dials up, remarkably, the wild-type dials down, so in the cell there seems to be a regulation that says I only want a certain amount of Ras in this cell. This was seen in 2 cell lines, so that's a little by-product. But here's the interesting point. Going from 1 to 2 ng/mL of Dox, ERK becomes phosphorylated. That ERK becomes phosphorylated is the signature we take to mean there is downstream signalling activity and the cell will divide. So that is our assay that the mutant molecule has reached a high enough concentration to turn on the cell division signal.

Alright, the next thing we did is the following. In this cartoon we took our Ras molecule - and this is our fluorescent dye, an mCherry, a derivative of the green-fluorescent-protein-type dyes - and we put a little amino acid chain on the end, which can be coupled by what we call a dimerising agent, shown in this cartoon as the little orange dots. We put this into the cell, and what we found is that when there's no Dox - meaning no mutant form of Ras is being expressed - there's no phosphorylation of ERK. At a concentration of Dox where there was no phosphorylation of ERK before, we again find no phosphorylation of ERK; but when we put in the dimerising agent - voila! So what this tells us is that you need dimer formation. Signalling turns on at the low concentration where roughly 18% are dimers, and if you go even lower than that but force dimers to form, you also get downstream signalling. So it appears to us that dimer formation is both necessary and sufficient to signal the cell to proliferate, and you don't need higher-order clusters. Again, this is because single molecule fluorescence is very, very sensitive. It also immediately suggests a drug target. Because what we also did - and I'm not going to show this data - is look at this mutant form of Ras with and without the GTP as part of the molecule, so that all you actually have is the linker site that is embedded in the protein, and we found that the linkers actually formed dimers at the same rate as the full mutant form of the protein. So the suggested drug would be: this is the mutant G12D site, so you would want the drug to attach to this part, but also to interfere with this linker arm, shown in red, to prevent dimerisation. So with this work you say, ah! We think we have a rational design for a way to target a drug. So again, it's an application of super-resolution imaging.
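The counting of singles, doubles and triples described above can be illustrated with a minimal sketch. This is not the analysis pipeline from the paper; it simply groups localisations that fall within an assumed distance cutoff (a hypothetical 20 nm) and reports what fraction of molecules sit in clusters of each size. The toy data are random points, which come out overwhelmingly as singles.

```python
# Minimal, illustrative monomer/dimer/trimer counting from localisation coordinates.
# The 20 nm cutoff and the toy data are assumptions, not values from the paper.
from collections import Counter
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def oligomer_fractions(xy_nm, cutoff_nm=20.0):
    """xy_nm: (N, 2) array of localisations in nm.
    Returns {cluster size: fraction of molecules found in clusters of that size}."""
    labels = fcluster(linkage(xy_nm, method="single"), t=cutoff_nm, criterion="distance")
    sizes = Counter(labels).values()
    total = sum(sizes)
    fractions = Counter()
    for s in sizes:
        fractions[s] += s / total
    return dict(fractions)

# Toy usage: 300 sparse random points in a 5 x 5 micron field.
rng = np.random.default_rng(1)
points = rng.uniform(0, 5000, size=(300, 2))
print(oligomer_fractions(points))
```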
Alright. So let me go back and tell you a little bit about some of the limitations of fluorescence. In this cartoon you have a single fluorescent dot that is imaged onto a CCD array; the image is typically 3 to 5 pixels wide. What you do is find the centre of this image, and you can usually do that to a small fraction - a few hundredths - of a pixel. But suppose the pixels are not responding to the light uniformly - they could vary by 1, 2 or 3% - and, for example, one of the pixels, instead of responding like this, gives you a slightly higher charge. When you fit this pattern, the fitted centre actually shifts. So in 2006, when this first came out and we started playing with it, we began to suspect that the pixel-to-pixel variation of a CCD camera might actually be limiting the resolution.

So we did a very simple experiment. We took a little pinhole and put a white light behind it, and this little pinhole was then imaged, in both red and green, onto the CCD array. Then we moved the pinhole, as shown here, with a precision translation stage that's encoded, so we know exactly how far we move it. As we move it, of course, the red and green spots move. We fit where the centres of those red and green spots are, and if the camera were okay they would march across uniformly, as the translation stage marches across uniformly. What we found is that they didn't. As we went across - and you can move the translation stage a very small fraction of a pixel on the camera, and this gives you a contour map of the difference - as you move the microscope stage across, the fitted position jiggles around by 6 to 10 nm. This error is baked into the CCD array, which means you can correct for it after you've taken the image. When we do that, lo and behold, we can improve the super-resolution. In a Nature paper in 2010 we had a model system - DNA with two dyes - and we prepared a dozen such systems, each one identical, in exactly the same way. What we found is that we had about half a nanometre resolution on the spacing of these dyes in water, so 5 angstroms. That told us that, at least to a 5 angstrom precision, if you have enough photons you can get exquisite resolution. But with a single dye we did hit the shot noise limit; in fact - this is a log curve, this solid line is the shot noise limit, this is the earlier work - what we found is that you can improve things by a factor of 2 or 3, but you are still in this 5 nm range, because the dyes are actually unstable.

So here I picked some data on how unstable the dyes are. These are the green fluorescent protein dyes, typical organic dyes and quantum dots. This is out of Wikipedia, so it must be right. In any case, it gives a general ballpark of how many fluorescent photons you can get from these probes. For dyes, you typically collect about 10^4 photons before the dye photobleaches, and they have to be collected over something like 1 to 10 seconds. If you want faster time resolution you can't simply turn your laser up, because if you did you'd go to doubly excited states of the dye molecule and it would simply blow apart. So you are really limited in time resolution, and with excitation in PALM or STORM it takes roughly a minute to form a reasonable image. And also, with visible light excitation, one is always worried about photodamage.
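A minimal sketch of the camera-correction idea described earlier in this passage: if you know the per-pixel gain map (here simply assumed to have been measured, e.g. with the scanned pinhole), dividing it out before locating the spot removes the systematic shift. A plain intensity-weighted centroid stands in for the full fit used in practice, and all numbers are illustrative.

```python
# Illustrative pixel-gain correction before spot localisation (assumed numbers).
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y) in pixel units."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def corrected_centroid(raw_img, gain_map):
    """Divide out the measured per-pixel sensitivity, then locate the spot."""
    return centroid(raw_img / gain_map)

# Toy example: a Gaussian spot imaged on a camera where one pixel reads 3% high.
ys, xs = np.indices((7, 7))
spot = np.exp(-((xs - 3.2) ** 2 + (ys - 3.0) ** 2) / (2 * 1.3 ** 2))
gain = np.ones((7, 7))
gain[3, 4] = 1.03                      # the misbehaving pixel
measured = spot * gain
print("uncorrected:", centroid(measured))          # centre pulled toward the hot pixel
print("corrected:  ", corrected_centroid(measured, gain))
```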
So I just want to point out that these very high resolution pictures out there in the literature, the ones I've shown, are actually taken in dead cells, in dead tissue. That's where you get the highest resolution. But can you get similarly high resolution in live cells? The answer is maybe. So what we are looking at now are a few nanoprobes. One is the diamond NV centre - N is nitrogen, V is vacancy. The lifetimes are about 20 nanoseconds, and we are trying to optimise the NV centre concentrations. There's also a silicon vacancy centre being looked at. We think that perhaps, if we are lucky, we can get about a billion photons per second; if we fail, we'll get 100 million photons per second. So you are at least an order of magnitude or two above a quantum dot. And one other thing: they don't photobleach at all, so they are very stable. The size is comparable to GFP. This is the GFP protein that lights up, this is a 5 nm quantum dot, or 5 nm particle, and these are other familiar proteins that you've heard about.

We are also looking at rare earth ions. This is out of a website for lasing materials. This is neodymium YAG, a very important industrial laser. If you excite from the ground state manifold with 808 nm light, it quickly relaxes down into this state, the 4F3/2 state, and from there you get fluorescence back down to the ground state manifold, with all these fluorescent lines. This line here, 1064 nm, is the very famous YAG laser line that has been used for decades - 3 or 4 decades actually - as a very important laser. The emission rate is brighter than a quantum dot, and it has several other important properties. Both the nanodiamonds and the rare earth particles allow you to do STED, so let me briefly remind you what that is. In a dye molecule, if you go from the ground state to the upper vibrational band of the excited state, that excitation quickly relaxes down to the bottom of the singlet excited state. You get a fluorescent photon down to another set of states below, which quickly empty out. Stefan Hell proposed and showed a while ago that first you use a single laser spot to excite a volume, and then you come in with another laser beam with a donut hole in the middle; in the region around the hole this beam depletes the excited state by stimulating it down to the lower level. What you are left with is a tiny dot of excitation in the excited state, because you've depleted the entire surrounding region. The higher the intensity of this depletion beam, the smaller and smaller this dot becomes. In fact, your intensity should be hundreds of times higher than the intensity that would make the spontaneous rate equal to the stimulated rate. That way you get better spatial resolution.

Alright. In an organic dye the lifetime of the dye molecules is a few nanoseconds, so the typical saturation intensities are high, and you have to use something like 500 MW/cm2 of intensity in order to get a very high-resolution spot. The good thing about a rare earth system is that the lifetime is much longer. The bad thing is also that the lifetime is much longer, but you compensate by putting in many more emitters, say 500; and the point here is that the saturation intensity has decreased by roughly 4 orders of magnitude. Because of that, it is much more practical to do STED with a rare earth. And if you work out what the STED resolution would be - that's just a repeat, so I'm not going to go into it.
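For reference, the standard STED scaling makes the rare-earth argument above concrete: the spot size shrinks roughly as λ/(2 NA √(1 + I/I_sat)), and the saturation intensity I_sat scales inversely with the excited-state lifetime, so a much longer lifetime means the same depletion beam sits much further above saturation. The sketch below uses made-up numbers purely to show the scaling.

```python
# Standard STED resolution scaling with illustrative (not measured) numbers.
import math

def sted_resolution_nm(wavelength_nm, na, intensity_over_sat):
    """d ~ lambda / (2 * NA * sqrt(1 + I / I_sat))."""
    return wavelength_nm / (2 * na * math.sqrt(1 + intensity_over_sat))

wavelength_nm, na, depletion_intensity = 600.0, 1.4, 100.0   # arbitrary intensity units

# A 1000x longer excited-state lifetime lowers I_sat ~1000x, so the same beam
# sits 1000x further above saturation and shrinks the spot accordingly.
for label, i_sat in (("dye (ns lifetime)", 1.0), ("rare earth (us-ms lifetime)", 1e-3)):
    d = sted_resolution_nm(wavelength_nm, na, depletion_intensity / i_sat)
    print(f"{label}: resolution ~ {d:.1f} nm")
```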
Now, since you are so far above the saturation intensity, what you can do is use multiple-beam interference. If you introduce 3 laser beams into a microscope objective and let them come to a focus, the pattern you form - it's not exactly this pattern, but it's a pattern of nodes and anti-nodes in a triangular arrangement. Again you saturate everything except little dots, and what you find is that, maybe with just an off-the-shelf YAG laser, you should be able to get about 30 nm spatial resolution. Now, how is this different from STORM or PALM? Here you're exciting the probe and then waiting a natural lifetime for it to emit, and you are collecting all the fluorescence; it's no longer a small fraction of the excitation. So instead of exciting only, say, 1 or 2% or 3% or 5% of the area, you excite the whole area - it just happens to be much smaller. The data-taking rate to get a full image is therefore at least 2 orders of magnitude faster, and we calculate we can take a full image in something like half a second at 30 nm resolution. So that's another advantage.

Here's another application. Now, the fact that I'm telling you about this before we've done it makes me a little bit vulnerable. In fact, I'm just getting lab space starting 2 weeks from today, and my postdocs are coming starting 1 week from today, so we are eager to get this going. There's another group of us who have come together in neuroscience - an informal neuroscience group - and we've been talking a lot about how we can apply these new methods in neuroscience. There are exciting possibilities. I don't think I'm going to go through most of them, but suffice it to say that in the small department I'm in, Molecular and Cellular Physiology, Tom Südhof is also there, Axel Brunger is also there and Brian Kobilka is also there. So out of a department of 12, we have a good start. Anyway, let me just say what we are trying to do: we are trying to see, with millisecond resolution, when there's a voltage spike in a neuron and it releases a vesicle - on average 1 vesicle per voltage spike - in a synapse. Can we capture that synaptic release with millisecond resolution? When the neurotransmitters are released, they go onto receptors on the other side and undergo a complex set of chemical events that lead to the phosphorylation of molecules on the other side. Can we see that phosphorylation? So I'm going to skip this, and just say that we are also developing a probe to see phosphorylation, or other chemical changes in the cell, in the molecular fingerprint region of the infrared - that would be very nice to do.

We did a toy experiment, published in 2012, where we took light from a synchrotron, put it through a micro-interferometer, put it onto a sample and looked at the back-scattered light. The idea was that when you take, for example, a cell and stimulate it with a ligand - in this case it was a PC12 cell, and nerve growth factor was the stimulating ligand - it causes the PC12 cell to differentiate. If you look at the infrared spectra over a couple of days, what you find is that several lines actually change. We did some microscopy - very crude microscopy, the scale bar is 50 microns - and as the wave number changes you can see chemical changes in the cell. Well, this is fundamentally uninteresting because the spatial resolution is terrible. And so the question is, can you get better spatial resolution? Super-resolution spatial resolution? And the answer is yes - we think we can.

This is a picture of a microscope objective, a high-resolution visible objective. If you look at the very bottom of this objective, what you see is a more than hemispherically shaped surface; that's the final focusing element of these high numerical aperture objectives. So the idea is that we take an infrared objective, either reflective or transmissive, which has a numerical aperture of about 0.7 - the highest one can buy commercially - and we design a final element, but we make it out of germanium. Why germanium? I'll tell you in a second: it's because the index of refraction of germanium is 4, not 1.5. And if you have a field of view 50 microns on a side, you don't care that much about chromatic aberration. One should be able to get an NA of about 1.4 if this were a visible microscope with index of refraction 1.5, and that gives you a ballpark for where your sin θ should be: about 0.93. If you have sin θ at 0.93, then using 10 micron light - which is in the region of the phosphorylation infrared signal - you have a very high numerical aperture objective, and we think we can get maybe 1.3 to 1.4 micron resolution even though it's 10 micron light. And since this light doesn't travel well through water - the skin depth is a little over 10 microns - this is actually an immersion element: it dips into the water, a little microlens placed right on the cell surface, and that gives you wide-field illumination. So that's the beginning.
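The arithmetic behind the germanium element just described is the same Abbe formula as in the visible case. Here is a minimal sketch, with sin θ inferred from the NA ≈ 1.4 and n ≈ 1.5 comparison made in the talk; the numbers are the talk's ballpark figures, not a lens design.

```python
# Worked numbers for the germanium final element (same Abbe formula as the visible case).
n_visible, na_visible = 1.5, 1.4
sin_theta = na_visible / n_visible            # ~0.93, as quoted in the talk

n_germanium = 4.0
na_ir = n_germanium * sin_theta               # ~3.7
wavelength_um = 10.0                          # mid-infrared, near the phosphorylation bands
resolution_um = wavelength_um / (2 * na_ir)   # Abbe limit: lambda / (2 NA)

print(f"sin(theta) ~ {sin_theta:.2f}, NA ~ {na_ir:.1f}, resolution ~ {resolution_um:.2f} um")
# ~1.3-1.4 um, consistent with the talk's estimate, even at 10 um wavelength.
```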
But we are not going to stop there; we want to go further, and this is the end part. It goes back to a paper I wrote with Michael Fee, who is now a neurobiologist, when he was my graduate student, and also with Ted Hänsch. What we did is take microwave radiation and send it down a miniature coaxial cable, maybe 1.5 mm in diameter, whose little central conductor we had sharpened with a file. If you think about this, the electromagnetic radiation going down this coaxial cable is still a propagating wave; it travels at the speed of light, as modified by the dielectric material. And it's irrelevant what the wavelength is: it all gets crammed into this small coaxial cable, because you can have a wavelength of 10 m and it still travels at the speed of light - it's localised by the conductor. So in this toy experiment we sent electromagnetic radiation down the cable and it reflected back. We looked at the interference between the light that gets reflected back and a reference beam - the cross term - which is a very, very sensitive way of detecting any phase shift, any change. What we saw is that as we passed it over a 100 micron grid we could see phase-shift changes. So in this toy experiment, using microwave radiation on a scale of a couple of centimetres, we got lambda over 4,000 resolution. In that same paper we suggested - oh, by the way - that you can do this in the infrared, because the skin depth of the metals and everything else allows you to do it. So in this two and a half page paper we said, oh yeah, you can do near-field imaging in the infrared and get very good spatial resolution. Then - it was 1989 - I went back to trapping and cooling atoms and to atom interferometry, and Ted went back to hydrogen. So we could have been famous, but... In any case, after my 4 1/2 year sabbatical with the government, I came back and said, well, I want to do infrared spectroscopy.
And I'd learnt, while giving a talk at Lawrence Berkeley Lab, that indeed it had been rediscovered. This is a piece of silicon, anisotropically etched so that the end forms a pyramid, and they coated not all sides but only 2 sides with metal. If all sides are coated with metal it's a waveguide; if it's 2 sides of metal it becomes a transmission line, and because it's a transmission line you can squeeze the electromagnetic radiation into a smaller volume. This is the calculation: here the tip is 40 nm across, and you see the enhanced electromagnetic field. So this is what we intend to do. We intend to use tips like this in conjunction with super-resolution visible and near-infrared imaging, again poised directly over the cell. We think, hopefully, we will get tens of nanometres of chemical resolution with this technique. So that's what's on the drawing board; stay tuned, and maybe within 6 months or a year I can share some experimental results.

In any case, this is why I called it Microscopy 2.0: one can really achieve lambda over a hundred or better spatial resolution, not only in the visible and the near-infrared, but also in live cells. Once we get millisecond imaging, the motion in live cells doesn't matter anymore; it's that motion, averaged over a second or tenths of a second, that actually prevents you from getting really good super-resolution. And so I close with this image of the Leeuwenhoek microscope. I just want to remind you that this was a single-lens microscope, and the centre of that lens is a small glass sphere - just like the final focusing element of our infrared lens. So it comes round back to 1700: things don't really change, optics are still optics. Thank you.

Abstract

Biological research and medicine were transformed by the invention and improvement of the optical microscope. Since the early 1990s, there has been another revolution in optical imaging: the manipulation of individual biological molecules and bio-molecular systems has been demonstrated and applied to a wide variety of systems. Most recently, innovations in "super-resolution" optical imaging, such as STORM and PALM, have been used to construct biological images with ~10 nm resolution. With bright optical probes and corrections for the slight differences in pixel sensitivity of the CCD or CMOS camera, < 1 nm resolution is possible in biological samples in water.[1] Recent applications of super-resolution imaging to cancer signaling[2] and biofilms[3] will be discussed.

Finally, the development of sub-wavelength micro-spectroscopy in the fingerprint region of the infrared spectrum (wavelength 4–12 µm), to observe changes in biological states with 20 nm spatial resolution, will be outlined.

[1] Subnanometre single-molecule localization, registration and distance measurements, Alexandros Pertsinidis, Yunxiang Zhang, Steven Chu, Nature 466, 647–651 (2010).
[2] Ras-GTP Dimers Activate the Mitogen-Activated Protein Kinase (MAPK) Pathway, Xiaolin Nan, Tanja Meyer-Tamgüney, Eric A. Collisson, Li-Jung Lin, Cameron Pitt, Jacqueline Galeas, Sophia Lewis, Joe W. Gray, Frank McCormick, Steven Chu, submitted (2014).
[3] Molecular Architecture and Assembly Principles of Vibrio cholerae Biofilms, Veysel Berk, Jiunn C. N. Fong, Graham T. Dempsey, Omer N. Develioglu, Xiaowei Zhuang, Jan Liphardt, Fitnat H. Yildiz, Steven Chu, Science 337, 236–239 (2012).
