
arivis is a market-leading software company focused on the life sciences industry. With a broad product portfolio, our solutions address both industry and academic environments. We offer software solutions as well as services provided by our experienced IT project engineers and subject matter experts.

Your Guide to Complete and Fast 3D Image Analysis in Microscopy

Dr Chris Zugates


If image analysis is a place you fear to tread, or if you struggle with overcomplicated and time-consuming microscopy image analysis workflows, this is your opportunity to go beyond those limits.

You will learn a fast, efficient, and flexible approach to 4D microscopy image analysis, which yields high-quality images and results.

We’ll cover:

• How to handle and process large image data quickly

• How to detect cells, nuclei, membranes, and cellular structures easily with interactive image analysis, including the use of virtual reality for editing

• How your research can benefit from using server environments for higher throughput and collaboration with other researchers.

Join Dr. Chris Zugates as he takes you through a typical image analysis workflow. Learn how to easily process your data and find the objects of interest quickly and interactively, creating meaningful results for publication.

Chris will then show how arivis can support you through the whole workflow, from efficient image acquisition up to the presentation of your work to the scientific community via the web or virtual reality.

Transcript

Bitesize Bio:     Welcome to this Bitesize Bio web seminar, which today is sponsored by Arivis. Arivis is a market-leading software company focused on life sciences. With a broad product portfolio, their solutions address both industry and academic environments. Arivis offers software solutions for interactive visualization and analysis, especially of very large image data in microscopy. Adaptable, forward-thinking, collaborative, and easy-to-handle solutions are [unintelligible] future-facing development goals. Today’s presentation is titled Your Guide to Complete and Fast 3D Image Analysis in Microscopy, and it’s being presented by Dr. Chris Zugates from Arivis. Chris Zugates is an application engineer at Arivis. He started in imaging 20 years ago as an application scientist and project manager in Raman-based imaging systems, where he took a particular interest in Raman-SEM correlation. Then, after training as a developmental neurogeneticist, he ran one of the first truly industrial-scale imaging projects in modern brain science. He appreciates the difficulties of imaging at scale and endeavors to help Arivis customers get the best possible results from massive, challenging image sets. As always, we will have a question-and-answer session after the presentation, so please type any questions that you have into the questions box which appears on the right-hand side of your screen and I’ll put them to Chris at the end. The recording of the webinar will be available at bit.ly/3dimageanalysiswebinar. All one word and lowercase. So now over to you, Chris, for the presentation.

 

Chris:     Hi everybody. This is Chris. When you say that I appreciate the problems of image analysis, I’ll phrase that another way: I feel all of your pain. Today, we’re going to present to you a guide to complete and fast 3D analysis in microscopy. As many of you know, when working with imagery, something like a complete guide to analysis is a bit ambitious for a one-hour presentation. But I think today what we’ll endeavor to do is take a few steps forward together, and I’ll show you some of the tips and tricks that I use for performing 3D image analysis.

 

So the first thing I want to say is image analysis is everywhere. It’s something we do all the time. In fact, I just did it 30 minutes ago. I went downstairs to have a bit of a snack. And in the refrigerator there are two kinds of yogurt. There’s this really sugary yogurt that my wife likes and there’s this really good Greek yogurt that I like. Of course, I can instantly recognize my kind of yogurt and I can pick it up immediately. And it’s this kind of categorization and understanding of things in visual space that we’re endeavoring to do in science with our data. So, image analysis helps us gain insights about the scientific materials that we work with. And image analysis software is pretty special. It enables us to record not only these data but also the things that we find in the data, and it helps us to focus in on specific objects and maybe even ponder them for some time. Software gives you the ability to hold onto images and come back to them another day. It also gives you a way to easily compare, mathematically and algorithmically, these different kinds of objects that you see in the image.

 

And then of course, as scientists, we want to make abstractions about these objects, come up with general principles and descriptions about these objects, standard ways of viewing them, and come up with new theories. Our visual system and brain are really good at parsing out all of this information around us. And nowadays we have instrumentation that is helping us to see even more about the world around us.

 

And this is in the form of very high-resolution imaging systems that capture the tiniest and fastest-moving of objects. And now we’re able to see things that we could never see before. I really like to go to this example in the history of science. I think we can learn so much about what we do from the Copernican Revolution, and I think it applies in the realm of software and how software can help us as well. So very briefly, we have this guy Tycho Brahe, who built the best instruments for collecting the most accurate measurements of the heavens at the time. And this is like what the imaging system manufacturers are doing for us these days. They’re building these amazing instruments and we’re able to have these wonderful advances in histology and tissue clearing, and apply it for collecting observations.

 

And what Tycho did was collect these observations of the heavens and compile them in the Rudolphine Tables. Because these observations were very accurate and trusted, this enabled Johannes Kepler to later come sit down with the data and the tables, and he discovered the ellipse, which changed the way we view our universe. And this desire to explore and discover… these are the basic desires of all of us, and especially as scientists, and software is enabling us to take this to another level. And so how does it happen? Well, number one, we start with the instrumentation that creates the imagery. And nowadays, as I said, we’re seeing smaller and smaller objects, more and more resolution, and more complete views of the material that we’re interested in studying.

 

In microscopy we see data coming from light sheet, confocal, two-photon, live cell imaging, structured illumination, super-resolution microscopy, and electron microscopy. Now we’re bathed in all of these gorgeous images and we need to extract the information from the images.

So let’s think about the case where we have very high-quality histology and really advanced cell targeting and imaging instrumentation. In this case, we were able to see the morphology of all the cells inside of a brain. Nowadays our imaging systems are so fast and produce data at such high resolution that we’re able to do this kind of imaging at scale.

 

So now we’re able to look across many organisms and do comparisons. And by doing these comparisons, we start to gain new insights. We start to see statistical phenomena, things that are happening some percentage of the time, and we’re able to do correlations between something like behavior and anatomy. Also, occasionally we’re finding these needles in haystacks. So as we’re imaging thousands, tens of thousands, hundreds of thousands of samples, we’re seeing very rare and interesting phenomena. Then also, even within one organism, within one tissue, we’re able to understand the relationships and the connections between the cells at a resolution that just was not possible before. And then finally we’re able to look at these images over time and begin to assemble how these organisms develop and how these cells move through time.

 

So, let’s have a look at some of the basic problems of image analysis and we’ll start to see how image analysis can help us to extract the information from the images that we need. So here’s a very basic detection scheme. Our imaging instrumentation works basically like this: we have nine pixels and these pixels are capable of detecting some event in the real world. And we have the case of this worldly glowing object. And if we place this worldly glowing object in front of this array of pixels, it will trigger some reaction in this pixel in the middle. And now we can represent this event in the computer. So now we can digitize what we’re seeing in the world and this is very powerful and very special.

 

So now we’re able to store and hold onto the data, apply powerful mathematical tools to the data, come back to the data at a later time, and ponder it at our convenience. But we have a problem: we represent this event as ones and zeros inside of a machine. And because our imaging instrumentation is getting better and better, now we’re able to collect more and more pixels. So now we collect very large arrays of pixels. And these imaging instruments are very good at detecting very subtle changes in the intensity of these objects, which means typically they have a dynamic range that’s better than the range of our eyes. Our eyes are very good at distinguishing many colors, but our eyes are not as good as these instruments at detecting these very subtle differences in intensity. And so, one of the basic problems we have is how do we take a dataset that’s so rich in terms of intensity values and examine it easily? And then another basic problem, one that probably all of us are dealing with nowadays, is the 3D problem.

 

We have the 3D problem. So now, not only do we collect very large arrays of pixels with two dimensional arrays, we now collect really large three dimensional arrays. And of course we’d like to now collect three dimensional arrays over time. So how do we deal with all of this information? How do we explore it and extract information from it? And what software can help? And I think probably the core issue with image analysis is how do we take what we see with our own eyes and in our own mind and how do we mathematically encode this in the machine so that we can share these procedures with other people? So let’s look at this example. So here’s an example of where we have an image, it’s quite small and the pixels have been scaled so that we can see some objects in the image and these happened to be objects of interest.

 

And if we do something like say we’re interested in understanding the volume or the size of these objects, we can do something very simple like use an intensity threshold to define a border around the objects. In some cases this is completely fine, but in this case it’s not because if we study the image very carefully, the experienced person knows there are really three objects here that need to be studied. And the question is how do we capture these three individual objects and measure their properties? And for that we need something more sophisticated than a simple intensity threshold. So let’s discuss some concepts of solutions. How can software help us to overcome some of the challenges? We have lots of pixels and we have lots of files and one thing that software can do for us is it can simply provide a file and data structure that makes sense so the images can be stored as a file.
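
As a rough, generic sketch of the simple intensity-threshold idea described above (not the arivis implementation), here is what that might look like in Python with NumPy and scikit-image; the file name and threshold value are placeholders you would choose by inspecting the data.

```python
# Minimal sketch: threshold an image, then report the size of each connected object.
import numpy as np
from skimage import io, measure

image = io.imread("cells.tif")       # hypothetical single-channel image
threshold = 500                      # placeholder value chosen by inspecting the raw pixels
mask = image > threshold             # every pixel above the threshold counts as "object"

labels = measure.label(mask)         # label connected regions
for region in measure.regionprops(labels):
    print(region.label, region.area) # area is a pixel/voxel count, i.e. a simple volume proxy
```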

 

The results that we compute on the image, say these boundaries around the objects, can be stored as files; settings and shortcuts and the analysis recipes can also be stored as files. And we would like these things to be in our control. We’d like to know where they are and we’d like them to be associated in a way that makes sense to us. I think this is no small issue to discuss because nowadays a lot of us are using our cell phones, we have these apps and we don’t really know what they’re doing, but in science we really need to know where the data lives because we need to come back to it and we need to potentially perform algorithms that depend on what kind of hardware the data sits on.

 

And so, we wanted to have control over where the files are. We also need a multi scheme exploration of the images. As we saw, the imaging instruments can have dynamic range greater than our eyes. So at least in the software we want to be able to examine these various intensity values. We also want to be able to move through the planes seamlessly. We’d very much like to represent these arrays of pixels as volumes and make them appear as they appear in the real world. And we’ll come back to that later. We also want to be able to look through these images through time. And in addition, we would like to be able to apply colors and various enhancements to the data sets. And of course, as we capture the objects and the images, we want to be able to see those results right on the voxels themselves.

 

Thirdly in some cases, a very simple algorithm can work, so something like an intensity threshold can really give us the border of the object that we’re interested in. But this is not always the case. And I think what we need is the ability to combine various operators together in customized ways in order to extract what we need from an image. And so, we need a very flexible analysis tool. And finally, we want to move from the ones and the zeros and we want to take those high-resolution data and we want to give them back to our brain, sort of in the way that we’re used to dealing with it. So, as you’re prepping the tissue in the lab, you’re working with the organism on a day to day basis and you’re used to seeing it a certain way and perhaps you want to understand the relationships of the objects that exist inside of the tissue.

 

And so, we want to be able to reconstruct it in a real-world way. And this, I think, provides us with more information about the image and can really help us to start to decipher the three-dimensional spatial relationships between the objects in the images. And here’s just a quick example of a bunch of 2D slices and here’s what happens when we begin to make various kinds of projections of these 2D slices. We take these many slices and we start to build something that makes sense to us, and we have lots of different techniques to provide ever more information about the relationships between the objects. And then with the visualization techniques, we can do things like really open up an image.
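
One of the simplest projections of a stack of 2D slices is a maximum intensity projection. A minimal sketch, assuming the stack loads as a (z, y, x) NumPy array and using a placeholder file name:

```python
# Maximum intensity projection: for each (y, x) position keep the brightest value along z.
import tifffile

stack = tifffile.imread("stack.tif")   # hypothetical z-stack, shape (z, y, x)
mip = stack.max(axis=0)                # collapse the z axis into one 2D view

tifffile.imwrite("mip.tif", mip)
```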

 

In this case we have some neurons and they’re labeled with a fluorescent dye and they have a very solid appearance. By using some direct volume rendering tricks, we can render the raw data in a way that really opens the image and enables us to go inside of it and explore it and really begin to see the three-dimensional relationships between the objects. What we do here is provide more information in a single shot. So, let’s really look at some solutions.

 

Okay. So, here’s what software is really going to do for us. Here are a couple of images and we’re going to do some different things with these images. As I’m sure you can all see, we have some bright pixels here and we have some dim pixels here and again, we have some bright objects and we have some dim objects. The experienced mind can see what is there and have an idea of exactly what to extract from the image. But here’s the issue. And I think many of you run into this and it just reminds me of the quote from Tolstoy’s famous novel Anna Karenina.

 

“All happy families are alike. Each unhappy family is unhappy in its own way.” I think this applies exactly to images. The images that are “happy”, that is, easy to analyze mathematically and algorithmically at scale, are all alike. They don’t have optical artifacts, they have extremely low noise, and there’s perfect contrast around all the objects of interest. However, each challenging image is challenging completely in its own way. And I think we all run into this problem from time to time where we sit down with our image and we start to apply algorithmic approaches and they fail for some reason because there’s some optical problem or a staining artifact or so on.

 

But the first thing we need to do is to be able to move through these images and do what I call scouting for the truth. We have to be able to move through the planes, move through the volumes, and find these regions of truth, or objects that we know really exist. And then we also have to be able to find the problems. And once we find the truthful objects and the problematic objects, then we can really build an analysis paradigm for that image. Let’s start with an example: an image with two little “blobs”.

 

And we can scale this image in a couple of ways. If we scale it so that these appear quite bright, we can see that there are quite a few bright pixels, but we know that there’s nothing here. And if we try to do something simple like use the intensity-based threshold to grab these objects, what happens? Well, we get the objects, but we get lots of other things that we don’t want and this is really a problem. This is especially a problem when the images are really big, because now we have to devote our computing resources to dealing with all of these objects.

Of course, we can later on do something like filter out all the small objects and keep the big objects. But what does it mean? It means we have to do some computation. So how could we skip this? How could we apply a simple filter up front and solve the problem? A simple idea is to take the raw data and, for every pixel in the image, compute the mean value of its neighborhood and write it back into the computer. And then we grab the borders of the objects. And what happens? Well, things become very smooth and we lose all of these non-interesting objects.
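
A minimal sketch of that "smooth first, then threshold" idea, assuming SciPy and scikit-image; the window size, threshold, and file name are placeholders to tune against the data.

```python
# Mean filter then threshold: smoothing suppresses isolated hot pixels so the
# subsequent threshold keeps only the larger, real objects.
import tifffile
from scipy import ndimage as ndi
from skimage import measure

raw = tifffile.imread("blobs.tif")                         # hypothetical 2D image

smoothed = ndi.uniform_filter(raw.astype(float), size=5)   # mean over a 5-pixel window
mask = smoothed > 120                                      # placeholder threshold on the smoothed image

labels = measure.label(mask)
print("objects found:", labels.max())                      # far fewer spurious objects than on the raw data
```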

 

I think this filtering is probably something people in image analysis do quite often. Here’s a case where we have objects in a cell that are associated with the cytoskeleton of the cell. And we may want to do something like grab the cell because we want to begin to compute the number of things in the cell. We want to understand a phenomenon cell by cell. And what happens if we try to do something like an intensity threshold? We start to get the border of the cell based on this out-of-plane light.

 

We see roughly the border of the cell, and this may be good enough in some cases, but in some cases maybe it’s not. So what happens if we reduce the intensity threshold? Well, if we come in with a lower intensity threshold, we get less of these objects outside the cell, but because the staining is very punctate, we start to see these openings in the cell and that prevents us from really being able to grab the cell’s morphology. You might be asking yourself, well, why would you try to grab this cell based on a stain like this? And the reality is we just don’t always have the luxury of using exactly the histology that we want.

 

A lot of times we’re looking at the data later on and thinking, oh, I have a great idea. If I can grab this cell, then I can do some kind of cell-by-cell analysis. And you’re sort of thinking after the fact, what can I do to get the most out of the image? Here’s a case where we reduced the intensity threshold quite a bit and now the cell is completely broken up. There are a couple of things we can try. One is a very aggressive median filtering technique; the image becomes a bit smoother and we begin to be able to grab the outline of the cell.

 

This is approximately correct, but it can be quite time consuming because we have to compute this with a very large radius. And this takes up a lot of computational time. Here’s a case where I would use quite a simple motif. I would first take the image and do an intensity threshold. I binarize it: only the pixels above a certain value continue to exist and have a value, and all the pixels that are below the value become zero.

 

And then I would basically do an operation that dilates these little puncta, and where they join each other, they stay joined after an erosion. We’re following the contour of the real thing that we see in the image. We don’t want to generate some kind of fake result. We want our borders around the objects to hug real objects in the image. And so this morphology motif, which is a mixture of opening and closing, preserves clusters, so it keeps all of these ones that are very close together, but these little loners disappear.
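
A minimal sketch of that morphology motif, assuming scikit-image: binarize, then a closing (dilate followed by erode) joins puncta that sit close together, and an opening removes the small isolated "loners". The radii, threshold, and file name are placeholders you would tune interactively.

```python
# Morphology motif: threshold, close to merge nearby puncta into clusters, open to drop specks.
import tifffile
from skimage import morphology

raw = tifffile.imread("punctate_stain.tif")       # hypothetical 2D image
binary = raw > 300                                # placeholder intensity threshold

clustered = morphology.binary_closing(binary, footprint=morphology.disk(4))  # join near neighbours
cleaned = morphology.binary_opening(clustered, footprint=morphology.disk(2)) # remove the loners
```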

 

Finally, we want to do things like study the objects when grabbing objects is not so easy. In this case we have two nuclei and they’re butted right up against each other and we need something a little bit more advanced to grab them as individual objects. So we might do something like a watershed to split the two objects. We model their morphology by doing a bit of a blur. We seed the objects and then we put a watershed between them. And then finally we want to understand their distributions. How many objects are in this cell?
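
As a generic stand-in for that blur, seed, and watershed sequence (not the Vision4D operator itself), here is the classic distance-transform seeded watershed in scikit-image; file name and parameters are placeholders.

```python
# Seeded watershed to split two touching nuclei into individual objects.
import numpy as np
import tifffile
from scipy import ndimage as ndi
from skimage import filters, feature, segmentation

raw = tifffile.imread("nuclei.tif")                      # hypothetical 2D image
blurred = filters.gaussian(raw, sigma=2)                 # model the blob morphology with a blur
mask = blurred > filters.threshold_otsu(blurred)         # foreground vs background

distance = ndi.distance_transform_edt(mask)              # peaks sit near the nuclei centres
peaks = feature.peak_local_max(distance, min_distance=10)
markers = np.zeros(mask.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one seed per nucleus

labels = segmentation.watershed(-distance, markers, mask=mask)  # ridge between seeds splits the pair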

 

So, I don’t really see analysis as a series of steps. I see analysis as a series of cycles. That means we sit down with our image and we scout around, we find these regions of truth or happiness and then we find these other regions of untruth or unhappiness, and we come up with ways to filter, denoise, and apply motifs, and these help us get objects of interest, and then we might sort, sift, and examine the objects of interest. And then if our images are really large, or we have many images, we want to move on to other portions of the image and that means we’re going to apply this cycle to a different portion of the image. And we may find some other regions where this fails. And so we want to cycle through again and we want to build analysis pipelines that work not just for one or two or three regions, but are optimized to get the most reliable, high-quality results from our images.

 

So I want to take you all into the real software environment and show you how you might do these things in practice. I’m going to use Vision4D for this and we’ll look at a simple example and then we’ll move on to this motifs example. Here’s a very simple example. You want to be able to, of course, scan through the planes and through time, and toggle things off and on. So we might want to toggle a channel off that we’re not particularly interested in working on at the time. We might want to change the colors. And these things, you want to be able to do very smoothly and intuitively.

 

And now we look at this image and we know that it has some sort of salt-and-pepper noise and we want to remove it. And how do we remove it? Well, what we can do is begin to apply various filters. How the heck would you know what filter to apply ahead of time? There are two ways. One is you can go through the literature, talk to your colleagues, and begin to make a real study of image analysis and become an expert at image analysis. If you’re like me, I haven’t spent my life doing image analysis. So for me, I just need to try things and see what works, and this is exactly how I work with an image like this. I would start by opening up some filtering tools.

 

I might know I want to try denoising. I know that lots of people compute Gaussian blurs very frequently, so I might compute a Gaussian blur and just have a look and see what it does. Here’s what we see after, here’s what we see before. And what I see is it’s not removing the salt-and-pepper noise for me. And so I might just want to try something different. Not even knowing what a mean or a median or a Gaussian does, I can begin to work with this image, and now I can see I’m preserving the border of the thing that I’m interested in and I have significantly reduced the stuff that I’m not interested in. Now I can come in and just touch on the image and start to grab various objects, and I can parameterize an intensity threshold interactively.
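
A rough sketch of that "just try filters and see" loop, assuming scikit-image and a single 2D channel: a Gaussian blur tends to smear salt-and-pepper noise around, while a median filter removes it and keeps the object border sharp. Values and file name are placeholders.

```python
# Trying denoising filters interactively: compare a Gaussian blur against a median filter.
import tifffile
from skimage import filters, morphology

raw = tifffile.imread("noisy_channel.tif")                         # hypothetical 2D image

gaussian_try = filters.gaussian(raw, sigma=1)                      # first attempt: noise is smeared, not removed
median_try = filters.median(raw, footprint=morphology.disk(2))     # usually wins on salt-and-pepper noise

mask = median_try > filters.threshold_otsu(median_try)             # stand-in for the interactive threshold
```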

 

In this example, we have this absolutely gorgeous image. We’ve got multiple color channels and we can explore through the planes and we see… it’s absolutely beautiful. We see there’s some chromatin material labeled in blue. We see something labeled in red, which I would presume is something associated with the cytoskeleton. We see a bunch of little green dots. These are associated with proteins on the chromatin that are involved in the splitting of the cells. And let’s just have a quick exploration of the image.

 

So, one way to explore it is of course to move through the planes, but another way is to really look at it in 3D. So, we say, okay, let’s enable a three-dimensional view of the image, and now we can control the opacities and the transparencies of the various color channels, and if we want to, we can toggle channels off. So maybe we only want to look at the chromatin, perhaps we want to look at the entire cellular morphology. Perhaps we want to begin to open up the image a little bit and be able to see through the condensed chromosomes so we can begin to explore the image in the volume space. This gives me ideas about how I want to attack the analysis of this image.

 

Let me give you a couple of examples of how we might analyze an image like this. In this software environment, I have access to any portion of the image at any time I want. I picked a cell to show you guys how we apply these morphology motifs. And what I did was to store the operation itself in this pipeline tool.

 

I would typically focus on a region of the image like this where I clearly see what I want to see. I want this border, I want to get it, and I have an idea about how to get it. I would begin to parameterize an intensity threshold and I would do this interactively. I would try different things. I would try low intensity values. I would try high values, try like 2000, and probably the whole thing would disappear. I would zero in on the one that I think is going to work for me and then when I’m happy with it I would compute it and then I would move on to the next step.

 

And the next step in this case would be to start to close up all of these open portions of the cell border. We can close very lightly. We could close very aggressively – the more aggressively we close, of course, the more computational time it takes. But this is how I would begin to build this idea of closing up the borders of the cell. And the next step is I will do some opening and I will try to remove all of these tiny objects. And as you can see with the opening set with this radius of two, I can simply remove the objects. So you might ask yourself, well, how would you find all of these?

 

Well, they’re organized in the menu here, so I can try different ideas via drag and drop. I drag in filters, I drag in morphology filters, and I drag them out. And I parameterize very quickly. And then finally I might come to the point where I’ve opened it up. I’ve removed the tiny objects. I’ve got a pretty nice border of the cell and then finally I might want to do something like fill the hole. In an environment like this you can continue to optimize and add more operators to refine, because I want to get as close to the truth as possible.

 

And so what I get is something that’s starting to approach something true, or truer than I would get with something like a simple denoising. Let me move on to another example. This is the watershed example. Here’s a tiny portion of those green objects in the cell. And we can move through the planes and we can examine it as a volume.

 

I’m going to do a split view so I can have the raw data rendered in this window, and I’ve got the two-dimensional result rendered in this one. And just as a note, I like to parameterize all of my analysis pipelines with a 2D view because I want to see the raw pixel. I don’t want to see the rendered pixel. I really want to see its raw value. And what we do in this case is something pretty simple. So this is the case where you have two hot blobs and you want to grab them as individual objects and this is where something like a watershed comes in really quite handy.

 

So, you could do something like drop in an intensity threshold and you could try to parameterize it like this. But what happens in this case is as you look through more and more of the image, you’re losing objects of interest and you don’t want to do that. So, what happens is that after you’ve tried this idea and you find that it doesn’t work, you say, ah, I can try something like a watershed. And so we have an operator in Vision4D called the blob finder, which does exactly what it says. It’s very good at finding the blobs. So, it finds these hot spots and it splits them. And then what I can do is compute it. And then once I compute it, let’s do something a little bit more fancy. Let’s see if we can show the annotations.
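
The Vision4D blob finder is its own operator; as a generic stand-in for the "find the hot spots" idea, here is Laplacian-of-Gaussian blob detection in scikit-image, with placeholder file name and parameters to tune against the raw data.

```python
# Generic blob detection (Laplacian of Gaussian): finds bright, roughly round hot spots.
import tifffile
from skimage import feature

raw = tifffile.imread("green_spots.tif").astype(float)   # hypothetical channel
norm = raw / raw.max()                                   # blob_log expects values in a small range

# Each row of the result is (row, col, sigma) in 2D; sigma reflects the blob size.
blobs = feature.blob_log(norm, min_sigma=1, max_sigma=4, threshold=0.05)
print(len(blobs), "blobs found")
```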

 

So now we’ve created real objects. And let’s say we’re interested to see where this object is in 3D. So I’m going to link the two viewers so that when I click on an object over here, so I’m going to find that little split. If I click on an object here, I should see that object highlighted over here. Let’s go ahead and put the bounding box around it. And so here is this object that we see over here. And so, by parameterizing this operator and viewing it in the 2D mode and the 3D mode, we can begin to approach an ever more truthful representation of all of these blobs in the image. Let’s move to another example. Now we’re really looking at the cell in three dimensions.

 

I’ve used this little motif to compute the border of the cell. And I also used a similar motif to compute the border of all the chromatin in the cell. And I found something pretty interesting. So, I found that there are actually two groups of chromatin in the cell. There’s sort of this large group and there’s this small group. And by using these analysis tools, I was able to find the small group all by itself and find the large group all by itself. Now we can count these centromere-associated objects inside of here.

 

We could take our small chromatin object and then we could give the pipeline all of those bright green objects. There are maybe 5,000 of these bright green objects in the image. And we could do something like object-based co-localization. We could take this small chromatin object, and we could use it as a reference, and then we can take these green objects as a subject set. We could say we want to report all the green objects that are completely covered inside of the chromatin object. And that’s exactly what we get. Now we can go to our 3D view and we’ve pulled out the centromere-associated objects that are inside the small group.
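
As a minimal sketch of that object-based co-localization test (not the Vision4D operator), assuming both segmentations already exist as masks of the same shape; file names are placeholders.

```python
# Object-based co-localization: report the green objects whose voxels are all
# covered by the reference chromatin mask.
import numpy as np
from skimage import measure

chromatin_mask = np.load("small_chromatin_mask.npy")       # hypothetical reference mask (bool)
green_labels = measure.label(np.load("green_mask.npy"))    # hypothetical subject objects

fully_inside = []
for region in measure.regionprops(green_labels):
    coords = tuple(region.coords.T)                        # voxel coordinates of this green object
    if chromatin_mask[coords].all():                       # every voxel lies inside the reference
        fully_inside.append(region.label)

print(len(fully_inside), "green objects completely inside the chromatin object")
```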

 

In the environment, we have a completely modular design, where there are modules for bringing the files in. You drag the file onto the desktop and it’s in the environment. We’re able to view the data in the 2D mode and the 3D mode. We’re able to view our results in a table and we have controls so we can turn the image, we can move through the planes, and we can use a little dropper tool to parameterize our intensity thresholds. We have a results database that’s computing features of the results for us that we can utilize.

 

Transformations and corrections like drift correction and bleach correction, exist in a module. There are all of our analysis tools, as well as a multitude of operations, filters, morphology, and previews. And we have a storyboarding tool which enables us to create the movies. The thing about this modular design is that we can really customize and move easily through the images and through different operations to figure out what the answer is going to be, because it’s not always obvious from the raw data how you’re going to get to the final answer.

 

So nowadays these imaging instruments are producing more data and more resolution not only in x, y, and z, but also in time. What does it mean? It means the size of the datasets is bigger than the memory of the computer. And what happens when you use your computer to hold all of your data is that it can become unstable. We sought to address the gap between your processing needs and the computer size, so Arivis created a redundancy-free data structure that makes efficient use of the system resources, so we don’t have to touch the RAM.

 

Your computer can remain stable and then we can give information to the RAM when we need to and when we want to. And the Arivis file format, which is called sis, gives you access to any portion of the image, at any resolution, at any point in time, and to any region of interest inside the image, for any kind of operation. Because the last thing that any of us want to do is waste our time. We do not want to try to compute via some approach that’s going to fail. We want to know that it’s going to work.
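
The sis format itself is arivis's own and is not described here; purely as an illustration of the general idea of chunked, on-demand access to a huge volume without loading it into RAM, here is a sketch using the open zarr library (an assumption for illustration, not what arivis uses), with a placeholder store name.

```python
# Illustration only: pull one region of a huge chunked volume into RAM on demand.
import zarr

volume = zarr.open("big_volume.zarr", mode="r")   # hypothetical chunked array store on disk
print(volume.shape, volume.chunks)                # e.g. thousands of planes stored as small bricks

# Only the bricks overlapping this region are read; everything else stays on disk.
roi = volume[500:600, 2000:2500, 2000:2500]
print(roi.mean())
```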

 

And then secondly, is it really going to work? Is it worth applying to the whole image? So, I think about, if I take a big step back and I look at it, okay, what are all the resources that we need in this day and age to investigate really big data sets? One thing is of course minimal obstacles. And what does that mean? It means you want to work on any hardware; you don’t want to have to go somewhere where you have to wait for a special machine or special hardware.

 

I will put the data on a drive and plug it into the laptop computer. You want stable access to the data, so you don’t want to crash. And when I’m out interacting with our customers, I find this to be one of the things that really irritates them, something that they don’t want to deal with. And that’s why they come to us. They invest lots of time in working on their images and building some kind of analysis pipeline or paradigm.

 

And then they’re having problems with software stability and so they come to us. And what we endeavor to do at Arivis is build a very stable platform and one of the reasons why it’s stable is because it’s not going to eat your entire computer. You also want to be able to work with the data immediately. You don’t want to be locked out of the software. And this is one of the things that our developers take very seriously. They try to build the software in a way that enables you to not only work as soon as the data is there, but also to continue to interact with it while you’re running various operations.

 

Typically, you might start on a little operation, but you have another idea and you will want to try that idea. And so with the Vision4D software we try not to lock you out of the software. We want to enable you to express yourself on your images and continue to think even while your computer is thinking. And the flow of information is extremely important. You want a rich visualization of the data and you want to see the results right on the data volumes. Rich visualization means options for making this object look real, based on the raw data, and giving you information so you can see what you need to see. And we want to see your results right on the data. We also want to give you natural experiences with the 3D data. This means not only natural controls with your keyboard and your mouse, but it also means using the latest technology, like VR, to let you reach into an image and touch it and interact with it.

 

We also want you to be able to publish your results. And we want you to take the beautiful images that you’ve created and make a spectacular image. And by spectacular I mean information rich. You want to put a panel in the paper or on the cover of the journal that conveys as much information as possible to the readers. Also, we spend quite a bit of time and we’re very interested in building web interfaces and collaborative workflows. We also have tools for enabling you to interact with these sis files completely via the web and in a completely collaborative way.

 

And then finally, we have this very flexible computational tool set and we really just started to scratch the surface of that today. But in my view, the important thing is that you need to be able to do rapid prototyping, and rapid means that you want to be able to click around and get your ideas into the computer without writing lines of code. And I think that you can navigate the Arivis environment in a very easy way with a mouse and the keyboard. So at Arivis, we’re endeavoring to do much more than just this standard experience with the data where you sit down at the computer and you sort of move through the planes and you parameterize a pipeline.

 

We really want you to be able to move through the data in a way that’s very easy, very interactive, a sort of continuous flow of experience. And we want to connect you with your collaborators, with other people. We want you to be able to share your results very easily. And we want you to be able to work together. And so the way we endeavor to do this is to build future-facing tools. We imagine that you could store your data anywhere you want – you could store it on the cloud, you can store it on a local server, you can store it on your local machine, and then what we enable you to do is, through our environment, access it via desktop devices.

 

You can even do it on mobile devices. And then also you might want to plug in and use the latest virtual reality tools to analyze the data. And I’m going to very briefly give you a little taste of what we have coming in VR and in the collaborative workflows. In the VR space what you’re able to do is really immerse yourself inside the data and you start to do something different. Your brain starts to see more. You start to feel it as an environment and when you can reach out and touch data, you can do things very quickly just by touching it.

 

And I wanted to show you guys a really quick example, so you can start to solve really hard problems. So, we get FIB-SEM data and our customers want to see certain objects inside of it. I think a lot of people would naturally want to do something like machine learning or AI and try to train a machine to find it. With our VR technology, which we call InViewR, you can hold that FIB-SEM volume out in front of you and you can really slice it up any way you want to.

 

So again, this is like a motif, but a more advanced kind of motif where you can sculpt over the regions where you know there’s something you want. So you can target your AI onto certain regions and that makes the problem so much easier for machine learning. What we do a lot of times is reach out into the volume, we sculpt, we write on it, and depending on how big the volume is, it might be a matter of minutes, it might be a matter of an hour or two. But in the end, after you’ve sculpted it and you train your AI on the volume, you get something really nice, like in this case, where we grab all the collagen.

 

Very briefly, we also have a product called WebView, and WebView is a way for you to interact with the sis files and the results completely via the web and, if you want to, in a completely collaborative way. So multiple people can access the data and you can apply analysis pipelines to the data; you can do what people call crowd-sourced annotations. And then the beautiful thing is that all of these different tools fit perfectly together, so they’re all using the Arivis sis file format and they all plug into each other. So that means that you can build a completely custom imaging workflow in your lab.

 

When we were preparing the webinar, we were talking about, okay, how do we introduce this idea of image analysis to everybody? And I said, you know, image analysis is something we do every day. I just went to Whole Foods last night and there’s a particular kind of whole milk that I like. I walk up to the refrigerator and I pick the milk that I want. And so I thought, okay, how about I just take a picture of the freezer case in Whole Foods. And so here it is, and I used Vision4D and the trainable segmenter to output a probability image.

 

So, I trained it on what’s a gallon versus what’s not a gallon. I used exactly the morphology motifs that we talked about today: opening and closing to sort of solidify those gallons and remove the background objects. And I had Vision4D first find objects that had a high probability of being a gallon. Then I found the whole milks. After I had it find the whole milks, I had it find the Whole Foods brand, because that’s the brand I particularly like.

 

And then the jug that I picked last Friday night was a jug that was in the front and didn’t have a dent in it. So I used the analysis pipeline to find the jug without the dent that was in the front. We were looking at our images and we’re the experts. We know what’s in there, we know what we want to measure, we see the phenomenon, and now we need to capture it because we need to share it with other scientists. And that’s exactly what I did here: I encoded the process that I went through in the Arivis environment. And that’s exactly what you want to do with your images.
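
The Vision4D trainable segmenter is its own tool; as a generic sketch of the same idea (simple per-pixel features plus a classifier producing a probability image), here is what that could look like with scikit-learn and SciPy. The file names and the label convention (1 = gallon, 2 = background) are hypothetical.

```python
# Generic pixel-classification sketch: features + random forest -> probability image.
import numpy as np
import tifffile
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

image = tifffile.imread("fridge.tif").astype(float)      # hypothetical grayscale photo
labels = tifffile.imread("scribbles.tif")                # hypothetical training scribbles: 0 = unlabeled

# Per-pixel features: raw intensity plus two smoothed versions at different scales.
features = np.stack(
    [image, ndi.gaussian_filter(image, 2), ndi.gaussian_filter(image, 8)], axis=-1
)
X = features[labels > 0]                                 # features at labeled pixels
y = labels[labels > 0]

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
proba = clf.predict_proba(features.reshape(-1, features.shape[-1]))[:, 0]
probability_image = proba.reshape(image.shape)           # probability of class 1 ("gallon") per pixel
```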

 

I want to thank you all for meeting with us today. It’s really my pleasure to speak with you about this and it’s been my pleasure working with Arivis for the last almost three years, meeting many of you, and sitting down with you and working through your image analysis problems. I’ve learned a tremendous amount and I hope that you have too, and I hope that we can continue to work together. And I’ll just have to say it’s been a real pleasure to work with everybody at Arivis. Arivis really cares about delivering a product that’s very powerful and provides a very high-quality result, but also provides a joyful and high-quality experience to you. Finally, I’d like to thank Nick and Martin for working with us on developing the webinar today. And with that I would be happy to take questions.

 

Bitesize Bio:     Thanks Chris. That was an excellent and thoroughly enjoyable presentation. We have a few questions from the audience. If anyone else has a question, please feel free to post it in the questions box that appears on the right of your screen. So, I’ve got one here from Neeraj, and they ask: in the 2D viewer, can you draw an outline on the object of interest?

 

Chris:     Absolutely. Let’s have a look together. We have a tool for drawing… so you can draw different sorts of shapes. The way that I do it most is simply to draw a polyline. So I select a tool for adding an annotation object to the image, and I just use this to simply draw the polyline around the object of interest.

 

Chris:     I would guess that you might ask, could we do this in 3D? And the answer is yes. And there are a couple of different ways, but one of the ways that I would tend to do it is to move to another plane and draw another polyline on that plane, and in this way I can interpolate between the two polylines to create a three-dimensional object. We could do something like magic wanding. So we have a magic wand tool and I can parameterize the magic wand. I’ve got a tolerance value that I can apply and I can use that to also grab the objects in the 2D viewer.

 

Bitesize Bio:     Okay, brilliant. Thanks Chris. I’ve got a question here from Yan. Is the Arivis Vision4D software compatible with Marker Free Technologies?

 

Chris:     I don’t know, I’ve got to admit I don’t know what Marker Free is; it’s something that maybe our developers or other application engineers are familiar with. Unfortunately, it’s not something that I’m familiar with, so either you have to explain it a little bit to me, or somebody else has to explain it, or I have to find out from somebody else.

 

Bitesize Bio:     Okay. That was the question that I got. Moving on to the next one. Garima asked a question during the live demo of Arivis Vision4D: is it possible to measure the orientation, coherency, and length of tubulin?

 

Chris:      Maybe we have to get back to you on what we really can do. I would say definitely not directly. So we’ve got lots of objects. We compute lots of different properties for the objects, like their length, their aspect ratio, their position in x, y, and z. All of this information is exportable into a spreadsheet, so we can get all of these computed properties out of Vision4D. What I would guess is that in order to do this kind of thing, we would pick certain properties or certain features of the objects and then we would compute that stuff outside of Vision4D, at least for now.
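
A generic analog of that "export the computed properties and finish the calculation elsewhere" workflow, assuming scikit-image and pandas and a hypothetical 2D label image on disk:

```python
# Export per-object features to a CSV for downstream analysis outside the imaging software.
import tifffile
import pandas as pd
from skimage import measure

labels = tifffile.imread("segmented_objects.tif")   # hypothetical 2D label image
props = measure.regionprops_table(
    labels,
    properties=("label", "area", "centroid",
                "major_axis_length", "minor_axis_length", "orientation"),
)
pd.DataFrame(props).to_csv("object_features.csv", index=False)
```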

 

Bitesize Bio:     Okay. Okay.  I have a question here from Paula. She just asked is it compatible with Zen files from ZEISS?

 

Chris:     It is, and we support the import of numerous file formats, and when the files are coming from the big reputable imaging companies like ZEISS, usually these are cases where the import is the absolute easiest. When the files are coming from ZEN and from ZEISS, they share all the information with us about the file format, and that makes it really easy for us to catch all of the relevant information from their files. So in those cases, it’s really as simple as a drag and drop.

 

Bitesize Bio:     Okay. Okay. And a question here from Katelyn: have you done any analysis on non-fluorescent, non-SEM data, such as differential interference contrast microscopy?

 

Chris:    The answer is yes. We’ve looked outside of fluorescence, and I want to say the tools in Vision4D are very comfortable for working with fluorescence images. I think we make it pretty easy with SEM. There are certain tools you need for working with these blocks of data that I would like to see in the future in Vision4D, but it’s pretty good. I’m really having quite a bit of fun with that. Things like 3D X-ray data are a complete joy. So, if you’re working with something like CT imagery or medical MRI imagery, it’s a complete joy to work in Vision4D. The DIC stuff…

 

Chris:     It’s a bit more challenging. So, I just started to work with some time series earlier this year, collected in DIC. So, there’s widefield or brightfield and DIC. And at first it was new for me, but I figured out the motifs, basically. So, I have some ideas and I’d be happy to sit down and work with you in a web session and show you some of these motifs, but I tend to use the trainable segmentation, so a bit of machine learning and a bit of morphology and a bit of filtering in various motifs in order to catch the cells from the DIC images. And I would say that the pipelines that I had to build were a little bit complicated because the cells were reaching out and touching each other very frequently. And so it’s like spaghetti noodles, you know, all intertwined, and that makes things a little bit more challenging. But I was pretty happy with the results. So, I would say yes, we can work with DIC images and I’d be happy to sit with you and look at some of the motifs.

 

Bitesize Bio:     Okay. The next question describes a problem where they’ve seen artefacts that have been mistaken for metastases in their samples. Is it possible that 4D image analysis can help with that problem? Is it possible to separate artefacts from metastases?

 

Chris:     Yeah, it depends, right? It depends on the nature of the artefacts. So it’s the kind of thing where we want to be able to explore an image like that very easily. And that’s of course what you can do in software like Vision4D, where you would have to get a sense of what is going on. So sometimes we would have a case where it might be a problem where the clearing is not optimized or the imaging parameters are not optimized. And so, what you’re seeing is sort of some out-of-plane optical issues, and there are some cases where we can account for that and then focus on just the regions of what I consider to be truth. So I would say yes, it just depends on the images. In the simplest case, it may be that in some optical sections you have a problem and other optical sections you do not, and you might just simply want to focus on the optical sections where you don’t have the problem, and that way you’re not misled. But we should have a look together and we can figure it out.

 

Bitesize Bio:     Okay. Next question here is from Baptiste and they’re asking are there any quality control algorithms for the reconstruction of 3D SIM Images?

 

Chris:     Let’s think about what you want to measure. So, I would say we don’t have a quality control operation or something like that, but we could do something like build a pipeline that performs some series of operations, grabs some objects, and reports to you properties of those objects that you could use for quality control. I think one of the first things that I did a couple of years ago was I looked at data that was coming out of a very high-throughput process. And I was interested to build a pipeline to measure histology artefacts because I was curious about the performance of the histology over time. And I built a pipeline that could grab certain artefacts out of the image, and by counting those in the various data sets, that gave me some idea of the quality over time. So maybe it would be something similar. It would be some kind of a pipeline that would extract a specific feature in your imagery. And then we can compute for you certain features that are important for quality control.

 

Bitesize Bio:     Right. Okay. Okay. Question here from Esther and she asks, how does the Arivis software interface with existing image capture software provided with the instrumentation?

 

Chris:     I think we kind of answered that in some cases, like in the case of ZEISS instruments. I think most of the time when I’m sitting with a ZEISS customer, I’m finding that the import is just really easy. They’re saying, oh, this is a complex image, we’ve got lots of channels and this and that, and I just drag and drop it in. The whole thing is working perfectly fine. But I didn’t talk about what might happen with the import tool in Arivis Vision4D if you’ve got some TIFFs coming off of any old instrument.

 

Chris:      Let’s say you built your own instrument and you are just making TIFFs and you need to assemble them in a certain way. Vision4D has a very flexible import tool, so it allows you to choose from common scenarios. So, in this case, I just picked a couple of stacks, so they could be TIFFs, and it’s asking me, hey, should all these be assembled as a bunch of planes on top of each other? Should they be assembled as a time series? Does each one represent a color channel? And then if none of these scenarios are covering it, then I have a completely custom import. I don’t know if I’ve ever run into a case where I couldn’t get the data from an imaging instrument into Arivis. We really support the import of lots of different files, and we’re able to import files like TIFFs and JPEGs and bitmaps. I imported that milk picture from my cell phone. So being able to import files from anywhere and being able to control how they’re assembled means you can pretty much get the data into the Arivis environment from anywhere.
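
As a generic sketch of that "assemble loose TIFFs yourself" scenario (outside any particular import tool), here is how a folder of single-plane TIFFs could be stacked as z-planes, or equally as time points or channels, depending on what the files represent; the file pattern is a placeholder.

```python
# Stack a folder of single-plane TIFFs into one volume.
import glob
import numpy as np
import tifffile

files = sorted(glob.glob("my_instrument/plane_*.tif"))            # hypothetical file pattern
volume = np.stack([tifffile.imread(f) for f in files], axis=0)    # shape (z, y, x)

tifffile.imwrite("assembled_stack.tif", volume)
```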

 

Bitesize Bio:     Okay. Brilliant. Question here from Lindsey, and she asks: you talked about the capability to handle, visualize, and analyze large data; what’s the limit, or the biggest image size you’ve had?

 

Chris:     That’s a great question. There is some kind of upper theoretical limit and it was computed. It was told to me; it’s some weird name. It’s not petabyte, it’s like feto mega google bite. I have no idea what it is, but there’s some upper limit. It really exists. But in practice, I personally have worked with data sets as big as 12 terabytes, and I will say that the performance of the software was excellent and stable. Probably the most data I was accessing at one time was maybe something like 30-plus terabytes, and the software was really stable and I was able to focus in on portions of the image and parameterize pipelines. We’ve done quite a bit of movie rendering and screenshot rendering from these really big images. Let’s say for me it’s something around 12 terabytes in a single image. It’s maybe over 30 terabytes in a session. And I think our development team in Rostock has had their hands on a single image that was a bit bigger. It might be on the order of 15 or 16 terabytes.

 

Bitesize Bio:     Okay. Brilliant. Question here from Laura and she asks, can you also handle time lapse images to do something like tracking over time in 3D and can you also measure intensities or features along the track over time?

 

Chris:     Absolutely. That’s a great question. I think we had a little bit of debate about whether we should cover the tracking stuff in this webinar because we all like it very much. It’s really cool, but we thought it might be a little bit too much for the webinar. But we can absolutely do tracking in 3D. I have to say that Vision4D is really good at working with time series. This is a real strength of the software and we should really do a webinar that’s specifically about time series in the future. This one is a time series and we’ve got all these objects wiggle-waggling around in here. It might be a little bit time consuming to compute this mean filter, but I computed it anyway. There’s a little progress indicator here and then boom, I got all of these little objects.

 

Chris:     Okay, so I got a bunch of objects and then I’m going to do a tracking operator, so we have an operator that I can insert into the pipeline and I can parameterize it. And when it’s done it has tracked the movement of these objects in 3D. So if I flip over to our 3D viewer, now you can see we have its position and the track computed through time.

 

Chris:    Let’s go to the table. So here’s the track. I’m going to click on that track and I’m going to group all of the results by the track. This object lives in the track called, in this case, track number three, and the red object is represented in time by all of these segments. And so, what I can do is go over to what we call the timeline view, and we could do something like look at its volume over time. So for this object, now I’m computing its volume over time, and maybe the reason why it’s decreasing is because I didn’t photobleach-correct the image or something like that. There’s probably a reason for it, but my point is that you can absolutely look at the properties, track the objects through time, and access the properties through time.
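
The Vision4D tracking operator is its own implementation; as a toy sketch of the underlying idea (link each object to the nearest object in the next frame, then read a property such as volume along the track), here is a minimal Python version with hypothetical inputs.

```python
# Toy nearest-neighbour tracking: follow one object across labelled 3D frames
# and collect its volume (voxel count) over time.
import numpy as np
from scipy.spatial.distance import cdist
from skimage import measure

def frame_objects(label_image):
    """Return (centroids, volumes) for one labelled 3D frame."""
    regions = measure.regionprops(label_image)
    return (np.array([r.centroid for r in regions]),
            np.array([r.area for r in regions]))      # 'area' is the voxel count in 3D

def track_first_object(frames, max_dist=15.0):
    """frames: list of labelled volumes; returns volume-over-time for the first object."""
    centroids, volumes = frame_objects(frames[0])
    idx = 0                                            # follow the first object in frame 0
    track_volume = [volumes[idx]]
    for frame in frames[1:]:
        next_centroids, next_volumes = frame_objects(frame)
        d = cdist([centroids[idx]], next_centroids)[0]
        idx = int(d.argmin())                          # nearest neighbour in the next frame
        if d[idx] > max_dist:
            break                                      # lost the object
        track_volume.append(next_volumes[idx])
        centroids = next_centroids
    return track_volume
```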

 

Bitesize Bio:     Okay. This question from Karen kind of follows on from that. You said it is possible to export videos of your 3D data. So how customizable is that?

 

Chris:      Quite. It’s something that I personally find really fun. If you’d like to tell stories about the data in movies, Vision4D is really a good tool for that. Let me show you what the interface looks like briefly. I’m going to get rid of the spindle data set for a second, and we’re going to focus on this one. If we take our little cell of interest, let’s put all of our colors back on and we’ll make this blue again, and then let’s look at it in 3D and let’s turn off all of the segments and let’s change the lighting.

 

Chris:     As you can see, there are lots and lots of controls, and once you know where all this stuff is, then it really becomes easy. So, we have this storyboard tool and it works like this: you assign key frames and then Vision4D interpolates between the key frames. So, let’s say we want to have an angled top view as a key frame, and then let’s say we want to do something like barrel roll our way into the cell. We want this to be the first key frame and I just double-click on it and I’m there. What Vision4D does if I hit this play button is an interpolation between the first key frame and the second, and then what Vision4D enables you to do is control that transition, so you can control how long it’s going to take. You can control whether the camera moves in a linear, accelerated, or decelerated way, and these are really great ways of focusing on objects and building tension in your movie and all this kind of stuff.

 

Chris:     You have complete control. We want to fade the red channel out, so I might want to add a key frame where I’ve faded the red channel. So now when I play the preview I get the barrel roll, and then when we get close to the chromatin, all of this red stuff is going to fade away. I may want that fade to happen more quickly. So in that case I would just go to the transition and instead of a five-second transition, I can make it a two-second transition. And then another thing that makes storyboarding movies really easy in Vision4D is that you can edit these key frames really easily. So, you see exactly what they are, and you might say, I really don’t want to have the red channel showing in this one anymore. So, what you could do is just double-click on it. You reduce it and you might even say, well, forget it, I don’t want the blue channel either. So we take those away and we right-click on it and we’ll replace it. In Vision4D, you have this visual indicator of what your movie’s going to look like. You have complete control over the transitions and you can edit the key frames very easily. I think it’s a very logical and intuitive approach and we get quite a bit of feedback from our customers about this being a really good feature inside of Vision4D.
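
As a toy sketch of the key-frame interpolation being described (not the Vision4D storyboard itself), here is how a single camera parameter could be interpolated between two key frames, either linearly or with an accelerate/decelerate profile; all values are placeholders.

```python
# Key-frame interpolation: linear vs eased (smoothstep) transition of a camera angle.
def interpolate(start, end, t, mode="linear"):
    """t runs from 0 to 1 over the transition."""
    if mode == "eased":
        t = t * t * (3 - 2 * t)          # smoothstep: slow start, slow finish
    return start + (end - start) * t

key_a, key_b = 0.0, 180.0                # azimuth in degrees at the two key frames
frames = 50                              # e.g. a two-second transition at 25 fps
angles = [interpolate(key_a, key_b, i / (frames - 1), mode="eased") for i in range(frames)]
```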

 

Bitesize Bio:     Okay. Well that’s all the questions we have for you Chris, and that brings us to the end of the seminar. Thanks again Chris. That was a fantastic presentation and a great discussion and thanks also to our sponsor Arivis. And finally, thanks to you the audience for taking your time to attend and listen in. If you’ve enjoyed the seminar and would like to view the video recording of this session, please visit the seminars page on bitesizebio.com. It should be available within the next 24 hours. There you can also see the other webinars we’ve lined up for you in Bitesize Bio’s webinar festival. So until next time, good luck in your research and goodbye from all of us at Arivis and Bitesize Bio.