Masking GUI
I am building a computational pipeline for larval zebrafish whole-brain activity quantification. This blog post is about a GUI I made for one step of the pipeline: masking. First, let me walk through the steps of the pipeline.
So we have an array of plane images across time. In this case the recording from one fish has the shape (1800, 21, 1024, 1024): 1800 time points, 21 imaging planes, and an imaging resolution of 1024x1024 pixels. This data is about 73 GB, and the temporal sampling rate is about 2 Hz.
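As a quick sanity check on that size, assuming the pixels are stored as 16-bit integers (an assumption on my part; the dtype isn't stated above), the shape works out to roughly the quoted figure:

```python
# Shape of the recording: (time, planes, height, width)
n_t, n_z, n_y, n_x = 1800, 21, 1024, 1024
bytes_per_pixel = 2  # assuming uint16 pixels

total_bytes = n_t * n_z * n_y * n_x * bytes_per_pixel
size_gib = total_bytes / 2**30  # size in gibibytes

print(round(size_gib, 1))  # → 73.8
```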
1. Affine registration -- we first use rigid registration to align the time-lapse images onto a particular reference frame. The code for this was implemented to run on the GPU by Fabian Kabus.
(Optionally, after this step: using the symmetric diffeomorphic normalization algorithm to morph the recorded brain onto a reference brain such as the Z-Brain atlas https://engertlab.fas.harvard.edu/Z-Brain/home/ This is useful for delineating which neurons are serotonergic, dopaminergic, or GABAergic, and for getting their gene expression patterns.)
2. Masking the registered frames -- this is a manual step, unfortunately. We have to segment out the eyes and olfactory bulb of the fish so that the next step extracts only the cells we want.
3. Cell segmentation -- the algorithm uses a simple metric based on the standard deviation of the time-lapse to extract cell coordinates and time series.
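The standard-deviation idea in step 3 can be sketched like this (a minimal illustration, not the pipeline's actual code; the function name and threshold are made up for this example):

```python
import numpy as np

def active_pixels(movie, n_std=3.0):
    """Find pixels whose intensity fluctuates strongly over time.

    movie: array of shape (time, height, width) for a single plane.
    Returns the (row, col) coordinates of candidate cell pixels.
    """
    std_map = movie.std(axis=0)  # per-pixel temporal standard deviation
    # Hypothetical threshold: n_std deviations above the typical std
    thresh = std_map.mean() + n_std * std_map.std()
    return np.argwhere(std_map > thresh)

# Synthetic movie: flat noise plus one strongly flickering pixel
rng = np.random.default_rng(0)
movie = rng.normal(100, 1, size=(200, 32, 32))
movie[:, 10, 20] += 50 * np.sin(np.arange(200))

print(active_pixels(movie))  # → [[10 20]]
```

Active neurons fire, so their pixels vary much more over time than background, which is what makes this simple metric work.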
In this blog post I'll talk about the GUI I made for masking the registered frames.
I was showing a cancer biologist an awesome piece of open-source software called CellPose, by a previous collaborator, Carsen Stringer. It includes a GUI for segmenting cell shapes from images and sending them to a server as training data for a collective deep learning model for cell segmentation: http://www.cellpose.org/ (see the GitHub icon). The CellPose GUI is written with pyqtgraph. I had worked with pyqtgraph in the context of a calcium imaging pipeline project: https://github.com/MouseLand/suite2p/pull/520/commits/f15239e52715466f8e669567c5866714dcd8e885 So seeing Carsen's GUI made me think: hey, I can make my own GUI for fish segmentation. Another option would have been Fiji, but I wanted full control and an all-Python pipeline. So I started digging into Carsen's code as a pointer for how to make a GUI suitable for my needs.
There was a lot of stuff in Carsen's code that I didn't need. I spent 3 days just going through it: https://github.com/MouseLand/cellpose/blob/master/cellpose/gui.py https://github.com/MouseLand/cellpose/blob/master/cellpose/guiparts.py I started copying the parts that I thought were relevant. First I made the GUI show a black screen, and then I made it display an image. Then I found a simple example in the pyqtgraph examples http://www.pyqtgraph.org/ (there are many examples to look through; for a menu of them, run: python -m pyqtgraph.examples) that made everything click! There's this thing called the DrawKernel, and drawing shapes on the image works through it. You can add a drag event and everything works like magic; you just have to store the points that are acquired. With that, I had a basic implementation for drawing shapes on my image.
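The point storage boils down to accumulating the pixel coordinates visited during a drag and closing the contour at the end. A toy sketch of that bookkeeping (the class and method names are hypothetical, not the actual GUI's; in the real GUI the coordinates come from pyqtgraph's mouse-drag events on the ImageItem):

```python
class ContourCollector:
    """Accumulates the pixel coordinates visited while the user drags.

    In the real GUI these calls would be driven by pyqtgraph drag
    events; the names here are made up for illustration.
    """
    def __init__(self):
        self.points = []

    def on_drag(self, row, col):
        # Skip duplicates when the cursor lingers on one pixel
        if not self.points or self.points[-1] != (row, col):
            self.points.append((row, col))

    def close_contour(self):
        # Close the polygon by returning to the first point
        if self.points and self.points[0] != self.points[-1]:
            self.points.append(self.points[0])
        return self.points

c = ContourCollector()
for p in [(1, 1), (1, 2), (1, 2), (2, 3)]:
    c.on_drag(*p)
print(c.close_contour())  # → [(1, 1), (1, 2), (2, 3), (1, 1)]
```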
The next thing I needed was an algorithm to get the area within the drawn shape. I simply googled and found these: https://stackoverflow.com/questions/50847827/how-can-i-select-the-pixels-that-fall-within-a-contour-in-an-image-represented-b https://stackoverflow.com/questions/31542843/inpolygon-for-python-examples-of-matplotlib-path-path-contains-points-method I adopted the approach and it worked like magic.
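The approach from those answers boils down to testing every pixel coordinate against the polygon with matplotlib's Path.contains_points. A sketch (the function name is mine, not from the answers):

```python
import numpy as np
from matplotlib.path import Path

def pixels_inside(contour, shape):
    """Boolean mask of the pixels that fall inside a closed contour.

    contour: sequence of (row, col) vertices; shape: (height, width).
    """
    # Build the (row, col) coordinate of every pixel in the image
    rows, cols = np.mgrid[:shape[0], :shape[1]]
    coords = np.column_stack([rows.ravel(), cols.ravel()])
    # Test all pixels against the polygon at once
    inside = Path(contour).contains_points(coords)
    return inside.reshape(shape)

mask = pixels_inside([(1, 1), (1, 6), (6, 6), (6, 1)], (8, 8))
print(mask[3, 3], mask[0, 0])  # → True False
```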
There was some work to do creating the buttons and linking them with functions implementing the behavior I needed. It's really simple to create buttons and connect them to your functions with pyqtgraph. I also made a textbox listing the planes that have already been segmented, plus the logic for sorting and updating it and deleting an entry when a segmentation is cleared. I should mention that my implementation was based on three numpy arrays: one containing the original data, one for the masked data, and one for storing the masks themselves.
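The textbox bookkeeping is essentially maintaining a sorted list of segmented plane indices; a toy sketch (function names are mine, not the GUI's):

```python
def add_plane(planes, idx):
    """Record that plane idx has been segmented, keeping the list sorted."""
    if idx not in planes:
        planes.append(idx)
        planes.sort()
    return planes

def clear_plane(planes, idx):
    """Remove plane idx when its segmentation is cleared."""
    if idx in planes:
        planes.remove(idx)
    return planes

planes = []
for idx in [5, 2, 5, 9]:
    add_plane(planes, idx)
print(planes)  # → [2, 5, 9]
print(clear_plane(planes, 5))  # → [2, 9]
```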
Next I messed with the layout and found a nice color map and contrast for the image.
Finally, I wrote code to apply the mask from the first timepoint to the rest of the frames and save the results.
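Applying one mask per plane to every timepoint is a single broadcast with numpy. A sketch under the assumption that masked pixels are zeroed out (the repo has the actual code):

```python
import numpy as np

def apply_masks(data, masks):
    """Zero out masked pixels in every frame.

    data:  (time, planes, height, width) recording.
    masks: (planes, height, width) boolean array, True = masked out.
    """
    # Broadcasting aligns masks against the trailing three axes of data,
    # so each plane's mask is applied across all timepoints at once.
    return np.where(masks[None, ...], 0, data)

data = np.ones((4, 2, 3, 3))
masks = np.zeros((2, 3, 3), dtype=bool)
masks[0, 1, 1] = True  # mask one pixel in plane 0

out = apply_masks(data, masks)
print(out[:, 0, 1, 1].sum(), out.sum())  # → 0.0 68.0
```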
Here's the code: https://github.com/mariakesa/ZebraFishRegistrationPipeline/blob/master/mask_gui.py
It took me 2 weeks to go from 0 to the current implementation including the time I took to read Carsen's code.