FAQ


Q: How can we cite you?

A: If you obtain useful results from this tool, you can cite our applications note in Bioinformatics. The underlying model is based on Mask R-CNN, which you can cite using this paper here. The Broad Bioimage dataset used to train this model can be found here.



Q: How are the output files formatted?

A: Each output segmentation is stored as an integer-labeled image. A value of 0 corresponds to the background, and each subsequent integer corresponds to a unique cell (e.g. pixels with value 1 belong to the first cell, value 2 to the second cell, and so on). For examples of how to load these files into downstream analyses in Python, MATLAB, or R, check out the scripts on our example postprocessing scripts page.
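As a minimal sketch of working with this format in Python, the snippet below builds a tiny hypothetical label mask by hand (in practice you would load your output file with a reader such as imageio.imread) and shows how to count cells and measure their areas using the 0 = background convention:

```python
import numpy as np

# Hypothetical 4x4 label mask in the format described above.
# In practice, load your real output, e.g.: mask = imageio.imread("mask.png")
mask = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 0, 0],
    [2, 2, 0, 3],
])

# Cell IDs are all nonzero labels; 0 is background.
cell_ids = np.unique(mask)
cell_ids = cell_ids[cell_ids != 0]
print("number of cells:", len(cell_ids))  # 3

# Boolean mask and pixel area for each cell.
for cid in cell_ids:
    area = int((mask == cid).sum())
    print(f"cell {cid}: {area} pixels")
```

The same idea (select pixels equal to a given label) carries over directly to MATLAB and R.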



Q: I have hundreds of images, or very large images. How can I run your tool?

A: We limit our web tool to a maximum of 10 images at once to reduce strain on our server. If you need to run more images than our web tool can handle, try downloading our user-friendly Python code from our GitHub to run on your own local machine.



Q: Do you store or use any of the images I submit?

A: Your images are retained for four hours, after which they are automatically wiped from our servers. We do not collect or store any images long-term, and we will not use your images for any purposes on our end. If you get results that you would like to share, we encourage you to contact us, and with your consent we can post them to our gallery.



Q: My image seems to be taking a long time to run. Is there a way to speed it up?

A: Try resizing your image to make it smaller. Usually, this will not drastically affect the accuracy of the segmentations, but it will make your run much faster. If you see your job sitting at the "image waiting in queue" stage for a long time, that probably means that there are a lot of requests on our server. Try running your jobs at another time, if possible.
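As one way to do the resizing in Python, the sketch below uses the Pillow library to downscale an image by half per side, which cuts the pixel count by 4x; the in-memory stand-in image and filenames here are placeholders, so substitute your own file:

```python
from PIL import Image

# Stand-in image for illustration; replace with Image.open("your_image.tif").
img = Image.new("L", (1024, 768))

# Halving each side reduces the pixel count by 4x, which usually speeds up
# segmentation without drastically hurting accuracy.
w, h = img.size
small = img.resize((w // 2, h // 2), Image.LANCZOS)
print(small.size)  # (512, 384)

# Save the smaller image and submit it instead, e.g.:
# small.save("your_image_small.tif")
```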



Q: What's the technology behind the server?

A: We use a model trained by the Deep Retina team, who won third place in the 2018 Kaggle Data Science Bowl on nuclear segmentation. The model was originally trained to segment images of mostly human nuclei across different imaging modalities, but we found that it transferred very well to yeast cells without any fine-tuning, and we adopted it as the go-to segmentation tool in our own lab. Given how useful it was for our own research, we decided to build a web tool to make it accessible to the wider yeast community.



Q: Who maintains this tool?

A: The Moses Lab at the University of Toronto. If you have any questions, comments, or feedback, feel free to contact us.