FAQ

Pipsqueak Pro is better, faster, and more accurate

Pipsqueak's methods are published and proven to increase the accuracy and reliability of your data while reducing tedious, time-consuming manual cell detection. Pipsqueak Pro takes it to the next level.

Frequently asked questions

Automated cell detection and quantification has been shown in published studies to improve both intra- and inter-rater reliability. Don't let bias and errors corrupt your data.

A lot is improving with Pipsqueak. Here are some answers to questions that you might have:

Is Pipsqueak open-source?

Clarity is everything in research. The Pipsqueak methods have been peer-reviewed and published, and FIJI analysis code is open and accessible.

Is Pipsqueak Basic going to remain free?

Yes. FIJI-based Pipsqueak (the old PIPSQUEAK) isn’t going anywhere and we will continue to maintain the code with periodic updates.

Why isn’t Pipsqueak Pro free?

Our pre-trained computer vision models make it easy to select cells and biomarkers for counting and quantification. It'll make you wish you could have back those hours (or years) you spent manually counting cells.

To make it work, our machine learning engine is hosted on high-power and widely-available servers around the world so that your images are analyzed quickly and accurately. We employ a team of developers and data scientists to continually improve and maintain our AI models.

Our goal is to make Pipsqueak AI the best and most accurate histological tool available while keeping it affordable to use. That's why Pipsqueak AI costs so much less than other "AI" image analysis software.

Can I use Pipsqueak Pro without internet?

Our machine learning models require powerful servers to quickly process your images. (Just like Alexa or Siri, but hopefully better than Siri…) Currently, that means you need to stay connected to the internet so that Pipsqueak can process your images and display your results. Disconnected processing may become available as we continue to develop new features and capabilities.

Are my data and images safe?

Yes. Pipsqueak Pro runs on a network of AWS servers around the world that use the highest level of security and encryption when processing your images. Your images and data belong to you and are never visible or available to other users.

Does Pipsqueak use ImageJ/Fiji?

Analysis software is great, if you know what it is doing. Our legacy open-source code (PIPSQUEAK, Pipsqueak Basic, and Pipsqueak AI) was based on ImageJ/FIJI’s reliable and trusted image quantification. Today, Pipsqueak Pro is built on a custom Java interface that enables faster, easier image analysis. Our quantification algorithms use a combination of widely-used OpenCV and ImageJ/FIJI resources, meaning that you can be confident about the quantification Pipsqueak Pro returns.

Technical FAQ

Will the AI learn with experience, and will that affect data reproducibility? Will improvements in AI accuracy cause data to vary over time?

We are constantly discussing when and how to make changes to the AI's performance. The short answer is that the AI models are locked and will not drift on their own. We periodically make slight modifications to the models, which are rigorously tested internally before being released. In general, though, the AI only helps detect cells and does not alter the measurements collected.

Pipsqueak's user verification step (following the return of the AI's predicted cells) is important for ensuring the accuracy of the cells being measured. As we continue to improve the detection models, the user will need to help less and less with detection.

The longer answer is that we recognize that many of us trust the AI's detections and want to reduce the amount of verification required when using Pipsqueak. Because of that, any modification we make to the detection model will have an impact on unsupervised analysis. There will be a point in the future when a user does not need to verify the cell detections. For now, it is a bit like trusting a Tesla to drive fully autonomously: the AI makes driving safer/easier/cooler, but you probably shouldn't take a nap behind the wheel (quite yet)…

We are already developing batch processing features that will let users run the AI on hundreds of images at once (think level-4-ish autonomous driving). When we get there, the AI models will not only be locked so that they cannot drift or learn, but users will also be able to select the version of the model they are using (or even tune their own model to be perfect for their images). Until then, we think it is a good idea to double-check the cell detections, even if just briefly.

Can I customize background or analysis parameters?

Yes. Everything from background subtraction to ROI selection can be customized and checked.

Does it matter if summed images are 32-bit and not 8-bit?

Bit depth is a limitation that we recently moved past. You can now analyze color or grey images of any bit depth (or image format), but the pixel intensity measurement will be collected from an automatically generated 32-bit image. This standardizes the value range of intensity measurements, following the Slaker et al., 2016 methods. If you would like us to add an additional approach (or more flexibility), let us know!
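As a rough sketch of that conversion step (the function name and mask handling here are illustrative, not Pipsqueak's actual code): whatever the source bit depth, the pixels are cast to 32-bit float before the mean intensity is taken.

```python
import numpy as np

def measure_mean_intensity(image, mask):
    """Convert the input to 32-bit float before measuring, so intensity
    values share a consistent range regardless of source bit depth.
    `mask` is a boolean array marking the ROI pixels to measure."""
    img32 = image.astype(np.float32)        # 8-, 16-, or 32-bit in; float32 out
    return float(np.nanmean(img32[mask]))   # NaN background pixels are ignored

# An 8-bit image measured through a mask covering three pixels
img = np.array([[10, 20], [30, 40]], dtype=np.uint8)
mask = np.array([[True, True], [False, True]])
print(measure_mean_intensity(img, mask))  # mean of 10, 20, 40
```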

Does the pixel scale of my image matter for image quantification?

The size of the image and the physical scale are properties that won't directly affect cell detection or intensity measurement. However, some labs choose to report cell intensity as mean intensity/area, in which case you would need to double-check the area calculation.
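As a quick illustration of why the scale matters for per-area reporting (all numbers here are hypothetical), note that the microns-per-pixel scale enters the area calculation squared:

```python
# Hypothetical values: a microscope scale of 0.65 microns/pixel, an ROI
# covering 450 pixels, and a mean intensity returned by the analysis.
microns_per_pixel = 0.65
pixel_count = 450
area_um2 = pixel_count * microns_per_pixel ** 2   # scale enters squared
mean_intensity = 1234.5
intensity_per_area = mean_intensity / area_um2    # what some labs report
```

A wrong scale therefore distorts intensity/area by the square of the error, even though the raw intensity measurement is unaffected.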

How does Pipsqueak’s background subtraction work? What variable can I change?

Pipsqueak's multi-faceted background subtraction is designed to be highly customizable, because image-capture methods (and therefore results) vary widely from lab to lab. Our automatic background subtraction algorithm combines the rolling ball algorithm built into ImageJ with custom thresholding. We offer both automatic and manual selection for background sampling. If automatic background sampling is selected, our algorithm places 22 square ROIs uniformly around the perimeter of the image. If manual sampling is selected, the user places their own ROIs for background sampling. We recommend placing at least 5 ROIs, but the algorithm will still perform if fewer are selected.
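One way to picture the automatic placement step: walk the image perimeter at uniform spacing and drop a square ROI at each stop. The function below is a sketch of the idea only (its name, defaults, and exact placement rules are illustrative, not Pipsqueak's implementation):

```python
def perimeter_rois(width, height, n_rois=22, roi_size=20):
    """Distribute n_rois square ROIs of side roi_size uniformly along
    the image perimeter. Returns (x, y) top-left corners."""
    w, h = width - roi_size, height - roi_size   # travel range for a corner
    perimeter = 2 * (w + h)
    corners = []
    for i in range(n_rois):
        d = i * perimeter / n_rois     # distance walked along the perimeter
        if d < w:                      # top edge, moving right
            corners.append((int(d), 0))
        elif d < w + h:                # right edge, moving down
            corners.append((w, int(d - w)))
        elif d < 2 * w + h:            # bottom edge, moving left
            corners.append((w - int(d - w - h), h))
        else:                          # left edge, moving up
            corners.append((0, h - int(d - 2 * w - h)))
    return corners

rois = perimeter_rois(512, 512)   # 22 ROIs hugging the edges of a 512x512 image
```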

After background ROIs are selected, the rolling ball algorithm settings are finalized and rolling ball smoothing is performed across the entire image. The ball's radius is set to 50 pixels by default, but for best results we highly recommend changing it to roughly 1.15-1.25 times the average size of the object of interest, in pixels.
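If you want to experiment with the effect of the radius outside Pipsqueak, a rolling-ball background can be approximated by a grayscale morphological opening with a disk-shaped footprint. This is only an approximation of ImageJ's implementation, not Pipsqueak's actual code:

```python
import numpy as np
from scipy import ndimage

def rolling_ball_background(image, radius=50):
    """Approximate rolling-ball background subtraction: estimate the
    background as a grayscale opening with a disk of the given radius,
    then subtract it. Radius should be ~1.15-1.25x the average object
    size in pixels."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = x**2 + y**2 <= radius**2                 # disk structuring element
    background = ndimage.grey_opening(image.astype(np.float32), footprint=disk)
    return image.astype(np.float32) - background
```

With a well-chosen radius, objects smaller than the ball survive the subtraction while slowly varying background is flattened to zero; a radius that is too small starts eating into the objects themselves.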

After smoothing is performed, the user's selection of "high" or "low" background subtraction determines the thresholding method applied to the image. If the user selects "high" background subtraction, we discard the brightest 33% and dimmest 33% of ROIs in the background samples; if the user selects "low," the brightest 77% of ROIs are discarded. Regardless of the selection, we then calculate the mean pixel intensity and standard deviation across the remaining background ROIs. All pixels in the image dimmer than the calculated mean + two standard deviations are then set to not a number (NaN), so the suppressed background pixels do not interfere with measured cell intensity values.
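The thresholding logic described above can be sketched in a few lines. The function name and the exact percentile bookkeeping are illustrative; the key point is the mean + two standard deviations cutoff and the NaN suppression:

```python
import numpy as np

def suppress_background(image, roi_means, mode="high"):
    """Sketch of the background thresholding step.
    roi_means: mean intensity of each background-sampling ROI."""
    roi_means = np.sort(np.asarray(roi_means, dtype=np.float64))
    n = len(roi_means)
    if mode == "high":
        # drop the dimmest 33% and brightest 33% of background ROIs
        keep = roi_means[int(n * 0.33): n - int(n * 0.33)]
    else:
        # drop the brightest 77% of background ROIs
        keep = roi_means[: n - int(n * 0.77)]
    cutoff = keep.mean() + 2 * keep.std()
    out = image.astype(np.float32)
    out[out < cutoff] = np.nan   # suppressed pixels won't affect cell measurements
    return out, cutoff
```

Because the suppressed pixels become NaN rather than zero, downstream NaN-aware intensity measurements simply ignore them instead of being dragged down by them.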

What is the Lower Threshold Multiplier?

The Lower Threshold Multiplier is only used to help detect cells when you are not using the AI. Pipsqueak Basic uses thresholding to detect pixel masses and then filters those masses to fit the expected size and shape. We have found that liberal adjustment of the Lower Threshold Multiplier before and after detection can help home in on the best possible cell detection. Adjusting it will not directly affect pixel measurements.
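A minimal sketch of that threshold-then-filter approach, assuming an illustrative baseline threshold (Pipsqueak Basic's actual baseline and shape filters are not shown here):

```python
import numpy as np
from scipy import ndimage

def detect_masses(image, lower_threshold_multiplier=1.0,
                  min_area=4, max_area=400):
    """Threshold the image, label connected pixel masses, and keep
    only masses whose area fits the expected cell size. The multiplier
    scales an illustrative baseline threshold up or down."""
    baseline = image.mean() + image.std()          # illustrative baseline
    threshold = baseline * lower_threshold_multiplier
    labeled, n = ndimage.label(image > threshold)  # connected pixel masses
    areas = ndimage.sum(np.ones_like(image), labeled, np.arange(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if min_area <= a <= max_area]
    return labeled, keep
```

Raising the multiplier shrinks or splits the detected masses; lowering it merges or grows them, which is why nudging it before and after detection helps dial in the candidates without touching the pixel values themselves.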

What is ROI enlargement?

ROI Enlargement is used to systematically adjust the ROI size of a stain class. In our academic research, we typically decreased the parvalbumin ROI size by 1 pixel to make sure our ROIs were tight around the cell body, and increased our WFA ROIs by 6 pixels to make sure we captured all dendrites. Changing this variable will have an effect on the pixel measurement, but it can be deactivated by setting the value to 0 (recommended for faster analysis).
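Conceptually, ROI enlargement behaves like morphological dilation or erosion of a binary ROI mask. A sketch under that assumption (not Pipsqueak's actual implementation):

```python
import numpy as np
from scipy import ndimage

def enlarge_roi(mask, pixels):
    """Grow (positive) or shrink (negative) a boolean ROI mask by
    |pixels| iterations of dilation/erosion. A value of 0 leaves the
    ROI untouched and skips the work entirely."""
    if pixels == 0:
        return mask
    if pixels > 0:
        return ndimage.binary_dilation(mask, iterations=pixels)   # e.g. WFA +6
    return ndimage.binary_erosion(mask, iterations=-pixels)       # e.g. PV -1
```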

Can I adjust ROIs to be freeform?

Yes! Pipsqueak previously supported freeform ROI drawing; the feature was dropped when the AI began returning cell detections. You can currently draw freeform ROIs during the verification/modification step (just select that tool in the toolbar), and those ROIs will be saved and measured with the freeform shape you draw. (We're currently working on a new AI model that detects the outline of each cell, which will make all ROIs look more freeform.)