# Parameters and settings

This section describes the parameters and settings available in Convpaint. For a comprehensive description of the core `ConvpaintModel` class, please refer to the separate page.

While parameters directly influence the classification process, settings allow you to adjust the behavior of the plugin or API without affecting the results.

## Parameters

The `Param` object organizes the parameters that control the feature extraction and classification processes, and thus defines how Convpaint processes images and produces its results. These parameters can be adjusted to optimize the performance of the model for specific use cases.

| Name | Type | Description | Default |
|---|---|---|---|
| `classifier` | `str` | Path to the classifier model (if saved, otherwise `None`) | `None` |
| `multi_channel_img` | `bool` | Interpret the first dimension as channels (as opposed to z or time) | `None` |
| `normalize` | `int` | Normalization mode: 1 = no normalization, 2 = normalize stack, 3 = normalize each image | `None` |
| `image_downsample` | `int` | Factor for downscaling the image right after input (predicted classes are upsampled accordingly for output). Hint: use negative numbers for upsampling instead. | `None` |
| `seg_smoothening` | `int` | Factor for smoothening the segmentation output with a Majority filter | `None` |
| `tile_annotations` | `bool` | If `True`, extract only features of bounding boxes around annotated areas when training | `None` |
| `tile_image` | `bool` | If `True`, extract features in tiles when running predictions (for large images) | `None` |
| `use_dask` | `bool` | If `True`, use Dask for parallel processing (currently only used when tiling images) | `None` |
| `unpatch_order` | `int` | Order of interpolation for unpatching the output of patch-based feature extractors (default 1 = bilinear interpolation) | `None` |
| `fe_name` | `str` | Name of the feature extractor model | `None` |
| `fe_layers` | `list[str]` | List of layers (names or indices among available layers) to extract features from | `None` |
| `fe_use_gpu` | `bool` | Whether to use the GPU for feature extraction | `None` |
| `fe_scalings` | `list[int]` | List of scaling factors for the feature extractor, creating a pyramid of features (features are rescaled accordingly before input to the classifier) | `None` |
| `fe_order` | `int` | Interpolation order used for upscaling the features of the pyramid | `None` |
| `fe_use_min_features` | `bool` | If `True`, use the minimum number of features among all layers (simply taking the first x features) | `None` |
| `clf_iterations` | `int` | Number of iterations for the classifier | `None` |
| `clf_learning_rate` | `float` | Learning rate for the classifier | `None` |
| `clf_depth` | `int` | Depth of the classifier | `None` |
| `clf_use_gpu` | `bool` | Whether to use the GPU for the classifier (if `None`, `fe_use_gpu` is used) | `None` |
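As an aside, the `image_downsample` convention (positive factors shrink the image, negative ones enlarge it) can be sketched in a few lines of plain Python. The helper below is purely illustrative and not part of the Convpaint API:

```python
def apply_scale_factor(size: int, factor: int) -> int:
    """Illustrate the image_downsample convention: positive factors
    shrink an axis, negative factors enlarge it instead."""
    if factor is None or factor in (0, 1, -1):
        return size                 # no rescaling
    if factor > 0:
        return size // factor       # downsample by the factor
    return size * (-factor)         # negative values upsample instead

# A 512-pixel axis downsampled by 2, and upsampled with -2:
assert apply_scale_factor(512, 2) == 256
assert apply_scale_factor(512, -2) == 1024
```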

## Settings

Besides the parameters that directly influence the classification process, Convpaint offers options that should not affect the results of the classification, but let you adjust the behaviour of the plugin or API.

### “Auto segment” (plugin)

Often we want to inspect the segmentation results right away on the image we used for training. For this purpose, we added an option to the plugin that lets you segment the image automatically once training is finished.

### Layers handling (plugin, Advanced tab)

We let you adjust the behaviour of the annotation layers in the Advanced tab of the plugin settings. By default, an annotation layer is automatically added whenever you select a new image in Convpaint, overwriting any existing annotation layers. You can change this behaviour in two ways:

- Turn off the automatic addition of annotation layers; in this case, you need to manually add annotation layers after selecting a new image, e.g. through the button “Add annotation layer” in the plugin.

- Tick “Keep old layers” to back up existing annotation layers and prevent them from being overwritten; the backup layers are renamed according to the selected image and the chosen image type (e.g. multichannel).

We provide two more options for handling annotation layers:

- Clicking “Add for all selected” adds annotation layers for all images selected in the napari layer list (typically on the left side); the layers are named as described above for the backup layers.

- Ticking “Auto-select annotation layer” automatically selects the annotation layer corresponding to the currently selected image, provided the layers are named according to the conventions described above.

### `memory_mode` (API) and “Continuous training” (plugin, Advanced tab)

Oftentimes we are not done after one round of annotating and training. For this purpose, we added an option that saves the annotations as well as the corresponding features inside the `ConvpaintModel` instance. This avoids extracting features from the same pixels again, which can save time, especially when image tiling is used for training. It even makes it possible to iteratively combine features from different images.

In the API, this behaviour is controlled by the `memory_mode` parameter of the `train()` method (as well as the `get_feature_image()` method). If set to `True`, the model retains all annotations and features across training sessions, allowing for iterative training and refinement. If you use it across images, `image_ids` must be provided in order to differentiate between the images.

If `memory_mode` is turned on, features are both loaded from and saved into the `ConvpaintModel` instance. If turned off, the saved features are ignored. However, they are only discarded when you manually reset either the training features (`reset_training()`) or the classifier (`reset_classifier()`, which internally calls `reset_features()`).
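The idea behind `memory_mode` can be sketched with a toy cache. `FeatureMemory` below is purely illustrative and not Convpaint's actual implementation:

```python
class FeatureMemory:
    """Toy sketch of the memory_mode idea: cache extracted features
    per image_id so later training rounds can skip re-extraction."""

    def __init__(self):
        self._store = {}  # image_id -> features

    def get_features(self, image_id, extract_fn, memory_mode=True):
        if memory_mode and image_id in self._store:
            return self._store[image_id]   # reuse cached features
        features = extract_fn()            # expensive extraction step
        if memory_mode:
            self._store[image_id] = features
        return features

    def reset_features(self):
        self._store.clear()                # analogous to reset_training()


extractions = []
memory = FeatureMemory()
memory.get_features("img_a", lambda: extractions.append(1) or "feats")
memory.get_features("img_a", lambda: extractions.append(1) or "feats")
assert len(extractions) == 1  # the second round reused the cache
```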

In the plugin, the “Continuous training” option controls this behaviour. When enabled, the plugin runs the underlying methods with `memory_mode=True` and automatically saves all annotations and features after each training session.

With the option “Image”, the training is reset whenever you switch to a different image, while you still avoid extracting features again for the same image. If the option “Global” is selected, the model retains all annotations and features across different images. When turned “Off”, the model neither loads nor saves any annotations or features.

In the plugin, the default option is “Image”. Turn it off if this is not desired.
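How the three options relate to the stored state when switching images can be sketched as follows (a hypothetical helper for illustration, not plugin code):

```python
def on_image_switch(mode: str, memory: dict) -> None:
    """Illustrate what happens to stored annotations/features
    when switching images under each Continuous-training option."""
    if mode == "Off":
        memory.clear()   # nothing is loaded or saved in this mode
    elif mode == "Image":
        memory.clear()   # training is reset when the image changes
    elif mode == "Global":
        pass             # everything is kept across images


memory = {"img_a": "features"}
on_image_switch("Global", memory)
assert memory                      # Global keeps the stored features
on_image_switch("Image", memory)
assert not memory                  # Image resets on switch
```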

Sidenote: Using an image downsampled at different scales creates separate feature entries for each scale (as if they were different images, but without resetting the training in between when the option is set to “Image”).

### `use_dask` (API and plugin, Advanced tab)

We allow users to use Dask for parallel computing, which can significantly speed up processing, especially for large datasets. Currently, this is still a beta feature and may not be fully optimized yet. Additionally, Dask is only available for parallelizing predictions across multiple tiles when image tiling for prediction is enabled.

To use Dask, set the `use_dask` parameter to `True` in the API (`segment()` and `predict_probas()` methods) or enable the corresponding option in the Advanced tab of the plugin settings.
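The speed-up comes from predicting tiles in parallel and reassembling the result. The toy sketch below illustrates that idea with the standard library's `concurrent.futures` as a stand-in for Dask; the tiling helper and the dummy per-tile "model" are assumptions for illustration, not Convpaint internals:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def predict_tile(tile: np.ndarray) -> np.ndarray:
    # Stand-in for a per-tile prediction (simple thresholding as a dummy model)
    return (tile > 0.5).astype(np.uint8)


def predict_tiled(image: np.ndarray, tile_size: int) -> np.ndarray:
    # Split the image into tiles, predict each in parallel, and reassemble
    h, w = image.shape
    coords = [(y, x) for y in range(0, h, tile_size) for x in range(0, w, tile_size)]
    tiles = [image[y:y + tile_size, x:x + tile_size] for y, x in coords]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(predict_tile, tiles))
    out = np.zeros_like(image, dtype=np.uint8)
    for (y, x), res in zip(coords, results):
        out[y:y + tile_size, x:x + tile_size] = res
    return out


rng = np.random.default_rng(0)
img = rng.random((64, 64))
# Tiled parallel prediction matches the single-pass prediction:
assert np.array_equal(predict_tiled(img, 32), predict_tile(img))
```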

### `in_channels` (API and plugin, Advanced tab)

Sometimes you want to use only a subset of the input channels, or even match channels between images in which they are stored in a different order. For this, we let you specify the `in_channels` parameter in the API (in any method that takes image inputs) or the corresponding option in the Advanced tab of the plugin settings.
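As a rough illustration of the idea (plain NumPy, not the Convpaint API): selecting channels amounts to indexing the channel axis, and passing a different order per image lets you match channels between differently ordered images:

```python
import numpy as np

# A 3-channel image, channels first, where each channel holds a distinct value
img_a = np.stack([np.full((8, 8), v) for v in (0.0, 1.0, 2.0)])
img_b = img_a[::-1]                    # same channels, stored in reverse order

# Pick the matching channels from each image, in the same semantic order:
sel_a = img_a[[0, 2]]
sel_b = img_b[[2, 0]]
assert sel_a.shape == (2, 8, 8)        # only two channels remain
assert np.array_equal(sel_a, sel_b)    # both now hold channels 0 and 2
```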

### Predicting probabilities (API and plugin, Advanced tab)

Besides getting a segmentation as output, you can also obtain per-pixel probabilities for each class. This can be useful for tasks where you need to know the uncertainty of the predictions or when you want to apply custom post-processing steps based on the confidence of the model.

To get the predicted probabilities, use the `predict_probas()` method in the API or tick the corresponding output option in the Advanced tab of the plugin settings.

Sidenote: For segmentation, Convpaint internally calculates the predicted probabilities for each class and determines the most probable class to generate the final segmentation mask.
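Picking the most probable class amounts to an argmax over the class axis. A minimal NumPy sketch, where the `(n_classes, H, W)` layout and the 1-based labels are assumptions for illustration rather than Convpaint's exact output format:

```python
import numpy as np

# Per-pixel class probabilities for two classes, shape (n_classes, H, W)
probas = np.array([
    [[0.9, 0.2],
     [0.1, 0.5]],
    [[0.1, 0.8],
     [0.9, 0.5]],
])

# Most probable class per pixel; +1 assuming class labels start at 1
seg = probas.argmax(axis=0) + 1
assert seg.tolist() == [[1, 2], [2, 1]]
```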