Replies: 3 comments
A note on docopt. For example:

import sys
import docopt
cellpose_help_text = '''
usage: cellpose [-h] [--version] [--verbose] [--use_gpu] [--gpu_device GPU_DEVICE] [--check_mkl] [--dir DIR]
[--image_path IMAGE_PATH] [--look_one_level_down] [--img_filter IMG_FILTER]
[--channel_axis CHANNEL_AXIS] [--z_axis Z_AXIS] [--chan CHAN] [--chan2 CHAN2] [--invert]
[--all_channels] [--pretrained_model PRETRAINED_MODEL] [--add_model ADD_MODEL] [--unet]
[--nclasses NCLASSES] [--no_resample] [--net_avg] [--no_interp] [--no_norm] [--do_3D]
[--diameter DIAMETER] [--stitch_threshold STITCH_THRESHOLD] [--min_size MIN_SIZE] [--fast_mode]
[--flow_threshold FLOW_THRESHOLD] [--cellprob_threshold CELLPROB_THRESHOLD] [--anisotropy ANISOTROPY]
[--exclude_on_edges] [--save_png] [--save_tif] [--no_npy] [--savedir SAVEDIR] [--dir_above]
[--in_folders] [--save_flows] [--save_outlines] [--save_ncolor] [--save_txt] [--train] [--train_size]
[--test_dir TEST_DIR] [--mask_filter MASK_FILTER] [--diam_mean DIAM_MEAN]
[--learning_rate LEARNING_RATE] [--weight_decay WEIGHT_DECAY] [--n_epochs N_EPOCHS]
[--batch_size BATCH_SIZE] [--min_train_masks MIN_TRAIN_MASKS] [--residual_on RESIDUAL_ON]
[--style_on STYLE_ON] [--concatenation CONCATENATION] [--save_every SAVE_EVERY] [--save_each]
cellpose parameters
optional arguments:
-h, --help show this help message and exit
--version show cellpose version info
--verbose show information about running and settings and save to log
hardware arguments:
--use_gpu use gpu if torch with cuda installed
--gpu_device GPU_DEVICE
which gpu device to use, use an integer for torch, or mps for M1
--check_mkl check if mkl working
input image arguments:
--dir DIR folder containing data to run or train on.
--image_path IMAGE_PATH
if given and --dir not given, run on single image instead of folder (cannot train with this
option)
--look_one_level_down
run processing on all subdirectories of current folder
--img_filter IMG_FILTER
end string for images to run on
--channel_axis CHANNEL_AXIS
axis of image which corresponds to image channels
--z_axis Z_AXIS axis of image which corresponds to Z dimension
--chan CHAN channel to segment; 0: GRAY, 1: RED, 2: GREEN, 3: BLUE. Default: 0
--chan2 CHAN2 nuclear channel (if cyto, optional); 0: NONE, 1: RED, 2: GREEN, 3: BLUE. Default: 0
--invert invert grayscale channel
--all_channels use all channels in image if using own model and images with special channels
model arguments:
--pretrained_model PRETRAINED_MODEL
model to use for running or starting training
--add_model ADD_MODEL
model path to copy model to hidden .cellpose folder for using in GUI/CLI
--unet run standard unet instead of cellpose flow output
--nclasses NCLASSES if running unet, choose 2 or 3; cellpose always uses 3
algorithm arguments:
--no_resample disable dynamics on full image (makes algorithm faster for images with large diameters)
--net_avg run 4 networks instead of 1 and average results
--no_interp do not interpolate when running dynamics (was default)
--no_norm do not normalize images (normalize=False)
--do_3D process images as 3D stacks of images (nplanes x nchan x Ly x Lx)
--diameter DIAMETER cell diameter, if 0 will use the diameter of the training labels used in the model, or with
built-in model will estimate diameter for each image
--stitch_threshold STITCH_THRESHOLD
compute masks in 2D then stitch together masks with IoU>0.9 across planes
--min_size MIN_SIZE minimum number of pixels per mask, can turn off with -1
--fast_mode now equivalent to --no_resample; make code run faster by turning off resampling
--flow_threshold FLOW_THRESHOLD
flow error threshold, 0 turns off this optional QC step. Default: 0.4
--cellprob_threshold CELLPROB_THRESHOLD
cellprob threshold, default is 0, decrease to find more and larger masks
--anisotropy ANISOTROPY
anisotropy of volume in 3D
--exclude_on_edges discard masks which touch edges of image
output arguments:
--save_png save masks as png and outlines as text file for ImageJ
--save_tif save masks as tif and outlines as text file for ImageJ
--no_npy suppress saving of npy
--savedir SAVEDIR folder to which segmentation results will be saved (defaults to input image directory)
--dir_above save output folders adjacent to image folder instead of inside it (off by default)
--in_folders flag to save output in folders (off by default)
--save_flows whether or not to save RGB images of flows when masks are saved (disabled by default)
--save_outlines whether or not to save RGB outline images when masks are saved (disabled by default)
--save_ncolor whether or not to save minimal "n-color" masks (disabled by default)
--save_txt flag to enable txt outlines for ImageJ (disabled by default)
training arguments:
--train train network using images in dir
--train_size train size network at end of training
--test_dir TEST_DIR folder containing test data (optional)
--mask_filter MASK_FILTER
end string for masks to run on. use "_seg.npy" for manual annotations from the GUI. Default:
_masks
--diam_mean DIAM_MEAN
mean diameter to resize cells to during training -- if starting from pretrained models it cannot
be changed from 30.0
--learning_rate LEARNING_RATE
learning rate. Default: 0.2
--weight_decay WEIGHT_DECAY
weight decay. Default: 1e-05
--n_epochs N_EPOCHS number of epochs. Default: 500
--batch_size BATCH_SIZE
batch size. Default: 8
--min_train_masks MIN_TRAIN_MASKS
minimum number of masks a training image must have to be used. Default: 5
--residual_on RESIDUAL_ON
use residual connections
--style_on STYLE_ON use style vector
--concatenation CONCATENATION
concatenate downsampled layers with upsampled layers (off by default which means they are added)
--save_every SAVE_EVERY
number of epochs to skip between saves. Default: 100
--save_each save the model under a different filename per --save_every epoch for later comparsion
'''
def main():
    args = docopt.docopt(cellpose_help_text, version='0.0.1')

if __name__ == '__main__':
    main()

See the homepage for more. The official repo is not well maintained, but this fork is, and is called
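For comparison, help text in this same format is what argparse produces. A minimal, hypothetical sketch covering just a few of the flags above (defaults here are illustrative; this is not the real cellpose parser):

```python
import argparse

def build_parser():
    # Sketch only: a handful of the cellpose flags shown above,
    # with illustrative defaults.
    parser = argparse.ArgumentParser(prog='cellpose',
                                     description='cellpose parameters')
    parser.add_argument('--use_gpu', action='store_true',
                        help='use gpu if torch with cuda installed')
    parser.add_argument('--dir',
                        help='folder containing data to run or train on')
    parser.add_argument('--chan', type=int, default=0,
                        help='channel to segment; 0: GRAY, 1: RED, 2: GREEN, 3: BLUE')
    parser.add_argument('--diameter', type=float, default=30.0,
                        help='cell diameter; 0 means estimate per image')
    return parser

args = build_parser().parse_args(['--use_gpu', '--chan', '2', '--diameter', '0'])
```

Calling `build_parser().print_help()` yields the same `optional arguments:` layout that docopt is asked to consume above.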
Thanks! The whole testing-strategy gist is well put. At the moment, a lot of time is spent manually testing everything repeatedly, so this approach would help reduce that. One point you raised about -
I feel some portion of the logic has different behavior in handling types -
Currently, testing is implemented in the following way -
For Part 1, we have used LinkML. For Part 2, we discussed holding off on that portion until we implement a new interface, analyze what manual work we face again and again, and then implement a testing strategy for Part 2. The simple choice in mind as of now is implementing functional tests for each algorithm under its specific folder using pytest, with - The current testing folder structure is -
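As a concrete illustration of that choice, a functional test for one algorithm folder might look like this (the layout, the `build_cli_command` helper, and the expected command are all hypothetical; pytest would discover the `test_` function automatically):

```python
# Hypothetical layout: tests/<algorithm>/test_cli_command.py
def build_cli_command(algorithm, params):
    """Hypothetical helper: turn a dict of widget values into a CLI string
    using the arg=val syntax discussed in this thread."""
    parts = [algorithm]
    for name, value in params.items():
        if value is True:                              # boolean flags carry no value
            parts.append(f"--{name}")
        elif value is not False and value is not None:  # skip unset widgets
            parts.append(f"--{name}={value}")
    return " ".join(parts)

def test_cli_command_uses_arg_eq_val_syntax():
    cmd = build_cli_command("cellpose", {"use_gpu": True, "diameter": 30.0})
    assert cmd == "cellpose --use_gpu --diameter=30.0"
```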
We currently haven't built out our testing suite, but we plan to do so in the future (after schema validation and documentation).
The current workflow is this:
1. Change the `config.yml` syntax (e.g. allow the `config.yml` author to specify `arg=val` instead of the existing `arg val` syntax)
2. Modify an existing `config.yml` or create a new one, and run bilayers to build the interface
3. Use the generated Gradio/Jupyter interface to produce a CLI command (e.g. `algo_name arg=val [other stuff ...]`)
4. Check the output with `print` statements and eyeball it to make sure it looks right

Creating tests in Bilayers that automate the above process is a bit tricky. First, we have the change we made to the `config.yml` syntax. That can be tested via schema validation, i.e. modify the yaml schema to allow for the `arg=val` syntax. Second, we have the changes we made to the parser-generator, i.e. has the parser been properly modified to account for the new feature, and do the generator + gradio/jupyter templates generate the correct logic (such that the logic correctly transforms the widget values to a valid CLI command)? The second is harder to test, because it requires manual interaction at the interface level (step 4).

A possible solution here is to not rely on interaction with Gradio/Jupyter specifically, but instead to create a third, test-specific virtual interface. This virtual interface won't have a GUI. Instead it will be a collection of Python classes that match up with all of our interface types (e.g. `Checkbox`, `Integer`, `Files`, `Textbox`, `Radio`, `Float`, etc.). They'll also have internal methods that simulate user behavior. For instance, the `Checkbox` class will have a `selectRandom` method which will randomly "check" some amount of the options, and a `selectNth` method to select the nth checkbox.

The template for the virtual interface will have very similar logic to `generate_cli_command`/`construct_cli_command` in the Gradio and Jupyter templates (in fact we should probably consolidate the internal logic of those two into a single template that the Gradio, Jupyter, and virtual interface templates just import, so that it really will be identical).

The CLI command will not be for any real algorithm (e.g. cellpose). Instead it will be a pseudo algorithm (or perhaps a collection of them). The job of the pseudo algorithm is "simply" to take in the CLI args, parse them (via argparse, or better yet docopt, or possibly a third thing if docopt is insufficient in some way), and succeed if they are correctly specified. Note that this is different from the validation that happens at the schema validation step. It will be Python code, so it can do arbitrarily complex things like check interactions between args. The pseudo algorithm is essentially the test suite (or, alternatively, each pseudo algorithm in the collection is a unit test). For instance, the pseudo algorithm will have an argument parser that looks for `pseudo_alg [opts] arg1=val1`. It expects the required argument `arg1` to have the value `val1`, and the CLI command to use the `arg=val` syntax.

TLDR: we generate a CLI command for a pseudo algorithm based off of a simulated user interaction with a virtual interface.