USD-AI-ResearchLab/DynaFormer

DynaFormer: Dynamic Dual-Attention Transformer For Medical Image Segmentation

Proposed Model

Updates

  • December 1, 2025: Initial code release, accompanying the arXiv preprint.

How to use

The script train.py contains all the steps needed to train the network; a file list and dataloader for the Synapse dataset are also included. To choose which network to load, pass the --module argument to the train script (--module <directory>.<module_name>.<class_name>, e.g. --module networks.DynaFormer.DynaFormer).
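The dotted --module spec described above can be resolved with a small helper along these lines (a hypothetical sketch; the actual train.py may implement this differently):

```python
import importlib

def load_class(module_spec: str):
    """Resolve a dotted spec like 'networks.DynaFormer.DynaFormer'
    into a class object.

    Hypothetical helper for illustration: the last dotted segment is
    treated as the class name, everything before it as the module path.
    """
    module_path, class_name = module_spec.rsplit(".", 1)
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# Example with a standard-library class, following the same pattern:
# load_class("collections.OrderedDict") returns the OrderedDict class.
```

The same convention lets you train your own models: place a class in a module under the repository root and pass its dotted path via --module.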

Training and Testing

  1. Download the Synapse dataset from here.

  2. Run the following command to install the requirements.

    pip install -r requirements.txt

  3. Run the command below to train DynaFormer on the Synapse dataset.

    python train.py --root_path ./data/Synapse/train_npz --test_path ./data/Synapse/test_vol_h5 --batch_size 20 --eval_interval 20 --max_epochs 400 --module networks.DynaFormer.DynaFormer

    --root_path [Train data path]

    --test_path [Test data path]

    --eval_interval [Evaluation epoch]

    --module [Module name, including path (can also train your own models)]

  4. Run the command below to test DynaFormer on the Synapse dataset.

    python test.py --volume_path ./data/Synapse/ --output_dir './model_out'

    --volume_path [Root dir of the test data]

    --output_dir [Directory containing your trained weights]
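Before launching training, it can help to confirm that the data directory matches the layout the commands above expect (train_npz for training slices, test_vol_h5 for test volumes). A minimal pre-flight check, assuming those two subdirectory names:

```python
from pathlib import Path

def check_synapse_layout(root: str = "./data/Synapse") -> list:
    """Return the list of expected Synapse subdirectories missing
    under `root`; an empty list means the layout looks correct.

    The subdirectory names are taken from the --root_path and
    --test_path values used in the commands above.
    """
    expected = ["train_npz", "test_vol_h5"]
    return [d for d in expected if not (Path(root) / d).is_dir()]

if __name__ == "__main__":
    missing = check_synapse_layout()
    if missing:
        print("Missing Synapse subdirectories:", ", ".join(missing))
    else:
        print("Synapse data layout looks OK.")
```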
