Binary semantic segmentation for shadow detection using MMSegmentation. Originally created as a DeveloperWeek 2026 Hackathon project, with continued post-submission model development in this repository.
This repository contains:
- DeepLabV3+ configs for ShadowSeg/RoverShadow (`R50` baseline, `R101` candidate)
- a custom false-positive-aware loss (`ShadowFalsePositiveLoss`)
- a safe class-weighted cross-entropy variant (`SafeCrossEntropyLoss`)
- shared runtime compatibility helpers for `mmcv-lite`
- CLI tools for training, evaluation, inference, and diagnostics
- Original hackathon submission: ShadowSeg: Lighting-Aware Terrain Intelligence (Devpost)
- Origin: DeveloperWeek 2026 Hackathon
- Original team members:
- Mrinank Sivakumar (BrownAssassin)
- Arv Bali (ArvBali2101)
- Myles Liu
- Kenji Baritua
- `configs/shadow_deeplabv3plus_r50.py`: main DeepLabV3+ config
- `configs/shadow_deeplabv3plus_r101.py`: R101 candidate config
- `configs/shadow_external_segformer_b0.py`: fallback external pseudo-label model
- `configs/shadow_fcn_r50.py`: legacy FCN baseline config
- `rovershadow/runtime/mmcv_ops_shim.py`: shared `mmcv.ops` shim logic
- `rovershadow/losses/shadow_false_positive_loss.py`: custom loss module
- `rovershadow/pseudo_labeling/*`: external-only pseudo-labeling pipeline modules
- `tools/train_shadow.py`: reproducible training CLI
- `tools/eval_shadow.py`: public/private evaluation CLI
- `tools/prepare_render_domain_data.py`: external-only render integration pipeline
- `tools/verify_dataset_integrity.py`: dataset integrity verification CLI
- `tools/export_private_triptychs.py`: side-by-side diagnostic export
- `run_infer.py`: single-image inference CLI
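The false-positive-aware loss lives in `rovershadow/losses/shadow_false_positive_loss.py`. As a hedged, pure-Python toy of the general idea (an extra multiplicative penalty on false-positive pixels, i.e. confident shadow predictions where the ground truth is background), not the repository's actual implementation:

```python
import math

def fp_aware_bce(probs, targets, fp_weight=2.0, eps=1e-7):
    """Toy per-pixel binary cross-entropy with an extra weight on
    false-positive pixels (high shadow probability, background GT).
    Illustration only; the real ShadowFalsePositiveLoss may differ."""
    total = 0.0
    for p, t in zip(probs, targets):
        p = min(max(p, eps), 1.0 - eps)
        loss = -(t * math.log(p) + (1 - t) * math.log(1 - p))
        if t == 0 and p > 0.5:  # false-positive pixel: up-weight it
            loss *= fp_weight
        total += loss
    return total / len(probs)

# A confident false positive costs more than the mirror-image false negative:
print(fp_aware_bce([0.9], [0]) > fp_aware_bce([0.1], [1]))  # → True
```

The asymmetry is the point: the loss trades some recall for fewer spurious shadow detections.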
Use one of the pinned requirement files in this repo:
- CPU baseline: `requirements-cpu.txt`
- GPU baseline (CUDA 13.0): `requirements-gpu-cu130.txt`
Example install (GPU):
```
python3.10 -m pip install -r requirements-gpu-cu130.txt --index-url https://download.pytorch.org/whl/cu130
```

Use `python3.10` for all commands below to avoid accidentally using a different Python install.
Expected dataset paths:
- public train images: `data/public/Rover_Shadow_Public_Dataset/ShadowImages/train`
- public train masks: `data/public/Rover_Shadow_Public_Dataset/ShadowMasks/train`
- public val images: `data/public/Rover_Shadow_Public_Dataset/ShadowImages/val`
- public val masks: `data/public/Rover_Shadow_Public_Dataset/ShadowMasks/val`
- private holdout images: `data/private/LunarShadowDataset/ShadowImages`
- private holdout masks: `data/private/LunarShadowDataset/ShadowMasks`
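A quick preflight is to confirm each expected directory exists before training. A minimal standard-library sketch (the bundled `tools/verify_dataset_integrity.py` does a deeper check):

```python
from pathlib import Path

# Expected dataset directories, relative to the repo root.
EXPECTED_DIRS = [
    "data/public/Rover_Shadow_Public_Dataset/ShadowImages/train",
    "data/public/Rover_Shadow_Public_Dataset/ShadowMasks/train",
    "data/public/Rover_Shadow_Public_Dataset/ShadowImages/val",
    "data/public/Rover_Shadow_Public_Dataset/ShadowMasks/val",
    "data/private/LunarShadowDataset/ShadowImages",
    "data/private/LunarShadowDataset/ShadowMasks",
]

def missing_dirs(repo_root="."):
    """Return the expected dataset directories that do not exist yet."""
    root = Path(repo_root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    for d in missing_dirs():
        print(f"missing: {d}")
```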
Exported best model artifact:
- best checkpoint: `artifacts/best_private_model/iter_11000.pth`
- paired metrics: `artifacts/best_private_model/metrics_*.json`
The private data is an evaluation-only holdout:
- do model selection on public train/val only
- run private evaluation as a final locked check
- do not repeatedly tune hyperparameters on private metrics
Standard DeepLabV3+ R50 run:
```
python3.10 tools/train_shadow.py --config configs/shadow_deeplabv3plus_r50.py --work-dir work_dirs/shadow_deeplabv3plus_r50_exp1 --max-iters 12000 --val-interval 1000 --device cuda
```

Short sweep run example:

```
python3.10 tools/train_shadow.py --config configs/shadow_deeplabv3plus_r50.py --work-dir work_dirs/shadow_deeplabv3plus_r50_sweep_a --max-iters 2000 --val-interval 1000 --checkpoint-interval 1000 --device cuda --lr 0.01
```

R101 candidate run example:

```
python3.10 tools/train_shadow.py --config configs/shadow_deeplabv3plus_r101.py --work-dir work_dirs/shadow_deeplabv3plus_r101_candidate --max-iters 8000 --val-interval 1000 --device cuda
```

Fast plumbing smoke (skip validation):

```
python3.10 tools/train_shadow.py --config configs/shadow_deeplabv3plus_r50.py --work-dir work_dirs/shadow_deeplabv3plus_r50_smoke --max-iters 1 --checkpoint-interval 1 --device cuda --no-validate
```

Public validation:

```
python3.10 tools/eval_shadow.py --config configs/shadow_deeplabv3plus_r50.py --ckpt work_dirs/shadow_deeplabv3plus_r50_exp1/iter_12000.pth --split public-val --device cuda --save-json work_dirs/shadow_deeplabv3plus_r50_exp1/public_val_metrics.json
```

Private holdout final check:

```
python3.10 tools/eval_shadow.py --config configs/shadow_deeplabv3plus_r50.py --ckpt work_dirs/shadow_deeplabv3plus_r50_exp1/iter_12000.pth --split private --device cuda --tta flip-ms --shadow-threshold 0.55 --save-json work_dirs/shadow_deeplabv3plus_r50_exp1/private_metrics.json
```

Single image inference:

```
python3.10 run_infer.py --img data/public/Rover_Shadow_Public_Dataset/ShadowImages/val/lssd4000.jpg --cfg configs/shadow_deeplabv3plus_r50.py --ckpt work_dirs/shadow_deeplabv3plus_r50_exp1/iter_12000.pth --device cuda --out outputs/demo_result.png
```

Export private triptychs (image / GT / prediction) and overlay triptychs:

```
python3.10 tools/export_private_triptychs.py --cfg configs/shadow_deeplabv3plus_r50.py --ckpt work_dirs/shadow_deeplabv3plus_r50_exp1/iter_12000.pth --out-dir work_dirs/private_triptychs --out-overlay-dir work_dirs/private_triptychs_overlay --tta flip-ms --shadow-threshold 0.55 --device cuda
```

Dry-run preflight (no public/render mutations):

```
python3.10 tools/prepare_render_domain_data.py --render-root data/render --public-root data/public/Rover_Shadow_Public_Dataset --external-model auto --device cuda --split-ratio 0.9 --seed 42 --qa-samples 200 --archive-root data/archive --workspace data/_staging_render --dry-run
```

Full integration run:

```
python3.10 tools/prepare_render_domain_data.py --render-root data/render --public-root data/public/Rover_Shadow_Public_Dataset --external-model auto --device cuda --split-ratio 0.9 --seed 42 --qa-samples 200 --archive-root data/archive --workspace data/_staging_render
```

Fallback-path smoke (simulate failed downloads, train fallback external model, skip merge):

```
python3.10 tools/prepare_render_domain_data.py --render-root data/render --public-root data/public/Rover_Shadow_Public_Dataset --simulate-download-failure --fallback-only-smoke --fallback-max-iters 1 --fallback-val-interval 1 --calibration-max-images 20 --device cuda --workspace data/_staging_render_smoke
```

Optional explicit external checkpoint:

```
python3.10 tools/prepare_render_domain_data.py --render-root data/render --public-root data/public/Rover_Shadow_Public_Dataset --external-weights path/to/external_model.pth --device cuda
```

Post-run integrity verification:

```
python3.10 tools/verify_dataset_integrity.py --public-root data/public/Rover_Shadow_Public_Dataset --workspace data/_staging_render
```

Split-based normalization (train, val):

```
python3.10 tools/fix_masks_to_01.py
```

Flat private folder normalization:

```
python3.10 tools/fix_private_masks_to_01.py
```

`tools/eval_shadow.py` reports:
- `IoU_background`: Intersection over Union for class `background`
- `IoU_shadow`: Intersection over Union for class `shadow`
- `mIoU`: mean Intersection over Union across classes
- `Acc_background`: per-class pixel accuracy for `background`
- `Acc_shadow`: per-class pixel accuracy for `shadow`
- `mAcc`: mean per-class pixel accuracy
- `aAcc`: all-pixel (global) accuracy
- `public_proxy_score`: `0.6 * mIoU + 0.4 * harmonic(IoU_background, IoU_shadow)`
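The `public_proxy_score` formula can be computed directly from the two per-class IoUs. A minimal sketch, assuming `harmonic` denotes the standard harmonic mean of the two IoU values:

```python
def harmonic(a, b, eps=1e-12):
    """Harmonic mean of two non-negative scores."""
    return 2.0 * a * b / (a + b + eps)

def public_proxy_score(iou_background, iou_shadow):
    """0.6 * mIoU + 0.4 * harmonic(IoU_background, IoU_shadow)."""
    miou = (iou_background + iou_shadow) / 2.0
    return 0.6 * miou + 0.4 * harmonic(iou_background, iou_shadow)

# With equal per-class IoUs, both terms coincide:
print(public_proxy_score(0.8, 0.8))  # ≈ 0.8
```

The harmonic term penalizes class imbalance: a model with IoUs (0.9, 0.3) scores lower than one with balanced (0.6, 0.6), even though the mIoU is the same.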
Static check:
```
python3.10 -m compileall run_infer.py tools rovershadow configs
```

This repo is set up to:
- ignore generated experiment folders (`work_dirs/`) and staging/archive folders under `data/`
- include canonical datasets under `data/public/` and `data/private/`
- include curated model artifact(s) under `artifacts/`
Large files in this project (datasets and `.pth` checkpoints) should be pushed with Git LFS. This repo includes `.gitattributes` rules for that.
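The repo's own `.gitattributes` is authoritative; rules along these lines (illustrative patterns, assuming checkpoints and dataset files are LFS-tracked, not necessarily verbatim) are what the above implies:

```
*.pth filter=lfs diff=lfs merge=lfs -text
data/public/** filter=lfs diff=lfs merge=lfs -text
data/private/** filter=lfs diff=lfs merge=lfs -text
```

Files matching these patterns must be added after `git lfs install` so the LFS filter applies on commit.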
One-time local setup:
```
git lfs install
```

Recommended pre-push checks:
```
python3.10 -m compileall run_infer.py tools rovershadow configs
python3.10 tools/verify_dataset_integrity.py --public-root data/public/Rover_Shadow_Public_Dataset --workspace data/_staging_render
```

Example Git bootstrap (source + datasets + best artifact):
```
git init
git add .gitignore .gitattributes README.md requirements-cpu.txt requirements-gpu-cu130.txt run_infer.py configs rovershadow tools artifacts data/public data/private
git commit -m "Initial RoverShadow source commit"
```

`mmcv-lite` is used in this project; missing compiled ops are handled by `rovershadow/runtime/mmcv_ops_shim.py`. `TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD=1` is set by runtime helpers for compatibility with MMEngine checkpoint loading behavior.
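The environment-variable compatibility note boils down to a one-line guard. A minimal sketch of how a runtime helper might set it (the actual helper in `rovershadow/runtime/` may do more):

```python
import os

# Opt out of torch.load's weights_only default so MMEngine can
# unpickle full checkpoint objects. Must run before checkpoint loading.
os.environ.setdefault("TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD", "1")

print(os.environ["TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD"])  # → 1
```

Using `setdefault` keeps an explicit user-provided value intact while still supplying the compatibility default.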