ProjectCARE: Suicide Prevention from Tissue Damage Images

Artificial Intelligence
Computer Vision
AI4SocialGood
An NIH-funded clinical decision-support tool that automates the assessment of tissue damage in self-injury images for suicide-risk prediction, with the potential to lower suicide rates. Visual assessment of tissue damage is a high-stakes setting that demands robust and reliable CV models. The project produced a reliable suicide-risk metric and tissue-damage detection that outperforms trained human annotators (mAP = 0.81 vs. 0.64). A manuscript on this project is in preparation, and we are currently developing an NIH R01 application to integrate the tool into electronic health records. In collaboration with Harvard University and Massachusetts General Hospital.
Technologies Used: OpenMMLab, RNNs, PyTorch, OpenCV.
My Role:
- Designed multiple components for the REDCap survey to collect participant data.
- Web-scraped 9000 images of human limbs from Google Images, Flickr, and Shutterstock, and manually verified every image.
- Trained a two-step detector composed of custom convolution-based and transformer-based object detectors (see the inference sketch after this list).
- Built end-to-end data-processing pipelines covering all participant data (30k images); one preprocessing stage is sketched below.
- Trained classifiers to 84% accuracy on scar attributes (injury method, wound type, etc.) using self-supervised learning; see the pretraining sketch below.
- Harvested false positives with the two-step detector and fed them back as hard negatives to fine-tune it (mining sketch below).
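
The two-step pass can be wired up roughly as follows. This is a minimal sketch assuming MMDetection 2.x's `init_detector`/`inference_detector` API and a single-class result layout; the config and checkpoint paths and the thresholds are illustrative assumptions, not the project's actual files.

```python
# Two-step detection sketch: a limb detector proposes regions, then a wound
# detector runs on each cropped region (MMDetection 2.x API assumed; paths
# and thresholds are hypothetical).
import cv2
from mmdet.apis import init_detector, inference_detector

limb_det = init_detector("configs/limb_rcnn.py", "ckpts/limb.pth", device="cuda:0")
wound_det = init_detector("configs/wound_detr.py", "ckpts/wound.pth", device="cuda:0")

def detect_wounds(img_path, limb_thr=0.5, wound_thr=0.3):
    img = cv2.imread(img_path)
    detections = []
    # Step 1: localize limbs (class 0 of a single-class detector).
    for x1, y1, x2, y2, score in inference_detector(limb_det, img)[0]:
        if score < limb_thr:
            continue
        crop = img[int(y1):int(y2), int(x1):int(x2)]
        if crop.size == 0:
            continue
        # Step 2: detect wounds inside the crop, mapped back to image coords.
        for wx1, wy1, wx2, wy2, ws in inference_detector(wound_det, crop)[0]:
            if ws >= wound_thr:
                detections.append((wx1 + x1, wy1 + y1, wx2 + x1, wy2 + y1, ws))
    return detections
```

Restricting the second detector to limb crops keeps the wound model's search space small, which is one common motivation for a two-step design.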
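
One stage of the data-processing pipeline might look like the sketch below: decode, resize to a fixed long side, and re-encode under a neutral name, which also drops EXIF metadata from participant uploads. The directory layout and target size are assumptions.

```python
# Hedged preprocessing sketch: normalize image size and file naming.
# Re-encoding with OpenCV writes no EXIF, which helps de-identify uploads.
from pathlib import Path
import cv2

def preprocess(src_dir: str, dst_dir: str, long_side: int = 1024) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, p in enumerate(sorted(Path(src_dir).glob("*.jpg"))):
        img = cv2.imread(str(p))
        if img is None:  # skip unreadable or corrupt files
            continue
        h, w = img.shape[:2]
        scale = long_side / max(h, w)
        img = cv2.resize(img, (round(w * scale), round(h * scale)))
        cv2.imwrite(str(out / f"{i:06d}.jpg"), img)
```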
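
The self-supervised pretraining could follow a SimCLR-style contrastive objective, as sketched below; SimCLR, the ResNet-18 backbone, and every hyperparameter are assumptions standing in for the project's actual recipe. The scar-attribute classifiers would then be trained on top of the pretrained encoder.

```python
# SimCLR-style pretraining sketch (assumed recipe, not the project's own).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

encoder = resnet18(num_classes=128)  # final fc doubles as the projection head

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two batches of projected views."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = (z @ z.t()) / tau
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))  # drop self-similarity
    # Row i's positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One training step; in practice view1/view2 are two random augmentations
# of the same unlabeled batch, stubbed here with random tensors.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
view1, view2 = torch.randn(32, 3, 224, 224), torch.randn(32, 3, 224, 224)
opt.zero_grad()
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()
opt.step()
```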
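
False-positive harvesting, in outline: run the trained detector over images known to contain no self-injury and keep every confident detection as a hard negative for the next fine-tuning round. The helper below reuses the hypothetical `wound_det` setup from the first sketch; the clean-image directory is an assumption.

```python
# Mine hard negatives: on wound-free images, every confident detection is a
# false positive; save the crops as explicit background examples.
from pathlib import Path
import cv2
from mmdet.apis import inference_detector

def harvest_false_positives(det, clean_dir, out_dir, thr=0.5):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for p in Path(clean_dir).glob("*.jpg"):
        img = cv2.imread(str(p))
        if img is None:
            continue
        for j, (x1, y1, x2, y2, s) in enumerate(inference_detector(det, img)[0]):
            if s >= thr:
                crop = img[int(y1):int(y2), int(x1):int(x2)]
                cv2.imwrite(str(out / f"{p.stem}_fp{j}.jpg"), crop)
```

These crops then enter the next training round as explicit background examples, pushing the detector's precision up.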