DALPHIN: DigitAL PatHology assIstant beNchmark

Inspired by recent advances in multimodal foundation models, especially vision-language models in pathology, at COMPAYL we will present the first results of our ongoing experiment, the "Digital Pathology Assistant Benchmark" (DALPHIN) for vision-language models, echoing the Chatbot Arena initiative for large language models and the Vision Arena for multimodal models.
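
Chatbot Arena ranks models from pairwise human votes using Elo-style ratings; a minimal sketch of such an update rule, where the model names, votes, and K-factor are illustrative placeholders rather than DALPHIN data:

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Update two Elo ratings after one pairwise comparison.
    score_a is 1.0 if model A wins, 0.0 if it loses, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

ratings = {"model_a": 1000.0, "model_b": 1000.0}
# Each vote: (winner, loser) chosen by a human rater for one image/prompt pair.
votes = [("model_a", "model_b"), ("model_b", "model_a"), ("model_a", "model_b")]
for winner, loser in votes:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser], 1.0)
print(ratings)
```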


To set up our DALPHIN arena, we are collecting histopathology images, partly sourced from the COMPAYL network, and feeding them into multimodal models, using both vision and language as input. Eventually, we aim to make this experiment's data publicly available.
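
A minimal sketch of what this collection loop could look like, assuming a hypothetical query_vlm(image, prompt) wrapper around whichever model is under test; the directory name and prompt are illustrative:

```python
from pathlib import Path
from PIL import Image

def query_vlm(image, prompt):
    # Placeholder: wrap the actual API of the model being benchmarked here.
    return "placeholder caption"

records = []
for path in Path("histopathology_images").glob("*.png"):
    image = Image.open(path).convert("RGB")
    # Both the image and a textual prompt go to the model; its free-text
    # output (caption, diagnosis, answer) is stored for later comparison.
    caption = query_vlm(image, "Describe the tissue shown in this slide.")
    records.append({"image": path.name, "output": caption})
```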


We will compare the models' textual outputs, such as captions, diagnoses, and answers to questions posed as textual prompts; assess results via zero-shot classification tests; and challenge the models to identify tissue types from a given set of options.
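
The multiple-choice tissue-identification test could be scored along these lines, reusing the hypothetical query_vlm wrapper from the sketch above; the tissue classes listed are illustrative, not the DALPHIN label set:

```python
TISSUE_TYPES = ["colon", "lung", "breast", "prostate"]

def classify_zero_shot(image, options):
    # query_vlm is the placeholder wrapper defined in the sketch above.
    prompt = ("Which tissue type is shown? Answer with exactly one of: "
              + ", ".join(options) + ".")
    answer = query_vlm(image, prompt).strip().lower()
    # Return the first option named in the free-text answer, or None.
    return next((opt for opt in options if opt in answer), None)

def accuracy(dataset):
    """dataset: list of (PIL image, ground-truth label) pairs."""
    hits = sum(classify_zero_shot(img, TISSUE_TYPES) == label
               for img, label in dataset)
    return hits / len(dataset)
```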


We are preparing this experiment in the months leading up to COMPAYL and will present it during the workshop.
