Evaluation of different methods on the test split (whole: 4,241 examples; mini: 1,000 examples). Accuracies across the question categories, together with the overall average, are reported below.
😀 You are invited to contribute your results on the ScienceQA test split! Please send your result scores to this email or open a new issue at the GitHub repository.
⚠️⚠️⚠️ Caveat: The leaderboard data is collected manually from existing papers, so it may contain errors, ambiguities arising from differing interpretations, and gaps where a paper does not report a number. Please double-check the data before using it, and contact us at this email if you find any errors or have suggestions. We appreciate your contributions and feedback.
| # | Model | Method | Learning | #Size | #P | Link | Date | NAT | SOC | LAN | TXT | IMG | NO | G1-6 | G7-12 | Avg |
|---|-------|--------|----------|-------|----|------|------|-----|-----|-----|-----|-----|----|------|-------|-----|
- Model names:
- Method types:
- Learning:
- #Size: Total number of parameters in the model
- #P: Number of trainable parameters when fine-tuned on ScienceQA
- Accuracies for different question sets: NAT = natural science, SOC = social science, LAN = language science, TXT = questions with text context, IMG = questions with image context, NO = questions without any context, G1-6 = grades 1-6, G7-12 = grades 7-12, Avg = average accuracy over the whole test set
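For reference, the sketch below shows one way to reproduce these per-category accuracies from a set of model predictions. It is a minimal sketch, not the official evaluation code: the field names (`split`, `answer`, `subject`, `hint`, `image`, `grade`) and the file paths are assumptions based on the ScienceQA `problems.json` format, so adapt them to your copy of the data. Note that TXT, IMG, and NO overlap by design: a question with both a text and an image context counts toward both TXT and IMG.

```python
import json

CATEGORIES = ["NAT", "SOC", "LAN", "TXT", "IMG", "NO", "G1-6", "G7-12", "Avg"]
SUBJECT_KEY = {
    "natural science": "NAT",
    "social science": "SOC",
    "language science": "LAN",
}

def category_accuracies(problems, predictions):
    """Leaderboard-style accuracies (%) on the test split.

    problems:    {qid: problem dict}, assumed to follow problems.json.
    predictions: {qid: predicted answer index}.
    """
    correct = {c: 0 for c in CATEGORIES}
    total = {c: 0 for c in CATEGORIES}

    for qid, prob in problems.items():
        if prob.get("split") != "test" or qid not in predictions:
            continue
        hit = int(predictions[qid] == prob["answer"])

        # Every test question counts once toward Avg and once toward its
        # subject; the context buckets may overlap (TXT and IMG together).
        keys = ["Avg", SUBJECT_KEY[prob["subject"]]]
        if prob.get("hint"):
            keys.append("TXT")   # question provides a text context
        if prob.get("image"):
            keys.append("IMG")   # question provides an image context
        if not prob.get("hint") and not prob.get("image"):
            keys.append("NO")    # question has no context at all
        grade = int(prob["grade"].replace("grade", ""))  # e.g. "grade4" -> 4
        keys.append("G1-6" if grade <= 6 else "G7-12")

        for k in keys:
            correct[k] += hit
            total[k] += 1

    return {c: round(100.0 * correct[c] / total[c], 2)
            for c in CATEGORIES if total[c]}

if __name__ == "__main__":
    with open("problems.json") as f:        # path is an assumption
        problems = json.load(f)
    with open("predictions.json") as f:     # {qid: answer index}, hypothetical
        predictions = json.load(f)
    print(category_accuracies(problems, predictions))
```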