Leaderboard - ScienceQA

Evaluation of different methods on the test split (whole: 4,241 examples; mini: 1,000 examples). Accuracies for each question category and the overall average are reported below.

😀 You are invited to contribute your results to the ScienceQA test split! Please send your result scores to this email or open a new issue in the GitHub repository.

⚠️⚠️⚠️ Caveat: The data in the leaderboard is collected manually from existing papers. It may contain errors, ambiguities arising from differing interpretations, and gaps where papers do not report the relevant information. Double-check the data before using it. Please contact us at this email if you find any errors or have suggestions. We appreciate your contributions and feedback.

| # | Model | Method | Learning | #Size | #P | Link | Date | NAT | SOC | LAN | TXT | IMG | NO | G1-6 | G7-12 | Avg |
|---|-------|--------|----------|-------|----|------|------|-----|-----|-----|-----|----|------|-------|-----|

Model names:

  • Q: question
  • C: context
  • M: multiple options
  • A: answer
  • AE: answer with explanation
  • ALE: answer with lecture and explanation
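The letters above combine into input→output formats such as QCM→A (question, context, and options in; answer out) or QCM→ALE (answer with lecture and explanation out). As a minimal sketch, a QCM-style prompt could be assembled like this; the exact prompt layout is illustrative, not the template used by any particular paper:

```python
def build_prompt(question, context, options):
    """Assemble a QCM-style input: question + context + lettered options."""
    opts = "\n".join(f"({chr(65 + i)}) {o}" for i, o in enumerate(options))
    return (f"Question: {question}\n"
            f"Context: {context}\n"
            f"Options:\n{opts}\n"
            f"Answer:")

prompt = build_prompt(
    "Which of these states is farthest north?",
    "N/A",
    ["West Virginia", "Louisiana", "Arizona", "Oklahoma"],
)
```

The model's completion after `Answer:` is then compared against the gold option to score the question.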

Method types:

  • VQA-NN: Standard neural network for visual question answering
  • PLM: Pre-trained language model
  • LLM: Large language model
  • VLM: (Large) vision-language model / Large multimodal model / Multimodal large language model
  • Tool-LLM: Tool-augmented large language model

Learning:

  • Zero-shot: The model is evaluated in a zero-shot setting on ScienceQA
  • Few-shot: The model is evaluated in a few-shot setting on ScienceQA
  • Fine-tune: The model is fine-tuned on ScienceQA
  • -: Not available

Parameter counts:

  • #Size: Total number of parameters in the model
  • #P: Number of trainable parameters when fine-tuned on ScienceQA

Accuracies for different question sets:

  • NAT: questions of the natural science subject
  • SOC: questions of the social science subject
  • LAN: questions of the language science subject
  • TXT: questions with the text context
  • IMG: questions with the image context
  • NO: questions with no context
  • G1-6: questions in the grade group of 1-6
  • G7-12: questions in the grade group of 7-12
  • Avg: all questions (reporting the average accuracy)
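Each accuracy column above could be computed as in this minimal sketch: filter the test records to a category, then score the fraction answered correctly. The record schema used here (`subject`, `pred`, `answer`) is assumed for illustration and is not the dataset's actual field layout:

```python
def accuracy(records, key=None, value=None):
    """Accuracy (%) over records, optionally filtered to one category."""
    if key is not None:
        records = [r for r in records if r[key] == value]
    if not records:
        return 0.0
    correct = sum(1 for r in records if r["pred"] == r["answer"])
    return 100.0 * correct / len(records)

# Toy predictions; field names are hypothetical.
records = [
    {"subject": "natural science",  "pred": 1, "answer": 1},
    {"subject": "social science",   "pred": 0, "answer": 2},
    {"subject": "language science", "pred": 2, "answer": 2},
]

nat = accuracy(records, "subject", "natural science")  # NAT column
avg = accuracy(records)                                # Avg column
```

The same filter pattern covers the context columns (TXT/IMG/NO) and the grade groups (G1-6/G7-12), with Avg taken over all test questions.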