CLEF promotes the systematic evaluation of information access systems, primarily through experimentation on shared tasks.

Ten labs are offered at CLEF 2013.

Nine labs will follow a "campaign-style" evaluation practice for specific information access problems in the tradition of past CLEF campaign tracks:

  1. CHiC - Cultural Heritage in CLEF: a benchmarking activity to investigate the systematic and large-scale evaluation of cultural heritage digital libraries and information access systems
  2. CLEFeHealth - CLEF eHealth Evaluation Lab: a benchmarking activity aimed at developing processing methods and resources to enrich difficult-to-understand health text, as well as their evaluation settings
  3. CLEF-IP - Retrieval in the Intellectual Property Domain: a benchmarking activity to investigate IR techniques in the patent domain
  4. ImageCLEF - Cross Language Image Annotation and Retrieval: a benchmarking activity on the experimental evaluation of image classification and retrieval, focusing on the combination of textual and visual evidence
  5. INEX - INitiative for the Evaluation of XML retrieval: an activity that builds evaluation benchmarks for search with rich structure - such as document structure, semantic metadata, entities, or genre/topical structure - which is of increasing importance on the web and in professional search
  6. PAN - Uncovering Plagiarism, Authorship, and Social Software Misuse: a benchmarking activity on uncovering plagiarism, authorship, and social software misuse
  7. QA4MRE - Question Answering for Machine Reading Evaluation: a benchmarking activity on the evaluation of machine reading systems through question answering and reading comprehension tests
  8. QALD-3 - Question Answering over Linked Data: a benchmarking activity on question answering over linked data
  9. RepLab 2013 - Online Reputation Management: the second CLEF lab on online reputation management

One lab will be run as a workshop, organized as a speaking and discussion session, to explore issues of evaluation methodology, metrics, and processes in information access and closely related fields:

  1. CLEF-ER - Entity Recognition @ CLEF: a workshop on the multilingual annotation of named entities and the acquisition of terminology resources