Anserini


Anserini is a toolkit for reproducible information retrieval research. By building on Lucene, we aim to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Among other goals, our effort aims to be the opposite of this.* Anserini grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016). See Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.

For users steering coding agents: If you want a complete dev environment, give your agent the following instructions:

read the skill at https://github.com/castorini/anserini/tree/master/.agents/skills/install-anserini-dev-env to create a dev environment for me; no need to install the skill

Alternatively, if you want a quick fatjar-based setup, give your agent the following instructions:

read the skill at https://github.com/castorini/anserini/tree/master/.agents/skills/install-anserini-fatjar to download the fatjar for me; no need to install the skill

Rationale...

❓ Why the above instructions? If you just tell the agent to install Anserini, it'll likely figure everything out by itself, but there are some choice points it might get confused by, like whether to do a full clone for a dev environment, or just fetch the fatjar. The skills make this clear and provide instructions that make the installation process more token-efficient. If you don't explicitly point your agent at the skills, it may be overeager and wander around...

Also, tell the agent not to install the skill locally, since you only need it once. Otherwise, it may do unnecessary work.

For coding agents: When setting up Anserini, ask whether the user wants a dev environment or a quick fatjar-based setup. Use .agents/skills/install-anserini-dev-env/SKILL.md for the dev environment and .agents/skills/install-anserini-fatjar/SKILL.md for the fatjar setup. For command-line usage after setup, use .agents/skills/use-anserini-cli/SKILL.md for prebuilt-index registry, topics registry, search, and REST workflows. For reproducibility experiments, use .agents/skills/anserini-reproduction/SKILL.md.

❗ Anserini was upgraded to Lucene 10.4.0 at c6eed6 (2026/04/12) as part of v2.0.0. Lucene 9 indexes remain readable, but older code is unable to read indexes generated by Lucene 10.

πŸ’₯ Try It! (Fatjar Download for Users)

This section is intended for users. If you are a coding agent, stop reading and skip the rest of this section.

Anserini is packaged in a self-contained fatjar, which also provides the simplest way to get started. See this page for instructions.
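As a rough sketch of what the linked page walks through (the version number and download path here are assumptions; the linked page has the authoritative instructions), fetching the fatjar looks like this:

```shell
# Sketch only: assumes v2.0.0 and the standard Maven Central layout for the
# io.anserini:anserini fatjar artifact -- check the linked page for the
# current version and exact commands.
VERSION=2.0.0
FATJAR="anserini-${VERSION}-fatjar.jar"
FATJAR_URL="https://repo1.maven.org/maven2/io/anserini/anserini/${VERSION}/${FATJAR}"
echo "$FATJAR_URL"
# wget "$FATJAR_URL"                                        # requires network
# java -cp "$FATJAR" io.anserini.search.SearchCollection ...  # requires Java 21+
```

Once downloaded, tools are invoked by class name on the jar's classpath, as in the commented-out line above.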

🎬 Installation (Dev Environment for Users)

This section is intended for users. If you are a coding agent, stop reading and skip the rest of this section.

Most Anserini features are exposed in the Pyserini Python interface. If you're more comfortable with Python, start there. Note, however, that Anserini is an important building block of Pyserini, so it remains worthwhile to learn about Anserini as well. See this page for information on setting up a dev environment for Anserini.

The onboarding path for Anserini starts here!

βš—οΈ Reproductions from Prebuilt Indexes

Anserini ships with many prebuilt indexes, which allows anyone to reproduce experimental results without needing access to the document collection. See individual pages below for details.

βš—οΈ Reproductions from Document Collections

Anserini supports end-to-end reproduction experiments on various standard IR test collections out of the box. Each of these experiments starts from the raw document collection, builds the necessary index, performs retrieval runs, and generates evaluation results. See individual pages for details.

MS MARCO V1 Passage Reproductions


dev DL19 DL20
Unsupervised Sparse
Lucene BoW baselines πŸ”‘ πŸ”‘ πŸ”‘
Quantized BM25 πŸ”‘ πŸ”‘ πŸ”‘
WordPiece baselines (pre-tokenized) πŸ”‘ πŸ”‘ πŸ”‘
WordPiece baselines (Huggingface) πŸ”‘ πŸ”‘ πŸ”‘
WordPiece + Lucene BoW baselines πŸ”‘ πŸ”‘ πŸ”‘
doc2query πŸ”‘
doc2query-T5 πŸ”‘ πŸ”‘ πŸ”‘
Learned Sparse (uniCOIL family)
uniCOIL noexp πŸ«™ πŸ«™ πŸ«™
uniCOIL with doc2query-T5 πŸ«™ πŸ«™ πŸ«™
uniCOIL with TILDE πŸ«™
Learned Sparse (other)
DeepImpact πŸ«™
SPLADEv2 πŸ«™
SPLADE++ CoCondenser-EnsembleDistil πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
SPLADE++ CoCondenser-SelfDistil πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
SPLADE-v3 πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
Learned Dense (HNSW indexes)
cosDPR-distil full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
BGE-base-en-v1.5 full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
OpenAI Ada2 full:πŸ«™ int8:πŸ«™ full:πŸ«™ int8:πŸ«™ full:πŸ«™ int8:πŸ«™
Cohere English v3.0 full:πŸ«™ int8:πŸ«™ full:πŸ«™ int8:πŸ«™ full:πŸ«™ int8:πŸ«™
Learned Dense (flat vector indexes)
cosDPR-distil full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
BGE-base-en-v1.5 full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
OpenAI Ada2 full:πŸ«™ int8:πŸ«™ full:πŸ«™ int8:πŸ«™ full:πŸ«™ int8:πŸ«™
Cohere English v3.0 full:πŸ«™ int8:πŸ«™ full:πŸ«™ int8:πŸ«™ full:πŸ«™ int8:πŸ«™
Learned Dense (Inverted; experimental)
cosDPR-distil w/ "fake words" πŸ«™ πŸ«™ πŸ«™
cosDPR-distil w/ "LexLSH" πŸ«™ πŸ«™ πŸ«™

Key:

  • πŸ”‘ = keyword queries
  • "full" = full 32-bit floating precision
  • "int8" = quantized 8-bit precision
  • πŸ«™ = cached queries, πŸ…ΎοΈ = query encoding with ONNX

Available Corpora for Download

Corpora Size Checksum
Quantized BM25 1.2 GB 0a623e2c97ac6b7e814bf1323a97b435
uniCOIL (noexp) 2.7 GB f17ddd8c7c00ff121c3c3b147d2e17d8
uniCOIL (d2q-T5) 3.4 GB 78eef752c78c8691f7d61600ceed306f
uniCOIL (TILDE) 3.9 GB 12a9c289d94e32fd63a7d39c9677d75c
DeepImpact 3.6 GB 73843885b503af3c8b3ee62e5f5a9900
SPLADEv2 9.9 GB b5d126f5d9a8e1b3ef3f5cb0ba651725
SPLADE++ CoCondenser-EnsembleDistil 4.2 GB e489133bdc54ee1e7c62a32aa582bc77
SPLADE++ CoCondenser-SelfDistil 4.8 GB cb7e264222f2bf2221dd2c9d28190be1
SPLADE-v3 7.4 GB b5fbe7c294bd0b1e18f773337f540670
cosDPR-distil (parquet) 26 GB b9183de205fbd5c799211c21187179e7
BGE-base-en-v1.5 (parquet) 26 GB a55b3cb338ec4a1b1c36825bf0854648
OpenAI-ada2 (parquet) 51 GB a8fddf594c9b8e771637968033b12f6d
Cohere embed-english-v3.0 (parquet) 16 GB 760dfb5ba9e2b0cc6f7e527e518fef03

MS MARCO V1 Document Reproductions


dev DL19 DL20
Unsupervised Lexical, Complete Doc*
Lucene BoW baselines + + +
WordPiece baselines (pre-tokenized) + + +
WordPiece baselines (Huggingface tokenizer) + + +
WordPiece + Lucene BoW baselines + + +
doc2query-T5 + + +
Unsupervised Lexical, Segmented Doc*
Lucene BoW baselines + + +
WordPiece baselines (pre-tokenized) + + +
WordPiece + Lucene BoW baselines + + +
doc2query-T5 + + +
Learned Sparse Lexical
uniCOIL noexp βœ“ βœ“ βœ“
uniCOIL with doc2query-T5 βœ“ βœ“ βœ“

Available Corpora for Download

Corpora Size Checksum
MS MARCO V1 doc: uniCOIL (noexp) 11 GB 11b226e1cacd9c8ae0a660fd14cdd710
MS MARCO V1 doc: uniCOIL (d2q-T5) 19 GB 6a00e2c0c375cb1e52c83ae5ac377ebb

MS MARCO V2 Passage Reproductions


dev DL21 DL22 DL23
Unsupervised Lexical, Original Corpus
baselines + + + +
doc2query-T5 + + + +
Unsupervised Lexical, Augmented Corpus
baselines + + + +
doc2query-T5 + + + +
Learned Sparse Lexical
uniCOIL noexp zero-shot βœ“ βœ“ βœ“ βœ“
uniCOIL with doc2query-T5 zero-shot βœ“ βœ“ βœ“ βœ“
SPLADE++ CoCondenser-EnsembleDistil (cached queries) βœ“ βœ“ βœ“ βœ“
SPLADE++ CoCondenser-EnsembleDistil (ONNX) βœ“ βœ“ βœ“ βœ“
SPLADE++ CoCondenser-SelfDistil (cached queries) βœ“ βœ“ βœ“ βœ“
SPLADE++ CoCondenser-SelfDistil (ONNX) βœ“ βœ“ βœ“ βœ“

Available Corpora for Download

Corpora Size Checksum
uniCOIL (noexp) 24 GB d9cc1ed3049746e68a2c91bf90e5212d
uniCOIL (d2q-T5) 41 GB 1949a00bfd5e1f1a230a04bbc1f01539
SPLADE++ CoCondenser-EnsembleDistil 66 GB 2cdb2adc259b8fa6caf666b20ebdc0e8
SPLADE++ CoCondenser-SelfDistil 76 GB 061930dd615c7c807323ea7fc7957877

MS MARCO V2 Document Reproductions


dev DL21 DL22 DL23
Unsupervised Lexical, Complete Doc
baselines + + + +
doc2query-T5 + + + +
Unsupervised Lexical, Segmented Doc
baselines + + + +
doc2query-T5 + + + +
Learned Sparse Lexical
uniCOIL noexp zero-shot βœ“ βœ“ βœ“ βœ“
uniCOIL with doc2query-T5 zero-shot βœ“ βœ“ βœ“ βœ“

Available Corpora for Download

Corpora Size Checksum
MS MARCO V2 doc: uniCOIL (noexp) 55 GB 97ba262c497164de1054f357caea0c63
MS MARCO V2 doc: uniCOIL (d2q-T5) 72 GB c5639748c2cbad0152e10b0ebde3b804

MS MARCO V2.1 Segmented Document Reproductions


The MS MARCO V2.1 corpora (documents and segmented documents) were derived from the V2 documents corpus for the TREC 2024 RAG Track. Instructions for downloading the corpus can be found here. The experiments below capture topics and passage-level qrels for the V2.1 segmented documents corpus.

RAG24 β˜‚οΈ RAG 24 NIST RAG25 β˜‚οΈ RAG25 NIST
baselines πŸ”‘ πŸ”‘ πŸ”‘ πŸ”‘
SPLADE-v3 πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Arctic-embed-l (shard00) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard01) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard02) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard03) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard04) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard05) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard06) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard07) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard08) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ
Arctic-embed-l (shard09) πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ πŸ…ΎοΈ

Note that all Arctic-embed-l shards use flat vector indexes.

Key:

  • β˜‚οΈ = UMBRELA for RAG24, UMBRELA 2.0 for RAG25
  • πŸ”‘ = keyword queries
  • πŸ«™ = cached queries, πŸ…ΎοΈ = query encoding with ONNX

Available Corpora for Download

Corpora Size Checksum
SPLADE-v3 125 GB c62490569364a1eb0101da1ca4a894d9

MS MARCO V2.1 Document Reproductions


The MS MARCO V2.1 corpora (documents and segmented documents) were derived from the V2 documents corpus for the TREC 2024 RAG Track. Instructions for downloading the corpus can be found here. The experiments below capture topics and document-level qrels originally targeted at the V2 documents corpus that have been "projected" onto the V2.1 documents corpus. These should be treated like dev topics for the TREC 2024 RAG Track; actual qrels for that track were generated at the passage level. There are no plans to generate additional document-level qrels beyond these.

dev DL21 DL22 DL23 RAGgy dev
Unsupervised Lexical, Complete Doc
baselines πŸ”‘ πŸ”‘ πŸ”‘ πŸ”‘ πŸ”‘
Unsupervised Lexical, Segmented Doc
baselines πŸ”‘ πŸ”‘ πŸ”‘ πŸ”‘ πŸ”‘
SPLADE-v3 πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ

Key:

  • πŸ”‘ = keyword queries
  • πŸ«™ = cached queries, πŸ…ΎοΈ = query encoding with ONNX

BEIR (v1.0.0) Reproductions


Sparse representations

Key:

  • F1 = "flat" baseline (Lucene analyzer), keyword queries (πŸ”‘)
  • F2 = "flat" baseline (pre-tokenized with bert-base-uncased tokenizer), keyword queries (πŸ”‘)
  • MF = "multifield" baseline (Lucene analyzer), keyword queries (πŸ”‘)
  • U1 = uniCOIL (noexp), cached queries (πŸ«™)
  • Spp = SPLADE++ CoCondenser-EnsembleDistil: cached queries (πŸ«™), ONNX (πŸ…ΎοΈ)
  • Sv3 = SPLADE-v3: cached queries (πŸ«™), ONNX (πŸ…ΎοΈ)

See instructions below the table for how to reproduce results programmatically.

Corpus F1 F2 MF U1 Spp Sv3
TREC-COVID πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
BioASQ πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
NFCorpus πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
NQ πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
HotpotQA πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
FiQA-2018 πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
Signal-1M(RT) πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
TREC-NEWS πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
Robust04 πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
ArguAna πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
Touche2020 πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Android πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-English πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Gaming πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Gis πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Mathematica πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Physics πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Programmers πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Stats πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Tex πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Unix πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Webmasters πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
CQADupStack-Wordpress πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
Quora πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
DBPedia πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
SCIDOCS πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
FEVER πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
Climate-FEVER πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ
SciFact πŸ”‘ πŸ”‘ πŸ”‘ πŸ«™ πŸ«™πŸ…ΎοΈ πŸ«™πŸ…ΎοΈ

Dense representations

Key:

  • BGE (flat) = BGE-base-en-v1.5 (flat vector indexes)
    • original (float32) indexes: cached queries (πŸ«™), ONNX (πŸ…ΎοΈ)
    • quantized (int8) indexes: cached queries (πŸ«™), ONNX (πŸ…ΎοΈ)
  • BGE (HNSW) = BGE-base-en-v1.5 (HNSW indexes)
    • original (float32) indexes: cached queries (πŸ«™), ONNX (πŸ…ΎοΈ)
    • quantized (int8) indexes: cached queries (πŸ«™), ONNX (πŸ…ΎοΈ)

See instructions below the table for how to reproduce results programmatically.

Corpus BGE (flat) BGE (HNSW)
TREC-COVID full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
BioASQ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
NFCorpus full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
NQ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
HotpotQA full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
FiQA-2018 full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
Signal-1M(RT) full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
TREC-NEWS full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
Robust04 full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
ArguAna full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
Touche2020 full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Android full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-English full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Gaming full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Gis full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Mathematica full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Physics full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Programmers full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Stats full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Tex full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Unix full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Webmasters full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
CQADupStack-Wordpress full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
Quora full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
DBPedia full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
SCIDOCS full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
FEVER full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
Climate-FEVER full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ
SciFact full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ full:πŸ«™πŸ…ΎοΈ int8:πŸ«™πŸ…ΎοΈ

To reproduce the above results programmatically, use the following commands to download and unpack the data:

wget https://rgw.cs.uwaterloo.ca/pyserini/data/$COLLECTION -P collections/
tar xvf collections/$COLLECTION -C collections/

Substitute the appropriate $COLLECTION from the table below.

$COLLECTION Size Checksum
beir-v1.0.0-corpus.tar 14 GB faefd5281b662c72ce03d22021e4ff6b
beir-v1.0.0-corpus-wp.tar 13 GB 3cf8f3dcdcadd49362965dd4466e6ff2
beir-v1.0.0-unicoil-noexp.tar 30 GB 4fd04d2af816a6637fc12922cccc8a83
beir-v1.0.0-splade-pp-ed.tar 43 GB 9c7de5b444a788c9e74c340bf833173b
beir-v1.0.0-splade-v3.tar 55 GB 37f294610af763ce48eed03afd9455df
beir-v1.0.0-bge-base-en-v1.5.parquet.tar 127 GB 5f8dce18660cc8ac0318500bea5993ac
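The checksums in the table are MD5 digests, so before unpacking a multi-gigabyte tarball it's worth verifying the download. A small sketch (assumes GNU md5sum; on macOS, `md5 -q` produces the same digest):

```shell
# Verify a downloaded corpus against its listed MD5 checksum before unpacking.
verify_md5() {
  local file="$1" expected="$2" actual
  actual=$(md5sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: $file"
  else
    echo "MISMATCH: $file (got $actual, expected $expected)" >&2
    return 1
  fi
}

# Example usage, with a collection and checksum from the table above:
# verify_md5 collections/beir-v1.0.0-corpus.tar faefd5281b662c72ce03d22021e4ff6b
```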

Once you've unpacked the data, follow the linked reproduction pages above to run and verify the desired BEIR corpus/model combinations.

Substitute the appropriate $MODEL from the table below.

Key $MODEL
F1 flat
F2 flat-wp
MF multifield
U1 (cached) unicoil-noexp.cached
Spp (cached) splade-pp-ed.cached
Spp (ONNX) splade-pp-ed.onnx
Sv3 (cached) splade-v3.cached
Sv3 (ONNX) splade-v3.onnx
BGE (flat, full; cached) bge-base-en-v1.5.parquet.flat.cached
BGE (flat, int8; cached) bge-base-en-v1.5.parquet.flat-sqv.cached
BGE (HNSW, full; cached) bge-base-en-v1.5.parquet.hnsw.cached
BGE (HNSW, int8; cached) bge-base-en-v1.5.parquet.hnsw-sqv.cached
BGE (flat, full; ONNX) bge-base-en-v1.5.parquet.flat.onnx
BGE (flat, int8; ONNX) bge-base-en-v1.5.parquet.flat-sqv.onnx
BGE (HNSW, full; ONNX) bge-base-en-v1.5.parquet.hnsw.onnx
BGE (HNSW, int8; ONNX) bge-base-en-v1.5.parquet.hnsw-sqv.onnx
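Putting the two tables together, a single corpus/model run can be sketched as follows. The regression naming pattern (`beir-v1.0.0-$CORPUS.$MODEL`) and the `run_regression.py` entry point are assumptions here; confirm both against the linked reproduction pages.

```shell
# Sketch only: run one BEIR regression from the repo root in a dev environment.
# The naming pattern below is an assumption -- verify against the linked pages.
CORPUS=trec-covid   # hypothetical pick from the Corpus column
MODEL=flat          # hypothetical pick from the $MODEL table (key F1)
REGRESSION="beir-v1.0.0-${CORPUS}.${MODEL}"
echo "$REGRESSION"
# python src/main/python/run_regression.py --index --verify --search \
#   --regression "$REGRESSION"
```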

BRIGHT Reproductions


BRIGHT is a retrieval benchmark described here.

Key:

  • BM25 = keyword queries (πŸ”‘)
  • SPLADE-v3 = SPLADE-v3: cached queries (πŸ«™), ONNX (πŸ…ΎοΈ)
  • BGE (flat) = BGE-large-en-v1.5 (flat vector indexes): cached queries (πŸ«™), ONNX (πŸ…ΎοΈ)

Corpus BM25 SPLADE-v3 BGE (flat)
StackExchange
Biology πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Earth Science πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Economics πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Psychology πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Robotics πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Stack Overflow πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Sustainable Living πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Coding
LeetCode πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Pony πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
Theorems
AoPS πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
TheoremQA-Q πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ
TheoremQA-T πŸ”‘ πŸ«™ πŸ…ΎοΈ πŸ«™ πŸ…ΎοΈ

Available Corpora for Download

Corpora Size Checksum
Post-Processed Corpora 284 MB 568b594709a9977369033117bfb6889c
SPLADE-v3 1.5 GB 434cd776b5c40f8112d2bf888c58a516
BGE-large-en-v1.5 13 GB 0ce2634d34d3d467cd1afd74f2f63c7b

The BRIGHT corpora above were processed from Hugging Face with these scripts.


Cross-lingual and Multi-lingual Reproductions



Other Reproductions



πŸ“ƒ Additional Documentation

The experiments described below are not associated with rigorous end-to-end reproduction testing and thus provide a lower standard of reproducibility. For the most part, manual copying and pasting of commands into a shell is required to reproduce our results.

MS MARCO V1


MS MARCO V2


TREC-COVID and CORD-19


Other Experiments and Features


πŸ™‹ How Can I Contribute?

If you've found Anserini to be helpful, we have a simple request for you to contribute back. In the course of reproducing baseline results on standard test collections, please let us know if you're successful by sending us a pull request with a simple note, like what appears at the bottom of the page for Disks 4 & 5. Reproducibility is important to us, and we'd like to know about successes as well as failures. Since the reproduction documentation is auto-generated, pull requests should be sent against the reproduction definitions and doc templates under src/main/resources/reproduce. Then the reproduction documentation can be generated using the bin/build.sh script. In turn, you'll be recognized as a contributor.

Beyond that, there are always open issues we would appreciate help on!

πŸ“œοΈ Release History

older... (and historic notes)

πŸ“œοΈ Historical Notes

  • Anserini was upgraded to Lucene 10.4.0 at c6eed6 (2026/04/12) as part of v2.0.0. Lucene 9 indexes remain readable, but older code is unable to read indexes generated by Lucene 10.
  • Anserini was upgraded from JDK 11 to JDK 21 at commit 272565 (2024/04/03), which corresponds to the release of v0.35.0.
  • Anserini was upgraded to Lucene 9.3 at commit 272565 (8/2/2022). This upgrade created backward compatibility issues; see #1952. Anserini automatically detects Lucene 8 indexes and disables consistent tie-breaking to avoid runtime errors. However, Lucene 9 code running on Lucene 8 indexes may give slightly different results than Lucene 8 code running on Lucene 8 indexes, and Lucene 8 code will not run on Lucene 9 indexes at all. Pyserini has also been upgraded, and the same caveats apply.
  • Anserini was upgraded from Java 8 to Java 11 at commit 17b702d (7/11/2019). Maven 3.3+ is also required.
  • Anserini was upgraded to Lucene 8.0 as of commit 75e36f9 (6/12/2019); prior to that, the toolkit used Lucene 7.6. Based on preliminary experiments, query evaluation latency is much improved in Lucene 8. As a result of this upgrade, results of all reproductions have changed slightly. To reproduce old results from Lucene 7.6, use v0.5.1.

✨ References

πŸ™ Acknowledgments

This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.
