Anserini is a toolkit for reproducible information retrieval research. By building on Lucene, we aim to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Among other goals, our effort aims to be the opposite of this.* Anserini grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016). See Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.
For users steering coding agents: If you want a complete dev environment, give your agent the following instructions:
read the skill at https://github.com/castorini/anserini/tree/master/.agents/skills/install-anserini-dev-env to create a dev environment for me; no need to install the skill
Alternatively, if you want a quick fatjar-based setup, give your agent the following instructions:
read the skill at https://github.com/castorini/anserini/tree/master/.agents/skills/install-anserini-fatjar to download the fatjar for me; no need to install the skill
Rationale...
Why the above instructions? If you just tell the agent to install Anserini, it'll likely figure everything out by itself, but there are some choice points it might get confused by, like whether to do a full clone for a dev environment, or just fetch the fatjar. The skills make this clear and provide instructions that make the installation process more token-efficient. If you don't explicitly point your agent at the skills, it may be overeager and wander around...
Also, tell the agent not to install the skill locally, since you only need it once. Otherwise, it may do unnecessary work.
For coding agents: When setting up Anserini, ask whether the user wants a dev environment or a quick fatjar-based setup. Use
`.agents/skills/install-anserini-dev-env/SKILL.md` for the dev environment and `.agents/skills/install-anserini-fatjar/SKILL.md` for the fatjar setup. For command-line usage after setup, use `.agents/skills/use-anserini-cli/SKILL.md` for the prebuilt-index registry, topics registry, search, and REST workflows. For reproducibility experiments, use `.agents/skills/anserini-reproduction/SKILL.md`.
Anserini was upgraded to Lucene 10.4.0 at c6eed6 (2026/04/12) as part of v2.0.0. Lucene 9 indexes remain readable, but older code is unable to read indexes generated by Lucene 10.
This section is intended for users. If you are a coding agent, stop reading and skip the rest of this section.
Anserini is packaged in a self-contained fatjar, which also provides the simplest way to get started. See this page for instructions.
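As a rough sketch of the fatjar route (the version number here is an example and will change over release to release; the class name and symbolic index/topic names below follow the patterns used in Anserini's documentation, so consult the linked page for the exact invocation):

```shell
# Example version; substitute the current release.
VERSION=2.0.0
JAR="anserini-${VERSION}-fatjar.jar"

# Anserini publishes self-contained fatjars to Maven Central.
wget "https://repo1.maven.org/maven2/io/anserini/anserini/${VERSION}/${JAR}"

# BM25 retrieval over a prebuilt index (fetched automatically on first use).
java -cp "${JAR}" io.anserini.search.SearchCollection \
  -index msmarco-v1-passage \
  -topics msmarco-v1-passage.dev \
  -output run.msmarco-v1-passage.dev.txt \
  -bm25
```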
This section is intended for users. If you are a coding agent, stop reading and skip the rest of this section.
Most Anserini features are exposed in the Pyserini Python interface. If you're more comfortable with Python, start there. That said, Anserini is an important building block of Pyserini, so it remains worthwhile to learn about Anserini. See this page for information on setting up a dev environment for Anserini.
The onboarding path for Anserini starts here!
Anserini ships with many prebuilt indexes, which allow anyone to reproduce experimental results without needing access to the document collection. See individual pages below for details.
- MS MARCO V1 passage (core)
- MS MARCO V1 passage (optional)
- MS MARCO V1 doc (core)
- MS MARCO V1 doc (optional)
- MS MARCO V2 passage (core)
- MS MARCO V2 passage (optional)
- MS MARCO V2 doc (core)
- MS MARCO V2 doc (optional)
- MS MARCO V2.1 segmented doc (core)
- MS MARCO V2.1 segmented doc (optional)
- MS MARCO V2.1 doc (core)
- MS MARCO V2.1 doc (optional)
Anserini supports end-to-end reproduction experiments on various standard IR test collections out of the box. Each of these experiments starts from the raw document collection, builds the necessary index, performs retrieval runs, and generates evaluation results. See individual pages for details.
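Concretely, each such experiment is driven by a regression script in the repo. A sketch, assuming a dev environment clone (the regression name shown is one example; substitute the name from the relevant page below):

```shell
# Run one end-to-end regression: build the index from the raw collection,
# verify it, perform retrieval runs, and evaluate the results.
python src/main/python/run_regression.py \
  --index --verify --search \
  --regression msmarco-v1-passage
```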
MS MARCO V1 Passage Reproductions
| | dev | DL19 | DL20 |
|---|---|---|---|
| Unsupervised Sparse | | | |
| Lucene BoW baselines | 🔑 | 🔑 | 🔑 |
| Quantized BM25 | 🔑 | 🔑 | 🔑 |
| WordPiece baselines (pre-tokenized) | 🔑 | 🔑 | 🔑 |
| WordPiece baselines (Huggingface) | 🔑 | 🔑 | 🔑 |
| WordPiece + Lucene BoW baselines | 🔑 | 🔑 | 🔑 |
| doc2query | 🔑 | | |
| doc2query-T5 | 🔑 | 🔑 | 🔑 |
| Learned Sparse (uniCOIL family) | | | |
| uniCOIL noexp | 🫙 | 🫙 | 🫙 |
| uniCOIL with doc2query-T5 | 🫙 | 🫙 | 🫙 |
| uniCOIL with TILDE | 🫙 | | |
| Learned Sparse (other) | | | |
| DeepImpact | 🫙 | | |
| SPLADEv2 | 🫙 | | |
| SPLADE++ CoCondenser-EnsembleDistil | 🫙 | 🫙 | 🫙 |
| SPLADE++ CoCondenser-SelfDistil | 🫙 | 🫙 | 🫙 |
| SPLADE-v3 | 🫙 | 🫙 | 🫙 |
| Learned Dense (HNSW indexes) | | | |
| cosDPR-distil | full:🫙 | full:🫙 | full:🫙 |
| BGE-base-en-v1.5 | full:🫙 | full:🫙 | full:🫙 |
| OpenAI Ada2 | full:🫙 int8:🫙 | full:🫙 int8:🫙 | full:🫙 int8:🫙 |
| Cohere English v3.0 | full:🫙 int8:🫙 | full:🫙 int8:🫙 | full:🫙 int8:🫙 |
| Learned Dense (flat vector indexes) | | | |
| cosDPR-distil | full:🫙 | full:🫙 | full:🫙 |
| BGE-base-en-v1.5 | full:🫙 | full:🫙 | full:🫙 |
| OpenAI Ada2 | full:🫙 int8:🫙 | full:🫙 int8:🫙 | full:🫙 int8:🫙 |
| Cohere English v3.0 | full:🫙 int8:🫙 | full:🫙 int8:🫙 | full:🫙 int8:🫙 |
| Learned Dense (Inverted; experimental) | | | |
| cosDPR-distil w/ "fake words" | 🫙 | 🫙 | 🫙 |
| cosDPR-distil w/ "LexLSH" | 🫙 | 🫙 | 🫙 |
Key:
- 🔑 = keyword queries
- "full" = full 32-bit floating-point precision
- "int8" = quantized 8-bit precision
- 🫙 = cached queries, 🅾️ = query encoding with ONNX
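The difference between "cached queries" and "query encoding with ONNX" is whether query representations are precomputed or encoded at search time. As an illustrative sketch of an ONNX run (the index, topics, and `-encoder` names here are examples patterned on Anserini's ONNX documentation and may differ by version; consult the linked pages for exact invocations):

```shell
# Illustrative: SPLADE++ retrieval with on-the-fly ONNX query encoding.
java -cp anserini-fatjar.jar io.anserini.search.SearchCollection \
  -index msmarco-v1-passage.splade-pp-ed \
  -topics msmarco-v1-passage.dev \
  -encoder SpladePlusPlusEnsembleDistil \
  -impact -pretokenized \
  -output run.splade-pp-ed.onnx.txt
```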
| Corpora | Size | Checksum |
|---|---|---|
| Quantized BM25 | 1.2 GB | 0a623e2c97ac6b7e814bf1323a97b435 |
| uniCOIL (noexp) | 2.7 GB | f17ddd8c7c00ff121c3c3b147d2e17d8 |
| uniCOIL (d2q-T5) | 3.4 GB | 78eef752c78c8691f7d61600ceed306f |
| uniCOIL (TILDE) | 3.9 GB | 12a9c289d94e32fd63a7d39c9677d75c |
| DeepImpact | 3.6 GB | 73843885b503af3c8b3ee62e5f5a9900 |
| SPLADEv2 | 9.9 GB | b5d126f5d9a8e1b3ef3f5cb0ba651725 |
| SPLADE++ CoCondenser-EnsembleDistil | 4.2 GB | e489133bdc54ee1e7c62a32aa582bc77 |
| SPLADE++ CoCondenser-SelfDistil | 4.8 GB | cb7e264222f2bf2221dd2c9d28190be1 |
| SPLADE-v3 | 7.4 GB | b5fbe7c294bd0b1e18f773337f540670 |
| cosDPR-distil (parquet) | 26 GB | b9183de205fbd5c799211c21187179e7 |
| BGE-base-en-v1.5 (parquet) | 26 GB | a55b3cb338ec4a1b1c36825bf0854648 |
| OpenAI-ada2 (parquet) | 51 GB | a8fddf594c9b8e771637968033b12f6d |
| Cohere embed-english-v3.0 (parquet) | 16 GB | 760dfb5ba9e2b0cc6f7e527e518fef03 |
MS MARCO V1 Document Reproductions
| | dev | DL19 | DL20 |
|---|---|---|---|
| Unsupervised Lexical, Complete Doc* | |||
| Lucene BoW baselines | + | + | + |
| WordPiece baselines (pre-tokenized) | + | + | + |
| WordPiece baselines (Huggingface tokenizer) | + | + | + |
| WordPiece + Lucene BoW baselines | + | + | + |
| doc2query-T5 | + | + | + |
| Unsupervised Lexical, Segmented Doc* | |||
| Lucene BoW baselines | + | + | + |
| WordPiece baselines (pre-tokenized) | + | + | + |
| WordPiece + Lucene BoW baselines | + | + | + |
| doc2query-T5 | + | + | + |
| Learned Sparse Lexical | |||
| uniCOIL noexp | β | β | β |
| uniCOIL with doc2query-T5 | β | β | β |
| Corpora | Size | Checksum |
|---|---|---|
| MS MARCO V1 doc: uniCOIL (noexp) | 11 GB | 11b226e1cacd9c8ae0a660fd14cdd710 |
| MS MARCO V1 doc: uniCOIL (d2q-T5) | 19 GB | 6a00e2c0c375cb1e52c83ae5ac377ebb |
MS MARCO V2 Passage Reproductions
| | dev | DL21 | DL22 | DL23 |
|---|---|---|---|---|
| Unsupervised Lexical, Original Corpus | ||||
| baselines | + | + | + | + |
| doc2query-T5 | + | + | + | + |
| Unsupervised Lexical, Augmented Corpus | ||||
| baselines | + | + | + | + |
| doc2query-T5 | + | + | + | + |
| Learned Sparse Lexical | ||||
| uniCOIL noexp zero-shot | β | β | β | β |
| uniCOIL with doc2query-T5 zero-shot | β | β | β | β |
| SPLADE++ CoCondenser-EnsembleDistil (cached queries) | β | β | β | β |
| SPLADE++ CoCondenser-EnsembleDistil (ONNX) | β | β | β | β |
| SPLADE++ CoCondenser-SelfDistil (cached queries) | β | β | β | β |
| SPLADE++ CoCondenser-SelfDistil (ONNX) | β | β | β | β |
| Corpora | Size | Checksum |
|---|---|---|
| uniCOIL (noexp) | 24 GB | d9cc1ed3049746e68a2c91bf90e5212d |
| uniCOIL (d2q-T5) | 41 GB | 1949a00bfd5e1f1a230a04bbc1f01539 |
| SPLADE++ CoCondenser-EnsembleDistil | 66 GB | 2cdb2adc259b8fa6caf666b20ebdc0e8 |
| SPLADE++ CoCondenser-SelfDistil | 76 GB | 061930dd615c7c807323ea7fc7957877 |
MS MARCO V2 Document Reproductions
| | dev | DL21 | DL22 | DL23 |
|---|---|---|---|---|
| Unsupervised Lexical, Complete Doc | ||||
| baselines | + | + | + | + |
| doc2query-T5 | + | + | + | + |
| Unsupervised Lexical, Segmented Doc | ||||
| baselines | + | + | + | + |
| doc2query-T5 | + | + | + | + |
| Learned Sparse Lexical | ||||
| uniCOIL noexp zero-shot | β | β | β | β |
| uniCOIL with doc2query-T5 zero-shot | β | β | β | β |
| Corpora | Size | Checksum |
|---|---|---|
| MS MARCO V2 doc: uniCOIL (noexp) | 55 GB | 97ba262c497164de1054f357caea0c63 |
| MS MARCO V2 doc: uniCOIL (d2q-T5) | 72 GB | c5639748c2cbad0152e10b0ebde3b804 |
MS MARCO V2.1 Segmented Document Reproductions
The MS MARCO V2.1 corpora (documents and segmented documents) were derived from the V2 documents corpus for the TREC 2024 RAG Track. Instructions for downloading the corpus can be found here. The experiments below capture topics and passage-level qrels for the V2.1 segmented documents corpus.
| | RAG24 ☂️ | RAG24 NIST | RAG25 ☂️ | RAG25 NIST |
|---|---|---|---|---|
| baselines | 🔑 | 🔑 | 🔑 | 🔑 |
| SPLADE-v3 | 🫙 | 🫙 | 🫙 | 🫙 |
| Arctic-embed-l (shard00) | | | | |
| Arctic-embed-l (shard01) | | | | |
| Arctic-embed-l (shard02) | | | | |
| Arctic-embed-l (shard03) | | | | |
| Arctic-embed-l (shard04) | | | | |
| Arctic-embed-l (shard05) | | | | |
| Arctic-embed-l (shard06) | | | | |
| Arctic-embed-l (shard07) | | | | |
| Arctic-embed-l (shard08) | | | | |
| Arctic-embed-l (shard09) | | | | |
Note that all Arctic-embed-l shards use flat vector indexes.
Key:
- ☂️ = UMBRELA for RAG24, UMBRELA 2.0 for RAG25
- 🔑 = keyword queries
- 🫙 = cached queries, 🅾️ = query encoding with ONNX
| Corpora | Size | Checksum |
|---|---|---|
| SPLADE-v3 | 125 GB | c62490569364a1eb0101da1ca4a894d9 |
MS MARCO V2.1 Document Reproductions
The MS MARCO V2.1 corpora (documents and segmented documents) were derived from the V2 documents corpus for the TREC 2024 RAG Track. Instructions for downloading the corpus can be found here. The experiments below capture topics and document-level qrels originally targeted at the V2 documents corpus, but they have been "projected" over to the V2.1 documents corpus. These should be treated like dev topics for the TREC 2024 RAG Track; actual qrels for that track were generated at the passage level. There are no plans to generate additional document-level qrels beyond these.
| | dev | DL21 | DL22 | DL23 | RAGgy dev |
|---|---|---|---|---|---|
| Unsupervised Lexical, Complete Doc | | | | | |
| baselines | 🔑 | 🔑 | 🔑 | 🔑 | 🔑 |
| Unsupervised Lexical, Segmented Doc | | | | | |
| baselines | 🔑 | 🔑 | 🔑 | 🔑 | 🔑 |
| SPLADE-v3 | 🫙 | 🫙 | 🫙 | 🫙 | 🫙 |
Key:
- 🔑 = keyword queries
- 🫙 = cached queries, 🅾️ = query encoding with ONNX
BEIR (v1.0.0) Reproductions
Sparse representations
Key:
- F1 = "flat" baseline (Lucene analyzer), keyword queries (🔑)
- F2 = "flat" baseline (pre-tokenized with `bert-base-uncased` tokenizer), keyword queries (🔑)
- MF = "multifield" baseline (Lucene analyzer), keyword queries (🔑)
- U1 = uniCOIL (noexp), cached queries (🫙)
- Spp = SPLADE++ CoCondenser-EnsembleDistil: cached queries (🫙), ONNX (🅾️)
- Sv3 = SPLADE-v3: cached queries (🫙), ONNX (🅾️)
See instructions below the table for how to reproduce results programmatically.
Dense representations
Key:
- BGE (flat) = BGE-base-en-v1.5 (flat vector indexes)
  - original (float32) indexes: cached queries (🫙), ONNX (🅾️)
  - quantized (int8) indexes: cached queries (🫙), ONNX (🅾️)
- BGE (HNSW) = BGE-base-en-v1.5 (HNSW indexes)
  - original (float32) indexes: cached queries (🫙), ONNX (🅾️)
  - quantized (int8) indexes: cached queries (🫙), ONNX (🅾️)
See instructions below the table for how to reproduce results programmatically.
To reproduce the above results programmatically, use the following commands to download and unpack the data:
```bash
wget https://rgw.cs.uwaterloo.ca/pyserini/data/$COLLECTION -P collections/
tar xvf collections/$COLLECTION -C collections/
```

Substitute the appropriate `$COLLECTION` from the table below.
| `$COLLECTION` | Size | Checksum |
|---|---|---|
| `beir-v1.0.0-corpus.tar` | 14 GB | faefd5281b662c72ce03d22021e4ff6b |
| `beir-v1.0.0-corpus-wp.tar` | 13 GB | 3cf8f3dcdcadd49362965dd4466e6ff2 |
| `beir-v1.0.0-unicoil-noexp.tar` | 30 GB | 4fd04d2af816a6637fc12922cccc8a83 |
| `beir-v1.0.0-splade-pp-ed.tar` | 43 GB | 9c7de5b444a788c9e74c340bf833173b |
| `beir-v1.0.0-splade-v3.tar` | 55 GB | 37f294610af763ce48eed03afd9455df |
| `beir-v1.0.0-bge-base-en-v1.5.parquet.tar` | 127 GB | 5f8dce18660cc8ac0318500bea5993ac |
Once you've unpacked the data, follow the linked reproduction pages above to run and verify the desired BEIR corpus/model combinations.
Substitute the appropriate `$MODEL` from the table below.
| Key | $MODEL |
|---|---|
| F1 | flat |
| F2 | flat-wp |
| MF | multifield |
| U1 (cached) | unicoil-noexp.cached |
| Spp (cached) | splade-pp-ed.cached |
| Spp (ONNX) | splade-pp-ed.onnx |
| Sv3 (cached) | splade-v3.cached |
| Sv3 (ONNX) | splade-v3.onnx |
| BGE (flat, full; cached) | bge-base-en-v1.5.parquet.flat.cached |
| BGE (flat, int8; cached) | bge-base-en-v1.5.parquet.flat-sqv.cached |
| BGE (HNSW, full; cached) | bge-base-en-v1.5.parquet.hnsw.cached |
| BGE (HNSW, int8; cached) | bge-base-en-v1.5.parquet.hnsw-sqv.cached |
| BGE (flat, full; ONNX) | bge-base-en-v1.5.parquet.flat.onnx |
| BGE (flat, int8; ONNX) | bge-base-en-v1.5.parquet.flat-sqv.onnx |
| BGE (HNSW, full; ONNX) | bge-base-en-v1.5.parquet.hnsw.onnx |
| BGE (HNSW, int8; ONNX) | bge-base-en-v1.5.parquet.hnsw-sqv.onnx |
BRIGHT Reproductions
BRIGHT is a retrieval benchmark described here.
- BM25
- SPLADE-v3: cached queries (🫙), ONNX (🅾️)
- BGE (flat) = BGE-large-en-v1.5 (flat vector indexes): cached queries (🫙), ONNX (🅾️)
| Corpus | BM25 | SPLADE-v3 | BGE (flat) |
|---|---|---|---|
| StackExchange | | | |
| Biology | 🔑 | 🫙 | 🫙 |
| Earth Science | 🔑 | 🫙 | 🫙 |
| Economics | 🔑 | 🫙 | 🫙 |
| Psychology | 🔑 | 🫙 | 🫙 |
| Robotics | 🔑 | 🫙 | 🫙 |
| Stack Overflow | 🔑 | 🫙 | 🫙 |
| Sustainable Living | 🔑 | 🫙 | 🫙 |
| Coding | | | |
| LeetCode | 🔑 | 🫙 | 🫙 |
| Pony | 🔑 | 🫙 | 🫙 |
| Theorems | | | |
| AoPS | 🔑 | 🫙 | 🫙 |
| TheoremQA-Q | 🔑 | 🫙 | 🫙 |
| TheoremQA-T | 🔑 | 🫙 | 🫙 |
| Corpora | Size | Checksum |
|---|---|---|
| Post-Processed Corpora | 284 MB | 568b594709a9977369033117bfb6889c |
| SPLADE-v3 | 1.5 GB | 434cd776b5c40f8112d2bf888c58a516 |
| BGE-large-en-v1.5 | 13 GB | 0ce2634d34d3d467cd1afd74f2f63c7b |
The BRIGHT corpora above were processed from Hugging Face with these scripts.
Cross-lingual and Multi-lingual Reproductions
- Reproductions for Mr. TyDi (v1.1) baselines: ar, bn, en, fi, id, ja, ko, ru, sw, te, th
- Reproductions for MIRACL (v1.0) baselines: ar, bn, en, es, fa, fi, fr, hi, id, ja, ko, ru, sw, te, th, zh
- Reproductions for TREC 2022 NeuCLIR Track BM25 (query translation): Persian, Russian, Chinese
- Reproductions for TREC 2022 NeuCLIR Track BM25 (document translation): Persian, Russian, Chinese
- Reproductions for TREC 2022 NeuCLIR Track SPLADE (query translation): Persian, Russian, Chinese
- Reproductions for TREC 2022 NeuCLIR Track SPLADE (document translation): Persian, Russian, Chinese
- Reproductions for HC4 (v1.0) baselines on HC4 corpora: Persian, Russian, Chinese
- Reproductions for HC4 (v1.0) baselines on original NeuCLIR22 corpora: Persian, Russian, Chinese
- Reproductions for HC4 (v1.0) baselines on translated NeuCLIR22 corpora: Persian, Russian, Chinese
- Reproductions for NTCIR-8 ACLIA (IR4QA subtask, Monolingual Chinese)
- Reproductions for CLEF 2006 Monolingual French
- Reproductions for TREC 2002 Monolingual Arabic
- Reproductions for FIRE 2012 monolingual baselines: Bengali, Hindi, English
- Reproductions for CIRAL (v1.0) BM25 (query translation): Hausa, Somali, Swahili, Yoruba
- Reproductions for CIRAL (v1.0) BM25 (document translation): Hausa, Somali, Swahili, Yoruba
Other Reproductions
- Reproductions for Disks 1 & 2 (TREC 1-3), Disks 4 & 5 (TREC 7-8, Robust04), AQUAINT (Robust05)
- Reproductions for the New York Times Corpus (Core17), the Washington Post Corpus (Core18)
- Reproductions for Wt10g, Gov2
- Reproductions for ClueWeb09 (Category B), ClueWeb12-B13, ClueWeb12
- Reproductions for Tweets2011 (MB11 & MB12), Tweets2013 (MB13 & MB14)
- Reproductions for Complex Answer Retrieval (CAR17): v1.5, v2.0, v2.0 with doc2query
- Reproductions for TREC News Tracks (Background Linking Task): 2018, 2019, 2020
- Reproductions for FEVER Fact Verification
- Reproductions for DPR Wikipedia QA baselines: 100-word splits, 6/3 sliding window sentences
The experiments described below are not associated with rigorous end-to-end reproduction testing and thus provide a lower standard of reproducibility. For the most part, manual copying and pasting of commands into a shell is required to reproduce our results.
MS MARCO V1
- Reproducing BM25 baselines for MS MARCO Passage Ranking
- Reproducing BM25 baselines for MS MARCO Document Ranking
- Reproducing baselines for the MS MARCO Document Ranking Leaderboard
- Reproducing doc2query results (MS MARCO Passage Ranking and TREC-CAR)
- Reproducing docTTTTTquery results (MS MARCO Passage and Document Ranking)
- Notes about reproduction issues with MS MARCO Document Ranking w/ docTTTTTquery
TREC-COVID and CORD-19
Other Experiments and Features
- Working with the 20 Newsgroups Dataset
- Guide to BM25 baselines for the FEVER Fact Verification Task
- Guide to reproducing "Neural Hype" Experiments
- Guide to running experiments on the AI2 Open Research Corpus
- Experiments from Yang et al. (JDIQ 2018)
- Runbooks for TREC 2018: [Anserini group] [h2oloo group]
- Runbook for ECIR 2019 paper on axiomatic semantic term matching
- Runbook for ECIR 2019 paper on cross-collection relevance feedback
- Support for approximate nearest-neighbor search on dense vectors with inverted indexes
If you've found Anserini to be helpful, we have a simple request for you to contribute back.
In the course of reproducing baseline results on standard test collections, please let us know if you're successful by sending us a pull request with a simple note, like what appears at the bottom of the page for Disks 4 & 5.
Reproducibility is important to us, and we'd like to know about successes as well as failures.
Since the reproduction documentation is auto-generated, pull requests should be sent against the reproduction definitions and doc templates under `src/main/resources/reproduce`. The reproduction documentation can then be regenerated using the `bin/build.sh` script. In turn, you'll be recognized as a contributor.
Beyond that, there are always open issues we would appreciate help on!
- v2.0.0: April 14, 2026 [Release Notes]
- v1.7.1: March 24, 2026 [Release Notes]
- v1.7.0: March 21, 2026 [Release Notes]
- v1.6.0: February 24, 2026 [Release Notes]
- v1.5.0: January 9, 2026 [Release Notes]
- v1.4.0: December 2, 2025 [Release Notes]
- v1.3.0: September 14, 2025 [Release Notes]
- v1.2.2: September 2, 2025 [Release Notes]
- v1.2.1: August 20, 2025 [Release Notes]
- v1.2.0: August 12, 2025 [Release Notes]
- v1.1.1: July 1, 2025 [Release Notes]
- v1.1.0: July 1, 2025 [Release Notes] [Known Issues]
- v1.0.0: April 25, 2025 [Release Notes]
older... (and historic notes)
- v0.39.0: January 12, 2025 [Release Notes]
- v0.38.0: September 6, 2024 [Release Notes]
- v0.37.0: August 22, 2024 [Release Notes]
- v0.36.1: May 23, 2024 [Release Notes]
- v0.36.0: April 28, 2024 [Release Notes]
- v0.35.1: April 24, 2024 [Release Notes]
- v0.35.0: April 3, 2024 [Release Notes]
- v0.25.0: March 27, 2024 [Release Notes]
- v0.24.2: February 27, 2024 [Release Notes]
- v0.24.1: January 27, 2024 [Release Notes]
- v0.24.0: December 28, 2023 [Release Notes]
- v0.23.0: November 16, 2023 [Release Notes]
- v0.22.1: October 18, 2023 [Release Notes]
- v0.22.0: August 28, 2023 [Release Notes]
- v0.21.0: March 31, 2023 [Release Notes]
- v0.20.0: January 20, 2023 [Release Notes]
- v0.16.2: December 12, 2022 [Release Notes]
- v0.16.1: November 2, 2022 [Release Notes]
- v0.16.0: October 23, 2022 [Release Notes]
- v0.15.0: September 22, 2022 [Release Notes]
- v0.14.4: July 31, 2022 [Release Notes]
- v0.14.3: May 9, 2022 [Release Notes]
- v0.14.2: March 24, 2022 [Release Notes]
- v0.14.1: February 27, 2022 [Release Notes]
- v0.14.0: January 10, 2022 [Release Notes]
- v0.13.5: November 2, 2021 [Release Notes]
- v0.13.4: October 22, 2021 [Release Notes]
- v0.13.3: August 22, 2021 [Release Notes]
- v0.13.2: July 20, 2021 [Release Notes]
- v0.13.1: June 29, 2021 [Release Notes]
- v0.13.0: June 22, 2021 [Release Notes]
- v0.12.0: April 29, 2021 [Release Notes]
- v0.11.0: February 13, 2021 [Release Notes]
- v0.10.1: January 8, 2021 [Release Notes]
- v0.10.0: November 25, 2020 [Release Notes]
- v0.9.4: June 25, 2020 [Release Notes]
- v0.9.3: May 26, 2020 [Release Notes]
- v0.9.2: May 14, 2020 [Release Notes]
- v0.9.1: May 6, 2020 [Release Notes]
- v0.9.0: April 18, 2020 [Release Notes]
- v0.8.1: March 22, 2020 [Release Notes]
- v0.8.0: March 11, 2020 [Release Notes]
- v0.7.2: January 25, 2020 [Release Notes]
- v0.7.1: January 9, 2020 [Release Notes]
- v0.7.0: December 13, 2019 [Release Notes]
- v0.6.0: September 6, 2019 [Release Notes] [Known Issues]
- v0.5.1: June 11, 2019 [Release Notes]
- v0.5.0: June 5, 2019 [Release Notes]
- v0.4.0: March 4, 2019 [Release Notes]
- v0.3.0: December 16, 2018 [Release Notes]
- v0.2.0: September 10, 2018 [Release Notes]
- v0.1.0: July 4, 2018 [Release Notes]
- Anserini was upgraded to Lucene 10.4.0 at commit `c6eed6` (2026/04/12) as part of v2.0.0. Lucene 9 indexes remain readable, but older code is unable to read indexes generated by Lucene 10.
- Anserini was upgraded from JDK 11 to JDK 21 at commit `272565` (2024/04/03), which corresponds to the release of v0.35.0.
- Anserini was upgraded to Lucene 9.3 at commit `272565` (8/2/2022): this upgrade created backward-compatibility issues, see #1952. Anserini will automatically detect Lucene 8 indexes and disable consistent tie-breaking to avoid runtime errors. However, Lucene 9 code running on Lucene 8 indexes may give slightly different results than Lucene 8 code running on Lucene 8 indexes. Lucene 8 code will not run on Lucene 9 indexes. Pyserini has also been upgraded, and similar issues apply.
- Anserini was upgraded to Java 11 at commit `17b702d` (7/11/2019) from Java 8. Maven 3.3+ is also required.
- Anserini was upgraded to Lucene 8.0 as of commit `75e36f9` (6/12/2019); prior to that, the toolkit used Lucene 7.6. Based on preliminary experiments, query evaluation latency is much improved in Lucene 8. As a result of this upgrade, results of all reproductions have changed slightly. To reproduce old results from Lucene 7.6, use v0.5.1.
- Jimmy Lin, Matt Crane, Andrew Trotman, Jamie Callan, Ishan Chattopadhyaya, John Foley, Grant Ingersoll, Craig Macdonald, Sebastiano Vigna. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. ECIR 2016.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the Use of Lucene for Information Retrieval Research. SIGIR 2017.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality, 10(4), Article 16, 2018.
This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.
