Commit 5bfd918

Update README.md
1 parent fca0d8b commit 5bfd918

File tree: 1 file changed (+5, −4 lines)


README.md — 5 additions & 4 deletions
@@ -18,7 +18,7 @@ Before running, do
 export REPO_DIR=<path to this directory e.g. /shared/nas/data/m1/revanth3/exp/prf/ai2_data/workspace/repo/llm-reranker>
 ```
 
-## Retrieval
+## 1. Retrieval
 Please download the precomputed BEIR encodings stored at (Link will be added shortly)
 Run the baseline Contriever retrieval using the precomputed encodings
 
@@ -31,13 +31,13 @@ To get the baseline contriever scores and preprocess datasets, run:
 bash bash/beir/run_eval.sh rank
 ```
 
-## Reranking
+## 2a. Reranking
 Cross-encoder reranking config is at `{REPO_DIR}/bash/beir/run_rerank_CE.sh`
 To run the baseline cross-encoder reranking, run:
 ```
 bash bash/beir/run_rerank.sh
 ```
-### LLM Reranking
+### 2b. LLM Reranking
 LLM results preparation config is at `{REPO_DIR}/bash/beir/run_convert_results.sh`
 To prepare retrieval results for LLM reranking, run:
 
@@ -60,7 +60,8 @@ bash bash/run_eval.sh rerank
 Set the --suffix flag to "llm_FIRST_alpha" for FIRST LLM evaluation or "ce" for the cross-encoder reranker
 ```
 
-### Train
+
+### 3. Train
 
 We support three training objectives: Ranking, Generation, and Combined. The Ranking objective uses a learning-to-rank algorithm to output the logits for the highest-ranked passage ID. The Generation objective follows the principles of Causal Language Modeling, focusing on permutation generation. The Combined objective, which we introduce in our paper, is a novel weighted approach that seamlessly integrates both ranking and generation principles, and is the setting applied to the FIRST model.
 
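The Combined objective added in this diff can be sketched as a weighted sum of the two losses it names. The following is a minimal, illustrative Python sketch only: the function names, the softmax cross-entropy form of the ranking loss, and the weight `alpha` are assumptions for illustration, not the repository's actual implementation.

```python
# Hypothetical sketch of the Combined training objective: a weighted sum of
# a learning-to-rank loss over passage-ID logits and a causal language-modeling
# (generation) loss. All names and formulations here are illustrative
# assumptions, not the repository's actual code.
import math

def ranking_loss(logits, target_idx):
    """Softmax cross-entropy pushing up the logit of the
    highest-ranked passage ID (one listwise learning-to-rank choice)."""
    log_z = math.log(sum(math.exp(l) for l in logits))
    return log_z - logits[target_idx]

def generation_loss(token_log_probs):
    """Causal-LM loss: mean negative log-likelihood of the tokens
    of the target ranking permutation."""
    return -sum(token_log_probs) / len(token_log_probs)

def combined_loss(logits, target_idx, token_log_probs, alpha=0.5):
    """Weighted combination of the ranking and generation objectives."""
    return alpha * ranking_loss(logits, target_idx) + (1 - alpha) * generation_loss(token_log_probs)
```

With `alpha=1.0` this reduces to the pure Ranking objective and with `alpha=0.0` to the pure Generation objective, which is one natural way to interpolate between the two settings.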
