```bash
pip install -r requirements.txt
```

Download the Llama-2-7b-hf model into the `../model/Llama-2-7b-hf/` directory.
Run the full continual learning training (including the random-initialization baseline and the continual learning sequence DBPedia → Amazon → Yahoo → AGNews):
```bash
bash scripts/mtl5/run_moe-cl.sh
```

Evaluate the trained model:
```bash
python evaluate_mtl5.py \
    --base_model ../model/Llama-2-7b-hf \
    --config configs/mtl5/moe-cl.json \
    --order order1 \
    --load_adapter_file agnews_finetuned
```

- Training results are saved in the `results/` directory.
- Training logs are saved in the `logs/` directory.
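The evaluation command above checks a single adapter. To evaluate every adapter produced by the order1 sequence, the invocations can be generated programmatically. This is a sketch, not part of the repository: only `agnews_finetuned` is confirmed above; the other adapter filenames are assumed to follow the same `<task>_finetuned` pattern.

```python
# Sketch: build one evaluate_mtl5.py invocation per task in the
# order1 sequence (DBPedia -> Amazon -> Yahoo -> AGNews).
# Adapter names other than "agnews_finetuned" are assumptions.
ORDER1 = ["dbpedia", "amazon", "yahoo", "agnews"]

def eval_command(task: str) -> list[str]:
    """Return the argv list for evaluating one task's adapter."""
    return [
        "python", "evaluate_mtl5.py",
        "--base_model", "../model/Llama-2-7b-hf",
        "--config", "configs/mtl5/moe-cl.json",
        "--order", "order1",
        "--load_adapter_file", f"{task}_finetuned",
    ]

if __name__ == "__main__":
    for task in ORDER1:
        print(" ".join(eval_command(task)))
```

Each printed line can be run directly (or passed to `subprocess.run`) to reproduce the per-task evaluation.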