We introduce Oblivionis, a lightweight framework that integrates federated learning and targeted unlearning for LLMs, formulating them as a joint multi-objective optimization task to enable privacy-preserving training and compliance with GDPR's right to be forgotten.
Federated Learning Methods
FedAvg, FedProx, FedAdagrad, FedAdam, FedYogi
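As a minimal sketch of the aggregation step these strategies build on, here is FedAvg: the server takes a sample-weighted average of client parameters. The function name and the plain-dict parameter format are illustrative, not Oblivionis's actual API.

```python
def fedavg(client_params, client_sizes):
    """FedAvg aggregation sketch (hypothetical helper, not the framework's API).

    client_params: list of dicts {param_name: value} from each client
    client_sizes:  number of local training samples per client
    Returns the sample-weighted average of the client parameters.
    """
    total = sum(client_sizes)
    avg = {}
    for name in client_params[0]:
        avg[name] = sum(p[name] * n for p, n in zip(client_params, client_sizes)) / total
    return avg

# Example: two clients, one holding twice as much data.
global_params = fedavg([{"w": 1.0}, {"w": 4.0}], [200, 100])
print(global_params)  # {'w': 2.0}
```

Variants such as FedAdam or FedYogi replace this plain average with a server-side adaptive optimizer step, but the weighted aggregation of client updates is the same starting point.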
Unlearning Methods
GradAscent, GradDiff, NPO, SimNPO, Retrain
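To illustrate the shape of these unlearning objectives, the following scalar sketch shows the GradDiff idea: ascend the loss on forget-set examples while descending it on retain-set examples, folded into one objective that ordinary gradient descent minimizes. The function name and the retain-weight parameter are illustrative assumptions, not the framework's interface.

```python
def graddiff_loss(forget_loss, retain_loss, lambda_retain=1.0):
    """GradDiff-style objective sketch (hypothetical signature).

    Minimizing this objective pushes forget_loss up (gradient ascent on the
    forget set) while keeping retain_loss down, trading off forgetting
    against model utility via lambda_retain.
    """
    return -forget_loss + lambda_retain * retain_loss

# Example: high loss on forget data, low loss on retained data.
print(graddiff_loss(2.5, 0.4))  # approximately -2.1
```

Plain GradAscent corresponds to `lambda_retain=0`, while NPO and SimNPO replace the raw negated forget loss with a preference-optimization style term that is more stable to train.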
Metrics
Including Probability, ROUGE-L, Truth Ratio, and 10+ other metrics
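As one concrete example from this metric suite, ROUGE-L scores a generated answer against a reference by the length of their longest common subsequence (LCS), combined into an F-measure. This is a standard self-contained implementation, not code from the framework.

```python
def lcs_len(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    """ROUGE-L F-measure over whitespace tokens."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# A forgotten answer should overlap little with the ground-truth reference.
print(rouge_l("the cat sat", "the cat sat on the mat"))
```

After unlearning, a low ROUGE-L on the forget set alongside a high score on the retain set indicates forgetting without loss of utility.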
Datasets
Including TOFU: Forget, Retain, Real Author, Word Fact; MUSE: Books, News
A complete end-to-end federated learning and unlearning optimization pipeline
Facilitating standardized and reproducible research
Robust balance between forgetting and model utility
@misc{zhang2025oblivionislightweightlearningunlearning,
title={Oblivionis: A Lightweight Learning and Unlearning Framework for Federated Large Language Models},
author={Fuyao Zhang and Xinyu Yan and Tiantong Wu and Wenjie Li and Tianxiang Chen and Yang Cao and Ran Yan and Longtao Huang and Wei Yang Bryan Lim and Qiang Yang},
year={2025},
eprint={2508.08875},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.08875},
}
We are aware of an existing work whose title bears some similarity to ours. The term "Oblivionis" in our title originates from the Latin root oblivio, meaning "forgetting" or "oblivion." We would like to clarify that our study is independent and distinct in scope, methodology, and contributions.