Effect-Invariant Mechanisms for Policy Generalization

JOURNAL OF MACHINE LEARNING RESEARCH (2024)

Abstract
Policy learning is an important component of many real-world learning systems. A major challenge in policy learning is how to adapt efficiently to unseen environments or tasks. Recently, it has been suggested to exploit invariant conditional distributions to learn models that generalize better to unseen environments. However, assuming invariance of entire conditional distributions (which we call full invariance) may be too strong an assumption in practice. In this paper, we introduce a relaxation of full invariance called effect-invariance (e-invariance for short) and prove that it is sufficient, under suitable assumptions, for zero-shot policy generalization. We also discuss an extension that exploits e-invariance when we have a small sample from the test environment, enabling few-shot policy generalization. Our work does not assume an underlying causal graph or that the data are generated by a structural causal model; instead, we develop testing procedures to test e-invariance directly from data. We present empirical results using simulated data and a mobile health intervention dataset to demonstrate the effectiveness of our approach.
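To illustrate the distinction the abstract draws, here is a toy sketch (not the paper's actual testing procedure) of why effect-invariance is weaker than full invariance: two environments can disagree in their full outcome distributions (here, different noise scales) while the mean effect of the action stays the same. All data-generating parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_env(n, noise_scale):
    # Covariate x, binary action a, outcome y. The *mean* effect of a on y
    # is shared across environments, but the noise scale differs, so the
    # full conditional distribution of y is NOT invariant.
    x = rng.normal(size=n)
    a = rng.integers(0, 2, size=n)
    noise = rng.normal(loc=0.0, scale=noise_scale, size=n)
    y = 2.0 * a + 0.5 * x + noise
    return x, a, y

def mean_effect(a, y):
    # Plug-in estimate of E[Y | A=1] - E[Y | A=0] (the "effect").
    return y[a == 1].mean() - y[a == 0].mean()

x0, a0, y0 = sample_env(20000, noise_scale=1.0)
x1, a1, y1 = sample_env(20000, noise_scale=3.0)

# The estimated effects agree across environments (up to sampling error),
# even though the outcome distributions differ: e-invariance holds here
# while full invariance does not.
gap = abs(mean_effect(a0, y0) - mean_effect(a1, y1))
print("effect gap across environments:", round(gap, 3))
```

A practical test of e-invariance would replace the eyeball comparison above with a formal hypothesis test of whether the effect gap is zero; the paper develops such procedures directly from data, without assuming a causal graph.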
Keywords
distribution generalization, policy learning, invariance, causality, domain adaptation