Fex: A Software Systems Evaluator

47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2017

Cited by 15
Abstract
Software systems research relies on experimental evaluation to assess the effectiveness of newly developed solutions. However, existing evaluation frameworks are rigid (they do not allow the creation of new experiments), often simplistic (they may not reveal issues that appear in real-world applications), and can be inconsistent (they do not guarantee reproducibility of experiments across platforms). This paper presents Fex, a software systems evaluation framework that addresses these limitations. Fex is extensible (it can be easily extended with custom experiment types), practical (it supports composition of different benchmark suites and real-world applications), and reproducible (it is built on container technology to guarantee the same software stack across platforms). We show that Fex achieves these design goals with minimal end-user effort - for instance, adding the Nginx web server to an evaluation requires only 160 LoC. We further discuss the architecture of the framework, explain its interface, show common usage scenarios, and evaluate the effort of writing various custom extensions.
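The abstract describes Fex's extensibility and container-based reproducibility only at a high level; the actual interface is not shown here. The following is a purely illustrative sketch of how a container-backed, extensible benchmark registration might look - all class names, image tags, and commands are assumptions, not the real Fex API.

```python
# Hypothetical sketch of an extensible, container-based benchmark runner
# in the spirit of Fex. The real Fex interface is not given in this abstract;
# every identifier below is an assumption for illustration only.
import subprocess

class Benchmark:
    """Base experiment type: subclasses supply an image and commands."""
    name = "base"
    image = "ubuntu:20.04"   # container image pins the software stack
    build_cmd = "true"
    run_cmd = "true"

    def run(self):
        # Run the workload inside a container so the software stack is
        # identical across host platforms (reproducibility goal).
        cmd = ["docker", "run", "--rm", self.image,
               "sh", "-c", f"{self.build_cmd} && {self.run_cmd}"]
        return subprocess.run(cmd, capture_output=True, text=True)

class NginxBenchmark(Benchmark):
    """Hypothetical small extension adding an Nginx experiment."""
    name = "nginx"
    image = "nginx:1.25"
    build_cmd = "nginx -v"
    run_cmd = "nginx -t"     # stand-in workload: validate the configuration

if __name__ == "__main__":
    result = NginxBenchmark().run()
    print(result.stdout, result.stderr)
```

In this sketch, adding a new application is a matter of subclassing and overriding a few fields, which is consistent with the paper's claim that integrating Nginx takes on the order of 160 LoC.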
Keywords
Benchmarks, performance, experiment automation