Improving Data Integrity with Randomness – A Compressive Sensing Approach

MSRA (2009)

Abstract
Data loss in wireless sensor systems is inevitable, whether from exogenous causes (such as transmission-medium impediments) or endogenous ones (such as faulty sensors). While there have been many attempts at coping with this issue, recent developments in the area of Compressive Sensing (CS) enable a new perspective. Since many natural signals are compressible, CS can be employed not only to reduce the effective sampling rate, but also to improve the robustness of the system at a given Quality of Information (QoI). This is possible because reconstruction algorithms for compressively sampled signals are not hampered by the stochastic nature of wireless link disturbances and sensor malfunctions, which has traditionally plagued attempts at proactively handling the effects of these errors. In this paper, we show that the reconstruction error can be held unchanged under even extreme independent data losses by marginally increasing the average sampling rate. We also show that a simple re-ordering of samples prior to communication can enable successful reconstruction when losses display bursty behavior.

The problem we seek to address is acquiring an n-length vector f ∈ R^n at a sensor node such that it can be recovered accurately at a base station one or more wireless hops away. Realistically, data losses creep into the system owing to two inevitable circumstances: wireless link quality variations caused by noise and interference, and temporary sensor faults. To cope with these issues, reactive schemes such as retransmissions (end-to-end or hop-by-hop) have been popularly employed. Proactive schemes such as error-correcting codes have also been used, though in a limited sense. In this short paper, we introduce how recent developments in the area of Compressive Sensing (CS) [2] enable a low-encoding-complexity, proactive sensing approach that can easily be made robust to even extreme data losses.
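The re-ordering idea above amounts to an interleaver: if the sender permutes its measurements with a pseudorandom permutation shared with the receiver, a burst of consecutive losses on the channel maps, after de-interleaving, to scattered losses in measurement order, which CS reconstruction treats like independent erasures. A minimal sketch (the permutation scheme and sizes here are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

rng = np.random.default_rng(42)
m = 100
samples = rng.normal(size=m)  # stand-in for m compressive measurements

# Sender: permute measurements with a pseudorandom permutation
# shared with the receiver (hypothetical shared-seed scheme).
perm = rng.permutation(m)
tx = samples[perm]

# Channel: a burst erasure wipes out 20 consecutive transmitted samples.
mask = np.ones(m, dtype=bool)
mask[40:60] = False  # positions lost in transmission order

# Receiver: undo the permutation; the burst now appears as
# scattered erasures in the original measurement order.
inv = np.argsort(perm)
survived = mask[inv]
gaps = np.flatnonzero(~survived)
print("lost measurement indices (original order):", gaps)
```

After de-interleaving, the 20 lost indices are spread across the whole measurement vector rather than forming one contiguous run, which is exactly the loss pattern the independent-erasure argument covers.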
Utilizing the fact that CS strategies make inherent use of randomness within the sensing process, we surmise that data lost through the stochastic nature of an erasure channel is indistinguishable from an a priori lower sensing rate at the fusion center. We verify this conjecture empirically and show that it is sufficient to proactively increase the sampling rate in order to maintain reconstruction accuracy.
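The conjecture can be illustrated numerically: drop a random fraction of Gaussian measurements of a sparse signal and reconstruct from the survivors. A minimal sketch, using Orthogonal Matching Pursuit as a stand-in recovery algorithm (the paper does not specify its reconstruction method; the sizes, erasure rate, and sparsity level here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# An n-length signal that is k-sparse in the canonical basis.
n, k = 256, 8
f = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
f[support] = rng.normal(size=k)

# Take m random Gaussian measurements y = Phi f (compressive sampling),
# with m chosen above the minimum so erasures can be absorbed.
m = 120
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ f

# Channel: independently erase 25% of the measurements. At the fusion
# center this looks like sampling with fewer rows of a random matrix.
keep = rng.random(m) > 0.25
Phi_rx, y_rx = Phi[keep], y[keep]

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse x with A x ~ b."""
    residual, idx = b.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
        residual = b - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

f_hat = omp(Phi_rx, y_rx, k)
err = np.linalg.norm(f_hat - f) / np.linalg.norm(f)
print(f"relative reconstruction error after erasures: {err:.2e}")
```

Because the surviving rows of a random Gaussian matrix are themselves a valid (smaller) random measurement matrix, the receiver reconstructs as if it had simply sampled at a lower rate, which is the intuition behind proactively over-sampling by a small margin.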