Information assurance in Critical Infrastructures (CIs) is a problem of great practical interest and a challenging research field. Within this scope, we focus on the problem of monitoring CIs. In particular, we propose a model to maximize the amount of monitoring-related data that survives after a portion of the CI suffers a disaster. The proposed model addresses a specific CI, oil pipelines, and is built on the hypothesis that the monitoring data are provided by wireless sensor networks. In particular, we consider a CI where the sensors are deployed along the pipelines and execute a common monitoring task at a given sampling rate. To ensure data availability, the sensors replicate the sensed data to their peers. This model poses a few unique challenges, calling for the optimization of competing system parameters. For instance, a higher sampling rate would allow, on the one hand, a finer-grained analysis of the situation, while on the other hand it would consume more energy. Similarly, a high volume of data replication would give data a greater chance of surviving a disaster (hence helping in forensics or further disaster prevention), but it would cost more in terms of both energy and bandwidth. We derive an analytical model for this scenario. This model can be processed to derive the optimal sampling rate that maximizes the amount of information collected by the monitoring infrastructure, while satisfying the complex and competing system constraints. Further, simulations are performed on both regular (tree-based) and randomly generated oil pipeline topologies; they show the wide applicability of our model and provide a few non-intuitive results on the behaviour of the competing system parameters. Finally, we develop a case study on a real-world oil pipeline. The results support the quality of the proposed model and its flexibility.