As scientists strive to understand the oceans from the regional to the basin scale, on decadal as well as daily or weekly timescales, the demand for timely and distributed observational technology is increasing. This is driving the need for a persistent presence of sensors in our oceans, deployed both on the seabed and in the water column for months, if not years, at a time. Such needs call for high material reliability and low-power operation.
Whether sensors are deployed on landers or moorings, there is a need for the underpinning infrastructure to be highly reliable and easy to operate. In particular, scientists wish to avoid wasting costly ship time and impinging on other science activities by spending significant amounts of time searching for, and then attempting to trigger, an acoustic release. More fundamentally, efficient deployment often means reducing ship time by leaving sensors in place and remotely accessing data via a surface buoy, an unmanned surface vessel gateway, or even a fly-by with an autonomous underwater vehicle.
Managing the data
While sensor deployment is a key issue for scientists, it is the data they gather that really matters. In the simplest situations, this may merely mean guaranteed recovery of the instrument itself with its stored data, but increasingly there is a need to both manage data in situ and recover it wirelessly while leaving the sensors in place. Few have the luxury of online access to their data (except in the case of cabled observatories). Yet scientists increasingly need intelligent access to near-real-time data: access that does not saturate limited and often costly bandwidth with large volumes of high-frequency data, while still allowing them to selectively retrieve high-resolution data for events of particular interest.
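The data-triage pattern described above, keeping full-resolution samples in situ while telemetering only low-rate summaries and serving high-resolution windows on request, can be sketched roughly as follows. This is a minimal illustration, not any particular instrument's firmware; the `DataLogger` class and its parameter names are assumptions for the example.

```python
from statistics import mean

class DataLogger:
    """Hypothetical in-situ logger: archives everything locally,
    telemeters only block averages, serves raw windows on demand."""

    def __init__(self, summary_block=60):
        self.samples = []                   # full-resolution archive kept on the instrument
        self.summary_block = summary_block  # raw samples per telemetered summary value

    def record(self, value):
        # Every sample is stored in situ at full resolution.
        self.samples.append(value)

    def summaries(self):
        # Low-bandwidth product: one mean per block of raw samples,
        # suitable for a costly satellite or acoustic link.
        n = self.summary_block
        return [mean(self.samples[i:i + n])
                for i in range(0, len(self.samples), n)]

    def event_window(self, start, end):
        # Selective retrieval: full-resolution data only for a
        # window the scientist has flagged as interesting.
        return self.samples[start:end]
```

In use, a scientist would poll `summaries()` routinely over the constrained link, and call `event_window()` only around anomalies spotted in those summaries, so the bulk of the high-frequency record never crosses the link.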