
Monday 13 April 2015

R.O.C. On!

So, so much data.

One of the best, and worst, things about conservation acoustics is how easy and affordable it is to collect sound data. For the cost of paying one observer to sit in the field watching for animals, you could buy three or four acoustic recorders that can be used year after year. For any penny-pinching manager this is particularly enticing.

Unfortunately, while it is becoming ever easier to collect "gobs" of data, it is also becoming more difficult to process it all. For example, my PhD project will involve data collected from 43 sensors every summer for four years. I estimate the total storage requirements to be over 40 TB of acoustic data. To put that in perspective, the chemistry department in St Andrews was just gifted a server of the same size, for the entire department.
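For anyone doing the same back-of-envelope sums, here is a quick sketch of how raw audio adds up. The sample rate, bit depth, duty cycle and season length below are illustrative assumptions I've picked so the total lands in roughly the same ballpark, not the actual settings from my project.

```python
# Rough storage estimate for an uncompressed (PCM/WAV) acoustic deployment.
# All parameter values here are illustrative assumptions, not project settings.

n_sensors = 43          # recorders deployed each summer
n_summers = 4           # field seasons
days_per_summer = 100   # assumed deployment length per season
sample_rate = 48_000    # samples per second (assumed)
bit_depth = 16          # bits per sample (assumed)
duty_cycle = 0.3        # fraction of each day the recorder is on (assumed)

bytes_per_day = sample_rate * (bit_depth / 8) * 86_400 * duty_cycle
total_tb = bytes_per_day * days_per_summer * n_sensors * n_summers / 1e12

print(f"~{total_tb:.0f} TB of raw audio")  # ~43 TB with these assumptions
```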

Anyway, as sometimes happens, thousands of dollars' worth of recording equipment is purchased and deployed, and one or two people are hired to process the data.

Admin1: Researchers are expensive but acoustic recorders are cheap.
Admin2: Then let's buy lots of recorders and hire one poor sap to process all the data.



So, what's a PhD student/research scientist to do? Use automatic detectors, that's what. Acoustic detectors can autonomously scan through data and flag sounds that match user-defined characteristics of known species, such as right whale contact calls, bat chirps, sperm whale pulses, etc.

I won't go into it too much here, but there are loads of different kinds of acoustic detectors. They range from simple amplitude thresholds that flag any sound above a certain level, to much more complicated whistle and moan detectors that almost mimic the human eye and look for contours in the spectrogram. Other detectors look at zero crossings (notably for bat chirps) or match spectrogram images (see the template detector in XBat).
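To make the simplest of these concrete, here is a minimal sketch of an amplitude threshold detector in Python/NumPy. The threshold and window length are made-up illustrative values, not recommended settings, and the real detectors in packages like PAMGuard or Raven are considerably more sophisticated.

```python
import numpy as np

def amplitude_threshold_detector(samples, sample_rate, threshold_db=-30.0,
                                 window_s=0.05):
    """Flag windows whose RMS level exceeds a threshold (in dB full scale).

    samples: audio as a float array scaled to [-1, 1]
    threshold_db, window_s: illustrative values, tune for your own data
    Returns a list of window start times (seconds) that were flagged.
    """
    win = int(window_s * sample_rate)
    detections = []
    for i in range(len(samples) // win):
        chunk = samples[i * win:(i + 1) * win].astype(float)
        rms = np.sqrt(np.mean(chunk ** 2))
        level_db = 20 * np.log10(rms + 1e-12)  # avoid log of zero
        if level_db > threshold_db:
            detections.append(i * window_s)
    return detections
```

The fancier detectors follow the same basic pattern: compute some feature per frame (a contour, a template correlation, a zero-crossing rate) and apply a decision rule to it.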

Unfortunately, no automatic detector is perfect, even under ideal recording conditions. This is exacerbated in complex sound environments that may be full of animal vocalizations, human noise, wind and rain. Of course, we would like detectors to identify every target sound and ignore everything else, but that's not the case. Since no detector is perfect, it's important to ask which detector is best for the situation at hand. To quantitatively measure how different detectors perform, we need some basic terminology. Below are four terms that describe detector performance (a small example of tallying them follows the list).

False Positive: A non-target sound falsely flagged as a target by the detector

False Negative: A target sound missed by the detector

True Positive: A target sound correctly identified by the detector

True Negative: A non-target sound correctly passed over or ignored by the detector
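To make that bookkeeping concrete, here is a small sketch that tallies the four counts by comparing detector output times against manually annotated call times. The time-tolerance matching and the function name are my own simplifications; real scoring protocols are usually more involved.

```python
def score_detections(detected_times, true_times, tolerance_s=1.0,
                     total_windows=None):
    """Count true/false positives and negatives.

    detected_times: start times (s) flagged by the detector
    true_times:     start times (s) of annotated target sounds
    tolerance_s:    how close a detection must be to an annotation to count
                    as a match (an assumed, illustrative value)
    total_windows:  total number of analysis windows, needed for true negatives
    """
    matched = set()
    tp = 0
    for d in detected_times:
        hit = next((t for t in true_times
                    if abs(d - t) <= tolerance_s and t not in matched), None)
        if hit is not None:
            matched.add(hit)
            tp += 1
    fp = len(detected_times) - tp          # detections with no matching call
    fn = len(true_times) - len(matched)    # annotated calls the detector missed
    tn = None
    if total_windows is not None:
        tn = total_windows - tp - fp - fn  # everything else correctly ignored
    return tp, fp, fn, tn
```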


Confused? Here, let me 'splain
[Figure: Example of PAMGuard Whistle and Moan Detector performance on a sample of common dolphin whistles]

Much better, eh?

Detectors can be tuned to increase true positives, at the cost of also increasing false positives. That is why there is a simple way to compare multiple detectors: the Receiver Operating Characteristic, or ROC, curve. An ROC curve is simply a plot of the true positive rate against the false positive rate.

For each detector we build, we can tweak the sensitivity to increase the false positive rate and see how that affects the true positive rate.
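Here is a minimal sketch of that sweep in Python/NumPy, assuming the detector produces a score for each analysis window and that we have ground-truth labels for those windows; the scores, labels and threshold grid are all illustrative.

```python
import numpy as np

def roc_curve(scores, labels, n_thresholds=50):
    """Sweep a detection threshold and return (false positive rate,
    true positive rate) pairs.

    scores: detector output per analysis window (higher = more call-like)
    labels: 1 if the window truly contains a target call, else 0
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    thresholds = np.linspace(scores.min(), scores.max(), n_thresholds)
    fpr, tpr = [], []
    for thr in thresholds:
        flagged = scores >= thr
        tp = np.sum(flagged & labels)
        fp = np.sum(flagged & ~labels)
        fn = np.sum(~flagged & labels)
        tn = np.sum(~flagged & ~labels)
        tpr.append(tp / (tp + fn))   # fraction of real calls we caught
        fpr.append(fp / (fp + tn))   # fraction of non-calls we flagged
    return np.array(fpr), np.array(tpr)
```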


When we've done this several times with each detector, the resulting points trace out its ROC curve, one curve per detector. To determine which detector is best, we integrate the total area under each curve. In the example plot above, the yellow line has the most area under it while the blue line has the least. Each point on each line represents different settings for our detector. In this case we would use the detector represented by the yellow curve, and probably pick the settings that gave us roughly a 20% false positive rate. Any higher and we would be swimming in false detections; any lower and we would miss a large portion of the true positives. Make sense?
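Computing the area under each curve is just numerical integration over the (false positive rate, true positive rate) points. A sketch using NumPy's trapezoidal rule is below, along with picking the ROC point closest to a target false positive rate, using the roughly 20% operating point from the example above; the function names are my own.

```python
import numpy as np

def auc(fpr, tpr):
    """Area under an ROC curve via the trapezoidal rule."""
    order = np.argsort(fpr)                  # integrate left to right
    return np.trapz(np.asarray(tpr)[order], np.asarray(fpr)[order])

def pick_operating_point(fpr, tpr, target_fpr=0.20):
    """Index of the ROC point nearest a target false positive rate
    (the ~20% example from the text)."""
    return int(np.argmin(np.abs(np.asarray(fpr) - target_fpr)))

# Usage sketch: compare several detectors and keep the one whose curve
# encloses the largest area.
# best = max(detectors, key=lambda d: auc(*roc_curve(d.scores, labels)))
```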

You can try this out using any number of software platforms, including PAMGuard, Raven, Adobe Audition and XBat. Each has its own detectors with its own sensitivity adjustments, and they are all fun to play with, though I might be a bit biased.

As always, thanks for reading and happy analyzing!









