Heaviside Signal Detection Part 1: Informed non-parametric testing

Steps are frequently found in geophysical datasets, especially in time series (e.g. GPS).  A common approach to estimating the size of the offset is to assume (or estimate) the statistical structure of the noise and then estimate the size and uncertainty of the step.  In a series of posts I’m hoping to address a simple question with no simple answer: Without any information about the location of a step, or the structure of the noise in a given dataset (containing only one step), what are some novel ways to estimate the size, uncertainty, and even location of the step?

Let’s start with the control case: a single offset of known size, located halfway through the series, in the presence of normally distributed noise with known standard deviation.  A simple function to create such a series:

doseq <- function(heavi, n=2000, seqsd=1, seq.frac=0.5){
  # noise
  tmpseq <- rnorm(n, sd=seqsd)
  # + signal (scaled by the standard deviation of the noise)
  tmpseq[ceiling(seq.frac*n):n] <- tmpseq[ceiling(seq.frac*n):n] + heavi*seqsd
  return(tmpseq)
}
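As a quick sanity check (the values here are purely illustrative and not part of the analysis below), one realization might look like this:

# one realization: a step of 2 noise standard deviations, 500 points
x <- doseq(2, n=500)
plot(x, type="l", xlab="index", ylab="value")
abline(v=0.5*500, lty=2) # the step begins at seq.frac*n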

To get a sense of what can and cannot be detected in the idealized case, we can apply a non-parametric two-sample test: one sample carrying the pre-step information, and the other the post-step information.  A function to do that is here:

doit <- function(n=0, plotit=FALSE){
  # n is the step size in units of the noise standard deviation (the SNR)
  xseq <- doseq(n)
  lx2 <- length(xseq) %/% 2
  # rank the series
  xseq.r <- rank(xseq[1:(2*lx2)])
  # normalize by the half-length
  xseq.rn <- xseq.r/lx2
  df <- data.frame(rnk=xseq.rn[1:(2*lx2)])
  # set pre/post factors [means we know where H(t) is]
  df$loc <- "pre"
  df$loc[(lx2+1):(2*lx2)] <- "post"
  df$loc <- as.factor(df$loc)
  # plot rank by index
  if (plotit){ plot(df$rnk, pch=as.numeric(df$loc)) }
  # Wilcoxon rank test of pre-heaviside vs post-heaviside
  rnktest <- wilcox.test(rnk ~ loc, data=df, alternative="two.sided",
                         exact=FALSE, correct=FALSE,
                         conf.int=TRUE, conf.level=0.99)
  # coin has the same functions, but can do Monte Carlo conf.ints
  # require(coin)
  # coin::wilcox_test(rnk ~ loc, data=df, distribution="approximate",
  #                   conf.int=TRUE, conf.level=0.99)
  # The expected W-statistic for the rank-sum and length of sample 1 (could also do samp. 2)
  myW <- sum(subset(df,loc="pre")$rnk*lx2) - lx2*(lx2+1)/2
  return(invisible(list(data=df, ranktest=rnktest, n=n, Wexpected=myW)))
}

So let’s run some tests and make some figures, shall we?  First, let’s look at how the ranked series depends on the signal-to-noise ratio; this gives us at least a visual sense of what’s being tested, namely the null hypothesis that there is no difference between the two sets.

par(mar=rep(0, 4))
layout(matrix(1:10, 5, 2, byrow=FALSE))
X <- seq(0, 4.9, by=0.5) # coarse resolution, for the figure
sapply(X=X, FUN=function(x) doit(x, plotit=TRUE), simplify=TRUE)

Which gives the following figure:

Figure 1: Indexed ranked series.  From top to bottom, left to right, the signal-to-noise ratio increases in increments of (standard deviation)/2, beginning with zero signal.

and makes clear a few things:

  1. Very small signals are imperceptible to human vision (well, at least to mine).
  2. As the signal grows, the ranked series separates into two distinct sets, so a non-parametric test should yield suitably low p-values.
  3. And, as expected, the ranking doesn’t tell us anything about the magnitude of the offset; so we can detect that a step exists, but can’t say how large it is.

But how well does a rank-sum test identify the signal?  In other words, what are the test results as a function of signal-to-noise ratio (SNR)?

X <- seq(0, 5, by=0.1) # a finer resolution
tmpld <- sapply(X=X, FUN=function(x) doit(x, plotit=FALSE), simplify=TRUE)
# I wish I knew a more elegant solution, but set up a dummy function to extract the statistics
tmpf <- function(nc){
   tmpd <- tmpld[2,nc]$ranktest
   W <- unlist(tmpld[4,nc]$Wexpected)
   w <- tmpd$statistic
   if (is.null(w)){ w <- 0 }
   p <- tmpd$p.value
   if (is.null(p)){ p <- 1 }
   return(data.frame(w=as.numeric(w)/W, p=as.numeric(p)/0.01))
}
tmpd <- t(sapply(X=1:length(X), FUN=tmpf, simplify=TRUE))
##
library(reshape2)
tmpldstat <- melt(data.frame(n=X, w=unlist(tmpd[,1]), p=unlist(tmpd[,2])), id.vars="n")
# plot the results
library(ggplot2)
g <- ggplot(tmpldstat, aes(x=n, y=value, group=variable))
g + geom_step() + geom_point(shape="+") +
  scale_x_continuous("Signal to noise ratio") +
  scale_y_continuous("Normalized statistic") +
  facet_grid(variable~., scales="free_y") +
  theme_bw()

Which gives this figure:

Figure 2: Results of non-parametric rank-sum tests for differences between the pre- and post-onset samples of the Heaviside signal, as a function of signal-to-noise ratio. Top: W statistic (also known as the U statistic), normalized by the expected value. Bottom: p-value, normalized by the confidence limit tolerance (0.01).

and reveals a few other points:

  1. If the signal is greater than 1/10 the size of the noise, the two sets can be categorized as statistically different 95 times out of 100. In other words, the null hypothesis that they are not different may be rejected.
  2. The W-statistic is half the expected maximum value when the signal is equal in size to the noise, and stabilizes to ~0.66 by SNR=4; I’m not too familiar with the meaning of this statistic, but it might be indicating the strength of separation between the two sets visible in Figure 1 (see the aside after this list).
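As an aside, here are some general properties of the rank-sum (Mann-Whitney) statistic that help with its interpretation.  These are textbook facts, independent of the particular normalization plotted above, not something computed from these simulations:

n1 <- n2 <- 1000   # sizes of the pre- and post-step samples (n = 2000 here)
U.null <- n1*n2/2  # expected U when the two sets are indistinguishable
U.max  <- n1*n2    # U when the two sets are completely separated
# U/(n1*n2) estimates P(a post-step value exceeds a pre-step value),
# i.e. the degree of separation visible in the ranked series
c(null=U.null, max=U.max)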

I’m quite amazed at how sensitive this type of test is, even for the small SNRs I’ve given it; this is an encouraging first step.  No pun intended.

R and C: arrays

Let’s start this with some disclosure: I have recently grown to love R (despite its drawbacks), and I first ‘learned’ C about 10 years ago in an undergraduate class (wow, that’s weird to think about).  Apparently, though, I was staring at girls more than at my textbook or the PuTTY terminal, and now I’m paying for it. But who could blame me? I was a mechanical engineer, fresh out of high school, and I thought a computer existed primarily for using Napster and Microshit Word.  (And Napster was free, cool, and awesome.) So now, as a graduate student, I realize I should have cared about that class on C.

Yes, FORTRAN is still extensively used in the geosciences, but reading it is about as enjoyable as looking at a troff document. Fortunately Stack Overflow (SO) exists, and is well ranked in most Google queries, so I can quickly refresh my C skills.  It’s unfortunate that public forums often engender snarky responses, especially to half-baked questions, but SO has managed to keep that effect to a minimum; I think that’s mostly due to its ingenious point-awarding system.

OK, why am I writing all this?  I recently decided that using METEOR, a FIR filter-design code, would be fruitful.  It compiles fine with gcc and there’s an example to test its usage, but that’s not good enough these days: I want to access it directly inside R.  There is a function in the twenty-something-year-old code that uses STDIO’s getchar(), but that poses an input problem for a naive user (me).  So, I don’t know if there’s a solution besides this…

Let’s say I have a C-code rdarr.c:

#include <stdio.h>
#include <stdlib.h>
int rdarr(char *myarr[], int narr[])
{
        int n = narr[0];
        int i;
        printf(">>>> %i\n",n);
        if (n < 1E4) {
                for(i = 0; i < n; i++){
                        printf("\t%s\n", myarr[i]);
                }
        }
        printf("<<<<\n");
        return (EXIT_SUCCESS); /* stdlib */
}

which is compiled using R CMD SHLIB rdarr.c.

If there is a function in R which accesses that C-code, like this:

dyn.load("rdarr.so")
rdarrC <- function(arr, narr){
unlist(.C("rdarr", arr, narr))
}

we can say, for example, rdarrC(c("a","b"), 2) and get:

>>>> 2
        a
        b
<<<<

So the trick will be to use something like this as a string-block reader. But because R passes every argument to .C() as an array (a vector), a scalar arrives in C as a length-one array, and this only works when the zero-index value of the array is referenced (thus the int n = narr[0]; statement in the C code).
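To make the gotcha concrete (my own illustration, reusing the shared object loaded above): .C() does no type checking, so if the length is passed as a plain numeric rather than an integer, the C routine reinterprets the bytes of a double and reads the wrong count.  Hence the as.integer() coercion in the wrapper:

.C("rdarr", c("a", "b"), as.integer(2)) # int on the C side: prints both strings
.C("rdarr", c("a", "b"), 2)             # 2 is a double in R: narr[0] is misread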

It’s a bit annoying, and I don’t know of another way to do it, but the same situation could be a major gotcha for others. While I still consider myself new to all this, there are many exciting directions that integrating C with R can go.

Suggestions for improvement welcomed; but, cheers – to the learning curve!

What to do about publishing data?

There seems to be a scale that’s tipping at the moment: data and code that should be published* often are not.  This may seem like an odd statement to anyone in science, but it’s easy to show that most publications (at least in the geophysics community) present data only as a map, a scatter plot, or in some other form that is inaccessible except as a visualization.  Where is the spirit of reproducibility in such a practice?

Publications that do supplement the paper with data usually provide it as a table in an ASCII flat file.  And yes, that is what I’m arguing for, but I have come across some hideously formatted tables (the worst are those embedded in HTML or a PDF) that nearly make me abandon all hope for the work.

So why am I writing this?  Well, I’m preparing a paper for submission right now and thinking about how I want to publish the data (besides typeset tables or graphs), and I think I’ve come up with a solution for smaller datasets: an R package of datasets on CRAN.

What is CRAN?  The Comprehensive R Archive Network.  It’s a place for users of the R language to publish their code and/or data in a way that’s usable to the R community.  Before the package (or an updated version of it) is accepted, it is checked for consistency; this means the only worry should be whether or not the code and/or data in the package is a pile of useless crap.
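From the reader’s side, getting at such data would then involve nothing more than the usual package mechanics (the package and dataset names below are purely hypothetical):

install.packages("authorlastname2011")   # hypothetical package name
library(authorlastname2011)
data(gpsoffsets)                         # hypothetical dataset name
summary(gpsoffsets)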

For the purposes of data publishing, we would need to consider a few things:

  1. Identification.  How will the user know what to access?
    1. The package would need to be identified somehow, either by a topic, or a journal, or an author/working-group.  I’d choose author since it assigns responsibility to him/her.
    2. The dataset in the package will need to be associated with a published work – perhaps a Digital Object Identifier handle?
  2. Format.
    1. Obviously, once the data are in the package, they may be accessed from within R.
    2. Flat-file tables, for example, are not necessarily well normalized.  So I propose publishing the data with as high a degree of normalization as the author can stand.  This allows for robust subsetting of the data with, for example, sqldf (if SQL commands are your fancy).
  3. Size: What happens when the dataset to be archived is very large?  I propose the package author find a repository that will host the file, and then write a function which accesses the remote file.
  4. Versions.  I’m certainly no expert in version control, but there should be strict rules here.  0.1-0 might be the place to start, meaning ‘zeroth version’.’one dataset’-‘no minor updates/fixes’.
  5. Functions. Internal functions should probably be avoided unless they are necessary, used to reproduce a calculation (and hence a dataset), or are needed for accessing large datasets [see (3)].
  6. Methods and Classes. 
    1. The class should be something specific either to the package or dataset – I would argue for dataset (i.e. publication).
    2. There should be at least print, summary, and print.summary methods for the new class.  If, as proposed in the previous item, the class is specific to the dataset, the methods could easily be customized (a rough sketch follows this list).
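As the rough sketch promised in (6), dataset-specific methods might look something like this (the class name, fields, and citation placeholder are invented for illustration only):

# hypothetical class "gpsoffsets" attached to a data.frame of published results
print.gpsoffsets <- function(x, ...){
  cat("GPS offset estimates accompanying <citation / DOI here>\n")
  cat(nrow(x), "offsets,", ncol(x), "fields\n")
  invisible(x)
}
summary.gpsoffsets <- function(object, ...){
  print(object)
  NextMethod("summary") # fall back on the usual data.frame summary
}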

So I suppose this post has gone on long enough, but the argument seems sound given some fundamental considerations.  Data should be published, and the R/CRAN package versioning system may provide just the right opportunity for doing so easily, reproducibly, and in a way which benefits many more than just the scientific community which has journal access.  It would also force more people to learn R!

 

*I apologize if you’re not granted access to this piece.

Music. Focus.

I have no problem generating focus, but there are some albums I listen to while I’m working that just seem to carry me along, weightless, rhythmic, and hyper-focused.  According to my iTunes play counter, they are:

  1. All Hour Cymbals, Yeasayer
  2. Oracular Spectacular, MGMT
  3. De-Loused in the Comatorium, the Mars Volta
  4. Takk…, Sigur Rós
  5. Security, Peter Gabriel
  6. Two Conversations, the Appleseed Cast
  7. Tao of the Dead, …And You Will Know Us By the Trail of Dead
  8. Feels, Animal Collective
  9. A Weekend in the City, Bloc Party
  10. The Lamb Lies Down on Broadway, Genesis

I’m fortunate to work in an environment that doesn’t restrict things like music, but that just means I end up on Facebook or Google Reader more often than I should.

Collapse pasting in R

How do you take a vector in R and paste the individual elements together?  Easy, a loop.  No, bad! **slap on the wrist**  Here’s an example of why not.

So let’s start by concatenating a few character strings and showing some examples using base::paste:

a <- c("something", "to", "paste")
[1] "something" "to" "paste" 
> paste(a, sep="_") 
[1] "something" "to" "paste"
> paste(a, collapse="_") 
[1] "something_to_paste"
> paste(a, sep="_", collapse="-")
[1] "something-to-paste"

And there we have it: a simple example demonstrating the difference between a separator expression and a collapse expression, without using a hideous loop.
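For comparison, this is roughly what the looped version would look like (exactly the sort of thing to avoid):

> out <- a[1]
> for (i in 2:length(a)) out <- paste(out, a[i], sep="_")
> out
[1] "something_to_paste"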

So finally the moral: read the help pages!

SO makes me happy.

If you write any code, and there is anything you need to find out, you should be trolling Stack Overflow (SO).  It’s astounding what you can find, and how willing people are to help (unless you write a terrible question, then you’re chastised).

Just yesterday I realized my string formatting skills were not advanced enough to parse the horridly formatted NEIC moment tensor solutions (here’s an example).  Hi ho, hi ho, it’s off to SO I go. I wrote a question, and within an hour I had two useful (and different) solutions to work with – this saved me probably a day or two worth of work.