The last time we took this trip from Paris to Bordeaux, it was to begin this project. The trees were all full of round masses of bright green stuff, which we took to be some kind of squirrel’s nest. Anyone reading this who knows better will recognize immediately that this was in fact mistletoe. It’s all over here, today perhaps a bit greener than it was at the end of December. The fields are much wetter. We’ve had loads of rain since we’ve been here. I wonder if the horses get colds from standing in it all day?
Back to work. I’m trying to finish documenting the codebase for RADami today. That has been a ridiculously slow process with little to write about, but I think I can finish it up on the train and get the manuscript revised and resubmitted this week, so I can get on with new RAD work next week.

One realization: there may be a bias in the partitioned RAD visualization the way I’ve been doing it, because the expectation is linear with a slope of 1.0 when (1) the tree likelihood is calculated as the sum of locus log-likelihoods over just the loci used for the partitioned analysis, rather than the global likelihood over all loci; and (2) locus log-likelihoods are assigned based on topological identity with pruned trees that are voted on by each locus. I’m not sure this introduces a bias. It should reduce noise, since there is noise associated with requiring a locus to vote on trees that are topologically identical to one another when pruned down to just the taxa present in the locus, but not topologically identical when all taxa are included. Because the optimization runs only until it stops improving by epsilon, trees that all lie at one point on the likelihood surface may appear to be at different positions. So in this sense the visualization as written biases us toward a tighter-fitting plot; but does it bias us toward a linear relationship? I’ll set this up to run the global likelihood on each tree as well, and see whether that plot is also so nice.
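For what it’s worth, the check I have in mind amounts to regressing one likelihood score against the other and seeing whether the slope sits near 1.0. A minimal sketch of that comparison, using entirely synthetic per-locus log-likelihoods (none of this is RADami code; the epsilon-sized jitter is my stand-in assumption for optimizer noise):

```python
import random

random.seed(1)

n_trees, n_loci = 20, 50
# synthetic per-locus log-likelihoods for each candidate tree
locus_lnL = [[random.gauss(-100, 5) for _ in range(n_loci)]
             for _ in range(n_trees)]

# partitioned score: sum of locus log-likelihoods (the current approach)
partitioned = [sum(row) for row in locus_lnL]

# stand-in "global" score: same sum plus jitter of size epsilon,
# mimicking optimizations stopped once improvement falls below epsilon
epsilon = 0.5
global_lnL = [p + random.uniform(-epsilon, epsilon) for p in partitioned]

# ordinary least-squares slope of global vs. partitioned scores
n = n_trees
mx = sum(partitioned) / n
my = sum(global_lnL) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(partitioned, global_lnL))
         / sum((x - mx) ** 2 for x in partitioned))
print(round(slope, 3))
```

With the real data the interesting question is whether the slope (and the tightness of the scatter) survives when `global_lnL` is replaced by the true all-loci likelihood rather than this noisy copy of the sum.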