1. # The Dirichlet-Multinomial in PyMC3

### Modeling Overdispersion in Compositional Count Data

published:
category:
tags:

Having just spent a few too many hours working on the Dirichlet-multinomial distribution in PyMC3, I thought I'd convert the demo notebook I also contributed into a blog post.

This example (exported and minimally edited from a Jupyter Notebook) demonstrates the use of a Dirichlet mixture of multinomials (a.k.a. Dirichlet-multinomial or DM) to model categorical count data. Models like this one are important in a variety of areas, including natural language processing, ecology, bioinformatics, and more.

The Dirichlet-multinomial can be understood as draws from a Multinomial distribution where each sample has a slightly different probability vector, which is itself drawn from a common Dirichlet distribution. This contrasts with the Multinomial distribution, which assumes that all observations arise from a single fixed probability vector. This enables the Dirichlet-multinomial to accommodate more variable (a.k.a. over-dispersed) count data than the Multinomial.
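That generative story can be sketched in a few lines of NumPy (an illustrative simulation, not the PyMC3 model from the notebook; the concentration values and sizes here are made up): each observation draws its own probability vector from a shared Dirichlet before drawing counts.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

alpha = np.array([2.0, 5.0, 3.0])  # shared Dirichlet concentration (illustrative)
n_trials = 100                     # total count per observation
n_obs = 4                          # number of observations

# Each observation gets its own probability vector from the Dirichlet...
p = rng.dirichlet(alpha, size=n_obs)

# ...and then draws counts from a Multinomial with that vector.
counts = np.vstack([rng.multinomial(n_trials, p_i) for p_i in p])

# Every row sums to n_trials, but the category proportions vary between
# rows more than a single fixed-p Multinomial would allow: over-dispersion.
print(counts.sum(axis=1))
```

Fixing `p` to a single vector for all rows recovers the plain Multinomial, which is exactly the restriction the DM relaxes.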

Other examples of over-dispersed count distributions are the …

2. # Things I'm Glad I Learned

### Skills, concepts, techniques, and models

published:
edited: January 21, 2021, 12:00
category:
tags:

WARNING: This post was written with haste and therefore contains all kinds of typos, spelling errors, grammatical issues, and delusions of grandeur, wisdom, and writing ability.

This post is intended as a living document—a gratitude journal of sorts—of some things that I'm glad I learned. I expect many of the items on this list will be relevant to computational biology, but that may change in the future.

The big idea is that for every item on this list I am (A) glad that someone introduced me to it, and (B) think more people should know about it. This post is my chance to "pay it backwards", as it were; maybe someone else will be grateful for something they find for the first time on this list.

It may also double as an inspiration list for future posts.

My goal is to write a small blurb for each item …

3. # Changes to the gut microbiome resulting from acarbose treatment are associated with increased longevity in mice

### New preprint posted to the bioRxiv

published:
edited: June 20, 2019, 15:00
category:
tags:

I'm excited to announce that we've posted a preprint of our latest manuscript to the bioRxiv, as well as submitted it for peer review to the open access journal Microbiome. I'll update this note if and when it gets accepted.

Edit 2019-06-20: Our submission to Microbiome was transferred to BMC Microbiology, and was finally accepted (more than a full year in review!). Check it out in print.

These days it seems like the only research more over-hyped than "microbiome" is longevity-enhancement. It is therefore with some trepidation that I have released into this world of buzz the first chapter of my dissertation, titled: "Changes in the gut microbiota and fermentation products associated with enhanced longevity in acarbose-treated mice."

Previous work (done by my co-authors on this paper as well as others) has conclusively demonstrated that treatment with the anti-diabetic drug acarbose substantially increases lifespan in mice (also). The magnitude of …

4. # Tutorial: Reproducible data analysis pipelines using Snakemake

published:
category:
tags:

In many areas of natural and social science, as well as engineering, data analysis involves a series of transformations: filtering, aggregating, and comparing to theoretical models, culminating in the visualization and communication of results. This process is rarely static, however; components of the analysis pipeline are frequently replaced and refined, which makes computational results hard to reproduce. Describing data analysis as a directed network of transformations has proven useful for translating between human intuition and computer automation. In the past I've evangelized extensively for GNU Make, which takes advantage of this graph representation to enable incremental builds and parallelization.

Snakemake is a next-generation tool based on this concept and designed specifically for bioinformatics and other complex, computationally challenging analyses. I've started using Snakemake for my own data analysis projects, and I've found it to be a consistent improvement, enabling more complex pipelines with fewer of the "hacks" that …
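To illustrate the graph-of-transformations idea, a Snakefile declares each step as a rule with named inputs and outputs, and Snakemake infers the dependency graph from the file names (the paths and scripts below are hypothetical, not from the tutorial):

```snakemake
# Hypothetical two-step pipeline: filter the raw table, then plot it.
rule all:
    input:
        "results/plot.png"

rule filter:
    input:
        "data/raw.tsv"
    output:
        "results/filtered.tsv"
    shell:
        "awk 'NR == 1 || $3 > 0' {input} > {output}"

rule plot:
    input:
        "results/filtered.tsv"
    output:
        "results/plot.png"
    shell:
        "python scripts/plot.py {input} {output}"
```

Running `snakemake --cores 4` rebuilds only the targets whose inputs have changed, in parallel where the graph allows—the same incremental-build behavior Make provides, with filename patterns in place of Make's tab-indented recipes.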

5. # Teaching Python by the (Note)Book

published:
category:
tags:

tl;dr: I tried out a modified Python lesson and I think it was successful at balancing learner motivation with teaching foundational (and sometimes boring) concepts.

In many ways, teaching Python to scientists is easier than just about every other audience. The learning objective is clear: write code to make my science more accurate, more efficient, and more impactful. The motivation is apparent: data is increasingly plentiful and increasingly complex. The learners are both engaged and prepared to put in the effort required to develop new skills.

But, despite all of the advantages, teaching anybody to program is hard.

In my experience, one of the most challenging trade-offs for lesson planners is between motivating the material and teaching a mental model for code execution. For example, scientists are easily motivated by simple data munging and plotting using pandas and matplotlib; these are features of the Python ecosystem that can convince …

6. # Take five minutes to simplify your life with Make

published:
edited: November 21, 2017, 09:30
category:
tags:

WARNING: Because of the Markdown rendering of this blog, tab characters have been replaced with 4 spaces in code blocks. For this reason, the makefile code will not work when copied directly from the post. Instead, you must first replace all 4-space indents with a tab character.

I use GNU Make to automate my data processing pipelines. I've written a tutorial for novices on the basics of using Make for reproducible analysis, and I think that everyone who writes more than one script or runs more than one shell command to process their data can benefit from automating that process. I'm not alone.

However, the investment required to learn Make and to convert an entire project can seem daunting to many time-strapped researchers. Even if you aren't living the dream—rebuilding a paper from raw data with a single invocation of make paper—I still think you can benefit …
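A minimal makefile of the kind the post has in mind might look like this (the file names and scripts are hypothetical, and—per the warning above—each recipe line must begin with a real tab character):

```makefile
# Rebuild the cleaned table whenever the raw data or the script changes.
results/clean.tsv: data/raw.tsv scripts/clean.py
	python scripts/clean.py data/raw.tsv > results/clean.tsv

# `make figure` draws the plot, rebuilding clean.tsv first if it is stale.
figure: results/clean.tsv
	python scripts/plot.py results/clean.tsv figs/plot.png

.PHONY: figure
```

Even this tiny file buys you incremental rebuilds: `make figure` re-runs only the steps whose prerequisites are newer than their targets.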

7. # Software carpentry instructor training

### A survival analysis in python

published:
edited: May 31, 2016, 12:00
category:
tags:

Edit (2016-05-31): Added a hypothesis for why my results differ somewhat from Erin Becker's. Briefly: I removed individuals who taught before they were officially certified.

A couple weeks ago, Greg Wilson asked the Software Carpentry community for feedback on a collection of data about the organization's instructors, when they were certified, and when they taught. Having dabbled in survival analysis, I was excited to explore the data within that context.

Survival analysis is focused on time-to-event data, for example time from birth until death, but also time to failure of engineered systems, or in this case, time from instructor certification to first teaching a workshop. The language is somewhat morbid, but helps with talking precisely about models that can easily be applied to a variety of data, only sometimes involving death or failure. The power of modern survival analysis is the ability to include results from subjects who have not …
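The workhorse for such right-censored data is the Kaplan-Meier product-limit estimator: at each observed event time, multiply in the fraction of at-risk subjects who did not experience the event. A minimal NumPy sketch (an illustration of the estimator, not the code from this analysis):

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Return (event_times, survival) for right-censored time-to-event data.

    durations: time until the event or until censoring, per subject.
    observed:  1 if the event was observed, 0 if the subject was censored.
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    event_times = np.unique(durations[observed])
    survival = []
    s = 1.0
    for t in event_times:
        at_risk = np.sum(durations >= t)              # still under observation at t
        events = np.sum((durations == t) & observed)  # events exactly at t
        s *= 1.0 - events / at_risk                   # product-limit step
        survival.append(s)
    return event_times, np.array(survival)

# Four subjects: events at t=2 and t=5; censored at t=3 and t=6.
times, surv = kaplan_meier([2, 3, 5, 6], [1, 0, 1, 0])
# Survival drops to 0.75 at t=2, then to 0.375 at t=5: the censored
# subjects contribute to the at-risk counts without counting as events.
```

Note how the subject censored at t=3 still inflates the denominator at t=2 but not at t=5—this is exactly the partial information that discarding censored subjects would throw away.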

8. # Tutorial: Reproducible bioinformatics pipelines using GNU Make

published:
edited: November 21, 2017, 09:30
category:
tags:

WARNING: Because of the Markdown rendering of this blog, tab characters have been replaced with 4 spaces in code blocks. For this reason, the makefile code will not work when copied directly from the post. Instead, you must first replace all 4-space indents with a tab character.

For most projects with moderate to intense data analysis you should consider using Make. Some day I'll write a post telling you why, but for now check out this post by Zachary M. Jones. If you're already convinced, or just want to see what it's all about, read on.

This post is the clone of a tutorial that I wrote for Titus Brown's week-long Bioinformatics Workshop at UC Davis's Bodega Marine Laboratory in February, 2016. For now, the live tutorial lives in a Github repository, although I eventually want to merge all of the good parts into the Software Carpentry Make lesson …

9. # Not all carbs are bad

### Getting enough fiber might do more than keep you regular.

published:
category:
tags:

This brief post was written as a popular science article for a class on science communication. My own research is currently focused on exactly this topic: describing microbial community dynamics associated with acarbose treatment and the production of butyrate.

A quick internet search for “low-carb diets” comes back filled with promises to make you sleek, spry, and slim just by cutting out this entire category of foods. The popularity of these diets shouldn't surprise you. Recent research has implicated overconsumption of sugars, the simplest form of carbohydrates, and starchy foods, which can quickly be broken down into sugars, in an increased risk of heart disease, obesity, and even some forms of dementia. Americans have responded quickly, with 50% trying to limit their intake of sugars and carbohydrates according to a 2014 survey. That same survey found only 74% of respondents believe that a healthy diet can include moderate amounts of …

10. # First time teaching Python to novices

published:
edited: August 14, 2015, 10:00
category:
tags:

This July I co-instructed a Software Carpentry workshop at Stanford University with Jennifer Shelton, targeted at researchers with genomic or evolutionary datasets. Jennifer taught the shell (Bash) and version control with Git, while I taught the general-purpose programming language Python. I've been aware of the organization, which teaches software development and computational methods to scientists, since attending a workshop in 2012. Since then I've served as a helper at one workshop (troubleshooting individual learners' problems and helping catch them up with the rest of the class), and gone through the "accelerated", two-day instructor training at Michigan State University. After the Stanford workshop, I took part in a new-instructor debriefing on August 4th, during which I mentioned that I had to greatly pare down the community-written lesson plan, python-novice-inflammation, to fit into the two half-day sessions we allotted it.

Karin and Tiffany, who were running the debriefing, asked me to send …

Page 1 / 2 »