Articles tagged pipelines

  1. Tutorial: Reproducible data analysis pipelines using Snakemake

    In many areas of natural and social science, as well as engineering, data analysis involves a series of transformations: filtering, aggregating, comparing to theoretical models, culminating in the visualization and communication of results. This process is rarely static, however, and components of the analysis pipeline are frequently subject to replacement and refinement, resulting in challenges for reproducing computational results. Describing data analysis as a directed network of transformations has proven useful for translating between human intuition and computer automation. In the past I’ve evangelized extensively for GNU Make, which takes advantage of this graph representation to enable incremental builds and parallelization.

    Snakemake is a next-generation tool based on this concept and designed specifically for bioinformatics and other complex, computationally challenging analyses. I’ve started using Snakemake for my own data analysis projects, and I’ve found it to be a consistent improvement, enabling more complex pipelines with fewer of …
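    The graph idea above can be made concrete with a short Snakefile. This is a minimal sketch, not code from the tutorial: the filenames, scripts, and rule names (`filter`, `plot`) are hypothetical placeholders. Each rule declares its inputs and outputs, and Snakemake infers the dependency graph, rebuilding only what is out of date.

    ```
    # Hypothetical two-step pipeline: filter raw data, then plot it.
    rule all:
        input: "plots/result.png"

    # Produce filtered data from the raw input.
    rule filter:
        input: "data/raw.csv"
        output: "data/filtered.csv"
        shell: "python scripts/filter.py {input} {output}"

    # Produce the final figure from the filtered data.
    rule plot:
        input: "data/filtered.csv"
        output: "plots/result.png"
        shell: "python scripts/plot.py {input} {output}"
    ```

    Running `snakemake --cores 1` would then execute only the rules whose outputs are missing or older than their inputs.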

  2. Take five minutes to simplify your life with Make

    I use GNU Make to automate my data processing pipelines. I’ve written a tutorial for novices on the basics of using Make for reproducible analysis, and I think that everyone who writes more than one script, or runs more than one shell command, to process their data can benefit from automating that process. I’m not alone.

    However, the investment required to learn Make and to convert an entire project can seem daunting to many time-strapped researchers. Even if you aren’t living the dream—rebuilding a paper from raw data with a single invocation of make paper—I still think you can benefit from adding a simple Makefile to your project root.

    When done right, scripting the tedious parts of your job can save you time in the long run. But the time savings aren’t the only reason to do it. For me, a bigger …
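    A "simple Makefile in your project root" can be very small indeed. The sketch below is a hypothetical example, not from the tutorial; the script and data filenames are placeholders. Each target lists its prerequisites, and recipe lines (which must be indented with a tab) say how to rebuild it; `$<` is the first prerequisite and `$@` is the target.

    ```
    # Hypothetical two-step pipeline: filenames and scripts are placeholders.
    all: plots/result.png

    data/filtered.csv: data/raw.csv scripts/filter.py
    	python scripts/filter.py $< $@

    plots/result.png: data/filtered.csv scripts/plot.py
    	python scripts/plot.py $< $@

    .PHONY: all
    ```

    With this in place, a single invocation of `make` rebuilds only the files whose dependencies have changed.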

  3. Tutorial: Reproducible bioinformatics pipelines using GNU Make

    For most projects involving moderate to intensive data analysis, you should consider using Make. Some day I’ll write a post telling you why, but for now check out this post by Zachary M. Jones. If you’re already convinced, or just want to see what it’s all about, read on.

    This post is a clone of a tutorial that I wrote for Titus Brown’s week-long Bioinformatics Workshop at UC Davis’s Bodega Marine Laboratory in February, 2016. For now, the live tutorial lives in a GitHub repository, although I eventually want to merge all of the good parts into the Software Carpentry Make lesson (repository).

    I’m posting this tutorial because I think it’s a good introduction to the analysis pipeline approach I have been slowly adopting over the last several years. This approach is even more deeply enshrined in a project template that I …

  4. PyMake I: Another GNU Make clone

    (Edit 1): This is the first of two posts about my program PyMake. I’ll post the link to Part II here when I’ve written it. While I still agree with many of the views expressed in this piece, I have changed my thinking on Makefiles.

    (Edit 2): I’ll write a new post about the topic when I find the time. In the meantime, I’ve written a tutorial on using Make for reproducible data analysis.

    I am an aspiring but unskilled (not yet skilled?) computer geek. You can observe this for yourself by watching me fumble my way through vim configuration, multi-threading/processing in Python, and git merges.

    Rarely do I actually feel like my products are worth sharing with the wider world. The only reason I have a GitHub account is personal convenience and absolute confidence that no one else will ever look …