In recent years, several new sequencing technologies have been introduced with the goal of making it possible to sequence a human genome for $1000: the so-called $1000 genome. The primary applications of these technologies are determining the sequences of novel genomes (de novo sequencing) and identifying sequence variations between the genomes of individuals (re-sequencing). Their low cost has prompted other uses, such as measuring gene activity (gene expression) with RNA-Seq at much higher resolution than previously possible, detecting functional elements in what was once believed to be junk DNA, and assessing the genetic diversity of environmental samples or the abundance of different organisms within a sample.
Modern sequencers produce tens of millions of short reads, or fragments of DNA. This creates large computational challenges for everything from preprocessing to genome assembly to the analysis of genetic variation, gene expression, microRNA, or metagenomic data. These challenges are intimately tied to the specifics of the sequencing platform used and change with every new generation of machinery introduced. Of particular interest are robust statistical methods to analyze sequencing experiments and assess the significance of findings, as well as pooling or multiplexing approaches to increase throughput even further.
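As a concrete illustration of what a short-read preprocessing step looks like, the following is a minimal Python sketch that parses reads in FASTQ format (a common output format of these sequencers) and tallies read lengths. The records, read names, and sequences are invented for illustration; real datasets contain tens of millions of such records.

```python
# Minimal sketch: parse FASTQ records and collect basic read statistics.
# The data below is made up; real files hold millions of reads.
from collections import Counter

def parse_fastq(lines):
    """Yield (read_id, sequence, quality) tuples from FASTQ-formatted lines."""
    it = iter(lines)
    for header in it:
        seq = next(it)
        next(it)            # '+' separator line
        qual = next(it)
        yield header[1:].strip(), seq.strip(), qual.strip()

fastq = """@read1
ACGTACGTACGTACGTACGTACGTACGTACGTACGT
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
@read2
TTGACCAGTTGACCAGTTGACCAGTTGACCAGTTGA
+
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
""".splitlines()

reads = list(parse_fastq(fastq))
lengths = Counter(len(seq) for _, seq, _ in reads)
print(len(reads), dict(lengths))  # → 2 {36: 2}
```

Even this trivial step (reading, validating, and summarizing records) must scale to tens of millions of reads in practice, which is why preprocessing appears alongside assembly and variant analysis in the list of computational challenges above.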