Background: Interpreting the results of high-throughput experiments, such as those obtained from DNA microarrays, is often a time-consuming task due to the high number of data-po...
Felix Kokocinski, Nicolas Delhomme, Gunnar Wrobel,...
We present a framework for segmenting and storing filament networks from scalar volume data. Filament structures are commonly found in data generated using high-throughput microsc...
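The abstract above is truncated before the framework's details; purely as illustrative background, a common filament-extraction pipeline binarizes the scalar volume and thins the foreground to one-voxel-wide centerlines. A minimal sketch, assuming scikit-image is available (the threshold and function names are ours, not the paper's):

    import numpy as np
    from skimage.morphology import skeletonize_3d

    def extract_filaments(volume, threshold):
        # Illustrative only; not the paper's framework.
        # Binarize the scalar volume, then thin the foreground to
        # one-voxel-wide centerlines tracing the filament network.
        mask = volume > threshold
        return skeletonize_3d(mask)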
Clustering is the problem of identifying the distribution of patterns and intrinsic correlations in large data sets by partitioning the data points into similarity classes. This p...
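As a concrete instance of the clustering problem just described, k-means partitions points into k similarity classes by alternating nearest-centroid assignment with centroid updates. A minimal NumPy sketch (generic textbook method, not any of these papers' algorithms):

    import numpy as np

    def kmeans(points, k, iters=100, seed=0):
        # Illustrative only. points: (n, d) array of data points.
        rng = np.random.default_rng(seed)
        centroids = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest centroid (similarity class).
            dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each centroid to the mean of its assigned points.
            new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centroids[j] for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        return labels, centroids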
Phylogenetic analysis is a central tool in studies of comparative genomics. When a new region of DNA is isolated and sequenced, researchers are often forced to throw away months ...
Jesse Mecham, Mark J. Clement, Quinn Snell, Todd F...
The support vector machine (SVM) is a powerful technique for data classification. Despite its good theoretical foundations and high classification accuracy, the standard SVM is not suitabl...
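This abstract is cut off before the authors' solution; as generic background, one standard way to scale a linear SVM to large data sets is stochastic subgradient descent on the hinge loss (the Pegasos scheme), which touches one example at a time so memory use stays independent of the data set size. A hedged NumPy sketch:

    import numpy as np

    def sgd_linear_svm(X, y, lam=1e-4, epochs=5, seed=0):
        # Illustrative Pegasos-style trainer, not the paper's method.
        # X: (n, d) data matrix; y: labels in {-1, +1}.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w, t = np.zeros(d), 0
        for _ in range(epochs):
            for i in rng.permutation(n):
                t += 1
                eta = 1.0 / (lam * t)      # decreasing step size
                margin = y[i] * X[i].dot(w)
                w *= (1 - eta * lam)       # shrink: regularization term
                if margin < 1:             # hinge loss is active
                    w += eta * y[i] * X[i]
        return w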
One of the first motivations for using grids comes from applications managing large data sets in fields such as high energy physics or life sciences. To improve the global throughput...
Background: In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput t...
We develop an approach to sparse representation of Gaussian Process (GP) models in order to overcome the limitations that large data sets impose on GPs. The method is based on a...
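The description of the method is truncated; as generic background on sparse GP representations, the subset-of-regressors approximation routes all kernel algebra through m << n basis (inducing) points, cutting the cost from O(n^3) to O(n m^2). An illustrative NumPy sketch (kernel choice and variable names are ours, not the paper's):

    import numpy as np

    def rbf(A, B, ell=1.0):
        # Squared-exponential kernel matrix between row sets A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell**2)

    def sor_gp_predict(X, y, Z, Xs, noise=0.1, ell=1.0):
        # Subset-of-regressors predictive mean at test points Xs,
        # using m inducing points Z instead of all n training points X.
        Kzz = rbf(Z, Z, ell)
        Kzx = rbf(Z, X, ell)
        Ksz = rbf(Xs, Z, ell)
        A = noise**2 * Kzz + Kzx @ Kzx.T   # (m, m) system
        return Ksz @ np.linalg.solve(A, Kzx @ y)

With m fixed, the cost of forming and solving the system grows only linearly in n, which is the point of the sparse representation.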
Interactive rendering of large data sets requires fast algorithms and rapid hardware acceleration. Both can be improved, but neither by itself ensures interactive response times. If a ...
Ongoing changes in computer performance are affecting the efficiency of string sorting algorithms. The size of main memory in typical computers continues to grow, but memory acce...
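This abstract breaks off before the paper's analysis; as one reference point for cache effects in string sorting, MSD radix sort inspects a single character per pass, so deep (cache-unfriendly) accesses into each string are deferred until its bucket has shrunk. An illustrative Python sketch:

    def msd_radix_sort(strings, depth=0):
        # Illustrative MSD radix sort, not the paper's algorithm.
        # Buckets strings by the character at position `depth`; a string's
        # tail bytes are only touched once its bucket is small.
        if len(strings) <= 1:
            return strings
        done, buckets = [], {}
        for s in strings:
            if len(s) == depth:
                done.append(s)           # string exhausted: sorts first
            else:
                buckets.setdefault(s[depth], []).append(s)
        out = done
        for ch in sorted(buckets):
            out += msd_radix_sort(buckets[ch], depth + 1)
        return out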