Sciweavers » Diagonalization
412 search results - page 27 / 83
JMLR 2010
Exploiting Feature Covariance in High-Dimensional Online Learning
Some online algorithms for linear classification model the uncertainty in their weights over the course of learning. Modeling the full covariance structure of the weights can prov...
Justin Ma, Alex Kulesza, Mark Dredze, Koby Crammer...
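The covariance-modeling idea in the abstract above can be illustrated with a minimal sketch of an AROW-style online update that tracks only a diagonal covariance over the weights. This is a hedged illustration, not the paper's exact algorithm; the function name and the regularizer `r` are assumptions.

```python
# Sketch of an AROW-style online update with a *diagonal* covariance
# approximation. Illustrative only: the paper studies richer covariance
# structures, and the regularizer r here is an assumed default.

def arow_diag_update(mu, sigma, x, y, r=1.0):
    """One binary-classification update. mu (mean weights), sigma
    (per-weight variances), and x (features) are equal-length lists;
    y is +1 or -1. Returns updated (mu, sigma)."""
    margin = y * sum(m * xi for m, xi in zip(mu, x))
    conf = sum(s * xi * xi for s, xi in zip(sigma, x))  # x^T Sigma x
    beta = 1.0 / (conf + r)
    alpha = max(0.0, 1.0 - margin) * beta  # hinge-style step size
    mu = [m + alpha * y * s * xi for m, s, xi in zip(mu, sigma, x)]
    # shrink variance (i.e. grow confidence) along observed directions
    sigma = [s - beta * (s * xi) ** 2 for s, xi in zip(sigma, x)]
    return mu, sigma
```

Keeping only the diagonal costs O(d) per update instead of the O(d^2) required for a full covariance matrix, which is the trade-off such high-dimensional online learners must navigate.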
ICASSP 2011 (IEEE)
Simultaneous SDR optimality via a joint matrix decomposition
This work considers the joint source-channel problem of transmitting a Gaussian source over a two-user multiple-input multiple-output (MIMO) broadcast channel. We show th...
Yuval Kochman, Anatoly Khina, Uri Erez
ICASSP 2011 (IEEE)
Preserve ordering property of generated LSPs for minimum generation error training in HMM-based speech synthesis
The ordering property is an important property of LSPs and is closely connected with the naturalness of reconstructed speech. When LSPs are adopted as the spectrum feature in HMM-based parametr...
Ming Lei, Zhen-Hua Ling, Li-Rong Dai
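The ordering property referred to in this abstract is the standard stability condition on line spectral pairs: the frequencies must be strictly increasing inside (0, π). A minimal check, as a sketch (the function name is illustrative, not from the paper):

```python
import math

def lsp_ordered(lsp):
    """Return True iff the LSP frequencies are strictly increasing and
    lie inside (0, pi) - the condition under which the reconstructed
    synthesis filter is stable."""
    return all(0.0 < a < b < math.pi for a, b in zip(lsp, lsp[1:]))
```

For example, `lsp_ordered([0.3, 0.8, 1.5, 2.6])` holds, while a sequence with a swapped pair such as `[0.3, 1.5, 0.8]` violates the property.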
CISS 2008 (IEEE)
Two experimental pearls in Costas arrays
The results of two experiments on Costas arrays are presented, for which a theoretical explanation is still not available: the number of dots on the main diagonal of exponential Wel...
Konstantinos Drakakis, Rod Gow
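For context: a Costas array is a permutation whose dot pattern has all pairwise displacement vectors distinct, and the exponential Welch construction mentioned in the abstract generates such arrays from a primitive root of a prime. A hedged sketch of both (function names are illustrative):

```python
def is_costas(perm):
    """A permutation p of 0..n-1 is a Costas array iff all displacement
    vectors (j - i, p[j] - p[i]) for i < j are distinct."""
    n = len(perm)
    if sorted(perm) != list(range(n)):
        return False  # not a permutation
    vecs = {(j - i, perm[j] - perm[i])
            for i in range(n) for j in range(i + 1, n)}
    return len(vecs) == n * (n - 1) // 2

def exp_welch(p, g):
    """Exponential Welch construction: (g^1, ..., g^(p-1)) mod p, for a
    prime p and a primitive root g mod p, is a Costas array of order
    p - 1 (values shifted here to 0-based)."""
    return [pow(g, i, p) - 1 for i in range(1, p)]
```

For instance, `exp_welch(5, 2)` yields `[1, 3, 2, 0]`, an order-4 Costas array, whereas the identity permutation `[0, 1, 2, 3]` fails the distinct-displacement test.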
MFCS 2004 (Springer)
All Superlinear Inverse Schemes Are coNP-Hard
How hard is it to invert NP-problems? We show that all superlinearly certified inverses of NP problems are coNP-hard. To do so, we develop a novel proof technique that builds dia...
Edith Hemaspaandra, Lane A. Hemaspaandra, Harald H...