It is common in classification methods to first place the data in a vector space and then learn decision boundaries. We propose reversing that process: for fixed decision boundaries, we "learn" the locations of the data. This way we (i) do not need a metric (or any stronger structure), since pairwise dissimilarities suffice; and (ii) produce low-dimensional embeddings that can be analyzed visually. We achieve this by combining an entropy-based embedding method with an entropy-based version of semi-supervised logistic regression. We present results for clustering and semi-supervised classification.
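To make the reversal concrete, the following is a minimal sketch (not the authors' implementation) of the core idea under stated assumptions: an SNE-style, entropy-based embedding objective is built from a pairwise dissimilarity matrix, and a logistic loss on the labeled points is measured against a boundary that stays fixed while the 2-D point locations are optimized. All names and parameter choices below (e.g., `embed_with_fixed_boundary`, the boundary `w`, the trade-off weight `lam`) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbor_probs(sq_dists, eps=1e-12):
    """Symmetric SNE-style neighbor probabilities from squared distances."""
    a = np.exp(-sq_dists)
    np.fill_diagonal(a, 0.0)
    return a / (a.sum() + eps)

def embed_with_fixed_boundary(D, labels, w=np.array([1.0, 0.0]), b=0.0,
                              lam=1.0, lr=0.1, n_iter=500):
    """D: (n, n) pairwise dissimilarities; labels: -1/+1 for labeled
    points, 0 for unlabeled. Returns 2-D coordinates Y optimized so that
    the embedding matches D while labeled points respect the FIXED
    boundary w, b. Hypothetical interface, for illustration only."""
    n = D.shape[0]
    P = neighbor_probs(D ** 2)              # target probabilities from the data
    Y = rng.normal(scale=1e-2, size=(n, 2))  # the "learned" point locations
    labeled = labels != 0
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]  # pairwise differences, (n, n, 2)
        Q = neighbor_probs((diff ** 2).sum(-1))
        # symmetric-SNE gradient: move points so Q matches P
        grad = 4.0 * ((P - Q)[:, :, None] * diff).sum(axis=1)
        # logistic loss on labeled points against the fixed boundary
        if labeled.any():
            margin = labels[labeled] * (Y[labeled] @ w + b)
            g = -labels[labeled] / (1.0 + np.exp(margin))  # d(loss)/d(margin)
            grad[labeled] += lam * g[:, None] * w[None, :]
        Y -= lr * grad                       # gradient step on locations only
    return Y
```

Note the design point this sketch highlights: the classifier parameters `w`, `b` never change; only the coordinates `Y` are updated, so the "decision boundary" is an axis of the visualization itself.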