Many applications of universal compression involve sources such as text, speech, and images, whose alphabets are extremely large. In this work we propose a conceptual framework in which a large-alphabet memoryless source is decomposed into multiple 'as independent as possible' sources over much smaller alphabets. This slightly increases the average codeword length, since the compressed symbols are no longer perfectly independent, but at the same time significantly reduces the overhead redundancy caused by the large alphabet of the observed source. Our proposed algorithm, based on a generalization of Binary Independent Component Analysis, is shown to efficiently find the optimal trade-off so that the overall compression size is minimal. We demonstrate our framework on memoryless draws from a variety of natural languages and show that the redundancy we achieve is markedly smaller than that of most commonly used methods.
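As a rough illustration of the trade-off described above, the sketch below evaluates the total per-symbol cost (entropy plus a universal-coding redundancy term, using the classical (k-1)/2 · log(n)/n approximation for an i.i.d. k-ary source) of coding a large-alphabet source directly versus coding a fixed decomposition into binary components independently. This is not the paper's algorithm: the generalized BICA search for the best invertible decomposition is not implemented here, the toy distribution and all variable names are illustrative, and the binary expansion of each symbol is used as an assumed, fixed decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    """Shannon entropy, in bits, of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy i.i.d. source over a k = 2**m symbol alphabet; each symbol is
# viewed as m bits, i.e. a fixed decomposition into m binary components.
m = 8
k = 2 ** m
p = rng.dirichlet(np.ones(k))   # illustrative random joint distribution
p_cube = p.reshape((2,) * m)    # one axis per binary component

H_joint = entropy(p)

# Sum of the marginal entropies of the m binary components.
H_marginals = sum(
    entropy(p_cube.sum(axis=tuple(j for j in range(m) if j != i)))
    for i in range(m)
)

# Extra average codeword length paid for coding the components
# independently: the multi-information sum_i H(X_i) - H(X), which
# vanishes only when the components are perfectly independent.
dependence_penalty = H_marginals - H_joint

# Assumed redundancy model: (k-1)/2 * log2(n)/n bits per symbol for
# universal coding of an i.i.d. k-ary source from n samples.
n = 10_000
red_direct = (k - 1) / 2 * np.log2(n) / n
red_split = m * (2 - 1) / 2 * np.log2(n) / n  # m binary sources

print(f"dependence penalty : {dependence_penalty:.4f} bits/symbol")
print(f"direct coding      : {H_joint + red_direct:.4f} bits/symbol")
print(f"decomposed coding  : {H_marginals + red_split:.4f} bits/symbol")
```

Which of the two totals is smaller depends on how dependent the components are under the chosen decomposition; the framework in the abstract searches for the decomposition that minimizes this total, rather than fixing it in advance as this sketch does.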