4 links tagged with all of: machine-learning + compression
Links
This article describes an experiment in which a summarizer and a generator were co-trained to form a text compression scheme. The model learned to exploit Mandarin characters and punctuation to shrink text while preserving meaning, reaching a compression rate of about 90%.
This article explores an unconventional method for classifying text by leveraging compression algorithms. The author demonstrates how to concatenate labeled documents, compress them, and use the compressed sizes to predict labels for new texts. While the method shows promise, it is computationally expensive and generally underperforms traditional classifiers.
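The concatenate-and-compress idea can be sketched in a few lines of standard-library Python. This is a minimal illustration of the general technique, not the linked article's exact code; the `classify_by_compression` helper and the toy corpora are hypothetical, and `zlib` stands in for whatever compressor the author used. The score for each label is how many *extra* bytes the new text costs when appended to that label's corpus — shared redundancy means a smaller increase.

```python
import zlib

def classify_by_compression(train, text):
    """Pick the label whose corpus compresses the new text most cheaply.

    `train` maps label -> list of example documents (toy data layout).
    Cost = size(compress(corpus + text)) - size(compress(corpus)).
    """
    best_label, best_cost = None, float("inf")
    for label, docs in train.items():
        corpus = "\n".join(docs).encode()
        base = len(zlib.compress(corpus, 9))
        combined = len(zlib.compress(corpus + b"\n" + text.encode(), 9))
        cost = combined - base  # fewer extra bytes = more shared redundancy
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

train = {
    "fruit": ["apples oranges bananas ripe fruit", "fruit salad and fruit juice"],
    "tools": ["hammer saw wrench drill pliers", "screwdriver chisel toolbox"],
}
print(classify_by_compression(train, "fresh fruit and fruit juice"))
```

Note the cost the article mentions: every prediction recompresses each class corpus from scratch, which is why the approach is slow on realistic datasets.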
This article explores how Python 3.14's zstd module enables efficient text classification through incremental compression. It outlines a method in which text is classified by the size of the incremental output produced by class-specific compressors, demonstrating improved speed and accuracy over the concatenate-and-recompress approach.
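The incremental trick — prime one streaming compressor per class once, then charge each new document only the bytes it adds — can be demonstrated with `zlib.compressobj`, whose `copy()` method snapshots the primed state. This is a sketch of the idea rather than the article's code: the article uses Python 3.14's zstd module, and the helper names here (`make_primed`, `extra_bytes`, `classify`) are hypothetical.

```python
import zlib

def make_primed(corpus: bytes):
    """Build a streaming compressor whose window is primed with class text."""
    c = zlib.compressobj(9)
    c.compress(corpus)
    c.flush(zlib.Z_SYNC_FLUSH)  # emit buffered output but keep the stream open
    return c

def extra_bytes(primed, data: bytes) -> int:
    """Bytes needed to encode `data` on top of the primed state."""
    c = primed.copy()  # snapshot so the shared compressor is not disturbed
    return len(c.compress(data) + c.flush(zlib.Z_SYNC_FLUSH))

def classify(primed_by_label, text: str) -> str:
    data = text.encode()
    return min(primed_by_label, key=lambda lb: extra_bytes(primed_by_label[lb], data))

primed = {
    "fruit": make_primed(b"apples oranges bananas ripe fruit, fruit salad, fruit juice"),
    "tools": make_primed(b"hammer saw wrench drill screwdriver pliers chisel toolbox"),
}
print(classify(primed, "fresh fruit and fruit juice"))
```

Because priming happens once and each query only copies compressor state and encodes the query text, this avoids recompressing the training corpora on every prediction — the speedup the article reports for the zstd version.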
The FGFP framework introduces a novel method for compressing deep neural networks using fractional Gaussian filters and adaptive unstructured pruning. By minimizing the number of parameters and leveraging Grünwald-Letnikov fractional derivatives, it achieves significant model size reductions with minimal impact on accuracy, as demonstrated on benchmarks like CIFAR-10 and ImageNet2012.