Using Automatic Differentiation with Sparse Coding

Team: 26

School: Los Alamos High

Area of Science: Statistics/Machine Learning


Proposal: Sparse coding is a way of breaking a signal down into individual components, which can then be used for classification. Given a dataset of unlabeled images of a specific type (meaning no information is given about each image), a sparse encoder can compress these images into a sparse representation (an array of mostly zeros) in such a way that they can be decompressed to recover approximately the original image. This is significant because it is useful for extracting information from unlabeled data.

Sparse coding works by repeatedly updating the output according to an algorithm until the result describes the input in a latent space. There are multiple algorithms that can be used to minimize the difference between the input and the decompressed latent representation. Generally these algorithms iterate two steps: computing a residual (the difference between the input and the current decompressed sparse representation), and then updating the sparse representation based on that residual, driving it toward sparsity along the way.
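
To make the two-step loop concrete, here is a minimal sketch of one standard iterative scheme of this kind (ISTA, the iterative shrinkage-thresholding algorithm), written in NumPy. The function and variable names (sparse_code_ista, soft_threshold, D for the dictionary) are illustrative, not taken from any particular library:

    import numpy as np

    def soft_threshold(v, t):
        # Shrink each entry toward zero; this is the step that
        # drives the representation toward sparsity.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def sparse_code_ista(x, D, lam=0.1, n_iters=100):
        """Estimate a sparse code z such that D @ z approximately equals x."""
        L = np.linalg.norm(D, 2) ** 2   # step-size bound from the dictionary
        z = np.zeros(D.shape[1])
        for _ in range(n_iters):
            residual = x - D @ z        # difference: input vs. reconstruction
            z = soft_threshold(z + (D.T @ residual) / L, lam / L)
        return z

Each pass computes the residual and then nudges the code in the direction that reduces it, with the soft threshold zeroing out small entries so the result stays sparse.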

This process could be made much more efficient by applying automatic differentiation, which is what I intend to do in this project. This could potentially be useful for the many problems that sparse coding is already applied to. I will use Python, NumPy, PyTorch or TensorFlow, Matplotlib, and Jupyter notebooks, or Julia.
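
As a rough illustration of the idea, assuming PyTorch is the chosen framework: instead of hand-deriving the update rule, one can write down the reconstruction loss plus a sparsity penalty and let autograd compute the gradient. The function name and hyperparameters below are hypothetical placeholders:

    import torch

    def sparse_code_autograd(x, D, lam=0.1, n_iters=200, lr=0.1):
        """Find a sparse code for x by gradient descent, with the
        gradient supplied by automatic differentiation."""
        z = torch.zeros(D.shape[1], requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(n_iters):
            opt.zero_grad()
            recon = D @ z                 # decompress the latent code
            # reconstruction error plus L1 penalty encouraging sparsity
            loss = ((x - recon) ** 2).sum() + lam * z.abs().sum()
            loss.backward()               # autodiff computes d(loss)/dz
            opt.step()
        return z.detach()

The appeal of this formulation is flexibility: changing the loss, the penalty, or the optimizer requires no new derivation, since the framework differentiates whatever objective is written down.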


Team Members:

  Robert Strauss

Sponsoring Teacher: NA
