Autoencoders are an interesting class of models with the simple goal of reconstructing their input after mapping it to a latent code. In this talk, I'll present some of our recent work on autoencoders. First, I'll describe MusicVAE, which maps sequences of musical notes into a latent space that facilitates surprisingly useful semantic manipulation of the data. MusicVAE's success stems from a decoder architecture that explicitly models hierarchical structure in sequences. Second, I'll present ACAI, an adversarial regularizer that encourages an autoencoder to produce realistic output when fed a linear mixture of two latent codes. We study the connection between this ability to "interpolate" and downstream representation learning performance. While experimenting with ACAI, we found that the old-fashioned denoising autoencoder exhibited surprisingly strong representation learning performance. To follow up on this, I'll present some preliminary work using "virtual adversarial noise", which is explicitly constructed to shift the autoencoder's input away from the learned data manifold.
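
The latent-mixing step at the heart of ACAI can be sketched in a few lines. This is a minimal toy illustration, not ACAI itself: the `encode`/`decode` maps here are arbitrary linear functions standing in for trained networks, and the adversarial regularizer that makes the decoded mixtures realistic is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained encoder/decoder: 32-dim inputs, 8-dim codes.
W_enc = rng.standard_normal((8, 32)) * 0.1
W_dec = rng.standard_normal((32, 8)) * 0.1

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def interpolate(x1, x2, alpha):
    """Decode a linear mixture of the two inputs' latent codes."""
    z = alpha * encode(x1) + (1.0 - alpha) * encode(x2)
    return decode(z)

x1 = rng.standard_normal(32)
x2 = rng.standard_normal(32)
midpoint = interpolate(x1, x2, 0.5)  # "halfway between" x1 and x2 in code space
```

In ACAI, a critic network is additionally trained to predict the mixing coefficient `alpha` from the decoded output, and the autoencoder is regularized to fool it, pushing decoded mixtures toward the data manifold.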