Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization


Navonil Majumder1*, Chia-Yu Hung1*, Deepanway Ghosal1*,

Wei-Ning Hsu2, Rada Mihalcea3, Soujanya Poria1

1DeCLaRe Lab, Singapore University of Technology and Design, Singapore

2Independent Contributor, USA

3University of Michigan, USA


*equal contribution


Abstract

Generative multimodal content is increasingly prevalent in content creation, as it allows artists and media professionals to create pre-production mockups by quickly bringing their ideas to life. Generating audio from text prompts is an important part of such workflows in the music and film industries. Many recent diffusion-based text-to-audio models focus on training increasingly sophisticated diffusion models on large datasets of prompt-audio pairs. These models do not explicitly attend to the presence of concepts or events, or to their temporal ordering, in the output audio with respect to the input prompt. Our hypothesis is that focusing on these aspects of audio generation could improve performance when data is limited. Accordingly, in this work, we use the existing text-to-audio model Tango to synthetically create a preference dataset in which each prompt has a winner audio output and several loser audio outputs for the diffusion model to learn from. The loser outputs, in theory, have some concepts from the prompt missing or in an incorrect order. We fine-tune the publicly available Tango text-to-audio model with the diffusion-DPO (direct preference optimization) loss on our preference dataset and show that this yields improved audio output over Tango and AudioLDM2 on both automatic and manual evaluation metrics.
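For context, the diffusion-DPO loss referenced above follows the Diffusion-DPO formulation, which replaces the usual DPO log-likelihood ratio with differences in denoising errors between the fine-tuned model and a frozen reference model on the winner and loser latents. The sketch below is a minimal PyTorch illustration under that assumption; the argument names, the `beta` value, and the folding of the per-timestep weighting into `beta` are our simplifications, not details taken from the paper.

```python
import torch.nn.functional as F

def diffusion_dpo_loss(err_w_theta, err_w_ref, err_l_theta, err_l_ref, beta=2000.0):
    """Diffusion-DPO loss over a batch of (winner, loser) audio latents.

    Each argument is the mean squared denoising error ||eps - eps_model(x_t, t)||^2
    of one model (fine-tuned theta or frozen reference) on one branch of the pair.
    For brevity, the per-timestep weighting of the original formulation is folded
    into `beta`, whose value here is purely illustrative.
    """
    # How much the fine-tuned model improves over the reference on the winner...
    winner_gap = err_w_theta - err_w_ref
    # ...and on the loser.
    loser_gap = err_l_theta - err_l_ref
    # Preference logit: reward improving on the winner more than on the loser.
    logits = -beta * (winner_gap - loser_gap)
    # Standard DPO objective: -log sigmoid(logits), averaged over the batch.
    return -F.logsigmoid(logits).mean()
```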

Salient Features

  • Tango 2 is aimed at improved text-to-audio alignment in terms of event presence and ordering
  • Tango 2 is obtained by DPO fine-tuning of Tango on our synthetic preference dataset Audio-alpaca
  • Tango 2 beats all open-source baselines despite not seeing any training data beyond AudioCaps

  • Figure 1: The creation of Tango 2 has two major stages: (i) preference dataset creation, followed by (ii) DPO fine-tuning of the Tango LDM. The preference dataset Audio-alpaca is created through perturbation of AudioCaps prompts and/or adversarial ensemble filtering of Tango outputs using CLAP scores. The Tango-full-ft checkpoint is then fine-tuned with the DPO-diffusion loss on this preference data.
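The adversarial ensemble filtering step mentioned in the caption scores each candidate audio against the original prompt with CLAP and keeps the best-matching generation as the winner, with clearly worse generations (e.g., those produced from perturbed prompts) as losers. Below is a rough sketch of such a selection step; the `clap_score` helper, the threshold values, and the margin logic are illustrative assumptions rather than the paper's exact procedure.

```python
def build_preference_pairs(prompt, candidate_audios, clap_score,
                           min_winner_score=0.45, min_margin=0.05):
    """Select one winner and several losers for a prompt via CLAP scoring.

    `clap_score(prompt, audio)` is assumed to return a text-audio similarity
    score from a pretrained CLAP model; both thresholds are illustrative.
    """
    # Score every candidate audio against the original (unperturbed) prompt.
    scored = sorted(
        ((clap_score(prompt, audio), audio) for audio in candidate_audios),
        key=lambda pair: pair[0],
        reverse=True,
    )
    if not scored:
        return None
    winner_score, winner = scored[0]
    # Drop prompts whose best generation is still a poor semantic match.
    if winner_score < min_winner_score:
        return None
    # Keep as losers only candidates that match the prompt clearly worse
    # than the winner does.
    losers = [audio for score, audio in scored[1:]
              if winner_score - score >= min_margin]
    if not losers:
        return None
    return {"prompt": prompt, "winner": winner, "losers": losers}
```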


    Comparative Samples

    Text descriptions for which TANGO and TANGO 2 audio samples are provided:
      • A man speaks followed by a loud bursts and then laughter
      • A man speaking as a vehicle horn honks and a man speaks in the distance
      • Pet birds tweet, chirp, and sing while music plays
      • A vehicle struggling to start with some clicks and whines
      • A cuckoo bird coos followed by a train running on railroad tracks as a bell dings in the background
      • A man yelling in the background as several basketballs bounce and shoes squeak on a hardwood surface
      • Rain falling followed by fabric rustling and footsteps shuffling then a vehicle door opening and closing as plastic crinkles
      • A man and a woman talking followed by a bell ringing and a cat meowing as a crowd of people applaud
      • Fire igniting as a motor runs followed by an electronic beep and vehicle engines running idle then car horns honking
      • A man speaking followed by a faucet turning on and off while pouring water twice proceeded by water draining down a pipe
    Two general trends can be observed in these examples, in agreement with the human evaluators: (i) the events are more audibly present and (ii) the event order is better reproduced in the outputs of Tango 2 than in those of Tango.

    Limitations

    Tango 2 is based on Tango, which was trained on the relatively small AudioCaps dataset. Thus, it may not generate good audio samples for concepts unseen during training (e.g., a rooster crowing). Similarly, the preference dataset Audio-alpaca is synthetically derived from the training set of the very same AudioCaps dataset, and such datasets often contain some noise. As a result, Tango 2 is not always able to closely follow the instructions in the textual control prompts.

    Other comments

    1. We share our code on GitHub, open-sourcing audio generation model training and evaluation for easier comparison.

    2. We have released our model checkpoints for reproducibility.

    Acknowledgement

    This website is built upon https://github.com/AudioLDM/AudioLDM.github.io