Source-aware Encoder / Bitterli Modern Hall

Source-aware encoders enable straightforward adaptation of a trained model to new content. We train new source-aware encoders for the Tungsten Renderer on a training set built upon publicly available scenes.

The interactive viewer below compares results from three networks:

  1. one trained from scratch (random initialization) on the full set of 1200 frames,
  2. one trained from scratch (random initialization) on a small subset of 75 frames, and
  3. one obtained by training only a new source-aware encoder (from scratch) for a network previously trained on Moana and Cars, using the same 75-frame subset.

The results are compared against the NFOR denoiser (Bitterli et al. 2016). In most scenes, training only a new encoder on the small dataset yields performance similar to training from scratch on the full dataset.
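The adaptation scheme described above can be sketched in PyTorch. The class and function names, layer sizes, and channel counts below are illustrative assumptions, not the actual architecture: the key idea is simply that a new renderer-specific encoder is trained from scratch while the previously trained backbone stays frozen.

```python
import torch
import torch.nn as nn

class SourceAwareDenoiser(nn.Module):
    """Hypothetical minimal denoiser: a source-aware encoder feeding a shared backbone."""

    def __init__(self, in_channels=8, feat=32):
        super().__init__()
        # Per-renderer encoder: maps renderer-specific inputs (color, albedo,
        # normals, variances, ...) into a common feature space.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat, 3, padding=1), nn.ReLU(),
        )
        # Shared backbone and output head, trained once on the original data.
        self.backbone = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.backbone(self.encoder(x))

def adapt_to_new_source(model, new_in_channels, feat=32):
    """Attach a fresh encoder for a new renderer and freeze the backbone.

    Returns an optimizer over the encoder parameters only, so a small
    dataset (e.g. 75 frames) suffices to adapt the model.
    """
    model.encoder = nn.Sequential(
        nn.Conv2d(new_in_channels, feat, 3, padding=1), nn.ReLU(),
    )
    for p in model.backbone.parameters():
        p.requires_grad = False
    return torch.optim.Adam(model.encoder.parameters(), lr=1e-4)

# Example: adapt a pretrained model to a renderer with 11 input channels.
model = SourceAwareDenoiser()
opt = adapt_to_new_source(model, new_in_channels=11)
out = model(torch.randn(1, 11, 64, 64))
```

Only the encoder receives gradient updates; the frozen backbone preserves the denoising knowledge learned from the original (larger) training set.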