Source-aware Encoder / Bitterli The Grey And White Room
Source-aware encoders enable straightforward adaptation of a trained model to new content. We train new source-aware encoders for the Tungsten Renderer on a training set built from publicly available scenes.
The interactive viewer below compares results from several networks:
- one trained from scratch (random initialization) on the full set of 1200 frames,
- one trained from scratch (random initialization) on a small subset of 75 frames, and
- one for which only a new source-aware encoder was trained from scratch on the 75-frame subset, attached to a network previously trained on Moana and Cars.
The results are compared against the NFOR denoiser (Bitterli et al. 2016). In most scenes, training only a new encoder frontend on the small dataset yields performance similar to training from scratch on the full dataset.
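The adaptation scheme above boils down to freezing the shared backend and optimizing only the parameters of the new source-specific encoder. A minimal sketch of that idea, with illustrative parameter names that are not the authors' actual API:

```python
# Hypothetical sketch: adapt a trained network to a new source by updating
# only the new encoder's parameters while the shared backend stays frozen.
# The parameter names ("encoder.w", "backend.w") are purely illustrative.

def sgd_step(params, grads, trainable, lr=0.1):
    """Apply one SGD update, but only to parameters flagged as trainable."""
    return {
        name: (w - lr * grads[name]) if name in trainable else w
        for name, w in params.items()
    }

# Toy parameters: the new encoder (trainable) and the pretrained backend (frozen).
params = {"encoder.w": 1.0, "backend.w": 2.0}
grads = {"encoder.w": 0.5, "backend.w": 0.5}

params = sgd_step(params, grads, trainable={"encoder.w"})
# Only encoder.w moves (1.0 -> 0.95); backend.w remains 2.0.
```

Because only the small encoder is optimized, far less training data is needed, which is consistent with the 75-frame subset sufficing in the comparison above.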