PlenOctrees for Real-time Rendering of Neural Radiance Fields
Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa


arXiv / Demo / Project Website / Video

We introduce a method to render Neural Radiance Fields (NeRFs) in real time without sacrificing quality. Our method preserves the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects.
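As a rough illustration of the idea (not the paper's exact implementation), a PlenOctree leaf can store view-dependent color as spherical-harmonic coefficients, which are then evaluated in closed form at render time with no network query. The sketch below uses only the degree-0 and degree-1 real SH basis functions; the `sh_color` helper and the coefficient layout are hypothetical simplifications:

```python
import numpy as np

def sh_color(coeffs, d):
    """Evaluate view-dependent RGB from degree-0/1 real spherical-harmonic
    coefficients stored at a leaf, given a unit view direction d = (x, y, z).
    coeffs has shape (4, 3): one RGB coefficient per SH basis function."""
    x, y, z = d
    basis = np.array([0.282095,        # Y_0^0  (constant term)
                      0.488603 * y,    # Y_1^-1
                      0.488603 * z,    # Y_1^0
                      0.488603 * x])   # Y_1^1
    return basis @ coeffs

# A leaf whose color is constant red regardless of view direction.
coeffs = np.zeros((4, 3))
coeffs[0, 0] = 1.0 / 0.282095   # cancel the constant basis factor
print(sh_color(coeffs, np.array([0.0, 0.0, 1.0])))  # [1. 0. 0.]
```

Because evaluating a handful of polynomial basis functions is far cheaper than an MLP forward pass, this is what makes real-time rendering feasible once the field is baked into the octree.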

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan


arXiv / Project Website / Video

The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. We prefilter the positional encoding function and train NeRF to generate anti-aliased renderings.
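The prefiltering amounts to taking the expected value of the sin/cos positional-encoding features under a Gaussian over the sampled region, which attenuates frequencies the region cannot resolve. A simplified one-dimensional sketch (the `integrated_pos_enc` name and the scalar-variance setup are illustrative assumptions, not the paper's full conical-frustum formulation):

```python
import numpy as np

def integrated_pos_enc(mu, var, num_freqs=4):
    """Expected sin/cos features of a Gaussian-distributed coordinate.
    For x ~ N(mu, var), E[sin(2^j x)] = sin(2^j mu) * exp(-0.5 * 4^j * var),
    so high frequencies are smoothly suppressed when the variance is large."""
    feats = []
    for j in range(num_freqs):
        scale = 2.0 ** j
        weight = np.exp(-0.5 * (scale ** 2) * var)  # low-pass attenuation
        feats.append(weight * np.sin(scale * mu))
        feats.append(weight * np.cos(scale * mu))
    return np.concatenate(feats, axis=-1)

mu = np.array([0.5])
print(integrated_pos_enc(mu, var=0.0))  # no attenuation: plain sin/cos features
print(integrated_pos_enc(mu, var=4.0))  # high-frequency features shrink toward 0
```

A wide Gaussian (a coarse, distant sample) keeps only low frequencies, while a narrow one (a fine, nearby sample) keeps them all, which is what removes the aliasing.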

Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
Ajay Jain, Matthew Tancik, Pieter Abbeel


arXiv / Project Website / Video

We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. Our semantic loss allows us to supervise DietNeRF from arbitrary poses. We extract these semantics using a pre-trained visual encoder such as CLIP.

Learned Initializations for Optimizing Coordinate-Based Neural Representations
Matthew Tancik*, Ben Mildenhall*, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng

CVPR (2021) Oral

arXiv / Project Website / Video

We find that standard meta-learning algorithms for weight initialization can enable faster convergence during optimization and can serve as a strong prior over the signal class being modeled, resulting in better generalization when only partial observations of a given signal are available.
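To give a feel for what such meta-learning does, here is a toy sketch in the style of Reptile, one standard algorithm of this family: the initialization is repeatedly nudged toward each task's fitted solution, so it ends up near the center of the task distribution and new signals can be fit in a few steps. The task family (fitting `a*sin(x) + b`) and all names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_fit(theta, a, b, steps=10, lr=0.1):
    """Fit y = theta[0]*sin(x) + theta[1] to one task's data by gradient descent."""
    x = np.linspace(-3, 3, 32)
    y = a * np.sin(x) + b
    for _ in range(steps):
        err = theta[0] * np.sin(x) + theta[1] - y
        grad = np.array([2 * np.mean(err * np.sin(x)), 2 * np.mean(err)])
        theta = theta - lr * grad
    return theta

# Reptile-style meta-training: move the shared init toward each task's solution.
theta0 = np.zeros(2)
for _ in range(200):
    a, b = rng.uniform(0.5, 1.5), rng.uniform(-0.5, 0.5)
    theta_task = inner_fit(theta0, a, b)
    theta0 = theta0 + 0.1 * (theta_task - theta0)  # outer meta-update

print(theta0)  # near the mean task parameters (a ~ 1, b ~ 0)
```

The learned initialization acts as a prior over the signal class: starting inner optimization from `theta0` rather than zeros converges faster and generalizes better from partial observations.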

pixelNeRF: Neural Radiance Fields from One or Few Images
Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa

CVPR (2021)

arXiv / Project Website / Video

We propose a learning framework that predicts a continuous neural scene representation from one or few input images by conditioning on image features encoded by a convolutional neural network.

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis
Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron

CVPR (2021)

arXiv / Project Website / Video

We recover relightable NeRF-like models using neural approximations of expensive visibility integrals, so we can simulate complex volumetric light transport during training.

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Matthew Tancik*, Pratul P. Srinivasan*, Ben Mildenhall*, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng

NeurIPS (2020) Spotlight

arXiv / Project Website / Code / Video

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes.
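The mapping itself is only a few lines: project the input coordinates through a random Gaussian matrix and take sines and cosines. A minimal sketch (the `fourier_features` name and the specific bandwidth value are illustrative):

```python
import numpy as np

def fourier_features(x, B):
    """Random Fourier feature mapping gamma(x) = [cos(2*pi*Bx), sin(2*pi*Bx)].
    x: (n, d) input points, B: (m, d) random projection matrix."""
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
sigma = 10.0                          # bandwidth: larger -> higher frequencies
B = sigma * rng.standard_normal((64, 2))
x = rng.uniform(size=(5, 2))          # e.g. 2D pixel coordinates in [0, 1]
print(fourier_features(x, B).shape)   # (5, 128)
```

Feeding these features to the MLP instead of raw coordinates lets it represent fine detail; the scale of `B` controls the frequency content the network can learn.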

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Ben Mildenhall*, Pratul P. Srinivasan*, Matthew Tancik*, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

ECCV (2020) Oral - Best Paper Honorable Mention

arXiv / Project Website / Code / Video / Follow-ups

We propose an algorithm that represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. With this representation we achieve state-of-the-art results for synthesizing novel views of scenes from a sparse set of input views.
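Rendering a pixel integrates the network's density and color outputs along the camera ray with classical volume rendering. A minimal sketch of that quadrature for a single ray, with hand-picked toy densities in place of network outputs (the `composite` helper is an illustrative name):

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Volume-rendering quadrature along one ray:
    alpha_i = 1 - exp(-sigma_i * delta_i)          (per-sample opacity)
    T_i     = prod_{j<i} (1 - alpha_j)             (transmittance)
    C       = sum_i T_i * alpha_i * c_i            (composited color)"""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors, weights

# Toy ray: empty space, then an opaque red region.
sigmas = np.array([0.0, 0.0, 50.0, 50.0])
colors = np.array([[0.0, 0, 0], [0, 0, 0], [1.0, 0, 0], [1.0, 0, 0]])
deltas = np.full(4, 0.25)
rgb, w = composite(sigmas, colors, deltas)
print(rgb)  # ~[1, 0, 0]: the ray terminates in the red region
```

Because every operation here is differentiable, the photometric loss on rendered pixels backpropagates through this compositing step into the network's densities and colors.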

StegaStamp: Invisible Hyperlinks in Physical Photographs
Matthew Tancik*, Ben Mildenhall*, Ren Ng

CVPR (2020)

arXiv / Project Website / Code / Video

We present a deep learning method for embedding imperceptible data in printed images that can be recovered after photographing the print. The method is robust to corruptions such as shadows, occlusions, noise, and color shifts.

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination
Pratul P. Srinivasan*, Ben Mildenhall*, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

CVPR (2020)

arXiv / Project Website / Video

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair. We propose a model that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume.

TurkEyes: A Web-Based Toolbox for Crowdsourcing Attention Data
Anelise Newman, Barry McNamara, Camilo Fosco, Yun Bin Zhang, Pat Sukham, Matthew Tancik, Nam Wook Kim, Zoya Bylinskii

CHI (2020)

arXiv / Project Website / Code

Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is cumbersome to collect. Here we explore an alternative: a comprehensive web-based toolbox for crowdsourcing visual attention.

Towards Photography Through Realistic Fog
Guy Satat, Matthew Tancik, Ramesh Raskar

ICCP (2018)

Project Website / Local Copy / Video / MIT News

We demonstrate a technique that recovers the reflectance and depth of a scene obstructed by dense, dynamic, and heterogeneous fog. We use a single-photon avalanche diode (SPAD) camera to filter out the light that scatters off the fog in the scene.

Flash Photography for Data-Driven Hidden Scene Recovery
Matthew Tancik, Guy Satat, Ramesh Raskar


We introduce a method that couples traditional geometric understanding and data-driven techniques to image around corners with consumer cameras. We show that we can recover information in real scenes despite only training our models on synthetically generated data.

Photography optics at relativistic speeds
Barmak Heshmat, Matthew Tancik, Guy Satat, Ramesh Raskar

Nature Photonics (2018)

Project Website / Nature Article / Video / MIT News

We demonstrate that by folding the optical path in time, one can collapse the conventional photography optics into a compact volume or multiplex various functionalities into a single imaging optics piece without losing spatial or temporal resolution. By using time-folding at different regions of the optical path, we achieve an order of magnitude lens tube compression, ultrafast multi-zoom imaging, and ultrafast multi-spectral imaging.

Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics
Spandan Madan*, Zoya Bylinskii*, Matthew Tancik*, Adria Recasens, Kim Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Fredo Durand


Combining icon classification and text extraction, we present a multi-modal summarization application. Our application takes an infographic as input and automatically produces text tags and visual hashtags that are textually and visually representative of the infographic's topics, respectively.

Lensless Imaging with Compressive Ultrafast Sensing
Guy Satat, Matthew Tancik, Ramesh Raskar

IEEE Transactions on Computational Imaging (2017)

Project Website / Local Copy / IEEE / MIT News

We demonstrate a new imaging method that is lensless and requires only a single pixel. Compared to previous single-pixel cameras, our system enables significantly faster and more efficient acquisition by combining ultrafast time-resolved measurement with compressive sensing.
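The compressive-sensing side of this can be illustrated generically: a sparse scene is reconstructed from far fewer measurements than pixels by solving a sparsity-regularized inverse problem. The sketch below uses iterative soft-thresholding (ISTA) with a random Gaussian sensing matrix; it is an illustrative stand-in, not the paper's time-resolved sensing model:

```python
import numpy as np

def ista(Phi, y, lam=0.01, lr=0.05, iters=2000):
    """Recover a sparse signal x from measurements y = Phi @ x via
    iterative soft-thresholding on the l1-regularized least squares."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x - lr * Phi.T @ (Phi @ x - y)                       # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lr * lam, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 30, 3                 # 100-pixel scene, 30 measurements, 3 nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(Phi, Phi @ x_true)
print(np.linalg.norm(x_hat - x_true))  # small if the sparse scene is recovered
```

The point of the comparison in the abstract is that time-resolved measurements make each captured sample more informative, so even fewer acquisitions are needed than in a conventional single-pixel setup.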

Object Classification through Scattering Media with Deep Learning on Time Resolved Measurement
Guy Satat, Matthew Tancik, Otkrist Gupta, Barmak Heshmat, Ramesh Raskar

Optics Express (2017)

Project Website / Local Copy / OSA

We present a deep learning method for object classification through scattering media. Our method trains on synthetic data with variations in the calibration parameters, which allows the network to learn a calibration-invariant model.