Spectral Sensitivity Estimation Without a Camera

Grigory Solomatov
Derya Akkaynak


[Paper]
[Code]
[Poster]
[Dataset]


Each make and model of camera is characterized by a unique set of spectral sensitivities that define how it responds to light. In effect, different cameras register different RGB values for the same scene, even when they image it simultaneously. Knowing camera spectral responses can help standardize color capture and simplify the solution of many problems in computer vision and related fields. Image credit: Midjourney.
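The dependence of recorded RGB values on a camera's spectral sensitivities can be sketched as a discretized image-formation model. The Gaussian curves and the sloped radiance spectrum below are illustrative placeholders, not data from the paper:

```python
import numpy as np

# Wavelength grid (400-700 nm in 10 nm steps), a common sampling in color science.
wavelengths = np.arange(400, 710, 10)
n = len(wavelengths)

def gaussian(mu, sigma):
    """Bell-shaped stand-in for a real sensitivity curve."""
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Hypothetical spectral sensitivities for two cameras (rows: R, G, B).
camera_a = np.stack([gaussian(600, 30), gaussian(540, 30), gaussian(460, 30)])
camera_b = np.stack([gaussian(610, 40), gaussian(550, 25), gaussian(470, 35)])

# A hypothetical scene radiance spectrum shared by both cameras.
radiance = np.linspace(0.2, 1.0, n)

# Each camera integrates the same radiance against its own curves, so the two
# cameras record different RGB triplets for the identical scene.
rgb_a = camera_a @ radiance
rgb_b = camera_b @ radiance
print(rgb_a, rgb_b)
```

Standardizing color across cameras amounts to undoing this per-camera integration, which is only possible when the sensitivity curves are known.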


Abstract

A number of problems in computer vision and related fields would be mitigated if camera spectral sensitivities were known. As consumer cameras are not designed for high-precision visual tasks, manufacturers do not disclose spectral sensitivities. Their direct estimation requires a costly optical setup, which has prompted researchers to devise numerous indirect methods that aim to lower cost and complexity by using color targets. However, the use of color targets gives rise to new complications that make the estimation more difficult, and consequently, there currently exists no simple, low-cost, robust go-to method for spectral sensitivity estimation that non-specialized research labs can adopt. Furthermore, even when not limited by hardware or cost, researchers frequently work with imagery from multiple cameras that they do not have in their possession. To provide a practical solution to this problem, we propose a framework for spectral sensitivity estimation that requires neither any hardware (including a color target) nor physical access to the camera itself. Similar to other work, we formulate an optimization problem that minimizes a two-term objective function: a camera-specific term from a system of equations, and a universal term that bounds the solution space. Unlike other work, we utilize publicly available high-quality calibration data to construct both terms. We use the colorimetric mapping matrices provided by the Adobe DNG Converter to formulate the camera-specific system of equations, and constrain the solutions using an autoencoder trained on a database of ground-truth curves. On average, we achieve reconstruction errors as low as those that can arise due to manufacturing imperfections between two copies of the same camera. We provide predicted sensitivities for more than 1,000 cameras that the Adobe DNG Converter currently supports, and discuss which tasks can become trivial when camera responses are available.
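The two-term formulation described above can be sketched as a regularized least-squares problem. Below, a synthetic linear system stands in for the equations the paper derives from the Adobe DNG colorimetric mapping matrices, and a simple second-difference smoothness prior stands in for the paper's autoencoder-based constraint; the dimensions and weight are illustrative:

```python
import numpy as np

n = 31  # wavelength samples per channel (e.g., 400-700 nm at 10 nm)

# Camera-specific term: synthetic system A @ s ~ b, a stand-in for equations
# built from the Adobe DNG Converter's colorimetric mapping matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, n))
s_true = np.exp(-0.5 * ((np.arange(n) - 15) / 5.0) ** 2)  # smooth ground truth
b = A @ s_true

# Universal term: second-difference operator penalizing non-smooth solutions,
# a simple substitute for the learned autoencoder constraint.
D = np.diff(np.eye(n), n=2, axis=0)

lam = 1e-2  # weight balancing data fidelity against the prior
# Minimize ||A s - b||^2 + lam * ||D s||^2 via a stacked least-squares system.
A_aug = np.vstack([A, np.sqrt(lam) * D])
b_aug = np.concatenate([b, np.zeros(D.shape[0])])
s_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
```

The data term alone is underdetermined (20 equations, 31 unknowns); the prior selects a plausible smooth curve from the solution space, which is the role the trained autoencoder plays in the paper.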


Poster


 [Download]


Results



View Curves from our 1,000+ Camera Dataset



[All Estimated Curves]
[Ground Truth Curves]


Paper and Supplementary Material

Solomatov, G., Akkaynak, D.
Spectral Sensitivity Estimation Without a Camera.
Supplementary Material



Bibtex

@inproceedings{Solomatov2023spectral,
  author    = {Grigory Solomatov and Derya Akkaynak},
  title     = {Spectral Sensitivity Estimation Without a Camera},
  booktitle = {IEEE International Conference on Computational Photography (ICCP)},
  year      = {2023},
  month     = {July}
}


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here. This page came to life through a collaborative effort led by Alice Vranka, with great help from Olena Kovalenko and Michael Leibovitch. Our poster was designed and illustrated by Yarden Ben Tabou De Leon.