Jascha Sohl-Dickstein

From RedwoodCenter

Latest revision as of 01:06, 22 September 2014
My new website is at [http://sohldickstein.com/ http://sohldickstein.com/]. Go there instead.


--


The information below is out of date.  I am now a postdoc in Surya Ganguli's lab at Stanford, and working with the Khan Academy. A more complete update to follow soon.  -Jascha (Aug 5, 2012)


--
 
I am a graduate student in the [http://redwood.berkeley.edu Redwood Center for Theoretical Neuroscience], at the University of California, Berkeley.  I am a member of [https://redwood.berkeley.edu/bruno/ Bruno Olshausen's] lab, and the [http://biophysics.berkeley.edu/ Biophysics Graduate Group].  My email address is [mailto:jascha@berkeley.edu jascha@berkeley.edu].
 
Most of my projects involve developing techniques to work with highly flexible but intractable probabilistic models, using ideas from statistical mechanics and dynamical systems.


My underlying interest is in how we learn to perceive the world.  There is evidence that much of our representation of the world is learned during development rather than being genetically hardwired - everything from the way light intensity is correlated on adjacent patches of the retina all the way up to rules for social interaction.  How this unsupervised learning problem is solved - how we learn the structure inherent in our sensory input by experiencing examples of it - is not well understood.  This is the problem I am interested in tackling.


== Projects ==


* '''Minimum Probability Flow (MPF) -''' A collaboration with Peter Battaglino and Michael R. DeWeese.  MPF is a technique for parameter estimation in un-normalized probabilistic models.  It proves to be an order of magnitude faster than competing techniques for the Ising model, and an effective tool for learning parameters for any non-normalizable distribution. See the [http://redwood.berkeley.edu/jascha/pdfs/icml.pdf ICML paper], the [http://prl.aps.org/abstract/PRL/v107/i22/e220601 PRL paper], and the [https://github.com/Sohl-Dickstein/Minimum-Probability-Flow-Learning released code].  If you are interested in using MPF in a continuous state space, you should use the method described in the Persistent MPF [http://redwood.berkeley.edu/jascha/pdfs/PMPF.pdf note].
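The MPF objective takes a particularly compact form for the Ising model with single-spin-flip connectivity.  The following is a minimal illustrative sketch in Python (the released code is Matlab, and all names here are mine, not the repository's):

```python
import numpy as np

def ising_energy(x, J):
    # E(x) = -0.5 x^T J x for a spin vector x in {-1, +1}^n
    return -0.5 * x @ J @ x

def mpf_objective(J, data):
    """MPF objective K(J) = (1/|D|) * sum over data points x and
    single-spin-flip neighbor states x' of exp((E(x) - E(x')) / 2).
    Minimizing K suppresses probability flow out of the data states,
    without ever evaluating the partition function."""
    K = 0.0
    for x in data:
        Ex = ising_energy(x, J)
        for i in range(len(x)):
            xp = x.copy()
            xp[i] = -xp[i]  # neighbor state: flip spin i
            K += np.exp(0.5 * (Ex - ising_energy(xp, J)))
    return K / len(data)
```

With J = 0 every state has equal energy, so each of the n neighbor terms contributes exp(0) = 1 and K equals the number of spins; gradient descent on K with respect to J then performs parameter estimation.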


* '''Hamiltonian Annealed Importance Sampling (HAIS) -''' A collaboration with [http://www.cs.berkeley.edu/~bjc/ Jack Culpepper].  Allows the estimation of importance weights - and thus partition functions and log likelihoods - for intractable probabilistic models.  See the [http://redwood.berkeley.edu/jascha/pdfs/HAIS.pdf tech report], and the [https://github.com/Sohl-Dickstein/Hamiltonian-Annealed-Importance-Sampling released code].
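The importance-weight estimate at the heart of annealed importance sampling can be illustrated with a toy one-dimensional example, using simple Metropolis transitions in place of the Hamiltonian dynamics used by HAIS; the repository's Matlab code does the real thing.  Everything below is an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.5  # target is the unnormalized density exp(-x^2 / (2 SIGMA^2))

def log_f(x, beta):
    # geometric path between the base N(0, 1) and the target
    return (1.0 - beta) * (-0.5 * x**2) + beta * (-0.5 * x**2 / SIGMA**2)

def ais_log_weight(n_temps=200, n_mh=2):
    betas = np.linspace(0.0, 1.0, n_temps)
    x = rng.standard_normal()  # exact sample from the base distribution
    logw = 0.0
    for k in range(1, n_temps):
        # accumulate the importance weight for this temperature step
        logw += log_f(x, betas[k]) - log_f(x, betas[k - 1])
        # a few Metropolis steps targeting the current intermediate density
        for _ in range(n_mh):
            xp = x + 0.5 * rng.standard_normal()
            if np.log(rng.random()) < log_f(xp, betas[k]) - log_f(x, betas[k]):
                x = xp
    return logw

# the mean importance weight is an unbiased estimate of
# Z_target / Z_base, which here equals SIGMA
z_ratio = np.mean([np.exp(ais_log_weight()) for _ in range(500)])
```

The same weights give log likelihood estimates for models whose partition function is otherwise intractable.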
 
* '''Extensions to Hamiltonian Monte Carlo -'''  See the [http://redwood.berkeley.edu/jascha/pdfs/HMC_noflip.pdf note] below on modifying the rejection rules to less frequently negate the momentum, increasing mixing speed.  Additionally, ongoing work maintains an online low rank approximation to the inverse Hessian by the introduction of auxiliary Gaussian distributed variables with the Hessian as their coupling matrix.


* '''Lie group models for transformations in natural video -''' A collaboration with Jimmy Wang and Bruno Olshausen.  We train first order differential operators on inter-frame differences in natural video, in order to learn a set of natural transformations.  We further explore the use of these transformations in video compression.  See [http://arxiv.org/abs/1001.1027 the tech report], and the [http://redwood.berkeley.edu/jascha/pdfs/PID1615931.pdf DCC paper].
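In the simplest first-order version of this idea, consecutive frames are related by x_{t+1} ≈ (I + A) x_t, and the generator A is fit to inter-frame differences.  A toy Python sketch on a two-dimensional rotation rather than natural video (names and setup are illustrative):

```python
import numpy as np

# synthetic "video": rotate a random 2-D vector by a small angle per frame
theta = 0.05
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rng = np.random.default_rng(1)
frames = [rng.standard_normal(2)]
for _ in range(200):
    frames.append(R @ frames[-1])
X = np.array(frames).T            # shape (2, T+1), one frame per column

# first-order model: x_{t+1} - x_t ≈ A x_t; fit A by least squares
X0, dX = X[:, :-1], X[:, 1:] - X[:, :-1]
A = dX @ X0.T @ np.linalg.inv(X0 @ X0.T)
```

The recovered A equals R - I, i.e. approximately theta times the infinitesimal generator [[0, -1], [1, 0]] of rotation.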
* '''A comparison of the log likelihoods of popular image models -''' A collaboration with [http://www.cs.berkeley.edu/~bjc/ Jack Culpepper] and [https://redwood.berkeley.edu/cadieu/homepage/Home.html Charles Cadieu].  We use Hamiltonian Annealed Importance Sampling (HAIS - above) to compare the log likelihoods of popular image models trained via several parameter estimation techniques.


* '''Bilinear generative models for natural images -''' A collaboration with [http://www.cs.berkeley.edu/~bjc/ Jack Culpepper] and Bruno Olshausen.  See the [http://redwood.berkeley.edu/jascha/pdfs/culpepper-iccv13.pdf ICCV paper].


[[Image:nicol_bat.jpg|50px|thumb|right]]
* '''A device for human echolocation -''' A collaboration with Nicol Harper and Chris Rodgers. (see stylish picture to right)


* '''Statistical analysis of medical images of cancer patients -''' A collaboration with Joel Zylberberg and Michael DeWeese.  (See also an earlier project training statistical models on MRI and CT breast images - [http://link.aip.org/link/?PSISDG/7263/726317/1 SPIE publication].)


* '''Hessian-aware online optimization -'''  By rewriting the inverse Hessian in terms of its Taylor expansion, and then accumulating terms in this expansion in an online fashion, neat things can be done...


== Code ==
 
* '''[https://github.com/Sohl-Dickstein/Minimum-Probability-Flow-Learning MPF] -''' This repository contains Matlab code implementing Minimum Probability Flow learning (MPF) for several cases, specifically:
** '''MPF_ising/ -''' parameter estimation in the Ising model
** '''MPF_RBM_compare_log_likelihood/ -''' parameter estimation in Restricted Boltzmann Machines.  This directory also includes code comparing the log likelihood of small RBMs trained via pseudolikelihood and Contrastive Divergence to ones trained via MPF.
* '''[https://github.com/Sohl-Dickstein/Hamiltonian-Annealed-Importance-Sampling HAIS] -''' This repository contains Matlab code to perform partition function estimation, log likelihood estimation, and importance weight estimation in models with intractable partition functions and continuous state spaces, using Hamiltonian Annealed Importance Sampling (HAIS). It can also be used for standard Hamiltonian Monte Carlo sampling (single step, with partial momentum refreshment).
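The single-step sampler mentioned above - one leapfrog step with partial momentum refreshment, negating the momentum on rejection - is easy to sketch.  An illustrative Python version targeting a standard normal (the repository itself is Matlab; parameter names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)

def hmc_step(x, p, E, grad_E, eps=0.5, alpha=0.9):
    """One HMC update with partial momentum refreshment: mix the
    momentum with fresh noise, take a single leapfrog step, apply a
    Metropolis test, and negate the momentum on rejection."""
    p = alpha * p + np.sqrt(1.0 - alpha**2) * rng.standard_normal(p.shape)
    # single leapfrog step
    pn = p - 0.5 * eps * grad_E(x)
    xn = x + eps * pn
    pn = pn - 0.5 * eps * grad_E(xn)
    # Metropolis test on the joint energy H(x, p) = E(x) + p^T p / 2
    dH = (E(xn) + 0.5 * pn @ pn) - (E(x) + 0.5 * p @ p)
    if np.log(rng.random()) < -dH:
        return xn, pn        # accept
    return x, -p             # reject: flip the momentum

# sample a standard normal, for which E(x) = x^T x / 2
x, p = np.zeros(1), rng.standard_normal(1)
samples = []
for _ in range(20000):
    x, p = hmc_step(x, p, lambda z: 0.5 * z @ z, lambda z: z)
    samples.append(x[0])
samples = np.array(samples[1000:])
```

The partial refreshment (alpha close to 1) lets momentum persist across steps, so successive single leapfrog steps behave like one long trajectory.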


== Notes ==


* [http://redwood.berkeley.edu/jascha/pdfs/HMC_reducedflip.pdf Hamiltonian Monte Carlo with Fewer Momentum Reversals] - Reduces the number of momentum reversals required in Hamiltonian Monte Carlo. This is accomplished by maintaining the net exchange of probability between states with opposite momenta, but reducing the rate of exchange in both directions such that it is 0 in one direction.
 
* [http://redwood.berkeley.edu/jascha/pdfs/independence_of_energy_function_contributions.pdf On the independence of linear contributions to an energy function] - Even in the overcomplete case where there are more experts than data dimensions, product-of-experts style models tend to learn decorrelated features.  This note provides motivation for this by Taylor expanding the KL divergence, and observing that there are terms in the expansion which specifically penalize similarity between the experts.
 
* [http://redwood.berkeley.edu/jascha/pdfs/PMPF.pdf Persistent Minimum Probability Flow] - Develops MPF in the case that non-data states are captured by persistent samples from the current estimate of the model distribution.  Analogous to Persistent CD.  This technique should be used for MPF in continuous state spaces.


* [http://redwood.berkeley.edu/jascha/pdfs/MPF_sampling.pdf Sampling the Connectivity Pattern in Minimum Probability Flow Learning] - Describes how the connectivity pattern between states in MPF can be described using a proposal distribution, rather than a deterministic rule.


* [http://redwood.berkeley.edu/jascha/pdfs/generic_entropy_091121.pdf Entropy of Generic Distributions] - Calculates the entropy that can be expected for a distribution drawn at random from the simplex of all possible distributions ([http://dittler.us/ John Schulman] points out that ET Jaynes deals with similar questions in chapter 11 of "Probability Theory: The Logic Of Science")
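For a distribution drawn uniformly from the n-simplex (a Dirichlet with all concentration parameters equal to 1), the expected entropy has the closed form psi(n+1) - psi(2) = H_n - 1 nats, where H_n is the n-th harmonic number.  A quick Monte Carlo check of that claim in Python:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10

# draw distributions uniformly from the simplex and average their entropies
P = rng.dirichlet(np.ones(n), size=200000)
mc_entropy = -(P * np.log(np.clip(P, 1e-300, None))).sum(axis=1).mean()

# closed form: expected entropy is H_n - 1 nats
expected = sum(1.0 / k for k in range(1, n + 1)) - 1.0
```

For n = 10 this gives roughly 1.93 nats, noticeably below the maximum entropy log(10) ≈ 2.30: a "generic" distribution is measurably non-uniform.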


The following are titles for informal notes I intend to write, but haven't gotten to/finished yet.  If any of the following sound interesting to you, pester me and they will appear more quickly.


* Natural gradients explained via an analogy to signal whitening
* The [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1467533 field of experts model] learns Gabor-like receptive fields when trained via minimum probability flow or score matching
* For small time bins, [http://www.jneurosci.org/cgi/content/abstract/25/47/11003 generalized linear models] and causal Boltzmann machines become equivalent
* How to construct phase space volume preserving recurrent networks
* Maximum likelihood learning as constraint satisfaction
* A spatial derivation of score matching
== Publications ==


J Sohl-Dickstein, P Battaglino, M DeWeese. New method for parameter estimation in probabilistic models: Minimum probability flow. Physical Review Letters (2011). http://prl.aps.org/abstract/PRL/v107/i22/e220601


J Sohl-Dickstein, P Battaglino, M DeWeese. Minimum probability flow learning. "Distinguished Paper" ICML (2011) http://redwood.berkeley.edu/jascha/pdfs/icml.pdf with supplementary material http://redwood.berkeley.edu/jascha/pdfs/supplementary_material_icml.pdf (also see the Persistent MPF [http://redwood.berkeley.edu/jascha/pdfs/PMPF.pdf note] for more on learning in continuous state spaces)
 
J Sohl-Dickstein, BJ Culpepper. Hamiltonian annealed importance sampling for partition function estimation.  Under submission, draft available as technical report. https://github.com/Sohl-Dickstein/Hamiltonian-Annealed-Importance-Sampling/blob/master/HAIS.pdf


A Hayes, J Grotzinger, L Edgar, SW Squyres, W Watters, J Sohl-Dickstein. Reconstruction of Eolian Bed Forms and Paleocurrents from Cross-Bedded Strata at Victoria Crater, Meridiani Planum, Mars, Journal of Geophysical Research (2011) http://www.agu.org/pubs/crossref/2011/2010JE003688.shtml


CM Wang, J Sohl-Dickstein, I Tosik. Lie Group Transformation Models for Predictive Video Coding. Proceedings of the Data Compression Conference (2011) http://redwood.berkeley.edu/jascha/pdfs/DCC_2011_LieGroup.pdf
 
J Sohl-Dickstein, CM Wang, BA Olshausen. An Unsupervised Algorithm For Learning Lie Group Transformations. Under submission, draft available as technical report. http://arxiv.org/abs/1001.1027


BJ Culpepper, J Sohl-Dickstein, B Olshausen.  Building a better probabilistic model of images by factorization.  International Conference on Computer Vision (2011) http://redwood.berkeley.edu/jascha/pdfs/culpepper-iccv13.pdf
 
 
 


C Abbey, J Sohl-Dickstein, BA Olshausen. Higher-order scene statistics of breast images. Proceedings of SPIE (2009) http://link.aip.org/link/?PSISDG/7263/726317/1

K Kinch, J Sohl-Dickstein, J Bell III, JR Johnson, W Goetz, GA Landis. Dust deposition on the Mars Exploration Rover Panoramic Camera (Pancam) calibration targets. Journal of Geophysical Research-Planets (2007) http://www.agu.org/pubs/crossref/2007/2006JE002807.shtml

POSTER - J Sohl-Dickstein, BA Olshausen. Learning in energy based models via score matching. Cosyne (2007) - this (dense!) poster introduces a spatial derivation of score matching, applies it to learning in a Field of Experts model, and then extends Field of Experts to work with heterogeneous experts (to form a "tapestry of experts"). I'm including it as it hasn't been written up elsewhere. download poster

JR Johnson, J Sohl-Dickstein, WM Grundy, RE Arvidson, J Bell III, P Christensen, T Graff, EA Guinness, K Kinch, R Morris, MK Shepard. Radiative transfer modeling of dust-coated Pancam calibration target materials: Laboratory visible/near-infrared spectrogoniometry. Journal of Geophysical Research (2006) http://www.agu.org/pubs/crossref/2006/2005JE002658.shtml

J Bell III, J Joseph, J Sohl-Dickstein, H Arneson, M Johnson, M Lemmon, D Savransky. In-flight calibration and performance of the Mars Exploration Rover Panoramic Camera (Pancam) instruments. Journal of Geophysical Research (2006) http://www.agu.org/pubs/crossref/2006/2005JE002444.shtml

Parker et al. Stratigraphy and sedimentology of a dry to wet eolian depositional system, Burns formation, Meridiani Planum, Mars. Earth and Planetary Science Letters (2005)

Soderblom et al. Pancam multispectral imaging results from the Opportunity rover at Meridiani Planum. Science (2004) http://www.sciencemag.org/content/306/5702/1703

Soderblom et al. Pancam multispectral imaging results from the Spirit rover at Gusev crater. Science (2004) http://www.sciencemag.org/content/305/5685/800

Smith et al. Athena microscopic imager investigation. Journal of Geophysical Research-Planets (2003)

Bell et al. Hubble Space Telescope Imaging and Spectroscopy of Mars During 2001. American Geophysical Union (2001)