Seminars

From RedwoodCenter

Instructions

  1. Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but the timing is flexible if another day works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. Use your own judgment here, though: if it's a good opportunity and that is the only time that works, then go ahead with it.
  2. Fill in the speaker information in the 'tentative/confirmed speaker' section, leaving the status flag set to 'tentative'. Please include your name and email as host in case somebody wants to contact you.
  3. Once the invitation is confirmed with the speaker, change the status flag to 'confirmed'. Notify me [1] that we have a confirmed speaker so that I can update the public web page. Please include a title and abstract.
  4. Kati or I will also send out an announcement.
  5. If the speaker needs accommodations, contact Kati [2] to reserve a room at the Faculty Club.
  6. During the visit you will need to look after the visitor: schedule visits with other labs, make plans for lunch, dinner, etc., and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment).
  7. After the seminar, have the speaker submit travel expenses to Jadine Palapaz [3] at RES for reimbursement. You can get a travel reimbursement form online and give it to the speaker; if they have all their receipts on hand they can submit everything before they leave, otherwise they can mail it in afterwards.

Tentative / Confirmed Speakers

29 Feb 2012 (at noon as usual)

  • Speaker: Heather Read
  • Affiliation: U. Connecticut
  • Host: Mike DeWeese
  • Status: confirmed
  • Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"
  • Abstract: TBD

1 Mar 2012 (note: Thurs)

  • Speaker: Daniel Zoran
  • Affiliation: Hebrew University, Jerusalem
  • Host: Bruno
  • Status: confirmed
  • Title: TBA
  • Abstract:

7 Mar 2012

  • Speaker: David Sivak
  • Affiliation: UCB
  • Host: Mike DeWeese
  • Status: Confirmed
  • Title: TBA
  • Abstract:

8 Mar 2012

  • Speaker: Ivan Schwab
  • Affiliation: UC Davis
  • Host: Bruno
  • Status: Confirmed
  • Title: Evolution's Witness: How Eyes Evolved
  • Abstract:

14 Mar 2012

  • Speaker: David Sussillo
  • Affiliation:
  • Host: Jascha
  • Status: confirmed
  • Title:
  • Abstract:

11 April 2012

  • Speaker: Logan Grosenick
  • Affiliation: Stanford
  • Host: Jascha
  • Status: confirmed
  • Title: TBD - something about light field microscopy and calcium imaging
  • Abstract:

18 April 2012

  • Speaker: Kristofer Bouchard
  • Affiliation: UCSF
  • Host: Bruno
  • Status: confirmed
  • Title: TBD - something about ECoG during human speech production
  • Abstract:


23 Jan 2013

  • Speaker: Carlos Brody
  • Affiliation: Princeton
  • Host: Mike D.
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

Previous Seminars

2011/12 academic year

15 Sep 2011 (Thursday, at noon)

  • Speaker: Kathrin Berkner
  • Affiliation: Ricoh Innovations Inc.
  • Host: Ivana Tosic
  • Status: Confirmed
  • Title: TBD
  • Abstract: TBD

21 Sep 2011

  • Speaker: Mike Kilgard
  • Affiliation: UT Dallas
  • Host: Michael Silver
  • Status: Confirmed
  • Title:
  • Abstract:

27 Sep 2011

  • Speaker: Moshe Gur
  • Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology
  • Host: Bruno/Stan
  • Status: Confirmed
  • Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?
  • Abstract: Any physical device we know, including computers, must, when comparing A to B, send the information to a common point C. I have done experiments in three modalities - somatosensory, auditory, and visual - in which two different loci in primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take for example the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information are represented at very different brain loci.

5 Oct 2011

  • Speaker: Susanne Still
  • Affiliation: University of Hawaii at Manoa
  • Host: Jascha
  • Status: confirmed
  • Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium
  • Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably through Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energy-efficiently. We might speculate that biological systems have evolved to reflect this kind of adaptation. One interesting insight here is that purely physical considerations lead to a requirement perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.
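The central bound described in the abstract can be sketched as follows (the notation here is my own shorthand, not taken from the talk; the published version appears in Still, Sivak, Bell & Crooks, "Thermodynamics of Prediction", PRL 2012):

```latex
% s_t: the system's state (its memory), x_t: the stochastic driving signal.
% Instantaneous memory:   I_mem(t)  = I[s_t ; x_t]
% Predictive power:       I_pred(t) = I[s_t ; x_{t+1}]
% The non-predictive information I_mem - I_pred lower-bounds dissipation:
\beta \,\langle W_{\mathrm{diss}}[x_t \to x_{t+1}] \rangle
  \;\ge\; I_{\mathrm{mem}}(t) - I_{\mathrm{pred}}(t)
```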

19 Oct 2011

  • Speaker: Graham Cummins
  • Affiliation: WSU
  • Host: Jeff Teeters
  • Status: Confirmed
  • Title:
  • Abstract:

26 Oct 2011

  • Speaker: Shinji Nishimoto
  • Affiliation: Gallant lab, UC Berkeley
  • Host: Bruno
  • Status: Confirmed
  • Title:
  • Abstract:

14 Dec 2011

  • Speaker: Austin Roorda
  • Affiliation: UC Berkeley
  • Host: Bruno
  • Status: Confirmed
  • Title: How the unstable eye sees a stable and moving world
  • Abstract:

11 Jan 2012

  • Speaker: Ken Nakayama
  • Affiliation: Harvard University
  • Host: Bruno
  • Status: confirmed
  • Title: Subjective Contours
  • Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition, where, in the form of a linear classifier, it provides a framework to understand visual object recognition (DiCarlo and Cox, 2007).

Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously the 2.5-D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework for understanding amodal completion, subjective contours, and other surface phenomena. Correspondingly, these areas have become a backwater: ignored, leapt over. Subjective contours, however, remain as vivid as ever, even more so. Every day, our visual system makes countless inferences as to the layout of the world's surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.

Tuesday, 24 Jan 2012

  • Speaker: Aniruddha Das
  • Affiliation: Columbia University
  • Host: Fritz
  • Status: confirmed
  • Title:
  • Abstract:

22 Feb 2012

  • Speaker: Elad Schneidman
  • Affiliation: Department of Neurobiology, Weizmann Institute of Science
  • Host: Bruno
  • Status: confirmed
  • Title: Sparse high order interaction networks underlie learnable neural population codes
  • Abstract:

2010/11 academic year

02 Sep 2010

  • Speaker: Johannes Burge
  • Affiliation: University of Texas at Austin
  • Host: Jimmy
  • Status: Confirmed
  • Title:
  • Abstract:

8 Sep 2010

  • Speaker: Tobi Szuts
  • Affiliation: Meister Lab/ Harvard U.
  • Host: Mike DeWeese
  • Status: Confirmed
  • Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.
  • Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer up to >60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.

29 Sep 2010

  • Speaker: Vikash Gilja
  • Affiliation: Stanford University
  • Host: Charles
  • Status: Confirmed
  • Title: Towards Clinically Viable Neural Prosthetic Systems.
  • Abstract:

20 Oct 2010

  • Speaker: Alexandre Francois
  • Affiliation: USC
  • Host:
  • Status: Confirmed
  • Title:
  • Abstract:

3 Nov 2010

  • Speaker: Eric Jonas and Vikash Mansinghka
  • Affiliation: Navia Systems
  • Host: Jascha
  • Status: Confirmed
  • Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications
  • Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.

We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.

In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.

This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.

BRIEF BIOGRAPHY

Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an S.B. in Mathematics, an S.B. in Computer Science, an MEng in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT Lincoln Laboratory, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.

Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned S.B. degrees in electrical engineering and computer science and in neurobiology, and an MEng in EECS, with a PhD in neurobiology expected shortly. He’s passionate about biological applications of probabilistic reasoning and hopes to use Navia’s capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.

8 Nov 2010

  • Speaker: Patrick Ruther
  • Affiliation: Imtek, University of Freiburg
  • Host: Tim
  • Status: Confirmed
  • Title: TBD
  • Abstract: TBD

10 Nov 2010

  • Speaker: Aurel Lazar
  • Affiliation: Department of Electrical Engineering, Columbia University
  • Host: Bruno
  • Status: Confirmed
  • Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons
  • Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli, neural spike encoding is akin to taking noisy measurements on the stimulus. Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases. Third, we demonstrate a number of simple operations on the original visual stimulus, including translations, rotations and zooming. All these operations are natively executed in the spike domain. The processed spike trains are decoded for the faithful recovery of the stimulus and its transformations. Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons. References: Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010, Special Issue on Mathematical Models of Visual Coding, http://dx.doi.org/10.1016/j.visres.2010.03.015; Aurel A. Lazar, Population Encoding with Hodgkin-Huxley Neurons, IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February 2010, Special Issue on Molecular Biology and Neuroscience, http://dx.doi.org/10.1109/TIT.2009.2037040

11 Nov 2010 (UCB holiday)

  • Speaker: Martha Nari Havenith
  • Affiliation: UCL
  • Host: Fritz
  • Status: Confirmed
  • Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?
  • Abstract:

19 Nov 2010 (note: on Friday because of SFN)

  • Speaker: Dan Butts
  • Affiliation: UMD
  • Host: Tim
  • Status: Confirmed
  • Title: Common roles of inhibition in visual and auditory processing.
  • Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus) identified inhibition has nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate general methods for characterizing the nonlinear computations that comprise sensory processing.

24 Nov 2010

  • Speaker: Eizaburo Doi
  • Affiliation: NYU
  • Host: Jimmy
  • Status: Confirmed
  • Title:
  • Abstract:


29 Nov 2010 - informal talk

  • Speaker: Eero Lehtonen
  • Affiliation: UTU Finland
  • Host: Bruno
  • Status: Confirmed
  • Title: Memristors
  • Abstract:

1 Dec 2010

  • Speaker: Gadi Geiger
  • Affiliation: MIT
  • Host: Fritz
  • Status: Confirmed
  • Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics
  • Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers, which suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. To strengthen the argument, and more importantly to help dyslexics, I will describe a regimen of practice that improves reading in dyslexics while narrowing perception.


13 Dec 2010

  • Speaker: Jörg Lücke
  • Affiliation: FIAS
  • Host: Bruno
  • Status: Confirmed
  • Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data
  • Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.
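The linear vs. non-linear superposition distinction in the abstract can be illustrated with a toy example (made-up basis functions, not code from the talk): with binary latents, a linear model sums the active components' basis functions pixel-wise, while an occlusion-like non-linear model can instead take a pointwise maximum.

```python
import numpy as np

# Two hypothetical "basis function" patches (H = 2 components, 4 pixels each).
W = np.array([[1.0, 1.0, 0.0, 0.0],   # component 0
              [0.0, 2.0, 2.0, 0.0]])  # component 1
s = np.array([1, 1])                   # both binary latent variables active

# Linear superposition: components add where they overlap.
x_linear = s @ W                       # -> [1, 3, 2, 0]

# Non-linear (max) superposition: the stronger component wins at each pixel,
# a crude stand-in for occlusion-like image formation.
x_max = np.max(W[s.astype(bool)], axis=0)   # -> [1, 2, 2, 0]

print(x_linear, x_max)
```

The two models differ exactly where components overlap (the second pixel), which is what makes learning under non-linear superposition a distinct inference problem.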

15 Dec 2010

  • Speaker: Claudia Clopath
  • Affiliation: Université Paris Descartes
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:


18 Jan 2011

  • Speaker: Siwei Lyu
  • Affiliation: Computer Science Department, University at Albany, SUNY
  • Host: Bruno
  • Status: confirmed
  • Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation
  • Abstract:

19 Jan 2011

  • Speaker: David Field (informal talk)
  • Affiliation:
  • Host: Bruno
  • Status: Tentative
  • Title:
  • Abstract:

25 Jan 2011

  • Speaker: Ruth Rosenholtz
  • Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT
  • Host: Bruno
  • Status: Confirmed
  • Title: What your visual system sees where you are not looking
  • Abstract:

26 Jan 2011

  • Speaker: Ernst Niebur
  • Affiliation: Johns Hopkins U
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:

16 March 2011

  • Speaker: Vladimir Itskov
  • Affiliation: University of Nebraska-Lincoln
  • Host: Chris
  • Status: Confirmed
  • Title:
  • Abstract:

23 March 2011

  • Speaker: Bruce Cumming
  • Affiliation: National Institutes of Health
  • Host: Ivana
  • Status: Confirmed
  • Title: TBD
  • Abstract:

27 April 2011

  • Speaker: Lubomir Bourdev
  • Affiliation: Computer Science, UC Berkeley
  • Host: Bruno
  • Status: Confirmed
  • Title: "Poselets and Their Applications in High-Level Computer Vision Problems"
  • Abstract:

12 May 2011 (note: Thursday)

  • Speaker: Jack Culpepper
  • Affiliation: Redwood Center/EECS
  • Host: Bruno
  • Status: Confirmed
  • Title: TBA
  • Abstract:

26 May 2011

  • Speaker: Ian Stevenson
  • Affiliation: Northwestern University
  • Host: Bruno
  • Status: Confirmed
  • Title: Explaining tuning curves by estimating interactions between neurons
  • Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.
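The comparison in the abstract can be mimicked on entirely synthetic data (made-up population model and parameters, not the datasets or methods from the talk): when a shared network fluctuation dominates private noise, predicting one neuron's activity from its simultaneously recorded neighbors explains more variance than its stimulus tuning curve does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: 40 neurons share a common latent drive plus private noise.
T = 500
stimulus = rng.uniform(0, 2 * np.pi, T)            # e.g. movement direction
shared = rng.normal(0, 1.0, T)                     # common network fluctuation
rates = (np.cos(stimulus)[:, None] * 0.5           # weak stimulus tuning
         + shared[:, None]                         # strong shared "interaction"
         + rng.normal(0, 0.5, (T, 40)))            # private noise

y = rates[:, 0]                                    # the "recorded" neuron

def r2(pred):
    return 1 - np.var(y - pred) / np.var(y)

# (a) Tuning-curve model: predict from the stimulus alone (cos/sin regressors).
Xs = np.column_stack([np.cos(stimulus), np.sin(stimulus), np.ones(T)])
pred_tuning = Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]

# (b) Coupling model: predict from the other neurons' activity.
Xc = np.column_stack([rates[:, 1:], np.ones(T)])
pred_coupled = Xc @ np.linalg.lstsq(Xc, y, rcond=None)[0]

print(r2(pred_tuning), r2(pred_coupled))
```

With these (assumed) parameters the coupling model wins because the other neurons carry the shared drive; the talk's point is that something similar holds in real multi-electrode data across brain areas.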

1 June 2011

  • Speaker: Michael Oliver
  • Affiliation: Gallant lab
  • Host: Bruno
  • Status: Tentative
  • Title:
  • Abstract:

8 June 2011

  • Speaker: Alyson Fletcher
  • Affiliation: UC Berkeley
  • Host: Bruno
  • Status: tentative
  • Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity
  • Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and to receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models based Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear nonlinear Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.

2009/10 academic year

2 September 2009

  • Speaker: Keith Godfrey
  • Affiliation: University of Cambridge
  • Host: Tim
  • Status: Confirmed
  • Title: TBA
  • Abstract:

7 October 2009

  • Speaker: Anita Schmid
  • Affiliation: Cornell University
  • Host: Kilian
  • Status: Confirmed
  • Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time
  • Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.

28 October 2009

  • Speaker: Andrea Benucci
  • Affiliation: Institute of Ophthalmology, University College London
  • Host: Bruno
  • Status: Confirmed
  • Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex
  • Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.

12 November 2009 (Thursday)

  • Speaker: Song-Chun Zhu
  • Affiliation: UCLA
  • Host: Jimmy
  • Status: Confirmed
  • Title:
  • Abstract:

18 November 2009

  • Speaker: Dan Graham
  • Affiliation: Dept. of Mathematics, Dartmouth College
  • Host: Bruno
  • Status: Confirmed
  • Title: The Packet-Switching Brain: A Hypothesis
  • Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large scale shift from the now dominant “computer metaphor,” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.

16 December 2009

  • Speaker: Pietro Berkes
  • Affiliation: Volen Center for Complex Systems, Brandeis University
  • Host: Bruno
  • Status: Confirmed
  • Title: Generative models of vision: from sparse coding toward structured models
  • Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells' receptive fields can be accounted for based solely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to a more comprehensive account of processing in the visual cortex.

6 January 2010

  • Speaker: Susanne Still
  • Affiliation: U of Hawaii
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:

20 January 2010

  • Speaker: Tom Dean
  • Affiliation: Google
  • Host: Bruno
  • Status: Confirmed
  • Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors
  • Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don't want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and gives some more detailed experimental results on one particular problem involving video-content analysis.

27 January 2010

  • Speaker: David Philipona
  • Affiliation: Paris
  • Host: Bruno
  • Status: Confirmed
  • Title:
  • Abstract:

24 February 2010

  • Speaker: Gordon Pipa
  • Affiliation: U Osnabrueck/MPI Frankfurt
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:

3 March 2010

  • Speaker: Gaute Einevoll
  • Affiliation: UMB, Norway
  • Host: Amir
  • Status: Confirmed
  • Title: TBA
  • Abstract: TBA
4 March 2010

  • Speaker: Harvey Swadlow
  • Affiliation:
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:

8 April 2010

  • Speaker: Alan Yuille
  • Affiliation: UCLA
  • Host: Amir
  • Status: Confirmed (for 1pm)
  • Title:
  • Abstract:

28 April 2010

  • Speaker: Dharmendra Modha
  • Affiliation: IBM
  • Host: Fritz
  • Status: Cancelled
  • Title:
  • Abstract:

5 May 2010

  • Speaker: David Zipser
  • Affiliation: UCB
  • Host: Daniel Little
  • Status: Tentative
  • Title: Brytes 2:
  • Abstract:

Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.

In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I will describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and could possibly be applicable to robotics and BMI.

12 May 2010

  • Speaker: Frank Werblin (Redwood group meeting - internal only)
  • Affiliation: Berkeley
  • Host: Bruno
  • Status: Tentative
  • Title:
  • Abstract:

19 May 2010

  • Speaker: Anna Judith
  • Affiliation: UCB
  • Host: Daniel Little (Redwood Lab Meeting - internal only)
  • Status: confirmed
  • Title:
  • Abstract: