Seminars

From RedwoodCenter

Instructions

  1. Check the internal calendar (here) for a free seminar slot. If a seminar is not already booked at the regular time of noon on Wednesday, you can reserve it.
  2. Make a note on this page in the Tentative Speakers section that you are going to invite a speaker. Please include your name and email as host in case somebody wants to contact you.
  3. Invite a speaker.
  4. As soon as the speaker confirms, move the information to the Confirmed Speakers section.
  5. Notify Jimmy [1] that we have a confirmed speaker so that he can update the public web page. Please include a title and abstract.
  6. Also notify Josephine [2] about the seminar date so she can send out an announcement. Josephine also handles accommodations if the speaker needs them.
  7. After the seminar, have the speaker submit travel expenses to Jadine Palapaz [3] at RES for reimbursement. A travel reimbursement form is available online; give it to the speaker so that, if they have all their receipts on hand, they can submit everything before they leave. Otherwise they can mail the form in afterwards.

Tentative Speakers

flexible/almost local

  • Speaker: Corinna Darian-Smith
  • Affiliation: Stanford
  • Host: Fritz
  • Title: TBA
  • Abstract:

April, May or June

  • Speaker: Masao Tachibana
  • Affiliation: Tokyo University
  • Host: Fritz
  • Title: TBA
  • Abstract:

17/24 Jun 2009

  • Speaker: Winrich Freiwald
  • Affiliation: University of Bremen
  • Host: Tim
  • Title: TBA
  • Abstract:

Confirmed Speakers

05 Mar 2009

  • Speaker: Urs Koster
  • Affiliation: Department of Computer Science, University of Helsinki
  • Host: Jimmy
  • Title: TBA
  • Abstract:

11 March 2009

  • Speaker: Nabil Bouaouli
  • Affiliation: Ecole Normale Superieure, Paris
  • Host: Fritz
  • Title: TBA
  • Abstract:

25 Mar 2009

  • Speaker: Susana Martinez-Conde
  • Affiliation: Barrow Neurological Institute, Phoenix, Arizona
  • Host: Tim
  • Title: TBA
  • Abstract:

1 April 2009

  • Speaker: Werner Callebaut
  • Affiliation: Konrad Lorenz Institute for Evolution and Cognition Research, Austria
  • Host: Tony
  • Title: (tentative) Reductionism in biology
  • Abstract:

8 Apr 2009

  • Speaker: Laura Walker Renninger
  • Affiliation: Smith-Kettlewell Eye Research Institute
  • Host: Bruno
  • Title: Applying information models to explore eye movement behavior in patients with central field loss
  • Abstract:

22 Apr 2009

  • Speaker: Thomas Serre
  • Affiliation: MIT
  • Host: Charles
  • Title: TBA
  • Abstract:

29 April 2009

  • Speaker: Garrett Stanley
  • Affiliation: Georgia Tech
  • Host: Fritz
  • Title: TBA
  • Abstract:

6 May 2009

  • Speaker: Surya Ganguli
  • Affiliation: UCSF
  • Host: Fritz
  • Title: Origins of short-term memory traces in neuronal networks
  • Abstract: Critical cognitive phenomena such as planning and decision making rely on the ability of the brain to hold information in working memory. Many proposals exist for the maintenance of such memories in persistent activity that arises from stable fixed point attractors in the dynamics of recurrent neural networks. However, such fixed points are incapable of storing temporal sequences of recent events. An alternate, and relatively less explored, paradigm is the storage of arbitrary temporal input sequences in the transient responses of a recurrent neural network. Such a paradigm raises a host of important questions. Are there any fundamental limits on the duration of such transient memory traces? How do these limits depend on the size of the network? What patterns of neural connectivity yield good performance on generic working memory tasks? To what extent do these traces degrade in the presence of noise?
We combine Fisher information theory with dynamical systems theory to give precise answers to these questions for the class of all linear, and some nonlinear, neuronal networks. We uncover an important role for a special class of networks, known as nonnormal networks. Such networks are characterized by a (possibly hidden) feedforward structure, which is crucial for the maintenance of robust memory traces.

27 May 2009

  • Speaker: Jonathan Victor
  • Affiliation: Cornell University
  • Host: Fritz
  • Title: Understanding the Computations in Primary Visual Cortex: Does Tweaking the Standard Model Suffice?
  • Abstract:

Previous Seminars

2008/9 academic year

17 Dec 2008

  • Speaker: Francois Meyer
  • Affiliation: Univ. Colorado, Boulder
  • Host: Bruno
  • Title:
  • Abstract:

4 Nov 2008 (note: Tuesday)

  • Speaker: Giorgio Ascoli
  • Affiliation: Molecular Neuroscience Department and Director, Center for Neural Informatics, Structure, and Plasticity, George Mason University
  • Host: Bruno/Fritz
  • Title: From dendrites to connectomics: computational neuroanatomy, neuroinformatics, and the brain
  • Abstract:

30 Oct 2008 (note: Thursday)

  • Speaker: Bard Ermentrout
  • Affiliation: Dept. of Mathematics, University of Pittsburgh
  • Host: Bruno
  • Title: What makes a neuron spike: Optimality, noise, and phase resetting
  • Abstract: I will describe the behavior of nearly regularly firing neurons in the presence of noisy stimuli. I first describe the phase resetting curve (PRC) and how it responds to noisy inputs. I show that there is an optimal shape for the PRC and discuss the effects of unshared noise. Next I turn to an important computational concept: the spike-triggered average (STA). The STA is the optimal linear filter for reconstructing firing rates from stimuli. I show that the STA and the PRC are closely related. I then show that the reliability and the STA are related, and use this to show that neurons are tuned to noise which has the spectral characteristics of excitatory synapses. This work is joint with Sashi Marella, Aushra Abouzeid, Nathan Urban and Roberto Fernandez-Galan.

22 Oct 2008

  • Speaker: Rich Zemel
  • Affiliation: Dept. of Computer Science, Univ. of Toronto
  • Host: Bruno
  • Title: Neural Representations of Dynamic Stimuli
  • Abstract:

17 Sept 2008

  • Speaker: Dan Butts
  • Affiliation: Institute of Computational Biomedicine, Weill Medical College of Cornell University
  • Host: Bruno
  • Title: Time and visual computation: how precision is generated in the visual pathway
  • Abstract:

24 Sept 2008

  • Speaker: Lena H. Ting
  • Affiliation: Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, and Fall 2008 Visiting Miller Professor
  • Host: Bruno
  • Title: Dimensional reduction in motor patterns for balance control
  • Abstract: How do humans and animals move so elegantly through unpredictable and dynamic environments? And why does this question continue to pose such a challenge? We have a wealth of data on the action of neurons, muscles, and limbs during a wide variety of motor behaviors, yet these data are difficult to interpret, as there is no one-to-one correspondence between a desired movement goal, limb motions, or muscle activity. Using combined experimental and computational approaches, we are teasing apart the neural and biomechanical influences on muscle coordination during standing balance control in cats and humans. Our work demonstrates that variability in motor patterns both within and across subjects during balance control in humans and animals can be characterized by a low-dimensional set of parameters related to abstract, task-level variables. Temporal patterns of muscle activation across the body can be characterized by a 4-parameter, delayed-feedback model on center-of-mass kinematic variables. Changes in muscle activity that occur following large-fiber sensory loss in cats, as well as during motor adaptation in humans, appear to be constrained within the low-dimensional parameter space defined by the feedback model. Moreover, well-adapted responses to perturbations are similar to those predicted by an optimal tradeoff between mechanical stability and energetic expenditure. Spatial patterns of muscle activation can also be characterized by a small set of muscle synergies (identified using non-negative matrix factorization) that are like motor building blocks, defining characteristic patterns of activation across multiple muscles. We hypothesize that each muscle synergy performs a task-level function, thereby providing a mechanism by which task-level motor intentions are translated into detailed, low-level muscle activation patterns.
We demonstrate that a small set of muscle synergies can account for trial-by-trial variability in motor patterns across a wide range of balance conditions. Further, muscle activity and forces during balance control in novel postural configurations are best predicted by minimizing the activity of a few muscle synergies rather than the activity of individual muscles. Muscle synergies may represent a sparse motor code, organizing muscles to solve an “inverse binding problem” for motor outputs. We propose that such an organization facilitates fast motor adaptation while concurrently imposing constraints on the structure and energetic efficiency of motor patterns used during motor learning.
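The dimensionality-reduction step described above, extracting muscle synergies with non-negative matrix factorization, can be sketched in a few lines. This is a generic toy illustration with invented data and textbook multiplicative updates (Lee-Seung style), not the study's actual analysis pipeline:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    # Multiplicative-update NMF (Lee-Seung): V ~= W @ H with W, H >= 0.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Invented "EMG" data: 6 muscles x 40 trials generated from 2 synergies.
rng = np.random.default_rng(1)
synergies = rng.random((6, 2))      # muscles x synergies
activations = rng.random((2, 40))   # synergy activation per trial
V = synergies @ activations
W, H = nmf(V, k=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the toy data are exactly rank 2 and non-negative, the two recovered columns of W play the role of the muscle synergies and H their trial-by-trial activations.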

1 Oct 2008

  • Speaker: Dileep George
  • Affiliation: Numenta
  • Host: Bruno
  • Title: Towards a cortical microcircuit model that integrates invariant recognition, temporal inference and attention
  • Abstract:

15 Oct 2008

  • Speaker: Jim Crutchfield
  • Affiliation: UC Davis
  • Host: Bruno
  • Title:
  • Abstract: I will show how theory building can naturally distinguish between regularity and randomness. Starting from basic modeling principles, using rate distortion theory and computational mechanics I'll argue for a general information-theoretic objective function that embodies a trade-off between a model's complexity and its predictive power. The family of solutions derived from this principle corresponds to a hierarchy of models. At each level of complexity, they achieve maximal predictive power, identifying a process's exact causal organization in the limit of optimal prediction. Examples show how theory building can profit from analyzing a process's causal compressibility, which is reflected in the optimal models' rate-distortion curve.

2008 summer

23 July 2008

  • Speaker: Bill Bialek
  • Affiliation: Princeton U.
  • Host: Mike
  • Title: Networks, codes, and information flow
  • Abstract: In this informal talk I'll try to give a feeling for three problems my colleagues and I are thinking about. A bit of a hodge-podge, perhaps, but I hope that the combination of topics provokes discussion:

(1) How do we go from what we can measure about neurons to a global picture of the network dynamics? We've been exploring maximum entropy methods that allow us to construct a statistical mechanics of the network from measurements on correlations between pairs of neurons, as well as more direct paths to construct a thermodynamics of the network. The surprise emerging from this analysis is that a real neural network (the retina responding to naturalistic movies) seems to be poised at a critical point.

(2) How can the brain calibrate the neural code without access to independent knowledge of the stimulus? Rather than asking how patterns of neural activity are related to sensory stimuli in the recent past, we have been exploring how this activity is related to activity in the immediate future. This problem of extracting predictive information is quite general (and quite rich), and seems to identify patterns which are especially informative about the sensory input even though the analysis makes no reference to these inputs; again our example is the retina, where we can also try to understand the nature of the stimulus features that are detected by the prediction algorithms.

(3) Can we understand the architecture (or even the detailed dynamics) of networks as solutions to some optimization problem for the flow or representation of information? In the context of neurons, this is an old idea; it has an interesting mapping to genetic networks, where the physical constraints on information flow are clearer. We have some surprising initial successes in applying such optimization principles to the first steps of genetic control in the development of the fruit fly embryo, and I'll try to highlight analogies between current questions in neural and genetic circuits.


13 Aug 2008

  • Speaker: Joshua Vogelstein
  • Affiliation: Johns Hopkins
  • Host: Mike
  • Title: From calcium imaging to spikes, using state-space methods.
  • Abstract: Great technological and experimental advances have recently facilitated the imaging of neural activity both in vivo and in vitro using calcium-sensitive fluorescence observations. We present here complementary analytical tools that maximize the utility of these data sets. First, we describe a fast method that can simultaneously infer spike trains from a population of neurons in real time on a typical single-processor computer. More precisely, we apply an approach called basis pursuit (or non-negative convex optimization with a log-barrier penalty term), and use an additional trick (tridiagonal matrix inversion) which capitalizes on the exponential nature of the calcium filter, to infer spike times both more accurately and faster than the optimal linear (i.e., Wiener) filter. Second, we describe a sequential Monte Carlo (SMC) expectation-maximization algorithm that generalizes many of the assumptions made to derive the fast method. The SMC approach (often called particle filtering) approximates the distribution of the hidden variables (calcium and spikes) by recursively generating weighted samples, which it uses to form a histogram that approximates the actual distribution. By integrating over this histogram, instead of the true distribution, we construct a very accurate approximation. Using such an approach enables us to (i) incorporate error bars on the estimate, (ii) allow for saturation of the fluorescence signal, and (iii) consider spike history effects such as adaptation, facilitation, and refractoriness. While slower, this strategy still works in real time for each observable neuron. We show how both methods can condition the inferred spike trains on external stimuli, and achieve superresolution, i.e., infer not just whether a spike occurred within a stimulus frame, but when within that frame.
Furthermore, both methods have a relatively small number of parameters, and each of the parameters may be estimated using standard gradient ascent techniques, without needing additional calibration experiments or ratiometric dyes. We demonstrate the advantages of each of these approaches over the Wiener filter using data sets recorded using both epifluorescence and 2-photon imaging.
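The core of the fast method, recovering a non-negative spike train from a fluorescence trace under an exponential calcium-decay model, can be illustrated with a much simpler solver. The sketch below uses plain projected gradient descent on invented data; it is not the authors' interior-point implementation, just the same non-negative deconvolution idea:

```python
import numpy as np

# Generative model: calcium c[t] = g*c[t-1] + s[t], fluorescence F = c + noise.
rng = np.random.default_rng(0)
T, g = 200, 0.9
s_true = (rng.random(T) < 0.05).astype(float)   # sparse spike train
c = np.zeros(T)
for t in range(1, T):
    c[t] = g * c[t - 1] + s_true[t]
F = c + 0.02 * rng.standard_normal(T)

# K maps a spike train to its calcium trace: (K @ s)[t] = sum_u g^(t-u) s[u].
K = np.array([[g ** (t - u) if t >= u else 0.0 for u in range(T)]
              for t in range(T)])

# Minimize ||F - K s||^2 subject to s >= 0 by projected gradient descent.
s = np.zeros(T)
lr = 1.0 / np.linalg.norm(K, 2) ** 2            # step below the Lipschitz bound
for _ in range(2000):
    s = np.maximum(s - lr * (K.T @ (K @ s - F)), 0.0)
```

The non-negativity projection is what makes the recovered s spike-like; the real methods add a sparsity-promoting barrier term and exploit the banded structure for speed.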


2007/2008 academic year

28 May 2008

  • Speaker: Xin Wang
  • Affiliation: USC
  • Host: Fritz
  • Title: The inner life of bursts in LGN
  • Abstract:

27 May 2008

  • Speaker: Xin Wang
  • Affiliation: USC
  • Host: Fritz
  • Title: Recovering retinal and extraretinal receptive fields of LGN cells
  • Abstract:

21 May 2008

  • Speaker: Kwabena Boahen
  • Affiliation: Stanford University
  • Host: Bruno
  • Title: Neurogrid: Emulating a million neurons in the cortex
  • Abstract:

14 May 2008

  • Speaker: Nicholas Priebe
  • Affiliation: University of Texas at Austin
  • Host: Mike
  • Title: Contrast-invariant orientation tuning in simple cells of visual cortex
  • Abstract: Two views of cortical computation have been proposed to account for the selectivity of sensory neurons. In one view, excitatory afferent input provides a rough sketch of the world, which is then refined and sharpened by lateral or feedback inhibition. In the alternative view, excitatory afferent input is sufficient, on its own, to account for sensory selectivity. The debate between these perspectives has in large part been driven by the very real paradox presented by two divergent lines of evidence. On the one hand, many receptive field properties found in visual cortex, such as cross-orientation suppression and contrast-invariant orientation tuning, appear to require lateral inhibition. On the other hand, intracellular recordings have failed to find consistent evidence for lateral inhibition. I will discuss which of these two viewpoints is most appropriate to describe one feature of cortical simple cells, namely, contrast-invariant orientation tuning. A purely linear feed-forward model, incorporating only excitatory input from the thalamus, predicts that the width of orientation tuning in simple cells should broaden with contrast, breaking contrast invariance. Lateral inhibition, in the form of cross-orientation inhibition, is one mechanism that could restore contrast invariance by antagonizing feed-forward excitation at non-preferred orientations. I will demonstrate instead that the predicted broadening is suppressed by three independent mechanisms, none of which appears to require inhibition. First, many simple cells receive only some of their excitatory input from geniculate relay cells, the remainder originating from other cortical neurons with similar preferred orientations. Second, contrast-dependent changes in the trial-to-trial variability of responses lead to contrast-dependent changes in the transformation between membrane potential and spike rate. 
Third, membrane potential responses of simple cells saturate at lower contrasts than are predicted by a feed-forward model. Thus, the function of lateral inhibition in refining orientation selectivity is accomplished instead by a number of simple, well-defined nonlinearities of visual neurons.

5-7 May 2008

  • CIFAR workshop (Bruno)

30 Apr 2008

  • Speaker: Thanos Siapas
  • Affiliation: Caltech
  • Host: Amir
  • Title: Hippocampal Network Dynamics and Memory Formation
  • Abstract: Many lines of evidence have shown that the hippocampus is critical for the formation of long-term memories, and that this hippocampal involvement is time-limited. The current predominant conjecture is that memories are encoded in the hippocampus during awake behavior and are gradually consolidated across neocortical circuits under the influence of hippocampal activity during sleep. Consistent with this conjecture, the activation modes of hippocampal and cortical circuits are drastically different in the awake and sleep states. In this talk I will characterize hippocampal activity patterns at the network level in different brain states, and discuss how these patterns evolve across time. I will also discuss timing relationships between hippocampal and neocortical activity, and their consequences for the process of memory formation.

23 Apr 2008

  • Speaker: Mark Goldman
  • Affiliation: UC Davis
  • Host: Bruno
  • Title: Modeling the mechanisms underlying memory-related neural activity
  • Abstract:

16 Apr 2008

  • Speaker: Ueli Rutishauser
  • Affiliation: Caltech
  • Host: Will
  • Title: State dependent computation using coupled recurrent networks
  • Abstract: Although procedural information processing composed of conditional decisions is a hallmark of intelligent behavior, its neuronal implementation remains an open question. Physiological experiments have reported behavioral-state-encoding neurons in the frontal cortices, but the organization of the neuronal circuits that could support such state-dependent processing is very poorly understood. In recent years, neuroanatomical studies have demonstrated rich interconnections between neurons in the superficial layers of the cortex, and theoretical models have explained how recurrent connections within small populations of neurons can support cooperative competitive dynamics. We show by theoretical analysis and simulations how these circuits can embed reliable, robust neuronal finite state machines, which could support generic conditional processing in the neocortex. We demonstrate how a multi-stable neuronal network that embeds a number of states can be created very simply, by coupling two recurrent networks whose synaptic weights have been set within a range that offers soft winner-take-all (sWTA) performance. The two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them that are used to embed the required states. This coupling between the maps allows them to retain their current state after the input that elicited that state is withdrawn. A small number of 'transition neurons' implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct an arbitrary neuronal state machine composed of nearly identical recurrent maps. The significance of our finding is that it offers a method whereby the cortex could achieve a broad range of sophisticated processing by only limited specialization of the same generic neuronal circuit.
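As a toy illustration of the soft winner-take-all behavior underlying this construction, here is a minimal two-unit rate model with shared inhibition (all parameters invented; a sketch of the generic sWTA idea, not the network from the paper). The winning unit keeps its elevated activity after the input is withdrawn, which is exactly the state-retention property the coupled maps exploit:

```python
import numpy as np

def swta_step(x, inp, alpha=2.0, beta=0.8, dt=0.1):
    # Two rate units with self-excitation alpha, shared (instantaneous)
    # inhibition beta * sum(x), and drive saturating at 5.
    drive = np.clip(alpha * x - beta * x.sum() + inp, 0.0, 5.0)
    return x + dt * (drive - x)

x = np.zeros(2)
for _ in range(300):                  # drive unit 0 slightly harder
    x = swta_step(x, np.array([1.0, 0.6]))
winner_during = int(np.argmax(x))
for _ in range(300):                  # withdraw all input
    x = swta_step(x, np.zeros(2))
winner_after = int(np.argmax(x))      # the winner's activity persists
```

With these parameters the competition suppresses the weakly driven unit, and the self-excitation of the winner (clipped by the saturation) keeps it active as a one-bit memory once the input is gone.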

2 Apr 2008

  • Speaker: Marty Usrey
  • Affiliation: UC Davis
  • Host: Bruno
  • Title: "Functional properties of neuronal circuits for vision"
  • Abstract:

19 March 2008

  • Speaker: Dana Ballard
  • Affiliation: University of Texas, Austin
  • Host: Fritz
  • Title:
  • Abstract:

12 Mar 2008

  • Speaker: Ilana Witten
  • Affiliation: Stanford University
  • Host: Mike
  • Title: Spatial Processing in a Complex Auditory Environment
  • Abstract: A single, stationary object in the auditory environment activates space-selective neurons in the brain, which in turn direct orienting movements towards the object. However, the natural auditory environment is typically complex, containing auditory objects that move through space, as well as multiple simultaneous objects. Moreover, auditory objects need to be integrated with the corresponding visual objects. This complexity provides challenges that the brain must overcome in order to localize sounds appropriately. For instance, when a sound moves through space, neural activity must predict the sound's future location in order to compensate for sensorimotor delays involved in sound-orienting behavior. When there are multiple sounds in the environment, the animal must decide whether or not to group them perceptually, and if they are grouped, the animal must decide where to localize them. Finally, when the animal is faced with conflicting localization information from the auditory and visual systems, it must employ learning rules that can appropriately reinstate crossmodal alignment. I will describe how the auditory system mediates localization behavior and represents the auditory environment in the face of each of these complexities.

6 Mar 2008

  • Speaker: Peter Robinson
  • Affiliation: University of Sydney
  • Host: Tim
  • Title:
  • Abstract:

26/27 Feb 2008

  • Speaker: Jean-Philippe Lachaux
  • Affiliation: INSERM, Lyon
  • Host: Tim
  • Title:
  • Abstract:

20 Feb 2008

  • Speaker: Costa Colbert
  • Affiliation: Evolved Machines, Inc. and Dept. of Biology and Biochemistry, University of Houston
  • Host: Bruno
  • Title: Electrophysiological, Optical, and Computational Studies of Dendritic Excitability
  • Abstract: Spike-timing dependent plasticity has gained much recent attention as a basis for encoding information at synapses. I will present a number of features of back-propagating dendritic spikes in pyramidal neurons that increase the complexity of dendritic information storage. Both electrophysiological recordings of dendritic ion channels and multi-site multiphoton imaging of dendrites will be discussed in relation to a model of compartmentalization of the dendritic arbor.

13 Feb 2008

  • Speaker: Marcelo Magnasco
  • Affiliation: Rockefeller University
  • Host: Kilian
  • Title: Sparse time-frequency representations and the neural coding of sound
  • Abstract:

6 Feb 2008

  • Speaker: Pam Reinagel
  • Affiliation: UCSD
  • Host: Fritz
  • Title: How context influences representation of visual information in the LGN
  • Abstract:

30 Jan 2008

  • Speaker: Kai Miller
  • Affiliation: University of Washington
  • Host: Kilian
  • Title: Changes in local cortical activity are revealed by a power law in the cortical potential spectrum
  • Abstract: I will begin by demonstrating how careful experimental technique reveals a power law of the form P ~ A f^(−χ) in the electrocortical potential spectrum, with exponent χ = 4.0 ± 0.1 above ~70 Hz and evidence for a power law with χ_low = 2.0 ± 0.4 below this. During a simple finger flexion task, the potential spectrum is effectively decoupled into this power law and the α and β rhythms. I will demonstrate that increases in the coefficient A of this power law (not the exponent) correspond to local cortical function and reveal discrete finger somatotopy. Finally, I will discuss some possible interpretations of the source and nature of these changes.
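The power-law fit itself is straightforward: taking logs turns P ~ A f^(−χ) into a straight line, log P = log A − χ log f, so A and χ can be recovered by linear regression in log-log space. A minimal sketch on synthetic data (values invented, not from the talk):

```python
import numpy as np

# Synthetic spectrum following P = A * f^(-chi) with chi = 4, sampled
# above 70 Hz (illustrative values only).
f = np.linspace(80.0, 500.0, 200)
A_true, chi_true = 2.0, 4.0
P = A_true * f ** (-chi_true)

# log P = log A - chi * log f, so a degree-1 fit in log-log space
# recovers the exponent (negated slope) and the coefficient (intercept).
slope, intercept = np.polyfit(np.log(f), np.log(P), 1)
chi_est = -slope
A_est = np.exp(intercept)
```

On real spectra the same regression is applied to the measured power values, and the talk's point is that task-related changes show up in A rather than in χ.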

Nov. 27

  • Speaker: Geoff Hinton
  • Affiliation: Dept. of Computer Science, University of Toronto
  • Host: Bruno
  • Title: How are error derivatives represented in the brain?
  • Abstract: Neurons need to represent both the presence of a feature in the sensory input and the derivative of an error function with respect to the neural activity. I will describe a simple way in which they can represent both of these very different quantities at the same time, and show that this representational scheme would make it easy for real neurons to backpropagate error derivatives so that higher-level feature detectors can fine-tune the receptive fields of lower-level ones.

Nov. 13

  • Speaker: Sonja Gruen
  • Affiliation: Riken
  • Host: Fritz
  • Title: Spike synchrony and spike-LFP relation in freely viewing monkeys
  • Abstract:

Oct. 31

  • Speaker: Jason Kerr
  • Affiliation: Max Planck Institute for Biological Cybernetics
  • Host: Tim
  • Title: TBA

Oct. 29

  • Speaker: Laurenz Wiskott
  • Affiliation: Bernstein Center for Computational Neuroscience and Institute for Theoretical Biology, Humboldt-University Berlin
  • Host: Bruno
  • Title: Slow feature analysis for modeling place cells in the hippocampus and its relationship to spike timing dependent plasticity
  • Abstract: Slow Feature Analysis (SFA) is an algorithm for extracting slowly varying features from a quickly varying signal. We have applied SFA to the learning of complex cell receptive fields, visual invariances for whole objects, and place cells in the hippocampus. Here I will report on our results on modeling place cells in the hippocampus. If slowness is indeed an important learning principle in visual cortex and beyond, the question arises how it could be implemented in a biologically plausible learning rule. In the second part of the talk I will show analytically that, for linear Poisson units, SFA can be implemented with STDP with the standard learning window as measured by, e.g., Bi and Poo (1998).
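The linear core of SFA is easy to state: whiten the input, then take the direction along which the temporal derivative has minimal variance. A minimal numpy sketch on an invented two-channel signal (a slow sine mixed into both channels together with a fast component) that recovers the slow source:

```python
import numpy as np

# Invented two-channel signal: both channels mix a slow sine with a
# fast, noisy component.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 2000)
slow = np.sin(t)
fast = np.sin(40.0 * t) + 0.1 * rng.standard_normal(t.size)
X = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])

# Linear SFA: center, whiten, then take the direction in which the
# temporal derivative has the smallest variance.
X = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(X.T @ X / len(X))
Z = (X @ evecs) / np.sqrt(evals)                    # whitened signal
dZ = np.diff(Z, axis=0)
w = np.linalg.eigh(dZ.T @ dZ / len(dZ))[1][:, 0]    # slowest direction
y = Z @ w                                           # extracted slow feature
corr = abs(np.corrcoef(y, slow)[0, 1])              # matches the slow source
```

The whitening step removes amplitude differences between directions, so "slowest" is judged purely by how gradually a unit-variance projection changes in time; the full algorithm adds a nonlinear expansion before this linear step.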

Oct. 23

  • Speaker: Liam Paninski
  • Affiliation: Columbia University
  • Host: Amir
  • Title: Combining biophysical and statistical methods for understanding neural codes
  • Abstract:

The neural coding problem --- deciding which stimuli will cause a given neuron to spike, and with what probability --- is a fundamental question in systems neuroscience. The high dimensionality of both stimuli and spike trains has spurred the development of a number of sophisticated statistical techniques for learning the neural code from finite experimental data. In particular, modeling approaches based on maximum likelihood have proven to be flexible and powerful.

We present three such applications here. One common thread is that the models we have chosen for these data each have concave loglikelihood surfaces, permitting tractable fitting (by maximizing the loglikelihood) even in high-dimensional parameter spaces, since no local maxima can exist for the optimizer to get 'stuck' in.

First we describe neural encoding models in which a linear stimulus filtering stage is followed by a noisy integrate-and-fire spike generation mechanism incorporating after-spike currents and spike-dependent conductance modulations. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors. We use this model to analyze extracellular data from populations of retinal ganglion cells, simultaneously recorded during stimulation with dynamic light stimuli. Here the model provides insight into the biophysical factors underlying the reliability of these neurons' spiking responses, and provides a framework for analyzing the cross-correlations observed between these cells. (Joint work with E.J. Chichilnisky, J. Pillow, J. Shlens, E. Simoncelli, and V. Uzzell, at NYU and Salk.)

Next we describe how to use this model to decode the underlying subthreshold somatic voltage dynamics, given only the superthreshold spike train. We also point out some connections to spike-triggered averaging techniques.

We close by discussing recent extensions to highly biophysically-detailed, conductance-based models, which have the potential to allow us to estimate the density of active channels in a cell's membrane and also to decode the synaptic input to the cell as a function of time. (With M. Ahrens, Q. Huys, and J. Vogelstein, at Gatsby and Johns Hopkins.)
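The concavity point above is what makes these fits tractable: for a linear-exponential-Poisson encoding model, the log-likelihood is concave in the filter, so plain gradient ascent finds the global maximum. A toy sketch on invented data (not the retinal recordings or the exact integrate-and-fire models from the talk):

```python
import numpy as np

# Toy maximum-likelihood fit of a linear-exponential-Poisson encoding
# model by gradient ascent; all sizes and values are illustrative.
rng = np.random.default_rng(0)
T, D = 5000, 5
X = rng.standard_normal((T, D))                # stimulus in each time bin
k_true = np.array([0.5, -0.3, 0.2, 0.0, 0.4])  # true stimulus filter
b_true = -1.0                                  # baseline log-rate
y = rng.poisson(np.exp(X @ k_true + b_true))   # observed spike counts

k, b = np.zeros(D), 0.0
lr = 0.2
for _ in range(2000):
    r = np.exp(X @ k + b)                      # predicted rate per bin
    # Gradient of the Poisson log-likelihood sum(y*log r - r), averaged
    # over bins; concavity guarantees no local maxima.
    k += lr * (X.T @ (y - r)) / T
    b += lr * np.sum(y - r) / T
err = np.linalg.norm(k - k_true)
```

The remaining error after convergence is sampling noise in the finite spike train, not an optimization failure, which is the practical payoff of a concave likelihood.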

Oct. 3

  • Speaker: Flip Sabes
  • Affiliation: Keck Center/UCSF
  • Host: Bruno
  • Title: TBA
  • Abstract:

2007 summer seminars

August 21, 2007

  • Speaker: Jeremy Lewi
  • Affiliation: Georgia Tech
  • Host: Amir
  • Title: Adaptively optimizing neurophysiology experiments for estimating encoding models

2006/2007 academic year

May 15, 2007

  • Speaker: Ray Guillery
  • Affiliation: University of Wisconsin-Madison/Marmara University
  • Host: Fritz
  • Title: Thalamus and Sensorimotor Aspects of Perception

May 8

  • Speaker: Lokendra Shastri
  • Affiliation: ICSI
  • Host: Bruno
  • Title: Micro-circuits of Episodic Memory: Structure Matches Function in the Hippocampal System

April 24

  • Speaker: Jeff Johnson
  • Affiliation: UC Davis
  • Host: Bruno
  • Title: What does EEG tell us about the timecourse of object recognition?

April 17, 2007

  • Speaker: Steve Waydo
  • Affiliation: Control & Dynamical Systems, California Institute of Technology
  • Host: Bruno
  • Title: Explicit Object Representation by Sparse Neural Codes

April 10

  • Speaker: Andrew Ng
  • Affiliation: Stanford University
  • Host: Bruno
  • Title: Unsupervised discovery of structure for transfer learning

April 3

  • Speaker: Robert Miller
  • Affiliation: Department of Anatomy and Structural Biology, Otago University
  • Host: Fritz
  • Title: Axonal conduction time and human cerebral laterality

March 20, 2007

  • Speaker: Jeff Hawkins
  • Affiliation: Numenta
  • Host: Bruno
  • Title: Hierarchical Temporal Memory

March 13, 2007

  • Speaker: Chris Wiggins
  • Affiliation: Columbia University, NY
  • Host: Tony
  • Title: Optimal signal processing in small stochastic biochemical networks

March 6

  • Speaker: Pietro Perona
  • Affiliation: Caltech
  • Host: Bruno
  • Title: An exploration of visual recognition

March 1

  • Speaker: Hiroki Asari
  • Affiliation: CSL
  • Host: Fritz
  • Title: Sparse Representations for the Cocktail Party Problem
  • Abstract: A striking feature of many sensory processing problems is that there appear to be many more neurons engaged in the internal representations of the signal than in its transduction. For example, humans have about 30,000 cochlear neurons, but at least a thousand times as many neurons in the auditory cortex. Such apparently redundant internal representations have sometimes been proposed as necessary to overcome neuronal noise. We instead posit that they directly subserve computations of interest. Here we provide an example of how sparse overcomplete linear representations can directly solve difficult acoustic signal processing problems, using as an example monaural source separation using solely the cues provided by the differential filtering imposed on a source by its path from its origin to the cochlea (the head-related transfer function, or HRTF). In contrast to much previous work, the HRTF is used here to separate auditory streams rather than to localize them in space. The experimentally testable predictions that arise from this model --- including a novel method for estimating a neuron's optimal stimulus using data from a multi-neuron recording experiment --- are generic, and apply to a wide range of sensory computations.

February 20, 2007

  • Speaker: Yair Weiss
  • Affiliation: Hebrew University, Jerusalem
  • Host: Tony
  • Title: What makes a good model of natural images?

February 13, 2007

  • Speaker: Tobi Delbruck
  • Affiliation: Inst of Neuroinformatics, UNI-ETH Zurich
  • Host: Bruno
  • Title: Building a high-performance event-based silicon retina leads to new ways to compute vision
  • URL: http://siliconretina.ini.uzh.ch

Jan 23, 2007

  • Speaker: Giuseppe Vitiello
  • Affiliation: Department of Physics “E.R. Caianiello”, Salerno University
  • Host: Fritz
  • Title: Relations between many-body physics and nonlinear brain dynamics

Jan 9, 2007

  • Speaker: Boris Gutkin
  • Affiliation: University of Paris
  • Host: Fritz
  • Title: TBA

Dec 5, 2006

  • Speaker: Tanya Baker
  • Affiliation: U Chicago
  • Host: Kilian
  • Title: What Forest Fires Tell Us About the Brain

December 1, 2006, 1:30 pm

  • Informal visit: Nancy Kopell
  • Affiliation: Boston University
  • Host: Fritz
  • Title: No talk: Informal visit in the afternoon

Nov 28

  • Speaker: Thomas Dean
  • Affiliation: Brown University/Google
  • Host: Bruno
  • Title: TBA

Nov 21

  • Speaker: Urs Koster
  • Affiliation: University of Helsinki
  • Host: Bruno
  • Title: Towards Multi-Layer Processing of Natural Images

Nov 14

  • Speaker: Andrew D. Straw
  • Affiliation: Bioengineering, California Institute of Technology
  • Host: Kilian
  • Title: Closed-Loop, Visually-Based Flight Regulation in a Model Fruit Fly

Nov 7

  • Speaker: Mitya Chklovskii
  • Host: Bruno
  • Title: What determines the shape of neuronal arbors?

Oct 31

  • Speaker: Matthias Kaschube
  • Host: Kilian
  • Title: A mathematical constant in the design of the visual cortex


Oct 3

  • Speaker: Jay McClelland
  • Affiliation: Mind, Brain & Computation (MBC), Psychology Department, Stanford
  • Host: Evan
  • Title: Graded Constraints in English Word Forms (video)

Sept 25

  • Speaker: Peter Latham
  • Affiliation: Gatsby Unit, UCL
  • Host: Bruno
  • Title: Requiem for the spike (video)

Sept 19

  • Speaker: Jerry Feldman
  • Affiliation: ICSI/UC Berkeley
  • Host: Bruno
  • Title: From Molecule to Metaphor: Towards a Unified Cognitive Science (video)

Sept 5

  • Speaker: Tom Griffiths
  • Affiliation: Cogsci/UC Berkeley
  • Host: Bruno
  • Title: Natural Statistics and Human Cognition (video)

Aug 1

  • Speaker: Carol Whitney
  • Affiliation: U Maryland
  • Host: Bruno
  • Title: What can Visual Word Recognition Tell us about Visual Object Recognition? (video)

July 18

  • Speaker: Evan Smith
  • Affiliation: Redwood Center/Stanford
  • Host: Bruno
  • Title: Efficient auditory coding

2005/2006 academic year

June 20

  • Speaker: Vincent Bonin
  • Affiliation: Smith Kettlewell Institute
  • Host: Thomas
  • Title:

June 15

  • Speaker: Philip Low
  • Affiliation: Salk Institute
  • Host: Tony
  • Title: A New Way To Look At Sleep

May 2

  • Speaker: Dileep George
  • Affiliation: Numenta
  • Host: Bruno
  • Title: Hierarchical, cortical memory architecture for pattern recognition

April 18

  • Speaker: Risto Miikkulainen
  • Affiliation: The University of Texas at Austin
  • Host: Bruno
  • Title: Computational maps in the visual cortex (video)

April 11

  • Speaker: Charles Anderson
  • Affiliation: Washington University School of Medicine
  • Host: Bruno
  • Title: Population Coding in V1 (video)

April 10

  • Speaker: Charles Anderson
  • Affiliation: Washington University School of Medicine
  • Host: Bruno
  • Title: A Comparison of Neurobiological and Digital Computation (video)

April 4

  • Speaker: Odelia Schwartz
  • Affiliation: The Salk Institute
  • Host: Bruno
  • Title: Natural images and cortical representation

March 21

  • Speaker: Mark Schnitzer
  • Affiliation: Stanford University
  • Host: Amir
  • Title: In vivo microendoscopy and computational modeling studies of mammalian brain circuits

March 15

  • Speaker: Mate Lengyel
  • Affiliation: Gatsby Unit/UCL London
  • Host: Fritz
  • Title: Bayesian model learning in human visual perception (video)

March 14

  • Speaker: Mate Lengyel
  • Affiliation: Gatsby Unit/UCL London
  • Host: Fritz
  • Title: Firing rates and phases in the hippocampus: what are they good for? (video)

March 7

  • Speaker: Michael Wu
  • Affiliation: Gallant lab/UC Berkeley
  • Host: Bruno
  • Title: A Unified Framework for Receptive Field Estimation

February 28

  • Speaker: Dario Ringach
  • Affiliation: UCLA
  • Host: Thomas
  • Title: Population dynamics in primary visual cortex

February 21

  • Speaker: Gerard Rinkus
  • Affiliation: Brandeis University
  • Host: Bruno
  • Title: Hierarchical Sparse Distributed Representations of Sequence Recall and Recognition (video)

February 14

  • Speaker: Jack Cowan
  • Affiliation: U Chicago
  • Host: Bruno
  • Title: Spontaneous pattern formation in large scale brain activity: what visual migraines and hallucinations tell us about the brain (video)

February 7

  • Speaker: Christian Wehrhahn
  • Affiliation: Max Planck Institute for Biological Cybernetics, Tuebingen, Germany
  • Host: Tony
  • Title: Seeing blindsight: motion at isoluminance?

January 23 (Monday)

  • Speaker: Read Montague
  • Affiliation: Baylor College of Medicine
  • Host: Bruno
  • Title: Abstract plans and reward signals in a multi-round trust game

January 17

  • Speaker: Erhardt Barth
  • Affiliation: Institute for Neuro- and Bioinformatics, Luebeck, Germany
  • Host: Bruno
  • Title: Guiding eye movements for better communication (video)

January 3

  • Speaker: Dan Butts
  • Affiliation: Harvard University
  • Host: Thomas
  • Title: "Temporal hyperacuity": visual neuron function at millisecond time resolution

December 13, 2005

  • Speaker: Paul Rhodes
  • Affiliation: Stanford University
  • Title: Simulations of a thalamocortical column with compartment model cells and dynamic synapses (video)

December 6, 2005

November 29, 2005

  • Speaker: Stanley Klein
  • Affiliation: School of Optometry, UC Berkeley
  • Title: Limits of Vision and psychophysical methods (video)

November 22, 2005

  • Speaker: Scott Makeig
  • Affiliation: Swartz Center for Computational Neuroscience, Institute for Neural Computation, UCSD
  • Title: Viewing event-related brain dynamics from the top down