Seminars
Instructions
- Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but the timing is flexible in case another day works better for the speaker. However, it is usually best to avoid booking multiple speakers in the same week - it leads to "seminar burnout" and reduced attendance. Use your own judgement here, though - if it's a good opportunity and that's the only time that works, then go ahead with it.
- Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as host in case somebody wants to contact you.
- Once the invitation is confirmed with the speaker, change the status field to 'confirmed'. Notify me [1] that we have a confirmed speaker so that I can update the public web page. Please include a title and abstract.
- Natalie (HWNI) checks our web page regularly and will send out an announcement.
- If the speaker needs accommodation, contact Natalie [2] to reserve a room at the faculty club. Tell her it's for a Redwood speaker so she knows how to bill it.
- During the visit you will need to look after the visitor: schedule meetings with other labs, make plans for lunch, dinner, etc., and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment).
- After the seminar, have the speaker submit travel expenses to Jadine Palapaz [3] at RES for reimbursement. The travel reimbursement form is available online; give it to the speaker so they can submit everything before they leave if they have all their receipts on hand. Otherwise, they can mail it in afterwards.
Tentative / Confirmed Speakers
2 Oct 2013
- Speaker:
- Affiliation:
- Host:
- Status:
- Title: TBA
- Abstract: TBA
9 Oct 2013
- Speaker: Ekaterina Brocke
- Affiliation: KTH University, Stockholm, Sweden
- Host: Tony
- Status: confirmed
- Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.
- Abstract: Multiscale modeling and simulation attract an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, making the integration non-trivial. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models, and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding numerical and technical problems that might appear during co-simulation. Finally, the first steps made towards the development of a multiscale co-simulation tool will be presented.
16 Oct 2013
- Speaker:
- Affiliation:
- Host:
- Status:
- Title: TBA
- Abstract: TBA
23 Oct 2013
- Speaker:
- Affiliation:
- Host:
- Status:
- Title: TBA
- Abstract: TBA
30 Oct 2013
- Speaker: Ilya Nemenman
- Affiliation: Emory
- Host: Mike DeWeese
- Status: confirmed
- Title: TBA
- Abstract: TBA
6 Nov 2013
- Speaker: Garrett T. Kenyon
- Affiliation: Los Alamos National Laboratory, The New Mexico Consortium
- Host: Dylan Paiton
- Status: Tentative
- Title: TBA
- Abstract: TBA
14 Nov 2013 (note: Thursday)
- Speaker: Jonathan Baker
- Affiliation: Cornell Medical School
- Host: Bruno
- Status: confirmed
- Title: TBA
- Abstract: TBA
4 Dec 2013
- Speaker: Zhenwen Dai
- Affiliation: FIAS, Goethe University Frankfurt, Germany.
- Host: Georgios
- Status: tentative
- Title: TBA
- Abstract: TBA
12 March 2014
- Speaker: Carlos Portera-Cailliau
- Affiliation: UCLA
- Host: Mike
- Status: confirmed
- Title: TBA
- Abstract: TBA
19 March 2014
- Speaker: Dean Buonomano
- Affiliation: UCLA
- Host: Mike
- Status: confirmed
- Title: TBA
- Abstract: TBA
26 March 2014
- Speaker: Robert G. Smith
- Affiliation: University of Pennsylvania
- Host: Mike S
- Status: confirmed
- Title: TBA
- Abstract: TBA
Previous Seminars
2012/13 academic year
26 Sept 2012
- Speaker: Jason Yeatman
- Affiliation: Department of Psychology, Stanford University
- Host: Bruno/Susana Chung
- Status: confirmed
- Title: The Development of White Matter and Reading Skills
- Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.
8 Oct 2012
- Speaker: Sophie Deneve
- Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM
- Host: Bruno
- Status: confirmed
- Title: Balanced spiking networks can implement dynamical systems with predictive coding
- Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.
19 Oct 2012
- Speaker: Gert Van Dijck
- Affiliation: Cambridge
- Host: Urs
- Status: confirmed
- Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach
- Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and/or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground-truth through neuronal labelling, which is much harder to achieve in awake animals, where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike-shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories. In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval-entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model.
The first model identifies with high accuracy (92.7 %) molecular layer interneurones and Purkinje cells, while the latter identifies with high accuracy (99.2 %) Golgi cells, granule cells, mossy fibers and Purkinje cells. Furthermore, it is shown that the model trained on anaesthetized rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetized mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %). Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using optogenetically identified GABAergic and dopaminergic cells, we build similar statistical models to identify these neuron types in the ventral midbrain. Hence, our approach will be of general use to a broad variety of laboratories.
Tuesday, 23 Oct 2012
- Speaker: Jaimie Sleigh
- Affiliation: University of Auckland
- Host: Fritz/Andrew Szeri
- Status: confirmed
- Title: Is General Anesthesia a failure of cortical information integration?
- Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects but significant NMDA blockade. It is less clear which of these various effects are responsible for the patient's failure to wake up when the surgeon cuts them, and how. I will present some results from experimental brain slice work, and from theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anesthesia is fragmentation of long-distance information flow in the cortex.
31 Oct 2012 (Halloween)
- Speaker: Jonathan Landy
- Affiliation: UCSB
- Host: Mike DeWeese
- Status: Confirmed
- Title: Mean-field replica theory: review of basics and a new approach
- Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.
7 Nov 2012
- Speaker: Tom Griffiths
- Affiliation: UC Berkeley
- Host: Daniel Little
- Status: Confirmed
- Title: Identifying human inductive biases
- Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.
19 Nov 2012 (Monday) (Thanksgiving week)
- Speaker: Bin Yu
- Affiliation: Dept. of Statistics and EECS, UC Berkeley
- Host: Bruno
- Status: confirmed
- Title: Representation of Natural Images in V4
- Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features. (This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)
30 Nov 2012
- Speaker: Yan Karklin
- Affiliation: NYU
- Host: Tyler
- Status: confirmed
- Title:
- Abstract:
10 Dec 2012 (note this would be the Monday after NIPS)
- Speaker: Marius Pachitariu
- Affiliation: Gatsby / UCL
- Host: Urs
- Status: confirmed
- Title: NIPS paper "Learning visual motion in recurrent neural networks"
- Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.
12 Dec 2012
- Speaker: Ian Goodfellow
- Affiliation: U Montreal
- Host: Bruno
- Status: confirmed
- Title:
- Abstract:
7 Jan 2013
- Speaker: Stuart Hameroff
- Affiliation: University of Arizona
- Host: Gautam Agarwal
- Status: confirmed
- Title: Quantum cognition and brain microtubules
- Abstract: Cognitive decision processes are generally modeled with classical Bayesian probabilities, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer-scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.
Monday 14 Jan 2013, 1:00pm
- Speaker: Dibyendu Mandal
- Affiliation: Physics Dept., University of Maryland (Jarzynski group)
- Host: Mike DeWeese
- Status: confirmed
- Title: An exactly solvable model of Maxwell’s demon
- Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.
23 Jan 2013
- Speaker: Carlos Brody
- Affiliation: Princeton
- Host: Mike DeWeese
- Status: confirmed
- Title: Neural substrates of decision-making in the rat
- Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation-of-evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial's pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence.
Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation-of-evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.
28 Jan 2013
- Speaker: Eugene M. Izhikevich
- Affiliation: Brain Corporation
- Host: Fritz
- Status: confirmed
- Title: Spikes
- Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.
29 Jan 2013
- Speaker: Goren Gordon
- Affiliation: Weizmann Institute
- Host: Fritz
- Status: confirmed
- Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics
- Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal’s pup and every human infant accomplish this task in their first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable curiosity emergence in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodent’s exploratory behavior and its implementation in a fully autonomously learning and behaving reaching robot.
29 Jan 2013
- Speaker: Jenny Read
- Affiliation: Institute of Neuroscience, Newcastle University
- Host: Sarah
- Status: confirmed
- Title: Stereoscopic vision
- Abstract: [To be written]
7 Feb 2013
- Speaker: Valero Laparra
- Affiliation: University of Valencia
- Host: Bruno
- Status: confirmed
- Title: Empirical statistical analysis of phases in Gabor filtered natural images
- Abstract:
20 Feb 2013
- Speaker: Dolores Bozovic
- Affiliation: UCLA
- Host: Mike DeWeese
- Status: confirmed
- Title: Bifurcations and phase-locking dynamics in the auditory system
- Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold tongue.
27 March 2013
- Speaker: Dale Purves
- Affiliation: Duke
- Host: Sarah
- Status: confirmed
- Title: How Visual Evolution Determines What We See
- Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.
9 April 2013
- Speaker: Mounya Elhilali
- Affiliation: Johns Hopkins
- Host: Tyler
- Status: confirmed
- Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis
- Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While the task seems effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remains a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.
17 April 2013
- Speaker: Wiktor Młynarski
- Affiliation: Max Planck Institute for Mathematics in the Sciences
- Host: Urs
- Status: confirmed
- Title: Statistical Models of Binaural Sounds
- Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.
15 May 2013
- Speaker: Byron Yu
- Affiliation: CMU
- Host: Bruno/Jose (jointly sponsored with CNEP)
- Status: confirmed
- Title: TBA
- Abstract: TBA
22 May 2013
- Speaker: Bijan Pesaran
- Affiliation: NYU
- Host: Bruno/Jose (jointly sponsored with CNEP)
- Status: confirmed
- Title: TBA
- Abstract: TBA
2011/12 academic year
15 Sep 2011 (Thursday, at noon)
- Speaker: Kathrin Berkner
- Affiliation: Ricoh Innovations Inc.
- Host: Ivana Tosic
- Status: Confirmed
- Title: TBD
- Abstract: TBD
21 Sep 2011
- Speaker: Mike Kilgard
- Affiliation: UT Dallas
- Host: Michael Silver
- Status: Confirmed
- Title:
- Abstract:
27 Sep 2011
- Speaker: Moshe Gur
- Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology
- Host: Bruno/Stan
- Status: Confirmed
- Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?
- Abstract: Any physical device we know of, including computers, must, when comparing A to B, send the information to a point C. I have done experiments in three modalities (somatosensory, auditory, and visual) where two different loci in primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information is represented at very different brain loci.
5 Oct 2011
- Speaker: Susanne Still
- Affiliation: University of Hawaii at Manoa
- Host: Jascha
- Status: confirmed
- Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium
- Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably through Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energetically efficiently. One might speculate that biological systems have evolved to reflect this kind of adaptation. One interesting insight is that a purely physical argument yields a requirement perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.
19 Oct 2011
- Speaker: Graham Cummins
- Affiliation: WSU
- Host: Jeff Teeters
- Status: Confirmed
- Title:
- Abstract:
26 Oct 2011
- Speaker: Shinji Nishimoto
- Affiliation: Gallant lab, UC Berkeley
- Host: Bruno
- Status: Confirmed
- Title:
- Abstract:
14 Dec 2011
- Speaker: Austin Roorda
- Affiliation: UC Berkeley
- Host: Bruno
- Status: Confirmed
- Title: How the unstable eye sees a stable and moving world
- Abstract:
11 Jan 2012
- Speaker: Ken Nakayama
- Affiliation: Harvard University
- Host: Bruno
- Status: confirmed
- Title: Subjective Contours
- Abstract: The concept of the receptive field in visual science has been transformative. It fueled the great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition, where, in the form of a linear classifier, it provides a framework to understand visual object recognition (DiCarlo and Cox, 2007).
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously the 2.5D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework to understand amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater, ignored, leapt over. Subjective contours, however, remain as vivid as ever, even more so. Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What's remarkable is that subjective contours visibly reveal these inferences.
Tuesday, 24 Jan 2012
- Speaker: Aniruddha Das
- Affiliation: Columbia University
- Host: Fritz
- Status: confirmed
- Title:
- Abstract:
22 Feb 2012
- Speaker: Elad Schneidman
- Affiliation: Department of Neurobiology, Weizmann Institute of Science
- Host: Bruno
- Status: confirmed
- Title: Sparse high order interaction networks underlie learnable neural population codes
- Abstract:
29 Feb 2012 (at noon as usual)
- Speaker: Heather Read
- Affiliation: U. Connecticut
- Host: Mike DeWeese
- Status: confirmed
- Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"
- Abstract: TBD
1 Mar 2012 (note: Thurs)
- Speaker: Daniel Zoran
- Affiliation: Hebrew University, Jerusalem
- Host: Bruno
- Status: confirmed
- Title: TBA
- Abstract:
7 Mar 2012
- Speaker: David Sivak
- Affiliation: UCB
- Host: Mike DeWeese
- Status: Confirmed
- Title: TBA
- Abstract:
8 Mar 2012
- Speaker: Ivan Schwab
- Affiliation: UC Davis
- Host: Bruno
- Status: Confirmed
- Title: Evolution's Witness: How Eyes Evolved
- Abstract:
14 Mar 2012
- Speaker: David Sussillo
- Affiliation:
- Host: Jascha
- Status: confirmed
- Title:
- Abstract:
18 April 2012
- Speaker: Kristofer Bouchard
- Affiliation: UCSF
- Host: Bruno
- Status: confirmed
- Title: Cortical Foundations of Human Speech Production
- Abstract:
23 May 2012 (rescheduled from April 11)
- Speaker: Logan Grosenick
- Affiliation: Stanford, Deisseroth & Suppes Labs
- Host: Jascha
- Status: confirmed
- Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics
- Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics.
[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006. [2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.
BIO: Logan received bachelor's degrees with honors in Biology and Psychology from Stanford, and a master's degree in Statistics from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.
7 June 2012 (Thursday)
- Speaker: Mitya Chklovskii
- Affiliation: Janelia
- Host: Bruno
- Status:
- Title:
- Abstract:
27 June 2012
- Speaker: Jerry Feldman
- Affiliation:
- Host: Bruno
- Status:
- Title:
- Abstract:
30 July 2012
- Speaker: Lucas Theis
- Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen
- Host: Jascha
- Status: Confirmed
- Title: Hierarchical models of natural images
- Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks as well as a means to better understand the computations performed by the visual system in the brain. Many theoretical considerations and biological observations suggest that natural image models should be hierarchically organized, yet to date the best known models are still based on what are better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations, and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model of natural images yet.
(joint work with Reshad Hosseini and Matthias Bethge)
2010/11 academic year
02 Sep 2010
- Speaker: Johannes Burge
- Affiliation: University of Texas at Austin
- Host: Jimmy
- Status: Confirmed
- Title:
- Abstract:
8 Sep 2010
- Speaker: Tobi Szuts
- Affiliation: Meister Lab/ Harvard U.
- Host: Mike DeWeese
- Status: Confirmed
- Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.
- Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer over 60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has a greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat under unconstrained conditions. Outdoor recordings show that V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.
29 Sep 2010
- Speaker: Vikash Gilja
- Affiliation: Stanford University
- Host: Charles
- Status: Confirmed
- Title: Towards Clinically Viable Neural Prosthetic Systems.
- Abstract:
20 Oct 2010
- Speaker: Alexandre Francois
- Affiliation: USC
- Host:
- Status: Confirmed
- Title:
- Abstract:
3 Nov 2010
- Speaker: Eric Jonas and Vikash Mansinghka
- Affiliation: Navia Systems
- Host: Jascha
- Status: Confirmed
- Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications
- Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.
We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.
In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.
This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.
BRIEF BIOGRAPHY
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an SB. in Mathematics, an SB. in Computer Science, an MEng in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT's Lincoln Laboratories, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.
Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned SB degrees in electrical engineering and computer science and neurobiology, an MEng in EECS, with a neurobiology PhD expected really soon. He’s passionate about biological applications of probabilistic reasoning and hopes to use Navia’s capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.
8 Nov 2010
- Speaker: Patrick Ruther
- Affiliation: Imtek, University of Freiburg
- Host: Tim
- Status: Confirmed
- Title: TBD
- Abstract: TBD
10 Nov 2010
- Speaker: Aurel Lazar
- Affiliation: Department of Electrical Engineering, Columbia University
- Host: Bruno
- Status: Confirmed
- Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons
- Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli, neural spike encoding is akin to taking noisy measurements on the stimulus. Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases. Third, we demonstrate a number of simple operations on the original visual stimulus, including translations, rotations and zooming. All these operations are natively executed in the spike domain. The processed spike trains are decoded for the faithful recovery of the stimulus and its transformations. Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons. References: Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010, Special Issue on Mathematical Models of Visual Coding, http://dx.doi.org/10.1016/j.visres.2010.03.015. Aurel A. Lazar, Population Encoding with Hodgkin-Huxley Neurons, IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February 2010, Special Issue on Molecular Biology and Neuroscience, http://dx.doi.org/10.1109/TIT.2009.2037040
11 Nov 2010 (UCB holiday)
- Speaker: Martha Nari Havenith
- Affiliation: UCL
- Host: Fritz
- Status: Confirmed
- Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?
- Abstract:
19 Nov 2010 (note: on Friday because of SFN)
- Speaker: Dan Butts
- Affiliation: UMD
- Host: Tim
- Status: Confirmed
- Title: Common roles of inhibition in visual and auditory processing.
- Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. Increasingly, however, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus), identified inhibition has a nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate methods for characterizing the nonlinear computations that comprise sensory processing.
24 Nov 2010
- Speaker: Eizaburo Doi
- Affiliation: NYU
- Host: Jimmy
- Status: Confirmed
- Title:
- Abstract:
29 Nov 2010 - informal talk
- Speaker: Eero Lehtonen
- Affiliation: UTU Finland
- Host: Bruno
- Status: Confirmed
- Title: Memristors
- Abstract:
1 Dec 2010
- Speaker: Gadi Geiger
- Affiliation: MIT
- Host: Fritz
- Status: Confirmed
- Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics
- Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers, which suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. To strengthen the argument, and more importantly to help dyslexics, I will describe a regimen of practice that improves reading in dyslexics while narrowing perception.
13 Dec 2010
- Speaker: Jorg Lueke
- Affiliation: FIAS
- Host: Bruno
- Status: Confirmed
- Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data
- Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.
15 Dec 2010
- Speaker: Claudia Clopath
- Affiliation: Université Paris Descartes
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract:
18 Jan 2011
- Speaker: Siwei Lyu
- Affiliation: Computer Science Department, University at Albany, SUNY
- Host: Bruno
- Status: confirmed
- Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation
- Abstract:
19 Jan 2011
- Speaker: David Field (informal talk)
- Affiliation:
- Host: Bruno
- Status: Tentative
- Title:
- Abstract:
25 Jan 2011
- Speaker: Ruth Rosenholtz
- Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT
- Host: Bruno
- Status: Confirmed
- Title: What your visual system sees where you are not looking
- Abstract:
26 Jan 2011
- Speaker: Ernst Niebur
- Affiliation: Johns Hopkins U
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract:
16 March 2011
- Speaker: Vladimir Itskov
- Affiliation: University of Nebraska-Lincoln
- Host: Chris
- Status: Confirmed
- Title:
- Abstract:
23 March 2011
- Speaker: Bruce Cumming
- Affiliation: National Institutes of Health
- Host: Ivana
- Status: Confirmed
- Title: TBD
- Abstract:
27 April 2011
- Speaker: Lubomir Bourdev
- Affiliation: Computer Science, UC Berkeley
- Host: Bruno
- Status: Confirmed
- Title: "Poselets and Their Applications in High-Level Computer Vision Problems"
- Abstract:
12 May 2011 (note: Thursday)
- Speaker: Jack Culpepper
- Affiliation: Redwood Center/EECS
- Host: Bruno
- Status: Confirmed
- Title: TBA
- Abstract:
26 May 2011
- Speaker: Ian Stevenson
- Affiliation: Northwestern University
- Host: Bruno
- Status: Confirmed
- Title: Explaining tuning curves by estimating interactions between neurons
- Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques, we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks, each with ~100 simultaneously recorded neurons. Using these datasets, we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.
1 June 2011
- Speaker: Michael Oliver
- Affiliation: Gallant lab
- Host: Bruno
- Status: Tentative
- Title:
- Abstract:
8 June 2011
- Speaker: Alyson Fletcher
- Affiliation: UC Berkeley
- Host: Bruno
- Status: tentative
- Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity
- Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and of receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear nonlinear Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.
2009/10 academic year
2 September 2009
- Speaker: Keith Godfrey
- Affiliation: University of Cambridge
- Host: Tim
- Status: Confirmed
- Title: TBA
- Abstract:
7 October 2009
- Speaker: Anita Schmid
- Affiliation: Cornell University
- Host: Kilian
- Status: Confirmed
- Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time
- Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.
28 October 2009
- Speaker: Andrea Benucci
- Affiliation: Institute of Ophthalmology, University College London
- Host: Bruno
- Status: Confirmed
- Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex
- Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.
12 November 2009 (Thursday)
- Speaker: Song-Chun Zhu
- Affiliation: UCLA
- Host: Jimmy
- Status: Confirmed
- Title:
- Abstract:
18 November 2009
- Speaker: Dan Graham
- Affiliation: Dept. of Mathematics, Dartmouth College
- Host: Bruno
- Status: Confirmed
- Title: The Packet-Switching Brain: A Hypothesis
- Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large scale shift from the now dominant “computer metaphor,” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.
16 December 2009
- Speaker: Pietro Berkes
- Affiliation: Volen Center for Complex Systems, Brandeis University
- Host: Bruno
- Status: Confirmed
- Title: Generative models of vision: from sparse coding toward structured models
- Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cell receptive fields can be accounted for based solely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.
6 January 2010
- Speaker: Susanne Still
- Affiliation: U of Hawaii
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract:
20 January 2010
- Speaker: Tom Dean
- Affiliation: Google
- Host: Bruno
- Status: Confirmed
- Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors
- Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and gives some more detailed experimental results on one particular problem involving video-content analysis.
27 January 2010
- Speaker: David Philiponna
- Affiliation: Paris
- Host: Bruno
- Status: Confirmed
- Title:
- Abstract:
24 February 2010
- Speaker: Gordon Pipa
- Affiliation: U Osnabrueck/MPI Frankfurt
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract:
3 March 2010
- Speaker: Gaute Einevoll
- Affiliation: UMB, Norway
- Host: Amir
- Status: Confirmed
- Title: TBA
- Abstract: TBA
4 March 2010
- Speaker: Harvey Swadlow
- Affiliation:
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract:
8 April 2010
- Speaker: Alan Yuille
- Affiliation: UCLA
- Host: Amir
- Status: Confirmed (for 1pm)
- Title:
- Abstract:
28 April 2010
- Speaker: Dharmendra Modha - cancelled
- Affiliation: IBM
- Host:Fritz
- Status: Cancelled
- Title:
- Abstract:
5 May 2010
- Speaker: David Zipser
- Affiliation: UCB
- Host: Daniel Little
- Status: Tentative
- Title: Brytes 2
- Abstract:
Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors.
In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I will describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and may also be applicable to robotics and BMI.
12 May 2010
- Speaker: Frank Werblin (Redwood group meeting - internal only)
- Affiliation: Berkeley
- Host: Bruno
- Status: Tentative
- Title:
- Abstract:
19 May 2010
- Speaker: Anna Judith
- Affiliation: UCB
- Host: Daniel Little (Redwood Lab Meeting - internal only)
- Status: confirmed
- Title:
- Abstract: